Maximum submodular coverage
This week we study the maximum submodular coverage problem, a generalization of the maximum coverage problem. We also introduce the concept of approximation algorithms.
11.1 Maximum coverage
We start by discussing the maximum coverage problem. Given a collection of subsets S = {S1,…,Sm} of a ground set U, we want to pick k subsets whose union has maximum cardinality:
maximize |⋃_{T∈C} T| such that C ⊆ S and |C| = k.
The problem is known to be NP-hard, so we will design an approximation algorithm for it.
Definition 11.1. For a maximization problem, a ρ-approximation is an algorithm that runs in polynomial time and returns a solution whose value is at least ρ times the value of an optimal solution, where ρ ≤ 1.
For minimization problems, we define a ρ-approximation to be an algorithm that runs in polynomial time and returns a solution whose cost is at most ρ times the cost of the optimum, where ρ ≥ 1.
Consider the following greedy algorithm that builds a solution by iteratively picking the set maximizing the number of newly covered elements.
The algorithm is too shortsighted to be able to find an optimal solution in general, but it is a good approximation.
In general, greedy is not optimal. Consider an instance where S1 and S2 are disjoint sets of l elements each, and S3 contains half of the elements of S1 and half of the elements of S2. If k = 2, greedy may pick S3 and then S1, covering 1.5l elements, while the optimal solution uses S1 and S2 to cover 2l elements.
© Copyright 2021 Julián Mestre.
1: function greedy(k, U, S)
2:     C ← ∅
3:     while |C| < k do
4:         let S ∈ S be a subset maximizing |S ∖ ⋃_{T∈C} T|
5:         add S to C
6:     return C
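To make the pseudocode concrete, here is a minimal Python sketch of greedy; the function name, data layout, and the instance below are my own, and ties are broken by index. The instance is the bad example from the side note with l = 4.

    def greedy(k, subsets):
        """Greedy maximum coverage: repeatedly pick the subset that
        covers the most elements not yet covered."""
        chosen = []       # indices of the picked subsets
        covered = set()   # union of the picked subsets
        for _ in range(k):
            # pick the subset adding the most newly covered elements
            best = max(range(len(subsets)),
                       key=lambda i: len(subsets[i] - covered))
            chosen.append(best)
            covered |= subsets[best]
        return chosen, covered

    # S1 and S2 are disjoint with l = 4 elements each; S3 takes half of
    # each. With k = 2, greedy may cover only 1.5*l elements while the
    # optimum (S1 and S2) covers all 2*l.
    S1, S2 = set(range(4)), set(range(4, 8))
    S3 = {0, 1, 4, 5}
    print(greedy(2, [S3, S1, S2]))   # picks S3 then S1: 6 of 8 elements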
This work is licensed under a Creative Commons Attribution-NonCommercial- ShareAlike 4.0 International License.
Theorem 11.1. Algorithm greedy is a (1 − 1/e)-approximation for the maximum coverage problem.
Let O1, O2, …, Ok be the subsets used by an optimal solution, and let opt be the value of the optimal solution; that is, opt = |⋃_j Oj|. Similarly, let A1, A2, …, Ak be the sets chosen by greedy, sorted in the order they are chosen. Finally, let Xi = ⋃_{j=1}^{i} Aj.

Recall that e ≈ 2.718 is a mathematical constant that pops up in many places, such as
i) e = ∑_{n=0}^{∞} 1/n!,
ii) e = lim_{n→∞} (1 + 1/n)^n,
iii) e is such that ∫_1^e (1/x) dx = 1,
iv) n! = √(2πn) (n/e)^n (1 + O(1/n)),
and many others.

We start by noting that the first subset chosen by the algorithm must have maximum cardinality among all subsets, in particular |A1| ≥ |Oj| for all j = 1, …, k. Therefore,

|X1| ≥ max_j |Oj| ≥ (1/k) ∑_j |Oj| ≥ (1/k) |⋃_j Oj| = opt/k.

Here we use the fact that the cardinality of the union of two sets is less than or equal to the sum of the cardinalities of the two sets.
Now, for the second subset we note that |A2 ∖ X1| ≥ |Oj ∖ X1| for all j = 1, …, k. Therefore,

|A2 ∖ X1| ≥ max_j |Oj ∖ X1| ≥ (1/k) ∑_j |Oj ∖ X1| ≥ (1/k) |⋃_j Oj ∖ X1| ≥ (opt − |X1|)/k,

and hence

|X2| = |X1| + |A2 ∖ X1| ≥ (opt + (k−1)|X1|)/k ≥ (opt/k) (1 + (k−1)/k).
Using induction and following a similar line of reasoning, one can prove that for all i = 1, …, k we have

|Xi| ≥ (opt/k) (1 + (k−1)/k + ··· + ((k−1)/k)^(i−1)).
To finish off the proof of the theorem, we just need to note that

|Xk| ≥ (opt/k) (1 + (k−1)/k + ··· + ((k−1)/k)^(k−1)) = opt (1 − (1 − 1/k)^k) > opt (1 − lim_{n→∞} (1 − 1/n)^n) = opt (1 − 1/e).

Here we use the fact that a geometric series collapses to 1 + a + a² + ··· + a^l = (1 − a^(l+1))/(1 − a), and that the sequence (1 − 1/n)^n is increasing and converges to 1/e.
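As a quick numerical sanity check (a snippet of my own), the factor 1 − (1 − 1/k)^k from the proof decreases with k and converges to 1 − 1/e ≈ 0.632 from above, which is why the theorem's guarantee holds for every k:

    import math

    # The greedy guarantee 1 - (1 - 1/k)^k decreases towards 1 - 1/e,
    # so 1 - 1/e is a valid lower bound for every k.
    for k in [1, 2, 3, 5, 10, 100, 1000]:
        print(k, 1 - (1 - 1/k) ** k)
    print("limit:", 1 - 1/math.e)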
11.2 Maximum submodular coverage
Given a set function f : 2^U → R defined on the power set of a ground set U, we consider the problem of finding a subset C of cardinality at most k maximizing f; that is,
maximize f(C) such that C ⊆ U and |C| ≤ k. (11.1)
Definition 11.2. A function f : 2^U → R is monotone non-decreasing if f(A) ≤ f(B) for all A ⊆ B ⊆ U.
Definition 11.3. A function f : 2^U → R is submodular if f(A + x) − f(A) ≥ f(B + x) − f(B) for all A ⊆ B ⊆ U and x ∈ U ∖ B.
Definition 11.4. A function f : 2^U → R is normalized if f(∅) = 0.
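Since these three properties recur throughout the section, here is a brute-force checker, a minimal sketch of my own; it is exponential in |U|, so it is only useful as a sanity check on tiny ground sets.

    from itertools import combinations

    def powerset(U):
        s = list(U)
        return [frozenset(c) for r in range(len(s) + 1)
                for c in combinations(s, r)]

    def is_submodular(f, U):
        """Check f(A+x) - f(A) >= f(B+x) - f(B) for all A ⊆ B ⊆ U, x ∉ B."""
        subsets = powerset(U)
        for A in subsets:
            for B in subsets:
                if not A <= B:
                    continue
                for x in set(U) - set(B):
                    if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                        return False
        return True

    # Coverage functions are submodular: f(C) = |union of S_i, i in C|.
    S = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}}
    f = lambda C: len(set().union(*[S[i] for i in C])) if C else 0
    print(is_submodular(f, S.keys()))  # True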
When the function f in (11.1) is monotone non-decreasing, submodular, and normalized we call (11.1) the maximum submodular coverage problem. In this section we study the following simple greedy algorithm that builds a solution by iteratively picking an element maximizing the increase in f-value.
Given an instance of the maximum coverage problem (k′, U′, S′), where S′ = {S1, …, Sm}, we can construct an instance (k, U, f) of maximum submodular coverage by defining k = k′, U = {1, …, m}, and f(C) = |⋃_{i∈C} Si|.
The function f is monotone non-decreasing and submodular.
Note that greedy and greedy- submodular behave exactly the same way on (k′,U′,S′) and (k,U,f), so we can regard Theorem 11.2 as a strict generalization of Theorem 11.1.
1: function greedy-submodular(k, U, f)
2:     C ← ∅
3:     while |C| < k do
4:         let x ∈ U be an element maximizing f(C + x) − f(C)
5:         add x to C
6:     return C
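Here is a minimal Python sketch of greedy-submodular with f given as a black-box oracle; the names and the toy coverage instance below are mine, not from the notes.

    def greedy_submodular(k, U, f):
        """Pick k elements of U greedily, maximizing the marginal gain
        f(C + x) - f(C) at every step."""
        C = set()
        for _ in range(k):
            x = max(U - C, key=lambda y: f(C | {y}) - f(C))
            C.add(x)
        return C

    # Running it on a coverage instance built as in the reduction above
    # recovers the behaviour of greedy on (k', U', S').
    S = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
    f = lambda C: len(set().union(*[S[i] for i in C])) if C else 0
    print(greedy_submodular(2, set(range(len(S))), f))  # e.g. {0, 2}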
Theorem 11.2. greedy-submodular is a (1 − 1/e)-approximation for maximum submodular coverage provided Line 4 can be carried out efficiently.
Let o1, o2, …, ok be the elements that make up an optimal solution, and let opt = f({o1, …, ok}). Similarly, let a1, a2, …, ak be the elements chosen by greedy-submodular sorted in the order they were chosen.
We start by noting that the first element chosen by the algorithm must have maximum f-value among singletons, in particular f({a1}) ≥ f({oj}) = f({oj}) − f(∅) for all j = 1, …, k. Therefore,

f({a1}) ≥ max_j (f({oj}) − f(∅)) ≥ (1/k) ∑_j (f({oj}) − f(∅)) ≥ (1/k) (f({o1, …, ok}) − f(∅)) = opt/k.

Here we use the fact that because f is submodular we can lower bound f({oj}) − f(∅), which is the benefit of adding oj to the empty set, with f({o1, …, oj}) − f({o1, …, oj−1}), which is the benefit of adding oj to the set {o1, …, oj−1}; summing over j, the latter quantities telescope to f({o1, …, ok}) − f(∅).
For the second element we have, for all j = 1, …, k,

f({a2, a1}) − f({a1}) ≥ f({oj, a1}) − f({a1}).
Therefore,

f({a1, a2}) − f({a1}) ≥ max_j (f({oj, a1}) − f({a1}))
    ≥ (1/k) ∑_j (f({a1, oj}) − f({a1}))
    ≥ (1/k) (f({a1, o1, …, ok}) − f({a1}))
    ≥ (1/k) (f({o1, …, ok}) − f({a1}))
    = (opt − f({a1}))/k,

where the second-to-last line follows from the monotonicity of f.

Here we use the fact that because f is submodular we can lower bound f({a1, oj}) − f({a1}), which is the benefit of adding oj to the set {a1}, with f({a1, o1, …, oj}) − f({a1, o1, …, oj−1}), which is the benefit of adding oj to the set {a1, o1, …, oj−1}; these quantities telescope to f({a1, o1, …, ok}) − f({a1}).

Hence,

f({a1, a2}) ≥ (opt + (k−1) f({a1}))/k ≥ (opt/k) (1 + (k−1)/k).
Using induction and following a similar line of reasoning, one
can prove that for all i = 1,...,k we have
f({a1, …, ai}) ≥ (opt/k) (1 + (k−1)/k + ··· + ((k−1)/k)^(i−1)).

At this point the rest of the proof is identical to that of Theorem 11.1.
11.3 Applications
Capacitated sensor cover
Imagine you want to collect data measurements from a given set of locations by deploying a collection of sensors. Suppose that a sensor placed at some location x can collect data from a location y if x and y are at distance at most R from each other. Furthermore, assume that due to technological constraints, a sensor can log data from at most l locations among those within distance R from itself.
Given the set of locations that we want to monitor and the set of potential locations for the sensors, we would like to select a subset of k locations at which to place sensors, together with an assignment of locations in need of monitoring to sensors, that maximizes the number of locations monitored.
We can model this problem with a submodular coverage instance (U, f), where U is the set of locations where sensors can be placed, and f(S) is the maximum number of locations that can be monitored by placing sensors in the locations S, given the capacity and proximity constraints of the sensors.
If we can show that f is monotone submodular and that it can be computed efficiently, then we know that greedy-submodular is a (1 − 1/e)-approximation for our problem. Indeed, this is the case.
Lemma 11.1. The function f is monotone non-decreasing, submodular, and can be computed in polynomial time.
We start by addressing the computability of f(S). To that end, we build a layered flow network. The vertices in the network are as follows: s, the set of chosen sensor locations S, the set of locations we want to monitor, and t. The edges are as follows: vertex s is connected to each x ∈ S with an edge of capacity l (the maximum number of locations x can monitor); for every sensor location x and location y to be monitored, we connect x and y with an edge of capacity 1 if d(x, y) ≤ R; and every location y to be monitored is connected to t with an edge of capacity 1. The value of a maximum flow is the maximum number of locations that can be monitored with S, and the flow captures the assignment of locations to sensors that attains that coverage value.
Clearly, f is monotone non-decreasing. In Exercise 9 you have to prove that f is submodular, which finishes the proof of the lemma.
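The following sketch realizes the layered network above and computes f(S) as a maximum flow, using the networkx library. The helper name and the toy instance are made up, and Euclidean distance is assumed for d(x, y).

    import networkx as nx

    def sensor_coverage(sensors, locations, R, l):
        """f(S): max number of locations monitored by sensors placed at
        `sensors`, each covering at most l locations within distance R."""
        def dist(p, q):
            return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

        G = nx.DiGraph()
        for i, x in enumerate(sensors):
            G.add_edge('s', ('sensor', i), capacity=l)
            for j, y in enumerate(locations):
                if dist(x, y) <= R:
                    G.add_edge(('sensor', i), ('loc', j), capacity=1)
        for j, _ in enumerate(locations):
            G.add_edge(('loc', j), 't', capacity=1)
        # value of a maximum s-t flow = number of locations monitored
        value, _flow = nx.maximum_flow(G, 's', 't')
        return value

    # Hypothetical instance: two sensors, three locations, R = 1.5, l = 2.
    print(sensor_coverage([(0, 0), (2, 0)],
                          [(0, 1), (1, 0), (2, 1)], R=1.5, l=2))  # 3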
Influence maximization
We now cover a surprising application of maximum submodular coverage. A social network is a directed graph where nodes correspond to people and edges correspond to personal relations. We want to model the phenomenon of a trend spreading through a social network. We say a node is active if the corresponding person has adopted the trend, and inactive otherwise.
Initially every node in the network is inactive. When a node becomes active, it tries to influence its neighbors so that they become active too. If the influence attempt is successful, the newly activated node tries to influence its neighbors, potentially causing a cascade of activations throughout the network.
Let us define the cascading process more formally. Let G = (V, E) be a directed graph with edge probabilities p : E → [0, 1]. Vertices can be active or inactive. Once a vertex becomes active, it remains active forever. When a node u becomes active it tries to influence its neighbors: for each v ∈ δ_out(u), if v is inactive, it becomes active with probability p_{u,v}.
This cascading process models the adoption of a new technology or a new product. A company selling the product may want to target a few individuals in the network, for example by giving them a free sample, hoping that these individuals will trigger a cascade leading to many people buying the product.
Define σ(S) to be the expected number of vertices that ultimately become active starting from the seed set S. The influence maximization problem is to find a subset S of k vertices maximizing σ(S):

max_{S⊆V} σ(S) such that |S| ≤ k.
As it turns out, the function σ is monotone and submodular, so we can apply¹ Theorem 11.2 to get a constant-factor approximation for the influence maximization problem.
Lemma 11.2. The function σ is monotone non-decreasing, submodular, and normalized.
[Figure: the layered network used to compute f(S); unit-capacity edges connect sensor locations to the locations to be monitored.]
Notice that a vertex tries to influence its neighbors only once, and only after it becomes active itself. This means that the cascading process dies out after a finite number of steps.
We can think of k as our advertising budget—we can only afford to “bribe” k people.
¹ While it is not clear how to compute σ(S) exactly, we can estimate its value up to a (1 ± δ) factor by running enough simulations of the cascading process and averaging the outcomes. As Exercise 2 shows, this is good enough to get a (1 − 1/e − ε)-approximation for small enough δ > 0.
The key to proving this lemma is to characterize σ in a different way. Notice that the expectation is taken over influence attempts: when u tries to influence v, it succeeds with probability p_{u,v}. The idea is to condition on the outcome of all these attempts, even if they are not really needed. Because all these events are independent,

σ(S) = ∑_{F⊆E} (∏_{e∈F} p_e) (∏_{e∉F} (1 − p_e)) |r_F(S)|,

where r_F(S) is the set of vertices reachable from S in the graph (V, F). Notice that |r_F(S)| is the influence of S given that only the influence attempts along the edge set F are successful.
To show that σ is monotone non-decreasing, we just need to note that |rF(·)| is monotone non-decreasing and that a positive linear combination of monotone non-decreasing functions is also monotone non-decreasing.
To show that σ is submodular, we similarly note that |r_F(·)| is submodular and that a positive linear combination of submodular functions is also submodular.
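Following the side note on estimating σ(S) by simulation, here is a minimal Monte Carlo sketch; the graph representation and all names are my own.

    import random

    def simulate_cascade(edges, p, seeds):
        """One run of the cascade: each newly active u tries once to
        activate each out-neighbour v, succeeding with probability p[u, v]."""
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            u = frontier.pop()
            for v in adj.get(u, []):
                if v not in active and random.random() < p[u, v]:
                    active.add(v)
                    frontier.append(v)
        return len(active)

    def estimate_sigma(edges, p, seeds, runs=10000):
        """Monte Carlo estimate of sigma(S); accuracy improves with runs."""
        return sum(simulate_cascade(edges, p, seeds)
                   for _ in range(runs)) / runs

    # Hypothetical toy network: a path a -> b -> c with probability 1/2
    # on each edge; sigma({a}) = 1 + 1/2 + 1/4 = 1.75.
    edges = [('a', 'b'), ('b', 'c')]
    p = {e: 0.5 for e in edges}
    print(estimate_sigma(edges, p, {'a'}))  # ≈ 1.75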
Exercises

1. Complete the proof of Theorem 11.1 by showing that

|Xi| ≥ (opt/k) (1 + (k−1)/k + ··· + ((k−1)/k)^(i−1))

for all i = 1, …, k.
2. In some applications the optimization problem defined in Line 4 of greedy-submodular is itself NP-hard, but admits an α-approximation for some α < 1. Prove that a greedy algorithm based on this weaker oracle yields a (1 − 1/e^α)-approximation.
3. Prove that f : 2^U → R is submodular if and only if for all S, T ⊆ U it holds that

f(S) + f(T) ≥ f(S ∩ T) + f(S ∪ T).
4. Come up with an instance of the maximum coverage problem where the optimal solution covers the whole ground set while greedy only manages to cover a 1 − (1 − 1/k)^k fraction of the ground set.
5. Prove that if f : 2^U → R and g : 2^U → R are submodular, and α, β ≥ 0, then h(S) = αf(S) + βg(S) is also submodular.
6. Let G = (V, E) be a directed graph with edge capacities c : E → Z₊ and s, t ∈ V. For any subset of edges F ⊆ E, define g(F) to be the value of a maximum s-t flow in (V, F). Prove that g is monotone non-decreasing but not submodular.
7. Let G = (V, E) be an undirected graph. For any subset of vertices S ⊂ V, define g(S) = |E(S)|, the number of edges with both endpoints in S. Prove that g is supermodular.²
8. Let G = (V, E) be an undirected graph with edge capacities c : E → Z₊ and s, t ∈ V. For any subset of vertices S ⊆ V ∖ {s, t}, define g(S) = ∑_{e∈δ(S+s)} c_e. Prove that g is submodular but not monotone non-decreasing.
* 9. Prove that the function f used in the capacitated sensor covering application is submodular.
*10. Let (U, I) be a matroid and f : 2^U → R be monotone non-decreasing, submodular, and normalized. Consider the problem

max_{S⊆U} f(S) such that S ∈ I.

Modify greedy-submodular in order to get a 1/2-approximation for this problem.
² A function f is supermodular if −f is submodular; or, equivalently, if for all A ⊆ B ⊆ U and x ∈ U ∖ B we have f(A + x) − f(A) ≤ f(B + x) − f(B).
Solution to a selected exercise
2. Let o1, o2, …, ok be the elements that make up an optimal solution, and let opt = f({o1, …, ok}). Similarly, let a1, a2, …, ak be the elements chosen by greedy-submodular sorted in the order they were chosen.
We start by noting that the first element chosen is such that

f({a1}) ≥ α · f({oj})

for all j = 1, …, k. By following the same logic as in the proof of Theorem 11.2 we get

f({a1}) ≥ α · opt/k.

For the second element we have

f({a2, a1}) − f({a1}) ≥ α · (f({oj, a1}) − f({a1}))

for all j = 1, …, k, which similarly leads to

f({a1, a2}) − f({a1}) ≥ α · (opt − f({a1}))/k.

Therefore,

f({a1, a2}) ≥ (α/k) · opt · (1 + (k−α)/k).
Using induction and following a similar line of reasoning, one can prove that for all i = 1, …, k we have

f({a1, …, ai}) ≥ (α/k) · opt · (1 + (k−α)/k + ··· + ((k−α)/k)^(i−1)) = opt · (1 − (1 − α/k)^i).

Therefore,

f({a1, …, ak}) ≥ opt · (1 − (1 − α/k)^k) ≥ opt · (1 − 1/e^α),

so the algorithm is a (1 − 1/e^α)-approximation as claimed.