Maximum coverage problem

The maximum coverage problem is a classical question in computer science, computational complexity theory, and operations research. It is widely taught in courses on approximation algorithms.

As input you are given several sets and a number $k$. The sets may have some elements in common. You must select at most $k$ of these sets such that the maximum number of elements is covered, i.e. the union of the selected sets has maximal size.

Formally, (unweighted) Maximum Coverage

Instance: A number $k$ and a collection of sets $S = \{S_1, S_2, \ldots, S_m\}$.
Objective: Find a subset $S' \subseteq S$ of sets, such that $|S'| \leq k$ and the number of covered elements $\left| \bigcup_{S_i \in S'} S_i \right|$ is maximized.
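To make the definition concrete, here is a minimal brute-force Python sketch that solves a tiny instance exactly; the function name, the example sets, and $k = 2$ are illustrative assumptions, not part of the formal statement.

    from itertools import combinations

    def max_coverage_bruteforce(sets, k):
        """Exact (exponential-time) solver: try every selection of at most k sets."""
        best_selection, best_covered = (), set()
        for r in range(1, k + 1):
            for selection in combinations(range(len(sets)), r):
                covered = set().union(*(sets[i] for i in selection))
                if len(covered) > len(best_covered):
                    best_selection, best_covered = selection, covered
        return best_selection, best_covered

    # Example: selecting sets 0 and 2 covers all six elements.
    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
    print(max_coverage_bruteforce(sets, 2))   # ((0, 2), {1, 2, 3, 4, 5, 6})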

The maximum coverage problem is NP-hard, and cannot be approximated within $1 - \frac{1}{e} + o(1) \approx 0.632$ under standard assumptions. This result essentially matches the approximation ratio achieved by the generic greedy algorithm used for maximization of submodular functions with a cardinality constraint.

ILP formulation

The maximum coverage problem can be formulated as the following integer linear program.

maximize $\sum_{e_j \in E} y_j$ (maximizing the number of covered elements)
subject to $\sum x_i \leq k$ (no more than $k$ sets are selected)
$\sum_{e_j \in S_i} x_i \geq y_j$ (if $y_j > 0$ then at least one set $S_i$ containing $e_j$ is selected)
$y_j \in \{0, 1\}$ (if $y_j = 1$ then $e_j$ is covered)
$x_i \in \{0, 1\}$ (if $x_i = 1$ then $S_i$ is selected for the cover)
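As a hedged illustration, this ILP could be written with the PuLP modelling library roughly as follows; the choice of PuLP, the CBC solver, and all identifiers are assumptions made for this sketch, not part of the formulation itself.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

    def max_coverage_ilp(sets, k):
        elements = set().union(*sets)
        prob = LpProblem("max_coverage", LpMaximize)
        x = {i: LpVariable(f"x_{i}", cat="Binary") for i in range(len(sets))}  # set selected?
        y = {e: LpVariable(f"y_{e}", cat="Binary") for e in elements}          # element covered?

        prob += lpSum(y.values())                     # objective: number of covered elements
        prob += lpSum(x.values()) <= k                # at most k sets
        for e in elements:
            # e may only count as covered if some selected set contains it
            prob += lpSum(x[i] for i, s in enumerate(sets) if e in s) >= y[e]

        prob.solve(PULP_CBC_CMD(msg=False))
        return [i for i in x if x[i].value() > 0.5]

    print(max_coverage_ilp([{1, 2, 3}, {3, 4}, {4, 5, 6}], 2))   # e.g. [0, 2]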

Greedy algorithm

The greedy algorithm for maximum coverage chooses sets according to one rule: at each stage, choose a set which contains the largest number of uncovered elements. It can be shown that this algorithm achieves an approximation ratio of $1 - \frac{1}{e}$. Inapproximability results show that the greedy algorithm is essentially the best-possible polynomial time approximation algorithm for maximum coverage.
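A direct rendering of this greedy rule in Python might look as follows (function and variable names are only illustrative):

    def greedy_max_coverage(sets, k):
        covered, chosen = set(), []
        for _ in range(k):
            # Pick the set that adds the most not-yet-covered elements.
            best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
            if not sets[best] - covered:      # no set adds anything new; stop early
                break
            chosen.append(best)
            covered |= sets[best]
        return chosen, covered

    print(greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5, 6}], 2))   # ([0, 2], {1, 2, 3, 4, 5, 6})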

Known extensions

The inapproximability results apply to all extensions of the maximum coverage problem since they contain the maximum coverage problem as a special case.

Weighted version

In the weighted version every element $e_j$ has a weight $w(e_j)$. The task is to select at most $k$ sets such that the total weight of the covered elements is maximized. The basic version is the special case in which all weights are $1$.

maximize $\sum_{e_j \in E} w(e_j) \cdot y_j$ (maximizing the weighted sum of covered elements)
subject to $\sum x_i \leq k$ (no more than $k$ sets are selected)
$\sum_{e_j \in S_i} x_i \geq y_j$ (if $y_j > 0$ then at least one set $S_i$ containing $e_j$ is selected)
$y_j \in \{0, 1\}$ (if $y_j = 1$ then $e_j$ is covered)
$x_i \in \{0, 1\}$ (if $x_i = 1$ then $S_i$ is selected for the cover)

The greedy algorithm for the weighted maximum coverage problem chooses, at each stage, a set that contains the maximum weight of uncovered elements. This algorithm achieves an approximation ratio of $1 - \frac{1}{e}$.
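The only change from the unweighted greedy sketch above is the selection key, which now sums the weights of the still-uncovered elements; the weights mapping below is an assumed input format.

    def weighted_greedy_max_coverage(sets, weights, k):
        covered, chosen = set(), []

        def gain(i):
            # Total weight of the elements of set i that are still uncovered.
            return sum(weights[e] for e in sets[i] - covered)

        for _ in range(k):
            best = max(range(len(sets)), key=gain)
            if gain(best) == 0:
                break
            chosen.append(best)
            covered |= sets[best]
        return chosen, covered

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
    weights = {1: 1, 2: 1, 3: 5, 4: 4, 5: 1, 6: 1}
    # Greedy picks [1, 0] with covered weight 11, while the optimum [0, 2] has weight 13:
    # the solution can be suboptimal, but 11/13 is still above the 1 - 1/e guarantee.
    print(weighted_greedy_max_coverage(sets, weights, 2))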

Budgeted maximum coverage

In the budgeted maximum coverage version, not only does every element $e_j$ have a weight $w(e_j)$, but every set $S_i$ also has a cost $c(S_i)$. Instead of a number $k$ that limits how many sets may be selected, a budget $B$ is given, which limits the total cost of the selected sets.

maximize $\sum_{e_j \in E} w(e_j) \cdot y_j$ (maximizing the weighted sum of covered elements)
subject to $\sum c(S_i) \cdot x_i \leq B$ (the cost of the selected sets cannot exceed $B$)
$\sum_{e_j \in S_i} x_i \geq y_j$ (if $y_j > 0$ then at least one set $S_i$ containing $e_j$ is selected)
$y_j \in \{0, 1\}$ (if $y_j = 1$ then $e_j$ is covered)
$x_i \in \{0, 1\}$ (if $x_i = 1$ then $S_i$ is selected for the cover)

A greedy algorithm will no longer produce solutions with a performance guarantee; its worst-case behavior might be very far from the optimal solution. The approximation algorithm is therefore extended in the following way. First, define a modified greedy algorithm that, at each stage, selects the set $S_i$ with the best ratio of weighted uncovered elements to cost. Second, among covers of cardinality $1, 2, \ldots, k-1$, find the best cover that does not violate the budget; call this cover $H_1$. Third, find all covers of cardinality $k$ that do not violate the budget. Using these covers of cardinality $k$ as starting points, apply the modified greedy algorithm, maintaining the best cover found so far; call this cover $H_2$. At the end of the process, the approximate best cover will be either $H_1$ or $H_2$. This algorithm achieves an approximation ratio of $1 - \frac{1}{e}$ for values of $k \geq 3$. This is the best possible approximation ratio unless $NP \subseteq DTIME(n^{O(\log \log n)})$.
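A compact, hedged Python sketch of this scheme follows; the data layout (weights as a dict, costs as a list), the default $k = 3$, and all names are assumptions, and ties or zero-cost sets are not handled with any particular care.

    from itertools import combinations

    def budgeted_max_coverage(sets, weights, costs, budget, k=3):
        """Sketch of the enumeration + modified-greedy scheme (assumes positive costs)."""
        def weight(cover):
            return sum(weights[e] for e in set().union(*(sets[i] for i in cover)))

        def cost(cover):
            return sum(costs[i] for i in cover)

        def modified_greedy(start):
            chosen = list(start)
            covered = set().union(*(sets[i] for i in chosen))
            remaining = budget - cost(chosen)
            while True:
                best, best_ratio = None, 0.0
                for i in range(len(sets)):
                    if i in chosen or costs[i] > remaining:
                        continue
                    ratio = sum(weights[e] for e in sets[i] - covered) / costs[i]
                    if ratio > best_ratio:
                        best, best_ratio = i, ratio
                if best is None:
                    return chosen
                chosen.append(best)
                covered |= sets[best]
                remaining -= costs[best]

        # H1: best feasible cover that uses fewer than k sets.
        small = [c for r in range(1, k)
                 for c in combinations(range(len(sets)), r) if cost(c) <= budget]
        h1 = max(small, key=weight, default=())
        # H2: complete every feasible cover of exactly k sets with the modified greedy.
        starts = [c for c in combinations(range(len(sets)), k) if cost(c) <= budget]
        h2 = max((modified_greedy(s) for s in starts), key=weight, default=())
        return max([list(h1), list(h2)], key=weight)

    sets = [{1, 2}, {2, 3, 4}, {4, 5}]
    # [0, 1]: covers {1, 2, 3, 4} for total cost 3, within the budget.
    print(budgeted_max_coverage(sets, {e: 1 for e in range(1, 6)}, [1, 2, 1], budget=3))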

Generalized maximum coverage

In the generalized maximum coverage version every set $S_i$ has a cost $c(S_i)$, and an element $e_j$ has a different weight and cost depending on which set covers it. Namely, if $e_j$ is covered by set $S_i$, the weight of $e_j$ is $w_i(e_j)$ and its cost is $c_i(e_j)$. A budget $B$ is given for the total cost of the solution.

maximize $\sum_{S_i,\, e_j \in E} w_i(e_j) \cdot y_{ij}$ (maximizing the weighted sum of covered elements, in the sets in which they are covered)
subject to $\sum c_i(e_j) \cdot y_{ij} + \sum c(S_i) \cdot x_i \leq B$ (the total cost of the selected sets and covered elements cannot exceed $B$)
$\sum_i y_{ij} \leq 1$ (element $e_j$ can be covered by at most one set)
$x_i \geq y_{ij}$ (if $y_{ij} > 0$ then $e_j$ is covered by set $S_i$, so $S_i$ must be selected)
$y_{ij} \in \{0, 1\}$ (if $y_{ij} = 1$ then $e_j$ is covered by set $S_i$)
$x_i \in \{0, 1\}$ (if $x_i = 1$ then $S_i$ is selected for the cover)

Generalized maximum coverage algorithm

The algorithm uses the concept of residual cost/weight. The residual cost/weight is measured relative to a tentative solution: it is the difference between the original cost/weight and the cost/weight already gained by the tentative solution.

The algorithm has several stages. First, find a solution using the greedy algorithm: in each iteration, add to the tentative solution the set whose residual weight of elements, divided by the residual cost of these elements together with the residual cost of the set, is maximal. Second, compare the solution obtained in the first step to the best solution that uses a small number of sets. Third, return the best of all examined solutions. This algorithm achieves an approximation ratio of $1 - 1/e - o(1)$.
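To make the residual notion concrete, the short sketch below computes the residual weight and residual cost of adding one set to a tentative solution; the data layout (per-set weight/cost tables, a mapping from covered elements to the set covering them) and the improvement rule are assumptions for illustration only, not the exact bookkeeping of the published algorithm.

    def residual_gain(i, tentative, sets, w, c, set_cost):
        """Residual weight/cost of adding set i to a tentative solution.

        tentative maps each already-covered element to the set covering it;
        w[i][e] and c[i][e] are the weight and cost of element e when covered
        by set i; set_cost[i] is the cost of the set itself (names illustrative).
        """
        res_weight = res_cost = 0
        for e in sets[i]:
            old = tentative.get(e)                    # set currently covering e, if any
            old_w = w[old][e] if old is not None else 0
            old_c = c[old][e] if old is not None else 0
            if w[i][e] > old_w:                       # re-cover e only if set i improves it
                res_weight += w[i][e] - old_w
                res_cost += c[i][e] - old_c
        res_cost += set_cost[i]                       # opening the set itself also costs
        return res_weight, res_cost

    # The greedy stage then adds the set maximising res_weight / res_cost among
    # the sets whose residual cost still fits into the remaining budget.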

  • The related set cover problem is to cover all elements with as few sets as possible.