Linear programming duality
This week we cover the fascinating topic of linear programming duality. We will learn that every minimization program has an associated maximization program with the same value. This surprising connection will allow us to efficiently certify the optimality or the infeasibility of a given linear program.
4.1 Bounding the value of a linear program
To develop some intuition, we will explore the idea of bounding the value of the following linear program:

maximize   2x1 + 3x2
subject to 6x1 + 4x2 ≤ 10
           x1 + 5x2 ≤ 4
           x1, x2 ≥ 0
We can get a first easy upper bound by noting that for any feasible solution (x1, x2) we have

2x1 + 3x2 ≤ 6x1 + 4x2 ≤ 10,
where the first inequality follows from the fact that x1, x2 ≥ 0 and the second from the first constraint of the program.
We can get another upper bound by noting that for any feasible solution (x1, x2) we have

2x1 + 3x2 ≤ 2x1 + 10x2 = 2(x1 + 5x2) ≤ 2 · 4 = 8,
where the second inequality follows from the second constraint of the program.
We can get an even better upper bound if we combine the two constraints:

2x1 + 3x2 ≤ (1/2)(6x1 + 4x2) + (1/5)(x1 + 5x2) ≤ (1/2) · 10 + (1/5) · 4 = 29/5.
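The pattern behind these three bounds can be checked mechanically: whenever the multipliers y1, y2 ≥ 0 combine the constraints into an expression that dominates the objective coefficientwise, the combined right-hand side is an upper bound. Here is a minimal sketch (my own addition, not part of the original notes; it assumes NumPy is available) that verifies the three bounds above.

# Sketch: verify the three hand-derived upper bounds on 2x1 + 3x2.
import numpy as np

A = np.array([[6.0, 4.0], [1.0, 5.0]])   # left-hand sides of the constraints
b = np.array([10.0, 4.0])                # right-hand sides
c = np.array([2.0, 3.0])                 # objective being bounded

for y in (np.array([1.0, 0.0]),          # first bound:  10
          np.array([0.0, 2.0]),          # second bound:  8
          np.array([0.5, 0.2])):         # third bound:  29/5
    assert np.all(y @ A >= c)            # y^T A dominates c coordinatewise
    print(y, "certifies the upper bound", y @ b)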
Taking a more systematic approach, we could ask to find multipliers y1, y2 ≥ 0 for each constraint to obtain the best possible upper bound on the value of our program.
This leads to the following linear program1
1 In order to have the inequality

2x1 + 3x2 ≤ y1(6x1 + 4x2) + y2(x1 + 5x2),

we ask that

2 ≤ 6y1 + y2, and 3 ≤ 4y1 + 5y2,

so that we get the upper bound 10y1 + 4y2.
minimize   10y1 + 4y2
subject to 6y1 + y2 ≥ 2
           4y1 + 5y2 ≥ 3
           y1, y2 ≥ 0
The best possible upper bound is attained by y1 = 7/26 and y2 = 10/26, which yields a bound of 55/13. How good an upper bound is this? As good as it gets! Indeed, there is a solution to the original program with the same value2. The auxiliary linear program is the dual program of the original program, which is called the primal.

2 The solution x1 = 17/13 and x2 = 7/13 has value 55/13. Amazing!
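As a quick sanity check (my own addition, not part of the notes), one can solve both programs with scipy.optimize.linprog and confirm that they attain the same value 55/13.

from scipy.optimize import linprog

# Primal: maximize 2x1 + 3x2 s.t. 6x1 + 4x2 <= 10, x1 + 5x2 <= 4, x >= 0.
# linprog minimizes, so we negate the objective and flip the sign at the end.
primal = linprog(c=[-2, -3], A_ub=[[6, 4], [1, 5]], b_ub=[10, 4],
                 bounds=[(0, None)] * 2)

# Dual: minimize 10y1 + 4y2 s.t. 6y1 + y2 >= 2, 4y1 + 5y2 >= 3, y >= 0.
# The >= rows are rewritten as <= rows by multiplying them by -1.
dual = linprog(c=[10, 4], A_ub=[[-6, -1], [-4, -5]], b_ub=[-2, -3],
               bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # both print 55/13 = 4.2307...
print(primal.x, dual.x)        # (17/13, 7/13) and (7/26, 10/26)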
In the rest of this lecture, we will formally define how to derive the dual of a given program and prove the remarkable property that the value of the primal and dual programs is the same, provided they are both feasible.
4.2 How to derive the dual program
Let A ∈ Rm×n be a matrix with columns A1, …, An and rows a1, …, am. With each primal linear program

minimize   c · x
subject to aiT x ≥ bi,   i ∈ M1
           aiT x ≤ bi,   i ∈ M2
           aiT x = bi,   i ∈ M3          (P)
           xj ≥ 0,       j ∈ N1
           xj ≤ 0,       j ∈ N2
           xj free,      j ∈ N3

we associate the dual program

maximize   b · y
subject to yT Aj ≤ cj,   j ∈ N1
           yT Aj ≥ cj,   j ∈ N2
           yT Aj = cj,   j ∈ N3          (D)
           yi ≥ 0,       i ∈ M1
           yi ≤ 0,       i ∈ M2
           yi free,      i ∈ M3

Of course, we must have m = |M1| + |M2| + |M3| and n = |N1| + |N2| + |N3|. A variable xj or yi is free if it can take both positive and negative values.

We can define the dual of a maximization program by reversing the direction of the transformation. In this way, taking the dual of the dual brings us back to the linear program we started from!
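The translation above is mechanical enough to automate. The sketch below (my own illustration; the function name dual_of and the data layout are made up for this note) takes a description of (P) and produces the data of (D): the sense of each primal constraint dictates the sign restriction of the corresponding dual variable, and the sign of each primal variable dictates the sense of the corresponding dual constraint.

import numpy as np

def dual_of(A, b, c, con_sense, var_sign):
    """Given a minimization program (P) with rows a_i x {>=,<=,=} b_i and
    variable signs {'>=0','<=0','free'}, return the data of its dual (D)."""
    A = np.asarray(A, dtype=float)
    # The sign of primal variable x_j fixes the sense of dual constraint y^T A_j ? c_j.
    dual_con_sense = {'>=0': '<=', '<=0': '>=', 'free': '='}
    # The sense of primal constraint a_i x ? b_i fixes the sign of dual variable y_i.
    dual_var_sign = {'>=': '>=0', '<=': '<=0', '=': 'free'}
    return {
        'objective': np.asarray(b, dtype=float),       # maximize b . y
        'matrix': A.T,                                 # row j of A^T is A_j^T
        'rhs': np.asarray(c, dtype=float),             # y^T A_j ? c_j
        'con_sense': [dual_con_sense[s] for s in var_sign],
        'var_sign': [dual_var_sign[s] for s in con_sense],
    }

For instance, with all constraint senses '=' and all variable signs '>=0' the function recovers the standard-form dual, maximize b · y subject to yT A ≤ cT, used in the proof of Theorem 4.2 below.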
4.3 Weak duality
The Weak Duality Theorem states that if (P) and (D) are feasible then the value of (P) is greater than or equal to the value of (D).

Theorem 4.1. Let x and y be feasible solutions of (P) and (D) respectively. Then c · x ≥ b · y.
Proof. One could say that the dual has been defined with the express purpose of making this proof go through:

c · x = Σj cj xj ≥ Σj (yT Aj) xj = yT Ax = Σi yi (aiT x) ≥ Σi yi bi = b · y.   (4.1)
The first inequality holds because:
if j ∈ N1 then xj ≥ 0 and cj ≥ yT Aj, so cj xj ≥ yT Aj xj;
if j ∈ N2 then xj ≤ 0 and cj ≤ yT Aj, so cj xj ≥ yT Aj xj;
if j ∈ N3 then xj is free and yT Aj = cj, so cj xj = yT Aj xj.
The second inequality can be justified in a similar fashion.

Corollary 4.1. If (P) has unbounded objective then (D) is infeasible. Similarly, if (D) has unbounded objective then (P) is infeasible.

4.4 Strong duality
The Strong Duality Theorem states that the value of the primal and the dual programs must be the same, provided the programs are feasible and have bounded objective.
Theorem 4.2. If a linear program is feasible and has bounded objective, so does its dual. Furthermore, the value of the two programs is the same.
Proof. We will prove the theorem only for linear programs in standard form3:
The corollary follows from Theorem 4.1: any feasible solution of (D) sets a lower bound on the value of (P). Similarly, any feasible solution of (P) sets an upper bound on the value of (D).
3 In the Exercises we will show that it indeed holds for any linear program
minimize   c · x
subject to Ax = b
           x ≥ 0

The dual program is:

maximize   b · y
subject to yT A ≤ cT
Suppose the primal program has bounded objective. Then if we run Simplex on the primal program, it will return an optimal basic feasible solution x. Let B be the basis associated with x. Recall that the termination criterion for Simplex was that reduced costs at B were non-negative; that is,
cT − cBT AB−1 A ≥ 0.

Therefore, the solution yT = cBT AB−1 is dual feasible. The cost of this solution is

b · y = yT b = cBT AB−1 b = cBT xB = c · x.
From Theorem 4.1, it follows that y is an optimal dual solution. Furthermore, x and y have the same objective value.
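To make the construction in the proof concrete, here is a small sketch (my own, using NumPy; the optimal basis is asserted rather than computed by Simplex) for the toy example of Section 4.1 put in standard form with slack variables. Reading off yT = cBT AB−1 from the optimal basis gives a dual feasible solution with the same objective value.

import numpy as np

# Toy LP in standard form: minimize -2x1 - 3x2
# subject to 6x1 + 4x2 + s1 = 10,  x1 + 5x2 + s2 = 4,  x, s >= 0.
A = np.array([[6.0, 4.0, 1.0, 0.0],
              [1.0, 5.0, 0.0, 1.0]])
b = np.array([10.0, 4.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

B = [0, 1]                           # assumed optimal basis: the columns of x1, x2
A_B, c_B = A[:, B], c[B]
x_B = np.linalg.solve(A_B, b)        # basic variables: (17/13, 7/13)
y = np.linalg.solve(A_B.T, c_B)      # y^T = c_B^T A_B^{-1}

print(np.all(A.T @ y <= c + 1e-9))   # dual feasibility, i.e. non-negative reduced costs
print(c_B @ x_B, b @ y)              # both equal -55/13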
4.5 Complementary slackness
Strong duality constrains the structure of the optimal solutions of primal and dual programs. We now show that a pair of optimal primal and dual solutions must obey the so-called complementary slackness property.
Notice that this Theorem only provides interesting information for indices associated with inequality constraints (j ∈ N1 ∪ N2 and i ∈ M1 ∪ M2).
For equality constraints (j ∈ N3 and i ∈ M3) the property holds trivially.
Theorem 4.3. Let x and y be feasible solutions to (P) and (D) respectively. Then x and y are optimal if and only if

i) xj = 0 or yT Aj = cj for all j ∈ N1 ∪ N2 ∪ N3, and
ii) yi = 0 or aiT x = bi for all i ∈ M1 ∪ M2 ∪ M3.
Proof. Consider the proof of Theorem 4.1 applied to the optimal solutions x and y. By Theorem 4.2 we know that if the solutions are optimal then c · x = b · y; therefore, each of the inequalities in (4.1) must, in fact, be met with equality. Conversely, if we have equality throughout, the costs of the two solutions must be the same.
It is easy to check that the first inequality in (4.1) is met with equality if and only if i) holds. Similarly, the second inequality in (4.1) is met with equality if and only if ii) holds.
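For the toy example of Section 4.1, complementary slackness is easy to verify numerically. The following check (my own addition) multiplies each slack by the corresponding variable and confirms that all products vanish.

import numpy as np

A = np.array([[6.0, 4.0], [1.0, 5.0]])
b = np.array([10.0, 4.0])
c = np.array([2.0, 3.0])
x = np.array([17.0, 7.0]) / 13.0     # optimal primal solution
y = np.array([7.0, 10.0]) / 26.0     # optimal dual solution

# ii) for every constraint i: y_i = 0 or a_i . x = b_i
print(np.allclose(y * (A @ x - b), 0.0))
# i)  for every variable j:   x_j = 0 or y^T A_j = c_j
print(np.allclose(x * (y @ A - c), 0.0))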
4.6 Certifying infeasibility
If the program (P) and its dual (D) are feasible, Theorem 4.2 tells us that they must have the same value. Therefore, if one of them is feasible and has unbounded objective, then the other one must necessarily be infeasible. This connection between (P) and (D) can be exploited to design a simple certificate of infeasibility.
Theorem 4.4 (Farkas' lemma). Let A ∈ Rm×n and b ∈ Rm. Then exactly one of the following alternatives holds:

i) there exists x ∈ Rn such that Ax = b and x ≥ 0, or
ii) there exists y ∈ Rm such that AT y ≥ 0 and b · y < 0.
Proof. Let us first argue that both alternatives cannot hold simultaneously. Indeed, if Ax = b then

yT Ax = yT b.

Notice that x ≥ 0 and yT A ≥ 0, so yT Ax ≥ 0. On the other hand, we know that yT b < 0, so we get a contradiction.
Now assume that there is no x ∈ Rn such that x ≥ 0 and Ax = b. Then the following linear program is infeasible:

maximize   0 · x
subject to Ax = b
           x ≥ 0
While the statement and the proof of Theorem 4.4 may seem abstract and unintuitive at first, there is a neat geometric interpretation of the Theorem statement. We can think of Ax as a positive linear combination of the columns of A. Therefore, the system Ax = b, x ≥ 0 is infeasible if and only if b lies outside the conical hull {Ax : x ≥ 0} of the columns of A. Intuitively, b lies outside this cone if and only if there is a hyperplane separating the two. The vector y is the normal direction of this hyperplane.
By Theorem 4.2, this means its dual must be either infeasible or have unbounded objective. The dual is the following program
minimize   b · y
subject to AT y ≥ 0
The program is feasible since y = 0 is a valid solution, so it must have unbounded objective. Therefore, there must be a solution y such that AT y ≥ 0 and b · y < 0.
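Computationally, the dichotomy of Theorem 4.4 can be exercised with an LP solver. The sketch below (my own; the function name farkas is made up) mirrors the proof: it first tries to find x ≥ 0 with Ax = b, and otherwise searches for a certificate y with AT y ≥ 0 and b · y < 0. Since the certificate program minimize b · y subject to AT y ≥ 0 is unbounded whenever a certificate exists, box bounds are added to obtain a finite witness.

import numpy as np
from scipy.optimize import linprog

def farkas(A, b):
    """Return ('x', x) with Ax = b, x >= 0, or ('y', y) with A^T y >= 0 and b.y < 0."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    m, n = A.shape
    feas = linprog(c=np.zeros(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
    if feas.status == 0:                       # alternative i) holds
        return 'x', feas.x
    # Alternative ii): A^T y >= 0 is written as -A^T y <= 0 for linprog,
    # and the box bounds keep the (otherwise unbounded) LP finite.
    cert = linprog(c=b, A_ub=-A.T, b_ub=np.zeros(n), bounds=[(-1, 1)] * m)
    return 'y', cert.x

print(farkas([[1.0, 1.0]], [1.0]))     # feasible: returns some x >= 0 with x1 + x2 = 1
print(farkas([[1.0, 1.0]], [-1.0]))    # infeasible: y = 1 has A^T y >= 0 and b.y = -1 < 0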
4.7 Physical interpretation of duality
The exposition in this section is not meant to be a rigorous mathe- matical argument, but rather a helpful image for building intuition. Feel free to skip this part if you find it confusing.
Consider the linear program

minimize   c · x
subject to Ax ≥ b
           x free

and its dual

maximize   b · y
subject to yT A = cT
           y ≥ 0
Let A have rows aT1, …, aTm. Picture a physical representation of the polyhedron P = {x : Ax ≥ b} where the hyperplanes ai · x = bi are solid walls. Imagine placing a ball inside P and letting it fall under the influence of a gravitational force that pulls the ball in the direction opposite to c. If the ball is small enough, after some time it will come to rest near some corner x of P. The ball is stationary because the gravitational pull is perfectly cancelled by the forces exerted by the walls it rests on. In other words, there are coefficients y1, …, ym ≥ 0 such that

c = y1 a1 + · · · + ym am.
Notice that y is a dual feasible solution. Furthermore, yi > 0 only for those hyperplanes that meet at x, so if ai · x > bi then yi = 0. In other words, for all i, yi = 0 or ai · x = bi, which is precisely the complementary slackness condition ii) of Theorem 4.3. Therefore,

b · y = Σi yi bi = Σi yi (aiT x) = yT Ax = cT x.

In other words, we get strong duality.
Exercises

1. Each linear program can be classified as infeasible, feasible with unbounded objective, or feasible with bounded objective. Therefore, a pair of primal and dual programs can potentially be classified into nine possible types. Not all nine combinations can be achieved. For each combination, either argue why it cannot be achieved or show an example (a pair of primal and dual programs).
Hint: it is possible to have an infeasible-infeasible primal-dual pair of programs.
2. Our proof of Theorem 4.2 only works for linear programs in standard form. Show that it also holds for programs of the form:
minimize   c · x
subject to Ax ≥ b
           x ≥ 0
First turn the above program into an equivalent program in standard form by introducing additional slack variables (we already did this in Week 1). Then show that the duals of the original program and the one in standard form are equivalent.
3. Our proof of Theorem 4.2 only works for linear programs in standard form. Show that it also holds for programs of the form:
maximize   b · y
subject to AT y ≤ c
Notice that the proof of Theorem 4.2 does not work here because the program is not in standard form, but rather it is the dual of a program in standard form.
4. Let A ∈ Rm×n and b ∈ Rm. Prove that exactly one of the following alternatives holds:

i) there exists x ∈ Rn such that Ax ≥ b, or
ii) there exists y ∈ Rm such that AT y = 0, y ≥ 0, and b · y > 0.
* 5. Prove that the system Ax = 0 and x > 0 is infeasible if and only if the system AT y ≥ 0 and AT y ≠ 0 is feasible. Here x > 0 means every coordinate of x is strictly positive.
* 6. Let a and a1, …, am be vectors in Rn. Prove that the following statements are equivalent:

1. for all x ∈ Rn such that x ≥ 0 we have a · x ≤ maxi ai · x, or
2. there exists λ ∈ Rm such that λ ≥ 0, Σi λi = 1, and a ≤ Σi λi ai.
Solutions of selected exercises
2. Call the above program (P). Its dual, which we will denote by (D), is

maximize   b · y
subject to AT y ≤ c
           y ≥ 0
We introduce an equivalent program (P') in standard form by adding dummy variables that eat up the slack:

minimize   c · x
subject to Ax − Iz = b
           x ≥ 0
           z ≥ 0
We call the dual of this new program (D'):

maximize   b · y
subject to AT y ≤ c
           −Iy ≤ 0
           y free
Notice that (D’) is in fact equivalent to (D), since −Iy ≤ 0 is the same constraint as y ≥ 0.
To finish the argument, we note that if (P) is feasible and has bounded objective, then (P') is feasible and has bounded objective, because they are, in essence, the same program. Using the proof of Theorem 4.2 from the notes, we get that (D') is feasible and has the same value as (P'). But since (D') and (D) are equivalent, it follows that (D) is also feasible and has the same value as (P).
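A quick numerical sanity check of this argument (my own addition, reusing the coefficients of the toy example from Section 4.1 but with the constraints read as ≥): the values of (P), (P'), and (D) computed with scipy.optimize.linprog coincide.

import numpy as np
from scipy.optimize import linprog

A = np.array([[6.0, 4.0], [1.0, 5.0]])
b = np.array([10.0, 4.0])
c = np.array([2.0, 3.0])
m, n = A.shape

# (P):  minimize c.x  s.t.  Ax >= b, x >= 0   (>= rows negated for linprog)
P = linprog(c=c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * n)
# (P'): minimize c.x  s.t.  Ax - Iz = b, x >= 0, z >= 0
Pp = linprog(c=np.concatenate([c, np.zeros(m)]),
             A_eq=np.hstack([A, -np.eye(m)]), b_eq=b,
             bounds=[(0, None)] * (n + m))
# (D):  maximize b.y  s.t.  A^T y <= c, y >= 0   (negated into a minimization)
D = linprog(c=-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * m)

print(P.fun, Pp.fun, -D.fun)   # all three values agree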