6CCS3OME/7CCSMOME – Optimisation Methods
Lecture 1
Single-source shortest-paths problem: Basic concepts, Relaxation technique, Bellman-Ford algorithm
Tomasz Radzik and Kathleen Steinhöfel
Department of Informatics, King’s College London 2020/21, Second term
Single-source shortest-paths problem
• G = (V, E) – directed graph; |V| = n, |E| = m.
• w(v, u) – the weight of edge (v, u).
• s ∈ V – the source vertex.
[Figure: an example weighted directed graph with source vertex s.]
• Compute the shortest paths, that is, the paths with the smallest total weights, from s to all other vertices.
• The point-to-point shortest-path problem is normally solved using a single-source shortest-paths algorithm (or an adaptation of such an algorithm).
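As a concrete (hypothetical) illustration of this input, here is one way to hold such a weighted directed graph in Python as a list of edges; the vertex names and weights are made up, and the later sketches in these notes reuse this edge-list form.

    # A weighted directed graph stored as a list of edges (u, v, w(u, v)).
    # Vertex names and weights below are illustrative only, not the graph from the figure.
    graph_edges = [
        ("s", "x", 5), ("x", "a", 1), ("a", "c", 2),
        ("x", "r", 3), ("r", "z", 2), ("z", "v", 7),
    ]
    vertices = {u for (u, v, w) in graph_edges} | {v for (u, v, w) in graph_edges}
    source = "s"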
6CCS3OME/7CCSMOME, Lecture 1
2 / 31
Preliminaries
[Figure: the example graph. δ(s, v) = 14, and there are two shortest paths from s to v; δ(s, y) = +∞.]
• A path: p = ⟨v1, v2, . . . , vk, vk+1⟩, where (vi, vi+1) is an edge for each i = 1, 2, . . . , k. For example, path Q = ⟨s, x, r, z, v⟩.
• The weight of a path p = ⟨v1, v2, . . . , vk, vk+1⟩:
  w(p) = w(v1, v2) + w(v2, v3) + · · · + w(vk, vk+1). For the example path, w(Q) = 14.
• The shortest-path weight (distance) from u to v:
  δ(u, v) = min w(p) over all paths p from u to v, if there is any such path;
  δ(u, v) = +∞, if there is no path from u to v.
• A shortest path from u to v is any path p from u to v with weight w(p) = δ(u, v).
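A small Python sketch of the definitions above (path weight and the +∞ convention); representing the weights as a dict keyed by edges is an assumption carried over from the earlier illustrative edge list.

    def path_weight(path, w):
        # path: a sequence of vertices <v1, v2, ..., vk+1>;
        # w: a dict mapping an edge (u, v) to its weight w(u, v).
        # Returns float('inf') if some consecutive pair is not an edge of the graph.
        total = 0
        for u, v in zip(path, path[1:]):
            if (u, v) not in w:
                return float("inf")
            total += w[(u, v)]
        return total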
6CCS3OME/7CCSMOME, Lecture 1
3 / 31
Preliminaries (cont.)
• Useful simple facts:
1. A subpath of a shortest path is a shortest path.
In the graph on the previous slide: path ⟨x, r, z⟩ is a subpath of a shortest path ⟨s, x, r, z, v⟩, so it must be a shortest path (from x to z).
2. Triangle inequality:
for each edge (u, v) ∈ E, δ(s, v) ≤ δ(s, u) + w(u, v).
[Figure: a shortest path from s to v compared with a shortest path from s to u followed by the edge (u, v).]
Holds also if u or v is not reachable from s.
• First the general case:
the weights of edges may be negative.
6CCS3OME/7CCSMOME, Lecture 1
4 / 31
Negative weights in an application
Example from financial analysis: a graph of exchange rates.
• For an edge (x, y): γ(x, y) = the exchange rate from x to y. E.g., γ(S, D) = 1.6.
• Set the weight of edge (x, y): w(x, y) = −ln(γ(x, y)).
  (We use the natural logarithms, but any fixed base > 1 would do.)
  For example: w(S, D) = −ln(γ(S, D)) = −ln(1.6) ≈ −0.47.
• If γ(x, y) > 1, then w(x, y) < 0.
• A path from v to u maximises the combined exchange rate from v to u, if and only if, this is a shortest path from v to u according to these edge weights.
• We cannot avoid negative weights here, so we have to solve the shortest-paths problem in a graph with (some) edge weights negative.
[Figure: two copies of a graph on Dollar, Yen, Sterling, INTEL and IBM; left, the exchange rates γ(x, y) – find paths maximising exchange rates; right, the weights w(x, y) = −ln(γ(x, y)) – find shortest paths.]
6CCS3OME/7CCSMOME, Lecture 1
5 / 31
The exchange-rates example (cont.)
For a path P = ⟨v1, v2, . . . , vk⟩:
    w(P) = w(v1, v2) + w(v2, v3) + · · · + w(vk−1, vk)
         = −ln(γ(v1, v2)) − ln(γ(v2, v3)) − · · · − ln(γ(vk−1, vk))
         = −[ln(γ(v1, v2)) + ln(γ(v2, v3)) + · · · + ln(γ(vk−1, vk))]
         = −ln(γ(v1, v2) γ(v2, v3) · · · γ(vk−1, vk))          {ln(a) + ln(b) + ln(c) = ln(abc)}
         = −ln(γ(P)).
For two paths P and Q, the following relations are equivalent:
    γ(P) > γ(Q)  ⟺  ln(γ(P)) > ln(γ(Q))  ⟺  −ln(γ(P)) < −ln(γ(Q))  ⟺  w(P) < w(Q).
That is, “γ(P) > γ(Q)” if and only if “w(P) < w(Q)”.
Thus, a path P from v to u is a maximum exchange rate path from v to u, if and only if,
P is a shortest path from v to u according to the edge weights w.
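A runnable sketch of the transformation above. γ(S, D) = 1.6 is taken from the slide; the other rates, and the two competing paths, are made up for illustration.

    import math

    rates = {("S", "D"): 1.6, ("D", "Y"): 100.0, ("S", "Y"): 155.0}      # gamma(x, y); mostly hypothetical
    weights = {edge: -math.log(r) for edge, r in rates.items()}          # w(x, y) = -ln(gamma(x, y))

    def combined_rate(path):
        # Product of the exchange rates along a path given as a vertex sequence.
        prod = 1.0
        for u, v in zip(path, path[1:]):
            prod *= rates[(u, v)]
        return prod

    # The path with the larger combined rate has the smaller total weight:
    print(combined_rate(["S", "D", "Y"]), weights[("S", "D")] + weights[("D", "Y")])   # about 160, about -5.08
    print(combined_rate(["S", "Y"]), weights[("S", "Y")])                              # about 155, about -5.04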
6CCS3OME/7CCSMOME, Lecture 1 6 / 31
Negative-weight cycles (negative cycles)
[Figure: a graph with a negative cycle reachable from the source s; the shortest-path estimates of the vertices on and beyond the cycle (marked “?”) can be pushed down without bound.]
• If there is a negative cycle on a path from s to v, then by going increasingly many times around such a cycle,
we get paths from s to v
of arbitrarily small (negative) weights.
• If there is a negative cycle on a path from s to v, then, by convention, δ(s, v) = −∞.
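A tiny numeric illustration of the first bullet, with made-up weights: a path of weight 5 reaches a cycle of total weight −2, and each extra pass around the cycle lowers the total weight.

    prefix_weight, cycle_weight = 5, -2              # hypothetical numbers
    for k in range(4):
        print(k, prefix_weight + k * cycle_weight)   # 5, 3, 1, -1, ... arbitrarily small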
6CCS3OME/7CCSMOME, Lecture 1
7 / 31
We give up if we detect a negative cycle in the input graph
• We consider only shortest-paths algorithms which:
• compute shortest paths for graphs with no negative cycles reachable from the source
• for graphs with negative cycles reachable from the source, correctly detect that the input graph has a negative cycle (but are not required to compute anything else).
• Why don’t we compute shortest simple paths (not containing cycles)? Such paths are always well defined, even if the graph has negative cycles.
• This would be a valid, and interesting computational problem (with some applications).
• The issue: we know efficient algorithms which compute shortest paths in graphs with no negative cycles, but we don’t know any efficient algorithm for computing shortest simple paths in graphs with negative cycles.
• The problem of computing shortest simple paths in graphs containing negative cycles is NP-hard, that is, computationally difficult (by reduction from the Hamiltonian Path problem), and the algorithms which we discuss
in Lectures 1–3 do not apply. 6CCS3OME/7CCSMOME, Lecture 1
8 / 31
Representation of computed shortest paths: A shortest-paths tree
[Figure: the example graph with a shortest-paths tree rooted at s marked on it; e.g. PARENT[a] = x, PARENT[x] = s, PARENT[s] = nil, PARENT[y] = nil, . . .]
• If there are multiple shortest paths from s to v, do we want to find all of them or just one?
For each node v reachable from s, we want to find just one shortest path from s to v.
• How should we represent the output, that is, the computed shortest paths?
• If there is no negative cycle reachable from s, then shortest paths from s to all nodes reachable from s are well defined and can be represented by a shortest-paths tree:
a tree rooted at s such that for any node v reachable from s, the tree path from s to v is a shortest path from s to v.
6CCS3OME/7CCSMOME, Lecture 1 9 / 31
Representation of computed shortest paths: A shortest-paths tree (cont.)
• A shortest-paths tree from s contains exactly one shortest path from s to each v reachable from s. There may be other shortest paths from s to v, which are not included in this tree.
• A shortest-paths tree T can be represented by an array PARENT[v], v ∈ V, in Θ(n) space (memory), where n is the number of nodes in the graph.
• PARENT[v] is the predecessor of node v in tree T.
• An explicit representation of shortest-paths from s (a list of shortest-paths, one path per one node reachable from s, and each path represented as a full sequence of all its edges) would take Θ(n2) space in the worst case.
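A small Python sketch of reading one shortest path out of the Θ(n) PARENT representation; it assumes PARENT is a dict with None for nil and that the pointers form a tree (no negative cycle has been detected).

    def path_from_parent(parent, s, v):
        # parent: dict, parent[v] is the predecessor of v in the tree, None for nil.
        # Assumes the PARENT pointers form a tree rooted at s (no negative cycle).
        path = [v]
        while path[-1] != s:
            p = parent.get(path[-1])
            if p is None:                 # v is not reachable from s
                return None
            path.append(p)
        path.reverse()
        return path

For the tree tabulated on the next slide, path_from_parent(parent, 's', 'v') would give ['s', 'x', 'r', 'z', 'v'], whose weight 14 matches d[v].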
6CCS3OME/7CCSMOME, Lecture 1 10 / 31
Output of a shortest-paths algorithm
• A shortest-paths algorithm computes the shortest-path weights and a shortest-paths tree, or detects a negative cycle reachable from s.
• Example.
Input graph and the computed shortest-paths tree:
[Figure: the input graph with the computed shortest-paths tree marked on it.]
The output of a shortest-path algorithm: array PARENT[.] representing the computed shortest-paths tree and array d[.] with the shortest-path weights:
node           a   b    c   h   r   s    v    x   y    z
PARENT[node]   x   nil  a   r   x   nil  z    s   nil  r
d[node]        6   ∞    8   6   4   0    14   1   ∞    7

• Terminology and notation in [CLRS]:
(parent, PARENT[v], d[v]) −→ (predecessor, v.π, v.d). 6CCS3OME/7CCSMOME, Lecture 1
11 / 31
Relaxation technique (for computing shortest paths)
• For each node v, maintain:
• d[v]: the shortest-path estimate for v – an upper bound on the weight of a shortest path from s to v;
• PARENT[v]: the current predecessor of node v.
• The relaxation technique: INITIALIZATION followed by a sequence of RELAX operations.

  The state after INITIALIZATION(G, s) for the example graph:
  node     a    b    c    h    r    s    v    x    y    z
  PARENT   nil  nil  nil  nil  nil  nil  nil  nil  nil  nil
  d        ∞    ∞    ∞    ∞    ∞    0    ∞    ∞    ∞    ∞

  INITIALIZATION(G, s)
      d[s] ← 0; PARENT[s] ← NIL
      for each node v ∈ V − {s} do
          d[v] ← ∞; PARENT[v] ← NIL

  RELAX(u, v, w)                      { relax edge (u, v) }
      if d[v] > d[u] + w(u, v) then
          d[v] ← d[u] + w(u, v); PARENT[v] ← u

  [Figure: an edge (u, v) with the estimates d[u] and d[v], the weight w(u, v), and the PARENT pointers, illustrating a RELAX operation.]
• All algorithms discussed in this module are based on the relaxation technique (as are most shortest-paths algorithms). They differ in the order of the RELAX operations.
• When should the computation stop?
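A minimal runnable Python version of the two operations above; the dicts d and PARENT and the function names are illustrative choices, not prescribed by the slides.

    import math

    def initialization(vertices, s):
        # INITIALIZATION(G, s): d[s] = 0, every other estimate infinite, all parents nil.
        d = {v: math.inf for v in vertices}
        parent = {v: None for v in vertices}
        d[s] = 0
        return d, parent

    def relax(u, v, w_uv, d, parent):
        # RELAX(u, v, w): try to improve the estimate for v via edge (u, v).
        # Returns True if the operation was effective (it decreased d[v]).
        if d[v] > d[u] + w_uv:
            d[v] = d[u] + w_uv
            parent[v] = u
            return True
        return False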
6CCS3OME/7CCSMOME, Lecture 1 12 / 31
Relax operation
[Figure: a numerical example of a single RELAX(u, v) operation: before the relax, d[v] is larger than d[u] + w(u, v); after the relax, d[v] equals d[u] + w(u, v) and PARENT[v] is set to u. The estimates of the other vertices are unchanged.]
6CCS3OME/7CCSMOME, Lecture 1
13 / 31
Example of Relaxation Technique – LGT
[Figure: the example graph with vertices s, u, v, x, y. Relax operations are applied in the order (s, u), (y, v), (u, x), (x, v), (v, y), then (u, v), then (x, y), and so on; the figure shows the estimates d[.] after each group of relaxations.]
6CCS3OME/7CCSMOME, Lecture 1
14 / 31
Properties of the relaxation technique
• (1) Non-increasing shortest-path estimates. Throughout the computation, for each node v, the shortest-path estimate d[v] can only decrease (it never increases). (From the definition of the RELAX operation.)
• (2) For each node v, the shortest-path estimate d[v] is always either equal to ∞ (at the beginning) or equal to the weight of some path from s to v. (By induction.)
[Figure: graph G with source s; some vertices have finite d[.], the remaining vertices have infinite d[.].]
• (3) Upper-bound property. For each node v, we always have d[v] ≥ δ(s, v). (Directly from (2).)
• (4) No-path property. If there is no path from s to v, then we always have d[v] = δ(s, v) = ∞. (Directly from (2).)
6CCS3OME/7CCSMOME, Lecture 1 15 / 31
Properties of the relaxation technique (cont. 1)
• (5) Convergence property. If (s, . . . , u, v) is a shortest path from s to v and if d[u] = δ(s, u) at any time prior to relaxing edge (u, v), then
d[v] = δ(s, v) at all times afterward.
• (6) Path-relaxation property. If p = (v0, v1, . . . , vk) is a shortest path from s = v0 to vk, and we relax the edges of p in the order (v0, v1), (v1, v2), . . . , (vk−1, vk), then d[vk] = δ(s, vk).
This property holds regardless of any other relaxation steps that occur, even if they are intermixed with relaxations of the edges of path p.
6CCS3OME/7CCSMOME, Lecture 1
16 / 31
Relaxation technique: properties of the parent subgraph
• (7) For each edge (u, v) in the current parent subgraph (that is,
u = PARENT[v]), d[u] + w(u, v) ≤ d[v]. (by induction – LGT)
• (8) If the graph contains no negative-weight cycle reachable from s, then
  • the parent subgraph is always a tree T rooted at s,
  • for each node v in tree T, d[v] ≥ the weight of the path in T from s to v. (Prove using (7) – LGT.)
[Figure: graph G with the current parent tree rooted at s; the vertices in the tree have finite d[.].]
The red part of the current parent tree (“near” the source vertex s) is already part of the computed shortest-paths tree.
The blue part may change during the subsequent relax operations.
6CCS3OME/7CCSMOME, Lecture 1
17 / 31
Relaxation technique: properties of the parent subgraph (cont.)
• (9) When d[v] = δ(s, v) for all v ∈ V, then
  • the parent subgraph is a shortest-paths tree rooted at s, (from (8))
  • no further updates are possible. (from (3) – the upper-bound property)
[Figure: graph G; the parent subgraph is a shortest-paths tree rooted at s, and the vertices not reachable from s have infinite d[.].]
6CCS3OME/7CCSMOME, Lecture 1 18 / 31
Effective relax operations
• Definition:
an effective relax operation is a relax operation which decreases d[v].
• (10) The computation can progress if the shortest-path weights have not all been reached yet:
  • (a) There exists a vertex x ∈ V such that d[x] > δ(s, x), if and only if,
  • (b) there exists an edge (u, v) ∈ E such that d[v] > d[u] + w(u, v) (that is, another effective relax operation is possible).
• (b) ⇒ (a): from (3) and the triangle inequality.
• (a) ⇒ (b): Assume d[x] > δ(s, x) for some node x (so δ(s, x) < +∞).
  • Case δ(s, x) > −∞ (no negative cycle on a path from s to x). Consider a shortest s-to-x path P.
    [Figure: the path P from s to x, with d[s] = δ(s, s) at its start and d[x] > δ(s, x) at its end.]
    P must have an edge (u, v) such that d[u] = δ(s, u) but d[v] > δ(s, v).
    For such an edge: d[v] > δ(s, v) = δ(s, u) + w(u, v) = d[u] + w(u, v).
  • Case δ(s, x) = −∞ for some vertex x: exercise (consider a path (s, u1, . . . , uk, v1, . . . , vr, v1), where (v1, . . . , vr, v1) is a negative cycle).
6CCS3OME/7CCSMOME, Lecture 1
19 / 31
The relaxation technique: summary for the case of no negative cycles
• If no negative cycle is reachable from s:
[Figure: graph G; the parent tree rooted at s, and the vertices not reachable from s with infinite d[.].]
• Only finitely many effective relax operations during the computation.
(If no neg. cycles, Prop. (2) becomes: d[v] is ∞ or the weight of a simple s-to-v path. Hence d[v] takes on only finitely many different values.)
• The PARENT-subgraph is always a tree rooted at s. (property (8))
• When eventually no effective relax operation possible, then for each node v, d[v] = δ(s, v) (property (10)), and
the PARENT pointers form a shortest-paths tree (property(9)).
6CCS3OME/7CCSMOME, Lecture 1 20 / 31
The relaxation technique: summary for the case with negative cycles
[Figure: graph G with a negative cycle reachable from s.]
• If there is a negative cycle reachable from s:
• There is always an edge (u, v) such that: d[v] > d[u] + w(u, v), that is,
an effective RELAX operation is always possible. (from property (10))
• The PARENT subgraph eventually contains a cycle. (not easy to prove)
• Thus we can detect the existence of a negative cycle by periodically checking if the PARENT pointers form a cycle.
This way we can avoid an infinite loop, so we can turn any relaxation-technique computation into an algorithm.
How to check for cycles in the parent subgraph and how often? 6CCS3OME/7CCSMOME, Lecture 1
21 / 31
The Bellman-Ford algorithm (for single-source shortest-paths problem)
• Edge weights may be negative.
BELLMAN-FORD(G, w, s)                              { G = (V, E) }
    INITIALIZATION(G, s)
1:  for i ← 1 to n − 1 do                          { n = |V| }
        for each edge (u, v) ∈ E do                { edges considered in arbitrary order }
            RELAX(u, v, w)
2:  for each edge (u, v) ∈ E do
        if d[v] > d[u] + w(u, v) then
            return FALSE                           { a negative cycle is reachable from s }
    return TRUE                                    { no negative cycles; for each v ∈ V, d[v] = δ(s, v) }

Representation of the input: nodes indexed by 1, 2, . . . , n; one node index given as the source; a list of edges.
• This algorithm is based on the relaxation technique: INITIALIZATION followed by a (finite) sequence of RELAX operations.
• The running time is Θ(mn) (that is, exactly of the order of mn), where n is the number of nodes and m is the number of edges.
• The worst-case running time of the known algorithms for the single-source shortest-paths problem with negative weights is Ω(mn) (that is, at least of the order of mn).
6CCS3OME/7CCSMOME, Lecture 1
22 / 31
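A compact runnable Python sketch of BELLMAN-FORD following the pseudocode above, using the illustrative edge-list input introduced earlier; the function and variable names are my own choices.

    import math

    def bellman_ford(vertices, edges, s):
        # edges: a list of triples (u, v, w(u, v)).
        # Returns (ok, d, parent); ok is False if a negative cycle is reachable from s.
        d = {v: math.inf for v in vertices}          # INITIALIZATION(G, s)
        parent = {v: None for v in vertices}
        d[s] = 0
        for _ in range(len(vertices) - 1):           # loop 1: n - 1 passes
            for u, v, w_uv in edges:                 # relax every edge, arbitrary order
                if d[u] + w_uv < d[v]:
                    d[v] = d[u] + w_uv
                    parent[v] = u
        for u, v, w_uv in edges:                     # loop 2: negative-cycle check
            if d[u] + w_uv < d[v]:
                return False, d, parent              # a negative cycle is reachable from s
        return True, d, parent

Running this on the five-vertex example of the next two slides (with any edge order) should return ok = TRUE and the same final shortest-path weights d[.] shown there.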
Example of the computation by the Bellman-Ford algorithm – LGT
[Figure: the example graph with vertices s, u, v, x, y.]
Assume that the edges are given in this order:
(s, u), (s, x), (y, s), (v, y), (u, v), (x, v), (y, v), (x, u), (x, y), (u, x).
Initially (relaxation technique initialization):
node        s    u    v    x    y
PARENT[.]   nil  nil  nil  nil  nil
d[.]        0    ∞    ∞    ∞    ∞
6CCS3OME/7CCSMOME, Lecture 1
23 / 31
Example of the computation by the Bellman-Ford algorithm (cont.)
At the end of the 1st iteration of the main loop 1:
node        s    u   v    x   y
PARENT[.]   nil  x   u    s   x
d[.]        0    8   11   5   7

At the end of the 2nd iteration of the main loop 1:
node        s    u   v    x   y
PARENT[.]   nil  x   u    s   x
d[.]        0    8   9    5   7
The shortest paths and the shortest-paths weights are now computed, but the algorithm will execute the remaining two iterations of the main loop.
6CCS3OME/7CCSMOME, Lecture 1 24 / 31
Correctness of the Bellman-Ford algorithm
• If there is a negative cycle reachable from s, then Bellman-Ford algorithm returns FALSE (because an effective relax operation will always be possible).
That is, in this case the algorithm is correct.
• If no negative cycle is reachable from s, then the following claim is true.
Claim: At the termination of loop 1, d[v] = δ(s, v) for each vertex v ∈ V.
• This claim follows from the lemma given on the next slide.
• This claim implies that at the termination of loop 1, no effective relax operation is possible so the algorithm returns TRUE.
At the end of the computation:
(a) array d contains the shortest path distances from s to all other nodes;
(b) array PARENT contains a shortest-paths tree with node s as the source (from (a) and property (9) of the relaxation technique).
• That is, in the case of “no negative cycles,” the algorithm is also correct. 6CCS3OME/7CCSMOME, Lecture 1 25 / 31
Correctness of the Bellman-Ford algorithm (cont.)
• Lemma: If the length (the number of edges) of a shortest (simple) path from s to a node v is k, then at the end of iteration k (of the main loop 1) in the Bellman-Ford algorithm, d[v] = δ(s, v).
• This lemma follows from the path-relaxation property of the relaxation technique (property (6)).
• A shortest path from s to v of length k (k edges on the path, k ≤ n − 1):
  [Figure: the path s → x1 → x2 → x3 → · · · → xk = v.]
d[x1] = δ(s, x1) by the end of the 1st iteration, d[x2] = δ(s, x2) by the end of the 2nd iteration, d[x3] = δ(s, x3) by the end of the 3rd iteration, …
d[v] = δ(s, v) by the end of the k-th iteration.
• This lemma implies the claim from
the previous slide, because each
simple shortest path has at most n − 1 edges. 6CCS3OME/7CCSMOME, Lecture 1
26 / 31
Speeding-up the Bellman-Ford algorithm
• Try to decrease the number of iterations of loop 1:
• If no effective relax operation occurs in the current iteration, then terminate: the
shortest-path weights are already computed.
• Check periodically (at the end of each iteration of loop 1?) if the PARENT pointers form a cycle. If they do, then terminate and return FALSE: there is a negative cycle reachable from s.
• Try to decrease the running time of one iteration of loop 1: consider only edges which may give effective relax operations.
• We say a vertex u is active if the edges outgoing from u have not been relaxed since the last time d[u] was decreased.
• Perform RELAX only on edges outgoing from active vertices.
• Store active vertices in the Queue data structure.
(What is the Queue data structure?) 6CCS3OME/7CCSMOME, Lecture 1
27 / 31
The Bellman-Ford algorithm with FIFO Queue
BF-WITH-FIFO(G, w, s)                              { G = (V, E) }
    INITIALIZATION(G, s)
    Q ← ∅; ENQUEUE(Q, s)                           { Q – FIFO queue of active vertices }
    while Q ≠ ∅ do
        u ← head(Q); DEQUEUE(Q)
        for each v ∈ Adj[u] do                     { for each node v adjacent to u }
            RELAX(u, v, w)
    return TRUE                                    { for each v ∈ V, d[v] = δ(s, v) }

RELAX(u, v, w):
    if d[v] > d[u] + w(u, v) then
        d[v] ← d[u] + w(u, v); PARENT[v] ← u
        if v is not in Q then ENQUEUE(Q, v)

[Figure: a vertex u with its adjacency list of neighbours v1, v2, v3.]
• A mechanism for detecting negative cycles must be added (Q never empty, if there is a negative cycle): periodically check PARENT pointers for a cycle.
• The running time: O(mn), if properly implemented. But Θ(mn) worst case. 6CCS3OME/7CCSMOME, Lecture 1 28 / 31
Representation of the input: the array Adj of adjacency lists; Adj[u] – the list of u’s neighbours.
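A runnable Python sketch of BF-WITH-FIFO (this queue-based variant is sometimes called SPFA); the adjacency-list format is an assumption of mine. As noted above, a full implementation would also periodically check the PARENT pointers for a cycle; this sketch omits that check and so assumes no negative cycle is reachable from s.

    from collections import deque
    import math

    def bf_with_fifo(adj, s):
        # adj: dict mapping each vertex u to a list of (v, w(u, v)) pairs
        #      (every vertex must appear as a key, possibly with an empty list).
        # Assumes no negative cycle is reachable from s; otherwise this loops forever.
        d = {v: math.inf for v in adj}
        parent = {v: None for v in adj}
        d[s] = 0
        queue, in_queue = deque([s]), {s}            # FIFO queue of active vertices
        while queue:
            u = queue.popleft()
            in_queue.discard(u)
            for v, w_uv in adj[u]:
                if d[u] + w_uv < d[v]:               # effective RELAX(u, v, w)
                    d[v] = d[u] + w_uv
                    parent[v] = u
                    if v not in in_queue:            # enqueue v only if it is not already queued
                        queue.append(v)
                        in_queue.add(v)
        return d, parent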
Example of the computation of Bellman-Ford with FIFO Queue – LGT
[Figure: the example graph with vertices s, u, v, x, y.]
Assume that the edges outgoing from nodes are given in this order:
(s, u), (s, x);
(u, x), (u, v);
(x, u), (x, v), (x, y);
(v, y);
(y, s), (y, v).
6CCS3OME/7CCSMOME, Lecture 1
29 / 31
Examples and Exercises – LGT
1. Example of the relaxation technique – slide 14.
2. Show the properties of the relaxation technique given on slide 17:
• Property (7): For each edge (u, v) in the current parent subgraph (that is, u = PARENT[v]), d[u] + w(u, v) ≤ d[v].
• Property (8):
If the graph contains no negative-weight cycle reachable from s, then
• the parent subgraph is always a tree T rooted at s,
• for each node v in T, d[v] ≥ the weight of the path in T from s to v.
3. Example of the computation by the Bellman-Ford algorithm – slide 23.
4. Example of the computation of Bellman-Ford with FIFO Queue – slide 29.
6CCS3OME/7CCSMOME, Lecture 1 30 / 31
Exercises – LGT (cont.)
5. Design an efficient algorithm for checking whether the array PARENT[.] represents a tree (or whether it contains a cycle). Check how your algorithm works on these two arrays (one possible approach is sketched after the arrays):
NODE      a   b   c   g   h   s    x   z
PARENT:   x   s   s   c   x   NIL  b   c

NODE      a   b   c   g   h   s    x   z
PARENT:   x   h   s   c   x   NIL  b   c
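One possible sketch for exercise 5 (my own suggestion, not the official solution): follow the parent pointers from each node and mark nodes as you go, so every node is traversed a constant number of times, O(n) in total.

    def parent_array_is_tree(parent):
        # parent: dict mapping each node to its parent (None for NIL);
        # every parent value is assumed to also be a key or None.
        # Returns True if the pointers are cycle-free, False if they contain a cycle.
        state = {v: 0 for v in parent}        # 0 = unvisited, 1 = on the current chain, 2 = finished
        for start in parent:
            v, chain = start, []
            while v is not None and state[v] == 0:
                state[v] = 1
                chain.append(v)
                v = parent[v]
            if v is not None and state[v] == 1:
                return False                  # we walked back onto the current chain: a cycle
            for u in chain:
                state[u] = 2
        return True

    # The two arrays from the exercise:
    t1 = {"a": "x", "b": "s", "c": "s", "g": "c", "h": "x", "s": None, "x": "b", "z": "c"}
    t2 = {"a": "x", "b": "h", "c": "s", "g": "c", "h": "x", "s": None, "x": "b", "z": "c"}
    print(parent_array_is_tree(t1), parent_array_is_tree(t2))   # expected: True False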
6CCS3OME/7CCSMOME, Lecture 1
31 / 31