Note: You are looking at a static copy of the former PineWiki site, used for class notes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes elsewhere.
The shortest path problem is to find a path in a graph with given edge weights that has the minimum total weight. Typically the graph is directed, so that the weight w_{uv} of an edge uv may differ from the weight w_{vu} of vu; in the case of an undirected graph, we can always turn it into a directed graph by replacing each undirected edge with two directed edges with the same weight that go in opposite directions. We will use the terms weight and length interchangeably, and use distance for the minimum total path weight between two nodes, even when the weights don't make sense as lengths (for example, when some are negative).
There are two main variants to the problem:

The single-source shortest path problem is to compute the distance from some source node s to every other node in the graph. This variant includes the case where what we really want is just the distance from s to some target node t.

The all-pairs shortest path problem is to compute the distance between every pair of nodes in the graph. This can be solved by running a single-source algorithm once for each starting vertex, but it can be solved more efficiently by combining the work for different starting vertices.
There are also two different assumptions about the edge weights that can radically change the nature of the problem:

All edge weights are nonnegative. This is the natural case where edge weights represent distances, and allows a fast greedy solution for the single-source case.

Edge weights are arbitrary. This case typically arises when the edge weights represent the net cost of traversing some edge, which may be negative for profitable edges. Now greedy solutions fail: even though it may be very expensive to get to some distant node u, if there is a good enough edge leaving u you can make up all the costs by taking it. Shortest paths with negative edge weights are typically found by algorithms using techniques related to dynamic programming.
In the single-source shortest path problem, we want to compute the distance δ(s,t) from a single source node s to every target node t. (As a side effect, we might like to find the actual shortest path, but usually this can be done easily while we are computing the distances.) There are many algorithms for solving this problem, but most are based on the same technique, known as relaxation.
1.1. Relaxation
In general, a relaxation of an optimization problem is a new problem that replaces equality constraints in the original problem, like

δ(s,t) = min_{u} (δ(s,u) + w_{ut})
with an inequality constraint, like

d(s,t) ≥ min_{u} d(s,u) + w_{ut}.
When we do this kind of replacement, we are also replacing the exact distances δ(s,t) with upper bounds d(s,t), and guaranteeing that d(s,t) is always greater than or equal to the correct answer δ(s,t).
The reason for relaxing a problem is that we can start off with very high upper bounds and lower them incrementally until they settle on the correct answer. For shortest paths this is done by setting d(s,t) initially to zero when t=s and +∞ when t≠s (this choice doesn't require looking at the graph). We then proceed to lower the d(s,t) bounds by a sequence of edge relaxations (a different use of the same term), where relaxing an edge uv sets the upper bound on the distance to v to the minimum of the old upper bound and the upper bound that we get by looking at a path through u, i.e.

d'(s,v) := min(d(s,v), d(s,u) + w_{uv}).
It is easy to see that if d(s,v) ≥ δ(s,v) and d(s,u) ≥ δ(s,u), then it will also be the case that d'(s,v) ≥ δ(s,v).
What is less obvious is that performing an edge relaxation on every edge in some shortest s-t path in order, starting from the initial state of the d array, will set d(s,t) to exactly δ(s,t), even if other relaxation operations occur in between. The proof is by induction on the number of edges in the path. With zero edges, d(s,t) = δ(s,t) = 0. With k+1 edges, the induction hypothesis says that d(s,u) = δ(s,u) after the first k relaxations, where u is the second-to-last vertex in the path. But then the last relaxation sets d(s,t) ≤ δ(s,u) + w_{ut}, which is the length of the shortest path and thus equals δ(s,t); since d(s,t) can never drop below δ(s,t), it is now exactly δ(s,t).
We mentioned earlier that it is possible to compute the actual shortest paths as a side effect of computing the distances. This is done using relaxation by keeping track of a previous-vertex pointer p[v] for each vertex, so that the shortest path is found by following all the previous-vertex pointers back from the target and reversing the sequence. Initially, p[v] = NULL for all v; when relaxing an edge uv, p[v] is set to u just in case d(s,u) + w_{uv} is less than the previously computed distance d(s,v). So in addition to getting the correct distances by relaxing all edges on the shortest path in order, we also find the shortest path.
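For concreteness, here is a small sketch of the path-extraction step; the function name and the array conventions (parent[source] == source, parent[v] == -1 for unreached vertices) are assumptions for illustration, not part of the notes' library. It follows the previous-vertex pointers back from t and writes the sequence out in forward order.

```c
#include <assert.h>

/* Hypothetical helper: given the parent array filled in during
 * relaxation (parent[source] == source, parent[v] == -1 if v was
 * never reached), write the vertices of the s->t path into out[]
 * in order and return the number of vertices on the path, or -1
 * if t is unreachable. */
int
extract_path(const int *parent, int s, int t, int *out)
{
    int len = 1;        /* count s itself */
    int i;
    int v;

    /* walk backwards from t to s, counting vertices */
    for(v = t; v != s; v = parent[v]) {
        if(parent[v] < 0) return -1;    /* t was never reached */
        len++;
    }

    /* fill in out[] in reverse, so the path reads s ... t */
    v = t;
    for(i = len - 1; i >= 0; i--) {
        out[i] = v;
        v = parent[v];
    }

    return len;
}
```

The caller must supply an out[] array with room for up to n vertices, since a shortest path can visit every vertex once.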
This raises the issue of how to relax all the edges on the shortest path in order if we don't know what the shortest path is. There are two ways to do this, depending on whether the graph contains negativeweight edges.
1.2. Dijkstra's algorithm
If the graph contains no negative-weight edges, we can apply the greedy method, relaxing at each step all the outgoing edges from the apparently closest vertex v that hasn't been processed yet; if this is in fact the closest vertex, we process all vertices in order of increasing distance and thus relax the edges of each shortest path in order. This method gives Dijkstra's algorithm for single-source shortest paths, one of the best and simplest algorithms for the problem. It requires a priority queue Q that provides an EXTRACT-MIN operation that deletes and returns the element v with smallest key, in this case the upper bound d(s,v) on the distance.
Dijkstra(G,w,s):
    Set d[s] = 0 and set d[v] = +infinity for all v != s.
    Add all the vertices to Q.
    while Q is not empty:
        u = EXTRACT-MIN(Q)
        for each edge uv:
            d[v] = min(d[v], d[u] + w(u,v))
    return d
The running time of Dijkstra's algorithm depends on the implementation of Q. The simplest implementation is just to keep around an array of all unprocessed vertices, and to carry out EXTRACT-MIN by performing a linear-time scan for one with the smallest d[u]. This gives a cost to EXTRACT-MIN of O(V), which is paid V times (once per vertex), for a total of O(V^{2}) time. The additional overhead of the algorithm takes O(V) time, except for the loop over outgoing edges from u, all of whose iterations together take O(E) time. So the total cost is O(V^{2} + E) = O(V^{2}). This can be improved for sparse graphs to O((V+E) log V) using a heap to represent Q (the extra log V on the E comes from the cost of moving elements within the heap when their distances drop), and it can be improved further to O(V log V + E) time using a Fibonacci heap.
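The simple O(V^{2}) array-scan variant can be sketched as follows. This is a minimal illustration, not the implementation used later in these notes: it assumes an adjacency-matrix representation with a small fixed bound, uses INT_MAX from limits.h in place of MAXINT, and the names are made up.

```c
#include <assert.h>
#include <limits.h>

#define GRAPH_MAX 16            /* assumed small fixed bound for the sketch */

/* O(V^2) Dijkstra on an adjacency matrix; w[u][v] == INT_MAX means no
 * edge. Assumes all edge weights are nonnegative. Fills dist[] with
 * distances from source (INT_MAX for unreachable vertices). */
void
dijkstra_matrix(int n, int w[GRAPH_MAX][GRAPH_MAX], int source, int *dist)
{
    int done[GRAPH_MAX] = {0};
    int i, u, v;

    for(i = 0; i < n; i++) dist[i] = INT_MAX;
    dist[source] = 0;

    for(i = 0; i < n; i++) {
        /* EXTRACT-MIN by linear scan over unprocessed vertices */
        u = -1;
        for(v = 0; v < n; v++) {
            if(!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
        }
        if(dist[u] == INT_MAX) break;   /* everything left is unreachable */
        done[u] = 1;

        /* relax all outgoing edges of u */
        for(v = 0; v < n; v++) {
            if(w[u][v] < INT_MAX && dist[u] + w[u][v] < dist[v]) {
                dist[v] = dist[u] + w[u][v];
            }
        }
    }
}
```

The linear scan is the O(V) EXTRACT-MIN described above; swapping it for a heap is what yields the O((V+E) log V) bound.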
Why does Dijkstra's algorithm work? Assuming there are no negative edge weights, there is a simple proof that d[u] = δ(s,u) for any vertex u that leaves the priority queue. The proof is by induction on the number of vertices that have left the queue, and requires a rather complicated induction hypothesis, which is that after each pass through the outer loop:

If u is any vertex not in the queue, then d[u] = δ(s,u).

If u is any vertex not in the queue, and v is any vertex in the queue, then δ(s,u) ≤ δ(s,v).

If u is any vertex not in the queue, and v is any vertex in the queue, then d[v] ≤ δ(s,u) + w_{uv} (where w_{uv} is taken to be +∞ if uv is not an edge in the graph).
This invariant looks ugly but what it says is actually pretty simple: after i steps, we have extracted the i closest vertices and correctly computed their distances, and for any other vertex v, d[v] is at most the length of the shortest path that consists only of non-queue vertices except for v. If the first two parts of the invariant hold, the third is immediate from the relaxation step in the algorithm. So we concentrate on proving the first two parts.
The base case is obtained by considering the state where s is the only vertex not in the queue; here we easily have d[s] = 0 = δ(s,s) ≤ δ(s,v) for any vertex v in the queue.
For the induction step, we'll assume that the invariant holds at the beginning of the loop, and show that it also holds at the end. For each v in the queue, d[v] is at most the length of the shortest s-v path that uses only vertices already processed. We'll show that the smallest d[v] is in fact equal to δ(s,v) and is no greater than δ(s,v') for any v' in the queue. Consider any vertex v that hasn't been processed yet. Let t be the last vertex before v on some shortest s-v path that uses only previously processed vertices. From the invariant we have that d[v] ≤ δ(s,t) + w_{tv}. Now consider two cases:
The s-t-v path is a shortest path. Then d[v] = δ(s,v).

The s-t-v path is not a shortest path. Then d[v] > δ(s,v) and there is some shorter s-v path whose last vertex before v is t'. But this shorter s-t'-v path can only exist if t' is still in the queue. If we let q be the first vertex in some shortest s-t'-v path that is still in the queue, the s-q part of the path is a shortest path that uses only non-queue vertices. So d[q] = δ(s,q) ≤ δ(s,v) < d[v].
Let u be returned by EXTRACTMIN. If case 1 applies to u, part 1 of the invariant follows immediately. Case 2 cannot occur because in this case there would be some q with d[q] < d[u] and EXTRACTMIN would have returned q instead. We have thus proved that part 1 continues to hold.
For part 2, consider the two cases for some v≠u. In case 1, δ(s,v) = d[v] ≥ d[u] = δ(s,u). In case 2, δ(s,v) ≥ δ(s,q) = d[q] ≥ d[u] = δ(s,u). Thus in either case δ(s,v) ≥ δ(s,u) and part 2 of the invariant holds.
Part 3 of the invariant is immediate from the code.
To complete the proof of correctness, observe that the first part of the induction hypothesis implies that all distances are correct when the queue is empty.
1.3. Bellman-Ford
What if the graph contains negative edges? Then Dijkstra's algorithm may fail in the usual pattern of a misled greedy algorithm: the very last vertex v processed may have spectacularly negative edges leading to other vertices that would greatly reduce their distances, but they have already been processed and it's too late to take advantage of this amazing fact (more specifically, it's not too late for the immediate successors of v, but it's too late for any other vertex reachable from such a successor that is not itself a successor of v).
But we can still find shortest paths using the technique of relaxing every edge on a shortest path in sequence. The Bellman-Ford algorithm does so under the assumption that there are no negative-weight cycles in the graph, in which case all shortest paths are simple (they contain no duplicate vertices) and thus have at most V-1 edges in them. If we relax every edge, we are guaranteed to get the first edge of every shortest path; relaxing every edge again gets the second edge; and repeating this operation V-1 times gets all edges in order.
Bellman-Ford(G,s,w):
    Set d[s] = 0 and set d[v] = +infinity for all v != s.
    for i = 1 to V-1:
        for each edge uv in G:
            d[v] = min(d[v], d[u] + w(u,v))
    return d
The running time of Bellman-Ford is O(VE), which is generally slower than even the simple O(V^{2}) implementation of Dijkstra's algorithm; but it handles any edge weights, even negative ones.
What if a negative cycle exists? In this case, there may be no shortest paths; any short path that reaches a vertex on the cycle can be shortened further by taking a few extra loops around it. The Bellman-Ford algorithm can be used to detect such cycles by running the outer loop one more time: if d[v] drops for any v, then a negative cycle reachable from s exists. The converse is also true; intuitively, this is because further reductions in distance can only propagate around the negative cycle if there is some edge that can be relaxed further at each step. Section 24.1 contains a real proof.
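A minimal self-contained sketch of this detection idea, on an explicit edge list rather than the Graph structure used in the implementations later in these notes (the struct and function names are made up): run the relaxation loop up to V times, and report a negative cycle if the V-th pass still improves some distance.

```c
#include <assert.h>
#include <limits.h>

struct edge { int u, v, w; };   /* directed edge u -> v of weight w */

/* Bellman-Ford on an edge list of m edges over n vertices: fills
 * dist[] with distances from source (INT_MAX for unreachable
 * vertices) and returns 1 if some vertex is reachable from source
 * through a negative cycle, 0 otherwise. */
int
bellman_ford_edges(int n, const struct edge *e, int m, int source, int *dist)
{
    int i, round, improved = 1;

    for(i = 0; i < n; i++) dist[i] = INT_MAX;
    dist[source] = 0;

    /* n-1 passes suffice when there is no negative cycle; the loop
     * runs one extra pass so that a leftover improvement on pass n
     * witnesses a negative cycle */
    for(round = 0; improved && round < n; round++) {
        improved = 0;
        for(i = 0; i < m; i++) {
            if(dist[e[i].u] < INT_MAX
               && dist[e[i].u] + e[i].w < dist[e[i].v]) {
                dist[e[i].v] = dist[e[i].u] + e[i].w;
                improved = 1;
            }
        }
    }

    return improved;
}
```

Without a reachable negative cycle the distances stop improving after at most n-1 passes, so the loop exits with improved == 0; with one, every pass improves something and the function returns 1.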
There is a very simple algorithm known as Floyd-Warshall that computes the distance between all V^{2} pairs of vertices in Θ(V^{3}) time. This is no faster than running Dijkstra's algorithm V times, but it works even if some of the edge weights are negative.
Like any other dynamic programming algorithm, Floyd-Warshall starts with a recursive decomposition of the shortest-path problem. The basic idea is to cut the path in half, by expanding d(i,j) as min_{k} (d(i,k) + d(k,j)), but this requires considering n-2 intermediate vertices k and doesn't produce a smaller problem. There are a couple of ways to make the d(i,k) on the right-hand side "smaller" than the d(i,j) on the left-hand side (for example, we could add a third parameter that is the length of the path and insist that the subpaths on the right-hand side be half the length of the path on the left-hand side), but most of these approaches still require looking at Θ(n) intermediate vertices. The trick used by Floyd-Warshall is to make the third parameter be the largest vertex that can be used in the path. This allows us to consider only one new intermediate vertex each time we increase this limit.
Define d(i,j,k) as the length of the shortest i-j path that uses only intermediate vertices with indices less than or equal to k. Then

d(i,j,0) = w_{ij}, d(i,j,k) = min(d(i,j,k-1), d(i,k,k-1) + d(k,j,k-1)).
The reason this decomposition works (for any graph that does not contain a negative-weight cycle) is that every shortest i-j path with no intermediate vertex greater than k either includes k exactly once (the second case) or not at all (the first case). The nice thing about this decomposition is that we only have to consider two values in the minimum, so we can evaluate d(i,j,k) in O(1) time if we already have d(i,j,k-1), d(i,k,k-1), and d(k,j,k-1) in our table. The natural way to guarantee this is to build the table in order of increasing k. We assume that the input is given as an array of edge weights with +∞ for missing edges; the algorithm's speed is not improved by using an adjacency-list representation of the graph.
Floyd-Warshall(w):
    // initialize first plane of table
    for i = 1 to V do
        for j = 1 to V do
            d[i,j,0] = w[i,j]
    // fill in the rest
    for k = 1 to V do
        for i = 1 to V do
            for j = 1 to V do
                d[i,j,k] = min(d[i,j,k-1], d[i,k,k-1] + d[k,j,k-1])
    // pull out the distances where all vertices on the path are <= V
    // (i.e. with no restrictions)
    return d' where d'[i,j] = d[i,j,V]
The running time of this algorithm is easily seen to be Θ(V^{3}). As with Bellman-Ford, its output is guaranteed to be correct only if the graph does not contain a negative cycle; if the graph does contain a negative cycle, it can be detected by looking for vertices with d'[i,i] < 0.
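A sketch of this check (the fixed array bound, the flat-matrix representation, and the function name are assumptions for illustration): run the Floyd-Warshall loops in place, then inspect the diagonal.

```c
#include <assert.h>
#include <limits.h>

#define FW_MAX 8    /* assumed small fixed bound for the sketch */

/* Run Floyd-Warshall in place on d (INT_MAX = no edge, d[i][i] = 0
 * initially), then report whether some vertex lies on a negative
 * cycle, witnessed by a negative diagonal entry. */
int
floyd_warshall_has_negative_cycle(int n, int d[FW_MAX][FW_MAX])
{
    int i, j, k;

    for(k = 0; k < n; k++) {
        for(i = 0; i < n; i++) {
            if(d[i][k] == INT_MAX) continue;    /* can't reach k from i */
            for(j = 0; j < n; j++) {
                if(d[k][j] == INT_MAX) continue; /* can't reach j from k */
                if(d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
            }
        }
    }

    /* a negative d[i][i] means a negative-weight cycle through i */
    for(i = 0; i < n; i++)
        if(d[i][i] < 0) return 1;
    return 0;
}
```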
Below are implementations of Bellman-Ford, Floyd-Warshall, and Dijkstra's algorithm (in a separate file). The Dijkstra's algorithm implementation uses the generic priority queue from the sample solutions. Both files use an extended version of the Graph structure that supports weights.
Here are the actual implementations:
/* various algorithms for shortest paths */

#define SHORTEST_PATH_NULL_PARENT (-1)

/* Computes distance of each node from starting node */
/* and stores results in dist (length n, allocated by the caller) */
/* unreachable nodes get distance MAXINT */
/* If parent argument is non-null, also stores parent pointers in parent */
/* Allows negative-weight edges and runs in O(nm) time. */
/* returns 1 if there is a negative cycle, 0 otherwise */
int bellman_ford(Graph g, int source, int *dist, int *parent);

/* computes all-pairs shortest paths using Floyd-Warshall given */
/* an adjacency matrix */
/* answer is returned in the provided matrix! */
/* assumes matrix is n pointers to rows of n ints each */
void floyd_warshall(int n, int **matrix);
#include <stdlib.h>
#include <assert.h>
#include <values.h>

#include "graph.h"
#include "shortest_path.h"

/* data field for relax helper */
struct relax_data {
    int improved;
    int *dist;
    int *parent;
};

static void
relax(Graph g, int source, int sink, int weight, void *data)
{
    int len;
    struct relax_data *d;

    d = data;

    if(d->dist[source] < MAXINT && weight < MAXINT) {
        len = d->dist[source] + weight;

        if(len < d->dist[sink]) {
            d->dist[sink] = len;
            if(d->parent) d->parent[sink] = source;
            d->improved = 1;
        }
    }
}

/* returns 1 if there is a negative cycle */
int
bellman_ford(Graph g, int source, int *dist, int *parent)
{
    int round;
    int n;
    int i;
    struct relax_data d;

    assert(dist);

    d.dist = dist;
    d.parent = parent;
    d.improved = 1;

    n = graph_vertex_count(g);

    for(i = 0; i < n; i++) {
        d.dist[i] = MAXINT;
        if(d.parent) d.parent[i] = SHORTEST_PATH_NULL_PARENT;
    }

    d.dist[source] = 0;
    if(d.parent) d.parent[source] = source;

    for(round = 0; d.improved && round < n; round++) {
        d.improved = 0;

        /* relax all edges */
        for(i = 0; i < n; i++) {
            graph_foreach_weighted(g, i, relax, &d);
        }
    }

    return d.improved;
}

void
floyd_warshall(int n, int **d)
{
    int i;
    int j;
    int k;
    int newdist;

    /* The algorithm:
     *
     * d(i, j, k) = min distance from i to j with all intermediates <= k
     *
     * d(i, j, k) = min(d(i, j, k-1), d(i, k, k-1) + d(k, j, k-1))
     *
     * We will allow shorter paths to sneak in to d(i, j, k) so that
     * we don't have to store anything extra.
     */

    /* initial matrix is essentially d(:,:,-1) */
    /* within body of outermost loop we compute d(:,:,k) */
    for(k = 0; k < n; k++) {
        for(i = 0; i < n; i++) {
            /* skip if we can't get to k */
            if(d[i][k] == MAXINT) continue;
            for(j = 0; j < n; j++) {
                /* skip if we can't get from k */
                if(d[k][j] == MAXINT) continue;
                /* else */
                newdist = d[i][k] + d[k][j];
                if(newdist < d[i][j]) {
                    d[i][j] = newdist;
                }
            }
        }
    }
}
#define DIJKSTRA_NULL_PARENT (-1)

/* Computes distance of each node from starting node */
/* and stores results in dist (length n, allocated by the caller) */
/* unreachable nodes get distance MAXINT */
/* If parent argument is non-null, also stores parent pointers in parent */
/* Assumes no negative-weight edges */
/* Runs in O(n + m log m) time. */
/* Note: uses pq structure from pq.c */
void dijkstra(Graph g, int source, int *dist, int *parent);
#include <stdlib.h>
#include <assert.h>
#include <values.h>

#include "graph.h"
#include "pq.h"
#include "dijkstra.h"

/* internal edge representation for dijkstra */
struct pq_elt {
    int d;      /* distance to v */
    int u;      /* source */
    int v;      /* sink */
};

static int
pq_elt_cmp(const void *a, const void *b)
{
    return ((const struct pq_elt *) a)->d - ((const struct pq_elt *) b)->d;
}

struct push_data {
    PQ pq;
    int *dist;
};

static void
push(Graph g, int u, int v, int wt, void *data)
{
    struct push_data *d;
    struct pq_elt e;

    d = data;

    e.d = d->dist[u] + wt;
    e.u = u;
    e.v = v;

    pq_insert(d->pq, &e);
}

void
dijkstra(Graph g, int source, int *dist, int *parent)
{
    struct push_data data;
    struct pq_elt e;
    int n;
    int i;

    assert(dist);

    data.dist = dist;
    data.pq = pq_create(sizeof(struct pq_elt), pq_elt_cmp);
    assert(data.pq);

    n = graph_vertex_count(g);

    /* set up dist and parent arrays */
    for(i = 0; i < n; i++) {
        dist[i] = MAXINT;
    }

    if(parent) {
        for(i = 0; i < n; i++) {
            parent[i] = DIJKSTRA_NULL_PARENT;
        }
    }

    /* push a fake edge (source, source) of weight -MAXINT; */
    /* since dist[source] starts at MAXINT, this will get things */
    /* started with parent[source] == source and dist[source] == 0 */
    push(g, source, source, -MAXINT, &data);

    while(!pq_is_empty(data.pq)) {
        /* pull the min value out */
        pq_delete_min(data.pq, &e);

        /* did we reach the sink already? */
        if(dist[e.v] == MAXINT) {
            /* no, it's a new edge */
            dist[e.v] = e.d;
            if(parent) parent[e.v] = e.u;

            /* throw in the outgoing edges */
            graph_foreach_weighted(g, e.v, push, &data);
        }
    }

    pq_destroy(data.pq);
}
In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized.
The problem of finding the shortest path between two intersections on a road map (the graph's vertices correspond to intersections and the edges correspond to road segments, each weighted by the length of its road segment) may be modeled by a special case of the shortest path problem in graphs.
Definition
The shortest path problem can be defined for graphs whether undirected, directed, or mixed. It is defined here for undirected graphs; for directed graphs the definition of path requires that consecutive vertices be connected by an appropriate directed edge.
Two vertices are adjacent when they are both incident to a common edge. A path in an undirected graph is a sequence of vertices P = (v_1, v_2, …, v_n) ∈ V × V × ⋯ × V such that v_i is adjacent to v_{i+1} for 1 ≤ i < n. Such a path P is called a path of length n − 1 from v_1 to v_n. (The v_i are variables; their numbering here relates to their position in the sequence and need not relate to any canonical labeling of the vertices.)
Let e_{i,j} be the edge incident to both v_i and v_j. Given a weight function f : E → ℝ, and an undirected (simple) graph G, the shortest path from v to v′ is the path P = (v_1, v_2, …, v_n) (where v_1 = v and v_n = v′) that over all possible n minimizes the sum ∑_{i=1}^{n−1} f(e_{i,i+1}). When each edge in the graph has unit weight or f : E → {1}, this is equivalent to finding the path with fewest edges.
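In the unit-weight case, breadth-first search already computes fewest-edge paths, since it visits vertices in order of increasing hop count. A hedged sketch (the adjacency-matrix representation, fixed bound, and names are assumptions for illustration):

```c
#include <assert.h>

#define BFS_MAX 16  /* assumed small fixed bound for the sketch */

/* Breadth-first search on an unweighted graph given as a 0/1
 * adjacency matrix; fills hops[] with the minimum number of edges
 * from source (-1 for unreachable vertices). */
void
bfs_hops(int n, int adj[BFS_MAX][BFS_MAX], int source, int *hops)
{
    int queue[BFS_MAX];
    int head = 0, tail = 0;
    int u, v;

    for(u = 0; u < n; u++) hops[u] = -1;
    hops[source] = 0;
    queue[tail++] = source;

    while(head < tail) {
        u = queue[head++];
        for(v = 0; v < n; v++) {
            if(adj[u][v] && hops[v] == -1) {
                hops[v] = hops[u] + 1;  /* first visit = fewest edges */
                queue[tail++] = v;
            }
        }
    }
}
```

Each vertex enters the queue at most once, so this runs in O(V + E) time on an adjacency-list representation (O(V^2) here because of the matrix).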
The problem is also sometimes called the single-pair shortest path problem, to distinguish it from the following variations:
 The single-source shortest path problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph.
 The single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v. This can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph.
 The all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph.
These generalizations have significantly more efficient algorithms than the simplistic approach of running a singlepair shortest path algorithm on all relevant pairs of vertices.
Algorithms
The most important algorithms for solving this problem are:

 Dijkstra's algorithm solves the single-source shortest path problem with non-negative edge weights.
 The Bellman–Ford algorithm solves the single-source problem if edge weights may be negative.
 The A* search algorithm solves for single pair shortest path using heuristics to try to speed up the search.
 The Floyd–Warshall algorithm solves all pairs shortest paths.
 Johnson's algorithm solves all pairs shortest paths, and may be faster than Floyd–Warshall on sparse graphs.
 The Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node.
Additional algorithms and associated evaluations may be found in.
Single-source shortest paths
Undirected graphs
Unweighted graphs
Directed acyclic graphs
An algorithm using topological sorting can solve the single-source shortest path problem in linear time, Θ(E + V), in weighted DAGs.
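A sketch of that approach (the names, fixed bound, and adjacency-matrix representation are assumptions): compute a topological order, then relax each vertex's outgoing edges once in that order. Negative edge weights are fine here, because a DAG has no cycles at all, let alone negative ones.

```c
#include <assert.h>
#include <limits.h>

#define DAG_MAX 16  /* assumed small fixed bound for the sketch */

/* Single-source shortest paths in a weighted DAG: compute a
 * topological order with Kahn's algorithm, then relax each vertex's
 * outgoing edges once, in that order. w[u][v] == INT_MAX means no
 * edge; dist[] gets INT_MAX for unreachable vertices. Linear in
 * V + E on adjacency lists; O(V^2) here only because of the matrix. */
void
dag_shortest_paths(int n, int w[DAG_MAX][DAG_MAX], int source, int *dist)
{
    int indegree[DAG_MAX] = {0};
    int order[DAG_MAX];
    int head = 0, tail = 0;
    int i, u, v;

    /* Kahn's algorithm: repeatedly remove a vertex with indegree 0 */
    for(u = 0; u < n; u++)
        for(v = 0; v < n; v++)
            if(w[u][v] < INT_MAX) indegree[v]++;
    for(u = 0; u < n; u++)
        if(indegree[u] == 0) order[tail++] = u;
    while(head < tail) {
        u = order[head++];
        for(v = 0; v < n; v++)
            if(w[u][v] < INT_MAX && --indegree[v] == 0) order[tail++] = v;
    }

    /* one pass of relaxations in topological order */
    for(i = 0; i < n; i++) dist[i] = INT_MAX;
    dist[source] = 0;
    for(i = 0; i < n; i++) {
        u = order[i];
        if(dist[u] == INT_MAX) continue;    /* not reached from source */
        for(v = 0; v < n; v++)
            if(w[u][v] < INT_MAX && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
}
```

A single pass suffices because every shortest path visits its vertices in topological order, so each of its edges is relaxed after all earlier ones.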
Directed graphs with nonnegative weights
The following table is taken from. A green background indicates an asymptotically best bound in the table; L is the maximum length (or weight) among all edges.
Algorithm  Time complexity  Author

  O(V^{2}EL)
Bellman–Ford algorithm  O(VE)
  O(V^{2} log V)  Minty
Dijkstra's algorithm with list  O(V^{2})
Dijkstra's algorithm with modified binary heap  O((E + V) log V)
...  ...  ...
Dijkstra's algorithm with Fibonacci heap  O(E + V log V)
  O(E log log L)
  O(E log_{E/V} L)
  O(E + V √log L)
Thorup  O(E + V log log V)
Planar directed graphs with nonnegative weights
Directed graphs with arbitrary weights without negative cycles
Planar directed graphs with arbitrary weights
All-pairs shortest paths
The all-pairs shortest path problem finds the shortest paths between every pair of vertices v, v' in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced by, who observed that it could be solved by a linear number of matrix multiplications that takes a total time of O(V^{4}).
Undirected graph

Weights  Time complexity  Algorithm

ℝ_{+}  O(V^{3})
{+1, +∞}  O(V^{ω} log V)  (expected running time)
ℕ  O(V^{3}/2^{Ω(log n)^{1/2}})
ℝ_{+}  O(EV log α(E,V))
ℕ  O(EV)  (requires constant-time multiplication)
Directed graph

Weights  Time complexity  Algorithm

ℝ (no negative cycles)  O(V^{3})
ℕ  O(V^{3}/2^{Ω(log n)^{1/2}})
ℝ (no negative cycles)  O(EV + V^{2} log V)
ℝ (no negative cycles)  O(EV + V^{2} log log V)
ℕ  O(EV + V^{2} log log V)
Applications
Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites. For this application fast specialized algorithms are available.
If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.
In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem and is usually tied with a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or widest shortest (min-delay) path.
A more lighthearted application is the games of "six degrees of separation" that try to find the shortest path in graphs like movie stars appearing in the same film.
Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design.
Road networks
A road network can be considered as a graph with positive weights. The nodes represent road junctions and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long distance travel (e.g. highways). This property has been formalized using the notion of highway dimension. There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path a lot quicker than would be possible on general graphs.
All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase. In this phase, source and target node are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network.
The algorithm with the fastest known query time is called hub labeling and is able to compute shortest path on the road networks of Europe or the USA in a fraction of a microsecond.
Related problems
For shortest path problems in computational geometry, see Euclidean shortest path.
The travelling salesman problem is the problem of finding the shortest path that goes through every vertex exactly once, and returns to the start. Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, the travelling salesman problem is NP-complete and, as such, is believed not to be efficiently solvable for large sets of data. The problem of finding the longest path in a graph is also NP-complete.
The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph isn't completely known to the mover, changes over time, or where actions (traversals) are probabilistic.
The shortest multiple disconnected path is a representation of the primitive path network within the framework of reptation theory.
The widest path problem seeks a path so that the minimum label of any edge is as large as possible.
Strategic shortest paths
Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network, in which each edge is a computer that possibly belongs to a different person. Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission time. But the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to use a variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights.
Linear programming formulation
There is a natural linear programming formulation for the shortest path problem, given below. It is very simple compared to most other uses of linear programs in combinatorial optimization; however, it illustrates connections to other concepts.
Given a directed graph (V, A) with source node s, target node t, and cost w_{ij} for each edge (i, j) in A, consider the program with variables x_{ij}
 minimize ∑_{ij∈A} w_{ij} x_{ij} subject to x ≥ 0 and, for all i, ∑_{j} x_{ij} − ∑_{j} x_{ji} = 1 if i = s; −1 if i = t; 0 otherwise.
The intuition behind this is that x_{ij} is an indicator variable for whether edge (i, j) is part of the shortest path: 1 when it is, and 0 if it is not. We wish to select the set of edges with minimal weight, subject to the constraint that this set forms a path from s to t (represented by the equality constraint: for all vertices except s and t the number of incoming and outgoing edges that are part of the path must be the same).
This LP has the special property that it is integral; more specifically, every basic optimal solution (when one exists) has all variables equal to 0 or 1, and the set of edges whose variables equal 1 form an s-t path. See Ahuja et al. for one proof, although the origin of this approach dates back to the mid-20th century.
The dual for this linear program is
 maximize y_{t} − y_{s} subject to for all ij, y_{j} − y_{i} ≤ w_{ij}
and feasible duals correspond to the concept of a consistent heuristic for the A* algorithm for shortest paths. For any feasible dual y, the reduced costs w′_{ij} = w_{ij} − y_j + y_i are nonnegative, and A* essentially runs on these reduced costs.
General algebraic framework on semirings: the algebraic path problem
Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. The general approach to these is to consider the two operations to be those of a semiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as the algebraic path problem.
Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures.
More recently, an even more general framework for solving these (and much less obviously related) problems has been developed under the banner of valuation algebras.
Shortest path in stochastic time-dependent networks
In real-life situations, the transportation network is usually stochastic and time-dependent. A traveler traversing a link daily may experience different travel times on that link due not only to fluctuations in travel demand (the origin-destination matrix) but also to incidents such as work zones, bad weather, accidents, and vehicle breakdowns. As a result, a stochastic time-dependent (STD) network is a more realistic representation of an actual road network than a deterministic one.
Despite considerable progress during the past decade, it remains a controversial question how an optimal path should be defined and identified in stochastic road networks; there is no unique definition of an optimal path under uncertainty. One common answer is to find a path with the minimum expected travel time. The main advantage of this approach is that efficient shortest path algorithms developed for deterministic networks can be readily employed to identify such a path in a stochastic network. However, the resulting path may not be reliable, because this approach fails to address travel time variability. To tackle this issue, some researchers use the distribution of travel time rather than its expected value, finding the probability distribution of the total travel time using various optimization methods. These methods use stochastic dynamic programming to find the shortest path in networks with probabilistic arc lengths. Note that the concept of travel time reliability is used interchangeably with travel time variability in the transportation research literature: in general, the higher the variability in travel time, the lower the reliability, and vice versa.
To account for travel time reliability more accurately, two common alternative definitions of an optimal path under uncertainty have been suggested. Some have introduced the concept of the most reliable path, aiming to maximize the probability of arriving on time or earlier given a travel time budget. Others have put forward the concept of the α-reliable path, which minimizes the travel time budget required to ensure a prespecified on-time arrival probability.
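A minimal Monte Carlo sketch of the distinction (the two travel time distributions and the budget are invented for illustration): the path with the smaller expected travel time need not be the most reliable one.

```python
# Two hypothetical paths: B is faster on average, A is less variable.
import random
random.seed(42)

N = 200_000
budget = 12.0   # travel time budget for "arriving on time"

a = [random.gauss(10.0, 1.0) for _ in range(N)]  # path A: mean 10, sd 1
b = [random.gauss(9.0, 4.0) for _ in range(N)]   # path B: mean 9, sd 4

mean_a, mean_b = sum(a) / N, sum(b) / N
p_a = sum(t <= budget for t in a) / N   # on-time probability of A
p_b = sum(t <= budget for t in b) / N

# B minimizes expected travel time, yet A is the most reliable path.
print(mean_a, mean_b, p_a, p_b)
```

Here minimizing expected travel time picks B, while maximizing the on-time arrival probability (the most reliable path) picks A, which is exactly the tension the two definitions above resolve differently.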
See also
References
Notes
 (March 23, 2009).. Google Tech Talk.
 Chen, Danny Z. (December 1996). "Developing algorithms and software for geometric path planning problems". ACM Computing Surveys. 28 (4es): 18. :.
 Abraham, Ittai; Fiat, Amos; ; Werneck, Renato F.. ACMSIAM Symposium on Discrete Algorithms, pages 782–793, 2010.
 Abraham, Ittai; Delling, Daniel; ; Werneck, Renato F.. Symposium on Experimental Algorithms, pages 230–241, 2011.
 Kroger, Martin (2005). "Shortest multiple disconnected path for the analysis of entanglements in two and threedimensional polymeric systems". Computer Physics Communications. 168 (168): 209–232. :.
 ; ; (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall. .
 Baras, John; Theodorakopoulos, George (4 April 2010).. Morgan & Claypool Publishers. pp. 9–. .
 (January 2002). (PDF). . 7 (3): 321–350.
 Gondran, Michel; Minoux, Michel (2008). Graphs, Dioids and Semirings: New Models and Algorithms. Springer Science & Business Media. chapter 4. .
 Pouly, Marc; Kohlas, Jürg (2011). Generic Inference: A Unifying Theory for Automated Reasoning. John Wiley & Sons. Chapter 6. Valuation Algebras for Path Problems. .
 Loui, R. P. (1983). "Optimal paths in graphs with stochastic or multidimensional weights". Communications of the ACM. 26 (9): 670–676.
 RajabiBahaabadi, Mojtaba; ShariatMohaymany, Afshin; Babaei, Mohsen; Ahn, Chang Wook (2015). "Multiobjective path finding in stochastic timedependent road networks using nondominated sorting genetic algorithm". Expert Systems with Applications. 42 (12): 5056–5064. :.
 Olya, Mohammad Hessam (2014). "Finding shortest path in a combined exponential – gamma probability distribution arc length". International Journal of Operational Research. 21 (1): 25–37. :.
 Olya, Mohammad Hessam (2014). "Applying Dijkstra's algorithm for general shortest path problem with normal probability distribution arc length". International Journal of Operational Research. 21 (2): 143–154. :.
Bibliography
 Ahuja, Ravindra K.; Mehlhorn, Kurt; Orlin, James; (April 1990).. Journal of the ACM. ACM. 37 (2): 213–223. :.
 (1958). "On a routing problem". Quarterly of Applied Mathematics. 16: 87–90. .
 Cherkassky, Boris V.; ; Radzik, Tomasz (1996).. Mathematical Programming. Ser. A. 73 (2): 129–174. :. .
 ; ; ; (2001) [1990]. "SingleSource Shortest Paths and AllPairs Shortest Paths". (2nd ed.). MIT Press and McGrawHill. pp. 580–642. .
 Dantzig, G. B. (January 1960). "On the Shortest Route through a Network". Management Science. 6 (2): 187–190. :.
 (1959). (PDF). Numerische Mathematik. 1: 269–271. :.
 Ford, L. R. (1956).. Rand Corporation. P923.
 ; (1984). Fibonacci heaps and their uses in improved network optimization algorithms. 25th Annual Symposium on Foundations of Computer Science.. pp. 338–346. :. .
 ; (1987).. Journal of the Association for Computing Machinery. 34 (3): 596–615. :.
 Gabow, H. N. (1983). "Scaling algorithms for network problems". (PDF). pp. 248–258. :.
 Gabow, Harold N. (1985). "Scaling algorithms for network problems". . 31 (2): 148–168. :. .
 Hagerup, Torben (2000). Montanari, Ugo; Rolim, José D. P.; Welzl, Emo, eds.. Proceedings of the 27th International Colloquium on Automata, Languages and Programming. pp. 61–72. .
 (December 1981). "A priority queue in which initialization and queue operations take O(log log D) time". Mathematical Systems Theory. 15 (1): 295–309. :. .
 Karlsson, Rolf G.; Poblete, Patricio V. (1983). "An O(m log log D) algorithm for shortest paths". . 6 (1): 91–93. :. .
 Leyzorek, M.; Gray, R. S.; Johnson, A. A.; Ladew, W. C.; Meaker, S. R., Jr.; Petry, R. M.; Seitz, R. N. (1957). Investigation of Model Techniques — First Annual Report — 6 June 1956 — 1 July 1957 — A Study of Model Techniques for Communication Systems. Cleveland, Ohio: Case Institute of Technology.
 (1959). "The shortest path through a maze". Proceedings of an International Symposium on the Theory of Switching (Cambridge, Massachusetts, 2–5 April 1957). Cambridge: Harvard University Press. pp. 285–292.
 Pettie, Seth; Ramachandran, Vijaya (2002).. Proceedings of the thirteenth annual ACMSIAM symposium on Discrete algorithms. pp. 267–276. .
 Pettie, Seth (26 January 2004). "A new approach to allpairs shortest paths on realweighted graphs". Theoretical Computer Science. 312 (1): 47–74. :.
 Pollack, M.; Wiebenson, W. (March–April 1960). "Solution of the ShortestRoute Problem—A Review". Op. Res. 8 (2): 224–230. :.
 Schrijver, Alexander (2004). Combinatorial Optimization — Polyhedra and Efficiency. Algorithms and Combinatorics. 24. Springer. . Here: vol.A, sect.7.5b, p. 103
 Shimbel, Alfonso (1953). "Structural parameters of communication networks". Bulletin of Mathematical Biophysics. 15 (4): 501–507. :.
 Thorup, Mikkel (1999).. Journal of the ACM. 46 (3): 362–394. :. Retrieved 28 November 2014.
 Thorup, Mikkel (2004).. Journal of Computer and System Sciences. 69 (3): 330–353. :.
 Whiting, P. D.; Hillier, J. A. (March–June 1960). "A Method for Finding the Shortest Route through a Road Network". Operational Research Quarterly. 11 (1/2): 37–40. :.
 (2014). "Faster allpairs shortest paths via circuit complexity". . New York: ACM. pp. 664–673. : . :. .
Further reading
MST solves the problem of finding a minimum total weight subset of edges that spans all the vertices. Another common graph problem is to find the shortest paths to all reachable vertices from a given source. We have already seen how to solve this problem when all the edges have the same weight (in which case the shortest path is simply the one with the minimum number of edges) using BFS. Now we will examine two algorithms for finding single source shortest paths in directed graphs when the edges have different weights: the Bellman-Ford and Dijkstra's algorithms.
Bellman-Ford Algorithm
The Bellman-Ford algorithm uses relaxation to find single source shortest paths in directed graphs that may contain negative weight edges. The algorithm also detects whether there are any negative weight cycles (in which case no solution exists).
BELLMAN-FORD(G,w,s)
1. INITIALIZE-SINGLE-SOURCE(G,s)
2. for i = 1 to |G.V| − 1
3.     for each edge (u,v) ∈ G.E
4.         RELAX(u,v,w)
5. for each edge (u,v) ∈ G.E
6.     if v.d > u.d + w(u,v)
7.         return FALSE
8. return TRUE

INITIALIZE-SINGLE-SOURCE(G,s)
1. for each vertex v ∈ G.V
2.     v.d = ∞
3.     v.π = NIL
4. s.d = 0

RELAX(u,v,w)
1. if v.d > u.d + w(u,v)
2.     v.d = u.d + w(u,v)
3.     v.π = u
Basically the algorithm works as follows:
 Initialize d's, π's, and set s.d = 0 ⇒ O(V)
 Loop V−1 times through all edges, checking the relaxation condition to compute minimum distances ⇒ (V−1) · O(E) = O(VE)
 Loop through all edges checking for negative weight cycles which occurs if any of the relaxation conditions fail ⇒ O(E)
The run time of the Bellman-Ford algorithm is O(V + VE + E) = O(VE).
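The pseudocode above transcribes directly into Python; the edge-list representation and the sample graph (which contains negative edges but no negative cycle) are my own choices:

```python
# Bellman-Ford on an edge list; vertices are numbered 0..n-1.
def bellman_ford(n, edges, s):
    """edges: list of (u, v, w) triples.
    Returns (ok, d, pi); ok is False iff a negative cycle is reachable."""
    d = [float('inf')] * n
    pi = [None] * n
    d[s] = 0
    for _ in range(n - 1):              # |V| - 1 relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:         # RELAX(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:               # negative weight cycle check
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi

ok, d, pi = bellman_ford(5, [(0, 1, 6), (0, 2, 7), (1, 2, 8), (1, 3, 5),
                             (3, 1, -2), (2, 3, -3), (2, 4, 9)], 0)
print(ok, d)   # True, [0, 2, 7, 4, 16]
```

Adding an edge that closes a negative-weight cycle would make the final pass find a still-relaxable edge, so the function would return False instead.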
Note that if the graph is a DAG (and thus is known to have no cycles), we can make Bellman-Ford more efficient by first topologically sorting G (O(V+E)), performing the same initialization (O(V)), and then simply looping through each vertex u in topological order, relaxing only the edges in Adj[u] (O(E)). This method takes only O(V + E) time. This procedure (with a few slight modifications) is useful for finding critical paths in PERT charts.
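A sketch of this DAG method, assuming Kahn's algorithm for the topological sort; the example DAG is invented:

```python
# Single-source shortest paths in a DAG: topologically sort, then relax
# each vertex's outgoing edges once, in topological order.
from collections import deque

def dag_shortest_paths(n, adj, s):
    """adj[u] = list of (v, w) pairs. Assumes the graph is a DAG."""
    indeg = [0] * n
    for u in range(n):
        for v, _ in adj[u]:
            indeg[v] += 1
    queue = deque(u for u in range(n) if indeg[u] == 0)  # Kahn's algorithm
    topo = []
    while queue:
        u = queue.popleft()
        topo.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    d = [float('inf')] * n
    d[s] = 0
    for u in topo:                       # relax Adj[u] in topological order
        if d[u] == float('inf'):
            continue                     # u unreachable from s
        for v, w in adj[u]:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

adj = [[(1, 2), (2, 6)], [(2, 3), (3, 1)], [(2, -4)], []]
adj = [[(1, 2), (2, 6)], [(2, 3), (3, 1)], [(3, -4)], []]  # DAG edge lists
print(dag_shortest_paths(4, adj, 0))   # negative edges are fine in a DAG
```

Each vertex and edge is touched a constant number of times, giving the O(V + E) bound stated above.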
Example
Given the following directed graph
Using vertex 5 as the source (setting its distance to 0), we initialize all the other distances to ∞.
Iteration 1: Edges (u_{5},u_{2}) and (u_{5},u_{4}) relax, updating the distances to vertices 2 and 4
Iteration 2: Edges (u_{2},u_{1}), (u_{4},u_{2}) and (u_{4},u_{3}) relax, updating the distances to vertices 1, 2, and 3 respectively. Note edge (u_{4},u_{2}) finds a shorter path to vertex 2 by going through vertex 4
Iteration 3: Edge (u_{2},u_{1}) relaxes (since a shorter path to vertex 2 was found in the previous iteration), updating the distance to vertex 1
Iteration 4: No edges relax
The final shortest paths from vertex 5 have distances u_{1}.d = 6, u_{2}.d = 3, u_{3}.d = 3, u_{4}.d = 2.
Negative cycle checks: We now test the relaxation condition one additional time for each edge. If any relaxation condition still holds (i.e., some distance could still be decreased), then there exists a negative weight cycle in the graph.
v_{3}.d > u_{1}.d + w(1,3) ⇒ 3 ≯ 6 + 6 = 12 ✓
v_{4}.d > u_{1}.d + w(1,4) ⇒ 2 ≯ 6 + 3 = 9 ✓
v_{1}.d > u_{2}.d + w(2,1) ⇒ 6 ≯ 3 + 3 = 6 ✓
v_{4}.d > u_{3}.d + w(3,4) ⇒ 2 ≯ 3 + 2 = 5 ✓
v_{2}.d > u_{4}.d + w(4,2) ⇒ 3 ≯ 2 + 1 = 3 ✓
v_{3}.d > u_{4}.d + w(4,3) ⇒ 3 ≯ 2 + 1 = 3 ✓
v_{2}.d > u_{5}.d + w(5,2) ⇒ 3 ≯ 0 + 4 = 4 ✓
v_{4}.d > u_{5}.d + w(5,4) ⇒ 2 ≯ 0 + 2 = 2 ✓
Note that for the edges on the shortest paths the relaxation criteria gives equalities.
Additionally, the path to any reachable vertex can be found by starting at the vertex and following the π's back to the source. For example, starting at vertex 1, u_{1}.π = 2, u_{2}.π = 4, u_{4}.π = 5 ⇒ the shortest path to vertex 1 is {5,4,2,1}.
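This backtracking can be sketched in code: running Bellman-Ford on the edge weights listed in the negative cycle checks above and then following the π pointers from vertex 1 recovers the path {5,4,2,1}.

```python
# Compact Bellman-Ford (no cycle check needed here) plus path recovery.
def bellman_ford_pi(n, edges, s):
    d = {v: float('inf') for v in range(1, n + 1)}
    pi = {v: None for v in range(1, n + 1)}
    d[s] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v], pi[v] = d[u] + w, u
    return d, pi

# Edge weights taken from the negative cycle checks in the example.
edges = [(1, 3, 6), (1, 4, 3), (2, 1, 3), (3, 4, 2),
         (4, 2, 1), (4, 3, 1), (5, 2, 4), (5, 4, 2)]
d, pi = bellman_ford_pi(5, edges, 5)

path, v = [], 1
while v is not None:        # walk the pi pointers from vertex 1 back to s
    path.append(v)
    v = pi[v]
path.reverse()
print(path, d[1])           # [5, 4, 2, 1] with distance 6
```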