all_pairs_bellman_ford_path_length

all_pairs_bellman_ford_path_length(G, weight='weight')

Compute shortest path lengths between all nodes in a weighted graph.

Parameters:
G : NetworkX graph
weight : string or function (default="weight")

If this is a string, then edge weights will be accessed via the edge attribute with this key (that is, the weight of the edge joining u to v will be G.edges[u, v][weight]). If no such edge attribute exists, the weight of the edge is assumed to be one.

If this is a function, the weight of an edge is the value returned by the function. The function must accept exactly three positional arguments: the two endpoints of an edge and the dictionary of edge attributes for that edge. The function must return a number.
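For illustration, a minimal sketch of the callable form (the helper name double_weight is hypothetical): it doubles the stored attribute and falls back to 1 when an edge has no weight.

>>> def double_weight(u, v, d):
...     # u, v are the edge endpoints; d is the edge attribute dictionary
...     return 2 * d.get("weight", 1)
>>> G = nx.path_graph(4)  # edges carry no "weight" attribute, so each counts as 2
>>> dict(nx.all_pairs_bellman_ford_path_length(G, weight=double_weight))[0][3]
6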

Returns:
distance : iterator

(source, dictionary) iterator: each dictionary is keyed by target node, with the shortest path length to that target as the value.

Notes

Edge weight attributes must be numerical. Distances are calculated as sums of weighted edges traversed.

The dictionary returned only has keys for reachable node pairs.
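As a small sketch of the note above, with a disconnected graph the inner dictionaries simply omit unreachable targets:

>>> G = nx.Graph([(0, 1), (2, 3)])  # two separate components
>>> length = dict(nx.all_pairs_bellman_ford_path_length(G))
>>> length[0]
{0: 0, 1: 1}
>>> 3 in length[0]
False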

Examples

>>> G = nx.path_graph(5)
>>> length = dict(nx.all_pairs_bellman_ford_path_length(G))
>>> for node in [0, 1, 2, 3, 4]:
...     print(f"1 - {node}: {length[1][node]}")
1 - 0: 1
1 - 1: 0
1 - 2: 1
1 - 3: 2
1 - 4: 3
>>> length[3][2]
1
>>> length[2][2]
0

Additional backends implement this function

cugraph : GPU-accelerated backend.

Negative cycles are not yet supported; a NotImplementedError is raised if there are negative edge weights. Support for negative edge weights is planned. A callable weight argument is also not supported.

Additional parameters:
dtype : dtype or None, optional

The data type (np.float32, np.float64, or None) to use for the edge weights in the algorithm. If None, then dtype is determined by the edge values.
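A hedged sketch of selecting this backend through NetworkX's backend dispatch, assuming nx-cugraph is installed and a CUDA-capable GPU is available; dtype is passed through as a backend-specific keyword:

>>> import numpy as np
>>> G = nx.path_graph(5)
>>> length = dict(
...     nx.all_pairs_bellman_ford_path_length(G, backend="cugraph", dtype=np.float32)
... )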

graphblas : OpenMP-enabled sparse linear algebra backend.

Additional parameters:
chunksize : int or str, optional

Split the computation into chunks; the size may be given as a string (e.g. "10 MiB") or as a number of rows. Default: "10 MiB".
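A similar sketch for this backend, assuming graphblas-algorithms is installed; chunksize is passed through as a backend-specific keyword:

>>> G = nx.path_graph(5)
>>> length = dict(
...     nx.all_pairs_bellman_ford_path_length(G, backend="graphblas", chunksize="5 MiB")
... )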

parallel : Parallel backend for NetworkX algorithms

The parallel implementation first divides the nodes into chunks, builds a generator that lazily computes shortest path lengths for each node in a chunk, and then uses joblib's Parallel to run these computations across all available CPU cores.

Additional parameters:
get_chunks : str, function (default = "chunks")

A function that takes an iterable of all the nodes as input and returns an iterable of node chunks. By default, G.nodes is sliced into n chunks, where n is the number of CPU cores.
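A sketch of supplying a custom chunking function, assuming the nx-parallel backend is installed (the helper pairs_of_nodes is hypothetical):

>>> def pairs_of_nodes(nodes):
...     # split the node iterable into chunks of two nodes each
...     nodes = list(nodes)
...     return [nodes[i : i + 2] for i in range(0, len(nodes), 2)]
>>> G = nx.path_graph(6)
>>> length = dict(
...     nx.all_pairs_bellman_ford_path_length(
...         G, backend="parallel", get_chunks=pairs_of_nodes
...     )
... )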
