all_pairs_shortest_path_length

all_pairs_shortest_path_length(G, cutoff=None)

Computes the shortest path lengths between all nodes in G.

Parameters:
G : NetworkX graph
cutoff : integer, optional

Depth at which to stop the search. Only paths of length at most cutoff (i.e. paths containing <= cutoff + 1 nodes) are returned.

Returns:
lengths : iterator

(source, dictionary) iterator with dictionary keyed by target and shortest path length as the value.
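For example, the (source, dictionary) pairs can be consumed lazily without building a full dictionary (a minimal sketch; within each dictionary the targets appear in BFS order):

>>> for source, targets in nx.all_pairs_shortest_path_length(nx.path_graph(3)):
...     print(source, targets)
0 {0: 0, 1: 1, 2: 2}
1 {1: 0, 0: 1, 2: 1}
2 {2: 0, 1: 1, 0: 2}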

Notes

The returned iterator yields only reachable node pairs.
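For instance, in a graph with two connected components, targets from the other component do not appear (illustrative sketch):

>>> G2 = nx.Graph([(0, 1), (2, 3)])
>>> dict(nx.all_pairs_shortest_path_length(G2))[0]
{0: 0, 1: 1}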

Examples

>>> G = nx.path_graph(5)
>>> length = dict(nx.all_pairs_shortest_path_length(G))
>>> for node in [0, 1, 2, 3, 4]:
...     print(f"1 - {node}: {length[1][node]}")
1 - 0: 1
1 - 1: 0
1 - 2: 1
1 - 3: 2
1 - 4: 3
>>> length[3][2]
1
>>> length[2][2]
0

Use the cutoff keyword argument to include only paths of length at most cutoff:

>>> path_lengths = dict(nx.all_pairs_shortest_path_length(G, cutoff=2))
>>> path_lengths[1]  # node 4 is too far away to appear
{1: 0, 0: 1, 2: 1, 3: 2}
----

Additional backends implement this function

cugraph : GPU-accelerated backend.

graphblas : OpenMP-enabled sparse linear algebra backend.
Additional parameters:
chunksize : int or str, optional

Split the computation into chunks; the size may be given as a string (e.g. "10 MiB") or as a number of rows. Default is "10 MiB".
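A hedged sketch of passing this parameter through NetworkX's backend dispatch, assuming the graphblas backend (graphblas-algorithms) is installed; the chunksize value shown is purely illustrative:

>>> G = nx.path_graph(5)
>>> # Requires graphblas-algorithms; extra backend parameters are passed as keywords.
>>> lengths = dict(
...     nx.all_pairs_shortest_path_length(G, backend="graphblas", chunksize="4 MiB")
... )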

parallel : A networkx backend that uses joblib to run graph algorithms in parallel. Find nx-parallel's configuration guide here.

The parallel implementation first divides the nodes into chunks, creates a generator that lazily computes shortest path lengths for each node in a node_chunk, and then uses joblib's Parallel to execute these computations across n_jobs CPU cores.

Additional parameters:
get_chunks : str, function (default = "chunks")

A function that takes an iterable of all the nodes as input and returns an iterable of node_chunks. The default chunking slices G.nodes into n_jobs chunks.
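A minimal sketch of supplying a custom get_chunks callable, assuming the nx-parallel backend is installed; the one-chunk-per-node chunking is purely illustrative:

>>> def one_node_chunks(nodes):
...     # Hypothetical chunking: one chunk per node.
...     return [[n] for n in nodes]
>>> G = nx.path_graph(5)
>>> lengths = dict(
...     nx.all_pairs_shortest_path_length(G, backend="parallel", get_chunks=one_node_chunks)
... )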
