all_pairs_node_connectivity

all_pairs_node_connectivity(G, nbunch=None, cutoff=None)

Compute node connectivity between all pairs of nodes.

Pairwise or local node connectivity between two distinct and nonadjacent nodes is the minimum number of nodes that must be removed (minimum separating cutset) to disconnect them. By Menger’s theorem, this is equal to the number of node independent paths (paths that share no nodes other than source and target), which is what this function computes.

This algorithm is a fast approximation that gives a strict lower bound on the actual number of node independent paths between two nodes [1]. It works for both directed and undirected graphs.
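
For example, on a directed cycle every ordered pair of nodes is joined by exactly one directed path, so the reported connectivity is 1 for each pair. A minimal sketch (assuming the approximation module is imported as approx):

>>> import networkx as nx
>>> from networkx.algorithms import approximation as approx
>>> D = nx.cycle_graph(4, create_using=nx.DiGraph)  # edges 0->1->2->3->0
>>> approx.all_pairs_node_connectivity(D)[0][2]  # single directed path 0->1->2
1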

Parameters:
G : NetworkX graph
nbunch : container

Container of nodes. If provided, node connectivity will be computed only over pairs of nodes in nbunch.

cutoff : integer

Maximum node connectivity to consider. If None, the minimum degree of the source or target node is used as the cutoff for each pair of nodes. Default value None. A short usage sketch for nbunch and cutoff follows the Returns section below.

Returns:
K : dictionary

Dictionary, keyed by source and target, of pairwise node connectivity
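
As a rough usage sketch (the graph and node labels are illustrative, and the approximation module is assumed to be imported as approx): nbunch limits which nodes appear as keys of the returned dictionary, while cutoff caps the reported values.

>>> import networkx as nx
>>> from networkx.algorithms import approximation as approx
>>> G = nx.petersen_graph()  # 3-connected, so every pair has 3 independent paths
>>> K = approx.all_pairs_node_connectivity(G, nbunch=[0, 1, 2], cutoff=2)
>>> sorted(K)  # only nodes from nbunch appear as sources
[0, 1, 2]
>>> max(v for d in K.values() for v in d.values())  # values are capped at cutoff
2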

References

[1]

White, Douglas R., and Mark Newman. 2001. A Fast Algorithm for Node-Independent Paths. Santa Fe Institute Working Paper #01-07-035. http://eclectic.ss.uci.edu/~drwhite/working.pdf

Examples

A 3-node cycle with one extra node attached has connectivity 2 between all nodes in the cycle and connectivity 1 between the extra node and the rest:

>>> G = nx.cycle_graph(3)
>>> G.add_edge(2, 3)
>>> import pprint  # for nice dictionary formatting
>>> pprint.pprint(nx.all_pairs_node_connectivity(G))
{0: {1: 2, 2: 2, 3: 1},
 1: {0: 2, 2: 2, 3: 1},
 2: {0: 2, 1: 2, 3: 1},
 3: {0: 1, 1: 1, 2: 1}}
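
The returned mapping is nested by source and then target, so a single pair can be read directly (continuing with the graph G defined above):

>>> K = nx.all_pairs_node_connectivity(G)
>>> K[3][0]  # connectivity between the pendant node 3 and node 0
1
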
----

Additional backends implement this function

parallel : A networkx backend that uses joblib to run graph algorithms in parallel. Find nx-parallel's configuration guide here.

The parallel implementation first divides the list of all permutations (for directed graphs) or combinations (for undirected graphs) of nbunch into chunks, then creates a generator to lazily compute the local node connectivities for each chunk, and then employs joblib’s Parallel function to execute these computations in parallel across n_jobs CPU cores. At the end, the results are aggregated into a single dictionary and returned.
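
A rough sketch of this scheme, not nx-parallel's actual code (the helper names, the fixed chunk size, and the n_jobs handling below are illustrative assumptions):

import itertools
import networkx as nx
from joblib import Parallel, delayed
from networkx.algorithms import approximation as approx

def _all_pairs_node_connectivity_parallel(G, nbunch=None, cutoff=None, n_jobs=-1):
    # Hypothetical sketch of the backend's strategy, not its real implementation.
    nodes = list(G) if nbunch is None else list(nbunch)
    iter_func = itertools.permutations if G.is_directed() else itertools.combinations
    pairs = list(iter_func(nodes, 2))

    # Slice the pair list into chunks (the real backend sizes chunks per n_jobs).
    chunk_size = max(1, min(10, len(pairs)))
    chunks = [pairs[i : i + chunk_size] for i in range(0, len(pairs), chunk_size)]

    def _connectivity_for_chunk(chunk):
        # Approximate local node connectivity for every pair in one chunk.
        return [(u, v, approx.local_node_connectivity(G, u, v, cutoff=cutoff)) for u, v in chunk]

    # Run the chunks in parallel and merge the per-chunk results into one dict.
    results = Parallel(n_jobs=n_jobs)(delayed(_connectivity_for_chunk)(c) for c in chunks)
    K = {n: {} for n in nodes}
    for chunk_result in results:
        for u, v, k in chunk_result:
            K[u][v] = k
            if not G.is_directed():
                K[v][u] = k
    return K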

Additional parameters:
get_chunks : str, function (default = “chunks”)

A function that takes list(iter_func(nbunch, 2)) as input and returns an iterable pairs_chunks, where iter_func is permutations for directed graphs and combinations for undirected graphs. The default is to create chunks by slicing the list into n_jobs chunks, such that each chunk contains at most 10 pairs and at least 1. A sketch of supplying a custom get_chunks callable is shown below.
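
For instance, a custom get_chunks callable could be passed alongside backend="parallel" through NetworkX's backend dispatching (this assumes nx-parallel and joblib are installed; the chunking logic itself is purely illustrative):

>>> import networkx as nx
>>> from networkx.algorithms import approximation as approx
>>> G = nx.fast_gnp_random_graph(50, 0.2, seed=42)
>>> def get_chunks(pairs):
...     # Illustrative only: split the list of node pairs into 4 roughly equal chunks.
...     pairs = list(pairs)
...     step = max(1, len(pairs) // 4)
...     return [pairs[i : i + step] for i in range(0, len(pairs), step)]
...
>>> K = approx.all_pairs_node_connectivity(G, backend="parallel", get_chunks=get_chunks)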
