all_pairs_node_connectivity
- all_pairs_node_connectivity(G, nbunch=None, flow_func=None)
Compute node connectivity between all pairs of nodes of G.
- Parameters:
- G : NetworkX graph
Undirected graph.
- nbunch : container
Container of nodes. If provided, node connectivity will be computed only over pairs of nodes in nbunch.
- flow_func : function
A function for computing the maximum flow among a pair of nodes. The function has to accept at least three parameters: a Digraph, a source node, and a target node, and return a residual network that follows NetworkX conventions (see maximum_flow() for details). If flow_func is None, the default maximum flow function (edmonds_karp()) is used. See below for details. The choice of the default function may change from version to version and should not be relied on. Default value: None.
- Returns:
- all_pairs : dict
A dictionary with node connectivity between all pairs of nodes in G, or in nbunch if provided.
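A minimal usage sketch of the returned dictionary-of-dictionaries, using a small cycle graph where the expected connectivity is easy to verify by hand:

```python
import networkx as nx
from networkx.algorithms.connectivity import all_pairs_node_connectivity

# A 4-cycle: every pair of distinct nodes is joined by exactly
# 2 internally node-disjoint paths, so node connectivity is 2.
G = nx.cycle_graph(4)

K = all_pairs_node_connectivity(G)
# K is a dict of dicts: K[u][v] is the node connectivity between u and v.
assert K[0][2] == 2

# Restrict the computation to a subset of nodes with nbunch:
# only pairs drawn from [0, 1] are computed.
K_sub = all_pairs_node_connectivity(G, nbunch=[0, 1])
assert set(K_sub) == {0, 1}
```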
See also
local_node_connectivity()
edge_connectivity()
local_edge_connectivity()
maximum_flow()
edmonds_karp()
preflow_push()
shortest_augmenting_path()
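Any of the flow functions listed above can be supplied as flow_func. The sketch below swaps in shortest_augmenting_path for the default edmonds_karp; the connectivity values are identical, only the underlying flow computation changes:

```python
import networkx as nx
from networkx.algorithms.connectivity import all_pairs_node_connectivity
from networkx.algorithms.flow import shortest_augmenting_path

# The icosahedral graph is 5-regular and 5-connected, so the node
# connectivity between every pair of nodes is exactly 5.
G = nx.icosahedral_graph()

K = all_pairs_node_connectivity(G, flow_func=shortest_augmenting_path)
assert all(K[u][v] == 5 for u in K for v in K[u])
```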
Additional backends implement this function
- parallel : A networkx backend that uses joblib to run graph algorithms in parallel. Find nx-parallel’s configuration guide here.
The parallel implementation first divides the list of all permutations (for directed graphs) or combinations (for undirected graphs) of nbunch into chunks, creates a generator to lazily compute the local node connectivities for each chunk, and then employs joblib’s Parallel function to execute these computations in parallel across n_jobs CPU cores. At the end, the results are aggregated into a single dictionary and returned.
- Additional parameters:
- get_chunks : str, function (default = “chunks”)
A function that takes in list(iter_func(nbunch, 2)) as input and returns an iterable pairs_chunks, where iter_func is permutations for directed graphs and combinations for undirected graphs. The default is to slice the list into n_jobs chunks, such that each chunk has size at most 10 and at least 1.
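A sketch of a custom chunking function of the shape described above. The chunk_size value is an arbitrary illustration, and the final commented call assumes the nx-parallel backend is installed; only the pure-Python chunker itself is exercised here:

```python
from itertools import combinations

import networkx as nx

def get_chunks(pairs, chunk_size=4):
    """Yield successive chunks of at most chunk_size node pairs."""
    pairs = list(pairs)
    for i in range(0, len(pairs), chunk_size):
        yield pairs[i : i + chunk_size]

G = nx.cycle_graph(6)
# Undirected graph, so pairs come from combinations(nbunch, 2):
# C(6, 2) = 15 pairs, split into chunks of sizes [4, 4, 4, 3].
pairs = list(combinations(G, 2))
chunks = list(get_chunks(pairs))
assert [len(c) for c in chunks] == [4, 4, 4, 3]

# With nx-parallel installed, the chunker would be passed through
# the backend dispatch, e.g.:
# K = all_pairs_node_connectivity(G, backend="parallel", get_chunks=get_chunks)
```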