harmonic_centrality

harmonic_centrality(G, nbunch=None, distance=None, sources=None)

Compute harmonic centrality for nodes.

Harmonic centrality [1] of a node u is the sum of the reciprocals of the shortest-path distances from all other nodes to u:

\[C(u) = \sum_{v \neq u} \frac{1}{d(v, u)}\]

where d(v, u) is the shortest-path distance between v and u.

If sources is given as an argument, the returned harmonic centrality values are the sums of the reciprocals of the shortest-path distances from the nodes specified in sources to u, rather than from all other nodes to u.

Notice that higher values indicate higher centrality.
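As a minimal sketch of the definition, consider a three-node path graph (the expected outputs below are worked out by hand from the formula above; exact dictionary ordering may differ):

>>> import networkx as nx
>>> G = nx.path_graph(3)  # path 0 - 1 - 2
>>> nx.harmonic_centrality(G)  # e.g. C(1) = 1/d(0, 1) + 1/d(2, 1) = 1 + 1 = 2.0
{0: 1.5, 1: 2.0, 2: 1.5}
>>> nx.harmonic_centrality(G, sources=[0])  # only distances from node 0 contribute
{0: 0, 1: 1.0, 2: 0.5}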

Parameters:
G : graph

A NetworkX graph

nbunch : container (default: all nodes in G)

Container of nodes for which harmonic centrality values are calculated.

sources : container (default: all nodes in G)

Container of nodes v over which reciprocal distances are computed. Nodes not in G are silently ignored.

distance : edge attribute key, optional (default=None)

Use the specified edge attribute as the edge distance in shortest path calculations. If None, then each edge will have distance equal to 1.

Returns:
nodes : dictionary

Dictionary of nodes with harmonic centrality as the value.

Notes

If the ‘distance’ keyword is set to an edge attribute key, then the shortest-path length will be computed using Dijkstra’s algorithm with that edge attribute as the edge weight.
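For instance, a minimal weighted sketch (the output values are worked out by hand from the formula, so treat them as illustrative; dictionary ordering may differ):

>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_edge("a", "b", weight=2.0)
>>> G.add_edge("b", "c", weight=0.5)
>>> # C("a") = 1/d("b", "a") + 1/d("c", "a") = 1/2.0 + 1/2.5 = 0.9
>>> nx.harmonic_centrality(G, distance="weight")
{'a': 0.9, 'b': 2.5, 'c': 2.4}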

References

[1] Boldi, Paolo, and Sebastiano Vigna. “Axioms for centrality.” Internet Mathematics 10.3-4 (2014): 222-262.


Additional backends implement this function

parallel : A networkx backend that uses joblib to run graph algorithms in parallel. See the nx-parallel configuration guide for details.

The parallel computation is implemented by dividing the nodes into chunks and computing harmonic centrality for each chunk concurrently.

Additional parameters:
get_chunks : str, function (default = "chunks")

A function that takes the list of all nodes as input and returns an iterable of node chunks. By default, the nodes are sliced into n_jobs chunks, as in the sketch below.
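A hedged sketch, assuming the nx-parallel package is installed (which registers the “parallel” backend with NetworkX’s dispatch machinery); four_chunks is a hypothetical helper for illustration, not part of either library:

>>> import networkx as nx
>>> G = nx.fast_gnp_random_graph(100, 0.1, seed=42)
>>> centrality = nx.harmonic_centrality(G, backend="parallel")
>>> def four_chunks(nodes):  # hypothetical: split the node list into ~4 equal slices
...     size = max(1, -(-len(nodes) // 4))  # ceiling division
...     return [nodes[i : i + size] for i in range(0, len(nodes), size)]
>>> centrality = nx.harmonic_centrality(G, backend="parallel", get_chunks=four_chunks)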
