Backends and Configs#


Both the NetworkX backend and config systems are experimental. They let you execute an alternative backend implementation instead of NetworkX's pure-Python, dictionary-based implementation. Things will almost certainly change and break in future releases!


NetworkX utilizes a plugin-dispatch architecture, which means we can plug backends in and out with minimal code changes. A valid NetworkX backend specifies entry points named networkx.backends and, optionally, networkx.backend_info when it is installed (not imported). This allows NetworkX to dispatch (redirect) function calls to the backend so that execution flows to the designated backend implementation, similar to how plugging a charger into a socket redirects the electricity to your phone. This design enhances flexibility and integration, making NetworkX more adaptable and efficient.

There are three main ways to use a backend after the package is installed. You can set environment variables and run the exact same code you run for NetworkX. You can use a keyword argument backend=... with the NetworkX function. Or, you can convert the NetworkX Graph to a backend graph type and call a NetworkX function supported by that backend. Environment variables and backend keywords automatically convert your NetworkX Graph to the backend type. Manually converting it yourself allows you to use that same backend graph for more than one function call, reducing conversion time.

For example, you can set an environment variable before starting Python to request that all dispatchable functions automatically dispatch to the given backend:

NETWORKX_BACKEND_PRIORITY=parallel python my_script.py

or you can specify the backend as a kwarg:

nx.betweenness_centrality(G, k=10, backend="parallel")

or you can convert the NetworkX Graph object G into a Graph-like object specific to the backend and then pass that to the NetworkX function:

H = nx_parallel.ParallelGraph(G)
nx.betweenness_centrality(H, k=10)

How it works: You might have seen the @nx._dispatchable decorator on many of the NetworkX functions in the codebase. It wraps the function with code that redirects execution to the function's backend implementation and manages any backend_kwargs you provide for the backend version of the function. The dispatching code looks for the environment variable or a backend keyword argument and, if either is found, converts the input NetworkX graph to the backend format before calling the backend's version of the function. If neither is found, the dispatching code checks the input graph object for an attribute called __networkx_backend__, which names the backend that provides this graph type; that backend's version of the function is then called. The attribute __networkx_backend__ holds a string with the name of the backend's entry_point. The backend system relies on Python's entry_point mechanism to signal to NetworkX that a backend is installed (even if not yet imported), so no code needs to change between running with NetworkX and running with a backend. If none of these options is in use, the decorator simply calls the NetworkX function on the NetworkX graph as usual.

The NetworkX library does not need to know that a backend exists for it to work. So long as the backend package creates the entry_point, and provides the correct interface, it will be called when the user requests it using one of the three approaches described above. Some backends have been working with the NetworkX developers to ensure smooth operation. They are the following:

- graphblas
- cugraph
- parallel
- loopback (for testing purposes only; not a real backend)

Note the naming convention: the backend name is, e.g., parallel, the installed package is nx-parallel, and the package is imported as nx_parallel.

Creating a Custom backend#

  1. To be a valid backend that is discoverable by NetworkX, your package must register an entry point networkx.backends in the package's metadata, with a key pointing to your dispatch object. For example, if you are using setuptools to manage your backend package, you can add the following to your pyproject.toml file:

    [project.entry-points."networkx.backends"]
    backend_name = "your_dispatcher_class"

    You can also add the networkx.backend_info entry point. It points to the get_info function that returns all the backend information, which is then used to build the "Additional Backend Implementation" box at the end of each algorithm's documentation page (e.g. nx-cugraph's get_info function):

    [project.entry-points."networkx.backend_info"]
    backend_name = "your_get_info_function"

    Note that this would only work if your backend is a trusted backend of NetworkX, and is present in the circleci/config.yml and github/workflows/deploy-docs.yml files in the NetworkX repository.
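Concretely, both entry points together in a pyproject.toml might look like the fragment below. The package and module paths are hypothetical; note that entry-point values are dotted module paths (optionally with a :object suffix), not bare class names.

```toml
[project.entry-points."networkx.backends"]
backend_name = "your_package.interface:Dispatcher"

[project.entry-points."networkx.backend_info"]
backend_name = "your_package.interface:get_info"
```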

  2. The backend must create an nx.Graph-like object which contains an attribute __networkx_backend__ with a value of the entry point name:

    class BackendGraph:
        __networkx_backend__ = "backend_name"
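Putting the pieces together, a minimal backend skeleton could look like the sketch below. Only __networkx_backend__ (matching the entry-point name) and the convert_from_nx / convert_to_nx hooks follow the conventions described in this document; the class names, the adjacency representation, and the sample algorithm are illustrative assumptions.

```python
# Hypothetical skeleton of a backend package's dispatch interface.

class BackendGraph:
    """The Graph-like type this backend operates on."""

    __networkx_backend__ = "backend_name"

    def __init__(self, adjacency=None):
        self.adjacency = dict(adjacency or {})


class Dispatcher:
    """The object the networkx.backends entry point resolves to."""

    @staticmethod
    def convert_from_nx(G, *, edge_attrs=None, node_attrs=None,
                        preserve_edge_attrs=False, name=None,
                        graph_name=None, **kwargs):
        # A real backend would translate nodes, edges, and attributes here.
        return BackendGraph(getattr(G, "adj", {}))

    @staticmethod
    def convert_to_nx(result, *, name=None):
        # Plain results (numbers, dicts, ...) usually pass through unchanged.
        return result

    # Backend implementations of NetworkX algorithms go here, e.g.:
    @staticmethod
    def number_of_nodes(G):
        return len(G.adjacency)
```
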

Testing the Custom backend#

To test your custom backend, you can run the NetworkX test suite with your backend. This also ensures that the custom backend is compatible with NetworkX’s API.

Testing Environment Setup#

To enable automatic testing with your custom backend, follow these steps:

  1. Set Backend Environment Variables:
    • NETWORKX_TEST_BACKEND : Setting this to your registered backend key lets NetworkX's dispatch machinery automatically convert a regular NetworkX Graph, DiGraph, MultiGraph, etc. to its backend equivalent, using the your_dispatcher_class.convert_from_nx(G, ...) function.

    • NETWORKX_FALLBACK_TO_NX (default=False) : Setting this variable to True instructs tests to fall back to a NetworkX Graph for algorithms not implemented by your custom backend. With False, only the tests for algorithms implemented by your custom backend run; tests for other algorithms will xfail.

  2. Defining convert_from_nx and convert_to_nx methods:

    The arguments to convert_from_nx are:

    • G : NetworkX Graph

    • edge_attrs : dict, optional

      Dictionary mapping edge attributes to default values if missing in G. If None, then no edge attributes will be converted and default may be 1.

    • node_attrs : dict, optional

      Dictionary mapping node attributes to default values if missing in G. If None, then no node attributes will be converted.

    • preserve_edge_attrs : bool

      Whether to preserve all edge attributes.

    • preserve_node_attrs : bool

      Whether to preserve all node attributes.

    • preserve_graph_attrs : bool

      Whether to preserve all graph attributes.

    • preserve_all_attrs : bool

      Whether to preserve all graph, node, and edge attributes.

    • name : str

      The name of the algorithm.

    • graph_name : str

      The name of the graph argument being converted.
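The edge_attrs argument above drives which edge data survives conversion. As a sketch of what a backend's conversion code might do with it, the hypothetical helper below fills in defaults for attributes missing on an edge; the helper name and the dict-of-dicts adjacency representation are assumptions, not part of the NetworkX API.

```python
# Hypothetical conversion helper: apply edge_attrs defaults as described
# in the convert_from_nx argument list above.

def extract_edge_attrs(G_adj, edge_attrs):
    """Return {(u, v): {attr: value}}, filling defaults for missing attrs.

    G_adj is a dict-of-dicts adjacency mapping; edge_attrs maps attribute
    names to the default used when an edge lacks that attribute.
    """
    if edge_attrs is None:
        return {}  # no edge attributes are converted
    converted = {}
    for u, nbrs in G_adj.items():
        for v, data in nbrs.items():
            converted[(u, v)] = {
                attr: data.get(attr, default)
                for attr, default in edge_attrs.items()
            }
    return converted
```
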

Running Tests#

You can invoke the NetworkX test suite for your custom backend, with the environment variables described above, using:

NETWORKX_TEST_BACKEND=backend_name NETWORKX_FALLBACK_TO_NX=True pytest --pyargs networkx

Conversions while running tests:

  • Convert NetworkX graphs using <your_dispatcher_class>.convert_from_nx(G, ...) into the backend graph.

  • Pass the backend graph objects to the backend implementation of the algorithm.

  • Convert the result back to a form expected by NetworkX tests using <your_dispatcher_class>.convert_to_nx(result, ...).


  • Dispatchable algorithms that are not implemented by the backend will cause a pytest.xfail, giving some indication that not all tests are running, while avoiding causing an explicit failure.

  • If a backend only partially implements some algorithms, it can define a can_run(name, args, kwargs) function that returns True or False indicating whether it can run the algorithm with the given arguments. It may also return a string indicating why the algorithm can’t be run; this string may be used in the future to give helpful info to the user.

  • A backend may also define should_run(name, args, kwargs) that is similar to can_run, but answers whether the backend should be run (converting if necessary). Like can_run, it receives the original arguments so it can decide whether it should be run by inspecting the arguments. can_run runs before should_run, so should_run may assume can_run is True. If not implemented by the backend, can_run and should_run are assumed to always return True if the backend implements the algorithm.

  • A special on_start_tests(items) function may be defined by the backend. It will be called with the list of NetworkX tests discovered. Each item is a test object that can be marked as xfail if the backend does not support the test using item.add_marker(pytest.mark.xfail(reason=...)).

  • A backend graph instance may have a G.__networkx_cache__ dict to enable caching, and care should be taken to clear the cache when appropriate.
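The can_run / should_run pair described above can be sketched as follows. The algorithm names, the rejected keyword, and the size threshold are invented for illustration; a real backend would base these checks on what it actually implements.

```python
# Hypothetical can_run / should_run pair for a backend dispatcher.

class Dispatcher:
    # Algorithms this (imaginary) backend implements.
    ALGORITHMS = {"betweenness_centrality", "pagerank"}

    @staticmethod
    def can_run(name, args, kwargs):
        """Can the backend run this algorithm with these arguments?"""
        if name not in Dispatcher.ALGORITHMS:
            return False
        if name == "pagerank" and kwargs.get("dangling") is not None:
            # A string explains *why* this particular call can't run.
            return "dangling-node handling is not implemented"
        return True

    @staticmethod
    def should_run(name, args, kwargs):
        """Is running (and converting to) this backend worthwhile?

        can_run has already returned True by the time this is called,
        so only cost/benefit concerns need to be considered here.
        """
        G = args[0] if args else kwargs.get("G")
        # Decline tiny inputs where conversion overhead would dominate.
        return len(getattr(G, "adj", ())) >= 1000
```
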

Decorator: _dispatchable#

@_dispatchable(func=None, *, name=None, graphs='G', edge_attrs=None, node_attrs=None, preserve_edge_attrs=False, preserve_node_attrs=False, preserve_graph_attrs=False, preserve_all_attrs=False, mutates_input=False, returns_graph=False)[source]#

A decorator that makes certain input graph types dispatch to func’s backend implementation.

Usage can be any of the following decorator forms:

- @_dispatchable
- @_dispatchable()
- @_dispatchable(name="override_name")
- @_dispatchable(graphs="graph_var_name")
- @_dispatchable(edge_attrs="weight")
- @_dispatchable(graphs={"G": 0, "H": 1}, edge_attrs={"weight": "default"}), with 0 and 1 giving the positions in the function signature for the graph objects. When edge_attrs is a dict, keys are keyword names and values are defaults.

The class attributes are used to allow backends to run networkx tests. For example: PYTHONPATH=. pytest --backend graphblas --fallback-to-nx Future work: add configuration to control these.

func : callable, optional

The function to be decorated. If func is not provided, returns a partial object that can be used to decorate a function later. If func is provided, returns a new callable object that dispatches to a backend algorithm based on input graph types.

name : str, optional

The name of the algorithm to use for dispatching. If not provided, the name of func will be used. name is useful to avoid name conflicts, as all dispatched algorithms live in a single namespace. For example, tournament.is_strongly_connected had a name conflict with the standard nx.is_strongly_connected, so we used @_dispatchable(name="tournament_is_strongly_connected").

graphs : str or dict or None, default "G"

If a string, the parameter name of the graph, which must be the first argument of the wrapped function. If more than one graph is required for the algorithm (or if the graph is not the first argument), provide a dict of parameter name to argument position for each graph argument. For example, @_dispatchable(graphs={"G": 0, "auxiliary?": 4}) indicates the 0th parameter G of the function is a required graph, and the 4th parameter auxiliary is an optional graph. To indicate an argument is a list of graphs, do e.g. "[graphs]". Use graphs=None if no arguments are NetworkX graphs such as for graph generators, readers, and conversion functions.

edge_attrs : str or dict, optional

edge_attrs holds information about edge attribute arguments and default values for those edge attributes. If a string, edge_attrs holds the function argument name that indicates a single edge attribute to include in the converted graph. The default value for this attribute is 1. To indicate that an argument is a list of attributes (all with default value 1), use e.g. "[attrs]". If a dict, edge_attrs holds a dict keyed by argument names, with values that are either the default value or, if a string, the argument name that indicates the default value.

node_attrs : str or dict, optional

Like edge_attrs, but for node attributes.

preserve_edge_attrs : bool or str or dict, optional

For bool, whether to preserve all edge attributes. For str, the parameter name that may indicate (with True or a callable argument) whether all edge attributes should be preserved when converting. For dict of {graph_name: {attr: default}}, indicate pre-determined edge attributes (and defaults) to preserve for input graphs.

preserve_node_attrs : bool or str or dict, optional

Like preserve_edge_attrs, but for node attributes.

preserve_graph_attrs : bool or set

For bool, whether to preserve all graph attributes. For set, the names of input graph arguments for which to preserve graph attributes.


preserve_all_attrs : bool

Whether to preserve all edge, node, and graph attributes. This overrides all the other preserve_*_attrs.

mutates_input : bool or dict, default False

For bool, whether the function mutates an input graph argument. For dict of {arg_name: arg_pos}, the arguments that indicate whether an input graph will be mutated; arg_name may begin with "not " to negate the logic (for example, this is used by copy= arguments). By default, dispatching doesn't convert input graphs to a different backend for functions that mutate input graphs.

returns_graph : bool, default False

Whether the function can return or yield a graph object. By default, dispatching doesn’t convert input graphs to a different backend for functions that return graphs.



config#

alias of NetworkXConfig(backend_priority=[], backends=Config(parallel=Config(), graphblas=Config(), cugraph=Config()), cache_converted_graphs=False)

class NetworkXConfig(**kwargs)[source]#

Configuration for NetworkX that controls behaviors such as how to use backends.

Attribute and bracket notation are supported for getting and setting configurations:

>>> nx.config.backend_priority == nx.config["backend_priority"]
True

backend_priority : list of backend names

Enable automatic conversion of graphs to backend graphs for algorithms implemented by the backend. Priority is given to backends listed earlier. Default is empty list.

backends : Config mapping of backend names to backend Config

The keys of the Config mapping are names of all installed NetworkX backends, and the values are their configurations as Config mappings.


cache_converted_graphs : bool

If True, then save converted graphs to the cache of the input graph. Graph conversion may occur when automatically using a backend from backend_priority or when using the backend= keyword argument to a function call. Caching can improve performance by avoiding repeated conversions, but it uses more memory. Care should be taken to not manually mutate a graph that has cached graphs; for example, G[u][v][k] = val changes the graph, but does not clear the cache. Using methods such as G.add_edge(u, v, weight=val) will clear the cache to keep it consistent. G.__networkx_cache__.clear() manually clears the cache. Default is False.


Environment variables may be used to control some default configurations:

  • NETWORKX_BACKEND_PRIORITY: set backend_priority from comma-separated names.

  • NETWORKX_CACHE_CONVERTED_GRAPHS: set cache_converted_graphs to True if nonempty.

This is a global configuration. Use with caution when accessing it from multiple threads.
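For example, these defaults could be set in a shell before starting Python (the parallel backend name is illustrative):

```shell
# Prefer the "parallel" backend for dispatchable functions and cache
# converted graphs on the input NetworkX graphs.
export NETWORKX_BACKEND_PRIORITY=parallel
export NETWORKX_CACHE_CONVERTED_GRAPHS=true
```
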

class Config(**kwargs)[source]#

The base class for NetworkX configuration.

There are two ways to use this to create configurations. The first is to simply pass the initial configuration as keyword arguments to Config:

>>> cfg = Config(eggs=1, spam=5)
>>> cfg
Config(eggs=1, spam=5)

The second, and preferred, way is to subclass Config with docs and annotations.

>>> class MyConfig(Config):
...     '''Breakfast!'''
...     eggs: int
...     spam: int
...     def _check_config(self, key, value):
...         assert isinstance(value, int) and value >= 0
>>> cfg = MyConfig(eggs=1, spam=5)

Once defined, config items may be modified, but can’t be added or deleted by default. Config is a Mapping, and can get and set configs via attributes or brackets:

>>> cfg.eggs = 2
>>> cfg.eggs
2
>>> cfg["spam"] = 42
>>> cfg["spam"]
42

Subclasses may also define _check_config (as done in the example above) to ensure the value being assigned is valid:

>>> cfg.spam = -1
Traceback (most recent call last):
    ...
AssertionError

If a more flexible configuration object is needed that allows adding and deleting configurations, then pass strict=False when defining the subclass:

>>> class FlexibleConfig(Config, strict=False):
...     default_greeting: str = "Hello"
>>> flexcfg = FlexibleConfig()
>>> = "Mr. Anderson"
>>> flexcfg
FlexibleConfig(default_greeting='Hello', name='Mr. Anderson')