Backends and Configs#

Backends let you execute algorithms with an alternative implementation in place of NetworkX’s pure-Python, dictionary-based implementation. Configs provide library-level storage of configuration settings that can also come from environment variables.

Note

NetworkX backend and configuration systems are receiving frequent updates and improvements. The user interface for using backends is generally stable. In the unlikely case where compatibility-breaking changes are necessary to the backend or config APIs, the standard deprecation policy of NetworkX may not be followed. This flexibility is intended to allow us to respond rapidly to user feedback and improve usability, and care will be taken to avoid unnecessary disruption. Developers of NetworkX backends should regularly monitor updates to maintain compatibility. Participating in weekly NX-dispatch meetings is an excellent way to stay updated and contribute to the ongoing discussions.

Backends#

Docs for backend users#

NetworkX utilizes a plugin-dispatch architecture, which means backends can be plugged in and out with minimal code changes. A valid NetworkX backend specifies entry points named networkx.backends and, optionally, networkx.backend_info when it is installed (not imported). This allows NetworkX to dispatch (redirect) function calls to the backend so the execution flows to the designated backend implementation, similar to how plugging a charger into a socket redirects the electricity to your phone. This design enhances flexibility and integration, making NetworkX more adaptable and efficient.

There are three main ways to use a backend after the package is installed. You can set environment variables and run the exact same code you run for NetworkX. You can use a keyword argument backend=... with the NetworkX function. Or, you can convert the NetworkX Graph to a backend graph type and call a NetworkX function supported by that backend. Environment variables and backend keywords automatically convert your NetworkX Graph to the backend type. Manually converting it yourself allows you to use that same backend graph for more than one function call, reducing conversion time.

For example, you can set an environment variable before starting python to request that all dispatchable functions automatically dispatch to the given backend:

bash> NETWORKX_AUTOMATIC_BACKENDS=cugraph python my_networkx_script.py

or you can specify the backend as a kwarg:

nx.betweenness_centrality(G, k=10, backend="parallel")

or you can convert the NetworkX Graph object G into a Graph-like object specific to the backend and then pass that to the NetworkX function:

H = nx_parallel.ParallelGraph(G)
nx.betweenness_centrality(H, k=10)

The first approach is useful when you don’t want to change your NetworkX code and just want to run your code on different backend(s). The second approach comes in handy when you need to pass additional backend-specific arguments, for example:

nx.betweenness_centrality(G, k=10, backend="parallel", get_chunks=get_chunks)

Here, get_chunks is not a NetworkX argument, but an nx_parallel-specific argument.

How does this work?#

You might have seen the @nx._dispatchable decorator on many of the NetworkX functions in the codebase. This decorator dispatches a NetworkX function to a specified backend if one is available, and otherwise runs the function with NetworkX. It checks that the specified backend is valid and installed, raising an ImportError if it is not. It then resolves the graph arguments from the provided args and kwargs, handling graphs passed either positionally or by keyword, and checks whether any of the resolved graphs come from a backend by looking for a __networkx_backend__ attribute. That attribute holds a string with the name of the backend’s entry_point (more on entry points later).

If there are graphs from a backend, the decorator determines the priority of the backends based on the backend_priority configuration and checks that all such graphs come from the same backend; if they do not, it raises a TypeError. If a backend is specified and it matches the backend of the graphs, the decorator loads the backend and calls the corresponding backend function, passing along the additional backend-specific backend_kwargs. After the call, the networkx logger emits a DEBUG message if logging is enabled (see Introspection below). If no compatible backend is found, or the backend does not implement the function, a NetworkXNotImplemented exception is raised. If the function mutates the input graph, or returns a graph, a graph generator, or a loader, the decorator tries to convert the inputs and run the function with a backend using automatic conversion; it only converts and runs if backend.should_run(...) returns True. If no backend is used, it falls back to running the original function with NetworkX. Refer to the __call__ method of the _dispatchable class for more details.
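
For example, the backend_priority configuration can also be set from Python instead of an environment variable. The following is a minimal sketch that assumes a backend named "parallel" (the nx-parallel package) is installed:

import networkx as nx

# Try the "parallel" backend first for every dispatchable call; execution
# falls back to NetworkX when the backend does not implement the algorithm
# or decides it should not run it.
nx.config.backend_priority = ["parallel"]

G = nx.erdos_renyi_graph(100, 0.1, seed=42)
bc = nx.betweenness_centrality(G)  # may be dispatched to the "parallel" backend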

The NetworkX library does not need to know that a backend exists for it to work. As long as the backend package creates the entry_point, and provides the correct interface, it will be called when the user requests it using one of the three approaches described above. Some backends have been working with the NetworkX developers to ensure smooth operation. They are the following:

  • graphblas:

    OpenMP-enabled sparse linear algebra backend.

  • cugraph:

    GPU-accelerated backend.

  • parallel:

    Parallel backend for NetworkX algorithms.

  • loopback:

    It’s for testing purposes only and is not a real backend.

Note that, in this example, the backend_name is parallel, the installed package is nx-parallel, and the package is imported as nx_parallel.

Introspection#

Introspection techniques aim to demystify dispatching and backend graph conversion behaviors.

The primary way to see what the dispatch machinery is doing is by enabling logging. This can help you verify that the backend you specified is being used. You can enable NetworkX’s backend logger to print to sys.stderr like this:

import logging
nxl = logging.getLogger("networkx")
nxl.addHandler(logging.StreamHandler())
nxl.setLevel(logging.DEBUG)

And you can disable it by running this:

nxl.setLevel(logging.CRITICAL)

Refer to logging to learn more about the logging facilities in Python.

By looking at the .backends attribute, you can get the set of all currently installed backends that implement a particular function. For example:

>>> nx.betweenness_centrality.backends  
{'parallel'}

The function docstring will also show which installed backends support it along with any backend-specific notes and keyword arguments:

>>> help(nx.betweenness_centrality)  
...
Backends
--------
parallel : Parallel backend for NetworkX algorithms
  The parallel computation is implemented by dividing the nodes into chunks
  and computing betweenness centrality for each chunk concurrently.
...

The NetworkX documentation website also includes info about trusted backends of NetworkX in function references. For example, see all_pairs_bellman_ford_path_length().

Introspection capabilities are currently limited, but we are working to improve them. We plan to make it easier to answer questions such as:

  • What happened (and why)?

  • What will happen (and why)?

  • Where was time spent (including conversions)?

  • What is in the cache and how much memory is it using?

Transparency is essential to allow for greater understanding, debug-ability, and customization. After all, NetworkX dispatching is extremely flexible and can support advanced workflows with multiple backends and fine-tuned configuration, but introspection is necessary to inform when and how to evolve your workflow to meet your needs. If you have suggestions for how to improve introspection, please let us know!

Docs for backend developers#

Creating a custom backend#

  1. Defining a BackendInterface object:

    Note that the BackendInterface doesn’t have to be a class. It can be an instance of a class, or a module as well. You can define the following methods or functions in your backend’s BackendInterface object (a minimal sketch of such an interface appears at the end of this list):

    1. convert_from_nx and convert_to_nx methods or functions are required for backend dispatching to work. The arguments to convert_from_nx are:

      • G : NetworkX Graph

      • edge_attrs : dict, optional

        Dictionary mapping edge attributes to default values if missing in G. If None, then no edge attributes will be converted and default may be 1.

      • node_attrs : dict, optional

        Dictionary mapping node attributes to default values if missing in G. If None, then no node attributes will be converted.

      • preserve_edge_attrs : bool

        Whether to preserve all edge attributes.

      • preserve_node_attrs : bool

        Whether to preserve all node attributes.

      • preserve_graph_attrs : bool

        Whether to preserve all graph attributes.

      • preserve_all_attrs : bool

        Whether to preserve all graph, node, and edge attributes.

      • name : str

        The name of the algorithm.

      • graph_name : str

        The name of the graph argument being converted.

    2. can_run (Optional):

      If your backend only partially implements an algorithm, you can define a can_run(name, args, kwargs) function in your BackendInterface object that returns True or False indicating whether the backend can run the algorithm with the given arguments or not. Instead of a boolean you can also return a string message to inform the user why that algorithm can’t be run.

    3. should_run (Optional):

      A backend may also define should_run(name, args, kwargs), which is similar to can_run but answers whether the backend should be run. should_run is only checked when performing backend graph conversions. Like can_run, it receives the original arguments so it can decide whether it should be run by inspecting them. can_run runs before should_run, so should_run may assume can_run is True. If not implemented by the backend, can_run and should_run are assumed to always return True if the backend implements the algorithm.

    4. on_start_tests (Optional):

      A special on_start_tests(items) function may be defined by the backend. It will be called with the list of NetworkX tests discovered. Each item is a test object that can be marked as xfail if the backend does not support the test using item.add_marker(pytest.mark.xfail(reason=...)).
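
    For illustration, here is a minimal sketch of a BackendInterface implemented as a module. The backend name ("my_backend"), the graph class, and the set of implemented algorithms are hypothetical; only the function names and signatures follow the conventions described above:

    # my_backend/interface.py -- hypothetical module registered as the
    # networkx.backends entry point (see "Adding entry points" below).
    import networkx as nx


    class MyBackendGraph:
        # Hypothetical backend graph type; see "Defining a Backend Graph class" below.
        __networkx_backend__ = "my_backend"

        def __init__(self, edges=None):
            self.edges = list(edges) if edges is not None else []


    def convert_from_nx(G, *, edge_attrs=None, node_attrs=None,
                        preserve_edge_attrs=False, preserve_node_attrs=False,
                        preserve_graph_attrs=False, name=None, graph_name=None):
        # Convert a NetworkX graph into the backend's own graph type.
        return MyBackendGraph(G.edges(data=True))


    def convert_to_nx(result, *, name=None):
        # Convert a backend result back into something NetworkX (and its tests) expect.
        if isinstance(result, MyBackendGraph):
            H = nx.Graph()
            H.add_edges_from(result.edges)
            return H
        return result  # scalars, dicts, etc. can usually be returned as-is


    def can_run(name, args, kwargs):
        # Only claim the algorithms this backend actually implements.
        return name in {"betweenness_centrality"}


    def should_run(name, args, kwargs):
        # For tiny graphs the conversion overhead may not pay off.
        G = args[0] if args else kwargs.get("G")
        if hasattr(G, "number_of_nodes") and G.number_of_nodes() < 100:
            return "graph too small; running with NetworkX instead"
        return True


    def betweenness_centrality(G, k=None, normalized=True, weight=None,
                               endpoints=False, seed=None):
        # The backend's implementation of the algorithm (omitted in this sketch).
        ...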

  2. Adding entry points

    To be discoverable by NetworkX, your package must register an entry-point networkx.backends in the package’s metadata, with a key pointing to your dispatch object. For example, if you are using setuptools to manage your backend package, you can add the following to your pyproject.toml file:

    [project.entry-points."networkx.backends"]
    backend_name = "your_backend_interface_object"
    

    You can also add the backend_info entry-point. It points to the get_info function, which returns all the backend information that is then used to build the “Additional backend implementations” box at the end of the algorithm’s documentation page. Note that the get_info function shouldn’t import your backend package:

    [project.entry-points."networkx.backend_info"]
    backend_name = "your_get_info_function"
    
    The get_info function should return a dictionary with the following key-value pairs (a minimal sketch of such a function appears at the end of this step):

    • backend_name : str or None

      It is the name passed in the backend kwarg.

    • project : str or None

      The name of your backend project.

    • package : str or None

      The name of your backend package.

    • url : str or None

      This is the url to either your backend’s codebase or documentation, and will be displayed as a hyperlink to the backend_name, in the “Additional backend implementations” section.

    • short_summary : str or None

      One line summary of your backend which will be displayed in the “Additional backend implementations” section.

    • default_config : dict

      A dictionary mapping the backend config parameter names to their default values. This is used to automatically initialise the default configs for all the installed backends at the time of networkx’s import.

      See also

      Config

    • functions : dict or None

      A dictionary mapping function names to a dictionary of information about the function. The information can include the following keys:

      • url : str or None

        The url to the function’s source code or documentation.

      • additional_docs : str or None

        A short description or note about the backend function’s implementation.

      • additional_parameters : dict or None

        A dictionary mapping additional parameter headers to their short descriptions. For example:

        "additional_parameters": {
            'param1 : str, function (default = "chunks")' : "...",
            'param2 : int' : "...",
        }
        

      If any of these keys are not present, the corresponding information will not be displayed in the “Additional backend implementations” section on NetworkX docs website.

    Note that your backend’s docs will only appear in the official NetworkX docs if your backend is a trusted backend of NetworkX and is present in the circleci/config.yml and github/workflows/deploy-docs.yml files in the NetworkX repository.
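
    As a minimal, hedged sketch, a get_info function for the hypothetical backend above could look like the following; every name and value here is a placeholder:

    def get_info():
        # Must not import the backend package; return plain data only.
        return {
            "backend_name": "my_backend",
            "project": "my-backend",
            "package": "my_backend",
            "url": "https://example.org/my-backend",
            "short_summary": "Hypothetical backend used for illustration.",
            "default_config": {"n_jobs": -1},
            "functions": {
                "betweenness_centrality": {
                    "url": "https://example.org/my-backend/betweenness",
                    "additional_docs": "Computes centrality on node chunks.",
                    "additional_parameters": {
                        'get_chunks : str, function (default = "chunks")': "How nodes are split into chunks.",
                    },
                },
            },
        }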

  3. Defining a Backend Graph class

    The backend must create an object with an attribute __networkx_backend__ that holds a string with the entry point name:

    class BackendGraph:
        __networkx_backend__ = "backend_name"
        ...
    

    A backend graph instance may have a G.__networkx_cache__ dict to enable caching, and care should be taken to clear the cache when appropriate.
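
    For example, a sketch of how a backend graph class might maintain this cache, clearing it whenever the graph is mutated (the storage details below are placeholders):

    class BackendGraph:
        __networkx_backend__ = "backend_name"

        def __init__(self):
            self._adj = {}
            # Optional cache; NetworkX may store converted graphs here.
            self.__networkx_cache__ = {}

        def add_edge(self, u, v, **attrs):
            self._adj.setdefault(u, {})[v] = attrs
            self._adj.setdefault(v, {})[u] = attrs
            # Any mutation invalidates previously cached conversions.
            self.__networkx_cache__.clear()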

Testing the Custom backend#

To test your custom backend, you can run the NetworkX test suite on your backend. This also ensures that the custom backend is compatible with NetworkX’s API. The following steps will help you run the tests:

  1. Setting Backend Environment Variables:
    • NETWORKX_TEST_BACKEND : Setting this to your backend’s backend_name will let NetworkX’s dispatch machinery automatically convert a regular NetworkX Graph, DiGraph, MultiGraph, etc. to their backend equivalents, using the your_backend_interface_object.convert_from_nx(G, ...) function.

    • NETWORKX_FALLBACK_TO_NX (default=False) : Setting this variable to True will instruct the tests to use a NetworkX Graph for algorithms not implemented by your custom backend. Setting it to False will run only the tests for algorithms implemented by your custom backend; tests for other algorithms will xfail.

  2. Running Tests:

    You can invoke NetworkX tests for your custom backend with the following commands:

    export NETWORKX_TEST_BACKEND=<backend_name>
    export NETWORKX_FALLBACK_TO_NX=True  # or False
    pytest --pyargs networkx
    

How are tests run?#

  1. While dispatching to the backend implementation, the _convert_and_call function is used; while testing, the _convert_and_call_for_tests function is used instead. Besides running the test, the latter also checks functions that return numpy scalars and, for functions that return graphs, runs both the backend implementation and the networkx implementation, converts the backend graph into a NetworkX graph, compares the two, and returns the networkx graph. This can be regarded as (pragmatic) technical debt. We may replace these checks in the future.

  2. Conversions while running tests:
    • Convert NetworkX graphs using <your_backend_interface_object>.convert_from_nx(G, ...) into the backend graph.

    • Pass the backend graph objects to the backend implementation of the algorithm.

    • Convert the result back to a form expected by NetworkX tests using <your_backend_interface_object>.convert_to_nx(result, ...).

    • For nx_loopback, the graph is copied using the dispatchable metadata.

  3. Dispatchable algorithms that are not implemented by the backend will cause a pytest.xfail when the NETWORKX_FALLBACK_TO_NX environment variable is set to False, giving some indication that not all tests are running while avoiding an explicit failure.

_dispatchable([func, name, graphs, ...])

A decorator that redirects the execution of the decorated function func to its backend implementation.
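
For instance, here is a hedged sketch of decorating a custom NetworkX-style function so that it becomes dispatchable. The function itself is a made-up placeholder; edge_attrs="weight" declares which keyword argument names an edge attribute so backends can convert it:

import networkx as nx

@nx._dispatchable(edge_attrs="weight")
def total_edge_weight(G, weight="weight"):
    # Pure-Python reference implementation. A backend may provide its own
    # function of the same name, and the decorator will dispatch to it.
    return sum(d.get(weight, 1) for _, _, d in G.edges(data=True))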

Configs#

config#

alias of NetworkXConfig(backend_priority=[], backends=Config(parallel=Config(), cugraph=Config(), graphblas=Config()), cache_converted_graphs=True)

class NetworkXConfig(**kwargs)#

Configuration for NetworkX that controls behaviors such as how to use backends.

Attribute and bracket notation are supported for getting and setting configurations:

>>> nx.config.backend_priority == nx.config["backend_priority"]
True

Parameters:
backend_priority : list of backend names

Enable automatic conversion of graphs to backend graphs for algorithms implemented by the backend. Priority is given to backends listed earlier. Default is empty list.

backends : Config mapping of backend names to backend Config

The keys of the Config mapping are names of all installed NetworkX backends, and the values are their configurations as Config mappings.

cache_converted_graphs : bool

If True, then save converted graphs to the cache of the input graph. Graph conversion may occur when automatically using a backend from backend_priority or when using the backend= keyword argument to a function call. Caching can improve performance by avoiding repeated conversions, but it uses more memory. Care should be taken to not manually mutate a graph that has cached graphs; for example, G[u][v][k] = val changes the graph, but does not clear the cache. Using methods such as G.add_edge(u, v, weight=val) will clear the cache to keep it consistent. G.__networkx_cache__.clear() manually clears the cache. Default is True.
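
For instance, a short doctest-style sketch of the mutation caveat described above:

>>> G = nx.path_graph(4)
>>> G[0][1]["weight"] = 2.0  # mutates the graph but does NOT clear the cache
>>> G.add_edge(0, 1, weight=3.0)  # graph methods clear the cache to keep it consistent
>>> G.__networkx_cache__.clear()  # the cache can also be cleared manually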

Notes

Environment variables may be used to control some default configurations:

  • NETWORKX_BACKEND_PRIORITY: set backend_priority from comma-separated names.

  • NETWORKX_CACHE_CONVERTED_GRAPHS: set cache_converted_graphs to True if nonempty.

This is a global configuration. Use with caution when using from multiple threads.

class Config(**kwargs)#

The base class for NetworkX configuration.

There are two ways to use this to create configurations. The recommended way is to subclass Config with docs and annotations.

>>> class MyConfig(Config):
...     '''Breakfast!'''
...
...     eggs: int
...     spam: int
...
...     def _check_config(self, key, value):
...         assert isinstance(value, int) and value >= 0
>>> cfg = MyConfig(eggs=1, spam=5)

Another way is to simply pass the initial configuration as keyword arguments to the Config instance:

>>> cfg1 = Config(eggs=1, spam=5)
>>> cfg1
Config(eggs=1, spam=5)

Once defined, config items may be modified, but can’t be added or deleted by default. Config is a Mapping, and can get and set configs via attributes or brackets:

>>> cfg.eggs = 2
>>> cfg.eggs
2
>>> cfg["spam"] = 42
>>> cfg["spam"]
42

For convenience, configs can also be set temporarily within a context using the “with” statement:

>>> with cfg(spam=3):
...     print("spam (in context):", cfg.spam)
spam (in context): 3
>>> print("spam (after context):", cfg.spam)
spam (after context): 42

Subclasses may also define _check_config (as done in the example above) to ensure the value being assigned is valid:

>>> cfg.spam = -1
Traceback (most recent call last):
    ...
AssertionError

If a more flexible configuration object is needed that allows adding and deleting configurations, then pass strict=False when defining the subclass:

>>> class FlexibleConfig(Config, strict=False):
...     default_greeting: str = "Hello"
>>> flexcfg = FlexibleConfig()
>>> flexcfg.name = "Mr. Anderson"
>>> flexcfg
FlexibleConfig(default_greeting='Hello', name='Mr. Anderson')