MPI Tools

Core MPI Utilities

spinup.utils.mpi_tools.mpi_avg(x) [source]

Average a scalar or vector over MPI processes.
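
Conceptually, the average is just a sum-allreduce divided by the number of processes. A minimal sketch of that idea using mpi4py (an illustration, not the library's actual implementation):

    import numpy as np
    from mpi4py import MPI

    def mpi_avg_sketch(x):
        """Average a scalar or vector elementwise over MPI processes."""
        comm = MPI.COMM_WORLD
        scalar = np.isscalar(x)
        x = np.asarray(x, dtype=np.float64).reshape(-1)
        buf = np.zeros_like(x)
        comm.Allreduce(x, buf, op=MPI.SUM)  # elementwise sum across all processes
        avg = buf / comm.Get_size()
        return float(avg[0]) if scalar else avg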

spinup.utils.mpi_tools.mpi_fork(n, bind_to_core=False) [source]

Re-launches the current script with workers linked by MPI.

Also terminates the original process that launched it.

Taken almost without modification from the Baselines function of the same name.

Parameters:
  • n (int) – Number of processes to split into.
  • bind_to_core (bool) – Bind each MPI process to a core.
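
In practice, mpi_fork is called once at the top of a training script; a hypothetical usage sketch (train is a stand-in for your own entry point):

    from spinup.utils.mpi_tools import mpi_fork

    def train():
        ...  # per-worker training logic goes here

    if __name__ == '__main__':
        # Relaunch this script under mpirun with 4 workers; the original
        # launching process exits, and each worker continues from here.
        mpi_fork(4)
        train()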
spinup.utils.mpi_tools.mpi_statistics_scalar(x, with_min_and_max=False) [source]

Get mean/std and optional min/max of scalar x across MPI processes.

Parameters:
  • x – An array containing samples of the scalar to produce statistics for.
  • with_min_and_max (bool) – If true, return min and max of x in addition to mean and std.
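
A usage sketch, assuming the function returns (mean, std) by default and appends the global min and max when with_min_and_max=True (the data here is made up):

    import numpy as np
    from spinup.utils.mpi_tools import mpi_statistics_scalar

    ep_returns = np.array([1.0, 2.0, 3.0])  # this worker's local samples
    mean, std = mpi_statistics_scalar(ep_returns)
    mean, std, lo, hi = mpi_statistics_scalar(ep_returns, with_min_and_max=True)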
spinup.utils.mpi_tools.num_procs() [source]

Count active MPI processes.

spinup.utils.mpi_tools.proc_id() [source]

Get rank of calling process.
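
These two helpers are commonly paired to give each worker a distinct random seed and an even share of a fixed workload; a sketch of that pattern (the seed offset and step count are illustrative):

    import numpy as np
    from spinup.utils.mpi_tools import proc_id, num_procs

    seed = 10000 * proc_id()  # distinct, reproducible seed per worker
    np.random.seed(seed)

    local_steps = int(4000 / num_procs())  # split total steps across workers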

MPI + Tensorflow Utilities

The spinup.utils.mpi_tf module contains a few tools to make it easy to use the AdamOptimizer across many MPI processes. This is a bit hacky; if you're looking for something more sophisticated and general-purpose, consider horovod.
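
A minimal TF1-style sketch of how the pieces fit together (the toy regression graph is hypothetical; only MpiAdamOptimizer and sync_all_params come from the module):

    import tensorflow as tf
    from spinup.utils.mpi_tf import MpiAdamOptimizer, sync_all_params

    # Toy graph: any scalar loss built from trainable variables works.
    x_ph = tf.placeholder(tf.float32, shape=(None, 4))
    y_ph = tf.placeholder(tf.float32, shape=(None, 1))
    y_hat = tf.layers.dense(x_ph, units=1)
    loss = tf.reduce_mean((y_hat - y_ph) ** 2)

    # Drop-in replacement for tf.train.AdamOptimizer: gradients get
    # averaged over MPI processes before each update.
    train_op = MpiAdamOptimizer(learning_rate=1e-3).minimize(loss)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    sess.run(sync_all_params())  # start all workers from identical parameters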

class spinup.utils.mpi_tf.MpiAdamOptimizer(**kwargs) [source]

Adam optimizer that averages gradients across MPI processes.

The compute_gradients method is taken from Baselines' MpiAdamOptimizer. For documentation on method arguments, see the Tensorflow docs page for the base AdamOptimizer.

apply_gradients(grads_and_vars, global_step=None, name=None) [source]

Same as normal apply_gradients, except sync params after update.

compute_gradients(loss, var_list, **kwargs) [source]

Same as normal compute_gradients, except average grads over processes.

spinup.utils.mpi_tf.sync_all_params() [source]

Sync all tf variables across MPI processes.
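
Under the hood, syncing amounts to broadcasting the root process's parameter values to every other worker; a rough sketch of that idea in mpi4py terms (not the actual TF op the library builds):

    import numpy as np
    from mpi4py import MPI

    def broadcast_params_sketch(flat_params):
        """Overwrite each worker's flat parameter vector with rank 0's copy."""
        flat_params = np.asarray(flat_params, dtype=np.float32)
        MPI.COMM_WORLD.Bcast(flat_params, root=0)  # in-place on non-root ranks
        return flat_params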