Running Experiments

One of the best ways to get a feel for deep RL is to run the algorithms and see how they perform on different tasks. The Spinning Up code library makes small-scale (local) experiments easy to do, and in this section, we'll discuss two ways to run them: either from the command line, or through function calls in scripts.

Launching from the Command Line

Spinning Up ships with spinup/run.py, a convenient tool that lets you easily launch any algorithm (with any choices of hyperparameters) from the command line. It also serves as a thin wrapper over the utilities for watching trained policies and plotting, although we will not discuss that functionality on this page (for those details, see the pages on experiment outputs and plotting).

The standard way to run a Spinning Up algorithm from the command line is

python -m spinup.run [algo name] [experiment flags]

e.g.:

python -m spinup.run ppo --env Walker2d-v2 --exp_name walker

You Should Know

If you are using ZShell: ZShell interprets square brackets as special characters. Spinning Up uses square brackets in a few ways for command line arguments; make sure to escape them, or try the solution recommended here if you want to escape them by default.

Detailed Quickstart Guide

python -m spinup.run ppo --exp_name ppo_ant --env Ant-v2 --clip_ratio 0.1 0.2
    --hid[h] [32,32] [64,32] --act tf.nn.tanh --seed 0 10 20 --dt
    --data_dir path/to/data

runs PPO in the Ant-v2 Gym environment, with various settings controlled by the flags.

clip_ratio, hid, and act are flags to set some algorithm hyperparameters. You can provide multiple values for hyperparameters to run multiple experiments. Check the docs to see what hyperparameters you can set (click here for the PPO documentation).

hid and act are special shortcut flags for setting the hidden sizes and activation function for the neural networks trained by the algorithm.

The seed flag sets the seed for the random number generator. RL algorithms have high variance, so try multiple seeds to get a feel for how performance varies.

The dt flag ensures that the save directory names will have timestamps in them (otherwise they don't, unless you set FORCE_DATESTAMP=True in spinup/user_config.py).

The data_dir flag allows you to set the save folder for results. The default value is set by DEFAULT_DATA_DIR in spinup/user_config.py, which will be a subfolder data in the spinningup folder (unless you change it).

Save directory names are based on exp_name and any flags which have multiple values. Instead of the full flag, a shorthand will appear in the directory name. Shorthands can be provided by the user in square brackets after the flag, like --hid[h]; otherwise, shorthands are substrings of the flag (clip_ratio becomes cli). To illustrate, the save directory for the run with clip_ratio=0.1, hid=[32,32], and seed=10 will be:

path/to/data/YY-MM-DD_ppo_ant_cli0-1_h32-32/YY-MM-DD_HH-MM-SS-ppo_ant_cli0-1_h32-32_s10


Setting Hyperparameters from the Command Line

Every hyperparameter in every algorithm can be controlled directly from the command line. If kwarg is a valid keyword arg for the function call of an algorithm, you can set values for it with the flag --kwarg. To find out what keyword args are available, see either the docs page for an algorithm, or try

python -m spinup.run [algo name] --help

to see a readout of the docstring.
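For instance, assuming gamma and steps_per_epoch are among the keyword args listed in the PPO docstring, you could override them like this:

python -m spinup.run ppo --env Walker2d-v2 --exp_name walker --gamma 0.995 --steps_per_epoch 5000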

You Should Know

Values pass through eval() before being used, so you can describe some functions and objects directly from the command line. For example:

python -m spinup.run ppo --env Walker2d-v2 --exp_name walker --act tf.nn.elu

sets tf.nn.elu as the activation function.

You Should Know

There's some nice handling for kwargs that take dict values. Instead of having to provide

--key dict(v1=value_1, v2=value_2)

you can give

--key:v1 value_1 --key:v2 value_2

to get the same result.
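For example, the actor-critic keyword args (ac_kwargs, a dict) can be set piece by piece this way; these are the same values that the --hid and --act shortcut flags described below set:

python -m spinup.run ppo --env Walker2d-v2 --exp_name walker --ac_kwargs:hidden_sizes [64,64] --ac_kwargs:activation tf.nn.relu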

Launching Multiple Experiments at Once

You can launch multiple experiments, to be executed in series, by simply providing more than one value for a given argument. (An experiment for each possible combination of values will be launched.)

For example, to launch otherwise-equivalent runs with different random seeds (0, 10, and 20), do:

python -m spinup.run ppo --env Walker2d-v2 --exp_name walker --seed 0 10 20

Experiments don't launch in parallel because they soak up enough resources that executing several at the same time wouldn't get a speedup.

Special Flags

A few flags receive special treatment.

Environment Flag

--env, --env_name

string. The name of an environment in the OpenAI Gym. All Spinning Up algorithms are implemented as functions that accept env_fn as an argument, where env_fn must be a callable function that builds a copy of the RL environment. Since the most common use case is Gym environments, though, all of which are built through gym.make(env_name), we allow you to just specify env_name (or env for short) at the command line, which gets converted to a lambda-function that builds the correct gym environment.
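In other words, passing --env Walker2d-v2 is roughly equivalent to building the environment function yourself in a script (a sketch, not the exact internals of spinup/run.py):

import gym

# A callable that builds a fresh copy of the requested Gym environment.
env_fn = lambda : gym.make('Walker2d-v2')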

Shortcut Flags

Some algorithm arguments are relatively long, and we enabled shortcuts for them:

--hid, --ac_kwargs:hidden_sizes

list of ints. Sets the sizes of the hidden layers in the neural networks (policies and value functions).

--act, --ac_kwargs:activation

tf op. The activation function for the neural networks in the actor and critic.

These flags are valid for all current Spinning Up algorithms.

Config Flags

These flags are not hyperparameters of any algorithm, but change the experimental configuration in some way.

--cpu, --num_cpu

int. If this flag is set, the experiment is launched with this many processes, one per cpu, connected by MPI. Some algorithms are amenable to this sort of parallelization but not all. An error will be raised if you try setting num_cpu > 1 for an incompatible algorithm. You can also set --num_cpu auto, which will automatically use as many CPUs as are available on the machine.
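For example, to run PPO (one of the MPI-compatible algorithms) across four processes, you might use:

python -m spinup.run ppo --env Walker2d-v2 --exp_name walker --cpu 4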


--exp_name

string. The experiment name. This is used in naming the save directory for each experiment. The default is "cmd" + [algo name].


--data_dir

path. Set the base save directory for this experiment or set of experiments. If none is given, the DEFAULT_DATA_DIR in spinup/user_config.py will be used.


--datestamp, --dt

bool. Include date and time in the name for the save directory of the experiment.

Where Results are Saved

Results for a particular experiment (a single run of a configuration of hyperparameters) are stored in

data_dir/[outer_prefix]exp_name[suffix]/[inner_prefix]exp_name[suffix]_s[seed]

where
  • data_dir is the value of the --data_dir flag (defaults to DEFAULT_DATA_DIR from spinup/user_config.py if --data_dir is not given),
  • the outer_prefix is a YY-MM-DD_ timestamp if the --datestamp flag is raised, otherwise nothing,
  • the inner_prefix is a YY-MM-DD_HH-MM-SS- timestamp if the --datestamp flag is raised, otherwise nothing,
  • and suffix is a special string based on the experiment hyperparameters.

How is Suffix Determined?

Suffixes are only included if you run multiple experiments at once, and they only include references to hyperparameters that differ across experiments, except for random seed. The goal is to make sure that results for similar experiments (ones which share all params except seed) are grouped in the same folder.

Suffixes are constructed by combining shorthands for hyperparameters with their values, where a shorthand is either 1) constructed automatically from the hyperparameter name or 2) supplied by the user. The user can supply a shorthand by writing in square brackets after the kwarg flag.

For example, consider:

python -m spinup.run ddpg --env Hopper-v2 --hid[h] [300] [128,128] --act tf.nn.tanh tf.nn.relu

Here, the --hid flag is given a user-supplied shorthand, h. The --act flag is not given a shorthand by the user, so one will be constructed for it automatically.

The suffixes produced in this case are:

_h300_ac-acttanh
_h300_ac-actrelu
_h128-128_ac-acttanh
_h128-128_ac-actrelu

Note that the h was given by the user; the ac-act shorthand was constructed from ac_kwargs:activation (the true name for the act flag).


You Don't Actually Need to Know This One

Each individual algorithm is located in a file spinup/algos/ALGO_NAME/ALGO_NAME.py, and these files can be run directly from the command line with a limited set of arguments (some of which differ from what's available to spinup/run.py). The command line support in the individual algorithm files is essentially vestigial, however, and this is not a recommended way to perform experiments.

This documentation page will not describe those command line calls, and will only describe calls through spinup/run.py.

Launching from Scripts

Each algorithm is implemented as a python function, which can be imported directly from the spinup package, e.g.

>>> from spinup import ppo

See the documentation page for each algorithm for a complete account of possible arguments. These methods can be used to set up specialized custom experiments, for example:

from spinup import ppo
import tensorflow as tf
import gym

# Callable that builds a copy of the environment.
env_fn = lambda : gym.make('LunarLander-v2')

# Network sizes and activation for the actor-critic.
ac_kwargs = dict(hidden_sizes=[64,64], activation=tf.nn.relu)

# Where to save results, and under what experiment name.
logger_kwargs = dict(output_dir='path/to/output_dir', exp_name='experiment_name')

ppo(env_fn=env_fn, ac_kwargs=ac_kwargs, steps_per_epoch=5000, epochs=250, logger_kwargs=logger_kwargs)

Using ExperimentGrid

It's often useful in machine learning research to run the same algorithm with many possible hyperparameters. Spinning Up ships with a simple tool for facilitating this, called ExperimentGrid.

Consider the following example from spinup/examples/:

from spinup.utils.run_utils import ExperimentGrid
from spinup import ppo
import tensorflow as tf

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('--cpu', type=int, default=4)
    parser.add_argument('--num_runs', type=int, default=3)
    args = parser.parse_args()

    eg = ExperimentGrid(name='ppo-bench')
    eg.add('env_name', 'CartPole-v0', '', True)
    eg.add('seed', [10*i for i in range(args.num_runs)])
    eg.add('epochs', 10)
    eg.add('steps_per_epoch', 4000)
    eg.add('ac_kwargs:hidden_sizes', [(32,), (64,64)], 'hid')
    eg.add('ac_kwargs:activation', [tf.tanh, tf.nn.relu], '')
    eg.run(ppo, num_cpu=args.cpu)

After making the ExperimentGrid object, parameters are added to it with

eg.add(param_name, values, shorthand, in_name)

where in_name forces a parameter to appear in the experiment name, even if it has the same value across all experiments.

After all parameters have been added,

eg.run(thunk, **run_kwargs)

runs all experiments in the grid (one experiment per valid configuration), by providing the configurations as kwargs to the function thunk. ExperimentGrid.run uses a function named call_experiment to launch thunk, and **run_kwargs specify behaviors for call_experiment. See the documentation page for details.
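For instance, assuming num_cpu, data_dir, and datestamp are among the run_kwargs handled by call_experiment, a call might look like:

eg.run(ppo, num_cpu=4, data_dir='path/to/data', datestamp=True)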

Except for the absence of shortcut kwargs (you can't use hid for ac_kwargs:hidden_sizes in ExperimentGrid), the basic behavior of ExperimentGrid is the same as running things from the command line. (In fact, spinup.run uses an ExperimentGrid under the hood.)