rlberry_scool.envs.GridWorld¶
- class rlberry_scool.envs.GridWorld(nrows=5, ncols=5, start_coord=(0, 0), terminal_states=None, success_probability=0.9, reward_at=None, walls=((1, 1), (2, 2)), default_reward=0.0)[source]¶
- Bases: RenderInterface2D, FiniteMDP
- Simple GridWorld environment.
- Parameters:
- nrows : int
- number of rows
- ncols : int
- number of columns
- start_coord : tuple
- tuple with coordinates of initial position
- terminal_states : tuple
- ((row_0, col_0), (row_1, col_1), …) = coordinates of terminal states
- success_probability : double
- probability of moving in the chosen direction
- reward_at : dict
- dictionary, keys = tuple containing coordinates, values = reward at each coordinate
- walls : tuple
- ((row_0, col_0), (row_1, col_1), …) = coordinates of walls
- default_reward : double
- reward received at states not in ‘reward_at’
 
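- Example (a minimal construction sketch; the grid size, wall cells and reward placement below are illustrative choices, not library defaults beyond those shown in the signature):

    from rlberry_scool.envs import GridWorld

    # Illustrative configuration: a 5x6 grid with a single rewarding
    # terminal state in the bottom-right corner and two wall cells.
    env = GridWorld(
        nrows=5,
        ncols=6,
        start_coord=(0, 0),
        terminal_states=((4, 5),),
        success_probability=0.9,
        reward_at={(4, 5): 1.0},
        walls=((1, 1), (2, 2)),
        default_reward=0.0,
    )

    observation, info = env.reset()
    print(env.observation_space, env.action_space)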
- Attributes:
- Methods:
- close() : After the user has finished using the environment, close contains the code necessary to "clean up" the environment.
- from_layout([layout, success_probability]) : Create GridWorld instance from a layout.
- get_background() : Return a scene (list of shapes) representing the background.
- get_layout_array([state_data, fill_walls_with]) : Returns an array 'layout' of shape (nrows, ncols) mapping state data onto the grid.
- get_layout_img([state_data, colormap_name, ...]) : Returns an image array representing the value of state_data on the gridworld layout.
- get_params([deep]) : Get parameters for this model.
- get_scene(state) : Return scene (list of shapes) representing a given state.
- get_video([framerate]) : Get video data.
- get_wrapper_attr(name) : Gets the attribute name from the environment.
- is_generative() : Returns true if sample() method is implemented.
- is_online() : Returns true if reset() and step() methods are implemented.
- is_terminal(state) : Returns true if a state is terminal.
- log() : Print the structure of the MDP.
- render([loop]) : Function to render an environment that implements the interface.
- reseed([seed_seq]) : Get new random number generator for the model.
- reset([seed, options]) : Reset the environment to a default state.
- reward_fn(state, action, next_state) : Reward function.
- sample(state, action) : Sample a transition s' from P(s'|state, action).
- save_video(filename[, framerate]) : Save video file.
- set_initial_state_distribution(distribution)
- step(action) : Run one timestep of the environment's dynamics using the agent actions.
- append_state_for_rendering
- clear_render_buffer
- disable_rendering
- display_values
- enable_rendering
- get_renderer
- get_transition_support
- is_render_enabled
- print_transition_at
- render_ascii
- save_gif
- set_clipping_area
- set_refresh_interval
- close()¶
- After the user has finished using the environment, close contains the code necessary to “clean up” the environment. This is critical for closing rendering windows, database or HTTP connections. Calling close on an already closed environment has no effect and won’t raise an error.
 - classmethod from_layout(layout: str = '\nIOOOO # OOOOO O OOOOR\nOOOOO # OOOOO # OOOOO\nOOOOO O OOOOO # OOOOO\nOOOOO # OOOOO # OOOOO\nIOOOO # OOOOO # OOOOr\n', success_probability=0.95)[source]¶
- Create GridWorld instance from a layout.
- Layout symbols:
- ‘#’ : wall
- ‘r’ : reward of 1, terminal state
- ‘R’ : reward of 1, non-terminal state
- ‘T’ : terminal state
- ‘I’ : initial state (if several, start uniformly among I)
- ‘O’ : empty state
- any other char : empty state
- Layout example:
    IOOOO # OOOOO O OOOOR
    OOOOO # OOOOO # OOOOO
    OOOOO O OOOOO # OOOOO
    OOOOO # OOOOO # OOOOO
    IOOOO # OOOOO # OOOOr
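- Example (a sketch with a hypothetical 3x5 layout; any character not listed above is read as an empty state):

    from rlberry_scool.envs import GridWorld

    # Hypothetical layout: start at 'I', one wall '#', rewarding
    # terminal state 'r' in the bottom-right corner.
    layout = "IOOOO\nOO#OO\nOOOOr"

    env = GridWorld.from_layout(layout, success_probability=0.95)
    observation, info = env.reset()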
 - get_layout_array(state_data=None, fill_walls_with=nan)[source]¶
- Returns an array ‘layout’ of shape (nrows, ncols) such that:
- layout[row, col] = state_data[self.coord2idx[row, col]]
- If (row, col) is a wall:
- layout[row, col] = fill_walls_with
- Parameters:
- state_data : np.array, default = None
- Array of shape (self.observation_space.n,)
- fill_walls_with : float, default = np.nan
- Value to set in the layout in the coordinates corresponding to walls.
 
- Returns:
- Gridworld layout array of shape (nrows, ncols).
 
 
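- Example (a sketch using dummy per-state data; the values are purely illustrative):

    import numpy as np
    from rlberry_scool.envs import GridWorld

    env = GridWorld()  # default 5x5 grid

    # Dummy per-state data, one entry per state in the observation space.
    values = np.arange(env.observation_space.n, dtype=float)

    # Map the flat state data onto the 2D grid; wall cells become np.nan.
    layout = env.get_layout_array(state_data=values, fill_walls_with=np.nan)
    print(layout.shape)  # (nrows, ncols)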
 - get_layout_img(state_data=None, colormap_name='cool', wall_color=(0.0, 0.0, 0.0))[source]¶
- Returns an image array representing the value of state_data on the gridworld layout.
- Parameters:
- state_data : np.array, default = None
- Array of shape (self.observation_space.n,)
- colormap_name : str, default = ‘cool’
- Colormap name. See https://matplotlib.org/tutorials/colors/colormaps.html
- wall_color : tuple
- RGB color for walls.
- Returns:
- Gridworld image array of shape (nrows, ncols, 3).
 
 
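- Example (a sketch that displays the image with matplotlib; the state data is random, purely for illustration):

    import matplotlib.pyplot as plt
    import numpy as np
    from rlberry_scool.envs import GridWorld

    env = GridWorld()
    values = np.random.rand(env.observation_space.n)  # dummy state data

    img = env.get_layout_img(
        state_data=values,
        colormap_name="cool",
        wall_color=(0.0, 0.0, 0.0),
    )
    plt.imshow(img)
    plt.axis("off")
    plt.show()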
 - get_params(deep=True)¶
- Get parameters for this model.
- Parameters:
- deep : bool, default = True
- If True, will return the parameters for this model and contained subobjects.

- Returns:
- params : dict
- Parameter names mapped to their values.
 
 
 - get_video(framerate=25, **kwargs)¶
- Get video data. 
- get_wrapper_attr(name: str) → Any¶
- Gets the attribute name from the environment. 
 - is_generative()¶
- Returns true if sample() method is implemented 
 - is_online()¶
- Returns true if reset() and step() methods are implemented 
 - log()¶
- Print the structure of the MDP. 
 - property np_random: Generator¶
- Returns the environment’s internal _np_random that, if not set, will initialise with a random seed.
- Returns:
- Instances of np.random.Generator 
 
 - render(loop=True, **kwargs)¶
- Function to render an environment that implements the interface. 
 - reseed(seed_seq=None)¶
- Get new random number generator for the model.
- Parameters:
- seed_seq : np.random.SeedSequence, rlberry.seeding.Seeder or int, default = None
- Seed sequence from which to spawn the random number generator. If None, generate random seed. If int, use as entropy for SeedSequence. If seeder, use seeder.seed_seq
 
 
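- Example (a short sketch; the integer seed is arbitrary):

    from rlberry_scool.envs import GridWorld

    env = GridWorld()
    env.reseed(1234)  # int used as entropy for the SeedSequence
    observation, info = env.reset()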
 - reset(seed=None, options=None)¶
- Reset the environment to a default state. 
 - reward_fn(state, action, next_state)[source]¶
- Reward function. Returns mean reward at (state, action) by default.
- Parameters:
- state : int
- current state
- action : int
- current action
- next_state
- next state 
- Returns:
- reward : float 
 
 
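- Example (a sketch; the state and action indices are illustrative):

    from rlberry_scool.envs import GridWorld

    env = GridWorld()
    # Mean reward for taking action 0 in state 0 and landing in state 1.
    r = env.reward_fn(0, 0, 1)
    print(r)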
 - property rng¶
- Random number generator. 
 - save_video(filename, framerate=25, **kwargs)¶
- Save video file. 
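- Example (a sketch of one possible recording workflow, assuming rendering is enabled before interacting and that a video backend is available on your system):

    from rlberry_scool.envs import GridWorld

    env = GridWorld()
    env.enable_rendering()  # record visited states while stepping
    observation, info = env.reset()
    for _ in range(50):
        observation, reward, terminated, truncated, info = env.step(env.action_space.sample())
        if terminated or truncated:
            observation, info = env.reset()
    env.save_video("gridworld.mp4", framerate=25)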
 - set_initial_state_distribution(distribution)¶
- Parameters:
- distribution : numpy.ndarray or int
- array of size (S,) containing the initial state distribution or an integer representing the initial/default state 
 
 
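- Example (a sketch showing both forms accepted by the parameter description above):

    import numpy as np
    from rlberry_scool.envs import GridWorld

    env = GridWorld()
    n_states = env.observation_space.n

    # Start uniformly at random over all states.
    env.set_initial_state_distribution(np.ones(n_states) / n_states)

    # Or always start in state 0.
    env.set_initial_state_distribution(0)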
 - step(action)[source]¶
- Run one timestep of the environment’s dynamics using the agent actions.
- When the end of an episode is reached (terminated or truncated), it is necessary to call reset() to reset this environment’s state for the next episode.
- Changed in version 0.26: The Step API was changed, removing done in favor of terminated and truncated, to make it clearer to users when the environment had terminated or truncated, which is critical for reinforcement learning bootstrapping algorithms.
- Args:
- action (ActType): an action provided by the agent to update the environment state. 
- Returns:
- observation (ObsType): An element of the environment’s observation_space as the next observation due to the agent actions. An example is a numpy array containing the positions and velocities of the pole in CartPole.
- reward (SupportsFloat): The reward as a result of taking the action.
- terminated (bool): Whether the agent reaches the terminal state (as defined under the MDP of the task), which can be positive or negative. An example is reaching the goal state or moving into the lava from the Sutton and Barto Gridworld. If true, the user needs to call reset().
- truncated (bool): Whether the truncation condition outside the scope of the MDP is satisfied. Typically, this is a timelimit, but could also be used to indicate an agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().
- info (dict): Contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain: metrics that describe the agent’s performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. In OpenAI Gym <v26, it contains “TimeLimit.truncated” to distinguish truncation and termination, however this is deprecated in favour of returning terminated and truncated variables.
- done (bool): (Deprecated) A boolean value for if the episode has ended, in which case further step() calls will return undefined results. This was removed in OpenAI Gym v26 in favor of terminated and truncated attributes. A done signal may be emitted for different reasons: Maybe the task underlying the environment was solved successfully, a certain timelimit was exceeded, or the physics simulation has entered an invalid state.
 
 
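- Example (a sketch of the standard interaction loop with a random policy; the terminal state and reward placement are illustrative):

    from rlberry_scool.envs import GridWorld

    # Illustrative setup: a rewarding terminal state in the bottom-right corner.
    env = GridWorld(terminal_states=((4, 4),), reward_at={(4, 4): 1.0})

    observation, info = env.reset(seed=0)
    total_reward = 0.0

    for t in range(500):  # cap the rollout length for safety
        action = env.action_space.sample()  # random policy, for illustration
        observation, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break

    print("episode return:", total_reward)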
 - property unwrapped¶
- Returns the base non-wrapped environment.
- Returns:
- Env: The base non-wrapped gymnasium.Env instance