rlberry_scool.agents.tabular_rl.QLAgent

class rlberry_scool.agents.tabular_rl.QLAgent(env: Env | Tuple[Callable[[...], Env], Mapping[str, Any]], gamma: float = 0.99, alpha: float = 0.1, exploration_type: Literal['epsilon', 'boltzmann'] | None = None, exploration_rate: float | None = None, **kwargs)[source]

Bases: AgentWithSimplePolicy

Q-Learning Agent.

Parameters:
env: rlberry.types.Env

Environment with discrete states and actions.

gamma: float, default = 0.99

Discount factor.

alpha: float, default = 0.1

Learning rate.

exploration_type: {“epsilon”, “boltzmann”}, default: None

If “epsilon”: epsilon-greedy exploration. If “boltzmann”: Boltzmann exploration. If None: no exploration. Both exploration rules are sketched after this parameter list.

exploration_rate: float, default: None

Epsilon parameter for epsilon-greedy exploration, or tau (temperature) parameter for Boltzmann exploration.

**kwargs: Keyword Arguments

Arguments to be passed to AgentWithSimplePolicy.__init__(self, env, **kwargs) (AgentWithSimplePolicy).
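For intuition, the two exploration rules can be sketched as follows. This is a minimal illustrative sketch (not the library's internal code), assuming a Q-table Q of shape (n_states, n_actions):

import numpy as np

rng = np.random.default_rng()

def epsilon_greedy_action(Q, state, epsilon):
    # With probability epsilon, pick a uniformly random action; otherwise act greedily.
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[state]))

def boltzmann_action(Q, state, tau):
    # Sample an action with probability proportional to exp(Q[state, a] / tau).
    logits = Q[state] / tau
    probs = np.exp(logits - logits.max())  # subtract the max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(Q.shape[1], p=probs))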

Attributes:
Q: ndarray

2D array that stores the estimated expected rewards (Q-values) for state-action pairs.

Examples

>>> from rlberry.envs import GridWorld
>>> from rlberry_scool.agents.tabular_rl import QLAgent
>>>
>>> env = GridWorld(walls=(), nrows=5, ncols=5)
>>> agent = QLAgent(env)
>>> agent.fit(budget=1000)
>>> agent.policy(env.observation_space.sample())
>>> agent.reset()

Methods

eval([eval_horizon, n_simulations, gamma])

Monte-Carlo policy evaluation [1] method to estimate the mean discounted reward using the current policy on the evaluation environment.

fit(budget, **kwargs)

Train the agent using the provided environment.

get_params([deep])

Get parameters for this agent.

load(filename, **kwargs)

Load agent object from filepath.

policy(observation)

The policy function takes an observation from the environment and returns an action.

reseed([seed_seq])

Get new random number generator for the agent.

sample_parameters(trial)

Sample hyperparameters for hyperparam optimization using Optuna (https://optuna.org/)

save(filename)

Save agent object.

set_writer(writer)

Set self._writer.

get_action

reset

eval(eval_horizon=100000, n_simulations=10, gamma=1.0, **kwargs)

Monte-Carlo policy evaluation [1] method to estimate the mean discounted reward using the current policy on the evaluation environment.

Parameters:
eval_horizon: int, optional, default: 10**5

Maximum episode length, representing the horizon for each simulation.

n_simulations: int, optional, default: 10

Number of Monte Carlo simulations to perform for the evaluation.

gamma: float, optional, default: 1.0

Discount factor for future rewards.

Returns:
float

The mean over ‘n_simulations’ of the discounted sum of rewards obtained in each simulation.

References

[1]

Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press.
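For intuition, the quantity estimated by eval() roughly corresponds to the Monte-Carlo loop below. This is an illustrative sketch assuming a Gymnasium-style evaluation environment, not the library's exact implementation:

import numpy as np

def monte_carlo_eval(agent, eval_env, eval_horizon=10**5, n_simulations=10, gamma=1.0):
    returns = np.zeros(n_simulations)
    for sim in range(n_simulations):
        observation, info = eval_env.reset()
        discount = 1.0
        for _ in range(eval_horizon):
            action = agent.policy(observation)
            observation, reward, terminated, truncated, info = eval_env.step(action)
            returns[sim] += discount * reward
            discount *= gamma
            if terminated or truncated:
                break
    # Mean discounted return over the simulations.
    return returns.mean()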

fit(budget: int, **kwargs)[source]

Train the agent using the provided environment.

Parameters:
budget: int

Number of Q updates.

**kwargs: Keyword Arguments

Extra arguments. Not used for this agent.
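Each of the budget updates applies the standard tabular Q-learning rule to one observed transition. The following is a minimal sketch of that rule (illustrative only, not the agent's actual training loop), assuming discrete states and actions:

import numpy as np

def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # Temporal-difference target: immediate reward plus the discounted value
    # of the best action in the next state.
    td_target = reward + gamma * np.max(Q[next_state])
    # Move the current estimate a fraction alpha towards the target.
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q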

get_params(deep=True)

Get parameters for this agent.

Parameters:
deep: bool, default=True

If True, will return the parameters for this agent and contained subobjects.

Returns:
params: dict

Parameter names mapped to their values.
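Assuming get_params() follows the scikit-learn convention of returning the constructor arguments (as the deep option suggests), a typical use with the agent from the Examples section looks like:

>>> params = agent.get_params()
>>> params["gamma"]
0.99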

classmethod load(filename, **kwargs)

Load agent object from filepath.

If overridden, the save() method must also be overridden.

Parameters:
filename: str

Path to the object (pickle) to load.

**kwargs: Keyword Arguments

Arguments required by the __init__ method of the Agent subclass to load.

property output_dir

Directory that the agent can use to store data.

policy(observation)[source]

The policy function takes an observation from the environment and returns an action. The specific implementation depends on the agent’s learning algorithm or strategy, which can be deterministic or stochastic.

Parameters:
observation: any

An observation from the environment.

Returns:
action: any

The action to be taken based on the provided observation.

Notes

The data type of ‘observation’ and ‘action’ can vary depending on the specific agent and the environment it interacts with.
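For this tabular agent the policy is presumably greedy with respect to the learned Q-table. A minimal sketch of such a greedy policy (illustrative, not the library's source):

import numpy as np

def greedy_policy(Q, observation):
    # Pick the action with the highest estimated Q-value for this state.
    return int(np.argmax(Q[observation]))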

reseed(seed_seq=None)

Get new random number generator for the agent.

Parameters:
seed_seq: numpy.random.SeedSequence, rlberry.seeding.seeder.Seeder or int, default: None

Seed sequence from which to spawn the random number generator. If None, generate a random seed. If int, use as entropy for SeedSequence. If a Seeder, use seeder.seed_seq.
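For example, reseeding from an integer:

>>> agent.reseed(123)  # the int is used as entropy for a new SeedSequence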

property rng

Random number generator.

classmethod sample_parameters(trial)

Sample hyperparameters for hyperparam optimization using Optuna (https://optuna.org/)

Note: only the kwargs sent to __init__ are optimized. Make sure to include in the Agent constructor all “optimizable” parameters.

Parameters:
trial: optuna.trial
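A subclass would typically override this classmethod to draw its tunable constructor arguments from the Optuna trial. The sketch below is a hypothetical override for this agent; the parameter ranges are illustrative assumptions:

from rlberry_scool.agents.tabular_rl import QLAgent

class TunableQLAgent(QLAgent):
    @classmethod
    def sample_parameters(cls, trial):
        # Suggest values for the "optimizable" kwargs passed to __init__.
        return dict(
            alpha=trial.suggest_float("alpha", 0.01, 1.0, log=True),
            gamma=trial.suggest_float("gamma", 0.9, 0.999),
            exploration_type=trial.suggest_categorical("exploration_type", ["epsilon", "boltzmann"]),
            exploration_rate=trial.suggest_float("exploration_rate", 0.01, 0.5),
        )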
save(filename)

Save agent object. By default, the agent is pickled.

If overridden, the load() method must also be overriden.

Before saving, consider setting writer to None if it can’t be pickled (tensorboard writers keep references to files and cannot be pickled).

Note: dill is used when pickle fails (see https://stackoverflow.com/a/25353243, for instance). Pickle is tried first, since it is faster.

Parameters:
filename: Path or str

File in which to save the Agent.

Returns:
pathlib.Path

If save() is successful, a Path object corresponding to the filename is returned. Otherwise, None is returned.

Warning

The returned filename might differ from the input filename: for instance, the method can append the correct suffix to the name before saving.
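A typical save/load round trip might look like the following; note that the keyword arguments required by load() depend on the subclass’s __init__, so passing env here is an assumption:

>>> saved_path = agent.save("ql_agent.pickle")
>>> loaded_agent = QLAgent.load(saved_path, env=env)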

set_writer(writer)

Set self._writer. If the writer is not None, add the agent’s parameter values to it.

property thread_shared_data

Data shared by agent instances among different threads.

property unique_id

Unique identifier for the agent instance. Can be used, for example, to create files/directories for the agent to log data safely.

property writer

Writer object to log the output (e.g. a tensorboard SummaryWriter).