rocket_league.action_parsers
Submodules
Classes
- LookupTableAction: World-famous discrete action parser which uses a lookup table to reduce the number of possible actions from 1944 to 90.
- RepeatAction: A simple wrapper to emulate tick skip.
Package Contents
- class rocket_league.action_parsers.LookupTableAction
Bases: rlgym.api.ActionParser[rlgym.api.AgentID, numpy.ndarray, numpy.ndarray, rlgym.rocket_league.api.GameState, Tuple[str, int]]
World-famous discrete action parser which uses a lookup table to reduce the number of possible actions from 1944 to 90.
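The 1944 figure is consistent with the full discretized controller layout: five analog axes and three binary buttons (assuming each analog axis is discretized to the three values -1, 0, and 1, which is an assumption about the original space, not something this page states):

```python
# Full discretized control space, assuming 3 values per analog axis.
axes = 3 ** 5      # throttle, steer, yaw, pitch, roll
buttons = 2 ** 3   # jump, boost, handbrake
print(axes * buttons)  # 1944 possible combinations before the lookup table
```

The lookup table keeps only 90 of these combinations, which is what makes the discrete action space tractable for learning.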
- _lookup_table
- get_action_space(agent: rlgym.api.AgentID) → Tuple[str, int]
Function that returns the action space type. It will be called during the initialization of the environment.
- Returns:
The type of the action space
- reset(agents: List[rlgym.api.AgentID], initial_state: rlgym.rocket_league.api.GameState, shared_info: Dict[str, Any]) → None
Function to be called each time the environment is reset.
- Parameters:
agents – List of AgentIDs for which this ActionParser will receive actions
initial_state – The initial state of the reset environment.
shared_info – A dictionary with shared information across all config objects.
- parse_actions(actions: Dict[rlgym.api.AgentID, numpy.ndarray], state: rlgym.rocket_league.api.GameState, shared_info: Dict[str, Any]) → Dict[rlgym.api.AgentID, numpy.ndarray]
Function that parses actions from the action space into a format that rlgym understands. The expected return value is a numpy float array of size (n, 8) where n is the number of agents. The second dimension is indexed as follows: throttle, steer, yaw, pitch, roll, jump, boost, handbrake. The first five values are expected to be in the range [-1, 1], while the last three values should be either 0 or 1.
- Parameters:
actions – A dict of actions, as passed to the env.step function.
state – The GameState object of the current state that was used to generate the actions.
shared_info – A dictionary with shared information across all config objects.
- Returns:
The parsed actions in the rlgym format.
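The (n, 8) return-value contract described above can be checked with a small helper. Note that `validate_controls` is purely illustrative and not part of rlgym:

```python
import numpy as np

def validate_controls(arr: np.ndarray) -> bool:
    """Check the (n, 8) control layout documented for parse_actions:
    throttle, steer, yaw, pitch, roll in [-1, 1]; jump, boost, handbrake in {0, 1}."""
    if arr.ndim != 2 or arr.shape[1] != 8:
        return False
    analog, binary = arr[:, :5], arr[:, 5:]
    return bool(np.all(np.abs(analog) <= 1) and np.all(np.isin(binary, (0, 1))))

# One agent driving at full throttle while boosting.
action = np.array([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]])
print(validate_controls(action))  # True
```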
- static make_lookup_table()
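To illustrate the idea behind a lookup table, here is a toy construction. This is NOT the real 90-entry table built by make_lookup_table; it enumerates ground-driving controls only, and its row layout simply follows the 8-element format documented for parse_actions:

```python
from itertools import product

import numpy as np

def make_toy_lookup_table() -> np.ndarray:
    """Toy lookup table (not rlgym's): every combination of discretized
    throttle, steer, and boost, with the other controls held at zero.
    Each row is (throttle, steer, yaw, pitch, roll, jump, boost, handbrake)."""
    rows = []
    for throttle, steer, boost in product((-1, 0, 1), (-1, 0, 1), (0, 1)):
        rows.append([throttle, steer, 0, 0, 0, 0, boost, 0])
    return np.array(rows, dtype=np.float32)

table = make_toy_lookup_table()
print(table.shape)  # (18, 8)
print(table[0])     # first entry: throttle=-1, steer=-1, boost=0
```

A discrete action is then just an index into the table, which is how a parser can collapse thousands of raw controller combinations into a small discrete action space.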
- class rocket_league.action_parsers.RepeatAction(parser: rlgym.api.ActionParser[rlgym.api.AgentID, rlgym.api.ActionType, numpy.ndarray, rlgym.api.StateType, rlgym.api.ActionSpaceType], repeats=8)
Bases: rlgym.api.ActionParser[rlgym.api.AgentID, rlgym.api.ActionType, numpy.ndarray, rlgym.api.StateType, rlgym.api.ActionSpaceType]
A simple wrapper to emulate tick skip.
Repeats every action for a specified number of ticks.
- parser
- repeats = 8
- get_action_space(agent: rlgym.api.AgentID) → rlgym.api.ActionSpaceType
Function that returns the action space type. It will be called during the initialization of the environment.
- Returns:
The type of the action space
- reset(agents: List[rlgym.api.AgentID], initial_state: rlgym.api.StateType, shared_info: Dict[str, Any]) → None
Function to be called each time the environment is reset.
- Parameters:
agents – List of AgentIDs for which this ActionParser will receive actions
initial_state – The initial state of the reset environment.
shared_info – A dictionary with shared information across all config objects.
- parse_actions(actions: Dict[rlgym.api.AgentID, rlgym.api.ActionType], state: rlgym.api.StateType, shared_info: Dict[str, Any]) → Dict[rlgym.api.AgentID, numpy.ndarray]
Function that parses actions from the action space into a format that rlgym understands. The expected return value is a numpy float array of size (n, 8) where n is the number of agents. The second dimension is indexed as follows: throttle, steer, yaw, pitch, roll, jump, boost, handbrake. The first five values are expected to be in the range [-1, 1], while the last three values should be either 0 or 1.
- Parameters:
actions – A dict of actions, as passed to the env.step function.
state – The GameState object of the current state that was used to generate the actions.
shared_info – A dictionary with shared information across all config objects.
- Returns:
The parsed actions in the rlgym format.
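The repeat semantics can be sketched with a stripped-down stand-in. This is an illustration of the tick-skip idea, not rlgym's RepeatAction implementation, and it assumes the repeated action is stacked along the first axis:

```python
import numpy as np

class ToyRepeatAction:
    """Simplified stand-in for RepeatAction: tiles each agent's single
    control row `repeats` times along the first axis to emulate tick skip."""

    def __init__(self, repeats: int = 8):
        self.repeats = repeats

    def parse_actions(self, actions: dict) -> dict:
        out = {}
        for agent, action in actions.items():
            row = np.asarray(action).reshape(1, -1)  # one (8,) control row
            out[agent] = np.repeat(row, self.repeats, axis=0)
        return out

parser = ToyRepeatAction(repeats=8)
parsed = parser.parse_actions({"blue-0": np.zeros(8)})
print(parsed["blue-0"].shape)  # (8, 8): the same controls held for 8 ticks
```

In practice the real RepeatAction wraps another parser (such as LookupTableAction) so that agents pick one action per decision step while the game advances several physics ticks.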