# A2C Agent playing PandaReachDense-v3
This is a trained model of an A2C agent playing PandaReachDense-v3 using the stable-baselines3 library and the panda-gym environment.
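For reference, here is a minimal sketch of how an A2C agent like this one can be trained on PandaReachDense-v3 with stable-baselines3. The policy class, timestep budget, and normalization settings below are illustrative assumptions, not the exact configuration used for this checkpoint:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments

from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Vectorize the environment and normalize observations and rewards
env = DummyVecEnv([lambda: gym.make("PandaReachDense-v3")])
env = VecNormalize(env, norm_obs=True, norm_reward=True)

# The observation space is a dict (goal-conditioned env), so MultiInputPolicy is required
model = A2C("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)  # illustrative budget, not the actual one

# Save both the policy weights and the normalization statistics
model.save("a2c-PandaReachDense-v3")
env.save("vec_normalize.pkl")
```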
## Usage (with huggingface_sb3)
To use this model, you need to install the following dependencies:

```bash
pip install stable-baselines3 huggingface_sb3 panda_gym shimmy
```
Then you can load and evaluate the model:
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
# Download and load the trained model from the Hub
repo_id = "LuckLin/a2c-PandaReachDense-v3"
filename = "a2c-PandaReachDense-v3.zip"
checkpoint = load_from_hub(repo_id, filename)
model = A2C.load(checkpoint)
# Load the normalization statistics
stats_path = load_from_hub(repo_id, "vec_normalize.pkl")
env = DummyVecEnv([lambda: gym.make("PandaReachDense-v3", render_mode="human")])  # render_mode is needed for env.render() below
env = VecNormalize.load(stats_path, env)
# At test time, we don't update the stats
env.training = False
env.norm_reward = False
# Evaluate
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, info = env.step(action)
    env.render()
```
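If you want a quantitative estimate rather than a visual rollout, stable-baselines3's `evaluate_policy` helper can compute the mean episodic reward. This is a minimal sketch reusing the normalized environment loaded above; the number of evaluation episodes is an arbitrary choice:

```python
from stable_baselines3.common.evaluation import evaluate_policy

# norm_reward was disabled above, so the reported value is the true environment reward
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```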
## Evaluation results

- mean_reward on PandaReachDense-v3 (self-reported): 0.00 +/- 0.00