Update readme.md
@@ -12,10 +12,10 @@
 | --------------- | ------------------------------------------------------------ | --------------------------------------------- | -------- |
 | DQN | Playing Atari with Deep Reinforcement Learning (**DQN**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Playing%20Atari%20with%20Deep%20Reinforcement%20Learning.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/PDF/Playing%20Atari%20with%20Deep%20Reinforcement%20Learning.pdf) | https://arxiv.org/abs/1312.5602 | |
 | | Deep Recurrent Q-Learning for Partially Observable MDPs [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Deep%20Recurrent%20Q-Learning%20for%20Partially%20Observable%20MDPs.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/PDF/Deep%20Recurrent%20Q-Learning%20for%20Partially%20Observable%20MDPs.pdf) | https://arxiv.org/abs/1507.06527 | |
-| | Dueling Network Architectures for Deep Reinforcement Learning (**Dueling DQN**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Dueling%20Network%20Architectures%20for%20Deep%20Reinforceme.md) | https://arxiv.org/abs/1511.06581 | |
-| | Deep Reinforcement Learning with Double Q-learning (**Double DQN**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Deep%20Reinforcement%20Learning%20with%20Double%20Q-learning.md) | https://arxiv.org/abs/1509.06461 | |
-| | Prioritized Experience Replay (**PER**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Prioritized%20Experience%20Replay.md) | https://arxiv.org/abs/1511.05952 | |
-| | Rainbow: Combining Improvements in Deep Reinforcement Learning (**Rainbow**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Rainbow_Combining%20Improvements%20in%20Deep%20Reinforcement%20Learning.md) | https://arxiv.org/abs/1710.02298 | |
+| | Dueling Network Architectures for Deep Reinforcement Learning (**Dueling DQN**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Dueling%20Network%20Architectures%20for%20Deep%20Reinforceme.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/PDF/Dueling%20Network%20Architectures%20for%20Deep%20Reinforceme.pdf)| https://arxiv.org/abs/1511.06581 | |
+| | Deep Reinforcement Learning with Double Q-learning (**Double DQN**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Deep%20Reinforcement%20Learning%20with%20Double%20Q-learning.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/PDF/Deep%20Reinforcement%20Learning%20with%20Double%20Q-learning.pdf)| https://arxiv.org/abs/1509.06461 | |
+| | Prioritized Experience Replay (**PER**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Prioritized%20Experience%20Replay.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/PDF/Prioritized%20Experience%20Replay.pdf)| https://arxiv.org/abs/1511.05952 | |
+| | Rainbow: Combining Improvements in Deep Reinforcement Learning (**Rainbow**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/Rainbow_Combining%20Improvements%20in%20Deep%20Reinforcement%20Learning.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/DQN/PDF/Rainbow_Combining%20Improvements%20in%20Deep%20Reinforcement%20Learning.pdf)| https://arxiv.org/abs/1710.02298 | |
 | Policy gradient | Asynchronous Methods for Deep Reinforcement Learning (**A3C**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/Policy_gradient/Asynchronous%20Methods%20for%20Deep%20Reinforcement%20Learning.md) | https://arxiv.org/abs/1602.01783 | |
 | | Trust Region Policy Optimization (**TRPO**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/Policy_gradient/Trust%20Region%20Policy%20Optimization.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/Policy_gradient/PDF/Trust%20Region%20Policy%20Optimization.pdf)| https://arxiv.org/abs/1502.05477 | |
 | | High-Dimensional Continuous Control Using Generalized Advantage Estimation (**GAE**) [[Markdown]](https://github.com/datawhalechina/easy-rl/blob/master/papers/Policy_gradient/High-Dimensional%20Continuous%20Control%20Using%20Generalized%20Advantage%20Estimation.md) [[PDF]](https://github.com/datawhalechina/easy-rl/blob/master/papers/Policy_gradient/PDF/High-Dimensional%20Continuous%20Control%20Using%20Generalised%20Advantage%20Estimation.pdf) | https://arxiv.org/abs/1506.02438 | |