update README

README.md (26 lines changed)
@@ -1,11 +1,14 @@

# Hung-yi Lee's Deep Reinforcement Learning Notes (LeeDeepRL-Notes)

Hung-yi Lee's *Deep Reinforcement Learning* is one of the classic Chinese-language video courses on reinforcement learning. His humorous teaching style makes the often obscure theory easy to follow, and he explains it through many entertaining examples; for instance, he frequently uses playing Atari games to illustrate reinforcement learning algorithms. In addition, to keep the course complete, we have compiled Bolei Zhou's *Intro to Reinforcement Learning*, Li Kejiao's *Baidu Reinforcement Learning*, and several other classic reinforcement learning materials as supplements. For anyone who wants to get started with reinforcement learning through Chinese-language lectures, this is highly recommended.

## How to Use

* Chapters 4 through 11 are based on [Hung-yi Lee's *Deep Reinforcement Learning*](http://speech.ee.ntu.edu.tw/~tlkagk/courses_MLDS18.html);
* Chapters 1 and 2 are compiled from [*Intro to Reinforcement Learning*](https://github.com/zhoubolei/introRL);
* Chapters 3 and 12 are compiled from [*Baidu Reinforcement Learning*](https://aistudio.baidu.com/aistudio/education/group/info/1335).

## Read the Notes Online (content updated continuously)

Address: https://datawhalechina.github.io/leedeeprl-notes/
@@ -28,23 +31,24 @@

| [Chapter 11: Imitation Learning](https://datawhalechina.github.io/leedeeprl-notes/#/chapter11/chapter11) | [Chapter 11 Exercises](https://datawhalechina.github.io/leedeeprl-notes/#/chapter11/chapter11_questions&keywords) | |
| [Chapter 12: Deep Deterministic Policy Gradient (DDPG)](https://datawhalechina.github.io/leedeeprl-notes/#/chapter12/chapter12) | [Chapter 12 Exercises](https://datawhalechina.github.io/leedeeprl-notes/#/chapter12/chapter12_questions&keywords) | [Project 3: Solving Pendulum-v0 with a Policy-Based Method](https://datawhalechina.github.io/leedeeprl-notes/#/chapter12/project3) |
| [Chapter 13: AlphaStar Paper Walkthrough](https://datawhalechina.github.io/leedeeprl-notes/#/chapter13/chapter13) |||

## Algorithm Implementations at a Glance

| Algorithm | Related Papers | Notes | Progress |
| :----------------------------------------------------------: | :---------------------------------------------------------: | :--------------------------------: | :--: |
| [On-Policy First-Visit MC](https://github.com/datawhalechina/leedeeprl-notes/tree/master/codes/MonteCarlo) | | Monte Carlo method | OK |
| [Q-Learning](https://github.com/datawhalechina/leedeeprl-notes/tree/master/codes/QLearning) | | | OK |
| [Sarsa](https://github.com/datawhalechina/leedeeprl-notes/tree/master/codes/Sarsa) | | | OK |
| [DQN](https://github.com/datawhalechina/leedeeprl-notes/tree/master/codes/DQN) | [DQN paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf) | | OK |
| DQN-cnn | [DQN paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf) | Uses a CNN instead of a fully connected network, unlike DQN | OK |
| [DoubleDQN](https://github.com/datawhalechina/leedeeprl-notes/tree/master/codes/DoubleDQN) | | | OK |
| Hierarchical DQN | [Hierarchical DQN](https://arxiv.org/abs/1604.06057) | | |
| [PolicyGradient](https://github.com/datawhalechina/leedeeprl-notes/tree/master/codes/PolicyGradient) | | | OK |
| [A2C](https://github.com/datawhalechina/leedeeprl-notes/tree/master/codes/A2C) | | | OK |
| DDPG | [DDPG paper](https://arxiv.org/abs/1509.02971) | | OK |
| TD3 | [TD3 paper (Twin Delayed DDPG)](https://arxiv.org/abs/1802.09477) | | |
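The tabular entries at the top of the table (Q-Learning, Sarsa) revolve around the same temporal-difference value update. As a rough illustration of what the Q-Learning entry covers, here is a minimal tabular sketch on a made-up 5-state chain environment; the environment, constants, and `step` helper are invented for this example and are not taken from the repo's `codes/QLearning`:

```python
import random

# Minimal tabular Q-Learning on a toy 5-state chain MDP (illustrative only).
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Move along the chain; reaching the rightmost state yields reward 1 and ends the episode."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
for _ in range(500):                        # episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection, breaking ties randomly
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            best = max(Q[state])
            action = random.choice([a for a in ACTIONS if Q[state][a] == best])
        next_state, reward, done = step(state, action)
        # TD update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# After training, the greedy policy should move right from every non-terminal state.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])
```

The same skeleton turns into Sarsa by replacing `max(Q[next_state])` in the target with the value of the action actually chosen in the next state.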
## Contributors

<table border="0">
<tbody>
<tr align="center" >

@@ -69,13 +73,12 @@
## Acknowledgements

Special thanks to [@Sm1les](https://github.com/Sm1les) and [@LSGOMYP](https://github.com/LSGOMYP) for their help and support with this project.

## Follow Us

<div align=center><img src="https://raw.githubusercontent.com/datawhalechina/pumpkin-book/master/res/qrcode.jpeg" width = "250" height = "270" alt="Datawhale is an open-source organization focused on AI. With the vision of 'for the learner, growing together with learners', it builds the most valuable open-source learning community for learners. Follow us and grow together."></div>

## LICENSE

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://img.shields.io/badge/license-CC%20BY--NC--SA%204.0-lightgrey" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.