Update 第四章 大语言模型.md
LLM, i.e. Large Language Model (in Chinese, 大语言模型 or 大型语言模型).

| Time | Open-source LLMs | Closed-source LLMs |
| -------- | ----- | -------- |
| 2022.11 | None | OpenAI-ChatGPT |
| 2023.02 | Meta-LLaMA; Fudan-MOSS | None |
| 2023.03 | Stanford-Alpaca, Vicuna; Zhipu-ChatGLM | OpenAI-GPT4; Baidu-文心一言; Anthropic-Claude; Google-Bard |
| 2023.04 | Alibaba-通义千问; Stability AI-StableLM | SenseTime-日日新 |
| 2023.07 | Meta-LLaMA2 | Anthropic-Claude2; Huawei-盘古大模型3 |
| 2023.08 | None | ByteDance-豆包 |
| 2023.09 | Baichuan-BaiChuan2 | Google-Gemini; Tencent-混元大模型 |
| 2023.11 | 01.AI-Yi; High-Flyer-DeepSeek | xAI-Grok |
Today, companies and research institutes in China and abroad keep releasing ever more capable LLMs, exploring the road toward AGI.
RM, i.e. Reward Model. The RM is trained to fit human preferences and assign reward scores to the LLM's outputs.
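How an RM "fits human preferences" is commonly formalized with the Bradley-Terry model (the same formulation the DPO paper below builds on): the RM assigns a scalar reward to each candidate response, and the probability that a human prefers one response over another is a sigmoid of the reward difference. A minimal illustrative sketch (not code from this chapter; the function name is hypothetical):

```python
import math

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model: probability that the 'chosen' response is
    preferred by a human, given the scalar rewards the RM assigns to
    each of the two responses."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

# Equal rewards mean the model is indifferent (probability 0.5);
# a higher reward for the chosen response pushes the probability above 0.5.
p_equal = preference_probability(1.0, 1.0)
p_better = preference_probability(2.0, 0.5)
```

During RM training, the negative log of this probability over a dataset of human-ranked response pairs serves as the loss, so the learned scalar reward comes to reflect human preference orderings.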
[6] Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn. (2024). *Direct Preference Optimization: Your Language Model is Secretly a Reward Model.* arXiv preprint arXiv:2305.18290.
[7] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen. (2025). *A Survey of Large Language Models.* arXiv preprint arXiv:2303.18223.