Paper title: Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Paper link: ...
1. Overview

During pretraining, large language models (LLMs) learn to capture the characteristics of their training data. Because that data contains both high-quality and low-quality material, the model can sometimes exhibit undesirable behavior, such as fabricating facts or generating biased content...
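The core result of the paper summarized here is a simple classification-style loss over preference pairs (the DPO objective, Eq. 7 in the paper). As a reference point for the overview, below is a minimal PyTorch sketch of that loss, assuming the summed per-response log-probabilities under the policy and the frozen reference model have already been computed; the tensor names and the `beta=0.1` default are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Sketch of the DPO loss for a batch of preference pairs.

    Each input is a 1-D tensor of log p(y | x) summed over the response tokens.
    beta controls how far the policy is allowed to drift from the reference model.
    """
    # Log-ratio of policy to reference for the preferred and dispreferred responses.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * (chosen - rejected)), averaged over the batch.
    losses = -F.logsigmoid(beta * (chosen_logratios - rejected_logratios))
    return losses.mean()
```

In words: the loss pushes the policy to assign a higher (reference-relative) likelihood to the preferred response than to the dispreferred one, with no explicit reward model or RL loop required.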