- Updated: 2023-08-24 13:56:53
- First published: 2023-08-23 23:21:29
- Artificial Intelligence
Objectives of Supervised Fine-Tuning:
- Enhance Specific Task Performance: align the model's behavior with particular tasks through instructions.
- Domain Adaptation: make the model work well in specialized areas.
- Improve Interpretability and Controllability: make the model easier to understand and to direct.
Overall, the goal is to improve robustness, that is, the model's ability to behave reliably across varied inputs.
Core Considerations:
- Diversity: To prevent overfitting, the training data must be diverse. Diversity improves not only generalization but also reasoning ability. It means covering many knowledge categories as well as many functional categories, and the data volume per category should be as balanced as possible; otherwise the model may become oversensitive to some categories and undersensitive to others. Diversity can also be increased through prompt-template construction or data augmentation, for example expanding a Chinese-to-English translation instruction into many phrasings (see the sketch after this list).
- Don't Mistake SFT for Data Supplementation: SFT is not simply about feeding the model more data; it may memorize some of it, but knowledge injection is not the main purpose.
- Integrate Few-Shot and CoT (Chain-of-Thought) Data: Including such data in training helps the model understand instructions and improves its multi-turn dialogue ability.
- Quality over Quantity: SFT does not require massive datasets; around 10,000 carefully labeled examples are typically enough for good results. Expanding volume without increasing diversity yields sharply diminishing returns, while improving data quality brings notable gains.
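To make the diversity point above concrete, here is a minimal sketch of prompt-template augmentation for a Chinese-to-English translation instruction. The template list and the `instruction`/`output` field names are illustrative assumptions, not a fixed format:

```python
import random

# Hypothetical prompt templates for the same translation task.
# Varying the surface form of the instruction improves diversity
# without collecting any new source data.
TEMPLATES = [
    "Translate the following Chinese sentence into English: {text}",
    "请将下面这句话翻译成英文:{text}",
    "What is the English translation of \"{text}\"?",
    "Rewrite this Chinese text in English: {text}",
]

def augment(sample: dict, n: int = 2) -> list[dict]:
    """Expand one (chinese, english) pair into n instruction-style records."""
    return [
        {
            "instruction": tpl.format(text=sample["chinese"]),
            "output": sample["english"],
        }
        for tpl in random.sample(TEMPLATES, n)
    ]

print(augment({"chinese": "你好,世界", "english": "Hello, world"}))
```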
Data Quality Requirements:
- Length Constraints: Neither the question nor the answer should be overly long or short; ideally a pair stays under 4k tokens (a filtering sketch follows this list).
- No Incorrect Answers: select only high-quality data.
- Special Industry Requirements: for domains that demand strong reasoning ability, gather as much CoT data as possible.
- Diverse NLP Abilities Required: including classification, structured output, creative writing, multi-turn dialogue, ancient Chinese translation, keyword recognition, reading comprehension, idiom explanation, text correction, sentiment analysis, entity recognition, programming, text matching, copywriting, song reviews, open questions, composition writing, storytelling, structured extraction, summarizing, closed questions, CoT, objective test questions, brainstorming, etc. Avoid using only vertical-domain data.
- Keep Vertical-Domain Data Proportions Low: domain knowledge is usually learned better through secondary pre-training (PT), and vertical-domain data may be left out of the SFT set entirely.
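A minimal sketch of the 4k-token length filter mentioned above, assuming a Hugging Face tokenizer; the model name, record fields, and thresholds are placeholders to adapt to your setup:

```python
from transformers import AutoTokenizer

# Placeholder model name; use the tokenizer of the model you plan to fine-tune.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

MAX_TOKENS = 4096  # the "no more than 4k tokens" rule of thumb above
MIN_TOKENS = 8     # also drop degenerate, near-empty samples

def within_length(sample: dict) -> bool:
    """Keep a sample only if question + answer fit the token budget."""
    n_tokens = len(tokenizer(sample["question"] + sample["answer"])["input_ids"])
    return MIN_TOKENS <= n_tokens <= MAX_TOKENS

raw_samples = [
    {"question": "What is SFT?", "answer": "Supervised fine-tuning of a model."},
]
clean_samples = [s for s in raw_samples if within_length(s)]
```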
Examples:
Good Dataset: Question: Xiao Ming's mother has three children. The first is named Yi Mao and the second Er Mao. What is the name of the third child? Answer: The question begins with "Xiao Ming's mother", so the mother already has a child named Xiao Ming; since the first two children are Yi Mao and Er Mao, the third child must be Xiao Ming.
Poor Dataset: Question: same as above. Answer: Xiao Ming. (The direct answer lacks a reasoning process; CoT-style answers are preferred.)
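As an illustration, the two records above might be stored as SFT samples like this; the field names are a common convention and an assumption here, not part of the original:

```python
# Hypothetical instruction-tuning records for the example above.
good_sample = {
    "question": "Xiao Ming's mother has three children. The first is named "
                "Yi Mao and the second Er Mao. What is the third child's name?",
    "answer": "The question begins with 'Xiao Ming's mother', so the mother "
              "already has a child named Xiao Ming. The first two children are "
              "Yi Mao and Er Mao, so the third child must be Xiao Ming.",
}

poor_sample = {
    "question": good_sample["question"],
    "answer": "Xiao Ming.",  # correct, but no reasoning chain to learn from
}
```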
Q & A
Why include coding ability in SFT? Teaching the model to write code trains it to break problems down and assemble solutions, which substantially improves reasoning and structured-output ability. Research supports this; similarly, improving translation ability has been observed to boost problem-solving and other seemingly unrelated skills.
Why do I not recommend doing SFT without PT?
If you are not doing secondary pre-training, most models today already provide a Chat version, so just use that directly. SFT has very high data-quality requirements; with low-quality data, fine-tuning a Base model can easily optimize in the wrong direction and make it worse. And the cost of raising data quality is not low either.
How do you judge the effect of SFT?
This is a very complex question. One approach is to decompose your questions by scenario and then let an AI assist in answering them, as in the figure below; you can then keep sending specific questions and have the AI analyze them step by step.
[Figure: example of decomposing evaluation questions by scenario with AI assistance]
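As a rough illustration of scenario-based decomposition, here is a hypothetical checklist of probe prompts, one per target ability, drawn from the NLP ability list above; the prompts themselves are examples only:

```python
# Hypothetical evaluation checklist: one probe prompt per target ability.
# The abilities mirror the NLP ability list above; the prompts are examples.
EVAL_SCENARIOS = {
    "classification": "Label the sentiment of: 'The service was slow but friendly.'",
    "summarization": "Summarize the following paragraph in one sentence: ...",
    "structured output": "Extract {name, date} as JSON from: 'Alice met Bob on May 3.'",
    "multi-turn dialogue": "Follow up on the previous answer with a clarifying question.",
    "CoT reasoning": "A farmer has 17 sheep; all but 9 run away. How many remain? Think step by step.",
}

for ability, prompt in EVAL_SCENARIOS.items():
    # Send each probe to the fine-tuned model and score responses per ability.
    print(f"[{ability}] {prompt}")
```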