Operationalising the Superficial Alignment Hypothesis via Task Complexity

Daily Info Dashboard · 2026-02-17

Category: Research/Paper
Source: arxiv_search
Score: 98
Published: 2026-02-17T18:59:39Z

AI Summary

The paper formalizes the superficial alignment hypothesis in terms of "task complexity" and shows empirically that pre-training already embeds high-performing capabilities, while post-training compresses the program length needed to invoke those capabilities from gigabytes down to kilobytes, a result that matters for understanding model adaptation cost and the division of labor between training stages.
#arXiv #paper #Research/Paper #Superficial Alignment Hypothesis

Content Excerpt

The superficial alignment hypothesis (SAH) posits that large language models learn most of their knowledge during pre-training, and that post-training merely surfaces this knowledge. The SAH, however, lacks a precise definition, which has led to (i) different and seemingly orthogonal arguments supporting it, and (ii) important critiques of it. We propose a new metric called task complexity: the length of the shortest program that achieves a target performance on a task. In this framework, the SAH simply claims that pre-trained models drastically reduce the complexity of achieving high performance on many tasks. Our definition unifies prior arguments supporting the SAH, interpreting them as different strategies to find such short programs. Experimentally, we estimate the task complexity of mathematical reasoning, machine translation, and instruction following; we then show that these complexities can be remarkably low when conditioned on a pre-trained model. Further, we find that pre-training enables access to strong performance on our tasks, but it can require programs gigabytes in length to access them. Post-training, on the other hand, collapses the complexity of reaching this same performance by several orders of magnitude. Overall, our results highlight that task adaptation requires surprisingly little information: often just a few kilobytes.
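
As a reading aid, here is a minimal formalization of the metric as the excerpt describes it; the symbols $\mathcal{C}$, $\pi$, $p$, $\mathrm{perf}$, $M_{\mathrm{pre}}$, and $M_{\mathrm{post}}$ are notation chosen here for illustration, not taken from the paper.

```latex
% Task complexity: the length (in bits or bytes) of the shortest
% program \pi whose performance on task T meets a target threshold p.
\[
  \mathcal{C}(T, p) \;=\; \min_{\pi}\,\bigl\{\, \lvert \pi \rvert
    \;:\; \mathrm{perf}(\pi, T) \ge p \,\bigr\}
\]
% Conditional variant: programs may invoke a fixed model M, so only
% the adaptation information on top of M is counted. On this reading,
% the abstract's gigabytes-to-kilobytes finding says that, at matched
% target performance p,
\[
  \mathcal{C}(T, p \mid M_{\mathrm{post}})
    \;\ll\; \mathcal{C}(T, p \mid M_{\mathrm{pre}})
    \;\ll\; \mathcal{C}(T, p)
\]
```

Under this sketch, the SAH becomes the claim that conditioning on a pre-trained model drastically shrinks $\mathcal{C}$, and the paper's empirical result is that post-training shrinks the conditional complexity by several further orders of magnitude.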