Emergent Abilities in Large Language Models
Active Methodological Friction Detected
THESIS A: PRO-SCALING
Emergent Abilities of Large Language Models
"Abilities that are not present in smaller models but are present in larger models." The paper argues for a distinct phase transition in capabilities.
Key Argument
"Performance remains near random until a certain scale threshold is reached, then improves dramatically."
FRICTION
THESIS B: SKEPTICISM
Are Emergent Abilities a Mirage?
Argues that apparent emergent abilities are artifacts of the researcher's choice of evaluation metric, not of fundamental changes in model behavior with scale.
Key Counter-Argument
"Smooth improvements appear as sharp jumps when using non-linear metrics."