Emergent Abilities of Large Language Models

From Wiki BackProp

An "Emergent Ability of Large Language Models" is a capability present in an LLM that is absent from a similar but smaller model. This also means that such a capability cannot be predicted (extrapolated) solely from the capabilities of a smaller model.

[1] We consider an ability to be emergent if it is not present in smaller models but is present in larger models.

[1] We will consider the following general definition of emergence, adapted from Steinhardt (2022) and rooted in a 1972 essay called “More Is Different” by Nobel prize-winning physicist Philip Anderson.


[2] Although scaling is mainly conducted in model size (with similar architectures and pre-training tasks), these large-sized PLMs display different behaviors from smaller PLMs (e.g., 330M-parameter BERT and 1.5B-parameter GPT-2) and show surprising abilities (called emergent abilities) in solving a series of complex tasks.

[2] For example, GPT-3 can solve few-shot tasks through in-context learning, whereas GPT-2 cannot do well.
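To make the notion of in-context learning mentioned in [2] concrete, the sketch below builds a few-shot prompt: solved demonstrations are prepended to a new query, and the model is expected to continue the pattern. The task, examples, and function name here are purely illustrative and do not come from the cited papers.

```python
# Minimal sketch of a few-shot (in-context learning) prompt.
# Demonstrations are concatenated before the query; no gradient
# update is performed -- the model infers the task from context.

def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) demonstrations, then the new query."""
    blocks = [f"Q: {inp}\nA: {out}" for inp, out in examples]
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

examples = [
    ("Translate 'chat' into English.", "cat"),
    ("Translate 'chien' into English.", "dog"),
]
prompt = build_few_shot_prompt(examples, "Translate 'oiseau' into English.")
print(prompt)
```

The resulting string would be sent as-is to a large model; the emergent-abilities observation is that large models such as GPT-3 complete the pattern correctly, while smaller models such as GPT-2 generally do not.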


References

  • [1] Emergent Abilities of Large Language Models
  • [2] A Survey of Large Language Models