In-context learning

From Wiki BackProp
Version of 27 April 2023 at 16:38

[1] The in-context learning (ICL) ability was formally introduced with GPT-3: given a natural language instruction and/or several task demonstrations, the language model can generate the expected output for test instances simply by completing the word sequence of the input text, without requiring any additional training or gradient updates.
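The mechanism described above amounts to prompt construction: the demonstrations are concatenated with the test instance, and the model continues the text from where the prompt stops. A minimal sketch of how such a few-shot ICL prompt can be assembled (the instruction and example reviews are invented for illustration, not taken from the cited survey):

```python
# Illustrative sketch of few-shot in-context learning (ICL) prompt construction.
# The instruction and demonstrations below are hypothetical examples.

def build_icl_prompt(instruction, demonstrations, test_input):
    """Assemble an ICL prompt: instruction, then demonstrations, then the test instance."""
    lines = [instruction, ""]
    for demo_input, demo_output in demonstrations:
        lines.append(f"Input: {demo_input}")
        lines.append(f"Output: {demo_output}")
        lines.append("")
    lines.append(f"Input: {test_input}")
    lines.append("Output:")  # the model completes the sequence from here; no weights change
    return "\n".join(lines)

demos = [
    ("I loved this movie!", "positive"),
    ("The plot was dull.", "negative"),
]
prompt = build_icl_prompt(
    "Classify the sentiment of each review.", demos, "An instant classic."
)
print(prompt)
```

Any pretrained language model can then be asked to complete this prompt; the demonstrations steer the completion toward the task format without any fine-tuning.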

[[File:A comparative illustration of in-context learning (ICL) and chain-of-thought (CoT) prompting.jpg|500px]]

== References ==

* [https://arxiv.org/pdf/2303.18223.pdf] A Survey of Large Language Models