In-context learning

From Wiki BackProp
[1] The in-context learning (ICL) ability was formally introduced with GPT-3: given a natural language instruction and/or several task demonstrations, the language model can generate the expected output for test instances simply by completing the input word sequence, without any additional training or gradient updates.
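The prompt format implied by this definition can be sketched in a few lines: demonstrations and a test instance are concatenated into one text sequence, and the model is expected to complete it. This is a minimal illustration; the task, labels, and helper name are assumptions for the example, not taken from the cited survey.

```python
def build_icl_prompt(instruction, demonstrations, test_input):
    """Assemble a few-shot ICL prompt: an instruction, (input, output)
    demonstration pairs, and a test instance left for the model to complete.
    No training or gradient update is involved -- only text completion."""
    lines = [instruction, ""]
    for x, y in demonstrations:
        lines.append(f"Input: {x}")
        lines.append(f"Output: {y}")
    # The test instance ends with an empty "Output:" slot for the model to fill.
    lines.append(f"Input: {test_input}")
    lines.append("Output:")
    return "\n".join(lines)

# Hypothetical sentiment-classification demonstrations.
demos = [
    ("The movie was wonderful.", "positive"),
    ("A dull, lifeless film.", "negative"),
]
prompt = build_icl_prompt(
    "Classify the sentiment of each review as positive or negative.",
    demos,
    "An absolute delight from start to finish.",
)
print(prompt)
```

Feeding this string to a sufficiently capable language model should yield "positive" as the completion; the point is that the task is specified entirely in the prompt.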


[[File:A comparative illustration of in-context learning (ICL) and chain-of-thought (CoT) prompting.jpg|500px]]


== References ==

* [https://arxiv.org/pdf/2303.18223.pdf A Survey of Large Language Models]

Version of 27 April 2023, 16:40


