In-context learning

[1] The in-context learning (ICL) ability was formally introduced by GPT-3: given a natural language instruction and/or several task demonstrations, the language model can generate the expected output for test instances simply by completing the word sequence of the input text, without any additional training or gradient updates.
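As a minimal sketch of the setup described above (the function name and example task are illustrative, not taken from the survey), few-shot ICL amounts to concatenating an instruction and a handful of demonstrations ahead of the test input, then letting the model complete the sequence:

```python
def build_icl_prompt(instruction, demonstrations, query):
    """Assemble a few-shot ICL prompt: instruction, demonstrations, test input.

    demonstrations is a list of (input, output) pairs; the model is expected
    to continue the text after the final 'Output:' marker, with no training
    or gradient updates involved.
    """
    parts = [instruction]
    for inp, out in demonstrations:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # The test instance ends with an open 'Output:' for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


# Hypothetical sentiment-classification task with two demonstrations.
demos = [("great movie!", "positive"), ("terribly boring", "negative")]
prompt = build_icl_prompt(
    "Classify the sentiment of each review.", demos, "a delightful surprise"
)
print(prompt)
```

The resulting string would then be sent as-is to a language model; the demonstrations condition its completion without modifying any weights.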


[[File:A comparative illustration of in-context learning (ICL) and chain-of-thought (CoT) prompting.jpg|500px]]


== References ==


* [https://arxiv.org/pdf/2303.18223.pdf A Survey of Large Language Models]

Current version as of 27 April 2023 at 16:40
