Zero-shot
Current version dated 4 July 2023 at 10:13
Zero-shot prompting is a method where the LLM is given no additional data on the specific task it is being asked to perform; it receives only a prompt that describes the task. For example, if you want the LLM to answer a question, you simply prompt "What is prompt design?". One-shot prompting is a method where the LLM is given a single example of the task it is being asked to perform. For example, if you want the LLM to write a poem, you might provide a single example poem. Few-shot prompting is a method where the LLM is given a small number of examples of the task it is being asked to perform. For example, if you want the LLM to write a news article, you might give it a few news articles to read. You can use structured mode to design a few-shot prompt by providing a context and additional examples for the model to learn from. A structured prompt contains a few different components:
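The three prompting styles above can be sketched as plain prompt strings. This is a minimal illustration, not any specific LLM SDK: the `build_*` helper names and example texts are assumptions, and in practice the resulting string would be sent to the model as its input.

```python
# Illustrative sketch of zero-, one-, and few-shot prompts as plain strings.
# Helper names and example content are hypothetical, not from a real API.

def build_zero_shot() -> str:
    # Zero-shot: only the task description, no examples.
    return "What is prompt design?"

def build_one_shot(example_poem: str) -> str:
    # One-shot: a single worked example precedes the request.
    return (
        "Here is an example poem:\n"
        f"{example_poem}\n\n"
        "Now write a new poem in a similar style."
    )

def build_few_shot(articles: list[str]) -> str:
    # Few-shot: several examples, then the actual task.
    examples = "\n\n".join(
        f"Example article {i + 1}:\n{a}" for i, a in enumerate(articles)
    )
    return f"{examples}\n\nNow write a news article about local weather."
```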
First we have the context, which instructs how the model should respond. You can specify words the model can or cannot use, topics to focus on or avoid, or a particular response format. The context applies each time you send a request to the model.

Let's say we want to use an LLM to answer questions based on some background text; in this case, a passage that describes changes in rainforest vegetation in the Amazon. We can paste the background text in as the context. Then we add some examples of questions that could be answered from this passage, like "What does LGM stand for?" or "What did the analysis of the sediment deposits indicate?". We also need to add the corresponding answers to these questions, to demonstrate how we want the model to respond. Then we can test the prompt we have designed by sending a new question as input. And there you go: you have prototyped a Q&A system based on background text in just a few minutes!

Please note a few best practices around prompt design:
- Be concise.
- Be specific and well-defined.
- Ask one task at a time.
- Turn generative tasks into classification tasks. For example, instead of asking what programming language to learn, ask whether Python, Java, or C is a better fit for a beginner in programming.
- Improve response quality by including examples.

Adding instructions and a few examples tends to yield good results; however, there is currently no single best way to write a prompt. You may need to experiment with different structures, formats, and examples to see what works best for your use case.
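The structured Q&A prompt described above (context, example question/answer pairs, then a new question) can be sketched as a string-assembly helper. The function name, layout, and the sample answers below are illustrative assumptions rather than the course's actual interface; "Last Glacial Maximum" is the standard expansion of LGM in this kind of paleoclimate passage.

```python
# Illustrative sketch of a structured few-shot Q&A prompt: context
# (background text) + example Q/A pairs + the new input question.
# The layout and names are assumptions, not a specific product's API.

def build_qa_prompt(context: str,
                    examples: list[tuple[str, str]],
                    question: str) -> str:
    parts = [f"Context:\n{context}", ""]
    for q, a in examples:  # demonstrate the desired response style
        parts.append(f"Q: {q}")
        parts.append(f"A: {a}")
    parts.append(f"Q: {question}")
    parts.append("A:")  # the model completes the answer
    return "\n".join(parts)

prompt = build_qa_prompt(
    context="<passage about rainforest vegetation changes in the Amazon>",
    examples=[
        ("What does LGM stand for?", "Last Glacial Maximum."),
        ("What did the analysis of the sediment deposits indicate?",
         "<answer drawn from the passage>"),
    ],
    question="How did the vegetation change over time?",
)
```

Because the context is included in every request, each new question reuses the same background text and example answers without restating them by hand.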
- Zero-shot prompting: provides a single command with no examples.
- One-shot prompting: provides one example of the task.
- Few-shot prompting: provides a few examples of the task, often with a description of the context.
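The best practice of turning a generative task into a classification task can also be sketched directly. The wording of both prompts below is an illustrative assumption; the point is that the second prompt constrains the model to a fixed set of options instead of an open-ended answer.

```python
# Illustrative sketch: reframing an open-ended (generative) prompt as a
# classification prompt with a fixed option set. Wording is hypothetical.

generative_prompt = "What programming language should I learn?"

options = ["Python", "Java", "C"]
classification_prompt = (
    "Which of the following is a better fit for a beginner in programming? "
    "Answer with exactly one of: " + ", ".join(options) + "."
)
```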
https://www.cloudskillsboost.google/course_sessions/3264154/video/383122