MediaWiki API result

This is the HTML representation of the JSON format. HTML is good for debugging, but is unsuitable for application use.

Specify the format parameter to change the output format. To see the non-HTML representation of the JSON format, set format=json.

See the complete documentation, or the API help for more information.
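
For reference, a minimal Python sketch of the kind of request that could produce the output below, assuming the wiki exposes its API at https://example.org/w/api.php (hypothetical; the real endpoint URL is not part of this dump) and using the requests library:

    import requests

    # Hypothetical endpoint: the actual wiki URL does not appear in this dump.
    API_URL = "https://example.org/w/api.php"

    params = {
        "action": "query",
        "list": "logevents",
        "lelimit": 10,      # assumption: ten events per batch, matching this dump
        "format": "json",   # the non-HTML representation described above
    }

    data = requests.get(API_URL, params=params).json()
    for event in data["query"]["logevents"]:
        print(event["timestamp"], event["type"], event["title"])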

{
    "batchcomplete": "",
    "continue": {
        "lecontinue": "20230502092502|23",
        "continue": "-||"
    },
    "query": {
        "logevents": [
            {
                "logid": 33,
                "ns": 0,
                "title": "AI Agent",
                "pageid": 29,
                "logpage": 29,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-09-19T09:33:16Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fAI agents are artificial entities that sense their environment, make decisions, and take actions.   The Rise and Potential of Large Language Model Based Agents: A Survey : https://arxiv.org/pdf/2309.07864.pdf\u202f\u00bb"
            },
            {
                "logid": 32,
                "ns": 0,
                "title": "SFT",
                "pageid": 28,
                "logpage": 28,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-08-01T12:00:50Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fSupervised Fine-Tuning (SFT): Models are trained on a dataset of instructions and responses. It adjusts the weights in the LLM to minimize the difference between the generated answers and ground-truth responses, acting as labels.  == R\u00e9f\u00e9rences ==  * [https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32] Fine-Tune Your Own Llama 2 Model in a Colab Notebook\u202f\u00bb"
            },
            {
                "logid": 31,
                "ns": 0,
                "title": "PEFT",
                "pageid": 27,
                "logpage": 27,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-07-27T09:34:26Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fThus, we, as a com- munity of researchers and engineers, need efficient ways to train on downstream task data.  Parameter-efficient fine-tuning, which we denote as PEFT, aims to resolve this problem by only training a small set of parameters which might be a subset of the existing model parameters or a set of newly added parameters.   https://arxiv.org/pdf/2303.15647.pdf\u202f\u00bb"
            },
            {
                "logid": 30,
                "ns": 0,
                "title": "Agent",
                "pageid": 26,
                "logpage": 26,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-07-05T21:18:48Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fIn LangChain, agents are high-level components that use language models (LLMs) to determine which actions to take and in what order. An action can either be using a tool and observing its output or returning it to the user. Tools are functions that perform specific duties, such as Google Search, database lookups, or Python REPL. Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until...\u202f\u00bb"
            },
            {
                "logid": 29,
                "ns": 0,
                "title": "Prompt design",
                "pageid": 25,
                "logpage": 25,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-07-04T09:20:33Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fPlease note a few best practices around prompt design. Be concise Be specific and well-defined Ask one task at a time Turn generative tasks into classification tasks. For example, instead of asking what programming language to learn, ask if Python, Java, or C is a better fit for a beginner in programming. and Improve response quality by including examples. Adding instructions and a few examples tends to yield good results however there\u2019s currently no one best w...\u202f\u00bb"
            },
            {
                "logid": 28,
                "ns": 0,
                "title": "Zero-shot",
                "pageid": 24,
                "logpage": 24,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-07-04T08:58:15Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fZero-shot prompting - is a method where the LLM is given no additional data on the specifictask that it is being asked to perform. Instead, it is only given a prompt that describes the task. For example, if you want the LLM to answer a question, you just prompt \"what is prompt design?\". One-shot prompting - is a method where the LLM is given a single example of the task that it is being asked to perform. For example, if you want the LLM to write a poem, you might...\u202f\u00bb"
            },
            {
                "logid": 27,
                "ns": 0,
                "title": "Top P",
                "pageid": 23,
                "logpage": 23,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-06-22T08:30:29Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fFirst, there are different models you can choose from. Each model is tuned to perform well on specific tasks. You can also specify the temperature, top P, and top K. These parameters all adjust the randomness of responses by controlling how the output tokens are selected. When you send a prompt to the model, it produces an array of probabilities over the words that could come next. And from this array, we need some strategy to decide what to return. A simple stra...\u202f\u00bb"
            },
            {
                "logid": 26,
                "ns": 0,
                "title": "Top K",
                "pageid": 22,
                "logpage": 22,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-06-22T08:28:45Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fFirst, there are different models you can choose from. Each model is tuned to perform well on specific tasks. You can also specify the temperature, top P, and top K. These parameters all adjust the randomness of responses by controlling how the output tokens are selected. When you send a prompt to the model, it produces an array of probabilities over the words that could come next. And from this array, we need some strategy to decide what to return. A simple stra...\u202f\u00bb"
            },
            {
                "logid": 25,
                "ns": 0,
                "title": "Temperature",
                "pageid": 21,
                "logpage": 21,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-06-22T08:24:10Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202fFirst, there are different models you can choose from. Each model is tuned to perform well on specific tasks. You can also specify the temperature, top P, and top K. These parameters all adjust the randomness of responses by controlling how the output tokens are selected. When you send a prompt to the model, it produces an array of probabilities over the words that could come next. And from this array, we need some strategy to decide what to return. A simple stra...\u202f\u00bb"
            },
            {
                "logid": 24,
                "ns": 0,
                "title": "LangChain",
                "pageid": 20,
                "logpage": 20,
                "params": {},
                "type": "create",
                "action": "create",
                "user": "Jboscher",
                "timestamp": "2023-05-11T03:30:09Z",
                "comment": "Page cr\u00e9\u00e9e avec \u00ab\u202f  == R\u00e9f\u00e9rences ==  * [https://python.langchain.com/en/latest/index.html] Documentation LangChain * [https://www.youtube.com/watch?v=2xxziIWmaSA&list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5&index=4] Cours LangChain\u202f\u00bb"
            }
        ]
    }
}
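
The continue block near the top of this response signals that more log events exist beyond this batch. A sketch of the standard MediaWiki continuation loop, under the same hypothetical endpoint as above: merge each response's continue object into the next request's parameters and repeat until no continue key is returned.

    import requests

    API_URL = "https://example.org/w/api.php"  # hypothetical endpoint, as above
    params = {"action": "query", "list": "logevents", "format": "json"}

    while True:
        data = requests.get(API_URL, params=params).json()
        for event in data["query"]["logevents"]:
            print(event["logid"], event["title"])
        if "continue" not in data:
            break  # no continue key: the listing is complete
        # Feed lecontinue and continue back verbatim to request the next batch.
        params.update(data["continue"])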