The prompt(s) to generate completions for, encoded as a string, an array of strings, an array of tokens, or an array of token arrays. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
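For illustration, the four accepted encodings of the prompt field look like this (the numeric token IDs below are placeholders, not a real tokenization):

"prompt": "Say this is a test"
"prompt": ["Say this is a test", "Say it is another test"]
"prompt": [2648, 428, 318, 257, 1332]
"prompt": [[2648, 428, 318, 257, 1332], [2648, 340, 318, 1194, 1332]]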
Defaults to null. Modifies the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use the tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. For example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.
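Embedded in a request body, that bias looks like this (a sketch based on the example request below, banning <|endoftext|> via its token ID 50256):

{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "logit_bias": {"50256": -100}
}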
{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
}
curl --location -g --request POST '{{BASE_URL}}/v1/completions' \
--header 'Authorization: Bearer {{YOUR_API_KEY}}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
}'
{
    "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
    "object": "text_completion",
    "created": 1589478378,
    "model": "gpt-3.5-turbo-instruct",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [
        {
            "text": "\n\nThis is indeed a test",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
        }
    ],
    "usage": {
        "prompt_tokens": 5,
        "completion_tokens": 7,
        "total_tokens": 12
    }
}