Overview#
The Aitre API can be applied to almost any task that involves understanding or generating natural language and code. It can also be used to generate and edit images or convert speech to text. We offer a range of models with different capabilities and pricing.

In the Aitre API, protecting user data is the foundation of our mission. We do not use the inputs and outputs of our API to train our models.

Key Concepts#
GPT#
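To make prompt design concrete, here is a minimal sketch in Python that assembles a few-shot prompt (task instructions followed by worked examples) for a hypothetical sentiment-classification task. The labels and example reviews are invented for illustration, and no API call is made:

```python
def build_prompt(review: str) -> str:
    """Build a few-shot prompt: instructions plus worked examples.

    The task and examples below are hypothetical; a real prompt would
    be sent to a GPT model as its text input.
    """
    return (
        "Classify the sentiment of the review as Positive or Negative.\n\n"
        "Review: The battery lasts all day. Sentiment: Positive\n"
        "Review: It broke after a week. Sentiment: Negative\n"
        f"Review: {review} Sentiment:"
    )

# The model would be expected to continue this text with a label.
print(build_prompt("Setup was quick and painless."))
```

The instructions establish the task, the worked examples show the expected output format, and the trailing "Sentiment:" invites the model to complete the pattern.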
OpenAI's GPT (Generative Pre-trained Transformer) models are trained to understand natural language and code. GPT models produce text output in response to text input. The input to a GPT model is also known as a "prompt." Designing prompts is essentially how you "program" a GPT model, usually by providing instructions or examples of how to successfully complete a task. GPT can be used for a variety of tasks, including content or code generation, summarization, conversation, creative writing, and more. Read our Introduction to GPT Guide and GPT Best Practices Guide to learn more.

Embeddings#
Embeddings are vector representations of a piece of data (such as some text) designed to preserve various aspects of its content and/or meaning. Data chunks that are similar in some respect tend to have embeddings that are closer together than those of unrelated data. OpenAI provides a text embedding model that takes a text string as input and produces an embedding vector as output. Embeddings are very useful for search, clustering, recommendation, anomaly detection, classification, and more. Read more about embeddings in our Embedding Guide.

Tokens#
GPT and embedding models process text in chunks called tokens. Tokens represent commonly occurring sequences of characters. For example, the string "tokenization" is broken down into "token" and "ization," while a short and common word like "the" is represented as a single token. Note that in a sentence, the first token of each word typically starts with a space character. Check out our Tokens Calculator to test specific strings and see how they convert into tokens. As a rough rule of thumb, 1 token is approximately 4 characters or 0.75 words of English text.

One limitation to keep in mind is that, for GPT models, the prompt and the generated output combined must not exceed the model's maximum context length. For embedding models (which do not output tokens), the input must be shorter than the model's maximum context length. The maximum context length for each GPT and embedding model can be found in the model index.

Modified at 2025-03-30 04:49:02