OpenAI's ChatCompletion interface supports all OpenAI Chat models, including the latest 1106 batch. For details on the parameters of this interface, see the official OpenAI documentation at https://platform.openai.com/docs/api-reference/chat/create. You can also use this interface to call the following non-OpenAI models, in the same format as the OpenAI ChatCompletion API:
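As a quick reference, here is a minimal sketch of the request body this interface accepts, built as a plain Python dict (the model name and parameter values are illustrative; any OpenAI-compatible model listed below can be substituted):

```python
import json

# Minimal Chat Completions request body, following the OpenAI API reference.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
    "stream": False,  # set True to receive incremental SSE chunks instead
}

# This body is sent as a POST to /v1/chat/completions with an
# "Authorization: Bearer <key>" header; the JSON on the wire looks like:
body = json.dumps(payload)
```

All parameters documented at the URL above (e.g. `max_tokens`, `top_p`, `n`) slot into the same dict.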
0. Because Google VertexAI's native API is cumbersome, we do not expose it directly; we only offer the standard OpenAI Chat/Completions API.
1. Although the Gemini Pro series does not support non-streaming requests natively, we have wrapped both streaming and non-streaming responses to match the OpenAI standard, so GPT-native applications can integrate simply by changing the model name.
2. Google AI's Gemini Pro and PaLM series models are currently in a preview stage with a low concurrency quota, but our site provides inference for these models free of charge, so you can evaluate their capabilities or integrate them into your application in advance.
3. Gemini Pro Vision is a multimodal large model similar to GPT-4 Vision: it supports mixed inference over text and input images, and its usage is consistent with GPT-4V.
4. Because Google AI models are billed by character count, the token figures in your bill reflect the length of the input and output strings, not the actual token count.
5. On capability: the Gemini Pro series is roughly at GPT-3.5 level, but the Vision version supports multimodality with generally satisfactory results and can serve as a smaller-scale alternative to GPT-4V.
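The points above can be sketched concretely: switching a GPT-native application to Gemini is a model-name change, and the multimodal call reuses the GPT-4V content-array shape. The model identifiers and image URL below are illustrative assumptions, not guaranteed names:

```python
# Text-only request: identical to an OpenAI call except for the model name.
# The wrapper accepts both streaming and non-streaming, per point 1 above.
text_request = {
    "model": "gemini-pro",  # assumed model identifier on this service
    "messages": [{"role": "user", "content": "Summarize this paragraph."}],
    "stream": True,  # non-streaming (False) also works through the wrapper
}

# Multimodal request: mixed text + image input in the same content-array
# format as GPT-4V (point 3 above).
vision_request = {
    "model": "gemini-pro-vision",  # assumed model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
}
```

Note that under the character-based billing described in point 4, the reported token counts for either request track string length rather than tokenizer output.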