# General API Description

## Feature Introduction
This API is used to invoke large models on the ModelVerse platform to achieve intelligent conversation functionality.
## Model List
| Model ID | Model Version |
|---|---|
| deepseek-ai/DeepSeek-R1 | DeepSeek-R1 |
| deepseek-ai/DeepSeek-V3-0324 | DeepSeek-V3-0324 |
| deepseek-ai/DeepSeek-Prover-V2-671B | DeepSeek-Prover-V2-671B |
| Qwen/QwQ-32B | QwQ-32B |
| Qwen/Qwen3-235B-A22B | Qwen3-235B-A22B |
## Step 1: Obtain API Key
How to obtain the api_key value: in the ModelVerse console, go to “Experience Center” → “API Key Management” to create one.
## Step 2: Chat API Invocation

### Request

#### Request Header Fields
| Name | Type | Required | Description |
|---|---|---|---|
| Content-Type | string | Yes | Fixed value application/json |
| Authorization | string | Yes | Bearer authentication: pass the API Key obtained in Step 1, in the format Bearer <your API Key> |
#### Request Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID |
| messages | List[message] | Yes | Chat context information. Description: (1) messages must not be empty; one member is a single-turn conversation, multiple members form a multi-turn conversation. For example: · single member, "messages": [{"role": "user", "content": "Hello"}] · three members, "messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "How can I help?"}, {"role": "user", "content": "Introduce yourself"}] (2) The last message carries the current request; the preceding messages are the conversation history. (3) Role rules for messages: ① the role of the first message must be user or system ② the role of the last message must be user or tool ③ if the function call feature is not used and the first message's role is user, roles must alternate user -> assistant -> user -> …, i.e. odd-numbered messages (1st, 3rd, 5th, …) have role user and even-numbered messages (2nd, 4th, …) have role assistant |
| stream | bool | No | Whether to return data as a stream. Description: (1) Models using beam search only support false (2) Default is false |
| stream_options | stream_options | No | Options for streaming responses. Description: when include_usage=true, the last chunk of the stream carries a usage field with token statistics for the entire request; when false (the default), streaming responses do not include usage |
| temperature | float | No | Description: (1) Higher values make the output more random, while lower values make it more focused and deterministic |
| top_p | float | No | Description: (1) Affects the diversity of output text; the higher the value, the stronger the diversity of generated text (2) Default is 0.7, range [0, 1.0] |
| penalty_score | float | No | Reduces the phenomenon of repetitive generation by penalizing already generated tokens. Description: (1) The larger the value, the greater the penalty (2) Default is 1.0, range: [1.0, 2.0] |
| max_completion_tokens | int | No | Specifies the maximum number of output tokens for the model. Description: (1) For the value of this parameter, please refer to the [Supported Model List - max_completion_tokens Value Range] section of this document. |
| seed | int | No | Description: (1) Range: (0, 2147483647); if not specified, the model generates a random seed (default is empty) (2) If specified, the system makes a best effort at deterministic sampling, so that repeated requests with the same seed and parameters return the same result |
| stop | List | No | Generation stop markers, stops text generation when the model’s output ends with any element in stop. Description: (1) Each element must not exceed 20 characters (2) Up to 4 elements |
| user | string | No | Represents a unique identifier for the end user |
| frequency_penalty | float | No | Description: (1) Positive values penalize new tokens based on their existing frequency in the text so far, reducing the model’s likelihood of repeating the same line word by word (2) Range: [-2.0, 2.0] |
| presence_penalty | float | No | Description: (1) Positive values penalize tokens based on whether they are already present in the text, increasing the model’s likelihood of talking about new topics (2) Range: [-2.0, 2.0] |
| web_search | [web_search] | No | Search-enhancement options. Description: (1) Defaults to not sent, i.e. search is disabled |
| metadata | map<string,string> | No | Description: (1) Maximum of 16 elements supported (2) Key and value must be string type |
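The role-ordering rules in (3) of the messages description can be checked client-side before sending a request. The sketch below uses a hypothetical helper name (`validate_messages` is not part of the API) and assumes the function call feature is not used:

```python
def validate_messages(messages):
    """Validate the documented role rules for a messages list.

    Assumes the function call feature is NOT used, so roles must
    alternate user -> assistant -> user -> ... after an optional
    leading system message.
    """
    if not messages:
        raise ValueError("messages must not be empty")
    roles = [m["role"] for m in messages]
    # Rule ①: first message must be user or system.
    if roles[0] not in ("user", "system"):
        raise ValueError("first message role must be user or system")
    # Rule ②: last message must be user (or tool, when tools are used).
    if roles[-1] not in ("user", "tool"):
        raise ValueError("last message role must be user or tool")
    # Rule ③: strict alternation, skipping a leading system message.
    body = roles[1:] if roles[0] == "system" else roles
    for i, role in enumerate(body):
        expected = "user" if i % 2 == 0 else "assistant"
        if role != expected:
            raise ValueError(f"message {i + 1}: expected role {expected!r}, got {role!r}")
    return True
```

Both examples from the table above (single member and three members) pass this check; two consecutive user messages do not.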
#### Request Example

```shell
curl --location 'https://deepseek.modelverse.cn/v1/chat/completions' \
--header 'Authorization: Bearer <your API Key>' \
--header 'Content-Type: application/json' \
--data '{
    "stream": true,
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [
        {
            "role": "user",
            "content": "say hello"
        }
    ]
}'
```
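The same request can be issued from Python. A minimal stdlib-only sketch; the endpoint URL and header format are taken from the curl example above, and `build_request`/`chat` are illustrative names, not part of the API:

```python
import json
import urllib.request

API_URL = "https://deepseek.modelverse.cn/v1/chat/completions"

def build_request(api_key, model, messages, stream=False):
    """Assemble the headers and JSON body shown in the curl example."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    payload = {"model": model, "messages": messages, "stream": stream}
    return headers, payload

def chat(api_key, model, messages):
    """Send a non-streaming chat request and return the decoded JSON response."""
    headers, payload = build_request(api_key, model, messages)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Usage: `chat("<your API Key>", "deepseek-ai/DeepSeek-R1", [{"role": "user", "content": "say hello"}])`.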
### Response

#### Response Parameters
| Name | Type | Description |
|---|---|---|
| id | string | The unique identifier for this request, useful for troubleshooting |
| object | string | The object type; for chat conversation responses this is chat.completion |
| created | int | Unix timestamp (in seconds) of when the response was created |
| model | string | Description: (1) If it’s a pre-existing service, returns the model ID (2) If it’s a service deployed after SFT, this field returns model:modelversionID, model is the same as the request parameter and is the large model ID used in this request; modelversionID is for traceability |
| choices | choices/sse_choices | Generated results: choices is returned when stream=false; sse_choices is returned when stream=true |
| usage | usage | Token usage statistics. Description: (1) Returned by default for synchronous requests (2) For streaming requests, returned when stream_options.include_usage=true; the actual statistics are carried in the last chunk, while other chunks return null |
| search_results | search_results | Search results |
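A short sketch of reading the fields above out of a non-streaming (stream=false) response body. `summarize_response` is an illustrative helper, and it assumes the response has already been decoded from JSON into a dict:

```python
def summarize_response(response):
    """Extract the commonly used fields from a stream=false response dict."""
    choice = response["choices"][0]
    message = choice["message"]
    return {
        "id": response["id"],
        "content": message.get("content"),
        # reasoning_content appears for reasoning models such as DeepSeek-R1
        "reasoning": message.get("reasoning_content"),
        "finish_reason": choice.get("finish_reason"),
        "total_tokens": (response.get("usage") or {}).get("total_tokens"),
    }
```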
#### Response Example

```json
{
    "id": " ",
    "object": "chat.completion",
    "created": 0,
    "model": "deepseek-ai/DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "\n\nHello, XXXCloud! 👋 If there's anything specific you'd like to know or discuss about XXXCloud's services (like cloud computing, storage, AI solutions, etc.), feel free to ask! 😊",
                "reasoning_content": "\nOkay, the user wants to say hello to XXXCloud. Let me start by greeting XXXCloud directly.\n\nHmm, should I mention what XXXCloud is? Maybe a brief intro would help, like it's a cloud service provider.\n\nThen, I can ask if there's anything specific the user needs help with regarding XXXCloud services.\n\nKeeping it friendly and open-ended makes sense for a helpful response.\n"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 8,
        "completion_tokens": 129,
        "total_tokens": 137,
        "prompt_tokens_details": null,
        "completion_tokens_details": null
    },
    "system_fingerprint": ""
}
```
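When stream=true, OpenAI-compatible endpoints typically deliver chunks as server-sent events: lines prefixed with `data: `, terminated by `data: [DONE]`, each chunk carrying incremental `delta` content. That chunk shape is an assumption here, not stated above. A sketch of collecting the streamed text and the final usage chunk:

```python
import json

def parse_sse_stream(lines):
    """Accumulate delta content from SSE lines of a stream=true response.

    Assumes OpenAI-compatible chunk layout: choices[].delta.content for
    text, and a usage field in the last chunk when
    stream_options.include_usage=true.
    """
    pieces = []
    usage = None
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank separators
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        if chunk.get("usage"):
            usage = chunk["usage"]
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                pieces.append(delta["content"])
    return "".join(pieces), usage
```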