API Description
Function Introduction
This API is used to call large models on the ModelVerse platform to implement intelligent conversation functions.
Supported Model List
1. Parameter Description
Parameter Name | Description | Example Value |
---|---|---|
URL | The API request URL used to specify the endpoint to be called. | https://api.modelverse.cn/v1/chat/completions https://api.modelverse.cn/v1 |
Model ID | Specifies the model name to be called, determining the specific functionality of the API call. | deepseek-ai/DeepSeek-R1-0528 deepseek-ai/DeepSeek-R1 deepseek-ai/DeepSeek-V3-0324 deepseek-ai/DeepSeek-Prover-V2-671B Qwen/QwQ-32B Qwen/Qwen3-235B-A22B |
API Key | Authentication key used to verify user identity, ensuring only authorized users can access the API service. | Get Key Here |
2. Domain URL Description
- Note: Different clients may require different URL links based on their functional needs. Please strictly follow the client’s instructions when filling in information.
Client Type | URL Link | Description |
---|---|---|
General API Calls | https://deepseek.modelverse.cn/v1 | Basic API endpoint suitable for general functionality calls. May require additional parameters depending on specific features. |
Chat Function Calls | https://deepseek.modelverse.cn/v1/chat/completions | Specialized API endpoint for chat functionality, optimized for conversation generation tasks. Parameters and responses are more focused on chat scenarios. |
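The mapping above can be kept in code when a client selects the URL programmatically. A minimal sketch; the `ENDPOINTS` dict and `endpoint_for` helper are illustrative, not part of the API:

```python
# Illustrative lookup for the two endpoint types described above.
ENDPOINTS = {
    # General API calls: base endpoint; most clients append the route themselves.
    "general": "https://deepseek.modelverse.cn/v1",
    # Chat function calls: full endpoint for conversation generation.
    "chat": "https://deepseek.modelverse.cn/v1/chat/completions",
}

def endpoint_for(client_type: str) -> str:
    """Return the URL to paste into a client's configuration."""
    return ENDPOINTS[client_type]
```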
Step 1: Obtain API Key
To obtain the api_key value, go to the ModelVerse Console → “Experience Center” → API Key Management and create one.
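Once created, the key is best kept out of source code. A minimal sketch that reads it from an environment variable; the `MODELVERSE_API_KEY` name is hypothetical — any variable name works as long as it matches your shell export:

```python
import os

# Hypothetical variable name; match it to whatever you export in your shell.
api_key = os.environ.get("MODELVERSE_API_KEY", "")

# The key is sent as a standard Bearer token in the Authorization header.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
```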
Step 2: Chat API Call
Request
Request Header Field
Name | Type | Required | Description |
---|---|---|---|
Content-Type | string | Yes | Fixed value application/json |
Authorization | string | Yes | Enter the Key obtained in step 1 |
Request Parameters
Name | Type | Required | Description |
---|---|---|---|
model | string | Yes | Model ID |
messages | List[message] | Yes | Chat context. Instructions: (1) messages cannot be empty; one member is a single turn of conversation, multiple members are multiple turns. Single-member example: "messages": [{"role": "user", "content": "Hello"}]. Three-member example: "messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "How can I help you?"}, {"role": "user", "content": "Please introduce yourself"}]. (2) The last message is the current request; the preceding messages are the conversation history. (3) Role rules: ① the role of the first message must be user or system; ② the role of the last message must be user or tool; ③ when the function call feature is not used and the first message's role is user, roles must alternate user -> assistant -> user -> …, i.e. odd-numbered messages (1st, 3rd, 5th, …) have role user and even-numbered messages (2nd, 4th, …) have role assistant |
stream | bool | No | Whether to return data as a streaming response. Notes: (1) for beam-search models this can only be false (2) defaults to false |
stream_options | stream_options | No | Whether usage is included in a streaming response. Notes: when include_usage is true, the last chunk carries a usage field with token statistics for the entire request; when false (the default), streaming responses do not include usage |
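The role rules in the messages row above can be checked before a request is sent. A minimal sketch for the case where function calling is not used; `validate_messages` is an illustrative helper, not part of the API:

```python
def validate_messages(messages: list) -> None:
    """Check the messages role rules (function calling not used)."""
    if not messages:
        raise ValueError("messages cannot be empty")
    if messages[0]["role"] not in ("user", "system"):
        raise ValueError("first message must have role user or system")
    if messages[-1]["role"] not in ("user", "tool"):
        raise ValueError("last message must have role user or tool")
    # Skip an optional leading system message, then expect strict
    # user -> assistant -> user -> ... alternation.
    rest = messages[1:] if messages[0]["role"] == "system" else messages
    for i, m in enumerate(rest):
        expected = "user" if i % 2 == 0 else "assistant"
        if m["role"] != expected:
            raise ValueError(f"message at position {i} should have role {expected!r}")
```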
Request Example
curl --location 'https://deepseek.modelverse.cn/v1/chat/completions' \
--header 'Authorization: Bearer <your API Key>' \
--header 'Content-Type: application/json' \
--data '{
"reasoning_effort": "low",
"stream": true,
"model": "deepseek-ai/DeepSeek-R1-0528",
"messages": [
{
"role": "user",
"content": "say hello to ucloud"
}
]
}'
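The same request can be issued from Python with only the standard library. A sketch mirroring the curl example above; `build_chat_request` is an illustrative helper that constructs the request but does not send it:

```python
import json
import urllib.request

def build_chat_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build the same request as the curl example (not sent here)."""
    payload = {
        "stream": False,  # set True for a streaming (SSE) response
        "model": "deepseek-ai/DeepSeek-R1-0528",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://deepseek.modelverse.cn/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# To actually send it:
#   urllib.request.urlopen(build_chat_request(key, "say hello to ucloud"))
```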
Response
Response Parameters
Name | Type | Description |
---|---|---|
id | string | The unique identifier of this request, can be used for troubleshooting |
object | string | Response object type; chat.completion indicates a chat completion result |
created | int | Unix timestamp of when the response was created |
model | string | Notes: (1) for a preset service, the model ID is returned (2) for a service deployed after SFT (supervised fine-tuning), this field returns model:modelversionID, where model matches the request parameter (the large-model ID used for this request) and modelversionID is used for tracing |
choices | choices/sse_choices | Generated results: choices is returned when stream=false; sse_choices is returned when stream=true |
usage | usage | Token statistics. Notes: (1) returned by default for synchronous requests (2) not returned by default for streaming requests; when stream_options.include_usage=true is set, the actual content is returned in the last chunk and all other chunks return null |
search_results | search_results | Search results list |
Response Example
{
    "id": "",
    "object": "chat.completion",
    "created": 1718000000,
    "model": "deepseek-ai/DeepSeek-R1-0528",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "\n\nHello, XXXCloud! 👋 If there's anything specific you'd like to know or discuss about XXXCloud's services (like cloud computing, storage, AI solutions, etc.), feel free to ask! 😊",
                "reasoning_content": "\nOkay, the user wants to say hello to XXXCloud. Let me start by greeting XXXCloud directly.\n\nHmm, should I mention what XXXCloud is? Maybe a brief intro would help, like it's a cloud service provider.\n\nThen, I can ask if there's anything specific the user needs help with regarding XXXCloud services.\n\nKeeping it friendly and open-ended makes sense for a helpful response.\n"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 8,
        "completion_tokens": 129,
        "total_tokens": 137,
        "prompt_tokens_details": null,
        "completion_tokens_details": null
    },
    "system_fingerprint": ""
}
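A non-streaming response like the one above is plain JSON, so extracting the reply and the token count is straightforward. A minimal sketch; `summarize` is an illustrative helper:

```python
import json

def summarize(response_text: str):
    """Pull the assistant reply and total token count out of a response body."""
    body = json.loads(response_text)
    reply = body["choices"][0]["message"]["content"]
    total_tokens = body["usage"]["total_tokens"]
    return reply, total_tokens
```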
Error Codes
If the request fails, the JSON body returned by the server includes the following fields.
HTTP Status Code | Type | Error Code | Error Message | Description |
---|---|---|---|---|
400 | invalid_request_error | invalid_messages | Sensitive information | Sensitive message |
400 | invalid_request_error | characters_too_long | Conversation token output limit | Currently, the maximum max_tokens supported by the DeepSeek-series models is 12288 |
400 | invalid_request_error | tokens_too_long | Prompt tokens too long | [User Input Error] The request content exceeds the internal limit of the large model. You can try the following methods to solve it: • Shorten the input appropriately |
400 | invalid_request_error | invalid_token | Validate Certification failed | Invalid bearer token. Users can refer to [Authentication Explanation] to get the latest key |
400 | invalid_request_error | invalid_model | No permission to use the model | No model permissions |
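The error codes above can be mapped to actionable hints on the client side. A minimal sketch; the `{"error": {"code": ..., "message": ...}}` envelope shape is an assumption based on the OpenAI-style convention — inspect an actual error body from the service to confirm:

```python
import json

def describe_error(body_text: str) -> str:
    """Map an error body (see the table above) to an actionable hint.

    Assumes an OpenAI-style {"error": {"code": ..., "message": ...}} envelope,
    which is an assumption -- verify against a real error response.
    """
    info = json.loads(body_text).get("error", {})
    code = info.get("code", "unknown")
    hints = {
        "invalid_messages": "Request content was flagged as sensitive.",
        "characters_too_long": "Requested output exceeds the max_tokens limit (12288 for DeepSeek models).",
        "tokens_too_long": "Prompt exceeds the model's input limit; shorten the input.",
        "invalid_token": "Bearer token rejected; create a fresh key in the console.",
        "invalid_model": "No permission for this model ID.",
    }
    return hints.get(code, f"{code}: {info.get('message', '')}")
```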