
General API Description

Feature Introduction

This API invokes large models on the ModelVerse platform to provide intelligent conversation (chat) capabilities.

Model List

Model ID | Model Version
deepseek-ai/DeepSeek-R1 | DeepSeek-R1
deepseek-ai/DeepSeek-V3-0324 | DeepSeek-V3-0324
deepseek-ai/DeepSeek-Prover-V2-671B | DeepSeek-Prover-V2-671B
Qwen/QwQ-32B | QwQ-32B
Qwen/Qwen3-235B-A22B | Qwen3-235B-A22B

Step 1: Obtain API Key

How to obtain the api_key value: go to the ModelVerse console - “Experience Center” - API Key Management, where a key can be created quickly.
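Once created, the key is sent in the Authorization header of every request (see Step 2). For the examples below it can be convenient to keep the key in a shell environment variable; the variable name here is only an illustration:

export MODELVERSE_API_KEY="<your API Key>"

The value can then be referenced in curl calls as 'Authorization: Bearer '"$MODELVERSE_API_KEY".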

Step 2: Chat API Invocation

Request

Request Header Fields

Name | Type | Required | Description
Content-Type | string | Yes | Fixed value: application/json
Authorization | string | Yes | Pass the API Key obtained in Step 1, in the form Bearer <your API Key>

Request Parameters

Name | Type | Required | Description
model | string | Yes | Model ID
messages | List[message] | Yes | Chat context information. Description:
(1) messages cannot be empty; one member indicates a single-turn conversation, and multiple members indicate a multi-turn conversation. For example:
· Single-member example: "messages": [{"role": "user", "content": "Hello"}]
· Three-member example: "messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "How can I help?"}, {"role": "user", "content": "Introduce yourself"}]
(2) The last message carries the current request; the preceding messages are the conversation history.
(3) Role rules for messages:
① The role of the first message must be user or system.
② The role of the last message must be user or tool.
③ If the function call feature is not used and the role of the first message is user, the roles must follow the order user -> assistant -> user …, i.e. odd-numbered messages must have role user or function and even-numbered messages must have role assistant. For example, with five messages whose roles are user, assistant, user, assistant, user, the odd-numbered messages (1st, 3rd, and 5th) have role user and the even-numbered messages (2nd and 4th) have role assistant. A complete request built on the three-member example is sketched after this table.
stream | bool | No | Whether to return data as a stream. Description:
(1) Models using beam search only support false
(2) Default is false
stream_options | stream_options | No | Whether to output usage in the streaming response. Description: true: yes; when set to true, the usage field is output in the last chunk, showing token statistics for the entire request. false: no; by default the streaming response does not output usage
temperature | float | No | Description:
(1) Higher values make the output more random, while lower values make it more focused and deterministic
top_p | float | No | Description:
(1) Affects the diversity of the output text; the higher the value, the more diverse the generated text
(2) Default is 0.7, range [0, 1.0]
penalty_score | float | No | Reduces repetitive generation by penalizing tokens that have already been generated. Description:
(1) The larger the value, the stronger the penalty
(2) Default is 1.0, range [1.0, 2.0]
max_completion_tokens | int | No | Maximum number of output tokens for the model. Description:
(1) For the value range of this parameter, see the [Supported Model List - max_completion_tokens Value Range] section of this document
seed | int | No | Description:
(1) Range: (0, 2147483647); if not set, a random value is generated by the model (default is empty)
(2) If specified, the system makes a best effort to sample deterministically, so that repeated requests with the same seed and parameters return the same results
stop | List | No | Generation stop markers; text generation stops when the model’s output ends with any element of stop. Description:
(1) Each element must not exceed 20 characters
(2) Up to 4 elements
user | string | No | A unique identifier representing the end user
frequency_penalty | float | No | Description:
(1) Positive values penalize new tokens based on their existing frequency in the text so far, reducing the model’s likelihood of repeating the same line verbatim
(2) Range: [-2.0, 2.0]
presence_penalty | float | No | Description:
(1) Positive values penalize tokens based on whether they already appear in the text, increasing the model’s likelihood of talking about new topics
(2) Range: [-2.0, 2.0]
web_search | [web_search] | No | Search-enhancement options. Description:
(1) Not sent by default (search disabled)
metadata | map<string,string> | No | Description:
(1) Up to 16 elements supported
(2) Key and value must be of string type
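To make the parameters above concrete, the following is a sketch of a multi-turn, non-streaming request that combines the three-member messages example with a few of the optional fields (temperature, top_p, max_completion_tokens, stop). The specific values are illustrative, not recommendations:

curl --location 'https://deepseek.modelverse.cn/v1/chat/completions' \
  --header 'Authorization: Bearer <your API Key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "deepseek-ai/DeepSeek-R1",
    "stream": false,
    "temperature": 0.6,
    "top_p": 0.7,
    "max_completion_tokens": 512,
    "stop": ["###"],
    "messages": [
      {"role": "user", "content": "Hello"},
      {"role": "assistant", "content": "How can I help?"},
      {"role": "user", "content": "Introduce yourself"}
    ]
  }'

Note that the roles follow the required order: the first and last messages are user, and the message in between is assistant.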

Request Example

curl --location 'https://deepseek.modelverse.cn/v1/chat/completions' \
  --header 'Authorization: Bearer <your API Key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "stream": true,
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [
      {
        "role": "user",
        "content": "say hello"
      }
    ]
  }'
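The example above already streams ("stream": true). The variant below additionally requests token usage in the final chunk via stream_options; the include_usage field name follows the stream_options.include_usage description under Response Parameters, so treat this as a sketch rather than a verified schema:

curl --location 'https://deepseek.modelverse.cn/v1/chat/completions' \
  --header 'Authorization: Bearer <your API Key>' \
  --header 'Content-Type: application/json' \
  --data '{
    "stream": true,
    "stream_options": {"include_usage": true},
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [
      {"role": "user", "content": "say hello"}
    ]
  }'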

Response

Response Parameters

Name | Type | Description
id | string | Unique identifier for this request, useful for troubleshooting
object | string | Object type. chat.completion: returned for multi-turn conversations
created | int | Timestamp
model | string | Description:
(1) For a pre-existing service, returns the model ID
(2) For a service deployed after SFT, this field returns model:modelversionID, where model is the same as the request parameter (the large-model ID used in this request) and modelversionID is used for traceability
choices | choices / sse_choices | choices: content returned when stream=false; sse_choices: content returned when stream=true
usage | usage | Token statistics. Description:
(1) Returned by default for synchronous requests
(2) For streaming requests, returned by default when stream_options.include_usage=true is enabled; the actual content is returned in the last chunk, and other chunks return null
search_results | search_results | Search results

Response Example

{ "id": " ", "object": "chat.completion", "created": , "model": "deepseek-ai/DeepSeek-R1", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "\n\nHello, XXXCloud! 👋 If there's anything specific you'd like to know or discuss about XXXCloud's services (like cloud computing, storage, AI solutions, etc.), feel free to ask! 😊", "reasoning_content": "\nOkay, the user wants to say hello to XXXCloud. Let me start by greeting XXXCloud directly.\n\nHmm, should I mention what XXXCloud is? Maybe a brief intro would help, like it's a cloud service provider.\n\nThen, I can ask if there's anything specific the user needs help with regarding XXXCloud services.\n\nKeeping it friendly and open-ended makes sense for a helpful response.\n" }, "finish_reason": "stop" ], "usage": { "prompt_tokens": 8, "completion_tokens": 129, "total_tokens": 137, "prompt_tokens_details": null, "completion_tokens_details": null }, "system_fingerprint": "" }