Message Description
| Name | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | Role of the message author. Currently supported values: · user: the user · assistant: the conversational assistant · system: the system persona |
| name | string | No | Message name |
| content | string | Yes | Conversation content. Notes: (1) Cannot be empty. (2) The content of the last message must not consist only of whitespace characters such as spaces, "\n", "\r", "\f", etc. |
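For illustration, a minimal Python sketch of how a messages list matching the table above might be assembled and validated; the helper name and sample text are hypothetical, not part of the API:

```python
def build_messages():
    """Assemble a messages list per the table above (sample text is illustrative)."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},  # system persona
        {"role": "user", "name": "alice", "content": "What is SSE?"},   # name is optional
    ]
    # Per note (2): the last message's content must not be blank or whitespace-only.
    if not messages[-1]["content"].strip():
        raise ValueError("content of the last message must not be blank")
    return messages
```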
Stream Options Description
| Name | Type | Required | Description |
|---|---|---|---|
| include_usage | bool | No | Whether the streaming response includes usage statistics. Notes: · true: the final chunk includes a usage field with token statistics for the entire request · false: the streaming response does not include usage (default) |
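A hedged sketch of how include_usage might be set and how the final chunk's usage field could be read; the chunk shape here is assumed from the tables, and the chunks iterable stands in for already-parsed SSE events:

```python
# Hypothetical request fragment asking the streaming response to report token usage.
stream_options = {"include_usage": True}

def usage_from_chunks(chunks):
    """Return the usage dict from the chunk that carries it (only the final chunk does)."""
    usage = None
    for chunk in chunks:                 # chunks are already-parsed SSE events (dicts)
        usage = chunk.get("usage") or usage
    return usage
```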
Web Search Description
| Name | Type | Description |
|---|---|---|
| enable | bool | Whether to enable the online search functionality. Notes: (1) When real-time search is disabled, badges and source information are not returned. (2) Valid values: · true: enabled · false: disabled (default) |
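A minimal sketch of the object implied by the table; the variable name web_search follows the section heading, and the exact request field name is an assumption:

```python
# Hypothetical request fragment: turn the online search functionality on.
# With the default (False), badges and source information are not returned.
web_search = {"enable": True}
```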
Choices Description
When stream=false, the returned content is as follows:
| Name | Type | Description |
|---|---|---|
| id | string | Conversation sequence number |
| message | [message] | Response information, returned when stream=false |
| finish_reason | string | Reason the output ended. Notes: · normal: the output was generated entirely by the large model without triggering truncation or replacement · stop: generation stopped after hitting a sequence specified in the stop request parameter · length: the maximum number of tokens was reached · content_filter: the output was truncated or replaced (e.g., with **) by content filtering · tool_calls: the model made a function call |
Choices Message Description
| Name | Type | Description |
|---|---|---|
| role | string | Currently supported values: · user: the user · system: the system persona |
| name | string | Message name |
| content | string | Conversation content |
| reasoning_content | string | Chain-of-thought reasoning content. Note: returned only when the model is DeepSeek-R1 |
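As a sketch, reading a non-streaming (stream=false) response shaped like the Choices and Choices Message tables might look as follows; the response dict is an assumed example built from the field names above, not captured output:

```python
def read_choice(response: dict) -> str:
    """Extract content and reasoning from the first choice of a stream=false response."""
    choice = response["choices"][0]
    if choice["finish_reason"] == "length":
        print("warning: output hit the maximum number of tokens")
    message = choice["message"]
    reasoning = message.get("reasoning_content")   # present only for DeepSeek-R1
    if reasoning:
        print("reasoning:", reasoning)
    return message["content"]
```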
SSE Choices Description
When stream=true, the returned content is as follows:
| Name | Type | Description |
|---|---|---|
| id | int | Index number in the choices list |
| delta | [delta] | Response information, returned when stream=true |
| finish_reason | string | Reason the output ended. Notes: · normal: the output was generated entirely by the large model without triggering truncation or replacement · stop: generation stopped after hitting a sequence specified in the stop request parameter · length: the maximum number of tokens was reached · content_filter: the output was truncated or replaced (e.g., with **) by content filtering · tool_calls: the model made a function call |
Delta Description
| Name | Type | Description |
|---|---|---|
| content | string | Streamed response content |
| reasoning_content | string | Chain-of-thought reasoning content. Note: returned only when the model is DeepSeek-R1 |
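A sketch of accumulating a stream=true response from parsed chunks per the SSE Choices and Delta tables; the chunks iterable and its exact shape are assumptions:

```python
def collect_stream(chunks) -> tuple[str, str]:
    """Concatenate delta.content and delta.reasoning_content across SSE chunks."""
    content, reasoning = [], []
    for chunk in chunks:                       # each chunk is an already-parsed JSON dict
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            content.append(delta.get("content") or "")
            reasoning.append(delta.get("reasoning_content") or "")  # DeepSeek-R1 only
    return "".join(content), "".join(reasoning)
```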
Usage Description
| Name | Type | Description |
|---|---|---|
| prompt_tokens | int | Number of tokens in the question |
| completion_tokens | int | Number of tokens in the answer |
| total_tokens | int | Total number of tokens |
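A small sketch showing how the usage fields relate; the numbers are invented for the example, and total_tokens is the sum of the prompt and completion counts:

```python
# Illustrative usage object matching the table above; values are made up.
usage = {"prompt_tokens": 12, "completion_tokens": 34, "total_tokens": 46}
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```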
Search Results Description
Results of the online search functionality. If the online search fails, the response returns:
```json
{
    "xxx": "xxx",
    "search_results": {
        "error": {
            "message": "web search error",
            "type": "invalid_request_error",
            "code": "web_search_error"
        }
    }
}
```
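For completeness, a sketch of detecting the search failure shown above; the nesting of error under search_results follows the example, so adjust the lookup if the actual response differs:

```python
def search_failed(response: dict) -> bool:
    """Return True if the response carries the web_search_error shown above."""
    error = (response.get("search_results") or {}).get("error") or {}
    return error.get("code") == "web_search_error"
```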