Quick Start

This guide is designed to help you quickly get familiar with and call the API of the Model Service Platform. By following the steps below, you will complete your first API call in just a few minutes.

We strongly recommend using the OpenAI API call method. Because the OpenAI API has become the de facto standard in the large-model industry, there is a vast number of tutorials, tools, and codebases that can be reused directly. Our service is fully compatible with this standard, allowing you to integrate seamlessly with the mainstream ecosystem and avoid a great deal of learning overhead.

The OpenAI-compatible interface currently supports:

  • /v1/chat/completions, the core interface for conversational calls to the models.
  • /v1/response, the most advanced model response generation interface from OpenAI; it supports text and image input as well as text output (a brief sketch follows this list).
  • /v1/models, for retrieving the list of models.
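As referenced above, here is a minimal sketch of calling the /v1/response interface, assuming its request body mirrors OpenAI's Responses API (a model plus an input field). The field names are an assumption, so check the interface reference for the exact schema; fill in {api_key} and {model_name} after completing Steps 1 and 2 below.

import requests

# Hypothetical request shape, assuming the endpoint follows OpenAI's Responses API.
resp = requests.post(
    "https://api.umodelverse.ai/v1/response",
    headers={"Authorization": "Bearer {api_key}"},
    json={
        "model": "{model_name}",
        "input": "Describe the company XXXCloud in one sentence.",
    },
    timeout=60,
)
print(resp.json())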

Step 1: Get an API Key

Before calling any API, you need a valid API key. Please refer to the Authentication and Authorization document to see how to obtain and manage your keys.
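The examples below use an {api_key} placeholder. A common pattern (purely a suggestion, and the variable name here is just an example) is to keep the key in an environment variable rather than hard-coding it:

import os

# Read the key from an environment variable instead of hard-coding it.
# Set it in your shell first, e.g.:  export UMODELVERSE_API_KEY="your-key"
api_key = os.environ["UMODELVERSE_API_KEY"]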

Step 2: Select a Model

You can use the API below to get the list of models and choose the one you need.

GET https://api.umodelverse.ai/v1/models

Request Example:

curl https://api.umodelverse.ai/v1/models \
  -H "Content-Type: application/json" | jq .

Expected Return:

{ "data": [ { "created": 1762741377, "id": "deepseek-ai/DeepSeek-R1", "object": "model", "owned_by": "XXXCloud_UModelverse" }, { "created": 1762741326, "id": "gpt-5", "object": "model", "owned_by": "XXXCloud_UModelverse" }, ...... ], "object": "list" }

The id field is the model name. Please base your selection on the actual response.
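If you prefer to fetch the model list from Python, the OpenAI SDK used in Step 3 can call the same endpoint. A minimal sketch, using the same {api_key} placeholder:

from openai import OpenAI

client = OpenAI(
    api_key="{api_key}",
    base_url="https://api.umodelverse.ai/v1/",
)

# client.models.list() calls GET /v1/models; print every model id it returns.
for model in client.models.list():
    print(model.id)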

Step 3: Call the API

Typical Method 1 - HTTP Call in Any Language

This is the most basic and universal method. No matter what programming language you use, as long as it can send network requests (HTTP requests), you can call the API using this method. You need to know three core pieces of information: the model name, your API key, and our API address.

We fully support the OpenAI API request specification. Since the OpenAI API standard is updated frequently, we recommend referring directly to the OpenAI API Documentation.

Replace {api_key} with your API key and {model_name} with the model name from the list retrieved in the previous step (choose one).

curl https://api.umodelverse.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {api_key}" \
  -d '{
    "model": "{model_name}",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Describe the company XXXCloud in one sentence." }
    ],
    "stream": false
  }' | jq .

Parameter Description:

  • model: The model name, as retrieved from the previous step, e.g., “deepseek-ai/DeepSeek-R1”.
  • messages: The content you want to send to the model.
  • stream: Whether the response is returned in a “streaming” manner.
    • true: The model returns the result incrementally, chunk by chunk, which is suitable for real-time chat interfaces. (Each chunk is still JSON; see the sketch after this list.)
    • false: The model generates the entire response at once and returns it to you in one piece.
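When stream is set to true, the response is not a single JSON document (so piping it through jq . will not work as-is); it arrives as a series of chunks. The sketch below shows one way to read such a stream from Python, assuming the platform uses the same "data: ..." line framing and "[DONE]" terminator as OpenAI's streaming responses:

import json
import requests

# Same request as the curl example above, but with streaming enabled.
resp = requests.post(
    "https://api.umodelverse.ai/v1/chat/completions",
    headers={"Authorization": "Bearer {api_key}"},
    json={
        "model": "{model_name}",
        "messages": [
            {"role": "user", "content": "Describe the company XXXCloud in one sentence."}
        ],
        "stream": True,
    },
    stream=True,
    timeout=60,
)

# Assumed OpenAI-style framing: each event is a line "data: {json}",
# and the stream ends with "data: [DONE]".
for line in resp.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    chunk = json.loads(payload)
    # Incremental text lives in choices[0].delta.content (absent in some chunks).
    delta = chunk["choices"][0].get("delta", {}).get("content")
    if delta:
        print(delta, end="", flush=True)
print()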

The expected return is as follows. Focus mainly on the choices field, which contains the model’s reply; the usage field reports the token usage for the request (the exact content may vary and is for reference only):

{ "id": "52ba2d24-f745-42b3-82c3-610a7b2658b0", "object": "chat.completion", "created": 1763020876, "model": "gemini-2.5-pro", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "XXXCloud (Youke) is a neutral, safe, and reliable cloud computing service platform dedicated to providing comprehensive cloud service solutions to global enterprise customers." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 9, "completion_tokens": 1505, "total_tokens": 1514, "prompt_tokens_details": { "audio_tokens": 0, "cached_tokens": 0 }, "completion_tokens_details": { "audio_tokens": 0, "reasoning_tokens": 1357, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0 } }, "system_fingerprint": "", "search_result": null }

Typical Method 2 - OpenAI SDK

OpenAI provides developers with a very convenient SDK (Software Development Kit) that wraps complex HTTP requests into simple function calls, making the code more readable and maintainable. We highly recommend this method for developers.

Refer to the OpenAI SDK Documentation. You can also look for SDKs for your desired language on the OpenAI GitHub.

pip install -U openai
from openai import OpenAI

client = OpenAI(
    api_key="{api_key}",
    base_url="https://api.umodelverse.ai/v1/",
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Describe the company XXXCloud in one sentence.",
        }
    ],
    model="{model_name}",
)
print(chat_completion.choices[0].message.content)
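If you want the streaming behavior described in Method 1, the SDK handles the chunk framing for you. A short sketch (a new client is created here so the snippet stands on its own):

from openai import OpenAI

client = OpenAI(
    api_key="{api_key}",
    base_url="https://api.umodelverse.ai/v1/",
)

# stream=True makes the SDK return an iterator of chunks instead of a single object.
stream = client.chat.completions.create(
    model="{model_name}",
    messages=[
        {"role": "user", "content": "Describe the company XXXCloud in one sentence."}
    ],
    stream=True,
)
for chunk in stream:
    # Incremental text arrives in choices[0].delta.content; some chunks carry none.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()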

Typical Method 3 - LangChain

When you want to build more complex AI applications (such as an AI assistant that can call tools, or a robot that can analyze documents) rather than just simple Q&A, LangChain is a powerful development framework. It integrates well with our API.

Refer to the LangChain Python SDK Documentation or the LangChain JavaScript SDK Documentation.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(
    model="{model_name}",
    api_key="{api_key}",
    base_url="https://api.umodelverse.ai/v1/",
)
prompt = ChatPromptTemplate.from_template("{input}")

# Compose the prompt and the model into a runnable chain (LCEL pipe syntax).
chain = prompt | llm

print(chain.invoke({"input": "Describe the company XXXCloud in one sentence."}).content)
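If you also want a system prompt, as in the curl example from Step 3, ChatPromptTemplate accepts role/message pairs. A small self-contained sketch:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="{model_name}",
    api_key="{api_key}",
    base_url="https://api.umodelverse.ai/v1/",
)

# A prompt with an explicit system message, mirroring the curl example in Step 3.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("user", "{input}"),
    ]
)
chain = prompt | llm
print(chain.invoke({"input": "Describe the company XXXCloud in one sentence."}).content)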