Quick Deployment of ChatGLM-6B Model
Introduction
ChatGLM-6B is an open-source chatbot released by the Knowledge Engineering and Data Mining group of Tsinghua University. According to the official introduction, ChatGLM is a Chinese-English bilingual language model with roughly 100 billion parameters that has been optimized for Chinese. The open-sourced ChatGLM-6B is a smaller-scale version with approximately 6 billion parameters. Its base model is GLM (GLM: General Language Model Pretraining with Autoregressive Blank Infilling), a base model at the 100-billion-parameter scale.
Quick Deployment
Log into the UCloud Global console (https://console.ucloud-global.com/uhost/uhost/create), select the “GPU type” machine type with “V100S” GPUs, and choose the number of CPU cores and GPU cards as needed.
Minimum recommended configuration: 10-core CPU, 32 GB memory, and 1× V100S.
For the image, choose “Image Market”, search for “ChatGLM-6B” by image name, and select this image to create the GPU cloud host.
After the GPU cloud host is successfully created, log in to the GPU cloud host.
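Once logged in, you can quickly confirm that the GPU is visible from Python before running any of the demos. This is a minimal sketch; it assumes the pre-installed image already ships with PyTorch and the CUDA driver (which it should, since the model demos depend on them):

```python
# check_gpu.py - quick sanity check that the V100S is visible to PyTorch
import torch

if torch.cuda.is_available():
    # Report the name and memory of the first visible GPU
    name = torch.cuda.get_device_name(0)
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU detected: {name} ({total_gb:.1f} GB)")
else:
    print("No GPU detected - check the NVIDIA driver and CUDA installation")
```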
The pre-installed image we provide includes the following:
- Local model address
- Running the local chatbot (command-line demo; see the sketch after this list)
- Running the web chatbot (see the sketch after this list)
- Fine-tuning the model with the ADGEN dataset (see the note after this list)
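For reference, a command-line chat session against the locally stored weights looks roughly like the following. This is a minimal sketch rather than the exact script shipped in the image; the path /path/to/chatglm-6b is a placeholder for the local model address inside the image, and the snippet assumes the transformers library is installed:

```python
# cli_chat.py - minimal command-line chat loop with a local copy of ChatGLM-6B
from transformers import AutoTokenizer, AutoModel

MODEL_PATH = "/path/to/chatglm-6b"  # placeholder: replace with the local model address in the image

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).half().cuda()
model = model.eval()

history = []
while True:
    query = input("You: ")
    if query.strip().lower() in {"exit", "quit"}:
        break
    # model.chat returns the reply and the updated multi-turn history
    response, history = model.chat(tokenizer, query, history=history)
    print("ChatGLM-6B:", response)
```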
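The web chatbot can be served in a similar way. The sketch below uses Gradio, which the official web demo is built on; it is a simplified stand-in for the demo script in the image and again uses the placeholder model path:

```python
# web_chat.py - minimal Gradio front end for ChatGLM-6B (simplified web demo)
import gradio as gr
from transformers import AutoTokenizer, AutoModel

MODEL_PATH = "/path/to/chatglm-6b"  # placeholder: replace with the local model address in the image

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).half().cuda().eval()

def answer(query):
    # Single-turn answer; the official demo additionally keeps multi-turn history
    response, _ = model.chat(tokenizer, query, history=[])
    return response

demo = gr.Interface(fn=answer, inputs="text", outputs="text", title="ChatGLM-6B")
# Bind to 0.0.0.0 so the page is reachable via the host's public IP (open port 7860 in the firewall)
demo.launch(server_name="0.0.0.0", server_port=7860)
```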
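Fine-tuning on the ADGEN (AdvertiseGen) dataset is driven by the P-Tuning scripts bundled with the official ChatGLM-6B repository, so the exact launch command depends on the scripts included in the image. As for the data itself, ADGEN-style training files store one JSON object per line with a content (input) field and a summary (target) field; the snippet below is an assumed illustration of preparing a file in that shape, not the official preprocessing code:

```python
# make_adgen_sample.py - write a tiny ADGEN-style training file (one JSON object per line)
import json

# Illustrative example only; real ADGEN records come from the AdvertiseGen dataset
examples = [
    {
        "content": "类型#上衣*材质#牛仔布*颜色#白色*风格#简约",
        "summary": "简约白色牛仔上衣，百搭又显气质。",
    },
]

with open("train.json", "w", encoding="utf-8") as f:
    for ex in examples:
        # Each line holds one {"content": ..., "summary": ...} record
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```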