Alpaca-LoRA Model
Introduction
Alpaca-LoRA is fine-tuned from Meta’s LLaMA 7B model using LoRA (Low-Rank Adaptation), which trains only a small fraction of the parameters yet achieves results comparable to the Stanford Alpaca model. It can be considered a lightweight, open-source alternative to ChatGPT.
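To see why so few parameters need training, here is a minimal numerical sketch of the low-rank idea (illustrative shapes only, not the actual Alpaca-LoRA code): a frozen pretrained weight W of size d × d is adapted as W + B @ A, where only the two small factors A (r × d) and B (d × r) are trained, with rank r much smaller than d.

```python
import numpy as np

d, r = 4096, 8  # 4096 matches LLaMA 7B's hidden size; r = 8 is an example rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d)).astype(np.float32)       # frozen pretrained weight (stand-in values)
A = (rng.standard_normal((r, d)) * 0.01).astype(np.float32)  # trainable low-rank factor
B = np.zeros((d, r), dtype=np.float32)                   # trainable; initialized to zero so the
                                                         # adapted weight equals W at the start
W_adapted = W + B @ A

trainable = A.size + B.size   # parameters LoRA actually updates for this layer
frozen = W.size               # parameters in the full weight matrix
print(f"trainable fraction: {trainable / frozen:.4%}")   # ~0.39% for this layer
```

With these shapes, only 2 × r × d = 65,536 of the layer's 16,777,216 weights are trained, which is why LoRA fine-tuning fits comfortably on a single GPU.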
Quick Deployment
Log in to the UCloud Global console (https://console.ucloud-global.com/uhost/uhost/create), choose “GPU type” as the machine type and “V100S” as the GPU. Detailed configurations such as the number of CPU cores and GPUs can be selected as required.
Recommended minimum configuration: 10-core CPU, 64GB memory, 1 V100S.
For the image, select “Image Market”, search for “Alpaca-LoRA7B” by image name, and use this image to create the GPU cloud host.
After the GPU cloud host is successfully created, log in to the GPU cloud host.
The pre-installed image provides support for the following:
- Fine-tuning
- Inference