
Adapter-based fine-tuning for text generation

Llama2-7b Fine-Tuning 4bit (QLoRA)


This example shows how to fine-tune Llama2-7b to follow instructions. Instruction tuning is the first step in adapting a general-purpose Large Language Model into a chatbot.

This example uses no distributed training or big-data functionality. It is designed to run locally on any machine with a GPU.

Prerequisites

- A HuggingFace API token with access to the gated meta-llama/Llama-2-7b-hf model, exported as HUGGING_FACE_HUB_TOKEN
- A machine with a CUDA-capable GPU

Running

Install Ludwig

pip install ludwig ludwig[llm]

Command Line

Set your token environment variable from the terminal, then run the training script:

export HUGGING_FACE_HUB_TOKEN="<api_token>"
./run_train.sh

Python API

Set your token environment variable from the terminal, then run the Python script:

export HUGGING_FACE_HUB_TOKEN="<api_token>"
python train_alpaca.py
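
The contents of train_alpaca.py are not reproduced here. For orientation, the sketch below shows what a 4-bit QLoRA fine-tuning run looks like with Ludwig's Python API; the prompt template, dataset reference, and trainer settings are illustrative assumptions, not the exact contents of the example script.

import yaml
from ludwig.api import LudwigModel

# Illustrative QLoRA configuration: a LoRA adapter on top of a
# 4-bit-quantized Llama2-7b base model. All values are assumptions
# for this sketch, not the exact settings of train_alpaca.py.
config = yaml.safe_load(
    """
model_type: llm
base_model: meta-llama/Llama-2-7b-hf

input_features:
  - name: instruction
    type: text

output_features:
  - name: output
    type: text

prompt:
  template: |
    Below is an instruction that describes a task.
    Write a response that appropriately completes the request.

    ### Instruction: {instruction}

    ### Response:

adapter:
  type: lora

quantization:
  bits: 4

trainer:
  type: finetune
  epochs: 1
  batch_size: 1
  learning_rate: 0.0001
"""
)

model = LudwigModel(config=config)
# "ludwig://alpaca" is assumed as a stand-in for an instruction-following
# dataset; point this at your own dataset file if needed.
results = model.train(dataset="ludwig://alpaca")

By default, Ludwig writes checkpoints and the fine-tuned adapter weights under a results/ directory, which is where you will find the model path to pass to the upload step below.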

Upload to HuggingFace

You can upload to the HuggingFace Hub from the command line:

ludwig upload hf_hub -r <your_org>/<model_name> -m <path/to/model>
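
If you prefer to stay in Python, you can push the same files with the huggingface_hub library directly instead of Ludwig's uploader. The repo id and the results/api_experiment_run/model/model_weights path below are placeholders for illustration; substitute the actual output directory of your training run.

from huggingface_hub import HfApi

# Authenticates with your stored Hub login or a token from the environment.
api = HfApi()

# Placeholder repo id and weights path; adjust both to your own run.
repo_id = "your_org/model_name"
api.create_repo(repo_id=repo_id, exist_ok=True)
api.upload_folder(
    folder_path="results/api_experiment_run/model/model_weights",
    repo_id=repo_id,
    repo_type="model",
)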