# Chat LLaMA
## 8bit-LoRA

Repository for training a LoRA adapter for the LLaMA model on HuggingFace with 8-bit quantization. For research use only.

👉 Join our Discord server for updates, support & collaboration.

Instructions for dataset creation, training, weight merging, and quantization are in the docs.
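As a rough illustration of the 8-bit idea (this is a minimal sketch of absmax int8 quantization, not this repository's actual code — real 8-bit loading, e.g. via bitsandbytes, is considerably more involved):

```python
# Minimal sketch of absmax int8 quantization: scale each row of weights by its
# absolute maximum so values fit in the int8 range [-127, 127], then map back.
# Assumes the input contains at least one nonzero value.

def quantize_absmax(values):
    """Quantize a list of floats to int8 using the absolute-maximum scale."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(qvalues, scale):
    """Map int8 values back to (approximate) floats."""
    return [q * scale for q in qvalues]

weights = [0.4, -1.0, 0.2]
q, scale = quantize_absmax(weights)      # q == [51, -127, 25]
restored = dequantize(q, scale)          # close to the original weights
```

The largest-magnitude weight is recovered almost exactly; everything else is reconstructed to within half a quantization step, which is why 8-bit weights are usable for inference and, with care, for training.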
Repository contents:

- `docs/`
- `examples/`
- `research/`
- `.gitignore`
- `accelerate_config.yaml`
- `finetune_peft_8bit.py`
- `merge_adapter_weights.py`
- `README.md`
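Conceptually, merging adapter weights folds the trained low-rank update back into the base weight matrix: W' = W + (α/r)·B·A. The sketch below shows that arithmetic on toy matrices with no dependencies — it is illustrative only; the actual `merge_adapter_weights.py` operates on HuggingFace checkpoints and its interface is not shown here.

```python
# Illustrative sketch of LoRA weight merging on small plain-Python matrices.

def matmul(a, b):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def merge_lora(w, lora_a, lora_b, alpha, r):
    """Fold the low-rank update into the base weight: W' = W + (alpha/r) * B @ A."""
    scale = alpha / r
    delta = matmul(lora_b, lora_a)  # (out, r) @ (r, in) -> (out, in)
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# Toy example: 2x2 identity base weight, rank-1 adapter.
w = [[1.0, 0.0], [0.0, 1.0]]
lora_b = [[1.0], [2.0]]   # shape (out=2, r=1)
lora_a = [[3.0, 4.0]]     # shape (r=1, in=2)
merged = merge_lora(w, lora_a, lora_b, alpha=1, r=1)
# merged == [[4.0, 4.0], [6.0, 9.0]]
```

After merging, the adapter is no longer needed at inference time: the merged weight behaves identically to base-plus-adapter, with no extra matmul per forward pass.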