
Chat LLaMA

8bit-LoRA

Repository for training a LoRA for the LLaMA model on HuggingFace with 8-bit quantization. Research only.
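As a rough illustration of what "8-bit quantization" means here, the sketch below implements per-row absmax int8 quantization in NumPy — the basic idea behind loading a model in 8-bit. This is illustrative only, not the repository's code; the actual quantization is handled by the HuggingFace/bitsandbytes stack.

```python
# Illustrative sketch (NOT this repo's code): per-row absmax int8 quantization,
# the core idea behind 8-bit model loading. Each row of a float weight matrix
# is scaled so its largest magnitude maps to 127, then rounded to int8.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Quantize float weights to int8 with a per-row absmax scale."""
    scale = np.abs(w).max(axis=-1, keepdims=True) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from int8 values and scales."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
# Rounding error per element is at most half a quantization step (scale / 2).
max_err = np.abs(w - w_hat).max()
```

Storing `q` (1 byte per weight) plus one scale per row is what cuts memory roughly 4x versus float32, at the cost of the small reconstruction error above.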


👉 Join our Discord Server for updates, support & collaboration


Dataset creation, training, weight merging, and quantization instructions are in the docs.
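The weight-merging step mentioned above can be sketched in plain NumPy: a LoRA adapter adds a low-rank update `scaling * (B @ A)` on top of a frozen weight `W`, and merging simply folds that product into `W` so no adapter is needed at inference. This is a conceptual sketch with made-up dimensions, not the repository's `merge_adapter_weights.py`.

```python
# Conceptual sketch (NOT merge_adapter_weights.py): merging a LoRA adapter
# into the base weight gives the same outputs as running base + adapter.
import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 16, 16, 4, 8            # hypothetical dims, rank, and LoRA alpha
W = rng.normal(size=(d, k))              # frozen base weight
A = rng.normal(size=(r, k)) * 0.01       # LoRA down-projection (trained)
B = rng.normal(size=(d, r)) * 0.01       # LoRA up-projection (trained)
scaling = alpha / r                      # standard LoRA scaling factor

x = rng.normal(size=(k,))
y_adapter = W @ x + scaling * (B @ (A @ x))   # forward pass with adapter attached
W_merged = W + scaling * (B @ A)              # fold the low-rank update into W
y_merged = W_merged @ x                       # forward pass after merging
```

Because the two forward passes are mathematically identical, the merged model can be quantized and served like any ordinary checkpoint.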

Check out our trained LoRAs on HuggingFace:

Anthropic's HH