LoRA
Summary
LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique for large language models (LLMs). Instead of updating the original model weights, it freezes them and trains a small pair of low-rank matrices whose product is added to the original parameters. This dramatically reduces the memory and computational requirements of fine-tuning, making it feasible for smaller organizations and individual developers to train specialized LLMs on their own data.
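The mechanism above can be sketched from scratch. In the sketch below, the dimensions, the scaling factor `alpha`, and the variable names are illustrative assumptions, not values from any particular model; the key point is that the frozen weight `W` is never modified, while only the small matrices `A` and `B` would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
d_in, d_out, r = 8, 8, 2      # r << d: the low-rank bottleneck
alpha = 4                     # common LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Output = frozen path + scaled low-rank update (B @ A has rank <= r).
    return W @ x + (alpha / r) * (B @ A) @ x

x = rng.normal(size=d_in)
# With B zero-initialized, the LoRA model starts out identical to the base model.
assert np.allclose(lora_forward(x), W @ x)

# Parameter count: full fine-tuning updates W.size parameters,
# LoRA updates only A.size + B.size = r * (d_in + d_out).
print(W.size, A.size + B.size)  # 64 vs 32 here; the gap widens as d grows
```

Because `B` starts at zero, training begins exactly at the pretrained model and the low-rank update is learned incrementally; this is why the parameter savings come with little disruption to the base model's behavior.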
Key Concepts
Definition of LoRA : LoRA is a technique for efficiently fine-tuning large language models; it freezes the model's original weights and performs fine-tuning by training a separate set of added weights.
Advantages of LoRA : LoRA greatly reduces the memory and compute resources required for fine-tuning, allowing small organizations and individual developers to fine-tune large language models for their specific domains.
Applications of LoRA : LoRA can be applied to a variety of large language models, and it is especially useful when multiple clients need separately fine-tuned models for different applications.
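The multi-client scenario in the last point can be illustrated concretely: one frozen base weight is shared, and each client stores only its small adapter pair, which can be merged into the base weight on demand. The client names, dimensions, and random adapter values below are hypothetical, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, alpha = 8, 2, 4

W = rng.normal(size=(d, d))  # one shared, frozen base weight

# Hypothetical per-client adapters: each client stores only (A, B),
# i.e. 2 * r * d values instead of a full d * d weight copy.
adapters = {
    "client_a": (rng.normal(size=(r, d)) * 0.01, rng.normal(size=(d, r)) * 0.01),
    "client_b": (rng.normal(size=(r, d)) * 0.01, rng.normal(size=(d, r)) * 0.01),
}

def merged_weight(client):
    # Fold the client's low-rank update into the shared base weight.
    A, B = adapters[client]
    return W + (alpha / r) * (B @ A)

x = rng.normal(size=d)
y_a = merged_weight("client_a") @ x  # behaves as client A's fine-tuned model
y_b = merged_weight("client_b") @ x  # behaves as client B's fine-tuned model
```

Serving many clients then reduces to swapping small adapter matrices over a single copy of the base model, rather than storing one fully fine-tuned model per client.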
References
Name | URL
---|---
Easily Train a Specialized LLM: PEFT, LoRA, QLoRA, LLaMA | https://cameronrwolfe.substack.com/p/easily-train-a-specialized-llm-peft
Mastering Low-Rank Adaptation (LoRA): Enhancing Large Language Models for Efficient Adaptation |
Understanding LLM Fine Tuning with LoRA (Low-Rank Adaptation) |
A beginners guide to fine tuning LLM using LoRA | https://zohaib.me/a-beginners-guide-to-fine-tuning-llm-using-lora/
What is low-rank adaptation (LoRA)? |