[논문] LoRA: Low-Rank Adaptation of Large Language Models
https://arxiv.org/abs/2106.09685

This post was written based on the paper above.

Abstract

An important paradigm in natural language processing..
Lab study
2024. 2. 20.