Low-Rank Adaptation Of Large Language Models

This blog post provides a brief overview of low-rank adaptation (LoRA) of large language models (LLMs) and its implementation in PyTorch.
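
The core idea of LoRA (Hu et al., arXiv:2106.09685) is to freeze the pretrained weight matrix `W` and learn only an additive low-rank update `B @ A`, where `B` has shape `(d, r)`, `A` has shape `(r, k)`, and the rank `r` is much smaller than `d` and `k`, so the adapted forward pass computes `h = W x + (alpha / r) * B A x`. The sketch below illustrates this idea in PyTorch; the class name `LoRALinear`, the initialization, and the hyperparameter defaults are illustrative assumptions, not the post's actual implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Linear layer with a frozen pretrained weight and a trainable low-rank update."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained weight W stays frozen during fine-tuning
        # (stand-in here: random init, since this is only a sketch).
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Low-rank factors: A is Gaussian-initialized, B starts at zero,
        # so the adapted layer initially behaves exactly like the frozen one.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r  # fixed scaling factor, as in the LoRA paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W x + (alpha / r) * B A x, i.e. W is effectively W + (alpha / r) * B A.
        return x @ self.weight.T + self.scaling * ((x @ self.lora_A.T) @ self.lora_B.T)
```

Only `lora_A` and `lora_B` receive gradients, so the trainable parameter count drops from `d * k` to `r * (d + k)`; at inference time the update can be merged into the frozen weight (`W += (alpha / r) * B @ A`), so the adapted layer adds no extra latency.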
