Featured
PaperCard: LoRA
The very first Golden Paper Card. This is another oldie but goodie, and probably my favourite paper of the last couple of months. The core idea, keeping the pre-trained weight matrices frozen and representing each weight update as the product of two low-rank matrices, is pretty neat and intuitive. More importantly, LoRA is quite effective as a parameter-efficient fine-tuning strategy, achieving performance comparable to full fine-tuning. It also makes it easy to deploy DL inference systems that need to switch between tasks: you just swap in the adapter matrices for the task at hand (see the sketch below).
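For the curious, here's a minimal PyTorch sketch of the idea. The class name `LoRALinear` and the hyperparameter defaults are my own invention, but the update rule, h = W₀x + (α/r)·BAx with W₀ frozen, and the initialization scheme (Gaussian A, zero B, so the update starts at zero) follow the paper:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen pre-trained linear layer with a trainable
    low-rank update: h = W0 @ x + (alpha / r) * B @ A @ x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only A and B are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # A: Gaussian init, B: zeros, so B @ A == 0 at the start of training.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen forward pass plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


# Hypothetical usage: adapt a single projection layer.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
```

The task-switching trick from the paper falls out of this structure: since only A and B differ between tasks, serving a new task means loading a new (tiny) pair of adapter matrices rather than a whole new model.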