Instant LLM Updates with Doc-to-LoRA and Text-to-LoRA
Summary
Doc-to-LoRA and Text-to-LoRA propose using hypernetworks to generate LoRA adapters, enabling instant LLM updates without a fine-tuning run. Doc-to-LoRA internalizes documents into adapter weights to provide long-term memory and faster queries, while Text-to-LoRA produces task-specific adapters on demand from natural-language task descriptions; both rely on a meta-trained update generator that amortizes adaptation cost at deployment time. The work discusses training, evaluation across memory and adaptation tasks, visual information transfer, and limitations such as expensive meta-training and open scalability questions.
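The core mechanism can be sketched in a few lines: a hypernetwork maps an embedding of a task description (or document) to the low-rank LoRA factors of a target layer in one forward pass, so adapting the model costs one matrix multiply instead of a gradient-based fine-tuning run. This is a minimal illustrative sketch, not the papers' actual architecture; the dimensions, the single-linear-layer hypernetwork, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 16   # hidden size of the target layer (hypothetical)
RANK = 4       # LoRA rank (hypothetical)
EMB_DIM = 8    # task/document embedding size (hypothetical)

# Hypernetwork parameters: here just one linear map from a task embedding
# to the flattened LoRA factors A (RANK x D_MODEL) and B (D_MODEL x RANK).
W_hyper = rng.normal(scale=0.02, size=(EMB_DIM, 2 * RANK * D_MODEL))

def generate_lora(task_emb: np.ndarray):
    """Map a task embedding to LoRA factors (A, B) in one forward pass."""
    flat = task_emb @ W_hyper
    A = flat[: RANK * D_MODEL].reshape(RANK, D_MODEL)
    B = flat[RANK * D_MODEL:].reshape(D_MODEL, RANK)
    return A, B

# Frozen base weight of one target layer.
W_base = rng.normal(size=(D_MODEL, D_MODEL))

# Stand-in for an encoded task description or document.
task_emb = rng.normal(size=EMB_DIM)

# One hypernetwork call yields the adapter; the update is low-rank.
A, B = generate_lora(task_emb)
W_adapted = W_base + B @ A
assert np.linalg.matrix_rank(B @ A) <= RANK
```

In meta-training, the hypernetwork's own parameters (here `W_hyper`) would be optimized across many tasks or documents so that the generated adapters perform well, which is where the expensive upfront cost mentioned above is paid.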