Domino Data Science Blog
Subir Mansukhani is Staff Software Engineer - 2 at Domino Data Lab. Previously he was the Co-Founder and Chief Data Scientist at Intuition.AI.
This blog post explores the challenges of fine-tuning large language models (LLMs) and introduces resource-optimized and parameter-efficient techniques such as quantization, Low-Rank Adaptation (LoRA), and the Zero Redundancy Optimizer (ZeRO). By fine-tuning Falcon-7B, Falcon-40B, and GPT-J-6B, we demonstrate how these techniques improve performance, cost-effectiveness, and resource utilization in LLM fine-tuning. The post also discusses the future of fine-tuning and its potential for unlocking new possibilities in enterprise AI applications.
By Subir Mansukhani · 9 min read