Tag: Tuning

Fine-tuning Llama 2 for news category prediction: A step-by-step comprehensive guide to…

In this blog, I will guide you through the process of fine-tuning Meta's Llama 2 7B model for news article categorization across 18 different categories. I will use a news classification instruction dataset that I previously created with GPT-3.5. If you're interested ...

Fine-tuning LLMs

Catastrophic Forgetting (degrades model performance)

Catastrophic forgetting occurs when a machine learning model forgets previously learned information as it learns new information. It is especially problematic in sequential learning scenarios, where the model is trained on multiple ...
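The effect described in this teaser can be illustrated with a minimal sketch: a one-parameter model is fit to task A by gradient descent, then trained further on a conflicting task B, after which its task-A error collapses. The tasks, learning rate, and model are hypothetical toy choices, not anything from the post itself.

```python
# Toy illustration of catastrophic forgetting (hypothetical example).
# Task A: y = 2x; Task B: y = -2x. Training sequentially on B erases A.

def train(w, data, lr=0.1, steps=100):
    """SGD on squared error for a single-weight linear model y = w * x."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def loss(w, data):
    """Mean squared error of y = w * x over a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2 * x) for x in (-2, -1, 1, 2)]
task_b = [(x, -2 * x) for x in (-2, -1, 1, 2)]

w = train(0.0, task_a)            # learn task A (w converges toward 2)
loss_a_before = loss(w, task_a)   # near zero after training on A

w = train(w, task_b)              # then learn task B (w driven toward -2)
loss_a_after = loss(w, task_a)    # task-A performance collapses

print(loss_a_before, loss_a_after)
```

In a real LLM the "weight" is millions of parameters, but the failure mode is the same: optimizing only the new objective overwrites the parameters that encoded the old one, which is why techniques like replaying old data or freezing parts of the model are used during fine-tuning.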

A Beginner's Guide to LLM Fine-Tuning

The growing interest in Large Language Models (LLMs) has led to a surge in tools and wrappers designed to streamline their training process. Popular options include FastChat from LMSYS (used to train Vicuna) and Hugging Face’s transformers/trl libraries (used i...

Fine-Tuning a Llama-2 7B Model for Python Code Generation

About two weeks ago, the world of generative AI was shaken by Meta's release of the new Llama-2 model. Its predecessor, Llama-1, was a turning point in the LLM industry: the release of its weights, together with new fine-tuning techniques, sparked a massive creation of open-s...