Large Language Models are increasingly used, and their capabilities are often surprising. Part of their success comes from their ability to learn from just a few examples, a phenomenon known as in-context learning. In the previous article, we discussed in detail what it is and where it originates; now we will learn how to harness its true power.
All You Need to Know about In-Context Learning
What it is, how it works, and what makes Large Language Models so powerful
towardsdatascience.com
This article is divided into sections, and in each section we will answer the following questions:
- A brief recap of in-context learning
- How do you interact with a model? Which elements should a prompt contain? Can changing the prompt affect the answer?
- How can we improve a model's in-context learning (ICL) ability? What are zero-shot and few-shot prompting? What are chain-of-thought (CoT) prompting and zero-shot CoT? How do you get the best from your CoT? Why can LLMs perform CoT reasoning?
- What is tree-of-thoughts?
- Can we automate this process?
Check the list of references at the end of the article; I also provide some suggestions for deepening the topics.