Prompt Engineering – Part 2: Few-Shot Examples

In this article, I will discuss a popular prompt-engineering technique. Sometimes instructions alone are not enough to guide an LLM toward the desired output, so we can provide a few examples to make our instructions easier for the LLM to follow. This technique is known as few-shot examples. I will also cover some of its drawbacks.

Few-shot examples are a technique for guiding an LLM to follow instructions more closely: we give it a handful of input/output pairs showing the answers we expect, and the LLM picks up the pattern and produces its answer in the same way.

Look at this example:

# You
Input: I love sunny days!
Sentiment: Positive

Input: This movie is terrible.
Sentiment: Negative

Input: It's an average coffee shop, nothing special.
Sentiment: Neutral

Input: I really love dishes from that restaurant
Sentiment:

# ChatGPT
Positive

In this example, you can see that there is no explicit instruction telling the model how to fill in the final Sentiment line, yet it follows the pattern defined by the previous examples.
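If you are calling a model through an API rather than the ChatGPT UI, the same few-shot prompt can be sent as-is. Here is a minimal sketch using the OpenAI Python SDK; the model name, client setup, and parameters are my assumptions, so adapt them to your own provider:

```python
# Minimal sketch: sending the few-shot sentiment prompt via the OpenAI Python SDK.
# Assumes the `openai` package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = """Input: I love sunny days!
Sentiment: Positive

Input: This movie is terrible.
Sentiment: Negative

Input: It's an average coffee shop, nothing special.
Sentiment: Neutral

Input: I really love dishes from that restaurant
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model will do
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0,          # keep the answer deterministic and pattern-following
    max_tokens=5,           # a one-word label is all we expect back
)
print(response.choices[0].message.content.strip())  # expected: "Positive"
```

Some chat APIs also let you pass the examples as alternating user/assistant messages instead of one big prompt; the effect is the same, so pick whichever form is easier to maintain.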

This technique is helpful when you have complex instructions that are hard to describe in plain prose. It narrows the model's possible answers toward the style of your examples. The same idea is used in fine-tuning, where a set of examples becomes training data that steers the model's answers in the direction we want.

Because we must send the entire prompt (instructions, examples, and user input) to the LLM on every single call, this technique comes with some drawbacks:

  • Context window size: an LLM can't process a prompt larger than its context window. For example, gpt-3.5-turbo-16k accepts at most 16k tokens in one request. Context windows keep growing, but there will always be a limit.
  • Token usage: most LLM services charge per token, so a bigger prompt means a bigger bill; GPT-4 is currently priced at roughly 60x GPT-3.5 (see the token-counting sketch after this list).
  • Latency: a bigger payload takes longer to send over the network and longer for the model to process.
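To make the token-usage point concrete, you can measure how much of the prompt budget the examples consume on every call. Below is a rough sketch using the tiktoken library; using it here is my assumption, and any tokenizer that matches your model will do:

```python
# Rough sketch: counting how many tokens the few-shot examples add to every call.
# Assumes tiktoken is installed (pip install tiktoken).
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

instructions = "Classify the sentiment of the input as Positive, Negative, or Neutral."
examples = (
    "Input: I love sunny days!\nSentiment: Positive\n\n"
    "Input: This movie is terrible.\nSentiment: Negative\n\n"
    "Input: It's an average coffee shop, nothing special.\nSentiment: Neutral\n\n"
)
user_input = "Input: I really love dishes from that restaurant\nSentiment:"

for name, text in [("instructions", instructions),
                   ("examples", examples),
                   ("user input", user_input)]:
    print(f"{name}: {len(encoding.encode(text))} tokens")
# The examples portion is paid for (and adds latency) on every single request.
```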

If you can get the LLM to follow your instructions using fewer tokens than the examples would cost, do that instead. Start with a small set of examples and add more until you are satisfied with the answers, as sketched below.
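One way to apply that advice is to parameterize the number of examples and grow it gradually. The sketch below only builds the prompts at each size; the example pairs come from the sentiment demo above, and everything else is illustrative rather than prescriptive:

```python
# Minimal sketch: start with one example and add more only if the answers
# are not yet good enough. Re-run your test inputs at each step.
EXAMPLES = [
    ("I love sunny days!", "Positive"),
    ("This movie is terrible.", "Negative"),
    ("It's an average coffee shop, nothing special.", "Neutral"),
]

def build_prompt(num_shots: int, user_input: str) -> str:
    """Assemble a few-shot prompt using only the first `num_shots` examples."""
    shots = [f"Input: {text}\nSentiment: {label}" for text, label in EXAMPLES[:num_shots]]
    shots.append(f"Input: {user_input}\nSentiment:")
    return "\n\n".join(shots)

for num_shots in range(1, len(EXAMPLES) + 1):
    print(f"--- prompt with {num_shots} example(s) ---")
    print(build_prompt(num_shots, "I really love dishes from that restaurant"))
    print()
```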

In conclusion, this is a powerful technique that you should always consider when designing prompts for AI agents. Happy prompting! See you in the next technique.