Tuesday, March 31, 2026

Stop Guessing, Start Directing: From Zero-Shot to Few-Shot Guide for AI Precision

When I started using AI almost a year ago, I found the whole thing utterly amazing. It took me quite a while to realise that the answer wasn’t just something it had found on the web; it had actually generated that answer for me. I was using it like Google Search, and it’s much more powerful than that.

After issuing a command, five seconds later my screen would fill with a response that was… technically correct, but completely useless: too wordy, the wrong tone, hallucinated facts and, worst of all, it sounded like a robot trying too hard to be human. There are so many tell-tale signs of AI-generated text: those long em dashes, and nearly every paragraph summarised with bullet points. Don’t worry, I get rid of all my bullet points before posting my blogs ;-)

It took me a while to realise that continually asking AI to ‘regenerate’ an answer was effectively asking it to roll the dice again and again. Then I stumbled across Zero-Shot, One-Shot and Few-Shot Prompting…

Learning these techniques will be the single most powerful shift you can make to your workflow: understanding when to use Zero-Shot Prompting (your quick-and-dirty command), when One-Shot Prompting (the Goldilocks technique) is enough, and when to switch to Few-Shot Prompting (giving the AI a template to follow).

If you want the AI to stop guessing and start mimicking your brand’s voice, your logic, or your formatting, you need to master these techniques and understand when each one is most appropriate. In 2026, the era of treating AI like a magic 8-ball is over. We are now in the era of structured prompting.

So let’s try to understand these techniques a little better…

Zero-Shot : The "Quick & Dirty" Method

Zero-shot is the "Google Search" that I was unwittingly using when I started working with AI, and I’m sure everyone also started here. It’s built for speed, intuition, and broad strokes. You aren't teaching the AI; you are tapping into its existing massive library of patterns.

If you’re interested in the “sciency” bit, Zero-shot relies on “Global Probability”. When you ask for a "legal summary," the AI looks at the trillions of words it was trained on and predicts what a "standard" legal summary looks like. It’s essentially playing a high-stakes game of "predict the next word" based on general consensus.

What is it good for?…

After spending an initial period asking it to create a poem about pirates lost in a garden centre and to write a story about a grubby bear with a gambling addiction, I found it genuinely useful for brainstorming and producing lists of ideas, such as:

  • Suggest 10 titles for a blog article.

  • Summarize this 40-page PDF into 5 bullet points.

  • Broad Fact-Finding such as "What were the three primary causes of the French Revolution?". These types of prompts lead to those Google Search AI Overviews which provide a deeper, more direct answer than sifting through loads of websites and Wikipedia articles.

  • Translate this menu into conversational Italian.

The Danger Zone

Once you’ve read enough AI-generated text, you’ll start spotting it instantly. Zero-shot is the worst offender for generic, clichéd blocks of text, mostly due to that "Global Probability" mechanism.

For the same reason, it is more likely to hallucinate a "plausible-sounding" answer when it doesn’t know the fact, especially when you haven’t given it any examples to anchor it.

Lastly, if you need the data in a specific format like JSON or CSV, zero-shot will almost always prepend "here is your data!" text that breaks your code.
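To make that concrete, here is the kind of defensive parsing code that chatty zero-shot replies force you to write (a minimal sketch; the sample reply string is invented for illustration):

```python
import json

def extract_json(reply: str) -> dict:
    """Pull the first JSON object out of a chatty model reply.

    Zero-shot responses often wrap the payload in text like
    "Sure, here is your data!", which breaks a naive json.loads().
    """
    start = reply.find("{")
    end = reply.rfind("}")
    if start == -1 or end < start:
        raise ValueError("no JSON object found in reply")
    return json.loads(reply[start:end + 1])

# A typical chatty zero-shot reply (invented for illustration):
reply = 'Sure, here is your data!\n{"size": "small", "toppings": ["cheese"]}'
data = extract_json(reply)
```

A one-shot or few-shot prompt whose example shows bare JSON with no preamble usually makes this scraping step unnecessary.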

One-Shot : The "Goldilocks" Technique

So, onwards and upwards. Sometimes you don’t need a whole training set; you just need to clear up the confusion. One-shot is providing exactly one example. It’s the most efficient way to define a "style" or "format" without cluttering your context window.

Note : A context window refers to the amount of text (measured in tokens) that a Large Language Model (LLM) can process or "remember" at one time.

One-shot acts as a structural anchor. While Zero-shot leaves the AI guessing about your preferred format, a single example removes 90% of that ambiguity. It’s particularly effective for high-performing 2026 models like Gemini 3 Flash or GPT-5, which are now sensitive enough to pivot their entire behavior based on a single data point.

What is it good for?…

  • If you want the output in a specific JSON structure or bullet-point style, you can define the exact format you want just by providing a verbatim example.

  • Provide one previous email you wrote so the AI can mimic your specific tone and level of formality.

  • Or just to be a little more obscure, maybe you’re translating English to "Legal-Speak" where one example shows the level of complexity that you are trying to achieve.
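In practice, a one-shot prompt is just your instruction, one worked input-to-output pair, and then the real task. A minimal sketch, with a hypothetical helper and an invented email example:

```python
def one_shot_prompt(instruction: str, example_in: str,
                    example_out: str, task: str) -> str:
    """Build a one-shot prompt: one input/output pair, then the real task."""
    return (
        f"{instruction}\n\n"
        f"Example input:\n{example_in}\n"
        f"Example output:\n{example_out}\n\n"
        f"Now do the same for:\n{task}"
    )

# Invented example: teaching the model my email tone from one sample.
prompt = one_shot_prompt(
    instruction="Rewrite the email below in my usual friendly-but-brief tone.",
    example_in="Dear Sir, I wish to enquire about the status of my order.",
    example_out="Hi! Quick one - any news on my order? Thanks!",
    task="Dear Madam, I am writing to request a refund.",
)
```

Because the example sits immediately before the task, the strongest local pattern in the context window is your pair, not the model's global average.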

Few-Shot : The "Pattern-Match" Powerhouse

If Zero-shot is “suck it and see”, Few-shot is a 1-on-1 coaching session. You are providing a "mini-dataset" within the prompt, forcing the AI to ignore its global averages and follow your specific logic.

What is it good for?...

In 2026, models now have massive "context windows" (their short-term memory). Few-shot works because the AI prioritizes the patterns found inside the prompt over the patterns it learned during training. You are essentially creating a temporary "Custom GPT" for that single chat.

Why "Three" is the Magic Number

  • One Example is a suggestion (the AI might think it's a fluke).

  • Two Examples create a line (a basic direction).

  • Three Examples create a pattern. Once the AI sees a pattern repeated three times, its mathematical confidence in mimicking that pattern skyrockets.

Pro-Tip: "Diverse Few-Shotting"

Don’t just give three identical examples. Give three different versions of a success.

  • Example 1: Short sentence success.

  • Example 2: Long, complex paragraph success.

  • Example 3: Success with an "edge case" (like a negative or a question).
    This teaches the AI the boundaries of your request, not just the middle.
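Here is what that diverse spread looks like assembled into an actual prompt (a sketch with invented sentiment-classification examples: one short, one long, one edge-case question):

```python
# Invented examples illustrating the short / long / edge-case spread:
# the variety teaches the model the boundaries of the task.
examples = [
    # 1. Short sentence success.
    ("Great service!", "positive"),
    # 2. Long, complex paragraph success.
    ("The delivery took three weeks and the box was damaged, but "
     "support sorted everything out quickly and refunded the postage.",
     "positive"),
    # 3. Edge case: a question that still carries clear sentiment.
    ("Would you call this acceptable?", "negative"),
]

def few_shot_prompt(examples, task: str) -> str:
    """Lay out each example as a Review/Sentiment pair, then the real task."""
    lines = ["Classify the sentiment of each review as positive or negative."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {task}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "Not bad at all.")
```

Ending the prompt with a dangling "Sentiment:" invites the model to complete the pattern with just the label, rather than a chatty sentence.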

The "Shot" Summary

Here’s a wee summary to maybe give you a rule of thumb:

| Technique | Method | Accuracy | Token Cost | Best For... |
| --- | --- | --- | --- | --- |
| Zero-Shot | Just a command. | ⭐⭐ | 🟢 Lowest | General knowledge, brainstorming |
| One-Shot | Command + 1 Example. | ⭐⭐⭐ | 🟡 Low | Setting a specific format or tone |
| Few-Shot | Command + 3-5 Examples. | ⭐⭐⭐⭐⭐ | 🔴 Higher | Logic, complex classification, data clean-up |


Some Examples (The "Secret Sauce")

Now let me try to make all of this less abstract and academic with some solid examples…

Zero-shot prompting

This involves giving the model a direct instruction to perform a task without providing any examples.


Example : Translate this sentence from French to English: 'Bonjour le monde'.


Where it succeeds : Zero-shot is highly efficient for simple, well-understood tasks that the model encountered frequently during training, such as this straightforward translation.

Where it fails : It often falls short when a task requires a specific output structure or when the prompt involves ambiguity, as the model is left guessing the desired format without a pattern to follow.

One-Shot Prompting

One-shot prompting enhances the zero-shot approach by providing exactly one input-output example before presenting the actual request.


Example : Translate the following sentence. Example: 'Salut' → 'Hello'. Now translate: 'Bonjour' → ?.


Where it succeeds : This technique is ideal when the model needs a specific format or context to understand a fairly simple task, giving it a basic starting point to imitate.

Where it fails : One-shot prompting struggles with nuanced tasks because a single example cannot fully capture the range of possible edge cases or complex formatting rules.

Few-Shot Prompting

Few-shot prompting provides multiple examples (typically two to five) to help the model recognize patterns and learn in-context.


Example : Parse a customer's pizza order into valid JSON

EXAMPLE 1 : I want a small pizza with cheese, tomato sauce, and pepperoni.

JSON Response: { "size": "small", "type": "normal", "ingredients": ["cheese", "tomato sauce", "pepperoni"] }

EXAMPLE 2 : Can I get a large pizza with tomato sauce, basil and mozzarella?

JSON Response: { "size": "large", "type": "normal", "ingredients": ["tomato sauce", "basil", "mozzarella"] }

EXAMPLE 3 : Now, I would like a large pizza, with the first half cheese and mozzarella, and the other half tomato sauce, ham and pineapple.

JSON Response: { "size": "large", "type": "half-half", "one-half-ingredients": ["cheese", "mozzarella"], "second-half-ingredients": ["tomato sauce", "ham", "pineapple"] }


Where it succeeds : Few-shot prompting dramatically succeeds where zero-shot and one-shot fail by enforcing strict structural patterns (like generating JSON, YAML, or bulleted lists) and teaching the model how to handle varied, nuanced inputs. It allows the model to learn entirely new concepts in-context, such as successfully using a made-up word in a sentence after seeing a few examples of how it is done.
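One practical habit worth adopting: run every example output through a parser before it goes anywhere near a prompt, because a single typo in an example teaches the model to emit broken JSON. A minimal sketch using the first two pizza orders (I've flattened each ingredient list to a single array for simplicity):

```python
import json

# The first two pizza orders as (order, example JSON) pairs.
examples = [
    ("I want a small pizza with cheese, tomato sauce, and pepperoni.",
     '{"size": "small", "type": "normal",'
     ' "ingredients": ["cheese", "tomato sauce", "pepperoni"]}'),
    ("Can I get a large pizza with tomato sauce, basil and mozzarella?",
     '{"size": "large", "type": "normal",'
     ' "ingredients": ["tomato sauce", "basil", "mozzarella"]}'),
]

# Validate every example before it enters the prompt: json.loads raises
# an error on malformed JSON, catching typos early.
for order, expected in examples:
    parsed = json.loads(expected)
    assert {"size", "type", "ingredients"} <= parsed.keys()

# Assemble the few-shot prompt itself.
prompt = "Parse the customer's pizza order into valid JSON.\n\n"
for order, expected in examples:
    prompt += f"Order: {order}\nJSON: {expected}\n\n"
prompt += "Order: A small pizza with ham and mushrooms.\nJSON:"
```

The same check scales to YAML or CSV examples; just swap in the relevant parser.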

Where it fails : Few-shot prompting hits its limits when dealing with complex, multi-step reasoning or arithmetic tasks. For instance, providing multiple examples of whether a group of odd numbers adds up to an even number might still result in the model returning an incorrect answer for a new list of numbers. Because standard few-shot prompting only shows the final answer rather than the process of getting there, the model fails to learn the underlying logic. To succeed where few-shot fails, you must transition to Chain-of-Thought (CoT) prompting, which provides examples that break the problem down into intermediate reasoning steps. I will delve into CoT prompting in a future post.

Conclusion (and a wee challenge)

The difference between basic AI usage and true mastery often comes down to context.

Use Zero-Shot when you are exploring, brainstorming, or doing a task so simple that it’s almost impossible to mess up. It’s built for speed.

But when reliability, predictability, and precise formatting matter—especially if you are automating workflows—you must use Few-Shot. By providing just three curated examples, you anchor the model's logic, eliminate "AI-isms," and ensure consistent results.

A wee challenge for this week:

  1. Take the last prompt you wrote that gave you a generic, frustrating result.

  2. Structure that same task as a Few-Shot prompt, providing the AI with three examples of what a perfect response looks like.

  3. Compare the outputs.

