Have you ever typed something into ChatGPT, and the answer was… just okay? 😞 And then, another time, you asked differently, and suddenly it was brilliant?
That difference is all about prompting.
We’re going to look at seven prompting techniques that researchers and creators use to get the best out of language models. By the end of this post, you’ll have a toolbox you can actually use, whether you’re coding, writing, studying, or just exploring.
Prompting is basically the way we talk to an AI.
But here’s the catch: prompting isn’t a built-in feature. It’s an emergent property. In other words, nobody coded a special module called “prompting.” We discovered these tricks by experimenting with how language models behave.
It’s like teaching someone a new language — the way you ask the question changes the answer you get.
Let’s start with the simplest one: zero-shot prompting.
You ask an AI model to perform a task without providing any examples of how it should be done. Instead, you only describe the task in natural language, and the model has to infer the pattern and generate the correct output purely from your instructions and its prior knowledge.
This technique is easy to use, fast, and works for many tasks, but the results can be less accurate, depend heavily on how you phrase the request, and may be inconsistent.
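As a minimal sketch, a zero-shot prompt is nothing more than the task description plus the input; the helper function and the sample task below are illustrative, not part of any particular API:

```python
def zero_shot_prompt(task: str, text: str) -> str:
    """Build a zero-shot prompt: an instruction and the input, no examples."""
    return f"{task}\n\nInput: {text}\nOutput:"

prompt = zero_shot_prompt(
    "Classify the sentiment of the input as positive or negative.",
    "The battery died after an hour.",
)
print(prompt)
```

Whatever the model writes after `Output:` is its answer. There is no pattern to imitate, only the instruction itself.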
In this context, “shot” means example.
It’s like giving the AI a “shot” at learning the pattern from your prompt. The more “shots” (examples) you give, the better it usually understands the task.
People often talk more about zero-shot (fast, no examples) and few-shot (stronger, with several examples). One-shot is useful, but in practice, if you’re already preparing a prompt, it’s almost as easy to add more than one example — so “few-shot” gets more attention.
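To make the contrast concrete, here is a sketch of a few-shot prompt builder (the function name and the sample task are mine, not from any library): pass one example and you have one-shot, pass several and you have few-shot.

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: the task, worked examples, then the new input."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    "Convert each country to its capital.",
    [("France", "Paris"), ("Japan", "Tokyo")],
    "Canada",
)
print(prompt)
```

The examples show the model the exact input/output pattern, so it can complete the last `Output:` line by analogy.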
Sometimes you don’t just want an answer, you want reasoning. That’s where chain-of-thought prompting comes in. You literally ask the model to ‘think step by step.’
Instead of jumping straight to the result, the model “thinks out loud” in natural language, which usually improves accuracy on tasks that involve logic, math, or multi-step reasoning.
Common trigger phrases include:
- “Let’s think step by step.”
- “Explain your reasoning before giving the final answer.”
- “Show your work.”
The benefit is clearer reasoning and usually better accuracy, but the downside is longer responses and sometimes unnecessary or incorrect steps. It’s best used for complex problems, not simple factual tasks.
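A chain-of-thought prompt can be as simple as appending one of those trigger phrases to the question. The helper below is a sketch; the default trigger and the closing instruction are conventions, not a fixed API:

```python
def chain_of_thought_prompt(question: str,
                            trigger: str = "Let's think step by step.") -> str:
    """Append a reasoning trigger so the model explains before answering."""
    return f"{question}\n\n{trigger}\nGive the final answer on the last line."

prompt = chain_of_thought_prompt(
    "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"
)
print(prompt)
```

Asking for the final answer on its own line makes the response easy to parse once the reasoning is done.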
Generated knowledge prompting is a technique where the model is first asked to generate some background information or context about a problem, and then that knowledge is used to answer the actual question. Instead of jumping straight to the solution, the model creates intermediate facts or explanations to “prime” itself. This often improves accuracy, especially for reasoning or domain-specific tasks.
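A two-stage sketch of that flow (the prompt wording is an assumption, not a fixed recipe): one prompt elicits the background facts, and a second prompt answers using them.

```python
def knowledge_prompt(question: str) -> str:
    """Stage 1: ask the model to write down relevant background facts."""
    return f"List three facts that are relevant to answering:\n{question}"

def grounded_answer_prompt(question: str, knowledge: str) -> str:
    """Stage 2: answer the question using the generated knowledge as context."""
    return (f"Knowledge:\n{knowledge}\n\n"
            f"Using only the knowledge above, answer:\n{question}")

q = "Do penguins live at the North Pole?"
stage1 = knowledge_prompt(q)
# In practice you would send stage1 to the model and paste its reply in here:
stage2 = grounded_answer_prompt(q, "Penguins live in the Southern Hemisphere.")
print(stage2)
```

The key point is the split: the model’s own output from stage 1 becomes part of the input to stage 2.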
Least-to-most prompting is a technique where a complex problem is broken down into a sequence of simpler subproblems, and the AI is guided to solve them step by step, starting with the easiest part and gradually moving to the hardest.
The idea is to reduce cognitive load for the model — instead of tackling the whole problem at once, it solves simpler pieces first, building toward the final answer. This often improves performance on multi-step reasoning, math, or logic tasks.
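Least-to-most prompting is usually run in two stages: one prompt asks the model to decompose the problem, then one prompt per subproblem carries the answers gathered so far. A sketch with illustrative wording:

```python
def decompose_prompt(problem: str) -> str:
    """Stage 1: ask for a list of subproblems, easiest first."""
    return f"Break this problem into simpler subproblems, easiest first:\n{problem}"

def solve_next_prompt(problem: str, solved: list[tuple[str, str]],
                      subproblem: str) -> str:
    """Stage 2: solve one subproblem, reusing the answers gathered so far."""
    history = "".join(f"Q: {q}\nA: {a}\n" for q, a in solved)
    return f"Problem: {problem}\n{history}Q: {subproblem}\nA:"

problem = "Amy has 3 apples, buys 2 bags of 4, and eats 1. How many are left?"
step1 = solve_next_prompt(problem, [], "How many apples do 2 bags of 4 hold?")
step2 = solve_next_prompt(problem,
                          [("How many apples do 2 bags of 4 hold?", "8")],
                          "How many apples does Amy have now in total?")
print(step2)
```

Each new prompt includes the earlier question/answer pairs, so the hardest subproblem is answered with all the easier results already in context.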
Self-refine is a technique where the AI is asked to review and improve its own initial answer, often iteratively, to produce a more accurate or polished result. Instead of just providing one answer, the model evaluates its output, identifies mistakes or gaps, and refines it.
It’s distinct from other techniques because it doesn’t require a human to provide the feedback or a separate model for the critique. It is particularly effective for complex tasks like code optimization, creative writing, or long-form content generation, where a single prompt is unlikely to produce a perfect result.
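The loop below sketches that cycle: draft, critique, revise, all with the same model. Here `model` is any callable that maps a prompt string to a reply string, and the prompt wording is an assumption of mine, not taken from any specific paper or library.

```python
def self_refine(model, task: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly critique and rewrite it."""
    answer = model(f"Task: {task}\nWrite a first answer.")
    for _ in range(rounds):
        critique = model(f"Task: {task}\nAnswer:\n{answer}\n"
                         "List concrete flaws in this answer.")
        answer = model(f"Task: {task}\nAnswer:\n{answer}\n"
                       f"Critique:\n{critique}\n"
                       "Rewrite the answer, fixing every flaw.")
    return answer

# Demo with a stub model that just numbers each call; a real model would
# return generated text here.
log = []
def stub_model(prompt: str) -> str:
    log.append(prompt)
    return f"reply #{len(log)}"

final = self_refine(stub_model, "Summarize the article", rounds=2)
print(final, len(log))  # the last revision, and the total number of model calls
```

With two rounds, the model is called five times: one draft, then a critique and a rewrite per round.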
Maieutic prompting is a technique inspired by the Socratic method of “maieutics,” or “midwifery.” The core idea is to guide the AI to “give birth” to its own insights and understanding through a series of structured, open-ended questions, rather than simply giving it a direct instruction.
Just as Socrates would ask questions to help his students uncover truths they already held within their minds, maieutic prompting uses questions to draw out the reasoning, knowledge, and potential solutions that the AI already possesses.
Instead of a single, long-winded prompt, maieutic prompting involves a back-and-forth conversation. You might start with a broad question and then follow up with more specific, probing questions based on the AI’s response.
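A heavily simplified sketch of that conversational structure: ask the model for opposing explanations of a claim, then ask it to adjudicate between them. The question wording is illustrative, and the full method described in the research literature builds a whole tree of such explanations rather than a single three-question exchange.

```python
def maieutic_questions(claim: str) -> list[str]:
    """Socratic follow-ups that draw out the model's own reasoning."""
    return [
        f"Explain why this might be true: {claim}",
        f"Explain why this might be false: {claim}",
        "Which of your two explanations is more logically consistent, "
        "and what does that imply about the claim?",
    ]

for q in maieutic_questions("Butterflies can fly in the rain."):
    print(q)  # in practice, each model reply shapes the next question you ask
```

The point is the shape of the dialogue: each answer becomes material for the next probing question, instead of one prompt trying to do everything.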
Mastering AI prompting techniques is the key to unlocking the full potential of large language models. From zero-shot simplicity to few-shot guidance, from reasoning-oriented methods like chain-of-thought and least-to-most, to advanced strategies like self-refine and maieutic prompting, each approach offers unique advantages depending on your task.
By understanding and experimenting with these techniques, you can guide AI more effectively, produce higher-quality outputs, and solve complex problems with confidence. Ultimately, the best results come from combining these strategies thoughtfully, adapting them to your specific goals, and iterating to find what works best.