
How to ask an AI model. Please?

“Prompt engineering,” or the art of asking the right question in the right way, is a bit like being a manager delegating work to your team. For example, if you’re working with a generative artificial intelligence model and you want it to write a cover letter for a job, you don’t just tell it to “write something.” You give it a well-constructed prompt, showing your resume and describing the job you’re applying for. Prompt engineering is about how you provide information to a model and how you ask for results. But it’s not an abstract art: just a few years after the emergence of generative models, hundreds of scientific articles have already been written on the subject. Fortunately, on February 5, 2024, an excellent review of these articles was published. It is about 8 pages long and easy to read, but we realize it’s not for everyone! So we thought we’d extract the six most important techniques and add some examples.

1. Zero-Shot Prompting: The most immediate use. It’s a bit like walking into a room where there’s an intern and asking them a point-blank question. You present the model with a task it has never seen before, without any examples, and it uses what it already knows to make a guess. For instance, you could ask an AI: “What is the best cold brew coffee on the market?” without ever having taught the model anything about coffee, extraction methods, or what makes a flavor good.
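
To make this concrete, here’s a minimal sketch of a zero-shot call in Python using the OpenAI SDK; the client setup and the model name are assumptions, and any chat-style API works the same way. The point is simply that the prompt contains the question and nothing else.

```python
# Zero-shot: the prompt is just the question, with no examples or extra context.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary model choice for this sketch
    messages=[
        {"role": "user", "content": "What is the best cold brew coffee on the market?"}
    ],
)
print(response.choices[0].message.content)
```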

2. Few-Shot Prompting: This is like giving the AI a little help. Instead of sending it blindly into the task, you provide a few examples to show it what you’re looking for. Say you’re teaching the AI animal sounds. You might say: “A dog goes ‘woof,’ a cat goes ‘meow,’ what does a cow do?” With these few examples, the AI grasps the pattern and can respond: “A cow goes ‘moo’.” This technique is, in our opinion, essential: providing examples to the model dramatically improves the responses.
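
Here’s the same sketch with a few examples packed directly into the prompt (you could just as well pass them as earlier user/assistant turns); as before, the SDK and the model name are only assumptions.

```python
# Few-shot: a handful of examples show the pattern before the real question.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = (
    "A dog goes 'woof'.\n"
    "A cat goes 'meow'.\n"
    "What does a cow do?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected: something like "A cow goes 'moo'."
```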

3. Chain-of-Thought Prompting (CoT): Sometimes, problems require a bit of step-by-step reflection. CoT is like encouraging the AI to think out loud as it solves a problem. Imagine a math problem like: “If you have 5 apples and you give away 2, how many do you have left?” You wouldn’t believe it, but adding “Break down the problem and reason step by step” allows the model to perform calculations it would otherwise get wrong.
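
In practice, chain-of-thought often amounts to appending that one instruction to the prompt, as in this sketch (same assumptions as the previous ones).

```python
# Chain of thought: ask the model to reason step by step before giving its answer.
from openai import OpenAI

client = OpenAI()

problem = "If you have 5 apples and you give away 2, how many do you have left?"
cot_prompt = problem + "\nBreak down the problem and reason step by step."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)
```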

4. Retrieval-Augmented Generation (RAG): It’s not exactly prompt engineering, but we can’t help but mention it. Have you ever used a cheat sheet during a test? That’s RAG for AI. Faced with a question, the AI first retrieves extra information from a vast database and uses it to enrich its answer. So, if you ask: “Who was the first person on the moon?” the AI might retrieve details about the Apollo 11 mission to give you a more comprehensive answer.
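
A real RAG system relies on embeddings and a vector database, but the core idea fits in a toy sketch: look up a relevant passage in your own documents (here with a naive keyword match, purely for illustration) and paste it into the prompt before the question. The documents, the matching rule, and the model name are all assumptions.

```python
# Toy RAG: retrieve passages from a tiny "database", then answer using them as context.
from openai import OpenAI

client = OpenAI()

documents = [
    "Apollo 11 landed on the Moon on July 20, 1969.",
    "Neil Armstrong was the first person to walk on the Moon, followed by Buzz Aldrin.",
    "Cold brew coffee is steeped in cold water for 12 to 24 hours.",
]

question = "Who was the first person on the moon?"

# Naive retrieval: keep documents that share words with the question
# (real systems use vector search over embeddings instead).
keywords = set(question.lower().split())
retrieved = [doc for doc in documents if keywords & set(doc.lower().split())]

rag_prompt = (
    "Use the following context to answer the question.\n\n"
    "Context:\n" + "\n".join(retrieved) + "\n\n"
    "Question: " + question
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": rag_prompt}],
)
print(response.choices[0].message.content)
```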

5. Self-Consistency: This technique is like double-checking your work in a test to make sure the answers are consistent. You have the model generate multiple answers to a problem and then compare them, keeping the one that comes up most consistently. So, if it’s solving a riddle, it might propose several hypotheses and then settle on the one that makes the most sense based on what it knows.
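
One way to sketch self-consistency: sample several answers at a non-zero temperature, pull out each final answer, and keep the most frequent one. The “Answer:” convention, the number of samples, and the model name are all assumptions made to keep the example short.

```python
# Self-consistency: sample several reasoning paths and take a majority vote on the final answer.
from collections import Counter

from openai import OpenAI

client = OpenAI()

prompt = (
    "If you have 5 apples and you give away 2, how many do you have left? "
    "Reason step by step, then finish with a line of the form 'Answer: <number>'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.8,  # some randomness, so the samples can genuinely disagree
    n=5,              # five independent samples
)

answers = []
for choice in response.choices:
    last_line = choice.message.content.strip().splitlines()[-1]
    answers.append(last_line.replace("Answer:", "").strip())

print(Counter(answers).most_common(1)[0][0])  # the answer that came up most often
```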

6. Be kind: This means you should always say “Please” when asking and “Thank you. Could you now…” when re-asking. This technique actually isn’t in the review; it comes from my grandma. But, believe it or not, we noticed that it also works with GPTs, not just people!

Happy prompting everyone!