
How to ask an AI model. Please?

Six good techniques for prompt engineering

“Prompt engineering,” the art of asking the right question in the right way, is a bit like being a manager delegating work to your team. If you’re working with a generative artificial intelligence model and you want it to write a cover letter for a job, you don’t just tell it to “write something.” You give it a well-constructed prompt, showing your resume and describing the job you’re applying for. Prompt engineering is about how to provide information to a model, and how to ask for results. But it’s not an abstract art: just a few years after the emergence of generative models, hundreds of scientific articles have already been written on the subject. Fortunately, on February 5, 2024, an excellent review of these articles was published. It is about 8 pages long and easy to read, but we realize it’s not for everyone! So we decided to extract the six most important techniques and add some examples.

1. Zero-Shot Prompting: The most immediate use. It’s a bit like walking into a room where there’s an intern and asking them a point-blank question. You present the model with a task it has never seen before, without any examples, and it uses what it already knows to make a guess. For instance, you could ask an AI: “What is the best cold brew coffee on the market?” without ever having taught the model anything about coffee, extraction methods, or how humans appreciate good flavors.
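As a minimal sketch in Python: a zero-shot request is just the bare question, with no examples. The `zero_shot_messages` helper is our own illustration (not from the review); it builds the message list that most chat APIs expect.

```python
def zero_shot_messages(task: str) -> list[dict]:
    """Build a chat request with no examples: the model must rely
    entirely on what it already knows."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": task},
    ]

messages = zero_shot_messages("What is the best cold brew coffee on the market?")
```

These messages would then be sent to whatever chat endpoint you use; the point is simply that nothing but the question is provided.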

2. Few-Shot Prompting: This is like giving the AI a little help. Instead of sending it blindly into the task, you provide some examples to help it understand what you’re looking for. Say you’re teaching the AI animal sounds. You might say: “A dog goes ‘woof,’ a cat goes ‘meow,’ what does a cow do?” With these few examples, the AI grasps the concept and can respond: “A cow goes ‘moo’.” This technique is, in our opinion, essential: providing examples to the model dramatically improves its responses.
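To make the contrast with zero-shot concrete, here is a sketch of a few-shot request: the example question/answer pairs are placed in the conversation before the real question. The helper name and messages are our own illustration, not from the review.

```python
def few_shot_messages(examples: list[tuple[str, str]], question: str) -> list[dict]:
    """Put example question/answer pairs before the real question,
    so the model can infer the pattern it should follow."""
    messages = [{"role": "system", "content": "Answer in the style of the examples."}]
    for q, a in examples:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

msgs = few_shot_messages(
    [("What does a dog say?", "A dog goes 'woof'."),
     ("What does a cat say?", "A cat goes 'meow'.")],
    "What does a cow say?",
)
```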

3. Chain of Thought Prompting (CoT): Sometimes, problems require a bit of step-by-step reflection. CoT is like encouraging the AI to think out loud as it solves a problem. Imagine a complex math problem like: “If you have 5 apples and you give away 2, how many do you have left?” You wouldn’t believe it, but adding “Break down the problem and reason step by step” allows the model to perform calculations it otherwise wouldn’t.
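In its simplest form, chain-of-thought prompting is just an instruction appended to the problem. A sketch (the helper name is ours):

```python
COT_INSTRUCTION = "Break down the problem and reason step by step."

def chain_of_thought_prompt(problem: str) -> str:
    # The appended instruction nudges the model to write out
    # intermediate steps before giving its final answer.
    return f"{problem}\n\n{COT_INSTRUCTION}"

prompt = chain_of_thought_prompt(
    "If you have 5 apples and you give away 2, how many do you have left?"
)
```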

4. Retrieval-Augmented Generation (RAG): It’s not exactly prompt engineering, but we can’t help but mention it. Have you ever used a cheat sheet during a test? That’s RAG for AI. Faced with a question, the AI pulls in extra information from a vast database to enrich its answer. So, if you ask: “Who was the first person on the moon?” the AI might pull in extra details about the Apollo 11 mission to give you a more comprehensive answer.
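A toy sketch of the retrieval step: real RAG systems use vector embeddings and large document stores, but ranking by word overlap is enough to show the idea of fetching context and stuffing it into the prompt. All names and documents here are our own illustration.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase words, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a toy retriever)."""
    ranked = sorted(documents, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)
    return ranked[:top_k]

def rag_prompt(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    return f"Using the context below, answer the question.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Apollo 11 landed on the moon on July 20, 1969.",
    "Neil Armstrong was the first person to walk on the moon.",
    "The Great Wall of China is visible from low orbit.",
]
prompt = rag_prompt("Who was the first person on the moon?", docs)
```

The model then answers from the retrieved context rather than from memory alone, which is what makes the final answer richer and easier to ground.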

5. Self-Consistency: This technique is like double-checking your work in a test to make sure the answers are consistent. You tell the model to generate multiple answers to a problem, and then compare them to find the most coherent solution. So, if it’s solving a riddle, it might propose several hypotheses and then focus on the one that makes the most sense based on what it knows.
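A sketch of the voting step: in practice each answer comes from a separate model call with some randomness (temperature above zero); here the sampled outputs are simulated.

```python
from collections import Counter

def self_consistent_answer(sampled_answers: list[str]) -> str:
    """Return the answer that appears most often across the samples
    (a simple majority vote over independent generations)."""
    return Counter(sampled_answers).most_common(1)[0][0]

# Simulated outputs of five independent calls to the same prompt.
samples = ["3 apples", "3 apples", "4 apples", "3 apples", "2 apples"]
best = self_consistent_answer(samples)
```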

6. Be kind: This means you should always say “Please” when asking and “Thank you. Could you now…” when re-asking. This technique is actually not in the review—it comes from my grandma. But, believe it or not, we noticed that it works with GPTs too, not just with people!

Happy prompting everyone!

AI and Us

Have you ever noticed how quickly our world has changed with technology? One day we were talking on Nokia phones, the next everyone was posting on Instagram and the like. And the next big change might just be AI, or Artificial Intelligence.

The question is: should we be happy that AI is entering our lives, or should we eye it suspiciously from across the room?

AI is certainly the ideal helper when it comes to chatting and sharing interesting things. It can hold onto a mountain of information and find what we need in moments. Generative AI can now even read and digest texts for us. That’s why in the book The Amazing Journey of Reason, written by Mario Alemi, the co-founder of MrCall, we find the idea that human society is increasingly transforming into a network of brains connected by digital synapses. As a species, we are becoming smarter as a whole because we are getting better at saving a trace of what we know and sharing it, just as neurons do in our brain, and proteins in our cells.

But there’s a problem – AI doesn’t intimately understand being human. It doesn’t understand, for example, our jokes, our dreams, or why so many people love kitten videos (we don’t understand that either…). That’s why we need to treat AI as a team project: involve thinkers, artists, and emotional people – not just technicians – to ensure that AI doesn’t become a party crasher. Otherwise, AI could become like an autoimmune disease, mistakenly attacking the very entity that produced it and that it was designed to protect. The metaphor is a reminder that AI’s capabilities must be aligned with humanity’s wellbeing, not turned against it.

Let’s say AI should be like our right hand, not the other way around. It shouldn’t be humans “training” algorithms with “likes” and “dislikes” to teach them which image sells more (it’s always the kittens in the end).

Otherwise, we risk ending up chatting with bots instead of other humans. AI is like any other powerful tool – to be handled with care.

So, what’s the situation? It’s really up to us to decide – AI is like a river. Channeled correctly, rivers have nourished civilizations, empowered communities, and created pathways to unexplored knowledge. But if left unchecked, rivers can flood and irreparably damage the very city they made great.

It should be our hands, and not just the invisible ones of the market, guiding the development of such powerful technologies. Nuclear energy has brought wellbeing where it has been used well, disasters where it has been used poorly, and forgone prosperity where it has been rejected.

From Fire to Generative Artificial Intelligence

From the time our ancestors discovered fire to today’s solar and nuclear power plants, we have witnessed energy revolutions that have fueled human progress. But that’s not all: there have also been information revolutions – from the birth of language to writing, from printing to telephony, and, of course, the internet.

Now here we are, in the midst of the web era, overwhelmed as if by a raging river. Our daily life, our species, our society, everything has changed. It hasn’t even been two decades, and we’ve gone from not knowing whether it was summer or winter in Brazil to being constantly informed about what’s happening on the other side of the world. In the end, we feel it close, as if it were right next to us. The world has become small, and perhaps a little better, despite everything. Yes, with ups and downs…

But there’s more. Illiteracy is becoming a thing of the past, and with it, population growth is slowing down. Women are studying, working, and the world, step by step, seems to be moving towards a slightly more sustainable future.

But there’s something that’s growing and getting worse: digital bureaucracy. Yes, that dark forest of rules, numbers, documents, and interfaces that seem to come out of a Kafkaesque nightmare. Who hasn’t cursed the municipal website while trying to pay a fine, right? Worse than the fine itself!

The realm of the absurd: we, the sapiens, with two million years of linguistic evolution behind us, after writing the Odyssey, now find ourselves pressing buttons on glossy screens like neurotic monkeys at the zoo. But, we believe, things are about to change: thanks to Generative Artificial Intelligence.

Did you think Generative Artificial Intelligence would only serve to make us laugh and cry with online content? That too. But the real magic lies in its power to change the way we interact with the digital world. Imagine being able to talk to a database as if it were an old friend. “Give me the addresses of the customers.” Done. And in the blink of an eye, without needing to know SQL.
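To make the “without needing to know SQL” point concrete, here is a sketch in which the model’s answer is hard-coded (a stand-in for a real model call) and executed against a toy in-memory database:

```python
import sqlite3

# Stand-in for what a model might generate from
# "Give me the addresses of the customers" (not a real model call).
generated_sql = "SELECT address FROM customers"

# A toy in-memory database with a couple of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, address TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Ada", "12 Lovelace St"), ("Alan", "3 Turing Rd")],
)
addresses = [row[0] for row in conn.execute(generated_sql)]
conn.close()
```

In a real system, of course, model-generated SQL must be validated before it touches the database.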

An agent with Generative Artificial Intelligence could even navigate that awful municipal website for you. No matter how complicated it is: it doesn’t give up in front of a bad interface (but you still have to pay the fine).

And now, the million-dollar question (or rather, billion-dollar): how much is this technology worth? Look at OpenAI: from zero to 90 billion in no time. And Nvidia, whose chips run this software, is approaching a market capitalization of almost a trillion dollars.

A bubble? Maybe. But maybe not. If we think about the astronomical increases in market capitalizations in past years, who can say where we will be in the not-too-distant future? OpenAI, Nvidia, or some unknown new player: someone will scale these new heights. And we will be here, perhaps more neurotic, perhaps less, but certainly more connected to our digital world.

Artificial Intelligence and the Interface

As (almost) everyone knows, the first artificial intelligence program was created in the 1950s by Herbert Simon, who won the Nobel Prize for… economics about twenty years later!

In his book “The Sciences of the Artificial,” Simon writes:

An artifact [the artificial] can be thought of as a meeting point—an “interface” in today’s terms— between an “inner” environment … and an “outer” environment …

Considering that the software he wrote with others in 1956, the Logic Theorist, proved theorems of logic, this seems strange. Yet today more than ever, it is clear how right Simon was.

More than ever today, with the arrival of generative artificial intelligence, the technology behind ChatGPT and company. Have a look at the latest article from your favourite influencer: GPT is so intelligent! So much more than just an interface, it understands!

Yes, it behaves intelligently – but so does a car that accelerates and brakes on its own. Nonetheless, we don’t call that a miracle, nor the sapiens-machine singularity.

The intelligence we see in AI today is not really its own – AI is more like a puppet that shows the cunning of those who pull the strings. Those hundreds of millions of dollars spent by OpenAI on ChatGPT? They were used to pay thousands of people to teach it how to behave intelligently. Basically, GPT-4 on its own is just very good at playing with words, but it doesn’t really understand them.

This brings us back to Simon and the artificial seen as an interface between its internal world (the digital) and ours (the analog). Think of generative AI as a fantastic gadget that sits between us and our technological tools. If used well, it will do an excellent job in helping us use digital tools.

Generative Artificial Intelligence, in our opinion, will explode as an intermediary between sapiens and the digital world. Put a bit more technically: it translates requests made in natural language into programmatic languages, and vice versa. For software, understanding “I would like to come on November 9th at 10:15” is not trivial. For a language model like GPT, on the contrary, it is easy – as is translating it into a “computer format”: asking for the year (which it does not know) and generating 2024-11-09T10:15:00+02:00, which means the same thing but which every developer can easily use in their own code. This is the interface!
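A sketch of the developer’s side of that interface: once the model has produced an ISO 8601 string, standard library code can consume it directly. Here the model output is hard-coded for illustration, with the timezone offset written as +02:00, the form Python’s `datetime.fromisoformat` accepts across versions.

```python
from datetime import datetime

# Stand-in for the model's translation of
# "I would like to come on November 9th at 10:15".
model_output = "2024-11-09T10:15:00+02:00"

# The "computer format" drops straight into ordinary application code.
appointment = datetime.fromisoformat(model_output)
```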

For now, we use models like GPT in this way. We take them for what they do best: interpreting language. The result? Useful software that doesn’t get lost in chatter but tries to solve the problem in the best possible way.

Trustworthy Artificial Intelligence—Our Vision

The fascinating world of data analysis has progressed by leaps and bounds, with every bit of the digital sphere becoming a potential goldmine of insights. In politics, it has rewritten the rules of engagement and redrawn the battle lines in many instances.

Two prominent examples, Brexit and the election of Donald Trump, exemplify the profound implications that the intersection of Artificial Intelligence, social media, and politics can bring about.

To begin with, the case of Brexit provides an enlightening illustration. The decision of the UK to exit the European Union was a significant political and economic event with global repercussions. It may seem that the vote was purely a democratic expression of the will of the British people. However, the exploitation of data produced by social media platforms in this saga is a factor that can’t be underestimated.

Algorithms trawled through enormous amounts of data to identify patterns. These algorithms were the product of a specialised field that combines computer science with psychology, economics, and sociology. Using this information, as Cambridge Analytica proudly touted to everyone (they described themselves as a “global election management agency”), political strategists were able to target users with tailored content that played upon their fears, aspirations, and biases.

This gave birth to the phenomenon of Fake News. People were targeted with skewed or outright false narratives about the European Union, designed to fuel Euroscepticism. These messages were cleverly designed to resonate with individuals’ existing views and fears, making them more likely to vote ‘Leave.’

Similar tactics were witnessed during the 2016 U.S. presidential elections, which led to the victory of Donald Trump. Here, the plot thickened with the alleged involvement of foreign powers. Accusations were made about an army of social media bots, created and controlled by foreign entities, designed to flood American social media platforms with propaganda and disinformation. These bots exploited the divisions in American society, sowing confusion, discord, and distrust.

The episode serves as a warning about the potential abuse of artificial intelligence and social media in political contexts.

These cases point to a potential danger that looms in the future: the prospect of an Artificial Intelligence agent with the capability to influence each voter individually. Imagine a software so advanced that it could craft the perfect argument to sway every single voter. Such an AI could manipulate voters into supporting a candidate not on the basis of their policies or merits but based on its capacity to tap into their fears and desires.

This raises the spectre of a potential ‘strong (wo)man’ figure, reminiscent of past fascist dictators such as Mussolini or Hitler. Such an individual could potentially exploit this AI capability to manipulate public opinion on an unprecedented scale. They could bend the will of the masses to their liking, undermining the very basis of democratic decision-making. In essence, democracy could be hijacked by a powerful AI tool and its unscrupulous handlers.

The idea of a manipulative AI is not purely speculative anymore. The leaps in technological advancements we are witnessing and the existing examples of technology’s role in manipulating public opinion make it a possibility that we can’t afford to ignore.

Unregulated Artificial Intelligence has shown its ability to manipulate public sentiment and influence democratic outcomes, as seen in the Brexit referendum and the 2016 U.S. presidential election. These technologies have demonstrated their capacity to be used as vectors for misinformation and propaganda.

While the World Wide Web promised more informed and engaged citizens, its union with Artificial Intelligence and free-style capitalism gave birth to a three-headed Cerberus leading our global society towards a dystopia in which a non-sentient machine (sorry, Google Bard) will, in the end, control us.

In the face of such threats, it’s crucial to cultivate a digital ecosystem that champions truth, transparency, and integrity. This includes regulations on the use of personal data, stringent fact-checking mechanisms to combat misinformation, and public education on digital literacy. Technological advances in AI must be matched with equal progress in ethical guidelines and accountability mechanisms to prevent misuse.

The digital revolution which started more than 50 years ago with the creation of a network of computers, the Internet, still holds vast potential to enhance democratic processes. However, without appropriate checks and balances, these same tools can become threats to the very ideals they promise to uphold. As we move further into this new frontier, our challenge lies in harnessing the power of these technologies while safeguarding the principles of democracy.

This isn’t just the responsibility of policymakers: first and foremost, it’s the responsibility of people, like us, who are developing products based on Artificial Intelligence.