As (almost) everyone knows, the first artificial intelligence program was created in the 1950s by Herbert Simon, who went on to win the Nobel Prize in… economics about twenty years later!
In his book “The Sciences of the Artificial,” Simon writes:
An artifact [the artificial] can be thought of as a meeting point—an “interface” in today’s terms— between an “inner” environment … and an “outer” environment …
Considering that the software he wrote with others in 1956, the Logic Theorist, proved theorems of logic, this seems strange. Yet today more than ever, it is clear how right Simon was.
More than ever today, with the arrival of generative artificial intelligence, the technology behind ChatGPT and company. Just read the latest article by your favourite influencer: GPT is so intelligent! So much more than a mere interface, it understands!
Yes, it behaves intelligently, but so does a car that accelerates and brakes on its own. And yet we don’t call that a miracle, nor herald the sapiens-machine singularity.
The intelligence we see in AI today is not really its own. AI is more like a puppet that shows the cunning of those who pull the strings. Those hundreds of millions of dollars spent by OpenAI on ChatGPT? They were used to pay thousands of people to teach it how to behave intelligently. Basically, GPT-4 on its own is just very good at playing with words, but it doesn’t really understand them.
This brings us back to Simon and the artificial seen as an interface between its internal world (the digital) and ours (the analog). Think of generative AI as a fantastic gadget that sits between us and our technological tools. If used well, it will do an excellent job in helping us use digital tools.
Generative artificial intelligence, in our opinion, will explode as an intermediary between Sapiens and the digital world. Put a bit technically: it translates requests made in natural language into programmatic formats, and vice versa. For software, understanding “I would like to come on November 9th at 10:15” is not trivial. For a language model like GPT, on the contrary, it is easy—as easy as translating it into a “computer format”: asking for the year (which it cannot know) and generating 2024-11-09T10:15:00+02:00, which means the same thing but which every developer can easily use in their own code. This is the interface!
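A minimal sketch of the developer’s side of this interface, assuming the model has already produced the ISO 8601 string (the string below is a hypothetical model output, not a real API call):

```python
from datetime import datetime

# Hypothetical output from a language model that translated
# "I would like to come on November 9th at 10:15" (plus the year,
# asked of the user) into a machine-readable timestamp.
iso_from_model = "2024-11-09T10:15:00+02:00"

# Ordinary software takes over from here: parse the ISO 8601
# string into a datetime object and use its fields directly.
appointment = datetime.fromisoformat(iso_from_model)
print(appointment.year, appointment.month, appointment.day)
print(appointment.hour, appointment.minute)
```

The point is the division of labour: the model handles the ambiguity of natural language, and deterministic code handles the structured result.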
For now, we use models like GPT in this way. We take them for what they do best: interpreting language. The result? Useful software that doesn’t get lost in chatter but tries to solve the problem in the best possible way.