Artificial Intelligence and the Interface

As (almost) everyone knows, the first artificial intelligence program was created in the 1950s by Herbert Simon, who about twenty years later won the Nobel Prize for… economics!

In his book “The Sciences of the Artificial,” Simon writes:

An artifact [the artificial] can be thought of as a meeting point—an “interface” in today’s terms— between an “inner” environment … and an “outer” environment …

Considering that the software he wrote with others in 1956, the Logic Theorist, proved theorems of logic, this seems strange. Yet today more than ever, it is clear how right Simon was.

More than ever today, with the arrival of generative artificial intelligence, the technology behind ChatGPT and company. Have a look at the latest article by your favourite influencer and… GPT is so intelligent! So much more than just an interface, it understands!

Yes, it behaves intelligently, but so does a car that accelerates and brakes on its own. Nonetheless, we don’t cry miracle, nor proclaim a sapiens-machine singularity.

The intelligence we see in AI today is not really its own: AI is more like a puppet that shows the cunning of those who pull the strings. Those hundreds of millions of dollars spent by OpenAI on ChatGPT? They were used to pay thousands of people to teach it how to behave intelligently. Basically, GPT-4 on its own is just very good at playing with words, but it doesn’t really understand them.

This brings us back to Simon and to the artificial seen as an interface between its inner world (the digital) and ours (the analog). Think of generative AI as a fantastic gadget that sits between us and our technological tools. If used well, it will do an excellent job of helping us use them.

Generative artificial intelligence, in our opinion, will explode as an intermediary between Sapiens and the digital world. Put a bit technically: it will translate requests made in natural language into programmatic languages, and vice versa. For traditional software, understanding “I would like to come on November 9th at 10:15” is not trivial. For a language model like GPT, on the contrary, it is easy, just as it is easy to translate it into a “computer format”: asking for the year (which the request does not mention) and then generating 2024-11-09T10:15:00+02, which means the same thing but which every developer can easily use in their own code. This is the interface!
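To make the idea concrete, here is a minimal sketch in Python of this “interface” role, using the OpenAI client library; the model name, prompt wording, and helper function are our own illustrative assumptions, not something prescribed above.

```python
# Minimal sketch: a language model as an "interface" that turns a
# natural-language booking request into a machine-readable ISO 8601 timestamp.
# Assumptions: the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt are illustrative choices, not taken from the article.
from openai import OpenAI

client = OpenAI()

def to_iso_timestamp(request_text: str, year: int, utc_offset: str = "+02:00") -> str:
    """Ask the model to express a booking request as an ISO 8601 timestamp."""
    prompt = (
        f'The user wrote: "{request_text}". The year is {year} and the '
        f"timezone offset is {utc_offset}. Reply with only the date and time "
        "in ISO 8601 format (YYYY-MM-DDTHH:MM:SS+hh:mm), nothing else."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Example: "I would like to come on November 9th at 10:15"
# -> "2024-11-09T10:15:00+02:00", which any program can parse directly.
```

The model does the linguistic work; the rest of the software only ever sees a well-formed timestamp.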

For now, we use models like GPT in this way. We take them for what they do best: interpreting language. The result? Useful software that doesn’t get lost in chatter but tries to solve the problem in the best possible way.

Brain and Memory

Networks and Memory

When we need to remember something, we create links between the various parts of what we need to memorize. It’s as if we’re building a network, and each part of what we need to remember is a piece of this network.

There are two networks that we use every day to remember things. One is language: when we speak, we connect words together, and every sentence we utter is a “path” in the network of words. Some words we use often, and they are linked to many other words; others we use rarely.

The Brain Network

The other network we use is our brain. One of the greatest brain experts, Kandel, explained that in the brain, information is carried by groups of interconnected neurons, not by single neurons. Therefore, even in the brain, the keyword is “connection”.

Even though our brain is very complex and can remember things in different ways (for example, it remembers some things for a short time, others for a long time, some as places, others as actions), the way it remembers things is always the same: it creates new “paths” between neurons. This is what a psychology expert, Hebb, understood in 1949.

Neuron Activation

Hebb understood that when two neurons activate together many times, the bond between them strengthens, and this is the trace that makes us remember things. If two neurons activate together often, our brain understands that they need to be connected, and thus it creates a “path” between them.

When we need to remember something for a long time, the link between the neurons becomes definitive. However, if we need to remember something for a short time, the bond weakens if we do not often use those neurons. In essence, in our brain, the bonds we use often become stronger, while those we use less weaken and eventually disappear. It’s a bit like when we talk: if we often use two words together, our brain understands that they need to be connected. A hundred years ago, for example, nobody said “too cool”, so these two words were not connected. Today, however, we often use them together, so our brain has created a bond between them.
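Hebb’s rule, often summarised as “neurons that fire together wire together”, can be written as a very simple update formula. The sketch below is our own illustration (the learning and decay rates are arbitrary): the bond between two neurons grows when they are active together and slowly fades when they are not.

```python
# Minimal sketch of a Hebbian update rule (our illustration, not the article's code):
# the weight between two neurons grows when both are active together
# and slowly decays otherwise, so rarely used connections fade away.

def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Strengthen w when pre and post fire together; otherwise let it decay."""
    return w + lr * pre * post - decay * w

w = 0.0
for _ in range(50):                           # the two neurons fire together often...
    w = hebbian_update(w, pre=1.0, post=1.0)
print(f"after co-activation: w = {w:.2f}")    # ...so the bond is strong

for _ in range(200):                          # then they stop firing together...
    w = hebbian_update(w, pre=0.0, post=0.0)
print(f"after disuse: w = {w:.2f}")           # ...and the bond weakens
```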

Natural and Artificial Neural Networks

The first neural network: the brain

We know relatively little about how our brain works, but we know exactly how artificial neural networks work, i.e. the software behind various products based on Artificial Intelligence, such as MrCall. We can therefore compare simple biological “brains” to artificial neural networks.

The simplest brain: Elegans

At the moment, the brain we know best is that of a small worm called C. elegans. In 1986, all 302 neurons in the brain of a female C. elegans were mapped, and then studied extensively. In 2012, the same thing was done for the brain of male C. elegans, which has 383 neurons. The difference is due to the fact that the male must try to mate with the female, while the female can have children alone.

This mating is not a simple thing. Although C. elegans is a very simple organism, its way of mating is complex. Yet it manages to do all this with just a few neurons.

Elegans and neural networks

If we looked at the C. elegans brain as an artificial neural network, we would say that Nature has developed a simple, technically “shallow”, neural network. Shallow networks have only a few layers of neurons.

And so it is for the C. elegans neural network: first there are the sensory neurons, which recognise things around them, for example whether there is a female nearby. Then the information passes to a second group of neurons, and finally to the third group, the motor neurons, which initiate the movements.

There are therefore only 3 layers of neurons. In addition, as in the artificial neural networks we use today, some neurons of the C. elegans are able to remember information and reuse it.
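Purely to illustrate what a “shallow” network with a sensory, an intermediate, and a motor layer looks like in code, here is a minimal sketch in Python/NumPy; the layer sizes and random weights are made up and have nothing to do with the real C. elegans wiring.

```python
# Minimal sketch of a "shallow" three-layer network, loosely mirroring the
# sensory -> interneuron -> motor layout described above.
# The layer sizes and random weights are invented for illustration;
# they are not C. elegans data.
import numpy as np

rng = np.random.default_rng(0)

n_sensory, n_inter, n_motor = 8, 6, 4          # tiny, arbitrary layer sizes
W1 = rng.normal(size=(n_inter, n_sensory))     # sensory -> interneurons
W2 = rng.normal(size=(n_motor, n_inter))       # interneurons -> motor neurons

def forward(stimulus):
    """One pass from sensory input to motor output."""
    hidden = np.tanh(W1 @ stimulus)            # interneurons combine sensory signals
    motor = np.tanh(W2 @ hidden)               # motor neurons drive movement
    return motor

stimulus = rng.normal(size=n_sensory)          # e.g. "is there a female nearby?"
print(forward(stimulus))                       # activations of the 4 motor neurons
```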

It took Nature a few million years to develop nervous systems like that of C. elegans, but the apparent simplicity of this little worm’s brain must not mislead us: “natural” brains are in any case complex, because they are made up of neurons, i.e. cells, which are themselves capable of analysing information. The neurons of artificial neural networks, on the other hand, are very simple, and in fact an artificial neural network of 300 neurons does very little!