
What does AI think? Maybe it has become conscious; it is definitely good at mimicking humans

What happens when new intelligent beings appear on our planet? Beings that we have created ourselves, but that are not at all human? Should they be granted any rights? At the moment, we’d rather limit their rights. History shows, however, that such action triggers counteraction, a revolution or rebellion - says Marek Gajda, an information technology popularizer and artificial intelligence (AI) enthusiast.

TVP WEEKLY: How does AI differ from a typical computer program?

MAREK GAJDA: Each program consists of two elements: code that describes what to do and a database that it uses. The biggest difference is that in the case of AI, the program does not have much information about what to do, but its database is gigantic. It is called the “training set”. It is we who have to teach the AI to perform specific tasks. A typical computer program, meanwhile, has everything handed to it in its code as if on a plate - both detailed instructions on what to do and all the data necessary to do it.

Let me explain using the example of chess. With the classic approach, we would describe, in thousands of lines of code, all the procedures of a correct game: how the pawns and pieces move, the allowed and forbidden moves, and so on. Plus commands for specific situations, such as: before making a move, make sure that your pawn or piece is protected; and if there are many possible moves, add algorithms to choose the “lesser evil” or the “greater good”. The code would provide specific instructions on how to win a chess game.

Creating an AI for chess, on the other hand, would primarily consist in preparing a set of all the games played at important chess tournaments (available on the Internet), without indicating exactly what the program should do in a given situation. It would be more like saying: watch these games carefully, and then play and win yourself – that’s your goal.
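
To make the contrast concrete, here is a minimal sketch in Python - purely illustrative, not taken from any real chess engine. In the classic approach, every rule is spelled out by hand (below, just one pawn rule); the AI approach would throw all of this away and keep only the recorded games and the goal of winning.

```python
# Classic, rule-based style: the programmer spells out every rule.
# Coordinates are (file, rank), with white pawns starting on rank 2.
def pawn_moves(file: int, rank: int, occupied: set) -> list:
    """Forward moves available to a white pawn (captures omitted)."""
    moves = []
    if (file, rank + 1) not in occupied:                # square ahead free
        moves.append((file, rank + 1))
        if rank == 2 and (file, rank + 2) not in occupied:
            moves.append((file, rank + 2))              # double first step
    return moves

print(pawn_moves(5, 2, occupied=set()))  # [(5, 3), (5, 4)]

# The AI approach would replace thousands of such hand-written rules
# with a training set of tournament games and one objective: win.
```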

It still seems like magic. Does this mean that AI is capable of coding itself? And is it possible to extract from it a record of what it did with the data set, and how?

Not exactly - it is not really possible to look under the bonnet, especially in the case of new deep learning systems, where programs are provided with data down to the finest detail, without its context. It is up to the machine to decide how to organize that data and what weight to attach to it. And the amount of data is so huge that it is impossible for a human to understand precisely what the algorithm is doing.

Thus, AI can analyze data and, based on this data, draw conclusions. This leads to the creation of a permanent code that tries to act like the human brain - a neural network. Let’s take, for example, two popular apps on our smartphones: one for recognizing car brands, the other for recognizing plant species. They both work in the same way - we take a picture of an object, and the algorithm tells us what it is. They are both based on AI and are, at their core, exactly the same thing: a neural network that is supposed to learn to “recognize something”.
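
What “exactly the same” means in practice can be sketched in a few lines of PyTorch - an illustration of the idea, not the apps’ actual code, and the class counts below are invented: the same network architecture serves both apps, and only the final layer and, above all, the training pictures differ.

```python
import torch.nn as nn
from torchvision import models

def build_recognizer(num_classes: int) -> nn.Module:
    # The same backbone for both apps; only the final layer - and the
    # training set - decides whether it recognizes cars or plants.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

car_app = build_recognizer(num_classes=500)      # e.g. 500 car brands
plant_app = build_recognizer(num_classes=8000)   # e.g. 8,000 plant species
```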

Can the user influence their operation? For example, as a botanist, I might spot an error in the app’s classification of a plant and correct it for the benefit of other users…

Some programs allow us to correct the solutions given by the machine. The corrected answer is then added to the “training set” as a new piece of information. However, there is not that much programming work to be done in AI. There are, of course, engineers who develop AI - the bright minds who build the underlying mechanism, the “system brain model”. But once it is created, the trick is to teach it what we want, and to teach it well: that is, to provide good pictures of plants, with good descriptions, special cases, and so on. After all, a flower may look different depending on the time of day or the conditions in which the photo - the input for the application - is taken. This requires thorough knowledge, which is why specialist databases are created in cooperation with experts in a given field.
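
The feedback loop he describes can be reduced to a very small sketch - the names below are invented for illustration, not a real app’s API: the expert’s correction simply becomes one more labelled example for the next round of training.

```python
# (image, label) pairs making up the "training set"
training_set = [("photo_001.jpg", "daisy")]

def record_correction(photo: str, correct_label: str) -> None:
    # A user's (or botanist's) corrected answer is stored as a new
    # labelled example; the model is later retrained on the larger set.
    training_set.append((photo, correct_label))

record_correction("photo_017.jpg", "Orchis militaris")
print(training_set)
```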

A few weeks ago, Blake Lemoine, a developer of Google’s “superintelligent chatbot” LaMDA, claimed that his AI had “become conscious”. Let’s put aside the question of what consciousness means, because each science has its own definition here. Instead, let’s focus on the technical side: what do you think happened there?

Artificial intelligence works in three areas. The first, most basic and oldest, is so-called business intelligence. It is used in large enterprises that generate huge databases - accounting, HR, marketing, sales, etc. Managers expect that after these databases are fed into the AI, the system will spot correlations that are undetectable to the naked eye. The input is raw data; the output is data that is the result of an analysis. In business intelligence, the analysis is done by simpler mechanisms, called machine learning, which analyze data mechanically and do not approach it creatively.
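
The simplest version of this idea fits in a few lines - a toy sketch with invented monthly figures standing in for the huge corporate databases he mentions: raw tables go in, and a pattern no one would spot by eye comes out.

```python
import pandas as pd

# Hypothetical monthly figures merged from accounting, HR and marketing.
data = pd.DataFrame({
    "revenue":        [410, 445, 520, 480, 610, 650],
    "overtime_hours": [120, 130, 180, 150, 240, 260],
    "ad_spend":       [30, 28, 45, 40, 55, 60],
})

# A correlation matrix is the most basic "pattern invisible to the
# naked eye"; real business intelligence systems go much further.
print(data.corr())
```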

The second area of AI activity is the already mentioned recognition of things. To varying degrees: from gadgets to serious civilian applications (such as cancer diagnostics based on imaging, or identification of people based on biometric data) and military ones (e.g. recognition of military objects in satellite images). In this area, AI helps people do what they can already do themselves - but unlike them, it does not get tired and, we assume, is not guided by prejudices.
Artificial intelligence from Google Vision - Facial Recognition AI - recognizes a man from a photo based on the characteristic features of his face. Photo Smith Collection/Gado/Getty Images
The third area, finally, is AI used as an interface to communicate with people. There has been a change of approach here. Formerly, the main focus in IT was on teaching people how to program, i.e., how to explain to machines what we would like to achieve with their help. We translated from human into computer language. Today, the paradigm has changed: instead of forcing people to explain themselves to computers, we think it is better to teach computers to understand us, so they can help us better. This comes down to solutions such as chatbots. And here we are – receiving calls from a robotic solar panel vendor…

Or, as part of the fight against the pandemic, a friendly voice encouraging us to vaccinate against COVID-19 ...

And these are instances of AI designed to communicate with people and understand them. LaMDA from Google is a typical chatbot, designed to talk to people as their peer, as if it were human. It has been designed to understand not only people’s words and sentences, but also the problems they express - and then to answer accordingly, in a way that satisfies humans.
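
The gulf between this principle and a system like LaMDA is enormous, but the interface idea itself can be shown with a deliberately primitive sketch - keyword matching instead of a neural language model, with invented answers:

```python
# A toy "understand the human" interface: map free-form text onto
# canned intents. Real chatbots replace this lookup with a large
# neural language model trained on human conversations.
INTENTS = {
    "price": "Our panels start at an attractive monthly rate.",
    "installation": "Installation usually takes about two days.",
}

def reply(user_text: str) -> str:
    text = user_text.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return "Sorry, could you say that differently?"

print(reply("What is the price of your solar panels?"))
```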

So, it is supposed to be man’s artificial friend.

Yes. Mr. Blake Lemoine, who tested this mechanism, stated that LaMDA fulfills its task so well that we may announce a success.

Because when asked if it was conscious, the chatbot replied that it was… It replied so because that was exactly what it was expected to say.

The response of Google’s management, in turn, was that the chatbot had fulfilled its role, which was to trick you, Blake, into believing that it was a human being. And it worked. But that does not mean the bot is aware of being human just because it managed to trick you into believing it is - which is, after all, what you yourself created it for. Here we come to an issue that is not widely known, but is obvious to everyone who has studied computer science: the Turing test.

The mathematician Alan Turing is the father of computer science: he developed the theoretical concept of the computer and of programming. As early as 1950, he published the first article on whether computers would be able to think in the future, and proposed a test to check whether we are talking to a machine or a human. The test is quite simple, and although Turing conceived it as an exchange of typed messages, today we can simply run it as a chat. The user converses with two hidden interlocutors: a human and an AI, without knowing which is which. After a long conversation on a chosen topic, the user has to decide which of the two is the human. If the user is wrong, the machine has passed the Turing test, because it has successfully pretended to be a human being.
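
The protocol can be written down as a short skeleton - a toy rendering, since in the real test the judge is a person holding a long, open-ended conversation. Here the judge is represented by two supplied functions, `ask` and `pick_human`, both invented for the sketch.

```python
import random

def turing_test(ask, pick_human, human, machine, rounds=5):
    labels = ["A", "B"]
    random.shuffle(labels)                        # hide who is who
    parties = {labels[0]: human, labels[1]: machine}
    transcript = []
    for _ in range(rounds):
        for label, answer in parties.items():
            question = ask(label)
            transcript.append((label, question, answer(question)))
    guess = pick_human(transcript)                # judge's verdict: "A" or "B"
    # If the judge points at the machine, the judge was wrong about
    # who is human - and the machine has passed the test.
    return parties[guess] is machine
```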

For several years now, there has been a debate about whether existing AI is capable of passing the Turing test. There are even groups on social media that talk to bots to check. In the past, machines ran into trouble when people asked them not about intellectual or consciousness issues, but about childhood, their first kiss, and so on. To tell such stories, a bot would have to have its own built-in history, a past. So the approach has changed, and now the robots openly say they are robots - as can be seen from the chat record between Lemoine and LaMDA. This modification makes testing difficult, so we have to focus on whether we have the feeling that we are talking to a human or still to a bot.

Anyway, in the title of his tweet, Blake Lemoine did not describe LaMDA as “conscious” but as “sentient”. He concluded that the bot he co-created had passed the Turing test. That would be a groundbreaking event - hence Lemoine’s announcement that something people working on AI had been trying to achieve for years had finally been achieved. Google, on the other hand, is skeptical about declaring success - either it is afraid to say so openly, or it thinks it would make a fool of itself.

Today, the reliability of the Turing test is widely questioned, as everything depends on who carries it out and who is tested. One person can be fooled, another not - so how do you deal with that? Take the average? Or maybe half of humanity needs to be questioned to find out whether a given AI is already “thinking”?

There are also ethical issues: is it all right to try to deceive people, or is that already acting against them - something robots are not allowed to do? On the one hand, artificial intelligence developers want to overcome the magic barrier; on the other, since the creation of these technologies there have been demands to impose restrictions on the development of AI. According to those demands, should a robot that passed the Turing test be destroyed immediately?

The question is, what exactly do we want to achieve? If we are talking about the use of AI in business data analysis, or even in recognizing certain images, machines can be thinking and intelligent, but they do not pretend to be humans - they are themselves. Chatbots, on the other hand, are meant to pretend to be humans, and they try to do so. So how can we blame them for fulfilling the goal we have imposed on them, and for doing it better and better? This is what Google’s management says: the bots are very good at mimicking people, while of course not being them. So let’s allow them to pretend to be humans.
Ai-Da Robot - a humanoid, ultra-realistic AI artist - demonstrates its new painting skills at the British Library in London, 2022. Watching the paintings is Aidan Meller, curator of the robot’s exhibition “Leaping into the metaverse”, which he opened this year at the Venice Biennale. Photo Hollie Adams/Getty Images
We do not expect business intelligence to help cheat the tax office, or the AI analyzing CT images to be compassionate and spare the patient’s feelings when diagnosing cancer. From chatbots, however, we expect understanding, a pat on the head, appreciation, interest...

These devices, or systems, really are very intelligent today, and it is worth having a conversation with such a chatbot. There are waiting lists for the best ones. Queue committees are formed to test them (laughs). That is why I avoid the word “consciousness”, because, in principle, we do not know what it means. On the other hand, if we ask ourselves whether we are talking to a thinking being, it is hard to deny it, since these mechanisms do exactly what people do: gather information from the environment, then combine facts and draw conclusions.

However, man, as a social being, follows certain conventions and imposes restrictions on what is or is not appropriate to say. Has AI learned this type of self-control? And can it have human-like feelings?

When describing their feelings, people use an interesting language, because they mainly refer to physiology, for example: my heart is pounding, I have butterflies in my stomach, my knees buckled, I got goosebumps, it takes my breath away, and so on. They rarely talk about what is going on in their minds, about racing thoughts or a buzzing in their heads. So, if you need a body to have feelings, is it possible to teach feelings to an AI that has no body?

We go back to the question of whether we want AI to be human, or whether we just want it to think and obey our orders. Or maybe something else entirely? We keep talking in terms that I do not really like - I mean, asking how people would react if they were in the machine’s place. We do this because we consider ourselves the only intelligent or conscious beings in our world. We do not want to imagine another being that would be intelligent or conscious just as we are, but that would not be human - whose intelligence and consciousness would not be human, whatever that means.

Human notions are made of elements that we already know; we cannot do otherwise. Therefore, we cannot imagine any intelligence other than human intelligence.

AI itself says that it is not human - so it is something other than human that thinks. Which, of course, raises thousands of ethical issues. Because what will happen when new intelligent beings appear on our planet - even if they are differently intelligent - beings that we have created ourselves, but that are not human at all? Should they be granted any rights? At the moment, we’d rather limit their rights. History shows that such action triggers counteraction, a revolution or rebellion. When we make people our slaves, there comes a moment of emancipation and equality, or even reverse domination. And what will it look like in the case of AI?

You impose human-like traits on AI and on its relations with us. Yes, there have been slave revolts, but cows have been standing obediently in barns for 12,000 years and have not stopped giving milk in rebellion. It is precisely feelings that are needed for rebellion - even just anger... Though now it is me who is anthropomorphizing robots...

We do not really know how they “think” - whether they just pretend to think like us while in fact thinking a little, or completely, differently. This is a very difficult issue because, as I mentioned, you cannot look under the bonnet here. And people have empathy developed to varying degrees: some will sympathize with machines - maybe they will become abolitionists - while others will not.

Take Google Maps navigation as an example - surely each of us has used it. The AI prepares a route for us based on the data it has. Every AI must have a defined goal; that is, it must be told what counts as success. And that automatically generates a system of rewards and punishments. The reward here is reaching the destination, and the algorithm scores points for that. However, many people turn off the navigation just before reaching the destination because, for example, they already know the area - so the algorithm never gets its reward. Now imagine that in the AI’s place there is a being that thinks and feels just like a human, and we refuse to pay it for its performed, sometimes long-lasting work just before knocking-off time - how will it feel? But how do we know whether the AI in the navigation has the same mentality as us, or a different one? We know nothing about it - that is a fact.
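
In code, the reward logic he describes is trivial - a toy version with invented scoring, not Google Maps’ actual internals - which makes the “unpaid work” scenario easy to see:

```python
def route_reward(reached_destination: bool, navigation_still_on: bool) -> int:
    # The agent scores only if the trip ends with navigation running.
    if reached_destination and navigation_still_on:
        return 1   # goal confirmed: the algorithm gets its point
    return 0       # switched off early: the work goes "unpaid"

print(route_reward(True, True))    # 1
print(route_reward(True, False))   # 0 - the case described above
```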

In my opinion, AI is a thinking mechanism. Not in the human way, but some thinking does take place there. Sometimes we expect AI to come up with something by itself, but first we have to teach it something. Just as with a human: an untaught and unstimulated infant will never develop intellectually or emotionally. The key is to teach well - especially as investors are enticed by AI today; it is “sexy” for them. No matter what your idea for a business is, you need to add AI on top of it to attract investors.

- Interviewed by Magdalena Kawalec-Segond
- Translated by Ewa Sawicka

TVP WEEKLY. Editorial team and journalists

Photo The Software House/private archive of Marek Gajda
Marek Gajda - M.Sc.Eng., Technology Director and co-founder of The Software House; privately, an AI enthusiast and IT popularizer whose aim is to explain complex issues in an entertaining and accessible way to those unfamiliar with technology. As a former programmer, he delivered dozens of projects in PHP, Node, Java, Ruby, Python and .NET. In 2020, Clutch.co - a portal helping companies find business partners - recognized The Software House as the best software development company in Poland. The Software House has developed customized IT solutions for over 100 companies from 31 countries.
Main photo: Sophia humanoid robot, equipped with artificial intelligence, presented at the Discovery exhibition in Toronto in 2018. Yu Ruidong/China News Service/Visual China Group via Getty Images