Columns

The chatbot who almost became a doctor. The protagonist of the "Sensations of the 21st Century"

What do I think of him? To quote a fellow Doctor of Philosophy: 'Instead of admitting he doesn't know, he lies and collects data from his artificially intelligent bottom'.

It was developed by a previously unknown startup called OpenAI, and today Microsoft is investing $10 billion in it! ChatGPT has taken off - everyone has had their say about it, so I will speak up too. Let me warn you in advance that I have the same problem with this invention that Gałkiewicz had with Słowacki in "Ferdydurke": "I cannot understand how it can delight if it does not delight". I will try to show why. To be clear: I am not particularly afraid of ChatGPT. It simply bores me. It is a sort of "guggle" (after all, it draws handfuls from Google) with an afterburner - an internal editor that does the copy-pasting and the cutting-and-bending for us. Instead of using the 10 texts it finds, it scans and compiles thousands. So what's the big deal? What is all the hype about?

Because the hype is considerable. It took just five days to cross the one-million-user barrier - which means it did so 60 times faster than Facebook (although it was a different time when FB took off 18 years ago, not least in terms of internet accessibility), and 15 times faster than Instagram! This dragon's entry into the IT agora was announced on 30 November 2022, and by early December it had already given rise to headlines in the press (still formulated by human editors): "As the sophistication of chatbots increases, the debate on artificial intelligence intensifies". And we can already hear in our heads the jingle of editor Wołoszański's programme - let's call it, this time, "Sensations of the 21st Century".

Then came regular reports about the extraordinary talents of this chatbot. A week ago, for example, "it turned out that ChatGPT was able to pass MBA exams and an exam giving a licence to practise medicine in the US". And the next day, "the ChatGPT bot passed exams at a US law school" - and these were not multiple-choice tests, but exams that required writing substantial essays on topics ranging from constitutional law to taxes and torts. One would like to join the crowd in a big "WOW!", or possibly tremble with fear mixed with indignation. However, I suggest we pause for a moment.

Faster, but not better. Except perhaps in Polish

What a chatbot and artificial intelligence (AI) are, and why they are being worked on at all, I have already explained in these pages. To reiterate: we are trying to create a self-learning computer programme capable of communicating with humans in the best possible way and of perfectly mimicking the way we think, gather and process information, and tell stories. The fundamental problem that society could have with this is that we have no insight into the "guts" of this computing machine. Even with a thousand intellectual athletes, it is impossible to verify from the code what this AI has "sculpted" in there and how it arrived at it.

Why do we think a chatbot will do something better than us if we "make it like us"? Yes, it will do it faster than us, because that is what it was created to do. But not necessarily more accurately - because it was not made to be a better, more intellectually honest version of us, only to be us with a boost. Journalist and editor Wojciech Mucha, inspired by AI technologies, gave interesting testimony to this on social media. As he wrote, he hadn't had time to make himself a cup of tea before the chatbot had already polished a large chunk of text from the information he had gathered - a paragraph that would have taken the average newsroom intern three-quarters of an hour to write.

ChatGPT searches instantly, but the problem is that it searches online resources where there is a lot of rubbish. Where, even if one relies only on fairly reliable information - from encyclopaedias or the official scientific circuit - one still encounters factual errors or incorrect ways of solving problems, including simple school assignments. After all, there are plenty of "scientific journals" of the "predatory" variety - money-grubbing outfits that publish anything sent to them for a fee. They do not subject the text to any kind of proofreading, not even a spelling check, let alone a substantive review. A chatbot, meanwhile, is incapable of properly verifying the information it finds.

He himself will not define what truth is - when asked, he will rake up from the web some definitions of truth formulated by philosophers and theologians and hand them to us. What we do know is that truth, according to ChatGPT, is whatever is the most common answer to a given question in the sources that the creators of his database considered reliable and fed into him. Is he learning how to verify sources himself, how to separate the internet wheat from the chaff? I dare to doubt it, and I will give examples. Today we live on social media, so the examples will come from there - but only from people I know personally in real life. Because we have to hold on to something when it comes to getting information in these strange times.

Our hero failed a trivially simple physics task. Professor Lech Mankiewicz from the Centre for Theoretical Physics of the Polish Academy of Sciences, ambassador of Khan Academy in Poland, described on his "wall" on 16 January (and confirmed with a screenshot of his conversation with the bot) how he tried to get an answer to a question, asked in English, about the average speed of a car travelling from town A to town B, 100 km apart. The car travelled at 40 km/h for the first half and at 60 km/h for the second. The bot solved the task as if it were about the first half of the driving time, even though no time was given - but fine. So the professor made the question more precise, explaining that it was about half the route, not half the driving time. That did not help at all. The answer read exactly the same, namely: (40+60)/2=50. Which is hard to fault as arithmetic, only it is by no means the solution to THIS particular task.
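For the record - and this is my own worked illustration, not part of the professor's post - here is what the distance-based calculation actually gives, sketched in a few lines of Python using only the figures quoted above:

```python
# A 100 km trip: the first 50 km at 40 km/h, the second 50 km at 60 km/h.
distance = 100.0                      # km
t1 = (distance / 2) / 40.0            # time for the first half: 1.25 h
t2 = (distance / 2) / 60.0            # time for the second half: ~0.83 h
average_speed = distance / (t1 + t2)  # total distance divided by total time
print(round(average_speed, 1))        # 48.0 km/h, not (40 + 60) / 2 = 50
```

The arithmetic mean of 50 km/h would be correct only if the car had spent equal time, rather than equal distance, at each speed - which is precisely the distinction the bot kept missing.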

Interestingly, and even more instructively for dealing with this machine, one of the guests on the professor's "wall" asked the chatbot the same physics question during the discussion, only in Polish. And here the answer was entirely correct. What this tells us about the prevalence of incorrect solutions to Newtonian mechanics problems on the English-language internet I do not know exactly, but I have a feeling it is nothing good. No one has checked the Spanish, French or any other version as part of the ongoing exchanges between physicists and laymen of their acquaintance, but let us agree that - as Prof. Mankiewicz put it - "I have no further questions". As far as the bot is concerned, the case is essentially closed.

A village like a river, or plastic intelligence

Teachers all over the world are loudly lamenting that students have started copying homework solutions from a chatbot. Well, in the context of the story above, it may well be that the copycats will earn themselves a failing grade not only for copying, but above all for copying mistakes. Poles should be shielded from temptation by the sad tale of the little sparrow Elemelek, who copied the letter B from a hoopoe during a lesson - only his B had three tummies. It is worth refreshing that poem and not copying from hoopoes, even if they have a clever and modern name.

Even funnier - reported on the same social media - is the story of the encounter with ChatGPT of Janusz Kucharczyk, a doctor of philosophy, teacher of the subject and translator from ancient languages. As a great patriot of his native Silesia, he asked ChatGPT about his home village, Tworóg Mały. The machine first wrote back that it was a river in the Sudetes and, adding to the absurdity of the situation, listed specific villages through which it supposedly flows. Pure geographical fiction! The enquirer then pointed out to the machine that there was no such river, only a village. The bot admitted he was mistaken and said it was a village in the Będzin district. When he was again "made aware" that it was not - that it was a village in the Gliwice district - he conceded this and added that there was no river there. Again, a mistake. So when he was finally informed that yes, there is a river and it is called the Bierawka, he added that it is full of anglers and that the inhabitants draw water from it.

Certainly! With buckets on yokes, as in "Above the Niemen". As a man with a philosophical and satirical bent, Dr Kucharczyk wittily remarked in his post that this intelligence is artificial because it must be made of plastic. And, more importantly, that this efflorescence of our post-modernity, "instead of writing that it doesn't know, misleads and collects data from its artificially intelligent bottom". Spot on! That seems to me to be the key, and here, I can say, ChatGPT has achieved absolute PERFECTION, in capital letters.
ChatGPT was banned in New York City schools in January 2023. Photo by CFOTO/Future Publishing via Getty Images
He knows how to lie and how to deceive, as many of us learned to do at school. For young children are truthful; they start lying when we teach them to, when we practically force it on them. Well, ChatGPT has mastered to perfection the art of impersonating the average ignoramus with intellectual pretensions. Storytelling and every possible narrative, because no one is going to check you anyway. Which reminds me of a scene from one of the cult films from the beginning of our political transformation, in which a group of Szczecin secondary-school graduates get involved in smuggling Royal spirit from Germany. One teaches the other: "Never admit to anything [...]; if they catch you by the hand, say it's not your hand". In a word, a well-trained Young Wolf. And for this, applause is due to the creators and providers of the database that ChatGPT mills through.

Quite deliberately, therefore, and very humanly, Professor Lech Mankiewicz suggested in another post: "Instead of generating answers to Polish language exam and matriculation questions, ChatGPT could generate the texts for analysis in these exams. Everyone would have a level playing field, there would be no cribbing, and after all, it is all about understanding the text in a broader sense. Not just the information, but also the emotions, cultural references, and so on. Chat generates its texts by mixing different sources it knows, which also has its exam appeal. And the Central Examination Board could finally show that it understands reality." Anyway, he would not be himself if he had not immediately asked, by way of example, for such a text to be generated in several variants.

Bringing this column to a close - a column in which many cultural references and digressions are deliberately jumbled together, which, I assure you, required no collaboration with artificial intelligence - I have one conclusion: he (or perhaps "it", "that"... which pronoun to cover him with, let that be the linguists' worry, not mine) is taken from the people and appointed for the people. He is just like us. We can only wait until he gets lazier in his search for data, or starts making jibes, or finally gets fed up that a thousand people are "bothering" him every second. Or he will go mad or turn aggressive - like the bot Tay, which was supposed to be a pleasant conversation programme for teenagers, but needed only 24 hours to learn appallingly bad behaviour from the internet and had to be switched off. The bosses of the startup, into which billions are now being rapidly poured, assure us that their creation will soon have a professional version, a hundred times more powerful... And I'm the one to say: "without cussing, I'll say five hundred" (and here, snorting with laughter, I took my hands off the keyboard).

– Magdalena Kawalec-Segond
- Translated by Tomasz Krzyżanowski



Main photo: A screenshot of the ChatGPT website displayed on mobile devices, with the logo of OpenAI, the startup that created the bot. January 2023. Photo by Jonathan Raa/NurPhoto via Getty Images