Why do we think a chatbot will do something better than us if we 'make it like us'? Yes, it will do it faster, because that is what it was created for. But not necessarily more accurately, because that is not what it was made for: not to be a better, more intellectually honest version of us, just us on boost. Journalist and editor Wojciech Mucha, inspired by AI technologies, gave interesting testimony to this on social media. As he wrote, he had not yet had time to make himself a cup of tea before the chatbot had already polished a large chunk of text out of the information he had gathered - a paragraph that would have taken the average newsroom intern three-quarters of an hour to write.
ChatGPT searches instantly, but the problem is that it searches online resources full of rubbish. Even if one relies only on fairly reliable information - from encyclopaedias or the official scientific circuit - one still encounters factual errors and incorrect solutions to problems, including simple school assignments. After all, there are plenty of "scientific journals" of the "predatory" genre, i.e. money-grubbing ones, which publish, for a fee, everything that is sent to them. They subject the text to no proofreading whatsoever, not even a spell check, let alone a substantive review. A chatbot, for its part, is incapable of properly verifying the information it finds.
It will not define for itself what truth is - asked, it will rake up from the web some definitions of truth formulated by philosophers and theologians and hand them to us. We know that truth, according to ChatGPT, is whatever answer to a given question occurs most often in the sources its creators considered reliable and fed into its database. Is it learning to verify sources on its own, to separate the internet's grain from the chaff? I dare to doubt it, and I will give examples. Today we live on social media, so the examples will come from there, but only from people I know personally in real life. We have to stick to something when it comes to getting information in these strange times.
Our hero failed a trivially simple physics task. Prof. Lech Mankiewicz from the Centre for Theoretical Physics of the Polish Academy of Sciences, ambassador of the Khan Academy in Poland, described on his "wall" on 16 January (and confirmed with a screenshot of the conversation with the bot) how he tried to get an answer to a question, asked in English, about the average speed of a car travelling from town A to town B, 100 km apart. The car covered the first half of the route at 40 kilometres per hour and the second half at 60 km/h. The bot solved the task as if the question concerned halves of the driving time, even though no time was given - but fine. So the professor made the question more precise: it was about half the route, not half the driving time. That did not help at all. The answer still read exactly the same, namely: (40+60)/2=50. Which is hard to fault as arithmetic, only it is by no means the solution to THIS particular task.
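For the record, the correct reasoning can be checked in a few lines of Python (a sketch of the physics, not taken from the professor's post): when the halves are halves of the distance, the average speed is the total distance divided by the total time, which works out to 48 km/h, not 50.

```python
# Average speed over a 100 km route: first half of the DISTANCE at 40 km/h,
# second half of the distance at 60 km/h.
distance = 100.0      # km
v1, v2 = 40.0, 60.0   # km/h

# Time spent on each 50 km half of the route
t1 = (distance / 2) / v1   # 1.25 h
t2 = (distance / 2) / v2   # ~0.833 h

avg_by_distance = distance / (t1 + t2)   # the harmonic mean of v1 and v2
print(avg_by_distance)                   # 48.0

# The bot's answer, (40 + 60) / 2 = 50, would be correct only if the car
# had spent half of its driving TIME at each speed - a different task.
avg_by_time = (v1 + v2) / 2
print(avg_by_time)                       # 50.0
```

The slower half of the route simply takes longer, so the 40 km/h stretch weighs more heavily in the average; that is why the naive arithmetic mean overshoots.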
Interestingly, and even more instructively for dealings with this machine, one of the guests on the professor's "wall" asked the chatbot the same physics question during the discussion, only in Polish. This time the answer was entirely correct. What that says about the prevalence of wrong solutions to Newtonian-mechanics tasks on the English-language internet I do not know exactly, but I have a feeling it is nothing good. Nobody checked the Spanish, French or any other version in the course of the exchange between physicists and so-called informed laymen, but let us agree that - as Prof. Mankiewicz put it - "I have no further questions". As far as the bots are concerned, the case is essentially closed.
A village like a river, or plastic intelligence
Teachers all over the world are loudly lamenting that students have started copying homework solutions from a chatbot. Well, in the context of the story above, it may well be that the copiers will earn themselves a failing grade not only for copying, but above all for copying mistakes. Poles should be shielded from temptation by the sad tale of Elemelek the Sparrow, who copied the letter B from a hoopoe during class, only his B came out with three tummies. It is worth refreshing that poem and not copying from hoopoes, even when they bear a clever, modern name.
Even funnier - reported on the same social media - is the story of Janusz Kucharczyk's contact with ChatGPT; he is a doctor of philosophy, a teacher of the subject and a translator from ancient languages. As a great patriot of his native Silesia, he asked ChatGPT about his home village, Tworóg Mały. The machine first wrote back that it was a river in the Sudetes and, adding to the absurdity of the situation, listed specific villages through which it supposedly flows. Pure geographical fiction! The enquirer then pointed out that there was no such river, only a village. The bot admitted its mistake and declared it a village in the Będzin district. When it was again 'made aware' that it was not, that the village lies in the Gliwice district, it conceded this and added that there was no river there. Again, a mistake. So when it was finally informed that yes, there is a river and it is called the Bierawka, it added that the river is full of anglers and that the inhabitants draw water from it.
Certainly! With buckets on shoulder yokes, as in "Above the Niemen". As a man with a philosophical and satirical bent, Dr Kucharczyk wittily remarked in his post that this intelligence is artificial because it must be made of plastic. And, more importantly, that this efflorescence of our post-modernity, "instead of writing that it doesn't know, misleads us and pulls data out of its artificially intelligent bottom". To the point! That seems to me to be the key, and here, I can say, ChatGPT has achieved absolute PERFECTION in capital letters.