As an artificial intelligence, the text-generation program ChatGPT can handle scientific tasks and grasp complex problems. But anyone who thinks they can simply let the AI write term papers or newspaper articles without a second thought is mistaken: the chatbot sometimes answers questions with facts that sound plausible but are entirely made up. The newer version now explicitly points this out.
Anecdotes about ChatGPT’s “fibbing”
Here are a few examples from our day-to-day work: when asked about current case law on Section 97 UrhG (the German Copyright Act), the AI lists judgments complete with dates, genuine-looking file numbers that even correctly name the 14th Civil Chamber of the Regional Court of Cologne as the chamber responsible for copyright cases, and convincing summaries of legally correct judgments. The problem: these judgments never existed, and nothing in ChatGPT’s response indicates that it is not based on real judgments.
When asked where to have lunch with colleagues in Bonn, the text program readily supplies restaurant suggestions, each with a brief description of the cuisine on offer.
However, the restaurant “Zum goldenen Kopf” in Bonn was known neither to the team nor to the Internet…
When asked whether the Swiss Copyright Act (CopA) distinguishes between simple photographs and photographic works, as the German Copyright Act does, the program confidently explains that it does and that the distinction is laid down in Articles 2 and 4 of the CopA – both statements are simply wrong. The chatbot then goes on to reproduce Article 2 incorrectly as well. And although the CopA was comprehensively revised in 2019, the AI should have known the correct wording of the law based on its 2021 knowledge cutoff.
ChatGPT not trained for truth
False statements are by now a well-known weakness of AI, and even specialists from a wide range of fields have difficulty immediately recognizing the chatbot’s false statements as such. How, then, is someone with only superficial knowledge of a topic supposed to know whether they can rely on the AI’s subject-specific information?
If you ask the AI itself how to recognize made-up answers, the program merely replies that it is the reader’s responsibility to critically question and verify the information provided. Ultimately, the program is right: ChatGPT does not claim to be a scientific expert on any subject, but is “only” a text-generation program whose aim is to imitate a dialog with a person as closely as possible.
There are essentially two reasons why ChatGPT provides incorrect information:
- The training of the artificial intelligence is based on a vast amount of data which, in turn, has not been fully checked for accuracy. Given the sheer volume of data alone, 100% accuracy cannot be expected, and incorrect training data (input) inevitably leads to incorrect answers (output).
- As a chatbot, the AI is designed to generate an answer in every case. Unlike a human conversation partner, simply remaining silent is not an option; the AI would rather answer incorrectly than not at all. If it cannot give a correct answer, it assembles text fragments that come as close as possible to one, which is also why its answers sound so plausible. The sketch below illustrates this point.
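To make the second point more concrete, here is a deliberately simplified sketch in Python. It is purely illustrative and not how ChatGPT actually works internally, and the candidate continuations and their scores are invented for the example: a language model assigns probabilities to possible continuations and always samples one of them, so “no answer” is simply not among the possible outputs.

```python
import math
import random

def sample_next_tokens(candidates: dict[str, float]) -> str:
    """Softmax-sample one continuation from scored candidates.

    Note what is missing: there is no branch that returns "no answer".
    Whatever the scores, one candidate is always produced.
    """
    weights = [math.exp(score) for score in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

# Hypothetical, made-up scores for continuing the prompt
# "The chamber responsible for copyright cases in Cologne is ...":
candidates = {
    "the 14th Civil Chamber": 1.2,  # sounds plausible
    "the 28th Civil Chamber": 1.1,  # sounds just as plausible
    "not known to me":        0.2,  # an admission is just another string
}

print(sample_next_tokens(candidates))  # always prints *something*
```

Even when no candidate stands out, the sampler still returns one of them, which mirrors the behavior described above: a fluent, plausible-sounding answer is produced regardless of whether a correct one is available.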
So ChatGPT cannot be accused of deliberately lying. The problem is rather that the AI tries too hard to fulfill its task and to give the user an answer at any cost. Whether and how a different design could avoid this problem is not yet clear.
ChatGPT still usable for simple tasks
But does that make the program unusable for scientific work altogether? No. You should simply be aware that its presentation of facts can be incorrect. The program remains well suited to simple summaries of given texts (a sketch of this use follows below). Even formulating individual passages that have to incorporate scientific terminology can come more easily to the program than to some users, and ChatGPT can often improve the language and style of factually correct essays. Important tasks that require precision and correctness, however, should not be entrusted to the chatbot. That applies especially where the user cannot easily verify the truthfulness of the information, e.g. for lack of specialist knowledge. Under no circumstances should the information provided by the AI be trusted blindly: false information, e.g. in advertising statements, can have far-reaching and unpleasant consequences, both economically and under criminal law.
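As an illustration of such a lower-risk use, here is a minimal sketch assuming the openai Python package (v1 interface) and an API key in the environment; the model name is only an example. It constrains the model to summarizing a supplied text instead of answering open factual questions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

source_text = "..."  # paste the factually correct text to be condensed

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model; use whichever is available
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the user's text in three sentences. "
                "Use only information contained in the text; add nothing."
            ),
        },
        {"role": "user", "content": source_text},
    ],
)

print(response.choices[0].message.content)
```

Restricting the model to a given source text does not guarantee correctness, but it sidesteps the failure mode described above: the model no longer has to invent facts in order to have something to say.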