Blind faith in chatbots runs the risk of human mental degeneration
A group of scientists warns that the persuasiveness of chatbots is exacerbating the mental laziness and loss of critical thinking that have grown with the rise of major search engines, Kristjan Port points out in his Raadio 2 technology commentary.
Nobel laureate Daniel Kahneman proposed a two-system framework of thinking to help explain human cognitive processes. The same concepts could just as easily describe robots. That is perhaps not an unfitting metaphor, as, in contrast to the vast freedom afforded to humans, robots' activities are rather predictable and programmed.
Kahneman describes System 1 as a fast, automatic and intuitive decision-making process. These traits are likely familiar to everyone. Many decisions, explanations or other reactions happen so quickly that even the individual may not fully understand them. Even if the result might sometimes invite criticism, this way of reacting is not inherently bad. Life is full of situations that require quick responses.
For example, when driving along a familiar route, there is no need to analyze every turn to know where the chosen path will lead. Similarly, one easily recognizes a friend in a crowd without much thought, and few people would pause to think deeply when asked what 2 + 2 equals.
System 2, on the other hand, is slow, deliberate, conscious and analytical. It is the process that later reasons about why the quick decision was in fact correct and necessary. This more thorough, information-processing mode of thinking takes the lead in solving complex problems or situations. Arriving at a considered answer or decision is more energy-intensive, laborious and noticeably slower. Both systems are deeply personal and operate within the individual.
However, people also think together through dialogue. As technology multiplies the connections between members of society, a collective thinking process emerges, built on shared ideas, opinions and observations. Out of this phenomenon, collective intelligence takes shape, rather than everything resting solely on individual choices.
There is also occasional speculation that the internet thinks independently. Through the synthesis of information on digital platforms such as the internet and social media, something more than just the sum of individual minds seems to emerge. This is often referred to as independent thinking emerging from complexity. Where this fits into Kahneman's framework of systems might become clear later.
Another, newly emerging system of thinking may now be pushing to the forefront. It has even been given a name – System 0. A group of scientists has proposed the idea of an emergent thinking process that arises from solving problems in collaboration with artificial intelligence (AI). The zero suggests a form of existence outside the human mind, ahead of both the reflexively fast System 1 and the slowly pondering System 2.
The inclusion of AI in the scheme of human thinking processes is justified by its cognition-enhancing effect. A contrary example is the harmful impact internet services have had on mental performance, on the comprehension of behavior-shaping information and on the intelligent behavior expected of every person. Despite these known behavioral risks, the hope is that AI assistance with tasks will enhance human capabilities and create value.
By joining work-related discussions as a potential cognitive assistant, AI also influences human thinking, information processing and decision-making. Although it is physically located outside the organic brain, scientists see AI as a pre-conscious, heavily data-driven form of cognition operating alongside or beneath human thought.
In other words, while System 1 relies on intuition and System 2 on logic, System 0 represents machine-based information processing that functions autonomously and efficiently, becoming an organic part of improved human decision-making capacity.
The idea's proponents also offer a warning, one that is no longer news in the post-internet era. The internet in general, and Google in particular, has already been linked to growing mental laziness and a decline in critical thinking, to the point that human freedom itself is seen to be in jeopardy. This has been observed with search engine results, which are often taken as truth without even a cursory evaluation.
Compared to search results, AI presents information even more cleverly and persuasively, which could lead to mental atrophy, the idea's authors warn. Blindly trusting AI's conclusions risks losing the mental freedom and independence people once had. At the same time, the potential benefits are immense. We may be facing something akin to the adoption of fire in ancient times: some will get burned, while others will enjoy a feast.
--
Editor: Jaan-Juhan Oidermaa, Marcus Turovski