“It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.”

Carl Sagan

Why do cars have people’s faces? Car grilles and headlights frequently resemble faces, and this design convention makes automobiles look familiar. Designers may use it deliberately to make cars appear friendlier; notice that when a car’s design breaks the pattern, it looks unwelcoming, hostile, and unpleasantly different. This is anthropomorphism: people frequently perceive human characteristics in inanimate objects. We see faces in clouds, name our vehicles, and yell at malfunctioning computers. Although anthropomorphizing comes naturally to people, it isn’t always automatic. Studies suggest that it depends on the presence of particular features: the more human characteristics an object displays, the stronger the anthropomorphism.

Of late, a version of this phenomenon has dominated the news: the claim that generative artificial intelligence has already reached human-level intelligence and that our hopes have been realized. Though these systems are disembodied, people perceive generative AI as possessing interpersonal attitudes. The reasoning echoes the Turing test: if people cannot tell a computer from a human through teletyped conversation, then we must conclude that the machine has reached human-level intelligence. But just as pareidolia makes us see faces in car grilles, when an AI replies with a human-like response, we tend to humanize it.
Another observation: the Barnum effect occurs when people accept personality descriptions that apply to most people as credible insights into themselves. “You have a great need for other people to like you,” “You tend to be critical of yourself,” and “Sometimes it is difficult for you to make important decisions” are classic examples. These high-base-rate statements apply to most of the population, yet people accept such universally valid statements as uniquely true and meaningful about themselves, and positive descriptions are accepted more readily than negative ones. A similar phenomenon can be observed in how people receive the responses of generative AI models. When these models are trained on publicly accessible information about real people, including their interviews, speeches, and writings, they can simulate those people and respond as they would. Because we already hold an expectation about what we are asking the AI, when it responds with uncanny similarity we are convinced the response is human-like, even though the model is only parroting back things we already know, drawn from things we already created.
A further observation: think of the last time you planned to buy a new phone and looked up reviews online. Say you already had a particular phone in mind. How often have you actually been swayed to buy something else instead? You search for reviews of that phone; the top results are mostly positive, but occasionally you find one that begs to differ, one that makes a solid case and raises what should be a dealbreaker. Would you still buy the phone? There is a great chance that you would. Psychologists call this confirmation bias: we dismiss seemingly reasonable evidence against a position we already favor. The same phenomenon is observable among proponents of generative AI. They tend to see what they would like to see and design experiments to prove that these models have human-like intelligence, citing results that suggest AI consistently outperforms humans on widely accepted intelligence tests. But if these models are trained on tests of artistic and literary creativity, mechanical ability, and the like, it is hardly surprising that they pass such psychometric tests. Again, the AI merely appears to have human-like intelligence.
People think artificial intelligence is realized human intelligence, but this is a misconception. Researchers tend to be selective about the experiments they give to AI. Some of these experiments are very difficult for people, so when the AI completes the task with ease, observers are convinced that it possesses human-like intelligence. Another example is pitting AI against humans in puzzles and games: from solving a Rubik’s cube to playing chess or Go, people are amazed when the AI triumphs over professionals and grandmasters. If researchers keep making this flawed comparison between human-like intelligence and artificial intelligence, these cognitive biases will only be further reinforced.
Lastly, it is problematic to treat being human as the pinnacle of intelligence. The experiments and tests are deeply anthropocentric, resting on the presumption that human intelligence is the standard benchmark for intelligence. However successful these tests are at measuring human intelligence, they are insufficient for assessing machine intelligence. We confuse human-level intellect with the augmentative capabilities of artificial intelligence. AIs are intelligent and powerful in their own right. While human intelligence relies on reasoning (deductive, inductive, abductive, quantitative, causal, or moral) to draw conclusions from premises, facts, and knowledge in order to solve problems and make decisions, people can still benefit from AI augmentation. That is to say, human intelligence can go farther when combined with augmentative AI. And this is where Valmiz comes in.
Valmiz’s unique information augmentation system builds advanced comprehension from your own data and from external sources. With this seamless integration of both kinds of intelligence, people can realize the full potential of their knowledge base. You can learn more about Valmiz here. We would also like to learn more about you and your organization; to set up an introductory meeting, click this link.