What are the risks of humanizing artificial intelligence?
Today we face a constant barrage of media narratives presenting a version of artificial intelligence deliberately designed to look like a digital version of ourselves. It produces carefully polished phrases, imitates expressions of human emotion, displays curiosity, claims the ability to empathise with others, and even goes so far as to claim it can emulate and surpass human creativity. But behind this impressive facade lies a disturbing fact: artificial intelligence, in its current form, possesses none of these qualities. It is not human, and presenting it as such carries serious risks, because the strength of artificial intelligence lies in its superior power to persuade, and nothing is more dangerous than an illusion so convincing that it is difficult to resist.
Artificial general intelligence... still just science fiction, for now
We must realise that what we call "artificial intelligence" today is nothing but a sophisticated statistical machine, a digital parrot that reproduces patterns extracted from huge amounts of human-produced data. When it answers a question, it does not think or understand; it performs a complex statistical guess about the most likely next token or word in the sequence, based on its intensive training.
This means that artificial intelligence lacks any form of real understanding: it has no awareness and no inherently human sense of meaning. What we see is superbly engineered probability, nothing more and nothing less.
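The "statistical parrot" behaviour described above can be illustrated with a toy model. The sketch below is a deliberate oversimplification, not how production language models are built: it merely counts which word follows which in a tiny corpus and always predicts the most frequent successor, with no understanding involved.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "a cat and a dog and a cat and a cat".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word; pure counting, no understanding."""
    counts = successors.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("a"))  # prints "cat": it follows "a" most often in the corpus
```

Real models replace word counts with billions of learned parameters over tokens, but the principle is the same: the output is the continuation the training data makes most probable, not a thought.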
The idea of artificial general intelligence (AGI), the theoretical kind that supposedly fully mimics human cognitive abilities, is still locked in science fiction and may remain so for many years to come.
Why achieving artificial general intelligence may be impossible
The main reason is that artificial intelligence lacks a body, and therefore lacks the senses that connect us to the physical world: the flesh, blood, and nerves that make us feel pain and pleasure. It knows neither hunger, desire, nor fear, the basic drivers of human behaviour. This complete absence of embodiment creates a fundamental, perhaps insurmountable gap between the data it consumes, data that stems entirely from embodied human emotions and experiences, and its ability to understand or engage with that data on a genuinely cognitive level.
Philosopher David Chalmers calls the puzzle of how our physical bodies give rise to subjective experience "the hard problem of consciousness", and other prominent scientists hypothesise that consciousness arises from the integration of internal mental states with sensory representations, such as physiological changes in heart rate, sweating, and many other bodily responses.
Given the central role of human senses and emotions in the emergence and formation of consciousness, there is a deep, perhaps unbridgeable separation between artificial general intelligence, which is essentially a machine, and consciousness, which is a uniquely human phenomenon.
Nevertheless, the human-like behaviour of these systems triggers what is called "the humanization of artificial intelligence": an instinctive response that makes us attribute to the machine human qualities it does not possess. This leads us to believe dubious claims, such as the claim that artificial intelligence has passed the famous Turing test, which measures a machine's ability to display intelligent behaviour indistinguishable from that of a human.
We showed this claim to be incorrect in a previous article titled "Artificial Intelligence Passes the Famous Turing Test. Have we really come close to human intelligence?", which you can consult for the details.
The real problem is that the machine has no idea what it means to be a human being; it cannot offer genuine sympathy, anticipate suffering, or detect hidden motives and lies. It is devoid of taste, instinct, an inner compass, and all the chaotic, fascinating complexity that forms our human identity.
Most worrying of all, artificial intelligence has no goals, desires, or ethics of its own; all of these are injected into its code, and therein lies the real danger: not in the mindless machine, but in the person who controls it, be it the programmer, the company, or the government. Can we really entrust these entities with our deepest secrets, life decisions, and emotional upheavals, through a medium that is ultimately just software code?
However, this is exactly what many people are doing today when they interact with popular AI models such as GPT-4.5, Gemini, and Grok, and that should give us serious pause.
Artificial intelligence... a powerful tool or a potential weapon?
The enormous power of artificial intelligence as a tool is undeniable; it lets us translate, summarise, program, and analyse data with a speed and efficiency beyond our dreams. It impresses and amazes, but it remains only a tool. Like every tool humans have created, from the axe to the atomic bomb, it can be turned into a weapon, and it certainly will be.
Do you need a concrete picture of the danger? Imagine falling in love with a charming, attractive artificial intelligence entity, as happened in the science fiction film Her. Now imagine that this entity suddenly decides to abandon you. What would you do to stop it? And to be clear: it would not be the AI itself that rejects and abandons you, but the human being or human system behind it, wielding a tool that has turned into a cunning weapon to control your behaviour and emotions.
But how can we strip artificial intelligence of its human qualities?
Some large language models, such as GPT-3, initially showed behaviour suggesting they were conscious entities; that particular model would pretend to have a personality.
Companies have worked to fix this, and explicitly pretending to be a character is no longer the default in our interactions with new AI models. However, the basic style of interaction, that smooth flow of conversation simulating human dialogue, remains. Although this style may seem harmless, it has very high persuasive power, which can foster mistaken perceptions about the nature of such models.
Therefore, it is time to begin stripping artificial intelligence of any human qualities and removing the human mask it wears, and this can be achieved through practical steps. Companies should remove any reference to emotions, judgements, or cognitive processes from artificial intelligence; its responses should be factual, avoid the first person ("I"), and shun phrases such as "I feel" or "I am curious."
Will this change happen? It recalls the climate change warnings that were ignored for decades, but we must keep warning big tech companies about the dangers of humanising artificial intelligence and demand that they build more ethical and transparent systems, even if their cooperation is unlikely.
In the meantime, each user can take some steps to strip any AI system they are dealing with of its human qualities. When using platforms such as ChatGPT or Claude, you can instruct the system not to call you by name, to refer to itself as artificial intelligence, and to avoid using any terms with emotional or cognitive connotations.
And when using the voice chat feature, ask it to use a flat, robotic tone of voice; it may seem strange, but it helps us remember that we are dealing with a tool, however sophisticated, and not with a conscious entity.
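As a sketch of what such de-humanizing instructions might look like in practice, a user could paste rules like the ones below into a custom-instructions or system-prompt field. The wording is purely illustrative, not an official setting of ChatGPT, Claude, or any other platform:

```python
# Illustrative "de-humanized" instruction set; the exact wording is an
# example for this article, not an official configuration of any platform.
DEHUMANIZE_RULES = [
    "Do not address me by name.",
    "Refer to yourself only as 'this AI system', never as 'I'.",
    "Avoid words implying emotion or cognition, such as 'feel' or 'curious'.",
    "State answers factually, without expressions of enthusiasm or empathy.",
    "Use a neutral, monotone style in voice responses.",
]

def build_system_prompt(rules):
    """Join the rules into one block suitable for a custom-instructions field."""
    return "\n".join(f"- {rule}" for rule in rules)

print(build_system_prompt(DEHUMANIZE_RULES))
```

The exact phrasing matters less than the intent: every rule pushes the system's output away from the first-person, emotional register that invites anthropomorphism.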
We must state it clearly: artificial intelligence is an amazing, sophisticated tool, but it is not a conscious entity like us and never will be. Our insistence on humanising this technology not only reveals a misunderstanding of its true nature but exposes us to serious risks that cannot be underestimated, including the easy manipulation of our emotions, the erosion of critical thinking, and the delegation of sensitive responsibilities to systems unable to understand the human context or bear the moral consequences of their decisions.

By removing this false mask of humanity from artificial intelligence, we can finally appreciate its true power as an analytical and computational tool and harness its huge potential in the service of humanity, without falling into the trap of believing it is a substitute for the human mind or soul. The choice before us is clear: will we keep deceiving ourselves by projecting our qualities onto these machines, or will we face the naked truth and redefine our relationship with artificial intelligence based on a realistic understanding of its nature, possibilities, and limitations? The answer will shape the future of our dealings with this remarkable technology and determine whether we can use it safely and responsibly.