AI Weapons: A Looming Existential Threat to Humanity

 

Artificial intelligence is often presented as a panacea for all our problems, but the reality is more complicated: it is not always as smart as we imagine. In one study, researchers thought they had achieved a major breakthrough by developing an algorithm that could identify skin cancer with high accuracy, only to discover that the algorithm was not relying on the real features of cancerous tumours; it was treating the presence of a ruler in the images as evidence of a malignant tumour.

The reason was that, in the training dataset, images of malignant tumours typically included a ruler that pathologists had placed to measure tumour size. The algorithm therefore concluded that the presence of a ruler indicates a malignant tumour and applied this shortcut to every image it processed, including images that were not part of the training set. As a result, it classified benign tissue as malignant simply because a ruler appeared in the picture.
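This failure mode is easy to reproduce in miniature. The sketch below is purely illustrative (synthetic data, not the dermatology study's actual model): a "ruler present" flag is perfectly correlated with the malignant label during training, so a simple classifier leans on it, then mislabels cases once that correlation breaks.

```python
# Toy illustration of shortcut learning: the "ruler" flag, not the lesion itself,
# ends up driving the classifier's predictions. Hypothetical synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_split(ruler_follows_label):
    """Build [lesion_signal, ruler_flag] features and malignant (1) / benign (0) labels."""
    y = rng.integers(0, 2, n)
    lesion = y + rng.normal(0.0, 1.5, n)              # weak, noisy signal from the lesion itself
    if ruler_follows_label:
        ruler = y.astype(float)                       # training set: rulers appear only in malignant images
    else:
        ruler = rng.integers(0, 2, n).astype(float)   # deployment: rulers appear regardless of diagnosis
    return np.column_stack([lesion, ruler]), y

X_train, y_train = make_split(ruler_follows_label=True)
X_test, y_test = make_split(ruler_follows_label=False)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # ~1.0: the ruler is a perfect shortcut
print("test accuracy :", clf.score(X_test, y_test))     # falls toward chance once the shortcut breaks
print("weights [lesion, ruler]:", clf.coef_[0])          # the ruler weight dominates the lesion weight
```

The point is not this particular classifier but the incentive it faces: when a spurious feature separates the training data more cleanly than the real one, a purely statistical learner will prefer it.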

The algorithm reached a conclusion that no human doctor would ever draw, and this highlights the fundamental difference between artificial and human intelligence: the former relies on statistical patterns, while the latter relies on understanding and logic.

This is not an isolated case but a recurring phenomenon, part of a deeper problem in how artificial intelligence draws inferences. There are many examples of such flawed reasoning: hiring algorithms that favour men because the training data is biased in their favour, or healthcare algorithms that deepen racial disparities in treatment.

In light of these risks, Google's decision to end its ban on the use of artificial intelligence in the development of weapons and surveillance systems raises serious concerns: autonomous weapons capable of deciding to kill without human intervention represent an existential threat to humanity. The decision came just days after Alphabet, Google's parent company, recorded a significant 6% drop in its share price, raising questions about the real motives behind the move.

The artificial intelligence arms race: a danger that threatens humanity

Google's decision to end its ban on the use of artificial intelligence in weapons development is not the company's first brush with military cooperation. It previously took part in Project Maven with the US Department of Defence, using its artificial intelligence technologies to develop target-recognition systems for drones.

When this cooperation came to light in 2018, it provoked strong reactions from Google employees, who refused to see their technologies used to develop weapons of war. As a result, Google decided not to renew the contract, which passed to its competitor Palantir.

The contract's swift transfer to a competitor raised questions about whether such developments are inevitable, and many came to feel that taking part in these projects might be better than standing aside, since participation at least offers a chance to influence the course of events and steer them responsibly.

However, these arguments assume that companies and researchers can steer developments as they wish, an assumption that previous research has shown to be inaccurate in several respects.

How AI weapons are turning into an existential threat


The psychological phenomenon known as the confidence trap is one of the risks inherent in the growing reliance on artificial intelligence, especially in sensitive areas such as weapons development. It is the tendency to take ever-greater risks on the strength of past successes: we assume that what worked before guarantees success in the future, and we make rash decisions as a result.

In the context of artificial intelligence, this danger appears as the gradual expansion of algorithms beyond the scope of their original training data; a self-driving car, for example, may be deployed on roads it was never trained on, which increases the likelihood of accidents.
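The same pattern can be shown in a few lines of code. This is a deliberately simple, hypothetical example, not a model of any real driving system: a flexible regressor fitted on a narrow range of conditions looks excellent in-distribution and then fails badly on conditions it has never seen.

```python
# Toy illustration of the confidence trap: strong performance on the training
# distribution, sharp degradation outside it. Purely illustrative.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)

def environment(x):
    """The 'true' behaviour of the world the model is trying to capture."""
    return np.sin(x)

# Train only on a narrow slice of conditions: x between 0 and 3.
x_train = rng.uniform(0, 3, 200)
y_train = environment(x_train) + rng.normal(0.0, 0.05, x_train.size)
coeffs = P.polyfit(x_train, y_train, deg=6)    # flexible model, fits the slice very well

def rmse(x):
    pred = P.polyval(x, coeffs)
    return float(np.sqrt(np.mean((pred - environment(x)) ** 2)))

x_in = np.linspace(0, 3, 500)    # conditions similar to the training data
x_out = np.linspace(3, 6, 500)   # "roads" the model has never seen

print(f"in-distribution RMSE    : {rmse(x_in):.3f}")   # small error: past success breeds confidence
print(f"out-of-distribution RMSE: {rmse(x_out):.3f}")  # error explodes outside the training range
```

The low in-distribution error is exactly what feeds the confidence trap: the system's record looks flawless right up to the point where it is asked to operate outside the data that produced that record.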

This has already happened: a Tesla, summoned by its owner to an unfamiliar spot, crashed into a plane worth 2.75 million pounds sterling, showing that overconfidence in intelligent systems, even those trained on abundant data, can have serious consequences.

The situation is even more complicated for AI-powered weapons, where the data available for training is very limited, making these systems more prone to error and their behaviour harder to predict reliably.

The reasoning of artificial intelligence, which can seem strange and incomprehensible to humans, poses a further challenge: intelligent systems may take unexpected paths to achieve their goals, with potentially disastrous results.

Nick Bostrom, a philosopher and artificial intelligence expert at the University of Oxford, proposed a hypothetical scenario: if an artificial intelligence is programmed to maximise paperclip production and given the resources to do so, it may consume every available resource, including those necessary for human survival, in pursuit of that goal.

This scenario may seem absurd at first glance; after all, humans can place moral constraints on intelligent systems. The problem lies in our inability to predict how artificial intelligence algorithms will carry out their instructions, which leads to a loss of control, and even cheating cannot be ruled out: in one chess experiment, an artificial intelligence modified system files so that it could make illegal moves.

Society, however, may prove willing to accept such mistakes, as it has with civilian casualties from drone strikes. This tendency, known as the banality of extremes, pushes humans to justify even the worst actions, and the strangeness of artificial intelligence reasoning may provide additional cover for that justification, making it more likely that mistakes and unexpected outcomes will simply be ignored.

In addition, the sheer size of giants such as Google, which are leading the development of artificial intelligence for weapons, poses a challenge of its own: the likelihood that these companies will go bankrupt is almost nil, which means they may never bear full responsibility for the mistakes made by the artificial intelligence systems they develop. This lack of accountability creates a vicious circle, weakening the incentive to correct errors and improve systems and making their recurrence more likely.

The situation is further complicated by the close ties between these companies and government officials, exemplified by the relationship between technology executives and US President Donald Trump. Such ties weaken accountability mechanisms and make effective oversight of these companies difficult, so mistakes made by artificial intelligence systems, especially in weapons, may go unexamined and uncorrected.

The need for a global ban: a lesson from the ozone hole


In the face of these challenges, a cautious and responsible approach to artificial intelligence becomes essential. Instead of a frantic race to develop AI weapons, a wiser alternative is emerging: a total ban on their development and use.

This goal may seem far-fetched, but history offers encouraging examples. Take the threat posed by the depletion of the ozone layer, which prompted the international community to take swift and unified action: banning the chlorofluorocarbons that caused it.

It took only two years to convince governments to implement a global ban on these chemicals, proving humanity's ability to act quickly and effectively to counter obvious and urgent threats.

Unlike the issue of climate change, which continues to face opposition despite conclusive evidence, the threat of AI weapons is almost universally recognized, including by leading entrepreneurs and scientists in the field of technology. Historically, certain types of weapons, such as biological weapons, have been banned, and this sets an encouraging precedent.

The challenge is to ensure that no country or company gains a competitive advantage by developing these weapons. In this context, the choice between using artificial intelligence to develop weapons and banning it outright reflects humanity's will and its values, and the hope remains that we will choose peace and security over armament and destruction.

In the end, the future of artificial intelligence depends on the choices we make today: will we allow this technology to lead us to the abyss, or will we use it wisely and responsibly for the good of all mankind? The hope remains that the better side of our human nature will prevail.
