Illustration for the call to treat AI as a biological weapon. (Source: gizmodo.com)
In the article "Treat AI as a biological weapon, not a nuclear weapon," Emilia Javorsky, a physician-scientist and director at the Future of Life Institute (USA), argues that although the world has lately and repeatedly compared AI to nuclear bombs, a more suitable approach is to regulate the technology the way we regulate biological weapons and biotechnology.
According to the author, AI may be the most powerful technology humanity has ever developed. Its harmful effects, including discrimination, threats to democracy, and the concentration of influence, are already well documented.
Yet leading AI companies are racing to build increasingly powerful AI systems, escalating risks at a rate unprecedented in human history.
As leaders grapple with how to contain and control the rise of AI and its associated risks, they should look to the regulations and standards humanity has used to govern powerful technologies in the past.
Regulation and innovation can coexist, especially when human lives are at stake.
A warning from nuclear technology
Although nuclear energy is more than 600 times safer than oil in terms of deaths per unit of energy produced, and enormously efficient, few countries will touch it because of the consequences of how the world first encountered the technology.
The world learned about nuclear technology in the form of the atomic bomb and the hydrogen bomb. With these weapons, for the first time in history, humans developed a technology capable of ending human civilization, the product of an arms race that prioritized speed and innovation over safety and control.
Subsequent failures of technical safety and risk management, most infamously the disasters at Chernobyl and Fukushima, destroyed any chance that the public would embrace the positive aspects of nuclear energy.
Despite nuclear power's favorable overall risk profile and decades of effort by scientists to convince the world of its viability, the very word "nuclear" remains tainted.
When a technology causes harm in its early stages, public perception and overreaction can permanently limit its potential benefits. Because of those early missteps with nuclear power, humanity has been unable to take advantage of a clean, safe energy source, and carbon neutrality and energy stability remain a pipe dream.
The right approach to biotechnology
Yet in some areas, humanity has gotten it right. Biotechnology is one such field, encouraged to develop rapidly in a world where patients suffer and die every day from diseases that have no cure.
The ethos of this research is not to "move fast and break things" but to innovate as quickly and safely as possible. The pace of innovation in the field is constrained by a system of regulations, ethics, and norms that protects the welfare of society and individuals, and that shields the industry itself from being paralyzed by the backlash a disaster would provoke.
When biological weapons were banned under the Biological Weapons Convention during the Cold War, the opposing superpowers agreed that creating such weapons would benefit no one. Leaders recognized that these hard-to-control but highly accessible technologies should be treated not as a means of winning an arms race but as a threat to humanity itself.
Emilia Javorsky is one of the scientists who recently signed an open letter calling for a six-month moratorium on AI development. She also signed a statement warning that AI poses a "risk of extinction" to humanity.
The pause in the bioweapons race has allowed biotechnology to develop at a responsible pace, with scientists and regulators applying strict standards to any new innovation that could potentially harm people.
These regulations have not come without cost, but they have built a bioeconomy with applications ranging from clean energy to agriculture.
During the Covid-19 pandemic, biologists applied mRNA technology to produce effective vaccines at a speed unprecedented in human history.
A recent survey of AI researchers found that 36% of respondents believe AI could cause a nuclear-level catastrophe. Yet government responses and regulation have been slow to catch up with the pace of adoption, with ChatGPT already surpassing 100 million users.
The rapidly escalating risks of AI recently prompted 1,800 CEOs and 1,500 professors in the US to sign a letter calling for a six-month pause in AI development and an urgent process of regulation and risk mitigation. Such a pause would give the global community time to limit the harm AI causes and to prevent the risk of irreversible disaster for our society.
While assessing AI's risks and potential harms, we must not lose sight of the technology's positive potential. If we develop AI responsibly now, we stand to reap incredible benefits: applying AI to drug discovery and development, improving the quality and cost of health care, and widening access to doctors and medical treatment.
Google's DeepMind has shown that AI can solve fundamental problems in biology that have long eluded human scientists. Research suggests AI could accelerate progress on all of the United Nations' Sustainable Development Goals, moving humanity toward a future of better health, equity, prosperity, and peace.
Now is the time for the global community to come together, as it did 50 years ago at the Biological Weapons Convention, to ensure that AI development is safe and responsible. If we do not act soon, we risk destroying both the bright future AI could bring and the society we have today.