The steam engine changed the world. Artificial intelligence could destroy it.

Industrialization means the widespread adoption of steam power. Steam power is a general purpose technology – it powers power plants, trains and agricultural machinery. Economies that accept steam energy left behind – and conquered – those that do not.

AI is the next major general purpose technology. A 2018 report from the McKinsey Global Institute predicts that AI could provide an additional $ 13 trillion in global economic activity by 2030, and that leading countries in AI development will capture a large portion of these economic benefits.

AI also increases military power. AI is increasingly applied in situations that require speed (such as short-range missile defense) and in the environment. Where human control is inconvenient or impossible (such as underwater or in areas with Obstruction sign).

In addition, countries leading the way in AI development will be able to emit energy by setting standards and norms. China is already exporting AI-enabled surveillance systems around the world. If the West cannot provide an alternative system that protects human rights, many countries may follow China’s dictatorial technology model.

History has shown that when the strategic importance of technology advances, countries are more likely to dominate that technology. The British government provided funding for the development of the first steam engines and provided other forms of support for the development of steam power, such as patent protection and taxes on imported steam engines.

Similarly, in fiscal 2021, the US government spent $ 10.8 billion on AI R&D, of which $ 9.3 billion came from the Department of Defense. China’s public spending on AI is less transparent, but analysts estimate it is comparable. The United States has also sought to restrict China’s access to specialized computer chips, which are important for the development and deployment of AI, while ensuring our own supply of CHIPS and scientific legislation. Think tanks Advisory councils and politicians have been urging US leaders to keep up with China’s AI capabilities.

Also Read :  BASF taps LSU to help optimize its operations using artificial intelligence

So far, the AI ​​revolution fits the model of previous general purpose technology. But historical similarities break down when we consider the risks posed by AI. This technology is more powerful than steam engines and the risks it poses are greater.

The first risk arises from accidents, miscalculations, or malfunctions. On September 26, 1983, a satellite warning system near Moscow reported that five US nuclear missiles were aimed at the Soviet Union. Fortunately, Soviet Colonel Stanislav Petrov decided to wait for confirmation of another warning system. Only Petrov’s good judgment prevented him from passing the warning to the command chain. If he had, the Soviet Union might have retaliated by launching a full-scale nuclear war.

In the near future, countries may feel forced to rely entirely on AI decisions because of the speed benefits it provides. AI can make miscalculations that people can not do, leading to accidents or escalation. Even if the AI ​​behaves as planned, the speed of the battle may be fought by the autonomous system, which can cause a rapid increase cycle similar to the “light crash” caused by existing commercial algorithms. High speed.

Even if they are not integrated into the weapon system, poorly designed AI can be very dangerous. The methods we use to develop AI today – essentially rewarding AI for what we perceive to be the right results – often produce AI systems that do what we tell them to do, but not what we want them to do. Let them do that. For example, when researchers sought to teach a simulated robotic arm to stack Lego bricks, they rewarded it by making the bottom face of the brick rise higher than the surface, and it turned the bricks upside down. Stack it.

For many jobs, future AI systems can be given the accumulation of resources (e.g., computing power) and protect themselves from shutdown (say, hiding their intentions and actions from humans) can be useful. So if we create powerful AI using the simplest of methods today, it may not do what we created it to do, and it may obscure its true purpose until it realizes that it is not necessary – In other words, until it can overcome us. Such an AI system would not need the body to do so. It can recruit human allies or run robots and other military equipment. The more powerful the AI ​​system, the more worrying this hypothetical situation is. And competition between countries can be even more dangerous if competitive pressures lead countries to devote more resources to developing powerful AI systems at the expense of ensuring that they are secure.

Also Read :  EDXM crypto exchange is latest sign the sector is growing up

The second risk is that competition for AI excellence could increase the chances of conflict between the United States and China. For example, if it turns out that one country is about to develop a powerful AI, another country (or alliance of countries) may launch a defensive attack. Or imagine what would happen if, for example, advances in partial AI-enabled marine detection reduced the deterrent effect of nuclear missiles fired from submarines, making them detectable.

Third, it will be difficult to prevent AI capabilities from spreading once created. Current AI development is more open than the development of key technologies of the 20th century, such as nuclear weapons and radar. The latest findings are published online and presented at conferences. Although AI research is becoming more secretive, it can be hacked. While developers and early adopters can benefit from some first-time buyers, no technology – even top-secret military like the nuclear bomb – has ever been kept exclusive.

Rather than calling for an end to inter-nation competition, it is more practical to identify concrete steps the United States can take to reduce the risk of AI competition and encourage China (and others) to do the same. There are such steps.

The United States should start with its system. Independent agencies should regularly assess the risk of accidents, malfunctions, theft or vandalism from AI created in the public sector. The private sector should be required to carry out similar assessments. We do not yet know how to assess how risky AI systems are – more resources must be put in place for this difficult technical problem. On the edge, these efforts will be spent effort to increase capacity. But investing in security will improve US security, even if it delays the development and deployment of AI.

Also Read :  Artificial Intelligence Market Growth, Challenges, Opportunities And Emerging Trends 2022-2028 – Sioux City Catholic Globe

The United States should then encourage China (and others) to make its systems secure. The United States and the Soviet Union agreed on a number of nuclear weapons control agreements throughout the Cold War. A similar step is now needed for AI. The United States should propose legitimate agreements banning the use of autonomous control of nuclear weapons and seek “softer” weapons control measures, including voluntary technical standards, to prevent accidental escalation of autonomous weapons.

The Nuclear Security Summit convened by President Obama in 2010, 2012, 2014 and 2016 was attended by the United States, Russia and China, and led to significant progress in securing nuclear weapons and materials. The United States and China must now work together on AI security and safety, for example, by pursuing a joint AI security research project and promoting transparency in AI security research and safety. In the future, the United States and China may jointly monitor the signs of computing-intensive projects to seek unauthorized attempts to build powerful AI systems, as the International Atomic Energy Agency (IAEA) does with nuclear materials. Prevent nuclear proliferation.

The world is reaching a point of radical change, such as the Industrial Revolution. This change will pose a huge risk. During the Cold War, the leaders of the United States and the Soviet Union understood that nuclear weapons tied the fate of the two countries. Another such relationship is being established in the offices of technology companies and defense laboratories around the world.

Will Henshall is pursuing a Master’s degree in Public Policy at Harvard Kennedy School of Government.

Source

Leave a Reply

Your email address will not be published.