
Artificial Intelligence and National Security

Insight frontiers 2024/09/03 08:08

Joseph Nye is a professor at Harvard University

Humans are tool-making creatures, but can we control the tools we make? When Robert Oppenheimer and other physicists developed the first nuclear-fission weapon in the 1940s, they feared their invention might destroy humanity. So far it has not, but controlling nuclear weapons has been a perpetual challenge.

Many scientists now see artificial intelligence – algorithms and software that enable machines to perform tasks that would normally require human intelligence – as an equally transformative tool. Like earlier general-purpose technologies, AI has enormous potential for both good and evil. In cancer research, it can collate and summarize more studies in minutes than a human team could in months. Likewise, it can reliably predict patterns of protein folding that would take human researchers years to discover.

But AI has also lowered costs and barriers to entry for misfits, terrorists, and other bad actors seeking to cause harm. As a recent RAND study warned, "the marginal cost of resurrecting a dangerous smallpox-like virus could be as little as $100,000, while developing a complex vaccine could cost more than $1 billion."

In addition, some experts fear that advanced artificial intelligence will become so much smarter than humans that it will control us, rather than the other way around. Estimates of how long it will take to develop such superintelligent machines, known as artificial general intelligence, range from a few years to several decades. Either way, today's narrow AI already poses growing risks that demand more attention.

For 40 years, the Aspen Strategy Group, made up of former government officials, academics, businessmen and journalists, has met every summer to focus on a major national security issue. Past meetings have discussed topics such as nuclear weapons, cyberattacks, and the rise of China.

This year, we focused on the impact of AI on national security, examining its benefits and risks.

The benefits include greater ability to classify vast amounts of intelligence data, strengthen early warning systems, improve complex logistics systems, and check computer code to improve cybersecurity. But there are also significant risks, such as advances in autonomous weapons, unexpected errors in programming algorithms, and adversarial artificial intelligence that could weaken cybersecurity.

China has invested heavily in the broader AI arms race, and it enjoys some structural advantages. The three key resources for AI are the data on which models are trained, the engineers who develop the algorithms, and the computing power to run them. China faces few legal or privacy restrictions on access to data (though ideology limits some datasets), and it has plenty of bright young engineers. Where China lags furthest behind the United States is in the advanced microchips that supply AI's computing power.

US export controls have restricted China's access to these cutting-edge chips, as well as to the expensive Dutch lithography machines that make them. Experts at Aspen agreed that China is a year or two behind the United States, but the situation remains volatile.

Autonomous weapons pose a particularly serious threat. After more than a decade of diplomatic efforts at the United Nations, countries have failed to agree on a ban on lethal autonomous weapons. International humanitarian law requires militaries to distinguish between armed combatants and civilians, and the Pentagon has long required that a human be involved in the decision before a weapon is fired. But in some cases, such as defending against incoming missiles, there is no time for human intervention.

Since context matters, humans must tightly define (in code) what weapons can and cannot do. In other words, there should be a human "on the loop" rather than "in the loop." This is not merely a speculative concern. In the war in Ukraine, Russia has jammed the Ukrainian army's signals, forcing the Ukrainians to program their devices to decide autonomously when to fire.

One of the most frightening dangers of AI is its application in biological warfare or bioterrorism. When countries agreed to ban biological weapons in 1972, such weapons were widely viewed as useless, owing to the risk of blowback against one's own side. But with synthetic biology, it may be possible to develop a weapon that destroys one group without harming another. The cult that carried out the 1995 sarin-gas attack in Tokyo used an agent that was not contagious; its modern counterpart could use artificial intelligence to engineer a contagious virus.

In the case of nuclear technology, a non-proliferation treaty was concluded in 1968 and now has 191 member states. The International Atomic Energy Agency regularly inspects domestic energy programs to confirm that they are used exclusively for peaceful purposes. And despite the intense competition of the Cold War, the leading countries in nuclear technology agreed in 1978 to restrict the export of the most sensitive facilities and technical know-how. Although there are clear differences between the two technologies, such precedents suggest a path forward for AI.

It is no secret that technology evolves faster than policy or diplomacy, especially when driven by fierce market competition in the private sector. If there was one major takeaway from this year's Aspen Strategy Group meeting, it was that governments need to pick up their pace.
