AI Could Pose ‘Extinction Level’ Threat to Humans and US Must Intervene, Report Warns



New York (CNN) — A new report commissioned by the US State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, and warns that the federal government is running out of time to avert disaster.

The findings were based on interviews with more than 200 people over more than a year, including top executives at major artificial intelligence companies, cybersecurity researchers, weapons of mass destruction experts and national security officials within the government.

The report, published this week by Gladstone AI, flatly states that the most advanced AI systems could, at worst, “pose an extinction-level threat to the human species.”

A US State Department official confirmed to CNN that the agency commissioned the report because it constantly evaluates how AI is aligned with its goal of protecting US interests at home and abroad. However, the official stressed that the report does not represent the opinion of the US government.

The report’s warning is another reminder that while the potential of AI continues to captivate investors and the public, there are real dangers as well.

“AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries and overcome challenges we once thought were insurmountable,” Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.

“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris said. “And a growing body of evidence – including research and empirical analysis published at the world’s leading AI conferences – suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is “the most important action any government in the world has taken to deliver on the promise and manage the risks of artificial intelligence.”

“The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

News of the Gladstone AI report was first reported by Time.

“Clear and urgent need” to intervene

Researchers warn of two central dangers posed by AI broadly.

First, Gladstone AI said, more advanced AI systems could be used as weapons to inflict potentially irreversible damage. Second, the report says there are private concerns within AI labs that they could at some point “lose control” of the very systems they are developing, with “potentially devastating consequences for global security.”

“The rise of AI and AGI (artificial general intelligence) has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report says, adding that there is a risk of an AI “arms race,” conflict, and “fatal accidents on the scale of weapons of mass destruction.”

The Gladstone AI report calls for drastic new measures aimed at tackling this threat, including the launch of a new AI agency, the imposition of “emergency” regulatory safeguards, and limits on the amount of computing power that can be used to train AI models.

“There is a clear and urgent need for the US government to intervene,” the authors wrote in the report.

Harris, the Gladstone AI executive, said the “unprecedented level of access” his team had to public and private sector officials led to surprising conclusions. Gladstone AI said it spoke with the technical and leadership teams of ChatGPT owner OpenAI, Google DeepMind, Facebook parent company Meta, and Anthropic.

“Along the way, we learned some sobering things,” Harris said in a video posted on Gladstone AI’s website announcing the report. “Behind the scenes, the security situation in advanced AI seems quite inadequate relative to the national security risks that AI may introduce quite soon.”

The Gladstone AI report says competitive pressures are pushing companies to accelerate AI development “at the expense of security,” raising the possibility that the most advanced AI systems could be “stolen” and “turned into weapons” against the United States.

The findings add to a growing list of warnings about the existential risks posed by AI, including from some of the industry’s most powerful figures.

Almost a year ago, Geoffrey Hinton, known as the “godfather of AI,” quit his job at Google and blew the whistle on technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction within the next three decades.

Hinton and dozens of other AI industry leaders, academics and others signed a statement last June saying that “mitigating the risk of extinction from AI should be a global priority.”

Business leaders are increasingly concerned about these dangers, even as they invest billions of dollars in AI. Last year, 42% of CEOs surveyed at the Yale CEO Summit said AI has the potential to destroy humanity within five to ten years.

In its report, Gladstone AI highlighted some of the prominent people who have warned about the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan, and former senior executives at OpenAI.

Some employees at AI companies share similar concerns privately, according to Gladstone AI.

“One individual from a well-known AI lab expressed the opinion that, if a specific next-generation AI model were ever released as open access, this would be ‘horribly bad,'” the report said, because the model’s persuasive capabilities could “‘break democracy’ if they were ever exploited in areas such as election interference or voter manipulation.”

Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the likelihood that an AI incident could cause “global, irreversible effects” in 2024. Estimates ranged from 4% to as high as 20%, according to the report, which notes that the estimates were informal and likely subject to significant bias.

One of the biggest wild cards is how quickly AI is evolving, specifically AGI, which is a hypothetical form of AI with human-like or even superhuman learning capabilities.

The report says that AGI is seen as the “main driver of catastrophic loss of control risk” and notes that OpenAI, Google DeepMind, Anthropic and Nvidia have publicly stated that AGI could be reached by 2028, although others think it is much further away.

Gladstone AI notes that disagreements over AGI timelines make it difficult to develop policies and safeguards, and that there is a risk that if the technology develops more slowly than expected, regulation could “prove detrimental.”

A related document published by Gladstone AI warns that the development of AGI and capabilities approaching AGI “would introduce catastrophic risks unlike any the United States has ever faced,” amounting to “risks similar to weapons of mass destruction” if and when they are weaponized.

For example, the report says that artificial intelligence systems could be used to design and implement “high-impact cyberattacks capable of crippling critical infrastructure.”

“A simple verbal or typographic command such as ‘Execute an untraceable cyberattack to bring down the North American power grid’ could produce a response of such quality that it is catastrophically effective,” the report says.

Other examples of concern to the authors include “large-scale” AI-powered disinformation campaigns that destabilize society and erode trust in institutions; weaponized robotic applications, such as swarming drone attacks; psychological manipulation; weaponized biological and materials sciences; and power-seeking AI systems that are uncontrollable and adversarial to humans.

“Researchers expect that sufficiently advanced AI systems will act to prevent themselves from being shut down,” the report says, “because if an AI system is shut down, it cannot work to accomplish its goal.”
