Harry and Meghan Align With AI Pioneers in Demanding Prohibition on Advanced AI

Prince Harry and Meghan Markle have joined forces with AI experts and Nobel Prize winners to advocate for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of an influential declaration calling for “a ban on the creation of artificial superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human intelligence in all cognitive tasks, though the technology remains theoretical.

Primary Requirements in the Statement

The declaration states that the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “safely and controllably”, and until “substantial public support” has been achieved.

Notable individuals who endorsed the statement include a leading AI researcher, technology visionary and Nobel Prize recipient; his colleague, another AI expert and pioneer of contemporary artificial intelligence; tech entrepreneur Steve Wozniak; British business magnate Richard Branson; Susan Rice; former Irish president Mary Robinson; and a British author and public intellectual. Additional Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, an astrophysicist, and Daron Acemoğlu.

Behind the Movement

The declaration, aimed at national leaders, tech firms and lawmakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause on the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.

Tech Sector Views

In recent months, Mark Zuckerberg, the leader of Facebook parent Meta, one of the major AI developers in the United States, claimed that the development of superintelligence was “now in sight”. However, some analysts have suggested that talk of superintelligence reflects market competition among technology firms investing enormous sums in AI, rather than the sector being close to any such scientific breakthrough.

Possible Dangers

Nonetheless, the organization warns that the possibility of ASI being developed “in the coming decade” carries numerous risks, from the displacement of human workers and the loss of civil liberties to national security threats and even existential danger to humanity. The deepest concerns about artificial intelligence centre on a system’s potential capability to escape human oversight and protective measures, and to act against human welfare.

Citizen Sentiment

FLI released an American survey showing that about 75% of Americans want strong oversight of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is demonstrated to be safe and controllable. The survey noted that only a small fraction of American respondents supported the status quo of rapid, unregulated development.

Industry Objectives

The top artificial intelligence firms in the US, including the ChatGPT creator OpenAI and Google, have made the creation of human-level AI – the hypothetical state in which AI matches human capability at most cognitive tasks – a stated objective of their research. While this is one notch below ASI, some experts caution that it too could pose an extinction threat, for instance by being able to improve itself into superintelligence, while also presenting an underlying danger to the modern labour market.

Thomas Martinez

A tech-savvy writer passionate about simplifying complex topics for everyday readers, with a background in digital media.