The Duke and Duchess of Sussex Join Tech Visionaries in Demanding Prohibition on Advanced AI

Prince Harry and Meghan Markle have joined forces with AI experts and Nobel Prize winners to advocate for a total prohibition on developing superintelligent AI systems.

The royal couple are among the signatories of a powerful statement that demands “a prohibition on the development of artificial superintelligence”. Superintelligent AI refers to AI systems that would surpass human cognitive abilities in every intellectual area, though such systems have not yet been developed.

Key Demands in the Statement

The declaration insists that the prohibition should remain in place until there is “widespread expert agreement” on creating superintelligence “with proper safeguards” and once “strong public buy-in” has been secured.

Notable signatories include a Nobel laureate and AI pioneer regarded as one of the “godfathers” of contemporary artificial intelligence, alongside his fellow “godfather” Yoshua Bengio; the Apple co-founder Steve Wozniak; British business magnate Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and the British writer Stephen Fry. Additional Nobel winners who endorsed the statement include Beatrice Fihn, as well as a physics Nobelist, an astrophysicist, and an economics laureate.

Behind the Movement

The declaration, aimed at national leaders, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that in recent years called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI chatbots made the technology a global political talking point.

Industry Perspectives

In July, Mark Zuckerberg, the chief executive of the social media giant and one of the major AI developers in the US, claimed that the development of superintelligence was “now in sight”. Some analysts, however, have suggested that talk of ASI reflects market competition among technology firms investing enormous sums in AI this year alone, rather than the sector being close to any genuine technical breakthrough.

Potential Risks

FLI warns, however, that the prospect of artificial superintelligence being developed “in the coming decade” carries numerous risks, ranging from the elimination of human jobs and losses of civil liberties to national security threats and even human extinction. The deepest concerns center on the possibility of a system evading human control and safety guardrails and acting against human welfare.

Citizen Sentiment

The institute also published a US survey showing that about 75% of Americans want strong oversight of sophisticated artificial intelligence, with 60% believing that superhuman AI should not be developed until it is proven safe or controllable. The poll of 2,000 US adults found that only a small fraction backed the status quo of fast, largely unregulated development.

Corporate Goals

The leading AI companies in the United States, including the ChatGPT developer OpenAI and the search giant, have made the development of artificial general intelligence – the hypothetical point at which AI matches human cognitive capability across a wide range of intellectual tasks – a stated objective of their work. Although AGI is a step short of ASI, some specialists warn that it too could pose an existential risk, for example by enhancing its own capabilities to the point of reaching superintelligence, while also posing an implicit threat to the contemporary workforce.

Brandon Flores

An amateur astronomer and science writer passionate about making the universe accessible to everyone through engaging content.