A UK government adviser has warned that a ban on the most powerful artificial intelligence systems may eventually be necessary, highlighting concerns about their implications. The adviser, who is also the CEO of Faculty AI, emphasized the need for stringent transparency and audit requirements, along with enhanced safety measures, for artificial general intelligence (AGI).
He suggested that important decisions about AGI will need to be made in the coming months to a year. His remarks follow a joint EU and US statement underscoring the urgency of establishing a voluntary code of practice for AI.
The adviser is a member of the AI Council, an independent committee of experts that provides guidance to the government and AI leaders. Faculty AI, OpenAI’s exclusive technical partner, assists customers in the safe implementation of ChatGPT and other AI products into their systems. While Faculty AI’s tools have been instrumental in predicting the demand for NHS services during the pandemic, the company’s political affiliations have attracted scrutiny.
In addition, the adviser joined the Center for AI Safety in warning about the potential risks of AGI, including the risk of human extinction. Faculty AI, along with other technology companies, engaged in discussions with the Technology Minister regarding the risks, opportunities, and regulatory frameworks necessary for ensuring the safety and responsibility of AI technologies.
The adviser proposed that "narrow AI" systems, designed for specific tasks such as translation or medical imaging, could be regulated much like existing technologies. AGI, by contrast, is a fundamentally new kind of technology that raises greater concerns and requires distinct regulation. AGI refers to systems intended to match or surpass human intelligence across a wide range of tasks, and that breadth of capability is the source of the potential risk.
The adviser argued that humanity's dominant position on Earth is primarily attributable to its intelligence. If AGI reaches or surpasses human-level intelligence, there is no scientific basis for assuming it would be safe. That does not guarantee a bad outcome, but the risks warrant a cautious approach. He suggested imposing strict limits on computing power, and even raised the possibility of banning algorithms that exceed certain complexity or computational thresholds, while stressing that such decisions should rest with governments rather than technology companies.
Some argue that concerns about AGI distract attention from existing problems with AI technologies, such as bias in recruitment or facial recognition tools. The adviser countered that both AGI risks and current technological challenges must be addressed, drawing an analogy: just as society wants both cars and airplanes to be safe, it should not have to choose between the two.
The balance of regulation is also a matter of discussion, as excessive regulation could diminish the UK’s attractiveness to investors and hinder innovation. However, the adviser believed that promoting safety could provide the UK with a competitive advantage. He expressed his conviction that safety is crucial for extracting value from technology, drawing a parallel with the necessity of functioning engines for airplanes.
Although the recent UK White Paper on regulating AI faced criticism for not establishing a dedicated watchdog, Prime Minister Rishi Sunak acknowledged the need for "guardrails" and said the UK could take a leadership role in this domain.
EU Commissioner Margrethe Vestager indicated that industry and other stakeholders would be invited to contribute to a draft voluntary code of conduct in the coming weeks. During a meeting of the fourth US-EU Trade and Technology Council, US Secretary of State Antony Blinken emphasized the importance of establishing voluntary codes of conduct open to a wide range of like-minded countries.