Sam Altman, co-founder and CEO of OpenAI, has said that AI tools can find solutions to “some of humanity’s biggest challenges, like climate change and curing cancer”.
There’s also been plenty of talk about the largest tech companies (namely Google, Meta and Microsoft) and their race in the pursuit of Artificial General Intelligence (AGI). Many have compared this to an arms race, and in any race there’s a concern that competitors will cut corners; in this particular race, many fear the consequences could be disastrous. In this article, we’ll explore the possible consequences and the UK’s stance on the regulation of AI to help safeguard against them.
The UK embracing AI
AI is seen as central to the government’s ambition to make the UK a science and technology superpower by 2030 and Prime Minister Rishi Sunak again made this clear in his opening keynote at June’s London Tech Week: “If our goal is to make this country the best place in the world for tech, AI is surely one of the greatest opportunities for us”.
As discussed here, AI was also a headline feature in the government’s Spring Budget earlier this year. Within that Budget, and since, the following has been announced:
- £900m to establish a new AI Research Resource and an exascale supercomputer
- A £110m AI Tech Missions Fund
- A £1m ‘Manchester Prize’, awarded annually to researchers driving progress in critical areas of AI
- A 10-year Quantum Strategy outlining actions for a new quantum research and innovation programme (a field that goes hand-in-hand with AI), with the intention of investing £2.5bn over the next decade
- An AI regulatory sandbox
The risks of AI
Despite the many potential benefits of AI, there’s also growing concern about its risks, ranging from disinformation to evolving cybersecurity threats. Two of the most widely discussed risks are:
Misinformation & bias
Most AI tools are built on Large Language Models (LLMs), which are trained on large datasets, mostly drawn from publicly available content on the internet. It stands to reason that these tools can only be as good as the data they’re trained on; if that data isn’t carefully vetted, the tools will be prone to misinformation and even bias, as we saw with Microsoft’s infamous chatbot Tay, which was released on Twitter and quickly began to post discriminatory and offensive tweets.
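To make the “biased data in, biased model out” point concrete, here’s a minimal Python sketch. It is purely illustrative: the toy corpus, word-count “model” and `sentiment_score` function are invented for this example and bear no resemblance to how production LLMs are actually built, but the failure mode is the same in kind.

```python
# A minimal, hypothetical sketch of "biased data in, biased model out".
# The toy training data and scoring rule below are invented for illustration.

from collections import Counter

# Toy corpus: the sentiment labels are skewed against one group,
# mimicking unvetted data scraped from the internet.
training_data = [
    ("the engineer from london wrote great code", "positive"),
    ("the engineer from london shipped the fix", "positive"),
    ("the engineer from paris wrote buggy code", "negative"),
    ("the engineer from paris missed the deadline", "negative"),
]

# Count how often each word co-occurs with each label.
word_label_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training_data:
    word_label_counts[label].update(text.split())

def sentiment_score(text: str) -> int:
    """Naive score: positive minus negative word-label associations."""
    words = text.split()
    pos = sum(word_label_counts["positive"][w] for w in words)
    neg = sum(word_label_counts["negative"][w] for w in words)
    return pos - neg

# Two sentences identical apart from the city, yet the model scores them
# differently: it has learned the skew in its data, not anything real.
print(sentiment_score("the engineer from london wrote code"))  # positive (3)
print(sentiment_score("the engineer from paris wrote code"))   # negative (-1)
```

The model never sees an instruction to treat the two cities differently; the disparity comes entirely from the imbalance in the training examples, which is precisely why careful vetting of training data matters.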
AI alignment is a growing field within AI safety that aims to steer the technology towards our (i.e. human) goals, and it is therefore critical to ensuring that AI tools are safe, ethical and consistent with societal values. For example, OpenAI has stated: “Our research aims to make AGI aligned with human values and follow human intent”.
Protecting jobs & economic inequality
Sir Patrick Vallance, the UK’s former Government Chief Scientific Adviser, warned earlier this year that “there will be a big impact on jobs and that impact could be as big as the Industrial Revolution was”. This isn’t an uncommon view: Goldman Sachs recently predicted that roughly two-thirds of occupations could be partially automated by AI. More worryingly, IBM’s CEO Arvind Krishna predicted that 30% of non-customer-facing roles could be entirely replaced by AI and automation within the next five years, which equates to around 7,800 jobs at IBM alone. Job displacement, and the economic inequality it could drive, is therefore one of AI’s biggest risks.
Many have warned of other risks, such as privacy concerns, the concentration of power, and even existential threats. As this is a fast-evolving industry, you could also argue that because we don’t yet fully understand what AI could look like, or be used for, in the future, we don’t yet know all of the risks it will bring.
The calls for regulation
While talking up the potential benefits of AI, ranging from superbug-killing antibiotics to agricultural applications and potential cures for diseases, Rishi Sunak also recognised the potential dangers: “The possibilities are extraordinary. But we must, and we will, do it safely. I know people are concerned”. Keir Starmer, also speaking at London Tech Week, continued this theme, saying “we need to put ourselves into a position to take advantage of the benefits but guard against the risks” and calling for the UK to “fast forward” AI regulation.
Rishi Sunak also went on to say that “the very pioneers of AI are warning us about the ways these technologies could undermine our values and freedoms, through to the most extreme risks of all”. This could be a reference to multiple pioneers, including:
- Geoffrey Hinton, widely referred to as the ‘godfather of AI’, stood down from his role at Google so he could “freely speak out about the risks of AI”, which range from misuse by bad actors and rising unemployment to the existential risks that AGI could pose.
- Sam Altman, CEO of OpenAI, has repeatedly cautioned about the risks of AI and, in his testimony before the US Senate earlier this year, stated “OpenAI believes that regulation of AI is essential”.
- Google’s Chief Executive Sundar Pichai stated in the Financial Times that he believes “AI is too important not to regulate, and too important not to regulate well”.
- Thousands of signatories, including Elon Musk and Steve Wozniak, signed the open letter calling for a six-month pause on the development of AI systems more powerful than OpenAI’s GPT-4.
Despite these calls, it should be acknowledged that AI is extremely difficult to regulate. The technology is constantly evolving, making it hard to predict what it will look like tomorrow and, as a result, what regulation needs to look like to avoid becoming quickly obsolete. The fear for governments, and the pushback from AI companies, is that overregulation will stifle innovation and progress, including all the positive impacts AI could have, so a balance must be struck.
What is the UK’s stance on regulation?
Earlier this year, the UK’s stance on regulation appeared to be a very hands-off one, largely left to existing regulators and the industry itself through a “pro-innovation approach to AI regulation” (the name of the white paper initially published on 29th March 2023). Within this white paper, and unlike the EU, the UK government confirmed that it wasn’t looking to adopt new legislation or create a new regulator for AI. Instead, it would look to existing regulators, such as the ICO (Information Commissioner’s Office) and the CMA (Competition and Markets Authority), to “come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors”. This approach was criticised by many, including Keir Starmer, who commented that “we haven’t got an overarching framework”.
However, since the white paper (which has since been updated) was published, Rishi Sunak has shown signs that the UK’s light-touch approach to regulation needs to evolve. At London Tech Week, he stated that he wants “to make the UK not just the intellectual home but the geographical home of global AI safety regulation”. This was coupled with the announcement that the UK will host a global summit on AI safety this autumn which, according to a No. 10 spokesman, will “provide a platform for countries to work together on further developing a shared approach to mitigate these risks”.
Since then, £100m has also been announced for the UK’s AI Foundation Model Taskforce, with Ian Hogarth, co-author of the annual State of AI report, appointed to lead it. The Taskforce’s key focus will be “taking forward cutting-edge safety research in the run-up to the first global summit on AI”. And it isn’t just the summit coming to the UK: OpenAI has confirmed that its first international office will open in London. Sam Altman stated this is an opportunity to “drive innovation in AGI development policy” and that he’s excited to see “the contributions our London office will make towards building and deploying safe AI”.
Time will tell how AI’s potential, both good and bad, plays out and how regulation rolls out within the UK and globally, but it’s clear that the UK wants to play a leading role in both regulation and innovation, ambitions which may at times clash with each other. In an interview with the BBC on AI regulation, Sunak said: “I believe the UK is well-placed to lead and shape the conversation on this because we are very strong when it comes to AI”.
The next step
If you have any questions regarding this insight, please contact James Foster, or use the button below.
Sources:
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf
https://www.gov.uk/government/news/tech-entrepreneur-ian-hogarth-to-lead-uks-ai-foundation-model-taskforce