A national laboratory for artificial intelligence research should be created to help reduce the UK's heavy reliance on Google DeepMind, a new report by Sir Tony Blair and Lord Hague of Richmond recommends.
Much of the UK’s academic research on AI is currently led by the Alan Turing Institute, the country’s national institute for data science and AI, which is backed by more than a dozen leading research universities and funded by the Engineering and Physical Sciences Research Council.
However, in a wide-ranging report on how Britain can improve its AI performance, published on 13 June, the two former political leaders argue that the “Alan Turing Institute has demonstrably not kept the UK at the cutting edge of international AI developments” and its “functions should be wound down and a new endeavour, Sentinel, should be funded”.
Sentinel, the proposed national AI laboratory, would “test, understand and control safe AI, collaborating with the private sector and complementing its work”, explain Sir Tony and Lord Hague.
The centre should aim to become an international hub for AI research and would “loosely resemble a version of Cern for AI and would aim to become the ‘brain’ of an international regulator of AI, which would operate similarly to how the International Atomic Energy Agency works to ensure the safe and peaceful use of nuclear energy”, their report adds.
Drawing on "best practice in the top AI labs", Sentinel's five-year aim would be to "form the international regulatory function across the AI ecosystem, in preparation for the proliferation of very capable models", the study continues. It adds that the UK's potential for AI expertise – largely a result of the presence of Google DeepMind's London headquarters – could make it the "home to a high-value AI assurance market".
While the UK has strong expertise in AI, says the report, it is “overly dependent on a single US-owned and funded entity, Google DeepMind”. DeepMind was responsible for the vast majority of top papers by UK-based researchers on AI published in 2020, a recent analysis found, with universities in China, Canada, the US and Switzerland producing far more highly cited papers than did institutions in the UK.
The centre should be “sufficiently resourced to operate at the cutting edge of AI, while having the freedom to partner with commercial actors flexibly and quickly”, adds the report, which notes that DeepMind’s budget was about £1 billion a year prior to its merger with Google.
“It is better not to fund something at all than fund it in a way in which it cannot be globally relevant,” it notes.
The study also recommends that Sentinel be given "similar freedom of action" to that of the Advanced Research and Invention Agency (Aria) and the same ability to recruit top technical talent, and that it should be led by a technical expert to ensure it does not become a "business as usual lab" that is likely to fail.
However, Sir Adrian Smith, director of the Alan Turing Institute, rejected the criticisms levelled at his organisation.
“It’s clear the Tony Blair Institute for Global Change has not sought a wide network of opinions and perspectives in reaching their recommendations, in particular the ill-informed comments relating to the Turing,” said Sir Adrian, who is standing down as the Turing’s director in September at the end of his five-year term.
“The Alan Turing Institute has a crucial role in applying data science and AI to key national and global challenges, as evidenced by our success working with partners in industry and the public and third sectors, and convening expertise across the UK and internationally,” added Sir Adrian, who is also president of the Royal Society.
“More than ever, now, as new and influential AI technology continues to be created, the government needs a trusted adviser to navigate the risks and opportunities of AI,” he continued, adding that “the Turing is uniquely placed to provide informed, technical advice from a wide range of national experts doing extensive work in this area. The government must now provide significant investment for AI research to back up this important work.”
More broadly, the study calls for the creation of AI fellowships and retraining opportunities, tougher regulation of AI technology, including the mandatory labelling of deepfake imagery, and the strengthening of the Foundation Model Taskforce, which reports directly to the prime minister on how AI capabilities could be strengthened.
“If the country does not up its game quickly, there is a risk of never catching up, as well as losing the chance and ability to sculpt both the technology’s future and its governance,” warns the report.