British projects given tens of millions in funding to develop ‘safe and trustworthy’ AI

British projects dedicated to developing safe and trustworthy artificial intelligence have been given tens of millions of pounds in public funding.

The investment by UK Research and Innovation (UKRI), the national funding agency for science and technology, was revealed at London Tech Week.

Most of the money has been awarded to Responsible AI, a group led by the University of Southampton, to ensure future models benefit rather than threaten society.

Project lead Professor Gopal Ramchurn warned that just because AI has been tested or validated by its creators in “well-defined settings”, it doesn’t mean it should be trusted by the public, government and industry.

“Trustworthy AI tends to be looked at from a very technical perspective,” he said.

While it could be deemed reliable within the “particular closed environments it has been tested in”, new problems may emerge once it is released to the public – such as how it handles users’ personal data.


Responsible AI will get £31m of the funding, with the rest spread out across dozens of other initiatives.

Some £2m will go to 42 projects to carry out feasibility studies on how businesses could speed up AI adoption.

Successful ones will receive a share of an additional £19m to develop these solutions further.

A further £13m will be used to fund 13 projects dedicated to helping the UK reach its net-zero targets.


It comes after Rishi Sunak appeared at London Tech Week to speak of the benefits AI could bring to British businesses and the public sector.

The prime minister said while the tech needed to be regulated, it could prove transformative across the economy, with “every job essentially having AI as the co-pilot”.

Education Secretary Gillian Keegan also used the event to call on technology and education experts, together with business leaders, to come forward with proposals on how AI can be used “in a safe and secure way”.

Her appearance followed Labour leader Sir Keir Starmer, who warned many jobs could be replaced by AI, and called for a “much more informed discussion” about its impact on British industry.


The government has announced Britain will host a global conference in the autumn to debate the regulatory “guardrails” that will mitigate future risks from AI, which Mr Sunak compared to the COP climate summits.

The EU is already moving ahead with its own guardrails, with the European Parliament approving proposed AI regulation on Wednesday.

The draft legislation seeks to set a global standard for the technology, which is used in everything from generative models like ChatGPT to driverless cars.

Companies that do not comply with the rules in the proposed AI Act could face fines of up to €30m (£25m) or 6% of global annual turnover, whichever is higher. That would put Microsoft, a large financial backer of OpenAI’s ChatGPT, at risk of a fine exceeding $10bn (£7.9bn).

The proposals will now be debated by EU member states before being signed into law.