Rishi Sunak certainly takes AI seriously – but he’s walking a treacherous technology tightrope

You can tell just by the way he talks about it – Rishi Sunak is very excited about the benefits of AI.

Today he described the transformation it will bring as “as far-reaching as the Industrial Revolution”.


But at the lectern emblazoned with “long-term decisions for a brighter future”, the PM wanted to confront the risks.

The government, he said, had taken the “highly unusual step” of publishing its analysis on the risks of AI – including those to national security.

Three reports were prepared by a team of expert advisers – drafted in from tech start-ups, Whitehall and academia – to lay the groundwork for the AI Safety Summit next week at Bletchley Park.

It’s sobering reading.


“A smorgasbord of doom,” one political journalist quipped.

Biggest threats laid bare

The reports detail the risks AI could pose from deliberate misuse, or purely by accident.

They range from enabling cyber criminals and the creation of more dangerous viruses to crashing the economy – and, in the most extreme case, the “loss of control” of some future artificial general intelligence that is more capable than humans at multiple tasks and could ultimately destroy society.

The prime minister announced a new AI Safety Institute that will be dedicated to assessing those risks from the most powerful “frontier AI” models currently available – like OpenAI’s ChatGPT, Meta’s Llama, or Google’s PaLM 2 – and the ones expected to imminently supersede them.

But how tough is the UK prepared to get on tech companies over safety?

I asked Mr Sunak if he would compel tech companies to hand over the code to their models as well as the data used to train them.

“In an ideal world, what you’re saying is right,” he said. “Those are all the types of conversations we need to have with the companies.”

Not exactly a yes.


Risk v reward at heart of AI dilemma

Mr Sunak chose his words carefully because his summit is as much about encouraging big tech companies to want to do business in the UK as it is about safety.

It aims to develop a regulatory environment that doesn’t discourage investment or innovation.

There’s another reason, too.

When dealing with “potential risks” in an exponentially growing area of technology, it’s hard to know what it is you’re actually regulating.

Then there’s the fact that big tech is multinational, and drawing up one set of rules here might be meaningless if the same rules don’t apply elsewhere.

The best many are hoping for from the summit is that it’s a profile-raising exercise. The beginning of a conversation.

But some in the world of AI say certain red lines could be drawn now.

A ban, for example, on the pursuit of artificial general intelligence (AGI) models that are capable of multiple tasks and are superior to humans in each of them.

Rules could be drawn up now to prevent, in principle, a future AI model that can control the majority of the world’s industrial robots from speaking to the AI that dominates our office software or drives our cars.

The tech firms have made no secret of wanting to develop AGI. They have also said they want to ensure their models are safe and are willing to be regulated.


But next week Rishi Sunak will walk a technology tightrope – encouraging the development of the best AI has to offer (preferably in the UK), without throttling that potential by looking like someone who wants to regulate too hard.

We might come out of the AI Safety Summit with a better idea of what the greatest threats are – and the options we have for avoiding them to ensure the genuine benefits of AI are realised.

But if you’re expecting to see any long-term decisions for that brighter future, don’t hold your breath.

