Boss of AI firm’s ‘worst fears’ are more worrying than creepy Senate party trick

Normally when representatives of tech companies appear before the US Senate, they tend to rail against the prospect of regulation, and resist the suggestion that their technology is doing harm.

And that’s what made this committee hearing on AI a rare thing.

Sam Altman, CEO of OpenAI – the company that created ChatGPT and GPT-4, one of the world’s largest and most powerful language AIs – admitted on Tuesday: “My worst fears are that we… the industry… cause significant harm to the world.”

He went on to say that “regulatory intervention by government will be critical to mitigate the risks of increasingly powerful models”.

This was, of course, welcome to the ears of worried US politicians.

The hearing on AI began with a pre-recorded statement by Democratic Senator Richard Blumenthal speaking of the technology’s potential benefits but also its grave risks.

But it wasn’t him speaking – it was an AI trained on recordings of his speeches, reading a statement generated by GPT-4.

Another of those creepy party tricks AI is increasingly making us familiar with.

Video: AI speech used to open Senate hearing

Senators were worried – not just about the safety of individuals at the mercy of AI-generated advertising, misinformation, or outright fraud – but for democracy itself.

What could an AI, trained to carefully sway the political views of targeted groups of voters, do to an election?

Mr Altman of OpenAI said this was one of his biggest concerns, too.

In fact, he agreed with nearly all the fears expressed by senators.

His only point of difference was that he was convinced the benefits would outweigh any risks.

Image: Senator Richard Blumenthal and Sam Altman. Pic: AP

An unlikely inspiration for controlling AI

Well, if they’re all in agreement, how do you regulate AI?

How, in fact, do you write laws to constrain a technology even its creators don’t fully understand yet?

It’s a question the EU is wrestling with right now, as it looks at a sliding scale of regulation based on the risk of the settings in which an AI is used.

Healthcare and banking would be high risk; creative industries, lower.

Today, we got an interesting insight into how the US might do it: food labelling.

Senator Blumenthal asked whether AI models of the future – whatever their purpose – should be examined by independent testing labs and labelled according to their nutritional content.

Image: Senator Richard Blumenthal. Pic: AP

The nutrition, in this case, is the data the models are fed.

Is it a junk diet of all the information on the internet – the kind GPT-4 and Google’s Bard have been trained on?

Or is it high-quality data from a healthcare system or government statistics?

And how reliable are the outputs of the AI models that have been fed that data, even if it’s organic and free range?

Image: OpenAI logo. Pic: Reuters

Looming question for trust in AI

Mr Altman said he agreed with the senator’s idea and looked to a future when there is sufficient transparency for the public and regulators to know what’s inside an AI.

But herein lies the contradiction in Mr Altman’s evidence. And the looming question when it comes to AI regulation.

While he shared what are undoubtedly his deeply held beliefs, the way his AI and others are being deployed right now doesn’t reflect them.

OpenAI has a multibillion-dollar deal with Microsoft, which has embedded GPT-4 in its search engine Bing to rival Google’s Bard.

We know little about how these AIs manage their junk food diet or how trustworthy their regurgitations are.

Would representatives from these companies have taken a different stance on the issue of regulation if they had been sitting before the committee?

At the moment, other big tech companies are resisting attempts to regulate their social media products.

Their main defence, particularly in the US, is the First Amendment’s protection of free speech.

An interesting question for a US constitutional expert: do AIs have a right to freedom of expression?

If not, will the regulation many of the creators of AI say they want to see actually be easier to implement?

