Artificial intelligence ‘doesn’t have capability to take over’, Microsoft boss says

The head of artificial intelligence at Microsoft says the company will continue to accelerate its work on large AI models despite concerns from some in the field that the technology is growing too fast and is too unpredictable to be safe.

“The potential for this technology to really drive human productivity… to bring economic growth across the globe, is just so powerful, that we’d be foolish to set that aside,” Eric Boyd, corporate vice president of Microsoft AI Platforms, told Sky News.

In 2019, the US software giant invested $1bn in artificial intelligence start-up OpenAI.

Microsoft’s cash and computing power – made available via its Azure cloud computing platform – allowed OpenAI to create GPT-4, the most powerful “large language model” the world had ever seen. It was launched to the public as the chatbot ChatGPT.

Microsoft was quick to build GPT-4 and its conversational abilities into its Bing search engine. It is also putting the technology, as a feature called Copilot – effectively a virtual digital assistant – into a number of its existing software products, such as its word processing and spreadsheet tools.

Its vision of AI isn’t about planetary takeover, explains Boyd, but about changing the relationship between people and computers.

“It will just really redefine the interfaces that we’re used to, the way that you’re used to talking to a machine – the keyboard and your mouse and all of that. I think it becomes much more language-based.”

But what of claims by leaders in the field of AI that large “generative AI” models (ones that can create text, images or other output) are developing too fast and aren’t fully understood?

“Experts in the field have gotten there based on their present credentials,” said Boyd.

“And, of course, we’re going to listen and give serious consideration to all the feedback that they have. But I think as you look at what these models do, what they’re capable of, you know, those concerns seem pretty far away from what we’re actually working on.”

The current capabilities of language models like ChatGPT are being overstated, Boyd argues.

“People talk about how the AI takes over, but it doesn’t have the capability to take over. These are models that produce text as output,” he said.
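Boyd’s point is concrete at the technical level: a large language model is called with text and returns text. As a rough sketch only – this is not Microsoft’s code, and the endpoint, API key and deployment name below are placeholders – a call to a GPT-4 deployment on Azure looks something like this:

```python
# Minimal sketch of querying a GPT-4 deployment via the Azure OpenAI service.
# Endpoint, key and deployment name are placeholders, not real values.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4",  # the name of your Azure deployment (placeholder)
    messages=[{"role": "user", "content": "Draft a two-line meeting summary."}],
)

# Whatever the prompt, the model's entire output is a string of text.
print(response.choices[0].message.content)
```

Any real-world effect depends entirely on what the surrounding software chooses to do with that returned string.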

Boyd said he is more worried about the potential for AI to exacerbate existing societal problems.

“How do we make sure that these models are going to be working safely for the use cases that they’re in?” he mused.

“How do we work to minimise the biases that are inherent in society and those showing up in the models?”

But some of the biggest near-term concerns about AI aren’t about the safety of the technology itself. Rather, they are about how much damage the technology could do if applied to the wrong tasks – whether that’s diagnosing cancer or managing air traffic control – or if it is deliberately misused by rogue actors.

Some of those decisions are up to technology companies themselves, Boyd admits. He points to Microsoft’s decision not to sell facial recognition software it developed to law enforcement agencies. But the rest is for regulators.

“I think as a society we’re going to have to think through what are the places that this technology is appropriate and what are the places where we have concerns about its use. But we definitely think there’s a place for regulation in this industry.”

Its partnership with OpenAI has given Microsoft a major boost in the race to market AI breakthroughs. But competition is intense. Google has a world-leading AI research division working hard to bring AI products to consumers too.

Big Tech shows no sign of slowing down the race to develop bigger and better AI. That means society and our regulators will have to speed up their thinking on what safe AI looks like.

