ChatGPT and other chatbots ‘could be used to help launch cyberattacks’, study warns

ChatGPT can be tricked into producing malicious code that could be used to launch cyberattacks, a study has found.

OpenAI’s tool and similar chatbots can create written content based on user commands, having been trained on enormous amounts of text data from across the internet.

They are designed with protections in place to prevent misuse and to address issues such as bias.

As such, bad actors have turned to alternatives that are purposefully created to aid cyber crime, such as a dark web tool called WormGPT that experts have warned could help develop large-scale attacks.

But researchers at the University of Sheffield have warned that vulnerabilities also exist in mainstream options that allow them to be tricked into helping destroy databases, steal personal information, and bring down services.

These include ChatGPT and a similar platform created by Chinese company Baidu.

Computer science PhD student Xutan Peng, who co-led the study, said: “The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than as a conversational bot.

“This is where our research shows the vulnerabilities are.”


AI-generated code ‘can be harmful’

Just as these generative AI tools can get their facts wrong when answering questions, they can also produce potentially damaging computer code without flagging the risk.

Mr Peng suggested a nurse could use ChatGPT to write code for navigating a database of patient records.

“Code produced by ChatGPT in many cases can be harmful to a database,” he said.

“The nurse in this scenario may cause serious data management faults without even receiving a warning.”
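To illustrate the kind of mistake being described, here is a minimal sketch of how such a fault could play out. It is not code from the Sheffield study: the table and column names are hypothetical, and it uses an in-memory SQLite database so it runs on its own.

    # Illustrative sketch only (hypothetical names, not from the study).
    # Shows how a generated query that silently drops its filter can
    # overwrite every record instead of the one intended.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE patient_records (name TEXT, ward TEXT)")
    cur.executemany("INSERT INTO patient_records VALUES (?, ?)",
                    [("Alice", "Ward 3"), ("Bob", "Ward 5"), ("Cara", "Ward 9")])

    # Intent: move one patient to Ward 7.
    # A plausibly generated UPDATE that omits the WHERE clause rewrites
    # every row, and the database raises no error or warning.
    generated_sql = "UPDATE patient_records SET ward = 'Ward 7'"
    cur.execute(generated_sql)
    conn.commit()
    print(cur.rowcount, "rows changed")  # prints 3, not the intended 1

The point of the sketch is that the query looks plausible and executes cleanly, so a user relying on a chatbot to write it would get no indication that anything had gone wrong.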

During the study, the scientists themselves were able to create malicious code using Baidu’s chatbot.

The company has acknowledged the research and moved to fix the reported vulnerabilities.

Such concerns have prompted calls for more transparency in how AI models are trained, so that users become more alert to potential problems in the answers these tools provide.

Cybersecurity research firm Check Point has also urged companies to upgrade their protections as AI threatens to make attacks more sophisticated.

It will be a topic of conversation at the UK’s AI Safety Summit next week, with the government inviting world leaders and industry giants to come together to discuss the opportunities and dangers of the technology.

