Cyber criminals using ChatGPT to make scams more human-like and believable

Cyber criminals are now using artificial intelligence to make their scams even more realistic and believable, according to the latest global research from leading cyber safety brand Norton.

ChatGPT has people around the world buzzing with its ability to produce fluent and coherent passages.

Criminals are now harnessing that same ability to generate human-like text to craft emails and social media posts that are more convincing and more likely to fool unsuspecting users.

“While the introduction of large language models like ChatGPT is exciting, it’s also important to note how cybercriminals can benefit and use it to conduct various nefarious activities,” said Mark Gorrie, Asia Pacific Managing Director at Gen.

“We’re already seeing ChatGPT being used effectively by bad actors to create malware and other threats quickly and very easily.

“Unfortunately, it’s becoming more difficult than ever for people to spot scams on their own, which is why comprehensive Cyber Safety solutions that look at all aspects of our digital lives are needed, from our mobile devices and online identity to the wellbeing of those around us. Being cyber vigilant is integral to our digital lives.”

Norton’s researchers found that, in addition to powering more believable phishing campaigns, ChatGPT is also being used to create deepfake chatbots that impersonate humans and legitimate sources such as banks and government entities, manipulating victims into handing over personal information and money.

To stay safe from these new threats, Norton experts recommend:

– Avoiding chatbots that don’t appear on a company’s website or app, and being cautious about providing any personal information to someone you’re chatting with online.

– Thinking before you click on links sent via unsolicited phone calls, emails or messages.

– Updating your security solution and making sure it has a full set of security layers that go beyond known malware recognition, such as behavior detection and blocking.

Norton’s latest report shows it blocked more than 3.5 billion threats in 2022, including:

– 90.9 million phishing attempts

– 260.4 million file threats

– 1.6 million mobile threats

– 274 thousand ransomware attacks

– over 3 billion trackers and fingerprinting scripts blocked by Norton AntiTrack

In Australia, in the last quarter of 2022 alone, Norton blocked more than 28 million threats, or around 300,000 threats per day, including:

– 960,000 phishing attempts

– 1.3 million file threats

– 24,000 mobile threats