In recent months, AI chatbots have taken the world by storm. We’ve had a lot of fun asking ChatGPT questions, seeing how much of our work it can do, and even getting it to tell us jokes.
However, while most of us have been having fun, cyber criminals have been hard at work developing ways to use AI for far more sinister purposes.
They’ve discovered that AI can make their phishing scams more difficult to detect – and thus more successful.
Our advice has always been to use caution when opening emails. Read them thoroughly. Keep an eye out for spelling and grammatical errors. Before you click any links, make sure the email is the real deal.
And it’s still sound advice.
Ironically, phishing emails generated by a chatbot read as more human than ever before, putting you and your employees at greater risk of falling for a scam. That means we all need to be even more cautious.
Crooks are using artificial intelligence to create unique variations of the same phishing lure. They’re using it to eliminate spelling and grammar errors, as well as to create entire email threads to make the scam appear more credible.
Security tools that can detect AI-written messages are in development, but reliable detection is still a long way off.
That means you should be extra cautious when opening emails, especially those you didn’t expect. Check the address from which the message was sent, and double-check with the sender (not by replying to the email!) if you have any doubts.
Get in touch if you require additional information or team training on phishing scams.