20 August 2024

The ChatGPT Revolution: A new playground for fraudsters?

For over a year now, artificial intelligence (AI) has been at the center of every conversation. With the rise of ChatGPT and generative AI, it has become accessible to the general public: we can all now generate voices, texts and images of astonishing quality and consistency, for better… or for worse. This article explores how fraudsters have seized on generative AI.

ChatGPT: the new Swiss army knife for fraudsters?
As fraud continues to rise, ChatGPT opens the door to new forms of fraud, not least by making it easier to craft sophisticated phishing emails.

Phishing is not a new practice. We have all encountered this well-known fraud technique, which aims to deceive us through fraudulent communications in order to extract sensitive information or make us click on malicious links. Whether in a personal or professional context (think of CEO fraud, the so-called "president scam"), we can often still spot these fraudulent emails thanks to their lack of context or their spelling mistakes.

Unfortunately, the advent of ChatGPT has the potential to transform this practice, making it both more accessible and more formidable. So much so that the UK's National Cyber Security Centre (NCSC) recently issued a warning:

“By 2025, generative AI and LLM (Large Language Models) will make it difficult for everyone, regardless of their level of understanding of cybersecurity, to assess whether an email or password reset request is authentic, or to identify phishing, spoofing or social engineering attempts.”

Automated open-source reconnaissance
Firstly, fraudsters can now use generative AI tools to efficiently collect public information about targeted individuals. Social networks are veritable gold mines of personal and professional information that can then be exploited.

Once this information has been collected, fraudsters increase their chances of success by asking ChatGPT to adapt its tone and wording accordingly. For example, by analyzing a user's public interactions on social media, an AI model can generate a message that not only appears to come from a legitimate source but is also peppered with references and nuances that reinforce its credibility.

What recourse is still possible?
Faced with this growing threat, businesses and users must take a proactive stance against phishing. Although some tools can assess, with varying degrees of accuracy, whether a text was generated by AI (Copyleaks, for example), it is becoming increasingly difficult to distinguish human writing from the output of LLMs. To date, the best defense remains awareness and ongoing training of employees and citizens about these new, AI-amplified forms of fraud.
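To give a concrete idea of how such detectors work, here is a minimal sketch of one classic heuristic: scoring a text's perplexity under a small reference language model, since machine-generated text tends to be more statistically predictable than human prose. The choice of GPT-2 and the threshold below are illustrative assumptions for this sketch, not how Copyleaks or any commercial detector actually operates; real products combine far richer signals.

    # Minimal sketch of a perplexity-based AI-text heuristic.
    # Assumption: lower perplexity under a reference model (GPT-2 here)
    # suggests more "model-like" text. Threshold is illustrative only.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under GPT-2 (lower = more predictable)."""
        enc = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=512)
        with torch.no_grad():
            # Passing input_ids as labels makes the model return the
            # mean cross-entropy loss over the sequence.
            out = model(enc.input_ids, labels=enc.input_ids)
        return torch.exp(out.loss).item()

    SUSPICION_THRESHOLD = 40.0  # hypothetical cut-off, needs tuning

    def looks_machine_generated(text: str) -> bool:
        return perplexity(text) < SUSPICION_THRESHOLD

    if __name__ == "__main__":
        sample = "Dear customer, your account requires immediate verification."
        score = perplexity(sample)
        verdict = "suspicious" if looks_machine_generated(sample) else "likely human"
        print(f"perplexity={score:.1f} -> {verdict}")

In practice this kind of signal is fragile, which is precisely the article's point: paraphrasing or a different generation model easily shifts the score, so automated detection can complement, but never replace, human vigilance and training.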

ChatGPT, like artificial intelligence in general, is a tool: one that can serve society for virtuous or malicious ends. To navigate this new era, businesses, regulators and society at large must work together to ensure that advances in AI lead to a future where innovation, privacy, ethics and security go hand in hand, protecting individuals while fostering technological progress.