21 August 2024


For over a year now, artificial intelligence (AI) has been at the center of every conversation. Thanks to the rise of ChatGPT and generative AI, it has become accessible to the general public. We can now all generate voices, texts, and images of astonishing quality and consistency, for better… or for worse. This article explores how fraudsters have seized on generative AI for their own ends.

ChatGPT: the new Swiss army knife for fraudsters?

As fraud continues to rise, ChatGPT opens the door to new forms of attack, notably by making it easier to craft sophisticated phishing emails.

Phishing is not a new practice. We have all encountered this well-known fraud technique, which aims to deceive us through communications in order to obtain sensitive information or make us click on malicious links. Whether in the personal or professional sphere (CEO fraud, for example), we have often been able to recognize these fraudulent emails by their lack of context or their spelling mistakes.
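The classic tells mentioned here, spelling mistakes and a lack of personal context, are exactly what attentive readers and simple filters have historically relied on. As a purely illustrative sketch (the phrase lists, scoring, and sample text below are invented for this example, not taken from any real mail filter), a naive heuristic might look like this:

```python
# Hypothetical, minimal phishing heuristic. Real mail filters are far more
# sophisticated; this only illustrates why classic tells (typos, generic
# greetings, suspicious links) used to be enough to flag many attempts,
# and why AI-polished text undermines them.

SUSPICIOUS_PHRASES = [
    "urgent actoin required",   # misspelled urgency, a classic tell
    "click here immediatly",
    "verify your acount",
]
GENERIC_GREETINGS = ["dear customer", "dear user"]  # no personal context

def phishing_score(email_text: str) -> int:
    """Count naive red flags in an email body; higher means more suspicious."""
    text = email_text.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    score += sum(greeting in text for greeting in GENERIC_GREETINGS)
    if "http://" in text:  # unencrypted link target
        score += 1
    return score

sample = "Dear customer, urgent actoin required http://evil.example"
print(phishing_score(sample))  # flags greeting, misspelled urgency, and link
```

A fluent, personalized email generated by an LLM would score zero on every one of these checks, which is precisely the shift the rest of this article describes.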

Unfortunately, the advent of ChatGPT has the potential to transform this practice, making it both more accessible and more formidable. So much so that the UK's National Cyber Security Centre (NCSC) recently issued a warning:

“By 2025, generative AI and LLMs (large language models) will make it difficult for everyone, regardless of their level of understanding of cybersecurity, to assess whether an email or password reset request is authentic, or to identify phishing, spoofing or social engineering attempts.”