In cybersecurity, the emergence of generative AI has changed the game on both the offensive and defensive fronts. Companies now find themselves on the front line, facing increasingly numerous and sophisticated cyberattacks. Yet the primary threat they face comes from within: the human factor.
Cyberattacks are no longer the sole preserve of professional cybercriminals. Tools such as ChatGPT now make it easy for anyone to run phishing campaigns. As a result, digital attacks have increased by 40% in five years[1], and nearly one in two companies (47%) suffered at least one successful cyberattack in 2024[2].
Attack messages, whether sent by email or via social media, are becoming more sophisticated thanks to generative AI, and therefore harder to identify. The overall quality of attacks is improving, and targeted attacks in particular are becoming more complex and realistic.
The human factor, the primary vector of cyber risk
In this context, companies face a dual challenge: on the one hand, they are confronted with unprecedented risks; on the other, they must manage several types of risk simultaneously. In 2024, nearly nine out of ten organizations (89%) faced at least two types of risk, and among these, human risks are a source of concern for 77% of executives[3].
The reason: 68% of breaches today still involve a human element[4], whether an error or a social engineering attack. Given this, technology alone is no longer enough to counter attacks; user awareness is now essential to the fight. But this requires a paradigm shift, because most companies are still struggling to adopt an effective approach.
Above all, companies must ask themselves whether their awareness campaigns are actually useful and whether they reflect the current threat landscape. They must give meaning to their training and tie it to a measurable return on investment so that it has a positive impact on operations.
How? By precisely quantifying the risk and defining measurable objectives in order to implement appropriate and targeted protective measures.
Toward intelligent human risk management
Determining the level of risk requires evaluating a number of indicators: Is the user under attack? Do they receive a lot of threats? If so, how do they behave when faced with a fraudulent email? Do their access rights to the company's systems make them a prime target?
The idea is to map user risk so that the right people receive the right guidance for the specific threats they face. This is where artificial intelligence comes into its own, playing a dual role: detecting threats upstream and analyzing user behavior in order to personalize protection strategies.
Refined behavioral analysis
One of the major advantages of AI is its ability to establish individualized risk profiles. Not all companies face the same threats, and not all users have the same level of maturity or exposure. AI makes it possible to analyze the actions of each employee and assign them a risk level based on their behavior and the threats they face. The result is tailor-made cybersecurity that allows efforts to be focused on the most exposed employees rather than the entire company.
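To make this concrete, here is a minimal sketch in Python of what a per-user risk profile might look like. The signals, weights, and thresholds are illustrative assumptions, not the method of any particular product:

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    threats_received: int    # malicious messages aimed at this user over the period
    risky_actions: int       # clicks on real or simulated phishing lures
    privileged_access: bool  # admin rights, finance systems, sensitive data

def risk_score(s: UserSignals) -> float:
    """Combine exposure, behavior, and privilege into a 0-100 score."""
    exposure = min(s.threats_received / 50, 1.0)  # cap attack volume at 50 per period
    behavior = min(s.risky_actions / 5, 1.0)      # cap risky clicks at 5 per period
    privilege = 1.0 if s.privileged_access else 0.3
    # Illustrative weights: behavior matters most, then exposure, then privilege.
    return 100 * (0.45 * behavior + 0.35 * exposure + 0.20 * privilege)

def risk_level(score: float) -> str:
    return "high" if score >= 70 else "medium" if score >= 40 else "low"
```

A score like this is what lets a security team focus its effort: only users in the "high" band receive the heaviest training and controls.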
Proactive threat detection
AI also plays a key role in anticipating cyberattacks. By analyzing weak signals, it can detect anomalies that would go unnoticed by conventional monitoring and alert security teams to the cases that should be prioritized. Thanks to AI, it is possible to identify malicious emails that contain no suspicious links or attachments, but whose language has been crafted to deceive users.
How does it work? Using natural language processing (NLP), AI analyzes not only the words but also the intent of the message: it is the combination of words and the context they appear in that signals urgency and drives the decision to flag or reject the message.
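As a deliberately naive illustration of that principle, the Python sketch below scores a message on the co-occurrence of urgency cues and sensitive requests. A production system would use a trained language model rather than keyword lists; the cue lists and threshold here are assumptions for illustration only:

```python
import re

# Toy intent scorer: urgency cues plus a sensitive request raise the score.
URGENCY_CUES = [r"\burgent\b", r"\bimmediately\b", r"\bwithin 24 hours\b", r"\bfinal notice\b"]
REQUEST_CUES = [r"\bwire transfer\b", r"\bgift cards?\b",
                r"\bverify your (password|account)\b", r"\bconfidential\b"]

def intent_score(body: str) -> int:
    text = body.lower()
    urgency = sum(bool(re.search(p, text)) for p in URGENCY_CUES)
    request = sum(bool(re.search(p, text)) for p in REQUEST_CUES)
    # Multiply rather than add: either signal alone is weak evidence.
    return urgency * request

def should_flag(body: str, threshold: int = 1) -> bool:
    return intent_score(body) >= threshold
```

The point of the design is the multiplication: an urgent tone alone is common in legitimate mail, and a sensitive request alone may be routine; it is their combination that is suspicious.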
A gradual and intelligent response
Another challenge inherent in human risk management is knowing how to respond to risky behavior without harming employee productivity. The key is a graduated approach. If an employee repeatedly clicks on suspicious links, rather than punishing them immediately, you can start by limiting their access to certain sites or services, sending them personalized reminders, or directing them to appropriate training. The idea is to train the users who need it, at the right time, supporting them in both their professional and personal lives and involving the security teams as well as the wider business.
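A graduated policy of this kind can be expressed as a simple escalation ladder. The thresholds and measures below are hypothetical examples, not a prescribed policy:

```python
# Hypothetical escalation ladder: repeated risky actions move the user up one
# step at a time instead of triggering an immediate sanction.
ESCALATION = [
    (1, "send a personalized reminder about phishing red flags"),
    (3, "enroll the user in a short targeted training module"),
    (5, "restrict access to high-risk external sites pending review"),
]

def respond(risky_clicks: int) -> str:
    """Return the strongest measure whose threshold has been reached."""
    action = "no action"
    for threshold, measure in ESCALATION:
        if risky_clicks >= threshold:
            action = measure
    return action
```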
Where to begin?
Knowing where to start is often the hardest part. Two questions help. First: does the IT director or CISO know who is most frequently targeted? Second: how many of the most recent attacks were due to human error?
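Where the incident data exists, both questions can be answered by simply aggregating the log. The record format below is a hypothetical assumption for illustration:

```python
from collections import Counter

# Hypothetical incident log; the field names are assumptions, not a real schema.
incidents = [
    {"target": "alice@corp.example", "human_error": True},
    {"target": "bob@corp.example",   "human_error": True},
    {"target": "alice@corp.example", "human_error": False},
]

# Question 1: who is most frequently targeted?
most_targeted = Counter(i["target"] for i in incidents).most_common(3)
print("Most targeted:", most_targeted)

# Question 2: what share of recent attacks involved human error?
share = sum(i["human_error"] for i in incidents) / len(incidents)
print(f"Human error involved in {share:.0%} of incidents")
```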
Giving meaning to human risk begins with an assessment to determine how many incidents involving users have occurred in production. The second step is to ask whether the risk is uniform across the organization. Finally, it is important to analyze the impact that training and simulation campaigns have had on operations. Companies that cannot answer these three questions need to evolve their awareness-raising solution. By enabling personalized human risk management, AI is becoming an essential lever for strengthening companies' resilience against the cyber threats of tomorrow.
[1] Annual Report on Cybercrime 2024, Department of Homeland Security, 2024
[2] Business cybersecurity barometer, OpinionWay for CESIN, January 2025
[3] 8th edition of the SME–mid-market risk management barometer, QBE–OpinionWay, February 2025
[4] Data Breach Investigations Report, Verizon, 2024
