
Cyber AI 2025


February 26, 2025

During the round table discussion with the minister and the president of Campus Cyber, several key points were addressed. The minister emphasized that France has the best talent, cutting-edge infrastructure, and more than 500 start-ups. She stressed the need to adopt an international and diplomatic vision, while avoiding divisions with countries in the Global South.

Opening


Societal, economic, and diplomatic issues were highlighted, as was the importance of talent, as demonstrated by DeepSeek. To remain competitive, it is crucial to be more optimistic and to promote our strengths: initiatives such as Mistral, H, and Kyutai are not sufficiently promoted.

Healthcare and artificial intelligence are areas in which we excel. AI also plays a crucial role in cybersecurity. The announcement of INESIA, a collaboration between ANSSI, INRIA, and the LNE, is a concrete example of this.

However, AI models can be targeted by attacks. A call for projects has been launched to raise awareness among companies about the associated risks.

Trustworthy AI

During the second round table, which brought together representatives from Orange, Microsoft, La Poste, and Cédric O (former Minister for Digital Affairs and founder of a start-up), several topics were discussed concerning trustworthy AI.

Ms. Bilbao (Microsoft) discussed watermarking and the use of AI by cyber attackers. Monitoring social media platforms and training to raise awareness about information and fake news were also discussed.

Mr. Heydemann of Orange Cyberdefense emphasized the need for trust to promote AI adoption and business competitiveness, while lowering barriers to entry. Mr. Poupard of Docaposte emphasized the importance of INESIA, stating that once fear has been dispelled, it becomes possible to distinguish what can be objectively assessed from the rest. INESIA is now in place, and wealthy countries will derive diplomatic value from it. Cyber crisis exercises were also mentioned.

Cédric O warned about the training environment and the priority given to hardware. Microsoft presented its inclusive vision, providing toolkits for AI and its mastery by businesses, while emphasizing the electricity and water requirements of a sustainable solution. Microsoft also insisted that there can be no trust without value sharing, that its approach is an open ecosystem, and that accessibility is a key point.

Introduction to Fortinet

During Fortinet's presentation, several points were raised. They reiterated the usual concerns about deepfakes and data leaks. The race to reduce MTTR (Mean Time to Resolution) was highlighted, with AI-generated playbooks cutting incident resolution times from hours to minutes.

Fortinet warned about compatibility between different generations of AI and the quality of the data used. They also emphasized that AI comes last in the "people-process-technology" chain, not first.

However, the presentation seemed to lack maturity in terms of productivity gains, with the exception of the creation of automatic dashboards and coding assistance for non-specialists. My overall impression was that Fortinet was primarily pushing their own solutions rather than the value of AI in cybersecurity.

Automation and AI for Defense

During the Cloudflare presentation, several important points were addressed. The context was restated, highlighting current challenges such as zero-day attacks, exploitation time versus time to patch, and DDoS attacks that keep breaking size records, with peaks of 5.6 Tbps (terabits per second).

Cloudflare presented its infrastructure: data centers in 330 cities across 120 countries, including 50 cities equipped with NVIDIA GPUs. This gives 95% of the world's population a latency of less than 50 ms, through a network of 13,000 partners.

AI is used as a request detector, determining whether requests should be blocked. Cloudflare, which protects 19% of websites, has the data necessary to effectively detect problems on the internet and provide high-quality threat intelligence capable of spotting weak signals. More than 40% of Fortune 1000 companies use their services.

Cloudflare uses more than 50 generative AI models, including a foundation model built from scratch in-house, enabling it to detect vulnerabilities in advance through effective threat intelligence (automatic WAF rules). For example, on January 17, Cloudflare pushed a rule for Ivanti Pulse even before the CVE was published.
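As an illustration of how such a "virtual patch" works, here is a minimal sketch of a WAF-style rule matcher; the rule ID and the exploit pattern are hypothetical, not Cloudflare's actual rule:

```python
import re

# Minimal sketch of a virtual patch: a WAF-style rule, pushed before a CVE is
# public, blocks requests matching a suspected exploit pattern.
# The rule id and regex below are illustrative, not Cloudflare's actual rule.
RULES = [
    {"id": "ivanti-pre-cve", "action": "block",
     "pattern": re.compile(r"/api/v1/totp/user-backup-code.*\.\./")},
]

def evaluate(path: str) -> str:
    """Return 'block' if any rule matches the request path, else 'allow'."""
    for rule in RULES:
        if rule["pattern"].search(path):
            return rule["action"]
    return "allow"
```

Deploying a rule like this network-wide is what lets a provider mitigate an exploit before the CVE is even assigned.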

Cloudflare's CAPTCHA uses AI to recognize humans and validate CAPTCHAs, providing a better user experience.

In conclusion, although the presentation seemed to be a sales brochure, the figures and the fact that they have internal generative AI (foundations developed in-house) are a real contribution to understanding cyber infrastructure issues.

Protecting Your Reputation Against LLM Agents

Akamai changed the subject of its presentation at the last minute, shifting from protecting LLMs (Large Language Models) to protecting against them, hence the new title.

They highlighted the risks of losing control of commercial property and unpaid contributions, as LLMs can reuse the information provided. In addition, resource consumption and the distinction between legitimate users and bots were discussed.

It was noted that blocking LLM agents outright could allow a competitor to take over the market, and could also lead to hallucinations and fake news about the company, since the models would no longer draw on first-party content.

Akamai proposed an AI-based strategy, consisting of either blocking agents or issuing alerts when they are detected.
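The block-or-alert choice described above can be sketched as a simple policy function; the user-agent tokens are examples of known LLM crawler signatures, and the return labels are invented for illustration (this is not Akamai's product):

```python
# Sketch of the two strategies: block detected LLM agents outright,
# or serve them and raise an alert for the security team.
KNOWN_AGENT_SIGNATURES = ("GPTBot", "ClaudeBot", "PerplexityBot")  # example tokens

def handle_request(user_agent: str, policy: str = "alert") -> str:
    is_llm_agent = any(sig in user_agent for sig in KNOWN_AGENT_SIGNATURES)
    if not is_llm_agent:
        return "serve"       # regular visitor
    if policy == "block":
        return "blocked"     # strategy 1: deny the agent
    return "serve+alert"     # strategy 2: serve the page but notify the SOC
```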

IAM and AI: Memority

Memority, a French company, presented its AI-augmented IAM (Identity and Access Management) solution, which is not yet in production. The IAM solution can be deployed on-premises or in the cloud and includes rights management, provisioning, SSO (Single Sign-On), and multi-factor authentication. Memority is aiming for SecNumCloud certification in 2027.

Deploying an IAM solution is complex, particularly due to the resource-intensive nature of rights certification, orphaned roles, and role management, which is difficult to maintain over time as the information system evolves. Authentication rules also add to this complexity.

Static models and IAMaaS (IAM as a Service) are complicated to manage. Memority offers anomaly detection both relative to a user's peers and dynamically over time, flagging abnormal connections (e.g., from abroad or at unusual times).

AI also makes it possible to discuss rules and rights without requiring technical skills, which improves productivity. It also helps with the maintenance of rules and orphan accounts. However, for orphan accounts, simple correlation with presence, IP, etc., remains basic.
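A minimal sketch of the dynamic anomaly checks mentioned (abnormal country or hour); the home country and working hours are assumptions for illustration, and Memority's actual models are of course richer:

```python
from dataclasses import dataclass

@dataclass
class Login:
    user: str
    country: str  # ISO code of the connection origin
    hour: int     # 0-23, local time

HOME_COUNTRY = "FR"        # assumption for the sketch
WORK_HOURS = range(7, 21)  # assumption for the sketch

def anomaly_flags(login: Login) -> list[str]:
    """Flag logins from abroad or at unusual times."""
    flags = []
    if login.country != HOME_COUNTRY:
        flags.append("foreign-country")
    if login.hour not in WORK_HOURS:
        flags.append("unusual-hour")
    return flags
```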

Memority envisions a future where AI-based Access Control (AIBAC) will replace Attribute-Based Access Control (ABAC), Organization-Based Access Control (ORBAC), and Role-Based Access Control (RBAC) for rights management. AI suggests and humans validate and enrich, making AIBAC compatible with older methods such as RBAC.
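The suggest-then-validate loop can be sketched as follows; the suggestion format and the `approver` callback are hypothetical stand-ins for the AI and the human reviewer:

```python
# Sketch of AIBAC's human-in-the-loop principle: AI-generated role suggestions
# only become effective grants after explicit human approval, which keeps the
# result expressible in classic RBAC terms.
def apply_suggestions(suggestions: list[dict], approver) -> list[str]:
    """`suggestions` items look like {'user': 'alice', 'role': 'finance-reader'};
    `approver` is a callable standing in for the human validation step."""
    granted = []
    for s in suggestions:
        if approver(s):  # human validates (or rejects) each AI suggestion
            granted.append(f"{s['user']}:{s['role']}")
    return granted
```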

Memority will define a dynamic risk score, enriched by statistics (a frequently mobile user represents more risk, for example), which will make life easier for the CISO (Chief Information Security Officer).

Dell and AI Factory

Dell presented its outlook on agentic AI in 2025 and multimodal AI. Hugging Face, with its million models, illustrates the algorithmic innovation in AI. Two major trends were identified: private AI for business use cases and the proliferation of AI use cases within companies.

Advances in this field are due to hardware, data, and algorithms. However, it is important to note that 90% of data is private, which highlights the centrality of data. Dell mentioned use cases such as digital twins of a city and supply chain optimization.

The issue of quantifying productivity gains remains complex, with no simple answer. The choice of model depends on specific use cases, and using a model directly in a business setting is not always appropriate.

To manage this data, Dell proposes the idea of a data lake. Problems that existed before AI, such as data management, are reappearing. The AI revolution is based primarily on data and how it is organized.

Understanding the location of data is crucial for trustworthy AI. Dell pointed out that the LLMs most likely to comply with the GDPR are French, such as Mistral.

In terms of infrastructure:

  • 90% of the data is private.
  • 50% of new data comes from the edge (e.g., connected supermarket checkouts).

Dell recommends bringing AI to the data rather than the other way around. Several types of threats exacerbated by AI have been identified:

  • Social engineering.
  • Deepfake CEO fraud.
  • Infection and poisoning.

Although this presentation was an introduction to AI, it highlighted the general need rather than selling a specific product, which was refreshing.

CIO of BNP

BNP is a partner of Mistral and uses AI on a large scale, with 5,000 Copilot licenses deployed. Access to data is a major challenge, and progress is being made step by step. With 180,000 employees, BNP has implemented a broad communication strategy, followed by communities and POCs.

The environment must be secure. The battle for talent is being fought in schools, with many new jobs related to AI.

What are the obstacles to AI? Business processes must be reviewed for team transformation and training. A CIO must anticipate interoperability, system integration, and monitoring. Innovation is advancing rapidly today, but the pace must remain manageable for training. BNP is working with Dell on these issues.

The infrastructure and enablers must allow these innovations to be integrated. BNP has 2,800 cybersecurity experts to monitor, identify, and resolve issues in real time, with the help of AI.

Europe's legitimacy rests on its best mathematicians and researchers, and we should be proud of that. In terms of resilience, BNP has a team dedicated to AI for security, with the necessary checks in place, including resilience. They have a backup solution and a kill switch for AI.

What new jobs? BNP has formed a working group for this purpose, focusing in particular on MLOps (although the exact term is not known). Currently, BNP does not have any autonomous agents in its information system.

Qevlar AI (collaboration with Nomios)

The Qevlar solution carries out level 1 and 2 investigations, reducing processing time from 30 minutes to 3 minutes and thus improving MTTR (Mean Time to Resolution). It also guarantees consistent quality in in-depth investigations and adds to the analysis information that is often omitted for lack of time.

SentinelOne: The Impact of AI on Security

SentinelOne addressed three main topics:

  • Deepfakes.
  • The new attack surfaces created by AI.
  • AI-powered defense.

Identified threats include attacks on the supply chain, ransomware (powered by AI), unpatched systems, and AI-enhanced cyberattacks (polymorphic malware and personalized phishing).

AI has transformed misinformation, particularly with deepfakes, although these do not always require AI (for example, truncating context). Deepfakes can also be used for educational purposes, such as ultra-realistic medical simulations.

Attacks using Large Language Models (LLMs) include payload optimization, reduced need for technical expertise in coding, vulnerability scanning, and increased social engineering.

SentinelOne uses AI for signature-less behavioral analysis, with continuous signal analysis via EDR (Endpoint Detection and Response). In the future, they plan to anticipate threats by analyzing internal logs and the dark web.

Autonomous detection is facilitated by AI and Purple AI, a conversational agent that assists with analysis and handles technical queries, and can also provide summaries. The next step is to move from co-pilot mode to autopilot mode, handling the simplest alerts.
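The co-pilot-to-autopilot shift amounts to letting the tool close the easiest alerts on its own; the thresholds and alert fields below are invented for illustration, not SentinelOne's logic:

```python
# Sketch of autopilot triage: low-severity, low-score alerts are closed
# automatically, everything else is escalated to a human analyst.
def triage(alerts: list[dict], autopilot: bool = False) -> dict:
    auto_closed, escalated = [], []
    for alert in alerts:  # e.g. {"id": 1, "severity": "low", "score": 0.1}
        if autopilot and alert["severity"] == "low" and alert["score"] < 0.3:
            auto_closed.append(alert["id"])
        else:
            escalated.append(alert["id"])
    return {"auto_closed": auto_closed, "escalated": escalated}
```

In co-pilot mode (`autopilot=False`) everything still reaches the analyst; flipping the flag is the behavioral change described in the talk.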

During the FIC, a "capture the flag" competition will be organized to compare a novice using Purple AI with an expert.

Mindflow: Agent and Hyper Automation

Mindflow is comparable to n8n (a no-code automation platform), where each step in a workflow is a plain-language description handed to an LLM. One example given is the automation of IP investigations. The advantage lies in the hundreds of connectors available.
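The steps-as-descriptions idea can be sketched like this; `call_llm` is a placeholder, not Mindflow's API, and the investigation steps are invented for the example:

```python
# Each workflow step is a plain-language instruction. The placeholder LLM call
# merely records each step, where a real connector would execute it.
def call_llm(description: str, context: dict) -> dict:
    return {**context, "log": context.get("log", []) + [description]}

IP_INVESTIGATION = [
    "Look up the WHOIS record for the IP",
    "Query threat-intelligence feeds for the IP's reputation",
    "If the IP is malicious, open a ticket and block it at the firewall",
]

def run_workflow(steps: list[str], context: dict) -> dict:
    for step in steps:
        context = call_llm(step, context)
    return context
```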

Rapid7: Cloud, ML, and AI to Protect

According to data from 2023, 82% of IT decision-makers plan to invest in AI for cybersecurity, including 67% for customer support (tickets). Rapid7 presented new threats already described in other workshops, such as deepfakes and the ease of coding a virus.

However, Rapid7 claimed that there are no new AI-specific attacks (a point on which all the other speakers and I disagree), even though concepts such as model poisoning and prompt injection do exist. AI is used in cloud detection and response. They have observed an increase in cloud bills and intrusions.

Nothing exciting about the workshop.

AI-SPM

Connectors validate the use of AI APIs provided by CSPs (cloud service providers), which is beneficial, if not essential, for AI consumed from CSPs. Both the Palo Alto and Wiz solutions provide this security. A gap will remain for custom models or open-source ones (Hugging Face); Palo Alto is to get back to me on this specific point.

Google's vision for Cyber AI (best presentation)

Google first demonstrated its AI capabilities, citing models such as Gemma and the Nobel Prize in Chemistry won thanks to AI. According to Google, the four pillars to consider are the model, infrastructure, data, and applications.

Google emphasized the need for an AI red team. Its Secure AI Framework (SAIF, https://saif.google) is a security framework tailored to AI, presented as an alternative to the NIST AI RMF and as more precise than ANSSI's guidance.

Google also mentioned the use of AI for security, particularly in Workspace and Google Play Protect. They process 100 million spam messages per minute using a 200-million-parameter AI model, though not a generative one.

IDECSI: Securing Copilot

IDECSI presented its solution for securing Copilot; it processes 2 billion files per day on Microsoft 365. AI is driving the use of shadow IT, and Copilot is starting to scale.

According to them, two aspects are essential for managing the risks of Microsoft 365:

  1. Risks related to data.
  2. The volume and obsolescence of data.

The problems identified include uncontrolled history, poor rights management, and the mistaken use of obsolete data. The data comes from the business lines, but no one is responsible for it, which poses a problem for the CISO.

IDECSI offers the Detox solution, which includes a part that does not require human intervention. The process takes place in three stages:

  1. Verification and audit.
  2. Corrections.
  3. Measurement of gains and repetition every six months.

This method offers a twofold benefit: making Copilot more relevant and validating data security. Responsibility is placed back with the business units, where the information is located.

The advantages of this method include:

  • Risk reduction.
  • Storage optimization.
  • Better control of data.
  • Centralized view of the tenant.
  • Increased user capacity for action.
  • Better understanding and points of attention for the IS.

Microsoft Security: Securing AI Adoption (second best presentation)

Microsoft Security presented the challenges associated with adopting AI. The first risk is information leakage, although model poisoning was not discussed. The presentation focused on PaaS and SaaS, with the majority of cases concerning PaaS (and tomorrow SaaS with Copilot).

For SaaS, the NIST pillars are: discovery, protection, and governance. Copilot ensures data security at rest and in transit, maintains ownership, protects copyrights, and is committed to complying with EU regulations.

Responsible AI must be inclusive, confidential, accountable, impartial, transparent, and reliable. Azure Content Safety is integrated into services by default, filtering prompt injections, hate speech, etc. It is therefore crucial to verify sources for trusted use.

It is important to identify the AI used (DSPM for AI, Data Security Posture Management for AI) and detect risky uses by population. DSPM for AI allows certain SaaS for documents to be blocked, as a means of preventing data loss.

Automatic tagging and classification elevation are also essential. For example, a single IBAN classified as C2 should become C3 when Copilot creates an Excel file aggregating all available IBANs, which poses complex problems.
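The aggregation problem can be sketched with a toy elevation rule; the C1-C3 labels follow the example above, but the thresholds are assumptions, not Microsoft's DSPM logic:

```python
# Toy classification-elevation rule: a document's sensitivity label rises with
# the number of IBANs it aggregates. Thresholds are illustrative assumptions.
def classify(iban_count: int) -> str:
    if iban_count == 0:
        return "C1"  # no sensitive identifier
    if iban_count == 1:
        return "C2"  # a single IBAN
    return "C3"      # aggregation elevates sensitivity
```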

In terms of governance, the software must alert business units to avoid overwhelming security.

Conclusion

The event was very commercial, with vendors promoting their solutions. The actual benefits remained unclear in most presentations, with little concrete feedback. Even so, the risks associated with AI have clearly been identified by the pure players in the field, and the profession is gradually taking shape.

 

Fabien CELLIER
AI Practice Leader