The 2024 edition of the AWS Summit in Paris proved to be a turning point for technology experts, providing a stage to reveal how the integration of artificial intelligence (AI) in the cloud is transforming several industries. This event represented a unique opportunity to observe and evaluate new trends and challenges related to the adoption of these technologies. Here is an in-depth analysis of the main sessions and topics discussed on this opening day.
Opening Keynote: Visions and Perspectives for the Future
The opening plenary session, led by Julien Groues and Mai-Lan Tomsen Bukovec, highlighted the capacity of AWS solutions to capitalize on the power of generative AI. Through concrete examples from Air Liquide and the TF1 Group, they demonstrated how the AWS cloud stimulates their business by improving customer experience and optimizing operational processes. Arthur Mensch of Mistral AI discussed the importance of robust infrastructure to support large language models (LLMs), which are crucial for processing and analyzing vast amounts of data in real time.
Impact of AI on key sectors
Mai-Lan Tomsen Bukovec traced the significant progress made from the industrial revolution to our digital age, highlighting how AI is revolutionizing sectors such as healthcare and information technology. The examples of Netflix in video content recommendation and Moderna in applying AI to COVID-19 vaccine research are good illustrations of this interdisciplinarity, demonstrating how AI can accelerate the development of innovative solutions to global challenges.
Digital transformation at Air Liquide
Fabien Mangeant, from Air Liquide, recounted how, since 2018, digitization via data and AI has become a key component of the company's transformation strategy. Thanks to a data platform supported by AWS services, Air Liquide processes 3.5 billion data points every day from 600 plants and from its interactions with more than four million patients and customers. This case study illustrates the role of AI in optimizing the industrial value chain and continuously improving operational efficiency.
Tools and Infrastructure for Generative AI
This session also highlighted advances in the infrastructure needed to support generative AI. The partnership between AWS and NVIDIA on Project Ceiba, an AI supercomputer built for generative AI workloads, underscores the importance of a solid hardware foundation. In addition, Amazon Bedrock was presented as an essential platform for building with LLMs, offering integrated access to models from companies such as AI21 Labs and Anthropic to facilitate the deployment and management of AI applications.
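As an illustration of the kind of development workflow Bedrock enables (not the exact setup shown on stage), a minimal sketch of calling an Anthropic model through the AWS SDK for Python might look like this; the region and model ID are assumptions to adapt to whatever is enabled in your account.

```python
import json

import boto3

# Minimal sketch: invoking an Anthropic Claude model hosted on Amazon Bedrock.
# Region and model ID are assumptions; use the ones enabled in your account.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user",
         "content": "Summarize the benefits of managed LLM platforms in three bullet points."}
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body,
)

payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])
```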
Open Source and AI at Hugging Face
Thomas Wolf from Hugging Face discussed the importance of open source in the development and democratization of AI. With a community-driven approach, Hugging Face has succeeded in creating a library of half a million AI models, making the technology accessible to a wide range of users and stimulating innovation in various applications, including in fields as specialized as space exploration with the startup Hawking.
Stellantis' strategy for large-scale deployment of generative AI
Stellantis shone with an in-depth presentation on the large-scale deployment of generative AI. Led by Annabelle Gérard, head of the generative AI center of excellence at Stellantis, the presentation addressed the technical, ethical, and operational dimensions essential for the responsible and effective integration of generative AI in the automotive sector.
Strategic framework for the adoption of generative AI
Stellantis has unveiled an AI adoption model based on three key pillars: process, technology, and people. This model promotes a comprehensive approach that covers not only the technical aspects of AI, but also the ethical and social implications of its use, starting with three practical steps:
- Identification of users and impacts: Gain an in-depth understanding of end users and the potential impacts of AI on them.
- Risk assessment: Examine potential risks and disadvantages in relation to expected benefits.
- Failure management: Establish protocols to effectively handle AI errors and failures.
Pillars of ethical AI
The presentation highlighted the ethical principles that Stellantis incorporates into the development of its generative AI:
- Alignment with organizational values: Ensure that AI projects are consistent with the company's core values.
- Data privacy and security: Prioritize personal data protection and security across all operations.
- Fairness and transparency: Commit to developing fair and transparent systems.
- Training and awareness: Train employees in AI best practices and ethical implications.
- Diversity and inclusion: Promote diversity within development teams to prevent algorithmic bias.
Infrastructure and security
The implementation of generative AI requires a robust and secure infrastructure. Stellantis explained its approach to securing the development and deployment of AI:
- AI governance: Integrate threat modeling and compliance with standards.
- Monitoring and auditing: Implement continuous monitoring and periodic audits to ensure system integrity.
- Data protection: Classify data and deploy strategies to detect and mitigate bias.
Stellantis GenAI Platform
The Stellantis GenAI platform is designed to simplify access to and management of LLMs:
- LLM tooling: Provide tools that facilitate the integration and use of language models.
- AI gateway: A secure, permission-based gateway for accessing applications (a hypothetical sketch follows this list).
- Self-service access: Enable users to independently access generative AI-based tools.
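The presentation stayed at the architectural level, but a permission-based AI gateway can be pictured as a thin service that authenticates the caller, checks entitlements, and only then forwards the prompt to a model endpoint. The sketch below is a hypothetical illustration of that pattern; the entitlement table, region, and Bedrock model ID are invented for the example and are not Stellantis internals.

```python
import json

import boto3

# Hypothetical illustration of a permission-based gateway in front of an LLM.
# Entitlements, region, and model ID are assumptions, not Stellantis code.
ENTITLEMENTS = {"alice": {"genai:invoke"}, "bob": set()}

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def check_entitlement(user: str, permission: str) -> bool:
    """Return True if the user holds the requested permission."""
    return permission in ENTITLEMENTS.get(user, set())


def gateway_invoke(user: str, prompt: str) -> str:
    """Forward the prompt to the model only for authorized users."""
    if not check_entitlement(user, "genai:invoke"):
        raise PermissionError(f"{user} is not allowed to call generative AI tools")

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=body,
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```

Centralizing access this way also gives a single place to add logging, auditing, and prompt filtering, which matches the governance pillars described above.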
Practical applications
Stellantis presented several use cases for generative AI across a number of business areas, demonstrating its transformative potential: in engineering, an AI application for vehicle data classification and intelligent parts search; in purchasing, assistance with contract analysis and optimization of purchasing processes.
Towards seamless and secure AI integration
Stellantis emphasizes the importance of infrastructure that is both seamless and secure for the integration of generative AI. The development of robust monitoring systems and the implementation of advanced security protocols are crucial to prevent abuse and ensure that AI technologies serve the public interest while protecting user privacy.
Implications for the future of the automotive industry
Stellantis' initiative shows how companies can navigate the complex landscape of generative AI. By taking a comprehensive approach that considers technical, ethical, and human aspects, organizations can maximize the benefits of AI to transform their operations while remaining true to their core values.
This approach also highlights the importance of ongoing training and awareness. By educating employees and stakeholders on the principles of responsible AI, Stellantis ensures that its workforce is not only technically competent but also ethically prepared to work with advanced technologies.
Challenges and Opportunities
The integration of generative AI presents many challenges, including risk management, data security, and ensuring fair and transparent use of AI.
Overcoming these challenges is essential for Stellantis and other companies. However, the opportunities offered by this technology, such as improved decision-making, optimized operations, and personalized services, are considerable.
Stellantis has offered a detailed view of the responsible adoption of generative AI in the automotive sector. By placing ethics at the heart of its AI strategy, Stellantis is not simply following technological trends, but redefining them by laying the foundations for a future where technology and human values evolve together. This model could serve as a reference for other companies seeking to harness the potential of generative AI while successfully navigating the complexities of its large-scale deployment.
Generative AI: Revolutionizing drug discovery by Iktos
The transformative use of artificial intelligence in the pharmaceutical industry was highlighted by Thomas Sauzeau from AWS and Nicolas Do Huu from Iktos. They presented how generative AI is being used to innovate in the discovery of new drugs, offering the potential for radical change in the development and optimization of treatments.
Exploring generative AI for drug discovery
Iktos, in partnership with AWS, has developed a revolutionary SaaS platform that applies artificial intelligence to transform the drug discovery process. This platform helps pharmaceutical companies accelerate their research, reduce costs, and increase the success rate of drug candidates.
Use of large language models (LLMs)
A central element of this solution is the use of LLMs, which enable the generation and exploitation of complex data models necessary for the synthesis and analysis of new molecular structures. Thanks to Amazon Bedrock, Iktos can implement these powerful LLMs, promoting more accurate modeling and simulation of molecular interactions.
Specific Applications
The presentation highlighted several applications of generative AI in the drug discovery process:
- Molecular structure optimization: AI is used to refine molecular structures, maximizing their effectiveness and minimizing adverse effects (a generic illustration follows this list).
- AI and Brain-Based Neural Network (BBNN) training: Development of models capable of predicting drug test results faster than conventional methods.
- From hit to lead and lead optimization: AI techniques to accelerate the selection of the most promising candidates and optimize them for clinical trials.
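Iktos's models and data are proprietary, but the general idea of scoring and filtering candidate structures can be pictured with a generic sketch built on the open-source RDKit library. The SMILES strings and thresholds below are arbitrary examples for illustration, not Iktos chemistry.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Arbitrary example candidates expressed as SMILES strings (illustrative only).
candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",                # aspirin
    "CC(C)Cc1ccc(cc1)C(C)C(=O)O",           # ibuprofen
    "CCN(CC)CCCC(C)Nc1ccnc2cc(Cl)ccc12",    # chloroquine
]

for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        continue  # skip strings that do not parse into a valid molecule
    # Drug-likeness (QED) and molecular weight as simple illustrative filters.
    score = QED.qed(mol)
    weight = Descriptors.MolWt(mol)
    keep = score > 0.5 and weight < 500
    print(f"{smiles}: QED={score:.2f}, MW={weight:.1f}, keep={keep}")
```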
AWS Infrastructure and Support
The collaboration with AWS provides Iktos with a robust and secure infrastructure for processing sensitive data and conducting complex simulations required for drug discovery. Molecular dynamics simulations, which are crucial for understanding drug interactions at the atomic level, are performed on GPUs via AWS, ensuring speed and efficiency.
The Future of AI at Iktos
Iktos plans to develop a fully automated laboratory, Iktos Robotics, which will use AI to automate not only the discovery but also the synthesis of new drugs.
This initiative could revolutionize the field of pharmaceutical research, making the process much faster and less expensive.
Iktos illustrates the revolutionary potential of generative AI in the healthcare sector, particularly in the discovery of new drugs. By leveraging the power of AI and a robust cloud infrastructure, Iktos is ideally positioned to drive innovation in pharmaceutical research, offering new possibilities and hope in the treatment of various diseases.
The Impact of Generative AI on Startups: A Review of the AWS Summit 2024
Moderated by Vivien de Saint Pern from AWS, the roundtable brought together speakers from several innovative startups, including Grace Mehrabe from Outmind, Charles Borderie from Lettria, and Svend Court-Payen from Qlip. They shared their experiences of implementing generative AI in 2023, the benefits they have gained, and the challenges they have encountered.
Various experiences with generative AI
Qlip highlighted their ability to stay ahead despite increased competition by continuously updating the contextualization of their content with LLMs. Their main challenge has been to strike a balance between the richness of AI-generated content and the need to control the "creative hallucinations" of AI outputs to ensure the relevance and accuracy of video content.
Outmind highlighted the expansion of the AI market and the addition of relevant use cases for their customers, emphasizing the importance of constant interaction with users to get them accustomed to using AI prompts.
Lettria explained their initial approach to evaluating the benefits of integrating LLMs into their products, which led to significant time savings in their operations.
Challenges of Integrating Generative AI
The integration of LLMs presented significant challenges. Qlip described the difficulties of content creation, where maintaining a balance between AI creativity and quality control is essential, while Outmind covered the challenge of getting users to adopt this new technology, which requires an educational approach from the outset.
Deployment Strategies and Impact Measurement
Production deployment was discussed, with strategies such as the blue/green deployment used by Outmind, which allows for a smooth transition for end users. Qlip explained their approach of accepting initial variability to gradually build user confidence.
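The session described blue/green deployment only at a high level. One common way to implement the traffic shift on AWS is with weighted target groups behind an Application Load Balancer; the sketch below illustrates that general pattern (the ARNs are placeholders), not Outmind's actual setup.

```python
import boto3

# Hypothetical blue/green traffic shift using weighted ALB target groups.
# The ARNs below are placeholders, not a real environment.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example/123/456"
BLUE_TG = "arn:aws:elasticloadbalancing:...:targetgroup/blue/abc"
GREEN_TG = "arn:aws:elasticloadbalancing:...:targetgroup/green/def"


def shift_traffic(green_weight: int) -> None:
    """Send green_weight percent of traffic to the new (green) version."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": BLUE_TG, "Weight": 100 - green_weight},
                    {"TargetGroupArn": GREEN_TG, "Weight": green_weight},
                ]
            },
        }],
    )


# Gradual rollout: 10%, then 50%, then 100% once monitoring looks healthy.
for weight in (10, 50, 100):
    shift_traffic(weight)
```

Shifting weights gradually lets monitoring catch regressions before all end users are affected, which is what makes the transition feel smooth.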
The impact varied depending on the company. Qlip observed a significant improvement in content generation for their users, while Outmind noted impressive 50% growth in the fourth quarter, attributed to the use of generative AI. Lettria reported considerable time savings, confirming the effectiveness of AI in their processes.
This roundtable discussion shed light on the effectiveness of generative AI in various business contexts, but also highlighted the challenges associated with its integration. The exchanges between these technology leaders offered valuable insights into how startups can leverage emerging technologies to transform their operations and service offerings.
Optimizing the video transcoding workflow at Canal+ with AWS
The session devoted to accelerating the expansion of a video transcoding workflow at Canal+ was a highlight, exploring the transition from traditional media processes to modern cloud-based solutions.
Hosted by Karim Sabbagh and Lionel Gattegno from AWS, with the participation of Kevin Saliou from Canal+, this presentation revealed the challenges and successes of Canal+ in the field of video transcoding.
Digital Transformation at Canal+
Canal+ launched an ambitious project called WALL-E, aimed at modernizing and expanding their video transcoding capacity in response to a rapid increase in archive volume. This project addressed the need to manage a growing amount of content and the evolution of delivery formats and codecs, such as HLS, MPEG-Dash, and those associated with Windows Server IIS Smooth Streaming.
Strategies adopted
Migrating to AWS has enabled Canal+ to benefit from the elasticity and robustness of the cloud.
The 'split and merge' approach, which involves splitting the source file into segments and transcoding them in parallel on Amazon EC2 Spot instances, was essential. This method optimized costs and improved the efficiency of processing large volumes of data.
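Canal+ did not share its pipeline code, but the 'split and merge' idea can be sketched with ffmpeg: cut the source into segments, transcode each segment independently (in production, fanned out across a fleet of Spot instances), then concatenate the results. The file names and encoding settings below are assumptions for illustration.

```python
import subprocess
from pathlib import Path

# Hypothetical "split and merge" sketch using ffmpeg. In a real pipeline each
# segment would be transcoded on its own Amazon EC2 Spot instance; here the
# loop runs locally for illustration. File names and settings are assumptions.
SOURCE = "source.mp4"      # assumed source file
SEGMENT_SECONDS = 60       # assumed segment length

# 1. Split the source into fixed-length segments without re-encoding.
subprocess.run(
    ["ffmpeg", "-i", SOURCE, "-c", "copy", "-map", "0",
     "-f", "segment", "-segment_time", str(SEGMENT_SECONDS), "seg_%04d.ts"],
    check=True,
)

# 2. Transcode each segment independently (the step that gets fanned out).
segments = sorted(Path(".").glob("seg_*.ts"))
for seg in segments:
    subprocess.run(
        ["ffmpeg", "-i", str(seg), "-c:v", "libx264", "-preset", "fast",
         "-c:a", "aac", f"out_{seg.stem}.mp4"],
        check=True,
    )

# 3. Merge: concatenate the transcoded segments back into a single output.
Path("concat.txt").write_text(
    "\n".join(f"file 'out_{s.stem}.mp4'" for s in segments) + "\n"
)
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "concat.txt",
     "-c", "copy", "output.mp4"],
    check=True,
)
```

In practice the fan-out step is typically orchestrated by a queue or workflow service so that each Spot instance picks up one segment, which is what gives the architecture its elasticity.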
Challenges and solutions
The main challenge for Canal+ was to manage the elasticity required to handle up to 45,000 monthly transcodes, a number well above the initial 6,000. To achieve this, Canal+ opted for a cloud-native architecture, creating a resilient and elastic platform capable of handling unprecedented load peaks.
Results and benefits
The transition to AWS has enabled Canal+ to efficiently manage a 350 TB archive and support the programming of 1,200 pieces of content per day across 160 channels. This architecture has not only facilitated international expansion but also improved service quality, notably with the introduction of Dolby Vision content in partnership with Apple TV.
Final thoughts
The WALL-E platform embodies Canal+'s ambition for a modern media infrastructure designed to be resilient, adaptable, and economically viable. This project illustrates how cloud computing technologies can radically transform operations in the media sector, delivering unprecedented flexibility and significant improvements in terms of cost and performance.
This presentation highlighted the ability of AI and the cloud to revolutionize traditional industries, but also emphasized the importance of meticulous planning and execution for such a profound transformation to succeed.
Impact and applications of generative AI in Fintech
The session on generative AI in fintech, led by Alexandre Matsuo from AWS, offered a glimpse into the revolutionary potential of this technology in the financial sector.
Presentations by Maxime Mandin of Blackfin Capital Partners, Zoe Mohl of Balderton Capital, Michael Benisti of Ledger, and Joffrey Martinez of Artefact provided a variety of perspectives on innovative applications of generative AI.
Digital transformation in Fintech
Generative AI is transforming fintech operations, improving customer experience and automating internal processes. Artefact demonstrated the use of advanced chatbots that facilitate self-management for customers, enabling services such as account opening and insurance and credit applications through natural language interactions.
Practical examples
Ledger highlighted how generative AI doubled their productivity through AI-assisted development, helping them write and optimize code more efficiently. They also emphasized the importance of maintaining a "human-in-the-loop" approach so that AI complements human capabilities without replacing them.
Blackfin uses AI to write brokerage summaries in response to financing requests, significantly improving their customer service.
Challenges and Strategies
The challenges of adopting generative AI were discussed, including the risks associated with data recognition. Ledger raised the issue of accuracy, while Blackfin discussed the risks of fraud and the need for ongoing audits to ensure security and compliance.
Investor outlook
Investors, such as those at Blackfin Capital and Balderton Capital, have expressed optimism about the potential of generative AI while emphasizing the importance of rigorous investment criteria. They are looking for applications with a clear return on investment and the ability to train AI on specific data to develop truly personalized and effective solutions.
Conclusion and future prospects
Participants emphasized that fintech is at an exciting turning point with generative AI, which offers incredible opportunities to transform financial services. However, this also requires a renewed approach to risk management, ongoing training, and investment in secure technologies. The future of fintech, enriched by generative AI, promises significant improvements in efficiency and an enhanced customer experience, making financial services more accessible and personalized than ever before.
Implementation of an effective observability strategy
The session led by Rodrigue Koffi and Sébastien Duval from AWS explored the vital importance of observability in modern architectures, particularly in cloud environments.
This discussion highlighted the need for a proactive and structured approach to monitoring and optimizing IT systems.
Fundamentals of Observability
Observability was addressed through the "Golden Circle" model, which asks three questions: "Why observe?", "How to observe?", and "What to observe?".
This method aims to clearly define objectives and methodologies before delving into technical aspects.
Observability maturity
The maturity of observability within an organization can be classified into four levels:
- Level 1 - Fundamentals: Collection of basic telemetry data.
- Level 2 - Telemetric analysis: Transformation of data into actionable information.
- Level 3 - Advanced observability: Proactive detection and correlation of anomalies.
- Level 4 - Proactivity: Automatic identification of problem sources for rapid resolution.
Techniques and tools
For each maturity level, specific AWS tools were recommended:
- Amazon CloudWatch for an overview and basic alerts (a minimal example follows this list).
- AWS X-Ray for detailed tracing and fine-grained anomaly detection.
- Amazon Managed Service for Prometheus for advanced metrics and alert management, especially in Kubernetes environments.
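As a concrete illustration of the first maturity levels, a team might publish a custom metric and alarm on it with Amazon CloudWatch. The sketch below is a generic example, not the speakers' reference setup; the namespace, metric name, and threshold are assumptions.

```python
import boto3

# Generic illustration of level 1-2 observability: publish a custom metric
# and alarm on it. Namespace, metric name, and threshold are assumptions.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish a business-level metric (e.g. failed checkout attempts).
cloudwatch.put_metric_data(
    Namespace="Example/Checkout",
    MetricData=[{
        "MetricName": "FailedCheckouts",
        "Value": 3,
        "Unit": "Count",
    }],
)

# Alert when failures stay above a threshold for three consecutive minutes.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-failures-high",
    Namespace="Example/Checkout",
    MetricName="FailedCheckouts",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```

Requiring several evaluation periods before the alarm fires is one simple way to apply the alerting advice below and avoid flooding teams with transient noise.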
Importance of observability
The importance of observability has been highlighted as crucial not only for detecting and resolving incidents, but also for understanding their impact on operations and user experience. It enables organizations to remain responsive and predictive rather than simply reactive.
Best practices
Specific practices were recommended, such as using AWS Real User Monitoring (RUM) to observe the user experience in real time, adopting a differentiated dashboard strategy for internal stakeholders, and implementing an effective alerting strategy to avoid information overload.
Iteration and continuous improvement
Iteration and continuous improvement were presented as essential, emphasizing that observability is not a one-time project, but a continuous process that must evolve with the monitored systems.
In conclusion, the session emphasized the importance of clearly defining business objectives and aligning them with observability strategies. Assessing observability maturity is crucial to ensuring that the strategies implemented are not only effective but also tailored to the specific needs of the business. When executed well, observability enables a better understanding of systems and an increased ability to respond quickly to changing conditions, ensuring customer satisfaction and loyalty.
Conclusion of the AWS Summit 2024: Navigating the age of AI with ethics and harmony
The summit highlighted how AWS technologies facilitate the use of AI by providing tools that augment human capabilities. However, this integration raises crucial questions:
- How can we maintain a healthy balance between human autonomy and automated decision-making?
- How is data, which reflects our lives, used to improve these technologies without compromising our privacy?
The role of technology leaders
As a technology leader, AWS is shaping the future of our digital world, but with great power comes great responsibility. Their role in promoting fair practices and stimulating healthy competition is crucial. This prompts us to reflect on how these companies can act as guardians of technology, innovating not only for progress, but also to ensure that these advances benefit everyone equally.
Philosophy of coexistence
The convergence of AI, data, and the cloud prompts profound reflections on our relationship with technology. How are these tools, which extend our intelligence and capabilities, redefining what it means to be human? What place will technology occupy in our lives, and how can we use it to enrich the human experience rather than dominate it?
The convergence of technologies and practices showcased at the AWS Summit 2024 highlights an undeniable truth: AI has immense potential, but its safe implementation requires synergy between technological innovation and human vigilance.
As AI pushes the boundaries of what is possible, transforming data into decisions and actions in near real time, it remains essential to navigate this new era with a clear ethical and security compass.
AI redefines, humans secure.
By embracing this duality, we can not only keep pace with innovation but also guide it toward a future where technology amplifies good without compromising safety or ethics.
Lionel GAIROARD
DevSecOps Practice Leader




