Faced with an immense volume of password detections to process during our assignment, we decided to automate a number of steps to save valuable time and improve the quality of our actions. The goal? To handle password-related incidents differently from how a traditional tool, such as a SIEM or even a SOAR, would.
Introduction
Every year, NordPass publishes a report with statistics on the most commonly used passwords, broken down by category (e-commerce, social media, etc.) and the 35 countries covered by the study. The results are not very encouraging... "123456" and its variants, "admin," and "password" are among the top 10 most commonly used passwords [1]. Unfortunately, this is not limited to the private sphere, as the passwords studied also came from the dark web and corporate data leaks. The problem is therefore clear: not only are passwords not strong enough, but they are sometimes stored where they shouldn't be.
Background
The same observation held at the client I worked for. The password policy is well defined, with basic rules on strength and encrypted storage, and the client provides its employees with the means to store and exchange this data securely. To complement this, monitoring needed to be put in place for this highly sensitive data in the company's shared storage spaces. That was the purpose of my assignment: to improve the Password Hunting project, which was still in its infancy at the time.
When I got involved, the Password Hunting project already had a detection tool, with rules developed and activated on a defined set of "unencrypted" files. An initial processing procedure had also been established. However, given the volume of detections (100,000 per month!), even with the reduced scope, the workload of processing them all overwhelmed the team. It was unthinkable for the CISO to handle all of this alone, so the initial procedure specified that the owners of the detected files had to correct the detections themselves. This raised a problem: how could the entire processing chain be defined and automated as much as possible, in order to free up the time needed to cover all shared storage spaces?
Achievements
What took a lot of time was the analysis and processing:
- Analyzing each detection individually and recording the result in a spreadsheet (which was becoming increasingly time-consuming),
- Then regularly extracting the confirmed cases and, for each one, sending an email to the employee concerned,
- Sometimes chatting with the employee so they could find the file to correct, and then collecting their response.
All of this was unsustainable, and covering 100% of it without a large team was simply impossible given the volume involved. So I took on the project as a whole, starting with the basic questions:
- What? Or more clearly, what are we detecting? At first glance, the answer to this question is fairly simple: passwords. However, this concept can be extended to all types of secrets, because passwords are not used everywhere (and thank goodness for that). There are passwords, keys (API, cloud, etc.), tokens, certificates, and more. And depending on company standards, they don't all have the same formats. Detecting secrets in the broad sense requires a clear and precise definition, otherwise the false positive rate is too high. As initial work had already been done on this point, and detection rules had been created, I left the improvement of these rules aside and only returned to them at the end.
- Where? That is, where should this sensitive data be detected? This question is crucial because, based on the team's experience, the volume generated could be too large to run (i.e., process detections) and build (automate and improve the process) at the same time. I therefore began by further reducing the initial scope that had been defined, in order to leave time for automation while keeping a representative sample of what can be detected.
- The biggest question here is how. Several sub-questions underlie this point:
- How should we proceed? Here, the process follows the same steps as a traditional detection-handling process:
- Detect
- Analyze
- Address
- Remediate
- Monitor
- Improve
- And the questions to define each step of the process:
- How to detect?
- How to analyze?
- How to address?
- ...
The heart of the matter: detailing the entire process.
How to detect
This point was the quickest to settle. As I said, a detection tool, Netskope, was already in place when I arrived, with detection rules implemented. However, it had been decided that this tool could not be used for on-premise storage spaces, for which another tool (Forcepoint) would be used. Forcepoint was not yet in use within the defined scope, but its detections would also have to be integrated in due course.
The detection rules were initially left as they were while the next steps in the process were defined. At the end of my assignment, they were reworked to take the client's standards into account, so that detections could be prioritized more appropriately. Until then, no detection had been prioritized over another, even though a password does not have the same criticality as a token, for example. To achieve this, we had to work on the definition of a secret and create several rules with clearly distinct criticality levels, ranking the types of secrets according to the client's own priorities. The idea behind this prioritization is that when the client expands the detection scope, they can do so gradually, targeting the most critical types of secrets first, without letting the volume of detections explode.
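To make the idea of tiered rules more concrete, here is a minimal Python sketch of pattern-based secret classification. The patterns, names, and criticality values are illustrative assumptions on my part, not the client's actual Netskope rules or internal standards.

```python
import re
from dataclasses import dataclass

@dataclass
class SecretRule:
    name: str
    criticality: int  # 1 = most critical, handled first when the scope grows
    pattern: re.Pattern

# Hypothetical rules: real ones must follow the company's own secret formats.
RULES = [
    SecretRule("private_key_block", 1,
               re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----")),
    SecretRule("aws_access_key_id", 1, re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    SecretRule("bearer_token", 2, re.compile(r"\bBearer\s+[A-Za-z0-9\-_.=]{20,}")),
    SecretRule("password_assignment", 3,
               re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S{6,}")),
]

def scan(text: str) -> list[tuple[int, str, str]]:
    """Return (criticality, rule name, excerpt) for each hit, most critical first."""
    hits = [(rule.criticality, rule.name, m.group(0)[:40])
            for rule in RULES for m in rule.pattern.finditer(text)]
    return sorted(hits)

if __name__ == "__main__":
    sample = "db_password = S3cretValue!\n-----BEGIN RSA PRIVATE KEY-----"
    for criticality, name, excerpt in scan(sample):
        print(criticality, name, excerpt)
```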
How to analyze, address, and remediate
Since several detection tools were planned, it was not appropriate to analyze directly in the detection tools without first finding a way to bring the detections together in one place and get an overview of the alerts. An additional tool was therefore needed for this overview, and two candidates were compared:
- The Splunk SIEM tool, a classic and widely used tool for collecting all events in one place, with the added ability to synthesize them into dashboards and create rules.
- The Microsoft PowerPlatform ecosystem of tools, consisting of PowerAutomate for creating automated processes, PowerApps for application creation, and Power BI for data visualization. Unlike Splunk, which provides a basic application, Microsoft tools allow you to create a customized tool that can be scaled as needed.
A quick comparison of the two led to the choice of the Microsoft PowerPlatform tools. They made it possible to build an application closer to what was wanted, and one that could evolve faster given that it would only be used by the Password Hunting team. The decisive factor was also that Splunk offers no simple way to send notifications to users and collect their responses automatically. In addition, the Power Platform tools are integrated with the company's communication tools (Outlook, Teams), and the suite was already in use at the company before I arrived.
The next step was to develop the application and the processes with these tools, making it possible to:
- Analyze detections from the application (with redirection to the detection tools if necessary),
- Send notification campaigns to users in order to address confirmed cases,
- Track the remediation performed by users by following their responses to the notifications sent to them.
PowerApps enabled us to create the interface from which the team analyzes detections and controls the sending of campaigns. The interface was customized to minimize analysis time. Campaigns are triggered from the application (with a simple button) and managed using PowerAutomate flows (the equivalent of scripts). PowerAutomate flows interact with the Microsoft Approvals application, which allows requests to be made to users via Teams or Outlook and their responses to be collected. The data is stored in a Microsoft database (Dataverse), which allows all PowerPlatform applications to access it without restriction. The database has also been designed to take into account the different sources of detections, as there were potentially two detection tools.
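To illustrate the design rather than the Power Platform implementation itself, the sketch below models in Python the kind of records and campaign step that the flows manipulate. Every table, field, and status name here is a hypothetical stand-in; the real data lives in Dataverse and the real logic in PowerAutomate flows.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(Enum):
    NEW = "new"
    CONFIRMED = "confirmed"          # analyst validated a real secret
    FALSE_POSITIVE = "false_positive"
    NOTIFIED = "notified"            # approval request sent to the file owner
    REMEDIATED = "remediated"        # owner reported the file as fixed

@dataclass
class Detection:
    detection_id: str
    source_tool: str                 # "Netskope", "Forcepoint", ... (multi-source by design)
    file_path: str
    owner_email: str
    status: Status = Status.NEW
    notified_at: Optional[datetime] = None

def build_campaign(detections: list[Detection]) -> list[Detection]:
    """Select confirmed detections and mark them as notified.

    In the real solution this step is a PowerAutomate flow triggered from the
    PowerApps interface: each selected detection becomes a Microsoft Approvals
    request sent to its owner via Teams or Outlook, and the owner's response
    later moves the record to REMEDIATED (or back to CONFIRMED).
    """
    campaign = [d for d in detections if d.status is Status.CONFIRMED]
    for d in campaign:
        d.status = Status.NOTIFIED
        d.notified_at = datetime.now(timezone.utc)  # stored in UTC, as in Dataverse
    return campaign
```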
How to monitor
This issue was resolved by the choice of Dataverse for storage: PowerBI connects natively to Dataverse and can use the data directly to build automated dashboards that summarize Password Hunting activity for management. All that remained was to create the dashboards as PowerBI reports and share them with management.
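As a rough illustration of the kind of aggregation behind those dashboards, here is a short Python sketch with invented sample data; the real reports are built in PowerBI directly on the Dataverse tables.

```python
from collections import Counter
from datetime import date

# Invented records standing in for rows read from Dataverse.
detections = [
    {"detected_on": date(2024, 5, 3),  "status": "false_positive"},
    {"detected_on": date(2024, 5, 17), "status": "remediated"},
    {"detected_on": date(2024, 6, 2),  "status": "notified"},
]

# KPIs of the kind shown to management: detections per month and the
# share of confirmed cases that have already been remediated.
per_month = Counter(d["detected_on"].strftime("%Y-%m") for d in detections)
confirmed = [d for d in detections if d["status"] in ("notified", "remediated")]
remediation_rate = sum(d["status"] == "remediated" for d in confirmed) / len(confirmed)

print(dict(per_month))            # {'2024-05': 2, '2024-06': 1}
print(f"{remediation_rate:.0%}")  # 50%
```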
Best practices when working with Microsoft PowerPlatform
- Development takes place in a development environment, testing in a test environment, and production in a production environment. The distinction must be made as in any development process.
- Keep copies of work that works, just in case, even though Microsoft PowerPlatform has a version management tool.
- Avoid splitting the database into too many tables: it looks neat, but it creates "delegation" constraints. When an operation cannot be delegated to the data source, PowerPlatform performs it locally, but only on a limited number of rows; this limit cannot be circumvented, so the advice is to stick as much as possible to the operations the data sources support (SharePoint and Dataverse do not support the same ones).
- PowerAutomate flows: once launched, a flow run is limited to 30 days of execution. To work around this limit, you can restart the flow just before the 30 days are up, for example by modifying the SharePoint item that triggered it; the flow then needs a way to tell which step it should resume from. Approvals (Microsoft Approvals) also expire by default 30 days after creation; to change this, simply modify the approval in the approvals table once it has been created.
- If the Microsoft documentation isn't enough, the PowerPlatform community is very large, so don't stay stuck if you can't find the answer yourself: someone else has probably already asked the question on a forum.
- Be aware of Microsoft limitations: character limit in a table field, flow execution limit, data retrieval limit from a source, etc.
- Also be wary of dates and time zone differences: in the database, dates are stored in UTC and adjusted for each user's display according to their settings (including the winter/summer time shift in Europe), as the short example below shows.
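As a quick illustration of that last point, with made-up timestamps, the same UTC value stored in the database is rendered one or two hours later for a user in Paris depending on which side of the clock change it falls:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

paris = ZoneInfo("Europe/Paris")

# A timestamp stored in UTC just before Europe's switch to summer time...
before = datetime(2024, 3, 30, 23, 30, tzinfo=timezone.utc)
print(before.astimezone(paris))   # 2024-03-31 00:30:00+01:00 (winter time, UTC+1)

# ...and the same clock reading one day later, after the switch.
after = datetime(2024, 3, 31, 23, 30, tzinfo=timezone.utc)
print(after.astimezone(paris))    # 2024-04-01 01:30:00+02:00 (summer time, UTC+2)
```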
Conclusion
The implementation of a tool created by the team, for the team, has saved time in processing detections. Analysis time for the same volume of detections has already been halved. Processing time has also been significantly reduced, allowing us to respond when employees needed help, whereas before there was no time to assist them on a case-by-case basis. Finally, the preparation time required to report on activity to management has been eliminated, as this part of the process is now fully automated. On top of that, management has appreciated having much more comprehensive KPIs than before.
In terms of detection, monitoring is working: the number of detections has fallen significantly, as has the number of confirmed cases after analysis. The impact has therefore been visible: employees have learned, taken responsibility through their actions in the remediation process, and are now more careful. Management has also been able to see this through the automated dashboards provided to them. In addition, the detection scope that had been chosen is now under control and can be expanded, which was a real challenge for the client.
For my part, I learned above all that SIEM is not necessarily the answer to everything, and I discovered the Microsoft Power Platform suite, which is very comprehensive and easy to use. I was able to take on a project in its entirety and build it up step by step, which also gave me a more strategic vision.
Lucille AUBRY
Cybersecurity Consultant