Protective monitoring solutions are difficult and expensive to implement, and require significant investment in both tooling and resources to be effective. However, once they’re in place, it’s very difficult to know how effective they are: an absence of alerts could mean either that no suspicious or malicious activity is taking place, or that malicious activity is taking place but going undetected.
In the same way that penetration testing is crucial for verifying that systems are secure, it’s important to test that the protective monitoring solution is working as it should.
One of the key challenges in developing use-cases is that the blue team often has to do so blind, with no real data or log entries showing how an attack would actually appear in their environment. They are effectively guessing which logs and events to look for, and once the use-cases are implemented, they can only hope they got it right, with no way to validate them themselves until a real attack happens.
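For example, a use-case for detecting encoded PowerShell execution might end up being written purely from documentation and assumption. The minimal sketch below illustrates the kind of guesswork involved; the log schema and field names (`event_id`, `command_line`) are assumptions, and without real attack data there is no way to confirm they match what the environment actually produces.

```python
import json

def detect_encoded_powershell(log_line: str) -> bool:
    """Flag process-creation events whose command line uses PowerShell's
    -EncodedCommand flag, a common way of obfuscating malicious scripts.

    Assumes JSON log entries with 'event_id' and 'command_line' fields;
    the real field names depend on the log source and SIEM schema, which
    is exactly what the blue team is forced to guess at.
    """
    event = json.loads(log_line)
    if event.get("event_id") != 4688:  # Windows process-creation event
        return False
    cmd = event.get("command_line", "").lower()
    return "powershell" in cmd and "-enc" in cmd
```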
Rather than a full red team engagement, this is a much shorter activity designed to provide a baseline level of assurance that the protective monitoring solution is working as expected. There are several possible approaches, with varying levels of collaboration between the tester and the blue team.
At one end of the spectrum, the tester can carry out a set of suspicious or malicious activities on the system and produce a timeline of exactly what they did and when. This can then be compared against the timeline of alerts and activity produced by the blue team in order to identify any gaps. This approach works best with a third-party protective monitoring solution, where visibility and collaboration are limited.
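A minimal sketch of how a tester might record such a timeline is shown below, assuming a simple CSV output; the test actions themselves are placeholders, and a real engagement would use activities chosen to exercise the detection use-cases under test.

```python
import csv
import subprocess
from datetime import datetime, timezone

# Placeholder actions for illustration only; real tests would be chosen
# to exercise the specific detection use-cases being validated.
TEST_ACTIONS = [
    ("account enumeration", ["net", "user", "/domain"]),
    ("host discovery", ["ping", "-n", "1", "dc01.internal"]),  # hypothetical host
]

def run_with_timeline(actions, outfile="tester_timeline.csv"):
    """Run each test action, recording exactly what was done and when, so
    the output can be compared line-by-line with the blue team's alerts."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_utc", "description", "command"])
        for description, command in actions:
            started = datetime.now(timezone.utc).isoformat()
            subprocess.run(command, capture_output=True)
            writer.writerow([started, description, " ".join(command)])

if __name__ == "__main__":
    run_with_timeline(TEST_ACTIONS)
```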
If the coverage of the protective monitoring solution is known to be incomplete, it can be more effective for the blue team to provide a list of covered systems and known use-cases, from which the tester selects a sample for validation. This focuses testing on the areas that are expected to be in place, and validates that the processes for onboarding systems to the protective monitoring solution are working effectively, without wasting time on systems that are known not to be covered.
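Selecting the sample can be as simple as the sketch below, assuming the blue team has supplied the list of use-cases they believe are covered; the use-case names here are hypothetical.

```python
import random

# Hypothetical list of use-cases the blue team believes are covered.
covered_use_cases = [
    "brute-force login attempts",
    "encoded PowerShell execution",
    "new local administrator created",
    "outbound traffic to known-bad addresses",
    "scheduled task persistence",
]

# Test a random sample rather than the full set, so effort is spent only
# on coverage that is expected to exist.
for use_case in random.sample(covered_use_cases, k=3):
    print(f"Selected for validation: {use_case}")
```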
At the other end of the spectrum, testing can be fully collaborative: the tester sits down with the blue team, runs through test cases, and verifies in real time whether each one is detected. Any gaps can be investigated immediately, enabling a much faster cycle of testing and updating use-cases until they are effective. This also provides valuable knowledge sharing between the red and blue teams, and can help identify other gaps and weaknesses in the protective monitoring solution.
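One way to structure such a session is a simple runner that steps through each test case and asks the blue team to confirm detection on the spot, recording the outcome so gaps can be chased up immediately. The sketch below assumes hypothetical test case names and a manual yes/no confirmation from the blue team.

```python
from datetime import datetime, timezone

# Hypothetical test cases; in practice each would map to a specific
# technique and the detection use-case it is meant to trigger.
TEST_CASES = [
    "clear Windows event logs",
    "add a user to a privileged group",
    "exfiltrate data over DNS",
]

def run_collaborative_session(test_cases):
    """Step through each test case with the blue team, asking in real time
    whether the activity was detected, and summarise the gaps at the end."""
    results = []
    for case in test_cases:
        input(f"\nTester: perform '{case}', then press Enter...")
        answer = input("Blue team: was this detected? [y/n] ").strip().lower()
        results.append((datetime.now(timezone.utc).isoformat(), case, answer == "y"))
    for timestamp, case, detected in results:
        status = "DETECTED" if detected else "GAP"
        print(f"{timestamp}  {status:8}  {case}")
    return results

if __name__ == "__main__":
    run_collaborative_session(TEST_CASES)
```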