MITRE ATT&CK Evaluations provide a rare, transparent look at how security tools detect and respond to threats—but making sense of the data is just as important as the results themselves. In this episode, Allie Mellen, Principal Analyst at Forrester, breaks down what the latest evaluations reveal about alert volume, detection engineering, and the hidden costs of security operations, helping teams make smarter decisions about their defenses.
⬥GUEST⬥
Allie Mellen, Principal Analyst, Forrester | On LinkedIn: https://www.linkedin.com/in/hackerxbella/
⬥HOST⬥
Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On ITSPmagazine: https://www.itspmagazine.com/sean-martin
⬥EPISODE NOTES⬥
In this episode, Allie Mellen, Principal Analyst on the Security and Risk Team at Forrester, joins Sean Martin to discuss the latest results from the MITRE Engenuity ATT&CK Evaluations and what they reveal about detection and response technologies.
The Role of MITRE ATT&CK Evaluations
MITRE ATT&CK is a widely adopted framework that maps out the tactics, techniques, and procedures (TTPs) used by threat actors. Security vendors use it to improve detection capabilities, and organizations rely on it to assess their security posture. The MITRE Engenuity ATT&CK Evaluations test how different security tools detect and respond to simulated attacks, helping organizations understand their strengths and gaps.
Mellen emphasizes that MITRE’s evaluations do not assign scores or rank vendors, which allows security leaders to focus on analyzing performance rather than chasing a “winner.” Instead, organizations must assess raw data to determine how well a tool aligns with their needs.
Alert Volume and the Cost of Security Data
One key insight from this year’s evaluation is the significant variation in alert volume among vendors. Some solutions generate thousands of alerts for a single attack scenario, while others consolidate related activity into just a handful of actionable incidents. Mellen notes that excessive alerting contributes to analyst burnout and operational inefficiencies, making alert volume a critical metric to assess.
Forrester’s analysis includes a cost calculator that estimates the financial impact of alert ingestion into a SIEM. The results highlight how certain vendors create a massive data burden, leading to increased costs for organizations trying to balance security effectiveness with budget constraints.
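To make the cost math concrete, here is a minimal sketch of the kind of arithmetic such a calculator performs. The alert size and per-GB ingest rate below are hypothetical placeholders, not Forrester's actual figures or any vendor's pricing.

```python
# Hypothetical sketch of alert-ingestion cost math. The average alert size
# and per-GB SIEM ingest price are made-up placeholders; real calculators
# also factor in retention, storage tiers, and per-vendor alert sizes.

AVG_ALERT_SIZE_KB = 2.0        # assumed average size of one alert record
PRICE_PER_GB_INGESTED = 0.50   # assumed SIEM ingest price, USD per GB

def ingest_cost_usd(alert_count: int) -> float:
    """Estimate the SIEM ingest cost for a given number of alerts."""
    gigabytes = alert_count * AVG_ALERT_SIZE_KB / (1024 * 1024)  # KB -> GB
    return gigabytes * PRICE_PER_GB_INGESTED

# The spread discussed in the episode: one vendor emits on the order of a
# million alerts for a single emulation, another emits around ten.
for vendor_alerts in (1_000_000, 10):
    print(f"{vendor_alerts:>9,} alerts -> ${ingest_cost_usd(vendor_alerts):,.2f}")
```

Even crude numbers like these show why a vendor emitting a million alerts per attack scenario creates a very different SIEM bill, and analyst workload, than one emitting ten.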
The Shift Toward Detection and Response Engineering
Mellen stresses the importance of detection engineering, where security teams take a structured approach to developing and maintaining high-quality detection rules. Instead of passively consuming vendor-generated alerts, teams must actively refine and tune detections to align with real threats while minimizing noise.
Detection and response should also be tightly integrated. Forrester’s research advocates linking every detection to a corresponding response playbook. By automating these processes through security orchestration, automation, and response (SOAR) solutions, teams can accelerate investigations and reduce manual workloads.
Vendor Claims and the Reality of Security Tools
While many vendors promote their performance in the MITRE ATT&CK Evaluations, Mellen cautions against taking marketing claims at face value. Organizations should review MITRE’s raw evaluation data, including screenshots and alert details, to get an unbiased view of how a tool operates in practice.
For security leaders, these evaluations offer an opportunity to reassess their detection strategy, optimize alert management, and ensure their investments in security tools align with operational needs.
For a deeper dive into these insights, including discussions on AI-driven correlation, alert fatigue, and security team efficiency, listen to the full episode.
⬥SPONSORS⬥
LevelBlue: https://itspm.ag/attcybersecurity-3jdk3
ThreatLocker: https://itspm.ag/threatlocker-r974
⬥RESOURCES⬥
Inspiring Post: https://www.linkedin.com/posts/hackerxbella_go-beyond-the-mitre-attck-evaluation-to-activity-7295460112935075845-N8GW/
Blog | Go Beyond The MITRE ATT&CK Evaluation To The True Cost Of Alert Volumes: https://www.forrester.com/blogs/go-beyond-the-mitre-attck-evaluation-to-the-true-cost-of-alert-volumes/
⬥ADDITIONAL INFORMATION⬥
✨ More Redefining CyberSecurity Podcast:
🎧 https://www.itspmagazine.com/redefining-cybersecurity-podcast
Redefining CyberSecurity Podcast on YouTube:
📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
Interested in sponsoring this show with a podcast ad placement? Learn more:
[00:00:00] Allie Mellen: Okay.
[00:00:18] Sean Martin: make the business run so that we can actually generate some revenue and then protect that revenue as well.
And, uh, there's no lack of technology available to help us with that. As business leaders, making decisions on which ones best fit our industry, our environment, and our risk appetite is always difficult. And there's certainly no lack of claims in terms of what's possible with these technologies, specifically looking at attacks and responses to those attacks.
MITRE has tried to do a good job of providing organizations a view of what that looks like: what are the tactics, techniques, and procedures of the bad actors, and how do organizations identify that stuff? So I'm probably butchering this intro, but, um, let me get to the real joy here, which is welcoming Allie Mellen to the show. Allie, it's good to see you again.
[00:01:20] Allie Mellen: Thank you. It's so good to see you too. Thanks for having me.
[00:01:22] Sean Martin: It's always a good chat with you. And, uh, when I saw this post fly by on LinkedIn, I was like, I miss chatting with Allie. This looks like a great topic. We're going to look at the Engenuity Evaluations and what that means. You and your team, Allie, do analysis of the analysis, I say, and so we're going to dig into some of that and what it means for folks.
Quick word from you: who you are, what you've been up to lately at Forrester, for folks who haven't met you and haven't heard from you in a bit.
[00:01:56] Allie Mellen: Yeah, thank you. I'm Allie Mellen. I'm a principal analyst on the security and risk team at Forrester. I've been with Forrester for a little over four years now, and I cover, um, security operations. So people, process, technology, and the SOC. That includes EDR, XDR, SIEMs, or security analytics, um, and detection engineering.
And then I also cover, as a smaller component of that, nation-state threats, and AI and its use in security tools. I basically do a ton of research into these areas, and then I advise our CISO clients, which are Fortune 500, on their detection and response strategies, and then I go back and do more research.
And that's my job. My background is in computer engineering. I was a computer engineer before becoming a hacker, and then becoming a security practitioner. So I've seen this gig from a few different angles, to say the least.
[00:02:51] Sean Martin: Seen it from many angles, and you get the continuous loop of research, analysis, reporting, guidance, and all back through it. Let's, um, let's get into it. I was thinking maybe we start with: what is the MITRE Engenuity ATT&CK Evaluations? I think most people drop the Engenuity piece because I can't really say it, but I think that's just how it's referred to.
But do you think it makes sense to maybe give a two-second description of MITRE ATT&CK in the first place, for folks who aren't familiar with it? Is that cool?
[00:03:31] Allie Mellen: Yeah, absolutely. Um, MITRE ATT&CK overall is a framework that is meant to provide the typical tactics, techniques, and procedures that different threat actors use, and they get very specific about individual APT groups. MITRE ATT&CK has a whole framework for describing what those TTPs are and how they operate, and then also how to better defend against them.
It's been used in many security tools to help with alignment around things like detection coverage and how we're detecting things and why. It's also an incredibly useful tool for threat hunting and for purple teaming, to be able to communicate in a common language about the attacks that we're seeing.
Now MITRE also, um, as kind of a related output of the framework, does a series of tests called, okay, let's see if I can say it, the MITRE Engenuity ATT&CK Evaluations. There we go.
[00:04:31] Sean Martin: Woohoo!
[00:04:33] Allie Mellen: And they have a variety of these. They have the Enterprise Evaluations, which is what I frequently analyze.
And then they also have the Services Evaluation as well, which is really interesting, and my colleague Jeff Pollard looks at that. And what these do is basically emulate a series of steps of attacker activity and see how each different security tool performs, in order to understand not only whether they have visibility into the steps taking place, but also how they actually alert on them.
What does it look like when they alert on them? These evaluations have been going on for many years. I believe this is their sixth round, which is pretty wild. Um, I remember when they first started and really hit the scene; they were incredibly popular, especially among EDR providers. And it has stayed that way for the last six rounds.
And what's so valuable about it, really, is that it gives us a much clearer view of what exactly tools are able to see, which for many teams has historically been a pretty black box. To be honest, it can be very difficult to know why something is detecting what it's seeing and what that means for them.
And MITRE ATT&CK has been able to give that to practitioners. In addition, it also helps to improve the tools. There's a step where vendors can make a configuration change to detect better, and that can then, if appropriate, be brought back into the tools themselves to improve detection quality.
So there's a couple of different angles where the tests really come in handy.
[00:06:10] Sean Martin: And I remember, I don't know how many years ago now, it's got to be five or six years as well. Um, Katie Nickels, when she was at MITRE, and Fred Wilmot, who was with Splunk at the time, the three of us had a good chat about what this all means and the application of the framework within organizations.
You can see the swell of uptick and uptake, if you will, of the framework by solution providers, which then make claims that they can do certain things with the framework, obviously with the goal of helping organizations. I presume that's why, because of all these claims, the analysis came to fruition, and your team's analysis of the analysis. What prompted you to look at this, what was your vision for what you would do, and what were you trying to get out of taking this time, if you will?
[00:07:10] Allie Mellen: Yeah, um, first off, just a shout-out to Katie. She's the best and was also such a great advocate for MITRE ATT&CK. So it's a great call-out. When it comes to the analysis of the analysis, as you say, there's a couple of things that we're looking to do. Um, first off, my predecessor, Josh Zelonis, actually kicked off not only using the MITRE ATT&CK Evaluations pretty deeply in a lot of the work he was doing at Forrester, but also looking at those results and trying to help make sense of them for clients. And the reason that he did this, and that I now do this, is because one of the things that's really beneficial about the way MITRE presents this information, but also one of their Achilles' heels in a certain way, is that they do not provide scores or rankings for any of the vendors.
They provide the data, but they're not there to make a determination as to which is better than the other.
[00:08:08] Sean Martin: No winners.
[00:08:09] Allie Mellen: No winners, yeah, which I love and I think is a huge value-add for the community. I think it also can be very difficult for our clients to sift through all that data and make informed decisions about what they're seeing.
And so my goal was to say: here's all this incredibly useful data. What can clients do with it? And how can I make it simple for them to internalize it and understand what they're supposed to get from it? Um, and it's been very interesting to see how the way we approach that has changed over the years. In previous years, I thought about it from much more of a zero trust angle, and how you think about how this fits into your zero trust strategy. As we'll talk about a little later, this year it's much more focused around cost. In part, that's because of market dynamics and market changes. We're seeing a lot of frustration around costs and, at the same time, too many alerts coming in to end users.
And so we need to address that. And MITRE ATT&CK gave us a very clear picture of what it's like to work with these tools, how many alerts they were generating, what their visibility is, and that's incredibly valuable.
[00:09:17] Sean Martin: Well, let's just go right there. Um, well, let me ask you this first. Was there anything from previous analysis that you found helped organizations maybe get past a blocker, or uncover a misconception or an assumption they were making? Basically an opportunity to toot your own horn: we provided this information and we saw a market change in how operations progressed over time. Anything you can point to?
[00:09:55] Allie Mellen: Yeah, I think there's a couple of differences that have taken place. First off, in the first round of evaluations that I really looked at in depth, we tried to present the data in such a way that it would make sense to everyone and also give a lot of perspective and feedback around what was being detected and why.
And so one of our core takeaways from it is that practitioners should not be looking for 100 percent on these evaluations, which goes without saying, um, but it is very helpful to have the data to prove it. One of the things we were able to find is that when you look at the results, there are certain techniques that most vendors do not see. And that is not necessarily a bad thing, because some of these techniques are very common. They are common in any enterprise, whether it's under attack or not, and so alerting on them is not necessarily useful. Now, in this latest round, we've seen a lot of beneficial changes come out of the results.
For example, one of the things that we built was a cost calculator to look at the number of alerts these tools are generating and say: okay, if you were to ingest this into your SIEM with the following constraints, how much would it cost you just to bring in the alerts for this one attack? Or the second attack that they did, or the third attack?
And the costs ranged dramatically depending on the vendor, because the number of alerts they were generating ranged dramatically. Some vendors had over a million alerts that they were creating. Others had under 10. And that is a very stark difference. So one of the things the results enabled us to do was go to the vendors and say: hey, what is going on here?
Why are you alerting to this extent when that's going to be chaos for anyone who has to respond to it? And the feedback has been: we're fixing it. And that, to me, has been a huge win, both for the MITRE team and for the results that we pulled. The analysis that we did from those results shines a light on where the technology currently is, where things like correlation are, and where they are falling short.
And it helps to reprioritize some of the work the vendors are doing, especially at a time when I'm hearing more than ever, shockingly, from users that they're facing a larger number of alerts than in previous years.
[00:12:24] Sean Martin: All right, let's have some fun here. Let's talk about alert volume and correlation. Um, so, your view of what types of alerts are getting generated and why, and perhaps why certain vendors generate more alerts than others. Connected to that is the correlation piece, which traditionally has been rule-based, but I'm assuming there are some advancements with AI that might help with correlation, which I would hope reduces the number of alerts.
Do you get all the alerts that then generate a correlated alert? Kind of describe what's going on there, not naming any vendors in particular, but just what's happening from an alert perspective. Why so many from some, and not that many from others? Kind of the view on the tech that's making this all happen, or not.
[00:13:20] Allie Mellen: Yeah, so the MITRE team kind of organized the alerts that were coming in based on severity, and they mapped them all back. Keep in mind, that alone is quite difficult to do, because everybody has a different way of describing their alerts. Some use low, medium, high; some use a number; some use a number constrained from 0 to 100; some have no constraints.
It is all over the map. And, um, also, some teams have alerts that are for an incident, and some have alerts that are for individual activity. So, say you see a technique and you alert on that technique, versus you see a series of techniques and you alert on all of that together, coming back to the correlation conversation.
Now, one of the things that I did as I was going through this data was I looked at it and said: okay, I'm going to go back to the vendors and ask them why their number of alerts is so high, because there has to be an explanation for this. And most of their explanations were that the none, low, and medium alerts were really just informational.
They're not something that you see on the main screen. They're just something the vendors surfaced for MITRE, which is fine. I think that makes sense; they're trying to give all of the context they have on the situation, so it can be okay. Now, the challenge is that even once you remove the none, low, and medium alerts, there are vendors that still generate thousands of alerts.
Thousands of alerts for what is a total of about a hundred substeps. That's way more alerts than there are steps in the attack. So there's a couple of factors at play here. First off, we know historically there have been groups that have tried to game the system a little bit and detect on everything, which is part of the reason why MITRE introduced the volume metric: to provide a counterbalance to alerting on everything. You can alert on everything, but your alert volume is obviously going to be a lot higher than it should be. Or you can alert on what's most important, and maybe you miss a little bit from the standpoint of visibility, but you manage the alert volume a little bit better.
And so that's one of the reasons this was introduced this round, and it has been very useful for that. Now, the reason that some vendors were able to get such a low number when it comes to alert volume is because they are doing correlation. They are looking at the information that comes in and generating one or two, maybe three alerts, depending on the vendor, that have all the context needed about an incident in that single alert. It is not just an indicator that you see in the platform somewhere. It is an indicator that is in the alert itself. That is very useful, because it means you don't have to do the manual correlation you would otherwise have to do if you're looking at a bunch of disparate alerts and trying to say: okay, which ones are related to this attack and which ones aren't?
And so that's the biggest difference in approach that different vendors take. One is: okay, let's look at these individual indicators. Maybe they go into a different view that's not the alerts; it's just the hey, this was spotted and it might be malicious view. Then there's the alerts view, which they may be populating, and then there's the incidents view.
And those vendors that have a lower alert count, but also maintained visibility, are able to do the correlation required to limit the number of alerts they're presenting to the analyst. That is the ideal place to be, because we want to maximize visibility while also making it so the analyst can actually get to that alert, which in some of these cases they just wouldn't.
There's no way you're going to get through 5,000 alerts in one day.
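To make the severity-mapping problem described here concrete, below is a minimal sketch of normalizing mixed vendor severity schemes onto one scale and counting only the alerts an analyst would actually be paged on. The scale mappings and thresholds are illustrative assumptions, not MITRE's actual methodology.

```python
# Minimal sketch of normalizing vendor severities onto one scale so alert
# volumes can be compared across tools. The mappings and thresholds are
# illustrative assumptions, not MITRE's methodology.

LEVELS = ("none", "low", "medium", "high", "critical")

def normalize(severity) -> str:
    """Map a vendor-specific severity (label or number) onto LEVELS."""
    if isinstance(severity, str):
        label = severity.lower()
        return label if label in LEVELS else "none"
    # Numeric scales: assume 0-100 if the value exceeds 10, else 0-10.
    score = severity / 100 if severity > 10 else severity / 10
    if score >= 0.9: return "critical"
    if score >= 0.7: return "high"
    if score >= 0.4: return "medium"
    if score > 0.0:  return "low"
    return "none"

def actionable(alerts) -> list:
    """Drop the informational tiers (none/low/medium) discussed above."""
    return [a for a in alerts if normalize(a["severity"]) in ("high", "critical")]

mixed = [{"severity": "High"}, {"severity": 85}, {"severity": 3}, {"severity": "low"}]
print(len(actionable(mixed)))  # -> 2 of 4 alerts survive the filter
```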
[00:17:10] Sean Martin: Right. So, yeah, based on my knowledge, which is antique at this point, um, there are the individual events that lead to a correlated alert, which in rudimentary fashion means you hit some threshold or some anomaly, and the collection of them triggers some rule. Are we moving anywhere toward a story-based alert?
Very much in line with what ATT&CK is trying to do, which is present the attack path, and then hopefully you follow up with the kill chain to prevent the path from executing successfully. Are we moving anywhere where correlation or some other form of analysis can actually say: here's the story, we see this path coming to fruition, and here's where the bad actor is on that path?
Or are we still too far away from that at this point?
[00:18:11] Allie Mellen: Some vendors can, which is really exciting to see. Like, I think one of the underrated parts of these results is the fact that there are vendors in here that have under 10 alerts. That's really impressive. That's difficult to do. And they do it a couple of different ways. There's, of course, the time-based approach: you can take a look at all of the alerts that happen within a certain time frame and group those together,
if you think the time frame is close enough and the number of endpoints is few enough. There are also others that take a graph-based approach and can look at the attack as it traverses the graph, making sure that anything related to that path on the graph is brought together into one alert.
They can have some aspect of automating the investigation, if we look at the ways that agentic AI is starting to be used on things like triage, taking those actions automatically for the analyst. So there's a couple of different ways they can manage it. Um, some are using that; some are not.
Some are relying on those static indicators that you mentioned, where it's like: okay, you can do correlation, but it's going to be manual. It's going to be a lot of work for you, and that is what we want to stay away from. We don't want to still be stuck doing that, especially when, in many cases, these tools are generating more alerts, in part because they're doing more detection engineering work, but also in part because there are more attack surfaces, more detection surfaces, that they need to cover.
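A minimal sketch of the simplest of these approaches, time-window grouping, is below; the window length and alert fields are illustrative assumptions. As noted above, production systems typically also key on endpoints or an attack graph rather than time alone.

```python
# Minimal sketch of time-window alert grouping: fold alerts that occur
# close together into one incident. The window length and alert fields
# are illustrative assumptions, not any vendor's actual logic.

from datetime import datetime, timedelta

MAX_GAP = timedelta(minutes=15)  # assumed maximum gap within one incident

def group_into_incidents(alerts):
    """Group time-sorted alerts into incidents by temporal proximity."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        if incidents and alert["time"] - incidents[-1][-1]["time"] <= MAX_GAP:
            incidents[-1].append(alert)   # continues the current incident
        else:
            incidents.append([alert])     # starts a new incident
    return incidents

alerts = [
    {"time": datetime(2025, 3, 1, 9, 0),  "technique": "T1059"},
    {"time": datetime(2025, 3, 1, 9, 5),  "technique": "T1003"},
    {"time": datetime(2025, 3, 1, 14, 0), "technique": "T1486"},
]
print(len(group_into_incidents(alerts)))  # -> 2 incidents from 3 alerts
```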
[00:19:35] Sean Martin: I'm glad you touched on the response piece as well, and maybe that gets highlighted a little more in this next part, which is on the financial implications finding. Um, clearly, if you're spending a lot of time just clearing events, or trying to correlate manually and make some sense of it all beyond the correlation, that's a lot of wasted time.
Hopefully you can move more toward the right, or toward the left, however you want to look at it, where you're a little more advanced with the tooling and the analysis. But then there's the response part, right? The investigation, the lock-it-down, the cleanup, the response, or changing your protection and your other rules to detect better and maybe block better.
How does all of that impact the cost? And maybe highlight some of the things that you found in that area as well.
[00:20:36] Allie Mellen: So it's difficult to quantify that. That was kind of the gap that we really struggled with. What we quantified specifically was the cost of ingest into the SIEM, because I know from the inquiries that I get, CISOs right now, especially as they evaluate their security data management strategy... But
[00:20:55] Sean Martin: Is that a big number? Sorry to pause you. Is that a big number? I'm surprised that data ingestion, in this... I guess maybe not surprised,
[00:21:05] Allie Mellen: Oh, data ingestion in the SIEM is so expensive.
[00:21:08] Sean Martin: is it
[00:21:09] Allie Mellen: It's wild. You have to be very cognizant of what you're bringing into the SIEM and how. It's one of the biggest and most consistent questions that I get, and it's a huge driver for a lot of the changes happening in the SIEM market right now. Um, and that's part of the reason why we structured it around that: okay, you're worried about your SIEM costs?
Here's what you're going to be dealing with. And I mean, this was just for three attacks. You know, this was not attacks on thousands of endpoints, or thousands of attacks. So, um, but yeah, I could talk all day about the challenges in the SIEM market and SIEM cost.
[00:21:47] Sean Martin: Maybe we'll have to do that at some point. Uh, yeah, so for this in particular, though, what are some of the highlights, I guess? And are there organizations that don't know they have a cost issue? They just kind of roll with it and accept that that's what it is, because it's what we bought, and, I don't know, I'm still a...
[00:22:10] Allie Mellen: With the EDR, I think it depends a lot on whether or not you choose to bring these alerts into the SIEM, right? Many will just work out of their EDR and not necessarily bring it into the SIEM, because bringing it into the SIEM, especially if you're a smaller business, requires more resources, and it's going to be harder to manage.
And also, doing the correlation itself requires a lot of manual work if you're bringing it into the SIEM. So there are trade-offs. Larger enterprises very well might, especially because they want to do correlation with additional sources, unless they're going the XDR route. But to your point, even though the quantification we did is specific to the ingest into the SIEM, there are a lot of other aspects of this that are very top of mind for practitioners, especially around analyst experience, which is security analysts' perception of the tools and technologies that they work with.
That is dramatically affected when you have so many alerts coming in, because it leads to burnout. And at the end of the day, we can't lose any more security staff than we already are, either to security vendors or to different parts of tech. So we need to find ways to manage that a little more effectively.
And I see CISOs really prioritizing analyst experience as a part of their yearly goals, because they want to be able to train and build the staff more effectively. Now, if you're flooding them with 5,000 alerts that are kind of boring to go in and evaluate and figure out if they're right or wrong, they're not going to enjoy their jobs.
They're not going to be contributing effectively to the work. And it's going to be a big waste of their time on top of it, and a waste of your resources. So there are a lot of reasons why, beyond the initial upfront cost of ingest, you need to be careful with how you approach using some of these tools.
The other factor here, too, is that it does leave the analysts with a bit of a feeling of a lack of control. For a lot of EDRs, we talked about how it's a black box: beyond tuning, and maybe trying to deprecate a detection, you don't have a ton of control over the alerts and the rules that are being brought into the system.
The vendor typically has that control. And so it means that you are in a bit of a shared responsibility model with the vendor for creating detections, and also for making sure the detections being created are actually going to be beneficial and lead to better quality alerts.
[00:24:41] Sean Martin: So this just popped into my head, because it'd be easy to blame the vendors for a lot of this stuff. But I'm wondering, do you see security teams, ops teams, analysts setting stuff up incorrectly in ways that make it worse for them? Are there things we need to do from an understanding and education perspective to really get the most out of our SIEM in a way that matters?
[00:25:12] Allie Mellen: So there are two different sides to this. First off, all of the tools in the MITRE ATT&CK Evaluations were set up by the vendors. So if anybody is going to be held responsible for the outcome of that, it's going to be the vendors. But, to your point, um, there is, especially if we look at the SIEM, a mentality
[00:25:30] Sean Martin: I guess, in that case.
[00:25:34] Allie Mellen: There is a mentality of ingest everything; let's just throw it in there.
There's pressure from other groups within the organization to put additional data in and create dashboards for them, because it's just considered the data store in the enterprise. And so there's a lot of motivation for practitioners to start ingesting a lot of data. In many cases they'll also ingest that data because they're like: we'll ingest it now, we'll build detections later.
I can tell you from experience, those detections are not going to get built later. You're just going to be ingesting data and not seeing very positive outcomes from it, because you should be making decisions about what data you ingest based on what data you need for certain detections. As part of the detection engineering research that we've done, the key takeaway is that everything comes back to the quality of the detection and to the work that you're doing building the detection.
In previous years, we've had this focus on: okay, let's close the ticket, right? That focus has led us down a path where we struggle to keep detections up to date. We struggle to even understand the scope of the detections that we have. We struggle with analyst burnout. And if we refocus the role of the SOC on creating better quality detections, then we can start to have better quality outcomes.
And that starts with knowing what data you're bringing in and making informed decisions on that data based on the detections you need to create.
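One way to picture that "detections drive ingest" decision is a sketch like the following; all detection names and log source labels are hypothetical.

```python
# Hypothetical sketch of detection-driven ingest decisions: derive the log
# sources worth paying to ingest from the detections you actually maintain,
# instead of ingesting everything "for later". All names are illustrative.

DETECTIONS = {
    "suspicious-powershell-encoded-cmd": {"windows_process_events"},
    "impossible-travel-login":           {"idp_auth_logs"},
    "dns-tunneling-beacon":              {"dns_query_logs"},
}

AVAILABLE_SOURCES = {
    "windows_process_events", "idp_auth_logs", "dns_query_logs",
    "netflow", "web_proxy_logs",  # available, but nothing detects on them yet
}

# Ingest only what a maintained detection actually consumes.
needed = set().union(*DETECTIONS.values())
print("ingest now:", sorted(needed))
print("defer until a detection needs them:", sorted(AVAILABLE_SOURCES - needed))
```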
[00:27:04] Sean Martin: As we come close to the end, not quite, but we're getting there, I want to get to the last point in the key findings, which is kind of around the vendor claims. And I don't necessarily want to make this about beating up any vendors, but I want to look at this from an analyst, practitioner, security leader perspective, as they're trying to select a tool for their organization, get the most out of the tool they've selected, and run their team effectively without burnout. What are some of the discrepancies you saw or uncovered in the claims area that maybe should get teams to look differently at how they approach selecting, implementing, and using a SIEM, and, I guess in the bigger picture, their operations in general?
[00:28:03] Allie Mellen: Yeah, and I totally agree. Definitely don't want to be just targeting the vendors here. There are some vendors that did really, really well in this evaluation. It's definitely worth looking at the full list to see what all their capabilities are, because it gives you a lot of interesting data on the disparities between the vendors. But, um, when it comes to evaluating some of the claims, we have this every year, right?
Every year, vendors try to take the results and turn them into marketing material, which is their prerogative. And, um, I feel for them, because it's not an easy task to do. But unfortunately that often leads to things like claims of 100 percent performance in the evaluations from a vendor, and things like that.
And so, to be honest, I would just be very skeptical of any results from any vendor and the way they are portraying those results, because it is based around whichever metric is most important to them or conveys their success most completely. And that's just the reality of it, right? That's nothing against the vendors; vendors gotta vendor, and all of that. But it is misleading to an analyst or a practitioner who may want a bigger picture of where the pros and cons are of using a particular tool.
So I'd look for unbiased reviews and perspectives on it, first off, which is why we do the research. But also look at the results from MITRE yourself, directly. Look at the screenshots they provide, because they provide screenshots of all of the tools. Look at the alert volume, the coverage that each vendor had, and the background noise they were able to spot or not spot, which is really another interesting addition.
If you do want to use the MITRE ATT&CK Evaluations in this way, make an informed decision based on far more than just a vendor blog post that is ultimately trying to paint their technology in the best light, because that's their job.
[00:30:07] Sean Martin: Yeah, good stuff. And maybe let's wrap up with this. Let's speak to the CISOs. I know you get to have a lot of fun conversations with them. Um, not looking for you to disclose anything, but any conversations or aha moments when you're speaking with the CISO community about their programs? Not necessarily around the analysis and research you did and the findings you have, but ones that do connect.
So they're talking about these things and, oh my gosh, there is a line there. I think you touched on one or two of them already, but is there anything else that comes to mind?
[00:30:50] Allie Mellen: Yeah, I think the biggest thing is that this all ties into detection engineering. Um, it's something we've talked about a lot over the past two years, and something we've seen happening in the industry quite a bit: the shift from security operations being much more of a waterfall function to detection engineering, with more of a software development style approach. Not necessarily detection-as-code, though some do that, but really more of a focus on how we can continuously deliver high-quality detections. And, um, the MITRE ATT&CK Evaluations and the MITRE ATT&CK framework are all a part of it, since the framework serves as that common language and is frequently used to reference what detections are doing and to help understand detection coverage.
But also the evaluations, because especially for any detection engineers, it's great to be able to go through the MITRE ATT&CK results and see what is being detected and why. And you can do a little reverse engineering of: why did the vendor choose not to detect, or not see, this particular technique?
That was something we really called out from the previous evaluations, like we talked about earlier: certain vendors aren't seeing certain things, and that's intentional, because it's just wasteful for them to be collecting that information. That's the type of thing that can be really informative to detection engineers as they do their research. It also gives you an opportunity to look and say: okay, what are the gaps in this particular technology? Perhaps it's time for us to build a detection rule to cover that gap we found in the MITRE ATT&CK Evaluations, because it's actually much more relevant to our business than it is to maybe all of that vendor's other customers.
So it's an opportunity to see inside the black box, get a perspective on where gaps might be, and determine if those gaps are acceptable for you and your team.
[00:32:45] Sean Martin: Super cool. And you know what else is cool? My co-founder Marco is not here to stop me asking one more question, because I always have one more. You did such a good job on the detection piece. I know you look a lot at the response and the SOAR space as well. How is this impacting teams' ability to respond properly, with automation and orchestration as well?
And I'm just thinking, if you're not seeing the data, you might need that context in the investigation and response. How's that all connecting on the response side?
[00:33:26] Allie Mellen: Yeah, so I love this question, because we frame our detection engineering research as detection and response engineering very intentionally. There's an oversight typically caused by the fact that a lot of the detection technologies were built separately from the response technologies.
Like, most SOARs were standalone and then got acquired by the detection technologies, right? And so it leaves this gap for security teams, where they're building detections and they're building response playbooks. But in reality, we need to be linking every detection to a response playbook to add additional context in.
And so it would ideally be a cohesive flow between the two: here's the detection that you built, and here are the response playbooks that need to trigger as part of that detection. Um, and MITRE ATT&CK supports this in a couple of different ways. Actually, they have recommendations built into the framework that talk about defense procedures and things that you should be looking for.
But also, coming back to treating it as a common language, it's just an effective way to say: okay, if we see this particular technique, we most often need to gather additional context around this process, and so we're going to automatically pull that with the SOAR playbook. So it all links together in a really important way. And, um, ideally, if you have the capacity, building any SOAR playbooks should be a part of your detection engineering lifecycle as well, and should follow a very similar path.
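A minimal sketch of the detection-to-playbook linkage described here, keyed by ATT&CK technique, might look like the following; the detection names, technique mappings, and playbook steps are all hypothetical.

```python
# Hypothetical sketch of linking every detection to response playbooks,
# keyed by ATT&CK technique, so firing a detection also queues the
# context-gathering steps. All names and mappings are illustrative.

from dataclasses import dataclass, field

@dataclass
class Detection:
    name: str
    technique: str                 # ATT&CK technique ID, e.g. "T1059.001"
    playbooks: list = field(default_factory=list)  # explicit overrides

# Default playbooks per technique, used when a detection has no overrides.
PLAYBOOKS = {
    "T1059.001": ["collect-process-tree", "isolate-host-if-confirmed"],
    "T1003":     ["snapshot-lsass-access", "force-credential-rotation"],
}

def on_detection_fired(d: Detection):
    """Queue the response playbooks registered for this detection."""
    for playbook in d.playbooks or PLAYBOOKS.get(d.technique, []):
        print(f"[{d.name}] queueing playbook: {playbook}")

on_detection_fired(Detection("encoded-powershell", "T1059.001"))
```

Keeping the mapping in one place means a new detection without a linked playbook is easy to spot in review, which is the gap this part of the conversation is about.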
[00:35:05] Sean Martin: Excellent advice. I could talk to you for hours about this, Allie. It's always fun.
[00:35:11] Allie Mellen: Very fun. Thank you for having me.
[00:35:12] Sean Martin: I get to nerd out a bit here on this.
[00:35:15] Allie Mellen: Me too.
[00:35:17] Sean Martin: So with that, we'll let people ruminate a bit. I have the link to your initial post, which prompted this conversation, with a number of links in it to the calculation tool and the report and other things.
So I'll include that for folks to nibble on. And, of course, I invite everybody to stay tuned for more Redefining CyberSecurity here, and to subscribe and share with your friends and enemies. Allie, thank you so much. Really appreciate it. Keep well; hopefully we'll see you on the conference circuit
somehow, somewhere,
[00:35:53] Allie Mellen: Definitely.
[00:35:54] Sean Martin: and, uh, you're always welcome back as you, as you uncover new things that are, that are fun and exciting to talk about, which you
[00:36:01] Allie Mellen: Thank you so much.
[00:36:03] Sean Martin: all right. Thanks a million. Thanks everybody.