ITSPmagazine Podcast Network

The Fault in Our Metrics: Rethinking How We Measure Detection & Response | A Conversation with Allyn Stott | Redefining CyberSecurity with Sean Martin

Episode Summary

In this episode of Redefining CyberSecurity, Sean Martin and Allyn Stott explore how to effectively measure detection and response in cybersecurity using the SAVER framework, highlighting the importance of actionable, goal-aligned metrics. Stott shares his insights from fifteen years in the field, emphasizing the need for metrics that drive strategic improvements and better inform security posture.

Episode Notes

Guest: Allyn Stott, Senior Staff Engineer, meoward.co

On LinkedIn | https://www.linkedin.com/in/whyallyn

On Twitter | https://x.com/whyallyn

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

View This Show's Sponsors

___________________________

In this episode of The Redefining CyberSecurity Podcast, host Sean Martin converses with Allyn Stott, who shares his insights on rethinking how we measure detection and response in cybersecurity. The episode explores the nuances of cybersecurity metrics, emphasizing that it's not just about having metrics, but having the right metrics that truly reflect the effectiveness and efficiency of a security program.

Stott discusses his journey from red team operations to blue team roles, where he has focused on detection and response. His dual perspective provides a nuanced understanding of both offensive and defensive security strategies. Stott highlights a common issue in cybersecurity: the misalignment of metrics with organizational goals. He points out that many teams inherit metrics that may not accurately reflect their current state or objectives. Instead, metrics should be strategically chosen to guide decision-making and improve security posture. One of his key messages is the importance of understanding what specific metrics are meant to convey and ensuring they are directly actionable.

In his framework, aptly named SAVER (Streamlined, Awareness, Vigilance, Exploration, Readiness), Stott outlines a holistic approach to security metrics. Streamlined focuses on operational efficiencies achieved through better tools and processes. Awareness pertains to the dissemination of threat intelligence and ensuring that the most critical information is shared across the organization. Vigilance involves preparing for and understanding top threats through informed threat hunting. Exploration encourages the proactive discovery of vulnerabilities and security gaps through threat hunts and incident analysis. Finally, Readiness measures the preparedness and efficacy of incident response plans, emphasizing the coverage and completeness of playbooks over mere response times.
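
One way to picture the discipline the SAVER framework imposes (a minimal, hypothetical Python sketch with made-up metric names, not code from the episode) is to tag every reported metric with the SAVER area it serves, and treat anything that cannot be tagged as a candidate for removal:

```python
from enum import Enum

class Saver(Enum):
    STREAMLINED = "streamlined"   # operational efficiency and automation
    AWARENESS = "awareness"       # threat intel shared across the org
    VIGILANCE = "vigilance"       # preparedness for top threats
    EXPLORATION = "exploration"   # findings surfaced by threat hunts
    READINESS = "readiness"       # incident response coverage

# Hypothetical program metrics, each tied to a SAVER area (None = untied).
metrics = {
    "alerts_auto_resolved_pct": Saver.STREAMLINED,
    "top_threats_briefed_this_quarter": Saver.AWARENESS,
    "top_techniques_with_tested_detections": Saver.VIGILANCE,
    "hunt_findings_promoted_to_detections": Saver.EXPLORATION,
    "top_threats_with_playbooks_pct": Saver.READINESS,
    "raw_alert_count_per_month": None,
}

# Metrics that answer no SAVER question are candidates to cut.
untethered = [name for name, area in metrics.items() if area is None]
print(untethered)  # ['raw_alert_count_per_month']
```

This mirrors the test Stott applies later in the conversation: if a metric can't be tied back to one of these areas, ask what question it was actually meant to answer.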

Martin and Stott also discuss the challenge of metrics in smaller organizations, where resources may be limited. Stott suggests that simplicity can be powerful, advocating for a focus on key risks and leveraging publicly available threat intelligence. His advice to smaller teams is to prioritize understanding the most significant threats and tailoring responses accordingly.

The conversation underscores a critical point: metrics should not just quantify performance but also drive strategic improvements. By asking the right questions and focusing on actionable insights, cybersecurity teams can better align their efforts with their organization's broader goals.

For those interested in further insights, Stott mentions his upcoming talks at BSides Las Vegas and Blue Team Con in Chicago, where he will expand on these concepts and share more about his Threat Detection and Response Maturity Model.

In conclusion, this episode serves as a valuable guide for cybersecurity professionals looking to refine their approach to metrics, making them more meaningful and aligned with their organization's strategic objectives.

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

The Fault in Our Metrics: Rethinking How We Measure Detection & Response (BSIDES Session): https://bsideslv.org/talks#EVFTBT

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: 

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring this show with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Episode Transcription

The Fault in Our Metrics: Rethinking How We Measure Detection & Response | A Conversation with Allyn Stott | Redefining CyberSecurity with Sean Martin

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] Here we are. You're very welcome to a new episode of Redefining Cybersecurity on ITSP Magazine. I am still Sean Martin, host of the show, and I still get to talk to all kinds of cool people about all kinds of cool things. Uh, you may have seen a ton of stuff coming out from, uh, from Black Hat, and I'm always interested in the technical research driven stuff.
 

And funny enough, a couple of the chats that, uh, that popped up for me as, as interesting topics are related to measurement. And, uh, today Allyn and I have, uh, been getting this on the calendar and having, and, uh, putting together this chat on how do we measure? What are the metrics? Should we care? What should we care about?
 

And I'm thrilled to have Allyn on again, Allyn Stott. Uh, thanks for being on the show.
 

Allyn Stott: Thanks for having me again.  
 

Sean Martin: Yeah, it's gonna, gonna be fun. And yeah, I'm, since [00:01:00] uh, yeah, I'll just let everybody know again quickly what you're up to, what you can share about your role.
 

Allyn Stott: Oh, yeah. Sure. I'm uh, I'm Allyn Stott and I've been doing information security things for about the last 15 years.
 

And, uh, I started my journey into detection and response from the red team side. I spent the first five years of my career breaking things, uh, causing trouble, uh, making a lot of blue teams sad. Uh, and then at some point you get a little frustrated and you're like, why are things not getting better? I'm here every year, but it's the same.
 

Uh, so I made that, that jump over to the blue team side of the house and kind of just been there ever since, started just doing general, uh, operations and kind of a little bit of everything and have gone from [00:02:00] being, uh, just, uh, an engineer, jack of all trades for, uh, infosec and then, uh, have slowly narrowed my scope into detection and response.
 

Um, I've, uh, I've done both engineering and senior management roles, and I like to wander between the two. I'm currently in an engineering role, and I have a really cool role right now because I'm in, on a team that we call the technology leadership team, and what's really neat is that I get to work across a lot of different teams. 
 

Um, and help influence the technical direction that we're going so that we can not only, uh, detect and respond quickly, but then go over to the other teams and figure out, okay, how could we have prevented this? How can we make this better? And so I get to kind of float between all those different teams. 
 

And that's a lot of fun for me.  
 

Sean Martin: I love it. It [00:03:00] sounds like fun. And I, it, as you were talking there, I was thinking about when I was in quality assurance engineering, um, which included some security testing, AppSec stuff back in the day. Um, like you, I was like, I'm finding all these bugs. Can we, can we not figure out how to find them faster, sooner, and do something better in dev?
 

So I became a program manager. I'm like, well, if we just define the product better in the first place, then maybe we don't, we won't build bugs into the system. So I became a product manager. And then ultimately I got to work with, with a bunch of different teams, um, to kind of, you know, just kind of make, make things better.
 

All around. So really, really cool. So all of that's fun and good. Um, organizations want to know that it's making a difference.  
 

Allyn Stott: That's right.  
 

Sean Martin: Are you, are you introducing fewer bugs? Are you fixing them faster? Are you, you have less, uh, [00:04:00] I'm drawing a blank on the word now. And you, you, do you not reintroduce the same bug over and over and over again? 
 

Um,  
 

Allyn Stott: That's right. Yeah. So the big answer of, uh, the big question of, what would you say you do here?
 

Sean Martin: Exactly. Exactly. What's your impact? So let's talk about that in the, in the context of, uh, SecOps, right? So detection and response, um, I don't know how broad you go or how much you want to go in terms of the stuff because I know this is part of, part of a talk you've done, part of a bigger part of a new talk you've put together. 
 

It can cover network, endpoints, cloud, apps, fraud, privacy. I don't know how big and wide you want to go, but I guess the big question in the start is what is measurement? What are metrics? And, and I think management cares, but why should we as practitioners care?
 

Allyn Stott: Yeah, [00:05:00] um, you know, it's, it's funny when I think about metrics, I think about how I kind of landed in the position of needing to care about metrics, um, and it's, it's not the best way, but it's probably the most honest way and how a lot of us fall into it. 
 

I was in my first management position and I was asked, hey, we need a new board of directors, uh, update for your detection and response program. So, uh, let's get those by Monday. And you're like, okay, great. So what metrics have we used before? And you take a look and the team's like, well, so the last guy was
 

putting these out. We don't really know where these numbers came from. They're probably made up, uh, but you're gonna do a lot better, we bet. And, uh, yeah.
 

Sean Martin: Only up from here.  
 

Allyn Stott: Only up from here. That's right. And, uh, so you do what you typically do in these [00:06:00] situations and, uh, you Google it and, uh, then you end up just copying metrics from either that, or, you know, from your last job, um, and so you end up in this
 

kind of condition where you really don't care about the metrics. You're just trying to get something done. And I think this is especially true in like operational roles where you really don't have time to do metrics. Uh, you're, you're kind of just trying to get them done. You don't think about like why you should care.
 

There's a, you know, there's in, in thinking about metrics, um, you, I did a lot of research about like the state of metrics in general, like how, uh, how metrics are business impacting. And I was reading this, uh, this paper. I was actually laughing, the last time we talked, I talked about how my first reaction is to [00:07:00] go to Google Scholar and look up stuff, but the same situation here, I'm looking up metrics in Google Scholar, and I found this really great paper out of, um, MIT that's called, uh, Metrics: You Are What You Measure, and they talk about how the metrics that you choose,
 

the ones that you kind of bring attention to, those help you, those help you make decisions and take action. Um, and the metrics you choose will improve. And there's a double-edged sword there, where the ones that you choose could be really great, they might also not be the most important thing. Or they might not be things you can control.
 

Um, and their, their, their entire argument is that you become what you measure. And so really metrics are the kind of beating drum of this is what we're doing. This is what [00:08:00] we're targeting to get better at. And that's all well and good, but you have a team of engineers that are going to look at that and go, okay, great, what can I do to make that better? 
 

And whether that's, uh, nothing, maybe there's nothing you can do to make that metric better. Or maybe it's something you could make better, but it's, it's not going to actually make things better. Uh, like a number 
 

Sean Martin: That, and that, that's coming through for me as well as I'm thinking about this, because there, there are figures that are, I'll say, tech driven, right?
 

And they're, they're, they're performance related. So, either some system or somebody sitting behind a system does it faster. Yeah. Or does it slightly better, whatever better means. Um, and I feel that, cause to your point, we're going to, we're going to make decisions, but some of those [00:09:00] decisions may force the team to work later or burn out or,
 

or leave something else behind that's really important. And there are connecting measures. So some measures we're not, not really calculating, like how do you measure burnout?
 

Allyn Stott: Right.  
 

Sean Martin: And then how do you measure the impact of, of a good result in one area that is having impact in another. So I don't know your thoughts on some of that stuff.
 

There's a lot in there. 
 

Allyn Stott: A really, a really good example, like to, uh, the talk I'm giving at BSides Las Vegas is very much that, right? Like it's, it's essentially a collection of all of my mistakes. Here's all the metrics I've used in the past. Here's why all of them are terrible. Um, and I don't think I'm alone in a lot of these because, uh, they mostly came from Google searches and previous jobs.
 

Um, but thinking about like even just some of the most obvious incident response metrics, like [00:10:00] mean time to recover. You know, a lot of these charts are very typical. You have, you know, your time is low and then there's these spikes where, you know, it took longer to recover. And so you look at those and the team's like, all right, great. 
 

Like we want to bring down mean time to recover. And then we all look at each other and we're like, well, first of all, that's not just this team. Uh, the detection and response team don't have complete influence over mean time to recover. We could do everything perfectly. And mean time to recover may take a really long time. 
 

Or it can be really short. It really depends. And that's the other piece of it is that every incident we have is not the same type of incident. Incidents are so dramatically different, and yet we tend to visualize them as just single entities on a timeline. And so when you show a graph of mean time to recover, and it's the squigglies of up and down, [00:11:00] what story does that tell? 
 

Well, it tells you that some incidents take longer than others. Okay, great. Uh, and does it tell you, like, where you can improve and where to get better? Probably not. Um, uh, the, uh, example I use is, uh, Mean Time to Recover, where the time slowly goes up over the periods of November and December, which is something you see all the time. 
 

And it's like, okay, great. What happened here? Well, Thanksgiving and Christmas, that's what happened there. Um, so, you know, do you cancel those? Is that the, is that, is that what the takeaway is there?  
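
As a toy illustration of Stott's point (hypothetical numbers, not data from the episode), an aggregate mean time to recover hides exactly the per-incident-type story that would make the metric actionable:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical incident log: (incident_type, hours_to_recover).
incidents = [
    ("phishing", 2), ("phishing", 3), ("phishing", 2),
    ("ransomware", 40), ("ransomware", 55),
    ("lost_laptop", 1),
]

# One aggregate number: the "squiggly line" that tells you nothing to do.
print(f"aggregate MTTR: {mean(h for _, h in incidents):.1f}h")

# Broken out by incident type, the same data points at specific playbooks.
by_type = defaultdict(list)
for kind, hours in incidents:
    by_type[kind].append(hours)
for kind, hours in sorted(by_type.items()):
    print(f"  {kind}: mean {mean(hours):.1f}h over {len(hours)} incident(s)")
```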
 

Sean Martin: That's right, yeah. Yeah, no, no holidays for anybody. That's right. And, uh, no, no threat actors taking advantage of, of, uh, understanding that people are away during those times, yeah. 
 

Allyn Stott: Yeah, yeah.  
 

Sean Martin: Let's just change this.  
 

Yeah,  
 

Allyn Stott: in thinking about like those, um, especially like time metrics and like [00:12:00] how, like, do we need to do something? Uh, I watched, uh, before I gave this talk, I watched this really great talk by Eric Brandwein from AWS, and he talks about the tension between absolutes and ambiguity and security. 
 

And his argument about metrics is that when we look at a metric, it should immediately answer: what do you want from me? What do you want me to do? And a lot of the metrics that we use, especially the ones that are about time-to, right, like in incident response, we're almost obsessed with, with time metrics.
 

It's all about mean time to analyze, mean time to triage, mean time to recover. And we graph all these and we see them, but they don't tell us what to do. Like where, where am I supposed to go? And so in thinking about this and then thinking about incident response, one of the, I think, most [00:13:00] powerful tools that, uh, I'm thinking about more and more incorporating and thinking about how to, how to implement this is this concept of filtering out what you can't reduce right now.
 

So if you have an incident and, you know, it's a ransomware incident, for example, and you're like, okay, great. I have this playbook for ransomware. I know this part of the playbook is going to take, you know, two hours. Well, sure you can buy new capabilities, you can get tooling, but like right now, it's two hours.
 

It's gonna take two hours. Why include that time, right? Like if you know, you can't improve it, you've done everything to make that better, then filter that out. That way, when you hit the parts of your incident response that you didn't calculate for, or did actually take longer than you expected, and there's opportunities for improvement, those will spike on your metrics charts instead of,
 

yeah, well, we know that takes that long. Great. Yeah, like [00:14:00] that, that incident will take that long. We can't do anything about it. That way, when you look at it, your metric immediately tells you, hey, by the way, you had an incident in December and remediation was the big spike in it. You, you had one incident where, uh, the, it was a ransomware incident and it took nine hours, and the playbook,
 

we estimate that it should only take like four hours. What happened there? Like, what, is there, is there room for improvement for that one? And that gives you like a specific incident type, specific playbook, uh, that you can actually go to. And an engineer is like, okay, cool. Like I can go look at that problem, like that, that I can answer too.
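
A minimal sketch of that filtering idea, assuming hypothetical playbook baselines: subtract the time a playbook step is already known to take, so only the unexplained, improvable time spikes on the chart. The nine-hour ransomware incident with a four-hour playbook estimate from the example surfaces as five improvable hours:

```python
# Hypothetical baseline hours for playbook steps that are already
# optimized as far as current tooling allows.
BASELINE_HOURS = {"ransomware": 4.0, "phishing": 1.0}

def improvable_hours(incident_type: str, total_hours: float) -> float:
    """Time beyond the known playbook baseline: the part worth investigating."""
    baseline = BASELINE_HOURS.get(incident_type, 0.0)
    return max(total_hours - baseline, 0.0)

# The December ransomware incident from the example: 9 hours total,
# 4 hours expected from the playbook, so 5 hours spike out for review.
print(improvable_hours("ransomware", 9.0))  # 5.0
```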
 

Sean Martin: Let me ask you this, because what you described entering the role, we need updated metrics. Let me see what we've done previously in terms of reporting. Let's see what data we have in terms of, of what we're doing now. [00:15:00] And that generates something by Monday morning. So I go back to my example of my change in roles where I said, well, maybe we define the product
 

differently up front so that we know what we want, not just see what we get. So I'm wondering, have you uncovered anything where you can say, this is what we want to achieve? So maybe it's connected to business continuity or disaster recovery, or so in there is the definition of what we need, and within, within some boundaries of those requirements, we live, right?
 

And if we pop out of that, then we're in trouble. And, or maybe we want to shorten, we want to shorten some of that stuff because we're entering a new market or you have a big competitor. I don't know. So your thoughts on kind of defining what you need and working toward [00:16:00] that versus here's how we've always done it. 
 

How can we squeeze a minute or two off of something that may or may not matter?
 

Allyn Stott: Yeah, this is the first terrible mistake I've made when making metrics, which is really, it's losing sight of the goal, right? Like we're, these are generally teams that are on call, they're operational, they spend their days triaging alerts and responding to fires.
 

So it's really easy for us to lose sight of the goal. And so we do, we end up describing frontline operational work with the metrics we've been given and passed on. And we don't take the time to think about, hey, what are we actually looking to measure? Um, the example I have around like my, my first big mistake here, that I think, uh, speaks to like the root and the start of my, my thought process around thinking about what are we actually trying to measure here,
 

[00:17:00] what are the things that we should be measuring, is to start with what is the thing that we're measuring today that makes no sense. And one of those today that I'd see in almost every, almost every program I've ever been a part of: we talk about the number of security alerts per month. And there's, you know, it's this, this up and down, and there's questions of like, well, why are there more alerts this month than that month?
 

And, you know, uh, you ask the real question and the real answer is, oh, there's a lot of false positives from the IPS, so we just turned that rule off. Uh, but that's not a great answer to leadership. So you come up with this like great narrative. And so anytime you're, you're thinking about, I need a, I need a, a narrative spin to a metric, like the data doesn't speak for itself,
 

um, then, you know, you're running into problems. Uh, I'm, I'm heavily inspired from like, uh, uh, especially from like visual, uh, visual examples of [00:18:00] metrics. Um, Edward Tufte is, uh, I've got a stack of his books in my library. Um, and in both his books, Envisioning Information and The Visual Display of Quantitative Information, uh, in both of those books, he talks about how, like, if you need a paragraph underneath your metric, you've, you're already in, in trouble, like, uh, the visual should speak for itself.
 

Um, and so, uh, I took a step back and I said, okay, what am I actually trying to say with this metric that has, you know, the number of alerts? Um, and then maybe I'm trying to make it better. Maybe I'm, you know, visualizing it by true and false positives. Okay, great. Like what am I actually saying here? Like I have a lot of false positives.
 

I think that's like the surprise. Like that's the, that's the story, is like every team has too many false positives and we're, we're trying to tune it down. [00:19:00] It's, uh, it's thinking about what are the different areas within detection and response that produce value to the business. And last year I gave a talk about building detection and response programs.
 

And I talk about the different areas that a modern detection and response program has, whether that's threat intelligence, it's, you know, um, threat hunting, it's your classic operational processes of triage, analysis, and response. But all those things all have to flow back into the rest of the business.
 

And maybe even more specifically, at least the InfoSec organization. Because we have this, uh, unique position in detection and response that we're on the front line when things have gone bad, essentially. These are, uh, these are [00:20:00] the things that are happening as a result of where we are today. Here's the things where detection and response maybe isn't the best strategy.
 

Uh, I also talk about, within, within metrics, that there's lots of times that we're so focused on reducing mean time to respond, mean time to contain, when the reality is, is that there's only so much we can do from a detection and response strategy.
 

Um, and that's why I really love the position I'm in today, because I can look and see like, hey, what kind of incidents are we having? What's the, what is causing the most problems, and how can other teams jump in here, because they want to, they're just not always informed about those in a way that, that tells the story. You know, we showed these graphs of, like, all the alerts that are happening, and they look at it and go, okay, great.
 

What action do you want me to take here? [00:21:00] Um, and so thinking about all those different areas, I, I came up with these five different, um, these five different areas within detection and response that I think tell the story and help inform the, uh, the rest of the, at least the rest of the information security organization about how detection and response is doing.
 

Um, I, uh, I'm terrible at making up cool acronyms. The one I have for this one is SAVER, uh, S-A-V-E-R, and those are: streamlined, where we show how we're streamlining our operations through efficiencies, accuracy, and through automation. And a lot of this is focused on better tooling, better processes. It is really about how are we making it so that we're automating more things and spending more time on the tasks that we have to do manually, where [00:22:00] you want somebody to look at it.
 

And then the A is for awareness, where we want to share what we've learned about threat intel. There's been so many threat intel teams that I've, uh, had the privilege of working with. And I talk to them and I'm like, hey, like, what's, what's happening right now? Like if you were to tell me like three things that you're like, we should really be caring about this.
 

What would those be? That in and of itself should be on every, like, core program metric: hey, we're, we're doing all this threat intel. Here's three things that, you know, the rest of the organization should be aware of. And maybe even associate the things that we could actually do today. Um, and then, uh, the V being our vigilance, or how prepared are we for
 

those top threats. Uh, I could go into like a very long rant about how, uh, MITRE ATT&CK has done a lot of really great things for us about like [00:23:00] defining what the different types of things we care about are, but from like a metric standpoint, we've maybe taken it too far, where we try to measure across all of MITRE ATT&CK all in one day, uh, trying to test every single technique.
 

When maybe what we only need to do is focus on those things that come from our threat intel and say, hey, these are the most important things today, right now, from a threat intel perspective, that are either happening in incidents right now, or things that we know are very potentially going to happen.
 

And, uh, part of this comes down to, like, you only have so much time to do metrics. Uh, I only have so much time to do metrics. Uh, I don't want to try to measure across the entire MITRE ATT&CK framework by next week. Uh, that might be a little unrealistic, uh, testing all of those techniques. But I could realistically say, hey, in the next month we want to know how well would we do against these top attacks that we're [00:24:00] seeing from incidents,
 

from our threat intel, and tell the story of where are we in detecting those and responding.
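
A sketch of that scoped vigilance measure, assuming hypothetical ATT&CK technique IDs and a hand-maintained record of which detections have been tested (not any MITRE tooling): score coverage only against the short list your threat intel says matters right now, rather than the whole matrix.

```python
# Top techniques from current threat intel (IDs here are illustrative).
top_techniques = ["T1486", "T1566", "T1078"]

# Hand-maintained record: has each technique's detection been tested?
tested = {"T1486": True, "T1566": False}

def vigilance_coverage(top: list[str], tested: dict[str, bool]) -> float:
    """Fraction of today's top techniques with a tested detection."""
    return sum(1 for t in top if tested.get(t, False)) / len(top)

print(f"{vigilance_coverage(top_techniques, tested):.0%}")  # 33%
```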
 

Sean Martin: I think before you move on to E and R, let me ask you this, cause it's been on my mind and you, you touched on the threat intel piece, which, cause I saw a post, I don't know, was it Chris Hoff that made the point of, we, we see a lot of these presentations from the big, the big tech companies, the Netflix and the AWS and the Google and the Microsoft, how cool their programs are.
 

That works for them, maybe not even each other, certainly not 90, 95 percent of most organizations that don't have the money and the staffing and the, and the ability to kind of have a threat intel team and, uh, and, uh, all these different organizations within the InfoSec [00:25:00] program. So how, how does maturity
 

change because one metric might work for a small team, but then as they mature, make no sense because they have better data and knowledge and, and a team and processes and stack to support getting different measurements. So I don't know, I'm sorry for inserting between the, the, uh, the V and the E here, but, um, uh, how does that picture look from your perspective?
 

Allyn Stott: Yeah, it's, uh, it's something I get asked a lot, actually, from even the last talk, which is, um, you know, because in my last talk, I talk about how important threat intel is to a detection and response program, right? Like it, it really should help us
 

And the question that always comes to me is like, cool, we're like [00:26:00] two people. So what do we do? Or maybe you're one person and that's like the entire IT, you know, like maybe you're a small business and you have maybe, you know, a quarter of a contractor, that's your InfoSec team. I think that's the reality for a lot of businesses. 
 

And in those, in those cases, it's almost more important that you have at least a general idea of what things you care about the most from a threat perspective, because again, your, your threat intel is there to help raise awareness, but it's also there to set your scope. And so when you're looking at, you know, what, what technology investments do I need to make,
 

uh, what, what things should I care about the most? That's where threat intel comes into play. And [00:27:00] I always say, don't overthink it. Uh, you could do it in a day, especially with how much of this data is, I think, publicly available. I don't think you need, you don't have to have, you know, um, all of the, the, uh, threat intel subscriptions. Some of the best threat intel, even to this day, that I get is even just in, uh, uh, Slack groups that I'm in, you know, that people talk about, you know, what, what things they're seeing and what's, what's really been painful.
 

Um, so I think the approach that I, that I often thought about is look at external threat intel, and maybe that's as simple as like, I'm in, you know, I'm, I'm in health care, what things are targeting health care right now, what's, what's really painful, and then look at your best intel, which is your internal
 

intel. It's what things are happening [00:28:00] today. What type of security incidents have you had in the past? Are there trends to those? And then think about your organization's security risks. So, uh, this could be as simple as asking, what would be a really bad day for your company? What data, if, if exfiltrated,
 

would make your chief privacy officer cry the most? I think that's like a really great metric. You can draw it with like the number of tears. Like this would be the most upsetting
 

Sean Martin: measure of the tears.  
 

Allyn Stott: That's right. Um, so yeah, using that to like help scope it to something that's much saner, uh, and not trying to worry about all the things. 
 

It's very easy, when you put on your InfoSec hat, even if you're only wearing it part time, to be like, oh my goodness, there are so many things. And yes, there's so many things, but you're thinking about it from a risk, from a business perspective of, okay, I can't do all the [00:29:00] things. What are the most likely things to happen?
 

Let's just focus on those. And then at least there's a good story around like why you chose the focus, given the limited constraints you have.  
 

Sean Martin: Nice one. All right, E and R.
 

Allyn Stott: All right. E and R kind of lead. E is, uh, exploration, and exploration's about raising up the things that you learn from your threat hunts.
 

Uh, what did you learn about, uh, as you learn about new threats, as you learn about your, your internal incident trends? Um, how is that guiding your threat hunts? And I talk a lot about threat hunting in my last talk. I talk about threat hunting in my new talk. I've been working on a maturity model for detection and response to help teams better measure, um, their maturity. You know, we talk about [00:30:00] like, how well are we doing?
 

What are we prepared for? That's something I've been thinking a lot about. And I started with the threat hunting maturity model, um, that, uh, I guess came out in 2015 from David Bianco, also known as the person that built the Pyramid of Pain for detection and response. So those are, uh, those are always, um, highlights and useful tools.
 

Um, but, uh, threat hunting is, is another one of those, like, we don't have time, we don't have people to do that, uh, just like threat intel. And threat hunting can truly be as simple as walking over to your IT person and going, what's really broken around here? You know, uh, that's, that, uh, if you, if you want to know what, exactly, uh, what, uh, which, which user would you say working here is probably the riskiest user?
 

Um, those [00:31:00] are, those are just like really easy examples of, uh, uh, what a very small team can do from a threat hunting exploration side of things, right? Like, what, what things are we finding that we should know about and actually including that as part of your, your program metrics of, Hey, we're finding things. 
 

Maybe it's not, you know, some cool analysis technique that we came up with. Maybe it really is just walking over to your IT admin and being like, hey, so, uh, what's, what's really bad here? And then the R is, is readiness. Um, how quickly are we able to organize and respond to incidents? How complete are the playbooks that we have?
 

Um, and thinking about it, not just from the perspective of time, uh, but thinking about it from how much coverage do we have, how many things have we already thought about from our awareness and from our exploration? [00:32:00] How many of those things do we have playbooks for today? Maybe we're not mature enough to, to measure things down to how many minutes or how many seconds it'll take. 
 

Uh, maybe we're, we're more along the lines of, we have at least a playbook for these things. That's at least a good indication that we're, we're somewhat ready to do this.
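
A minimal sketch of that readiness measure, with hypothetical threat and playbook names: playbook coverage against the threats already surfaced by awareness and exploration, before worrying about minutes and seconds.

```python
# Threats surfaced by awareness and exploration (hypothetical names).
identified_threats = {"ransomware", "phishing", "insider_exfil", "bec"}

# Threats we currently have a written playbook for.
playbooks = {"ransomware", "phishing"}

coverage = len(identified_threats & playbooks) / len(identified_threats)
missing = sorted(identified_threats - playbooks)

print(f"playbook coverage: {coverage:.0%}")    # 50%
print(f"still need playbooks for: {missing}")  # ['bec', 'insider_exfil']
```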
 

Sean Martin: That's a good metric. Do we, how many playbooks do we have?
 

Allyn Stott: That's right. That's right. Um, the, uh, the, uh, yeah, in thinking about, like, all of these, like, different, different, you know, areas that, that you could measure, uh, the, the most important thing is that, um, a lot of times, because we are just carrying along the metrics that we've already had,
 

we, we, we end up making a lot of metrics, actually. Uh, I don't think there's been anybody that's come to me and it's just [00:33:00] like, we don't have enough metrics. I mean, a lot of times they say we don't have enough good metrics, but usually the, the, the conversation is, I get asked to make lots of metrics.
 

Uh, my boss is asking for a metric for this. My boss is asking for a metric for that. Um, and uh, we, we really need to start pushing back on this need to understand things that maybe don't make the most sense. Um, the, the idea here is that we are telling a story about like what things actually matter, and I think that by using a framework like this, the goal is to talk about what our objective as detection and response is, what value we provide.
 

And so if we can't tie a metric back to one of these areas, then the real question is, is like, what question are we trying to answer [00:34:00] with this metric, right? Like, a lot of times, the metrics we get asked to make, we don't ask, like, what, like, what are you, what are you trying to figure out here? Because a lot of times, if we had just asked, What are we trying to answer with this? 
 

What question do you have that's not being answered today? Because I think that if we can be a lot smarter in the metrics that we build and choose, we can have a lot fewer metrics, and we can be a lot more targeted and focused on the metrics we create today.
 

Sean Martin: I love it. I love it. Well, we're at, we're at, uh, pretty much 35 minutes here. I wanted to ask, because Marco's not here to say no more, no more, one more question from you, Sean. Um, so you went through SAVER and, uh, is that the framework that you're working on? Or is there something beyond that?
 

Allyn Stott: Uh, yeah, that's the framework. 
 

That's a big part of it. There is more beyond it. [00:35:00] Um, with, uh, the talk that I'm giving, uh, you know, these, this is your starting place, but then it's like, okay, cool, but I actually have to build metrics now. And so I have, uh, more details around, like, great, how do we build them? What, what questions do I need to ask?
 

What components make up a good metric? And then, uh, I share the maturity model I've been working on, called the Threat Detection and Response Maturity Model, and talk about how you can measure across three pillars: observability, proactive threat detection, and rapid response, to talk about not just like how your day-to-day is performing from a metric perspective, but how your overall program is maturing or not maturing based on the projects and work and technologies you're, you're, you're choosing.
 

So that's a big, big part of the, the talk I'll be giving at BSides Las Vegas.
 

Sean Martin: Nice one. Nice one. Well, I'll, I'll, [00:36:00] uh, see if I can't push this up the production schedule. So, uh, people hear it before BSides and, uh, all the, all the fun, fun times at Hacker Summer Camp. There'll be The Diana Initiative as well.
 

So, uh,  
 

Allyn Stott: Yeah, if they, if you miss me at BSides Las Vegas, um, I am speaking at Blue Team Con in Chicago the weekend of September 7th, so I'm excited. I haven't been to that one before, um, but it looks like it's going to be a lot of fun. Nice.
 

Sean Martin: The Windy City will blow some metrics around there.  
 

Allyn Stott: All right.  
 

Sean Martin: Nice one. 
 

Well, Allyn, um, really cool stuff. Uh, hopefully you'll join me again. I mean, any of those topics, maturity, the deeper dive in the framework, uh, yeah, how to build a set, a collection of metrics. I mean, any, maybe we did a nice overview today, but if you want to come back and talk more about any of those other points,
 

[00:37:00] uh, you're always very welcome. Thanks. Sounds like fun. I know, I know. Deeper dives into metrics. Yeah, I uh, I have the fortune sometimes to teach a security analytics class, which is all about taking data, turning it into stories, right? That's right. Primarily, we look at it from a, uh, driving business decisions perspective.
 

These are all MBA students, but, uh, metrics should drive decisions as well, not just say how well you're doing or not. So, uh, interesting, uh, intersection there. But anyway, Allyn, fantastic to see you again, my friend, safe journey to, uh, to Vegas. Good luck with the presentation. And, uh, and, uh, hopefully, uh, you get to see a lot of friends and make some new ones there.
 

And for everybody listening, thanks for listening and watching. And, uh, hopefully this, uh, shed some light on maybe how you're creating or not creating [00:38:00] or overcreating and not measuring and not utilizing metrics properly, or at least appropriately for your organization. So, uh, yeah, feel free to contact me, contact Allyn, and, uh, hopefully we'll have more conversations on this topic.
 

So stay tuned, subscribe, share with your friends and enemies, and, uh, we'll see you on the next one. Thanks again, Allyn.
 

Allyn Stott: Thank you.