ITSPmagazine Podcast Network

Building Resilient Software: Secure by Design, Transparency, and Governance Remain Key Elements | A Conversation with Chris Hughes | Redefining CyberSecurity with Sean Martin

Episode Summary

In this episode of The Redefining CyberSecurity Podcast, Sean Martin engages with cybersecurity consultant and author Chris Hughes to explore the complexities of software supply chain security and vulnerability management. Together, they discuss practical strategies for achieving transparency in software components and the importance of adopting Secure by Design principles to build a more resilient digital ecosystem.

Episode Notes

Guest: Chris Hughes, President / Co-Founder, Aquia

On LinkedIn | https://www.linkedin.com/in/resilientcyber/

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

View This Show's Sponsors

___________________________


In this episode of The Redefining CyberSecurity Podcast, host Sean Martin connects with Chris Hughes, a seasoned author and consultant in cybersecurity. The primary focus is on the intricacies of vulnerability management and software supply chain security, particularly in an era where software pervades every aspect of modern life.

Chris Hughes emphasizes the paramount importance of understanding what is in the software we consume. Software Bill of Materials (SBOM) has emerged as a focal point, akin to ingredient lists in the food industry, highlighting the need for transparency. Hughes argues that transparency is not just about knowing the components; it extends to understanding the risks associated with those components. He illustrates his point by referencing infamous incidents like the Log4j vulnerability, which unveiled the critical gaps in our knowledge of software components.

The conversation also shifts towards the broader challenges in software supply chain security. Hughes discusses the government's push for self-attestation and the role of third-party validators in ensuring software security. While acknowledging the complexities and potential bottlenecks, he underscores the necessity for a balanced approach that combines self-attestation with external validation to foster a secure software ecosystem.

Additionally, Hughes addresses the concept of Secure by Design, advocating for practices that embed security into the software development lifecycle right from the outset. He notes the historical context of this concept, which dates back to the Ware Report, and argues for its relevance even today. Secure by Design entails building security measures inherently into products, thereby reducing the need for perpetual patching and vulnerability management.

Internal risk management within organizations also gets spotlighted. Hughes insists that organizations should maintain an inventory of the software and components they use internally, evaluate their risks, and contribute to the open-source communities they rely on. This comprehensive approach not only helps in mitigating risks but also fosters a resilient and sustainable software ecosystem.

On the topic of platform engineering, Hughes shares his insights on its potential to streamline software development processes and enhance security through standardization and governance. However, he is candid about the challenges, particularly the need to balance standardization with the diverse preferences of development teams.

As the discussion wraps up, Hughes and Martin underline the importance of focusing on contextual risk assessment in vulnerability management, rather than merely responding to static severity scores. Hughes' advocacy for a more nuanced approach to security, balancing immediate risk mitigation with longer-term strategic planning, offers listeners a thoughtful perspective on managing cybersecurity challenges.

Top Questions Addressed

  1. How can organizations ensure transparency and security in their software supply chains?
  2. What strategies can be implemented to address the challenges of vulnerability management?
  3. How can platform engineering and internal governance improve software security within organizations?

___________________________

Sponsors

Imperva: https://itspm.ag/imperva277117988

LevelBlue: https://itspm.ag/attcybersecurity-3jdk3

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

CNCF Platforms Whitepaper: https://tag-app-delivery.cncf.io/whitepapers/platforms/

CNCF Platform Maturity Model: https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/

Secure-by-Design at Google: https://research.google/pubs/secure-by-design-at-google/

Software Transparency: Supply Chain Security in an Era of a Software-Driven Society (Book): https://a.co/d/0bNaPmF

Effective Vulnerability Management: Managing Risk in the Vulnerable Digital Ecosystem: https://a.co/d/6xs5saH

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: 

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring this show with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Episode Transcription

Building Resilient Software: Secure by Design, Transparency, and Governance Remain Key Elements | A Conversation with Chris Hughes | Redefining CyberSecurity with Sean Martin

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And hello everybody, you're very welcome to a new episode of Redefining Cybersecurity here on ITSP Magazine. This is Sean Martin, your host, where I get to, as you've heard me say, talk to loads of cool people about cool stuff. And software supply chain, AppSec in general is an area that, uh, I'm very keen on. 
 

Something I did in my early days as an engineer and it only gets more important as time goes on. Everything is software driven, either directly through an interface or through one, one of a gazillion APIs at this point. Loads of services, uh, running local and on the cloud and everything else. So I'm thrilled to have Chris Hughes on to, uh, to talk to you today. 
 

To dig into this, he wrote a couple of books on vulnerability management and, uh, software security. The book we're going to talk about today is Software Transparency, Supply Chain Security in an Era of Software Driven Society. And we'll probably expand beyond that as well, but Chris, it's good to have you on, man. 
 

Chris Hughes: Yeah, I'm excited to be here. A long time listening to the [00:01:00] show, so I'm excited to chat and hang out.  
 

Sean Martin: I appreciate you listening and, uh, I'm excited to have you on. It took, took us a bit to, uh, get this coordinated. Appreciate your flexibility with all the travels going on. Um, here we go. Well, let's, uh, let's kick it off. 
 

I know you do a lot for the community, obviously writing, writing two books gives back as well. Um, can you give us an overview of some of the stuff you're working on at the moment, Chris?  
 

Chris Hughes: Yeah, for sure. So, you know, as you talked about, I have a couple of books, one around software supply chain security, one around vulnerability management. 
 

Obviously both of those touch pretty extensively on application security. Um, but outside of that, you know, I own a consulting company called Aquia, where I do cybersecurity consulting in the public sector, so like federal agencies, Department of Defense, things like that. I'm also a cyber innovation fellow at CISA, focused on AppSec, vuln management, supply chain, etc. 
 

I host a Substack called Resilient Cyber, where I put out a newsletter and, you know, articles diving into various topics, all around application security and software supply chain and all the, all the good things in [00:02:00] between there, and just pretty active in terms of speaking and engaging with folks on LinkedIn in particular and excited to be part of the community and really enjoy learning from everyone. 
 

Sean Martin: I know that the community appreciates what you do as well, myself included. I wanted to kick things into gear with a point that you made, because I know there's been a lot of, certainly at the government level and, and the operational level within organizations looking at. Software building materials, S bombs, and you made the comment that the problem or the opportunity to do better, let's say, is much broader than just what are we building with, so maybe can you elaborate on that a bit, pick things off? 
 

Chris Hughes: Yeah, I think, uh, when, when software supply chain, you know, really became a hot topic after the cyber executive order, for example, in, you software bill of materials, you know, kind of took center stage originally around the conversation around software supply chain security. And for good reason, [00:03:00] um, you know, we've been long consuming, you know, these things consuming software with no real understanding of what's in it. 
 

You know, we, we don't do that in other aspects of society. If we buy something or we consume a product or a food or, you know, those kinds of things, we want to know what we're consuming. Uh, but when it comes to software, we've just kind of used it and, and didn't really dig too deep into what's actually in this thing that I'm, that I'm Purchasing procuring, you know, using in my enterprise, what risks may be associated with it, that kind of thing. 
 

And there was a, you know, as I titled the book, there was a glaring lack of transparency and there still is, many would argue when it comes to an information asymmetry between, you know, software suppliers and software consumers, we often don't know what's in the software that we use. Um, and I think things like log4j and things like that kind of shine a light onto the fact that no one knew like, well, wait, is it, is it in this product that I have or these products that I have, or where in my enterprise is it running, what systems are impacted, things like that. 
 

Um, so it kind of dominated the conversation originally and, you know, made a lot of great traction with groups like the Linux Foundation with SPDX or OWASP with Cyclone [00:04:00] DX in terms of the SBOM format. And we've made a lot of headway in terms of people knowing what an SBOM is or why you should have one. 
 

Yeah. And it's kind of funny that we've had the critical security controls like CIS and software asset inventory has been, you know, one of the top few on that list for decades. Uh, here we are saying, Hey, maybe we need to know what's in the software. Uh, even though it's been in best practice for decades, you know, we're now starting to ask that question. 
 

Uh, but I think it also kind of dominated the conversation too much to where we stopped looking at broader software supply chain things in terms of vendors that we work with, you know, vulnerability disclosure programs, you know, all the things that come in, in, in line with that, uh, thinking about things like SaaS governance in terms of cloud consumption, you know, and it just hyper focused on SBOM. 
 

Um, and I think slowly we're starting to see that the conversation around software supply chain security is much broader than that. Uh, transparency is absolutely a fundamental aspect of it, but it's not the, it's not the silver bullet that we we'd hope for, you know, in security, there is, there is no silver bullet, unfortunately, it's a complex ecosystem with a lot of moving parts. 
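
Hughes points to SPDX and CycloneDX as the main SBOM formats. For readers who want to see what consuming one looks like, here is a minimal sketch in Python that reads a CycloneDX-style JSON SBOM and flags a component such as log4j-core, the library behind the Log4j incident he references. The file name sbom.json and the watchlist contents are hypothetical, and the version check is intentionally rough; a real workflow would lean on a proper SBOM or vulnerability-matching tool.

```python
import json

# Components to flag, keyed by name with a minimum safe version.
# Entries are illustrative only (2.17.1 is the widely cited fixed
# version for the Log4Shell-era CVEs in log4j-core).
WATCHLIST = {"log4j-core": "2.17.1"}

def load_components(path):
    """Read a CycloneDX-style JSON SBOM and return its component list."""
    with open(path, encoding="utf-8") as f:
        bom = json.load(f)
    # CycloneDX JSON lists third-party libraries under the "components" key.
    return bom.get("components", [])

def version_tuple(version):
    """Very rough version parser: '2.14.1' -> (2, 14, 1)."""
    return tuple(int(p) for p in version.split(".") if p.isdigit())

def flag_components(components):
    """Yield (name, version, minimum_safe) for anything on the watchlist."""
    for comp in components:
        name, version = comp.get("name", ""), comp.get("version", "")
        minimum = WATCHLIST.get(name)
        if minimum and version_tuple(version) < version_tuple(minimum):
            yield name, version, minimum

if __name__ == "__main__":
    comps = load_components("sbom.json")  # hypothetical SBOM exported by a build tool
    print(f"{len(comps)} components declared in the SBOM")
    for name, version, minimum in flag_components(comps):
        print(f"FLAG: {name} {version} is below minimum safe version {minimum}")
```

The point of the exercise is the one Hughes makes: once the component list exists in a machine-readable form, "is this in my products?" becomes a query rather than a scramble.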
 

Sean Martin: Loads of moving parts. And I guess we, we [00:05:00] had to start somewhere, right? So at least we look back over the course of, uh, security history. It always starts with an inventory. You can't protect what you don't know. Right. So I think S bomb as a, as a starting point is interesting and probably a good, good place to start. 
 

Uh, but to your point, there's, there are the vendors building stuff, and I'll go out on a limb and say they don't really know what's, what's all in their, their stack. Right. Um, they're using libraries, which may call other routines and things like that. And then, so that, that lack of visibility, lack of transparency, to your point, makes it such that they can't share with the users and the consumers of the, of those services and products what's in there. 
 

So obviously we need to start with the, with the vendors, but what are your thoughts on, on that? 
 

Chris Hughes: Yeah, it's actually, you know, there's a lot of interesting aspects to it. You [00:06:00] know, one is the S bomb originally, you know, some people push back saying it was a roadmap for the attacker, but the attacker seemed to be doing just fine already. 
 

They seem to be, you know, pretty successful with their exploitation activity and so on. It was the consumers who had a lack of transparency and really didn't understand the risk that they had in their enterprise. Um, and I think the reality is that a lot of, you know, suppliers, one, didn't know what was in their products, or they did, and they weren't really keen on showing that to everybody, because as we've seen now, some of the activities, you know, not to call out certain vendors, I won't name anyone, but as you point out, they've had, uh, components and libraries in their product 
 

that are, you know, way behind in terms of version, you know, no longer supported, just, you know, antiquated in terms of number of vulnerabilities that have already been fixed that they haven't addressed, things like that. Um, and then there's been a broader push now, when you talk about the dynamic between suppliers and consumers or vendors and consumers, which is secure by design. 
 

You hear CISA, you know, pushing for secure by design, right? Ironically enough, it's not a new trend, despite it being, you know, kind of taking center stage recently in things like the National Cyber Strategy and, you [00:07:00] know, CISA evangelizing for it and so on. This is a concept that dates back really to, uh, you know, the Ware Report. 
 

The concept of building security in versus bolting it on is 50-plus years old. Uh, but the problem is, you know, many consider cybersecurity to be a market, a market failure. Uh, there's no real force, you know, forcing function that's forcing suppliers or, you know, people producing products to do these things. 
 

If they can externalize that risk on the customers and have, you know, a negligible impact. And if, if something happens, you know, they don't really pay the consequence of that either in terms of share price or, you know, uh, market, uh, you know, domination or, you know, revenue and things like that, or, or regulatory and legal consequences and so on, um, you know, they're not really gonna take the priority of putting security as a first class citizen, uh, uh, on par with other things like speed to market and revenue and, you know, feature velocity and things like that. 
 

Um, so I think that that plays a part. And now, you know, funny enough, in the latest, uh, wake of the CrowdStrike incident, now we're hearing conversations again around software liability, uh, so I know it's a pretty broad answer and there's a lot at play there, [00:08:00] but you know, it's, like I said, it's a complicated ecosystem and there's, uh, these are challenging problems with no simple solution and they're longstanding challenges that we've had for quite a while. 
 

Sean Martin: Yeah. Yeah. It's in, I was thinking about the, uh, the software liability thing. I don't know how many good 10 years now. Probably eight or nine years. Anyway, um, I think I did a written piece and then perhaps had a chat with the Jeremiah Grossman as well. Who's been. I've been thumping this drum for a long, long time that why, why can't we have everything else has a liability where the, the people producing the goods have a stake in the game for what happens when those goods are, are used. 
 

And so I don't know, can we, can we rely on the individual vendors or do we need, and, or do we need, I guess in the book you, you touch on attestations, and if you're going to attest to something, then you probably need somebody to validate it. So I don't [00:09:00] know if we have assessors and, and is there a third party that jumps in? 
 

Is it the government? Is it commercial space? Uh, any thoughts on all that? Of course, somebody wants to make money. So somebody's gonna come.  
 

Chris Hughes: Yeah, yeah. You're, uh, you're opening a big, a big can of worms here. You know, first is, uh, I'd give another plug to a researcher and lecturer named Jim Dempsey, who does a lot of great work with an organization called cyber law, uh, lawfare, I think it's called, or lawfare group. 
 

I want to say, but,  
 

Sean Martin: uh, yeah, he's been on the show. We had, we've had, yeah.  
 

Chris Hughes: He's done some amazing research, same with another researcher named Chimayi Sharma, who's done a lot of research around software liability. And the challenge here is like, what, what does secure look like? What is good enough? And when do we determine that someone's not done enough due diligence around securing something, that now they can have, conversely, safe harbor, right, from ramifications or legal litigation and things like that. 
 

And then who's going to validate it? Right now in the ecosystem, it's largely the federal government that's pushing for this self attestation approach. CISA has put out a [00:10:00] self attestation form and requirement that's been driven by, you know, an organization called the Office of Management and Budget. So all software suppliers selling to the federal government are going to need to start self attesting to their products based on, uh, uh, guidance called the NIST Secure Software Development Framework, or SSDF. 
 

And, uh, you know, so they're going to start self attesting that they're doing these fundamental secure software development, uh, practices and methodologies and things like that, which is great. Uh, but I also come from a space, uh, called the defense industrial base. And for a long time, organizations in that space have self attested that they were doing a lot of things, uh, that NIST said they should be doing. 
 

And they, many of them fell victim to various, you know, uh, exploitation activities and nation states and so on, that kind of started raising questions around, are they really doing all the things that they said they were doing? They gave us their word, right? They crossed their fingers and gave us a scout's honor. 
 

But when revenue and contracts and things like that are on the line, maybe people aren't forthcoming. Maybe they didn't do enough due diligence to determine what they were, they weren't doing. But I also, in a previous life, have worked for an organization called FedRAMP, if anyone's familiar [00:11:00] with that, that evaluates cloud services for the federal government. 
 

And in that space, you have a third party that comes and validates these things. Um, and in a market of tens of thousands, you know, of SaaS vendors, for example, FedRAMP has been around for over a decade. They only have about 350 authorized services. And it's because it's such a bottleneck for that third party to come and do this due diligence process. 
 

It's costly, it's cumbersome, it's time consuming. So we kind of got to pick our poison. Do we want to introduce kind of this, this third party mechanism where people are going to go out and validate this, and what are their qualifications to go and validate what secure looks like? And what is it based on? Um, how long does that take? 
 

How often does it need to be done? You know, and on and on. Uh, so it's a complicated topic, but I think that, you know, I agree with Jeremiah in the sense that, you know, it's insane that people can sell products into the market and it's just, it's like, you know, you use it as is, uh, it's, it's, any liability is on you, it's up to you. 
 

There's no consequences for the quality of the product or any ramifications of the quality of the product. And given that software now powers everything from consumer goods to critical infrastructure [00:12:00] to national security weapons systems, you name it. Um, and people just let, you know, there's no, there's no real liability there. 
 

Um, you know, and it's kind of weird because we talked about open source off air a little bit. Open source is provided as is like you, you use it, you own the risk of it and people put it in their products and so on. But in that case, there's no, there's no, uh, contractual relationship, right? Between the people making it and the people using it. 
 

In this case, I'm buying a product. We have a contract, we have a legal relationship, but I have no recourse if you sell me a product that has poor quality and compromises my organization, impacts my operations, you know, and so on. 
 

Sean Martin: So I don't know if I want to head down the, to his open source good or not. 
 

Cause I'm just thinking, I'm thinking of the food industry. Cause you, you mentioned it earlier. We, we won't eat something unless we know it's been, I guess, approved by the FDA at some point. Right. Um, the, I don't know, is there, is there [00:13:00] something to learn from that industry? I don't, I don't know. Obviously software is much, I don't know. 
 

Is it much more complex in terms of, ingredients, what's, what goes into making something? But there's something we can learn from that industry because they, they clearly moved from whatever, from wherever, we don't, we don't have to share that, to labels going on the box. Right. I guess it doesn't, doesn't say where the cinnamon is coming from, but, um, it does say that cinnamon's in it. 
 

I don't know. It's just that industry made that move. I'm just wondering if there's anything there that we can glean from. 
 

Chris Hughes: Yeah, I mean, there's, it's definitely often a parallel example or use case cited in terms of the pharmaceutical industry or the, you know, the, the food industry, agricultural industry, things like that. 
 

Um, but you know, on one hand, I think transparency is good. We should at least be able to know what's in the products that we're buying and consuming. But at the same time, like, I'm very heavy into fitness and I can pick up a box in a store and I [00:14:00] couldn't tell you what half of these ingredients actually mean. 
 

I can see what they are, but I don't necessarily know what they are, whether they're good for me or not, or what their implications are. The same is likely true for software. When I look at consumers, you know, if you gave them a full verbose list of all the transitive dependencies and everything that's in it, 
 

are they really in a position to determine what's good or not? You know, that's debatable for sure, but should they at least have the opportunity to know what they're purchasing or consuming? Absolutely. And then to your, you know, since you made the comment about open source being good or not, um, I, I, I think, you know, it, uh, it's not binary, it's not, you know, black or white or good or bad, it depends. 
 

Right. You know, you have some open source software projects that have massive amounts of maintainers and contributors and it's a thriving ecosystem and they're very quick to fix things. And then you have an overwhelming majority. I mentioned Chinmayi Sharma, you know, she had some research that she showed, I think it was 94 percent of open source has like a single maintainer. 
 

Uh, and then 25 percent has, uh, you know, 10 or less maintainers. So it's, it's vastly, you know, under maintained and, uh, you know, then it becomes [00:15:00] a, what's safe or what's secure. Is it number of vulnerabilities or how quickly they fix things or number of maintainers, where the maintainers come from and on and on. 
 

So it, you know, it really depends how you assess the risk of it. And every organization, you know, we, we throw around the term like risk tolerance. Every organization's got a different risk tolerance in terms of what known good open source looks like. I will make a plug for, uh, there's an OWASP open source software top 10, things like, you know, known vulnerabilities, you know, number of maintainers, the pace of, you know, remediating changes, or bloated dependencies, and things like that. 
 

It's definitely a great resource to check out and head down that path. And I've written about that in articles and in a book and things like that. Um, but again, there's no, it's not, you know, it's not binary. It really depends on how you evaluate risk and what your risk tolerance is.  
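
As a rough illustration of the open source risk evaluation Hughes describes, the sketch below scores a dependency on signals like maintainer count, open known vulnerabilities, release staleness, and dependency bloat. The weights, thresholds, and sample projects are invented for illustration; this is not the OWASP Open Source Software Top 10 itself, just one way an organization might encode its own risk tolerance.

```python
from dataclasses import dataclass

@dataclass
class OssProject:
    name: str
    maintainers: int          # active maintainers
    known_vulns: int          # open, unfixed known vulnerabilities
    days_since_release: int   # staleness signal
    direct_dependencies: int  # rough proxy for dependency bloat

def risk_score(p: OssProject) -> int:
    """Toy additive score: higher means riskier. Thresholds are illustrative."""
    score = 0
    if p.maintainers <= 1:
        score += 3            # single-maintainer projects carry sustainability risk
    elif p.maintainers < 10:
        score += 1
    score += 2 * p.known_vulns
    if p.days_since_release > 365:
        score += 2            # no release in over a year
    if p.direct_dependencies > 50:
        score += 1            # a bloated dependency tree widens the attack surface
    return score

if __name__ == "__main__":
    candidates = [  # hypothetical inventory entries
        OssProject("left-padder", maintainers=1, known_vulns=2,
                   days_since_release=900, direct_dependencies=3),
        OssProject("big-web-framework", maintainers=40, known_vulns=0,
                   days_since_release=20, direct_dependencies=80),
    ]
    for proj in sorted(candidates, key=risk_score, reverse=True):
        print(f"{proj.name}: risk score {risk_score(proj)}")
```

The specific numbers matter less than the design choice Hughes raises: write the criteria down so "acceptable risk" is a policy the organization can apply consistently rather than a gut call per dependency.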
 

Sean Martin: Uh, I, so a lot of this to me is an organization or I don't know, some individuals are buying, building, building stuff and so they must be, or buying, buying, we all, I guess, buy [00:16:00] software. 
 

I guess my point is, this is the, consumers taking and using software that's built by somebody else. I want to, for a moment, kind of switch it, because you, you speak to the internal risk as well in, uh, in your book. And I'm wondering if there's something we can take or things we need to think about, of course, when we're building internally, apps for our employees, apps for our partners, apps for our customers. 
 

So we're building for the business, that the consumers are using, whoever they are. Um, that introduces internal risk to the organization. So the whole liability question, the risk and our appetite for it as an organization, uh, looks different than if we're buying something off the shelf. So some of your views on that, because I think we have clearly more control when it's our own stuff, building it. 
 

Um, so what do you think? [00:17:00]  
 

Chris Hughes: Yeah, I think it's a, you know, there's very similar parallels. Obviously it's not a external entity that we're providing a software to in terms of like a supplier and a consumer. But often organizations are using software to power, you know, the value that they provide to customers, consumers and stakeholders. 
 

If I'm building an internal platform of some sort, I'm likely doing it to facilitate some type of business operation or activity. Um, you know, there's been a big uptick in things like platform engineering and DevSecOps and things like that, internal around application development, and I'd argue the same practices need to apply there. 
 

You know, how are we sourcing third party open source dependencies? How are we evaluating, you know, whether they're fit for use or secure or trustworthy? You know, what kind of metrics do we use to associate, uh, you know, acceptable levels of risk in those cases? Uh, and all those things are still valid questions, you know, because, say I'm an organization using software in some shape or fashion, if it impacts me and my organization in some shape or fashion, 
 

there's a strong chance that it may impact my customers, my business partners, my stakeholders, my brand. Uh, you [00:18:00] know, all those things are still at play in that scenario. Um, I think organizations are starting to, you know, take heed of that. Even the federal government, for example, they just published, um, uh, a report on their upcoming fiscal year, uh, cybersecurity priorities. 
 

And in there, they talk about things like an open source program office, or OSPO, as they call it, to start to govern some of those third party dependencies and start to look at, you know, you know, how do they evaluate those third party dependencies and open source components and libraries, you know, in terms of their risk? 
 

And then also, this is a whole other topic, I'm sure you've had people on to talk about it, is actually giving back to that community, because it's often volunteers that are unpaid doing this, you know. How can we give back financially, or can we give back in resources and labor hours and actually help maintain these projects, 
 

which we depend on ourselves? Can we give back to the community and help them sustain it, so we have a resilient, you know, uh, survivable open source ecosystem? So I think that organizations need to be looking at that. Like, what do we use? How can we give back to it financially or in terms of labor and expertise and contributions? How can we evaluate the risks of the things we use? 
 

And then going back to our first topic that we touched on, [00:19:00] do we even know what we use? How do we even, how do we start to track it? How do we start to have an inventory of it? And how do we maintain that inventory in an ongoing fashion, given that software is dynamic and ephemeral and changes so quickly as well. 
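
On the "do we even know what we use" point, here is a small Python sketch that inventories the packages installed in the current runtime environment using the standard library's importlib.metadata and writes them to a CSV that can be diffed over time. It only covers one language ecosystem on one machine, so treat it as a starting point for an inventory habit rather than a complete software asset inventory.

```python
import csv
from importlib.metadata import distributions

def collect_inventory():
    """Return sorted (name, version) pairs for packages installed in this environment."""
    inventory = set()
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        inventory.add((name, dist.version))
    return sorted(inventory)

def write_inventory(path="python-inventory.csv"):
    """Write the inventory to CSV so it can be tracked and diffed in version control."""
    rows = collect_inventory()
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "version"])
        writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    count = write_inventory()
    print(f"Recorded {count} installed Python packages")
```

Re-running this on a schedule and committing the output is one cheap way to keep the inventory "ongoing," which is the part Hughes stresses given how quickly software changes.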
 

Sean Martin: So I want to. I want to get your perspective on this. It's a topic I want to dig into and, and since the beginning of the year, what are we in August now? You can see how well I've done pulling this together, but the, the, the idea of Platform engineering, where organizations basically, yeah, they build a platform with all and all the stuff plugs into it. 
 

Right. In simplest terms, I've been, I'm previously an engineer, I'm a program manager, that's the way my brain thinks, I like to look at things in a modular, modular way. So to me, platform engineering, the way to go, right? So you can, you know what you're using, where, which services are shared and how they're being used. [00:20:00] 
 

Hopefully you can get a good, a good picture and good management overall across the entire organization sitting on that platform. Um, I had a chat with a gentleman named Oleg when I was at, uh, OWASP AppSec in Lisbon, and he basically said, great in theory, we've tried it and it's very, very difficult in real life, um, just because of the complexities of how fast things move and who's involved, and the organization he works for is one of acquisition. 
 

So they're out buying a bunch of companies all over the world. And so it's hard to maintain and update that stuff. So I still love it as an idea. I don't know, in his case, uh, not a great idea or not a, not a feasible, uh, idea, but I don't know your, your thoughts on platform engineering, and more specifically in the context of 
 

software supply chain and security. 
 

Chris Hughes: Yeah, so it's, uh, it's definitely a hot topic and one that's been gaining more traction in the [00:21:00] industry and not just platform engineering, but security platform engineering teams to, you know, obviously to secure the platform that gets built and things like that. And the concept is, you know, uh, trying to accelerate things for development tier, uh, development teams and engineering teams, so they could focus on their core. 
 

core competency, the application, you know, whatever it is they're actually doing to deliver value to the business, rather than all the underlying, you know, the infrastructure, the compute, the networking, the storage, all the things that they have to do to just get to a point of developing and deploying an application. 
 

Um, and the value there, and I come from a space, the federal and Department of Defense space, where they call it a platform as a service, or they call it, uh, you know, in DoD speak, they call it a software factory, because everything has to be tied back to some kind of industrial age manufacturing paradigm. Um, but basically it's a team 
 

that's going to enable the development team to move faster and, you know, have everything they need at their fingertips. And also on the, uh, the other side of that, have some standardization, some governance, some kind of uniformity across the organization. So everyone's not just using whatever the hell they want, doing whatever they want with no kind of oversight, no governance, no [00:22:00] standardization, which is very hard to, to, to manage at scale then, because it's a very sprawling problem in terms of products and vendors 
 

and dependencies. And that just becomes very untenable to try and manage. Uh, but the reason I've seen it become difficult too, it's like, on, on, on one hand, you know, it's great. It offers all those benefits. But on the other hand, you're talking standardization and uniformity and conformity, and developers, they tend to like what they like and they want to use what they want to use. 
 

So when you try to push an opinionated platform on them, it can be difficult to get buy in for that because they want, they all want to use what they're used to or what they're comfortable with and what they enjoy using or what makes their life easier. Um, so saying no to some things and getting people to rally around a certain set of tools or products or services can be difficult at scale, especially in large, complex environments, including the commercial commercial space where you have different business units who have, you know, budgetary autonomy, purchasing authority, and they can go do their own thing. 
 

They don't have to use this centralized, you know, um, pushed platform that you're advocating for. So you need to make it, uh, you know, everyone started, I think, [00:23:00] originally down the path of, you know, kind of field of dreams, if you build it, they will come, and then they started getting to the point where they tried to mandate its use and, you know, try to force people to use these platforms, and that didn't work well either. 
 

Um, I'll give a plug for the Cloud Native Computing Foundation. They have a great, uh, you know, platform engineering white paper that's worth checking out. It lays out some of the things not to do, uh, you know, lessons learned of, you know, things that didn't go well. And it talks about things like forcing adoption or, you know, being too opinionated and things like that, but I personally think if you're a large organization with many different development teams, you know, moving at different paces, using different services and products and tools, you know, having a platform engineering team, having a centralized platform that can speed things up for them and provide that standardization and governance at the enterprise level is, is crucial. 
 

And I've seen it go well, you know, when carried out, uh, correctly, with the right leadership, and, and you're listening to the users. You know, you're not just kind of telling them what they must do. You're listening to, what do you need? How can we help you? How can we serve you? You know, you're there to serve the development team, not vice versa. 
 

Sean Martin: Yeah. I, it makes me, uh, funny. I'm [00:24:00] a proponent of, of platform engineering, but, uh, years ago I built, uh, I was a product manager for security management platform, basically what Sim and Sora have become. And we struggled with architecting the right thing that was usable, not just across the The organization, but out to, uh, out to the end user world as well, in terms of, is it going to fit this environment versus that one? 
 

And how quickly can you onboard all that stuff? Um, so yeah, I just felt the pain all over again there. Let's say we're coming close to the end here. So if we don't get this right, if we don't get this right, transparency in what we're building, transparency in what we're consuming, transparency in how secure it is, 
 

all that stuff, we're left with the, the always fun, fix it afterwards, vulnerability, vulnerability management. So I know you wrote another book on that, Effective Vulnerability [00:25:00] Management: Managing Risk in the Vulnerable Digital Ecosystem, and, uh, continue to talk about that as well, beyond the book. So, are we going to be able to, I guess the question for me is, are we going to be able to scale vulnerability management as more stuff gets built? 
 

Chris Hughes: Yeah, I don't want to be a pessimist, but it's, it's challenging. It's very, very, and that's why, you know, I was in the book around software supply chain security. I was really fascinated with the topic and I, I had done vulnerability management in different environments, you know, on premise in the cloud and hybrid environments and everything like that. 
 

So I had felt that pain firsthand and I started looking at how we do vulnerability management as an industry. And I was like, wow, it's, it's fundamentally broken. You know, we largely, you know, for all the talk of DevSecOps and breaking down silos and things like that, we largely just dump massive vulnerability lists and spreadsheets onto engineers or development teams to tell them, hey, you got, you got to fix all this before you go to production. 
 

Um, and in terms of keeping pace, um, you know, we are struggling to keep pace. Right now there's some amazing research from Ponemon and others that shows that most organizations, large organizations, have vulnerability backlogs in the hundreds of thousands, even millions of, you know, uh, known vulnerabilities in the organization that are just. 
 

They're just building up, right? We can't keep pace with the speed of remediation and mitigating these vulnerabilities. And right now, I think Qualys actually just published an awesome mid year report that showed that we're about 30 percent higher at this point than we were last year in number of CVEs. 
 

Um, so organizations are struggling to keep up already and the number of vulnerabilities keeps growing. And that's for a variety of factors, you know. More software, it's running everything, you know, just quicker, quicker software development cycles, you know, DevOps, and now AI-generated code is only going to increase the velocity of generated code, for example. 
 

Um, and that's, you know, it's great for business outcomes, but it can be challenging on the vulnerability side of things. And then, you know, it's also driving a parallel conversation of, you know, context matters. And like right now in that report, they showed 0.9 percent, so less than 1 percent of vulnerabilities that were published this year so far are actually weaponized or known to be exploited. 
 

[00:27:00] So literally, you know, less than less than 1%. But when you look at how organizations do vulnerability management, is it a CVSS critical or a high? You got to fix all these criticals and highs. It's like, is it known to be exploited? Is it likely to be exploited? Do we have mitigating controls in place? Is it public publicly facing? 
 

Is it business critical, on and on? And is it reachable in the code base? You know, on and on. And so a lot of times traditional security tools, SCA tools and vulnerability management tools and stuff, don't provide that context, or organizations don't take the time to add that context. And it only increases that tension that exists between developers and security because, you know, they're saying, hey, these are false positives, or, hey, this isn't, this isn't a risk, and you just keep dumping this list on them. 
 

But, you know, to your question of whether we can keep up right now, you know, the data is clear. We're not keeping up, and that's why context is so critical. We need to, we need to focus our finite, you know, resources. We know about the ever present security workforce woes and challenges, you know, for example, the outnumbering of security engineers to development teams and developers, you know, we need to, we need to focus on what matters, what [00:28:00] actually poses a risk to the organization or, you know, our stakeholders, our customers, not just something that has a base severity score of critical or high, for example. 
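
To make the contextual triage Hughes describes concrete, the sketch below ranks findings by exploitation evidence (for example, presence on a known-exploited list), exposure, business criticality, and code reachability rather than by CVSS base score alone. The weights and the sample findings, including the placeholder CVE identifiers, are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float         # vendor/NVD base severity
    known_exploited: bool    # e.g., appears on a known-exploited list such as CISA KEV
    internet_facing: bool
    business_critical: bool
    reachable_in_code: bool  # e.g., confirmed by reachability analysis

def priority(f: Finding) -> float:
    """Context-weighted priority: exploitation evidence outweighs base severity."""
    score = f.cvss_base                  # start from severity, then layer in context
    if f.known_exploited:
        score += 10
    if f.internet_facing:
        score += 4
    if f.business_critical:
        score += 3
    if f.reachable_in_code:
        score += 3
    return score

if __name__ == "__main__":
    findings = [  # hypothetical scanner output
        Finding("CVE-0000-1111", 9.8, known_exploited=False, internet_facing=False,
                business_critical=False, reachable_in_code=False),
        Finding("CVE-0000-2222", 7.5, known_exploited=True, internet_facing=True,
                business_critical=True, reachable_in_code=True),
    ]
    for f in sorted(findings, key=priority, reverse=True):
        print(f"{f.cve_id}: priority {priority(f):.1f} (CVSS base {f.cvss_base})")
```

Note how the "lower severity" finding outranks the critical one once context is applied, which is exactly the shift away from static severity scores that Hughes and Martin argue for.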
 

Sean Martin: So how can. Is that I'm picturing the S bomb or some transparency in what we're building with help us find a path where we can keep up and stay ahead is a question I like to ask when, when I get into a philosophical question about cybersecurity with folks, which is, can we not a security? Define the business such that it's not vulnerable. 
 

The answer is yes. Um, we don't, I don't think we just continue to patch stuff. So if we find a library that's used across a number of products and there's an alternative library, or there's a single mitigating control or some other, some other step we can [00:29:00] take to reduce or eliminate the exposure, can we not do that instead of just patching over and over and over? So I don't know, you get to talk to a lot of folks and see a lot of things. 
 

I don't know if you see any progress in that that regard.  
 

Chris Hughes: Yeah, somewhat. So, um, you know, we talked a little bit about the push for secure by design and CISA is, you know, advocating for that, that approach. They've actually published some great guidance on that and put it out there for vendors and software suppliers. 
 

And in there, they talk about getting to what you're, which, what you're mentioning, which is starting to look at CWEs, common, common weakness enumerations. So instead of having, you know, many vulnerabilities that we just keep addressing on this hamster wheel of fixing things over and over, what is the root cause within the product, within the software, that's causing this type of vulnerability? 
 

And can we eliminate entire classes of vulnerability, so we're not just in this pattern of, you know, having to patch repeatedly over and over? Um, but then again, uh, you know, to the supplier consumer dynamic, that's, that's something the supplier is in a position to do. Me as a consumer, I can only [00:30:00] patch their product if a patch is available. 
 

I need a supplier to actually start to do that, you know, what we just call root cause analysis, and address those common weakness enumerations and, um, and do things like threat modeling, which I know you've had people on to discuss, and, you know, start to kind of, you know, build a secure by design product and software from the onset. 
 

Otherwise, all that activity that we're talking about, that churn, that toil, that cumbersome activity of vulnerability management, it's just gonna continue to fall downstream to customers and consumers to address. But the problem there, of course, is getting them to do that. Why would they if they can just, you know, put out features and, you know, things that actually generate revenue and marketing hype and headlines and buzz, if they can push that risk on the consumers and customers, and there's no real consequence, you know, their market share doesn't go down, their stock price doesn't really take a long term hit, even after an incident, there's no real regulatory or legal consequence, you know, that makes them change their behavior, why would they invest all that activity and time into those things when they can focus on things that generate revenue for the business. 
 

Uh, and that's kind of the fundamental philosophical fork in the road that we're at, in my [00:31:00] opinion, as an ecosystem.  
 

Sean Martin: Yeah. And I guess the positive that I will, that I'll close with, and of course, any final thoughts from you, I certainly welcome, but. This idea of, so software by design and transparency from vendors. 
 

Cause I think what you described is a scenario where perhaps the consumer feels trapped and they only have that vendor as an option, right? And if they don't take security seriously, they only look at features and generating revenue. Then you're kind of in a holding pattern of this, this, uh, cycle of vulnerable software that you're constantly patching. 
 

But if there's an alternative, and clearly there's enough people building stuff now, if we can get more alternatives, and some of those alternatives are secure by design and invest in security management and tackle vulnerabilities over features, then consumers of those systems and services have an option [00:32:00] to switch. 
 

And then to me, that's where the platform comes back into play. If you build a platform where you can swap things in and out easily, you have that abstraction layer, so you're not killing your own team when you say this vendor and service is no longer cutting the mustard, or they never did, and we're tired of it. 
 

Um, let's move to a more secure one. So I think, I don't know, hopefully we end up in a world where, where the security does, does, uh, take, take center stage and mean something to organizations, and we have, we have a system and a program where we can, we can actually deal with it better. 
 

Chris Hughes: Yeah. I mean, I know I may have sounded a pessimistic or a little, little negative, but I honestly, I think it's like the fact that we're even having these conversations, you know, you got to think these are decades in the making. 
 

And a lot of times these conversations haven't even happened, and more, more and more, we're waking up as a society and saying, hey, software powers everything in our society. We have this complex dynamic relationship between suppliers and consumers. How do we go about, how do we go about, um, you know, changing that [00:33:00] dynamic to where suppliers will take more responsibility for security, build secure products? And it's gonna come in, in a variety of fashions. 
 

It's gonna come from consumers demanding quality products and services from vendors, having transparency so they understand, you know, what they're consuming, if there's risk associated with it. And then from the other side, you know, legal and regulatory pushes and pressures, you know, to, to kind of drive them in that direction. 
 

A couple of plugs for a couple of resources I would mention: there is, uh, you know, Google has some great secure by design publications and guidance out there that I recommend checking out. They have a brilliant white paper on the topic that talks about platform engineering and enabling us through product design and development at scale. 
 

And then you talked about, you know, the diversity of diversity of suppliers in the ecosystem. Uh, there's a longstanding paper. It's 21 years old now, uh, talking about monoculture and cyber security and the risk associated with it. It's talking about Microsoft in particular, which ironically enough, here we are 21 years later, having some very similar conversations around Microsoft. 
 

Um, uh, so, you know, as much as things change, you know, some things stay the same. Uh, but I think the fact that we're having [00:34:00] these conversations is a really great indicator of where we're headed. And I think increasingly as a society, more and more people are waking up and raising the alarm bell that, hey, we need to address this. 
 

You know, it's great what software does for our community, I mean, what software does for our society, and the opportunities and innovations, but we have these fundamental systemic problems we need to address, too. 
 

Sean Martin: Yeah, I love it. And I'm gonna, I'm gonna give you an action item because you've not, you've mentioned a number of resources. 
 

So hopefully we can, we get links to all those and share those with folks. Um, one, one more question, because the audience, I don't know, some of the stuff seems overwhelming. So I'm just wondering, talking to the CISOs and security leaders listening and watching this, do they need to have a deep understanding of what we're talking about? 
 

Do they have to have a program ready to deal with some of this stuff, or are there some simple things they can do that [00:35:00] aren't so huge, but could, could actually, maybe some of the resources will help with some of this, but is there something they can do that's not too overwhelming that will have a decent impact on, on this for them and their program? 
 

Chris Hughes: Yeah, I mean, honestly, it's a, it's a complex topic. Like we said, there's a lot of moving pieces. It could be overwhelming. Um, and there's a lot of, you know, guidance out there and it could be confusing on where to start, but I would honestly recommend starting with the basics, the fundamentals of things like inventory, you know, just understanding what open source software we use internally for development purposes, how do we kind of track, manage and inventory that and govern its use and then externally, who are our, our, our vendors, you know, who do we have a business relationship? 
 

Who are we buying software from? What kind of reports do we have? If there's an incident, do they have vulnerability disclosure programs? And how do we make sure we stay in tune with those? Uh, you know, those fundamentals as well as we see this kind of parallel push for things like zero trust. Uh, so having things like network segmentation, least permissive access control, these are longstanding fundamental security practices and fundamental methodologies that can really limit the blast radius when something [00:36:00] happens. 
 

And then, of course, you know, just a plug for the ever, you know, critical business continuity planning, incident response planning and practices, tabletops and so on. So that when we have another incident, whether it's a benign, you know, kind of software update and it causes an outage or it's a malicious incident. 
 

You know, activity that causes an impact, you know, you, you've practiced scenarios, you understand who we should be having involved, how do we respond to this, you know, what are our contingency plans, how do we ensure we recover and limit the impact to the business so that we can be a business enabler, rather than impeding the business. 
 

Sean Martin: Fantastic stuff, Chris. This, uh, this chat was well worth the wait, hopefully folks, uh, enjoyed it as well, as much as I did. 
 

Uh, lots of good stuff here. Like, like you noted, there are a number of resources. So we'll, we'll grab some of those, include those in the show notes. And, uh, you're amazing. Appreciate your time. Yeah. Thank you for having me on. I enjoy the show. Look forward to tuning in. Absolutely. And, uh, the, the two books, Software Transparency: [00:37:00] Supply Chain Security in an Era of a Software-Driven Society is one of them, and Effective Vulnerability Management: Managing Risk in the Vulnerable Digital Ecosystem is the other. 
 

Grab both of those, connect with some of the resources, and connect with Chris. He's a good dude, and I appreciate you. Thanks everybody for listening and watching. We'll see you on the next episode of Redefining Cybersecurity.