ITSPmagazine Podcasts

Why We Can’t Completely Trust the Intern (Even If It’s AI) | An RSAC Conference 2025 Conversation with Alex Kreilein and John Sapp Jr. | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

What happens when AI writes your code—but you’re not sure you can trust it? In this episode, Sean Martin sits down with Alex Kreilein and John Sapp Jr. to unpack how zero trust thinking, threat modeling, and quality management are essential to building resilient, secure systems in an AI-driven world.

Episode Notes

When artificial intelligence can generate code, write tests, and even simulate threat models, how do we still ensure security? That’s the question John Sapp Jr. and Alex Kreilein examine in this energizing conversation about trust, risk management, and the future of application security.

The conversation opens with a critical concern: not just how to adopt AI securely, but how to use it responsibly. Alex underscores the importance of asking a simple question often overlooked—why do you trust this output? That mindset, he argues, is fundamental to building responsible systems, especially when models are generating code or influencing decisions at scale.

Their conversation surfaces an emerging gap between automation and assurance. AI tools promise speed and performance, but that speed introduces risk if teams are too quick to assume accuracy or ignore validation. John and Alex discuss this trust gap and how the zero trust mindset—so common in network security—must now apply to AI models and agents, too.

They share a key concern: technical debt is back, this time in the form of “AI security debt”—risk accumulating faster than most teams can keep up with. But it’s not all gloom. They highlight real opportunities for security and development teams to reprioritize: moving away from chasing every CVE and toward higher-value work like architecture reviews and resiliency planning.

The conversation then shifts to the foundation of true resilience. For Alex, resilience isn’t about perfection—it’s about recovery and response. He pushes for embedding threat modeling into unit testing, not just as an afterthought but as part of modern development. John emphasizes traceability and governance across the organization: ensuring the top understands what’s at stake at the bottom, and vice versa.

One message is clear: context matters. CVSS scores, AI outputs, scanner alerts—all of it must be interpreted through the lens of business impact. That’s the art of security today.

Ready to challenge your assumptions about secure AI and modern AppSec? This episode will make you question what you trust—and how you build.

___________

Guests: 

Alex Kreilein, Vice President of Product Security, Qualys | https://www.linkedin.com/in/alexkreilein/

John Sapp Jr., Vice President, Information Security & CISO, Texas Mutual Insurance Company | https://www.linkedin.com/in/johnbsappjr/

Hosts:

Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com

Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

___________

Episode Sponsors

ThreatLocker: https://itspm.ag/threatlocker-r974

Akamai: https://itspm.ag/akamailbwc

BlackCloak: https://itspm.ag/itspbcweb

SandboxAQ: https://itspm.ag/sandboxaq-j2en

Archer: https://itspm.ag/rsaarchweb

Dropzone AI: https://itspm.ag/dropzoneai-641

ISACA: https://itspm.ag/isaca-96808

ObjectFirst: https://itspm.ag/object-first-2gjl

Edera: https://itspm.ag/edera-434868

___________

Resources

JP Morgan Chase Open Letter: An open letter to third-party suppliers: https://www.jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers

Learn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsa-conference-usa-2025-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Want to tell your Brand Story Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf

Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

___________

KEYWORDS

sean martin, marco ciappelli, alex kreilein, john sapp jr., rsac 2025, cybersecurity, ciso, application security, ai, zero trust, threat modeling, risk, leadership, technology, event coverage, on location, conference

Episode Transcription

Why We Can’t Completely Trust the Intern (Even If It’s AI) | An RSAC Conference 2025 Conversation with Alex Kreilein and John Sapp Jr. | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] We're rolling. We have no idea. We have so many things to talk about. We don't know what we're gonna talk about, but it's gonna be a great conversation either way. But John, Alex, it's great to have you, man. Thanks, Sean. Thank you. Appreciate you, gentlemen. Sir, it's always good to see you. Always have fun having you on the show. 
 

Um, I'm not gonna do introductions. Well, it's John Sapp, it's Alex, Alex Kreilein of Qualys. There we go. We're gonna talk about AppSec, development, AI, who knows what, cyber resilience. What resilience is, that's always fun. Um, never mind, let's go. You know what, uh, so me and Alex were having a conversation right before we came on, and we were talking about, you know, the concept of enabling the secure adoption of artificial intelligence. 
 

So not just the secure adoption, but the responsible use. And Alex had a great take on that. You wanna share that, Alex? Uh, yeah, I'd like to. Like, I wanna understand from my development teams. By the way, I should preface this with: there are probably few people who are more enthusiastic [00:01:00] about AI and what it can do for making products safe than us. 
 

Like, I feel like we immediately have this in common, like, let's go, let's figure it out. Right. But I think our main question, and it's the question that I bring to the table when we work on responsible AI at Qualys, is: well, help me understand, why do you trust this thing? Like, where does that come from? 
 

Where does it come from, the idea that you're gonna trust not just this one particular model, or that you're gonna trust this output, or that you're gonna trust this ML flow process? Like, where does this concept that we should just give it over to the machines come from, right? When we do failure mode analysis for site reliability engineering, when we do threat modeling for security, we do so much of this 
 

in so many other places. Why is this different? Why is it fine here? Yeah. So we ask our four questions, right? The Adam Shostack questions: what might go wrong? How bad would that be? What are we gonna do about it? Did we do a good job? And, uh, I don't know, what's your confidence in how people answer that? 
 

So, um, very, very low. And it's in part because, well, [00:02:00] and I was using this analogy the other day, the horse has left. You know, the horse is already out running the race. Yeah. You know, the jockey is still sitting here going, wait, wait a minute. Where are we going? Um, and so now we are the jockey trying to play catch-up to this prized horse that's out there going to deliver all of this efficiency and all these things, right? 
 

And we're gonna become these companies that are going to innovate and use AI in that innovation. And so part of the challenge with that is they are too trusting. It's kinda like the younger generation today, they trust everything. You know, so there is no concept of zero trust. Yeah. In that regard. 
 

And I think we have to apply a zero trust mindset to this, because no, we can't trust it. Yes, it does the great things, you know. I can crank out a new version of a resume really quick, or I can do all these really cool things. Right. Write a ton of code, right? Yeah. Like high quality, really effective, right. 
 

Optimized, performing code that I don't necessarily know if I can trust. Right. [00:03:00] Code that respects my input validation rules, my sanitization rules. Yes. Yeah. Why? Because the actual agent's gonna rewrite itself, right? Yeah. And so you think about what you just said, Alex, and now I can create an attack just in the readme file. 
 

Mm. Yeah. I don't even need code to actually create the attack. Right? Yeah. So now, you know, the past couple of years have all been about, like, software supply chain, right? Mm-hmm. That's been the big thing for, what, the past five, seven years? Yeah. 
 

Supply chain. Okay, this has been the focus, and it's great, um, because it brings to light, I think, a historic reality, which is: this is code of unknown provenance, right? You gotta check it in and treat it like your own. If you're gonna take a dependency on it, you're taking a dependency on all of it. 
 

Not just the benefits, but the distractions. It's hard enough to teach developers about basic attacks like typosquatting, right? Sure. So then, if that's hard, why do we think it's gonna be easier to teach 'em about things like model impersonation or integrity-based attacks? Right. Uh, those are hard enough conversations, I think, to have without the pressure cooker. 
 

And, you know, it's great to walk the showroom floor and find that everybody's sprinkled some AI on some stuff, but I think it's a question of: do we have the societal governance to take it seriously? Do we have the corporate governance to live up to the responsibility? And do we have the personal responsibility to interrogate our work for quality and what it's worth? 
 

Yeah. And you know, if I can tag one thing on there. So I did a session here at RSA in 2011 called Innovations in Application Security. Back then, we were talking about application security. A few days ago, we were talking about application security risk management. What are we talking about now? 
 

Managing risk. And back then, there was an acronym that was used: SOUP, software of unknown pedigree. And that's what Alex just described. Yeah, it's even [00:05:00] more relevant now than it was back then. Because back then, we were talking about open source libraries and, yeah, DLL files and that type of thing. 
 

Right. But now we are talking about entire codebases being written by this AI thing, and now also the agents, this whole explosion of agents. Yeah, I saw a stat the other day that there's been 30,000 new agents in the last four months. That feels right. It feels low. So you talk about managing risk. 
 

Yeah. Our job has gotten exponentially more difficult. But are you not, on some level though, optimistic? Like, I am. Yes, I am, no doubt about it. I've gotten a huge amount of quick value and output, right, from aligning my mental model thinking to get the AI to produce the output for me. 
 

Yeah. Like, you know, here's this thing that we want to build, right? Run a threat model against it. Give me a bunch of evil user stories. There we go. Tell me the technical controls I need. Gimme the security user [00:06:00] stories and acceptance criteria. And better yet, if you can build me a detection, or at least some kind of logic that helps inform what a detection might look like, that's gonna help my SOC, it's gonna help my developers. 
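To make that concrete, here is a minimal sketch of the kind of prompt Alex is describing. The feature description, the openai client, and the model name are illustrative assumptions, not anything specified on air, and whatever comes back should be reviewed the way you would review an intern's work:

# Sketch: asking an LLM for threat-model artifacts from a feature description.
from openai import OpenAI

FEATURE = "A REST endpoint that lets policyholders upload claim documents."

PROMPT = f"""You are assisting a threat-modeling session.
Feature under review: {FEATURE}

Produce, in order:
1. Evil user stories (abuse cases) an attacker might attempt.
2. Technical controls that would mitigate each abuse case.
3. Security user stories with acceptance criteria for the dev team.
4. Detection logic (log fields and conditions) the SOC could alert on.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)  # draft artifacts; a human still signs off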
 

It's gonna refocus our time to things that maybe are more meaningful, right? Like, I'm tired of asking my developers to just chase CVEs, right? Like, I want them to invest the time in security architecture reviews. Yes. I want 'em to invest the time in fault analysis and cyber resiliency, your favorite topic. 
 

Yeah, exactly. And I can't do that if I'm asking 'em to chase things that are not exposure-based. Right? I get that there's a CVE. Is it a true positive? Is it applicable? Is it exploitable? Is it internet-facing? Is there a compensating control? 'Cause if not, and we have Gary Hayslip outside, just crank on us, and Larry Weiss. 
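That chain of questions is essentially a triage gate. A minimal sketch of what it might look like in code, with hypothetical field names and made-up findings standing in for whatever your scanner or ASPM tool actually emits:

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    true_positive: bool         # confirmed, not a scanner false alarm
    applicable: bool            # the vulnerable code path is actually in use
    exploitable: bool           # a practical exploit exists in this context
    internet_facing: bool       # reachable from outside the perimeter
    compensating_control: bool  # e.g. a WAF rule or feature flag already blocks it

def worth_developer_time(f: Finding) -> bool:
    """Route a finding to developers only when every exposure gate is passed."""
    return (f.true_positive and f.applicable and f.exploitable
            and f.internet_facing and not f.compensating_control)

findings = [  # made-up example findings
    Finding("CVE-2025-0001", True, True, True, True, False),
    Finding("CVE-2025-0002", True, False, True, False, True),
]
backlog = [f.cve_id for f in findings if worth_developer_time(f)]
print(backlog)  # ['CVE-2025-0001']; the reclaimed time goes to architecture reviews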
 

Yeah. What's up? Yeah. If it's not, then I would rather recuperate that time for my developer and have them put it to something that might be a higher use. Yeah. So I want to [00:07:00] talk about this resilience piece, because we're talking about environments that, even if they were static, yeah, have so much in them. 
 

Absolutely. And they're not static, they're dynamic, and I'm gonna say they're hyperdynamic with AI. Okay. So how do we get a handle on that? It's gonna have to be some form of automation, and perhaps the use of AI. And yeah, we're gonna have to use AI to figure out how to manage the risk associated with AI. 
 

That scenario you described, Alex, is that, yeah, is that a manual thing that you're... Uh, no. I've been working on building an agentic approach to training a model to deliver that as an output for me. Mm-hmm. Which is helpful, but also I treat it like an intern, you know, which is like, I respect the intern, I'm gonna value the intern. 
 

Right. But I don't trust the intern. Right, right. Um, yeah, the resiliency piece. Look, let me offer a hot take on, you know, application security for just a [00:08:00] second. I'm not sure it's really the problem that we think it is. I feel like it's a symptom of another problem. Most of the time, developers build applications without the ability to take change. 
 

So what happens when they don't? You build an application that can't take change. So now you have a hard time patching its dependencies, right? Because everything is a major version change, right? Mm-hmm. Why? Because you don't do testing, you don't do unit testing, you don't do integration testing. What's the result of that? 
 

The result of that is a lot of vulnerability debt, but it's a symptom of something else, which is that they're not using test-driven development. They're not using quality management systems. So how can we make something resilient that can't actually take change when we need it to? So I think we're maybe focusing on the wrong problem by like 10%. 
 

Well, and then my question back to that is: do we need to redefine what resilience is? Um, well, to me, resilience is really simple. It's the ability to respond and recover, because you're expecting something is going to happen. Mm-hmm. [00:09:00] Whether it's a cybersecurity attack or some technology outage, it's going to happen. 
 

And resilience is: what's my ability to respond and recover within my tolerance from a risk standpoint? But if I can tack onto something of what Alex was just saying: how do we get to some of that resilience? Threat modeling. And we were talking about this a little bit earlier, and it's one of the things that I'm doing in our organization as we are in this modernization effort, this transformation. 
 

Uh, digital transformation to the cloud, into a private cloud environment. Yeah. It becomes, okay, well, we're gonna threat model that. But what we are doing different is we're taking the threat model and doing penetration testing in the middle, in unit testing. Mm-hmm. Mm-hmm. So now I don't end up with that vulnerability debt. 
 

Yeah. So now, because of AI, we could find ourselves in what JPMorgan Chase's, uh, CISO said the other day, right, in their open letter: that we're taking on AI security [00:10:00] debt at a pace faster than we can pay it down. And so this is one way we can stop that debt upfront: by threat modeling, and then taking that threat model, based on this application and everything we know about it, what it's expected to do and how it could be abused, and bringing that into pen testing in unit testing. 
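For a feel of what "threat model in unit testing" can look like, here is a minimal pytest sketch. The sanitizer and the payloads are hypothetical stand-ins, derived from the kind of evil user stories a threat model produces:

import html
import pytest

def sanitize_claim_note(text: str) -> str:
    """Hypothetical stand-in for the application's real sanitizer."""
    return html.escape(text)

# Payloads derived from evil user stories in the threat model.
EVIL_INPUTS = [
    "<script>alert('xss')</script>",  # stored XSS attempt
    "'; DROP TABLE claims; --",       # SQL injection attempt
]

@pytest.mark.parametrize("payload", EVIL_INPUTS)
def test_abuse_cases_from_threat_model(payload):
    cleaned = sanitize_claim_note(payload)
    assert "<script>" not in cleaned  # markup must come back neutralized
    assert cleaned != payload         # hostile input never passes through untouched

Because tests like these run with every build, the vulnerability debt John describes never gets a chance to accumulate.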
 

Totally. That way we're achieving what we started off with way back when, in terms of a secure SDLC. How do we change that environment, though? 'Cause people have heard this now a few times from the conference, me saying that I was at a legal conference where lawyers are writing code. Yeah. Through vibe coding and LLMs. 
 

That sounds like a terrible idea. Right. That's a horrible idea. Right. Um, okay. So, but end users are writing code, yeah, to do things they need to do that they can't get their development team to do. But it's part of the business. I have this belief that there are, like, two missing partners for most security teams, that they really, really need to spend more time sharing love and appreciation with: quality [00:11:00] assurance and site reliability engineering. 
 

Sure. Security is an aspect of quality management. I don't think anybody really disagrees that you can't have a high quality application that's not secure, or vice versa. Right. The difference here is if we work hand in hand with quality assurance to coach them on how to build test cases for security, leveraging OWASP ASVS or other standards, where it's like, hey, if you're gonna change a parameter of a token, here's your test case to make sure you did it so that you're not creating a race condition. 
 

Right? Or if you're gonna change this API, here are your series of test cases, right? If we can empower our partners in QA to be 10% more security-curious, I think we're gonna have a great outcome. And if we leverage the great data we get out of ASPM, uh, not ASPM, um, application performance management tools like Datadog and AppDynamics, [00:12:00] New Relic, and a bunch of others. 
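As one illustration of the ASVS-inspired test case QA could own, here is a sketch of a tampered-token check. The endpoint, base URL, and token are placeholders, not anything named in the conversation; adapt it to your own framework's fixtures:

import requests

BASE_URL = "https://staging.example.internal"  # assumed QA environment
VALID_TOKEN = "eyJhbGciOiJIUzI1NiJ9..."        # placeholder; supply a real session token

def test_tampered_token_is_rejected():
    # Flip the last character of an otherwise valid bearer token.
    tampered = VALID_TOKEN[:-1] + ("A" if VALID_TOKEN[-1] != "A" else "B")
    resp = requests.get(
        f"{BASE_URL}/api/claims",
        headers={"Authorization": f"Bearer {tampered}"},
        timeout=10,
    )
    # The security expectation: the request is refused outright.
    assert resp.status_code in (401, 403)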
 

We'll figure out, well, what really is the exposure? Right? Right. We'll have a good map. Sure. But we can't work alone. And that's what I think too many people in security do: they try to work out the solve on their own, yeah, without understanding that it's part of an ecosystem, man. So I'm gonna add, couldn't agree more, because you look at it from a certain perspective, maybe right or wrong. 
 

Yeah. You look at it from a different perspective. I do. You have to drive a team to deliver your stuff, you have to report to or work with your peers on the executive leadership team, report to the board, right? You have to get buy-in. This stuff's already happening. How do we drive from there? 
 

So for me, the concept is being able to manage up, down, and across, right? You know, up to the C-suite and the board, making sure that they have visibility, and that's what I mean when I talk about cyber risk governance. I'm giving visibility to them at their level. Now, that's visibility into [00:13:00] what's happening down at the technical level. 
 

Yeah. Are we focused on the right vulnerabilities, or the right threats? And that's why the threat modeling is very important: so that whatever we deliver is a quality application delivered with value, but also that we're remediating the right vulnerabilities, the ones that are gonna move the needle up at the top. 
 

So let's just use a car, for example. You know, you've got the gauges on the dashboard, right? You've got one for oil, one for your temperature, one for, yeah, for your gas. Well, would you be comfortable if, all of a sudden, your gas needle shows that you're running outta gas, so you're below a quarter of a tank, but the mechanic is back there, instead of making sure you got gas in your car, 
 

he's worried about changing your oil, which he just changed a week ago? Totally. The wrong actions and the wrong things are moving the wrong needle. Yeah. Yeah, that's right. So kind of thinking about it that way, and making sure that there's traceability up, down, and across the stack, because those folks at the top [00:14:00] are being reported to by those people in the middle. Yeah. 
 

Across the organization. And so, and I'll use insurance: if I lead the underwriting function, and there is a vulnerability with an active exploit, and threat intelligence says that, you know, we are being targeted and someone is going to disrupt this company's business, and we do nothing about it. 
 

Instead, we worry about what some, uh, scanner told us was the critical. Well, critical without context doesn't matter. Thank you. Because a medium can actually be more critical to my business than that critical rating. You're explaining the difference between vulnerabilities and risk very artfully, 'cause it's about exposure management. 
 

Right? Exactly. It's not just "my CVSS score drove me to this outcome." Like, that's not a mature way of thinking. Putting things in context is our job. That's the art of security. Um, at least it should be. Yeah, I think we're gonna take this show on the road somewhere. That's right. [00:15:00] I'm actually gonna say that we're gonna wrap here, 'cause we've run out of time. 
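A toy illustration of "critical without context doesn't matter": the same CVSS number lands very differently depending on business context. The weights below are invented for the sketch, not any standard scoring scheme:

def contextual_risk(cvss: float, internet_facing: bool,
                    business_critical: bool, actively_exploited: bool) -> float:
    """Scale a raw CVSS score by business context (illustrative weights only)."""
    score = cvss
    score *= 1.5 if internet_facing else 0.5
    score *= 1.5 if business_critical else 0.75
    score *= 2.0 if actively_exploited else 1.0
    return round(score, 1)

# A "medium" on the targeted, internet-facing underwriting system outranks
# a "critical" on an isolated internal box:
print(contextual_risk(5.9, True, True, True))     # ~26.6
print(contextual_risk(9.8, False, False, False))  # ~3.7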
 

Sad. Yeah. But we're all going to the after party, right? We're going to the after party, all right, and, uh, AI Village, baby. I seriously would like to have this conversation go deeper, so maybe we can line that up. Sold. Absolutely. Let's do it. Glad to see you, Sean. Alex, pleasure. Thank you. Thanks, everybody. ITSPmagazine.com/rsac25. 
 

Stay tuned to all our coverage.