In this episode of "On Location with Sean Martin and Marco Ciappelli," the duo delves into the intricacies of AI integration in today's technology landscape, joined by experts Helen Oakley and Larry Pesce at SecTor 2024 in Toronto, Canada.
Guests:
Helen Oakley, Director of Secure Software Supply Chains and Secure Development, SAP
On LinkedIn | https://www.linkedin.com/in/helen-oakley
On Twitter | https://x.com/e2hln
On Instagram | https://instagram.com/e2hln
Larry Pesce, Product Security Research and Analysis Director, Finite State [@FiniteStateInc]
On LinkedIn | https://www.linkedin.com/in/larrypesce/
On Twitter | https://x.com/haxorthematrix
On Mastodon | https://infosec.exchange/@haxorthematrix
____________________________
Hosts:
Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/sean-martin
Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
____________________________
Episode Notes
Sean Martin and Marco Ciappelli kicked off their discussion by pondering the intricacies and potential pitfalls of the AI supply chain. Martin humorously asked when Ciappelli last checked the entire supply chain of an AI session, prompting a frank admission of how rarely anyone scrutinizes the AI tools they use.
The conversation then shifted as Oakley and Pesce were introduced, with Oakley explaining her role in leading cybersecurity for the software supply chain at SAP and co-founding the AI Integrity and Safe Use Foundation. Pesce shared his expertise in product security research and pen testing, emphasizing the importance of securing AI integrations.
Preventing the AI Apocalypse
One of the session's highlights was the discussion titled "AI Apocalypse Prevention 101." Oakley and Pesce shared insights into the potential risks of AI overtaking human roles and discussed ways to prevent a hypothetical AI apocalypse. Oakley humorously noted her experimentation with deep fakes and emphasized the importance of addressing the root causes to avert catastrophic outcomes.
Pesce contributed by highlighting the need for a comprehensive Bill of Materials (BOM) for AI, pointing out how AI differs from traditional software in its reliance on cascading layers of hardware, software, and model components.
AI BOM: A Tool for Understanding and Compliance
The conversation evolved into a discussion about the AI BOM's significance. Oakley explained that the AI BOM serves as an ingredient list, akin to what you would find on packaged goods. It includes details about datasets, models, and energy consumption, all critical for catching decay or malicious behavior over time.
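As a concrete illustration of that ingredient list, here is a minimal sketch of what an AI BOM might look like, built as a CycloneDX-flavored JSON document in Python. The component and property names below are illustrative assumptions for this write-up, not the official CycloneDX or SPDX schema.

```python
import json

# An illustrative AI BOM "ingredient list". Field names are loosely
# inspired by CycloneDX's machine-learning-model components but are
# simplified assumptions; consult the actual spec before relying on them.
ai_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "support-chatbot-llm",    # hypothetical model name
            "version": "2024.10",
            "properties": {
                "baseModel": "example-7b",     # hypothetical upstream model
                "trainingDatasets": ["internal-tickets-2023", "public-faq-corpus"],
                "fineTuningMethod": "LoRA",
                "energyConsumptionKWh": 1250,  # training-time energy estimate
            },
        },
        {
            "type": "data",
            "name": "internal-tickets-2023",   # a dataset as its own component
            "properties": {"sha256": "<digest of the dataset snapshot>"},
        },
    ],
}

print(json.dumps(ai_bom, indent=2))
```

The point is the same as a food label: every model, dataset, and training detail is named, versioned, and attributable, so it can be checked later.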
Pesce noted the AI BOM's potential in guiding pen testing and compliance. He emphasized the challenges companies face in keeping up with rapidly evolving AI technology, suggesting that the AI BOM could streamline compliance efforts.
Engagement at the CISO Executive Summit
The speakers touched on SecTor 2024's CISO Executive Summit, inviting senior leaders to join the conversation. Oakley highlighted the summit's role in providing a platform for addressing AI challenges and regulations. Martin and Ciappelli emphasized the value of attending such events for exchanging knowledge and ideas in a secure, collaborative environment.
Conclusion: A Call to Be Prepared
As the episode wrapped up, Sean Martin extended an invitation to all interested in preventing an AI apocalypse to join the broader discussions at SecTor 2024. Helen Oakley and Larry Pesce left listeners with a pressing reminder of the importance of understanding AI's potential impact.
____________________________
This Episode’s Sponsors
HITRUST: https://itspm.ag/itsphitweb
____________________________
Follow our SecTor Cybersecurity Conference Toronto 2024 coverage: https://www.itspmagazine.com/sector-cybersecurity-conference-2024-cybersecurity-event-coverage-in-toronto-canada
On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllSCvf6o-K0forAXxj2P190S
Be sure to share and subscribe!
____________________________
Resources
Learn more about SecTor Cybersecurity Conference Toronto 2024: https://www.blackhat.com/sector/2024/index.html
____________________________
Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage
Are you interested in sponsoring our event coverage with an ad placement in the podcast?
Learn More 👉 https://itspm.ag/podadplc
Want to tell your Brand Story as part of our event coverage?
Learn More 👉 https://itspm.ag/evtcovbrf
To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast
To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast
Unveiling AI's Impact and Challenges at SecTor 2024 | A SecTor Cybersecurity Conference Toronto 2024 Conversation with Helen Oakley and Larry Pesce | On Location Coverage with Sean Martin and Marco Ciappelli
Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.
_________________________________________
[00:00:00] Sean Martin: Marco.
[00:00:03] Marco Ciappelli: Sean.
[00:00:04] Sean Martin: When was the last time you checked out the whole supply chain of, uh, of your last Gen AI session?
[00:00:13] Marco Ciappelli: Uh, what time is it now? I don't know. Never?
[00:00:18] Sean Martin: Never. I think, sadly.
[00:00:22] Marco Ciappelli: I have so many questions already about this conversation, so I'm going to let you lead, because my first question will probably be like, what is it? So, we'll get there.
[00:00:32] Sean Martin: Well, here's a good example. Actually, I think Helen was on a different show, and I don't know if it was us or one of the hosts, but short story is, we had an episode go out that ChatGPT helped with, and it called it S-BOMB, S-B-O-M-B. Oh yeah, yeah, yeah. Remember that?
With a nice B on the end. Not exactly what we wanted or intended. Um, so that had to be fixed. And that's kind of the whole point here: are people looking at AI, and what it produces, and who's producing it, and how it comes together? There's a whole supply chain around that, just like software, which leads me to a gazillion questions.
So let's bring in our guest, Helen Oakley and Larry Pesce. How are you both?
[00:01:21] Helen Oakley: Awesome.
[00:01:22] Sean Martin: Yep.
[00:01:23] Larry Pesce: Doing great. I have no, I have no complaints.
[00:01:25] Sean Martin: There you go. And if he did, you could just ChatGPT a better response. But, uh, here we are. This is part of our On Location coverage; we call these Chats on the Road to some conference, and the "some conference" we're heading to today is in Toronto.
It's the Security Education Conference in Toronto, also known as SecTor. We just had a chat with Steve Wylie yesterday, kind of getting the big picture and the history of the event. It's an event we've covered for a few years now, and I'm excited to be going to see everybody there. I want to meet Helen in person and Larry in person, but for now, we'll take a virtual introduction so everybody else gets to meet you as well.
Helen, a few words about your role, what you're up to, and then we'll pass it to Larry to do the same.
[00:02:14] Helen Oakley: Yeah, I'm leading cybersecurity of the software supply chain at SAP, and also secure development. What I'm up to outside of SAP is AI BOM. So I'm co-leading a forum and working groups for AI BOM under CISA, the US government's cybersecurity agency.
And also I'm a founding partner of the AI Integrity and Safe Use Foundation.
[00:02:41] Larry Pesce: Love it.
[00:02:42] Sean Martin: Larry.
[00:02:43] Larry Pesce: Awesome. Yeah. So, uh, I do some product security research and, uh, some pen testing at a company. And not only that, I do a lot of supply chain security type of stuff, whether that be thought leadership or helping our product do better at helping folks secure their supply chain, arguably with SBOMs.
And on the outside, I try to do a little work with Helen on some AI, and help some of her missions for, uh, for AI BOMs as well.
[00:03:12] Marco Ciappelli: Okay, why don't we start with the definition? Because also, the title of your session is AI Apocalypse Prevention 101. So it's kind of, you know, the happy walk in the park. So what do you want to talk about? Let's start with the part you're walking in.
[00:03:31] Sean Martin: Not my part.
[00:03:32] Marco Ciappelli: But you never know. So Helen, maybe you want to take that.
[00:03:37] Helen Oakley: Yeah, it's funny, because when Larry and I were talking about our session, our potential session at SecTor, there were a lot of talks, at least at that time, at the beginning of the year, about what if AI will take over our jobs, our world, and everything. But what does that actually mean? So we started to chat, and it was more of a kind of playful conversation, you know: oh, how do we prevent the apocalypse? And actually, I also did some deepfakes of myself, like a 101 apocalypse prevention for AI. And then we figured, you know, this is the best way, because if you go backwards, if you go to the root cause, this is what you do to prevent the potential apocalypse down the road.
Larry, anything from your side?
[00:04:25] Larry Pesce: Yeah, I think you're hitting the nail on the head with that one. We don't want to have AI be in everything, and if we do, we want to know it's there. And we think that's where the bill of materials, noting that AI is potentially going to be in a product and how it's used, is incredibly helpful to prevent that apocalypse.
[00:04:48] Sean Martin: So I don't know if this is a good, uh, comparison or analogy or use of words, I'm not great at that sometimes, but apocalypse to me is the big thing: things just explode at some point, right? Then there's the, maybe not explosion, but life still sucks in this dystopian world. So there's the apocalypse, and there's, we live with it and we don't explode yet, in dystopia.
The reason I'm bringing that up is, and I want both of your opinions on this, the difference between an AI BOM and a software BOM, bill of materials. Because I see a lot of separation of AI from software development, and I can see where it's appropriate to do that. So, your perspective: does AI need its own bill of materials, separate from the software bill of materials?
And if so, why? Is it to prevent the apocalypse, or is it, I don't know. I'll just leave it there. What are your thoughts?
[00:05:49] Helen Oakley: Larry, you want to start with that?
[00:05:51] Larry Pesce: Sure, sure. And I think, yeah, we want to prevent that apocalypse. But what arguably makes it different, in my opinion, is that we start talking about cascading BOMs, as it were. You know, we've talked about a hardware bill of materials, and well, now on top of that hardware we build software, and admittedly, maybe a piece of that software is now some of this artificial intelligence, in various capacities.
But we need to further classify the various components, whether it be hardware, software, or even types of software going into our products, to give us that insight as to how we can improve, or maybe not. So I think it's really appropriate that we have sort of these cascading levels of a BOM.
[00:06:45] Helen Oakley: Yeah.
And if you step back a little bit and say, what is an SBOM, a software bill of materials? It's an ingredient list, right? Of what is in the software. And AI is also software, so it also has what we started to call the SBOM-like fields. Whatever fields software has, AI software needs to have that description too.
But on top of that, we want to have other ingredients of AI, like how the systems are built, all the models, right? Data sets, how it's been trained, and then even up to the energy consumption details, because we need to understand how it's built to prevent and catch the early decay or drifts or anything malicious that could happen, that can actually potentially lead to the apocalypse, right?
And we can go into apocalypse scenarios a lot, but I think we are in the perfect moment to define what needs to be done to prevent that, and to really steer AI systems in the direction of being human allies, not adversaries.
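To make Helen's decay-and-drift point concrete: one common way to catch drift early is to compare the distribution of a logged model signal against a baseline. A minimal sketch, assuming you log numeric confidence scores and treat a population stability index (PSI) above roughly 0.2 as worth investigating; the threshold and data here are hypothetical.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions; higher PSI means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the percentages to avoid division by zero and log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.80, 0.05, 10_000)  # confidence scores at release
current = rng.normal(0.70, 0.08, 10_000)   # scores observed months later
print(f"PSI: {population_stability_index(baseline, current):.3f}")
```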
[00:07:48] Marco Ciappelli: All right, I have a question, because I like that you went with the ingredient example, which is a little, you know, nicer than an apocalypse. But I'm thinking, all right, so AI, sometimes we don't even know how it works. We know there's a black box, and whatever happens in there, we're not sure. So how are we going to list the ingredients when we don't really know? Or is it just my perspective that we don't know?
[00:08:20] Helen Oakley: So there are multiple ingredients, and they're fundamentally different from traditional software, because now we're not just talking about an ingredient list, but also how we train those ingredients, right? How it's been used, how it's been modified throughout its life cycle. And then we talk about training during development, and we're also talking about runtime modification and fine-tuning of AI systems, right? So how is it consuming data? How is it working with the prompts? How is the RAG being consumed, right? What's the integrity of that data? So all of these aspects are being collected as part of the AI BOM. It's really fundamentally different. However, it still has the foundational ingredient list of an SBOM.
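One way to pin down the data integrity Helen mentions, for training sets and RAG sources alike, is to record a cryptographic digest of each artifact when the AI BOM is generated and re-verify it before use. A minimal sketch under that assumption; the file names and digests are hypothetical.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a dataset or document file, suitable for pinning in an AI BOM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# Digests recorded at AI BOM generation time (hypothetical values).
PINNED = {"faq_corpus.jsonl": "aabb...", "policies.pdf": "ccdd..."}

def verify_rag_sources(data_dir: Path) -> None:
    """Refuse to ingest any RAG source whose digest no longer matches."""
    for name, expected in PINNED.items():
        actual = file_digest(data_dir / name)
        if actual != expected:
            raise RuntimeError(f"Integrity check failed for {name}: "
                               f"expected {expected}, got {actual}")

# verify_rag_sources(Path("./rag_sources"))  # run before each ingestion
```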
[00:09:12] Larry Pesce: And a little bit from my perspective, too: I tend to work in analogies, and I'm a product of the 1980s, so those are my formative years, and I love movies. So, you know, Helen and I have got a little bit of a theme to our overall presentation, and that's sort of where we're going with the AI apocalypse. And having some of those ingredients is just like food. We can see that there's an ingredient in food, and many years later we find out that, hey, that ingredient wasn't necessarily all that great for us, and now we've got it in all of our food.
With this AI BOM, we now have the ability to go back and look at some of these models, and at the way we've done some training data, and go: we probably shouldn't be using these anymore. We need some additional ways of changing that to make it better for us.
[00:10:05] Helen Oakley: Data is still gold.
[00:10:08] Larry Pesce: Yes, yeah. And it gives us a great opportunity to make some decisions about what we consume, whether that be an ingredients list for food, or the types of AI and the way they're configured as well.
[00:10:21] Sean Martin: Yeah. Red, red number seven. Um, so monosodium glutamate.
[00:10:26] Marco Ciappelli: Talking about the eighties, right? And we're all aging ourselves here.
[00:10:31] Sean Martin: Yeah. Yes. So to that point, an ingredients list is a great start. And I'm assuming, Larry, you get to dig into some of this, Helen as well. I was on a CISO call this morning, not going to disclose anything that I shouldn't, but there was talk about the term pen testing for AI and what that means, and red teaming. In the traditional software testing realm, we have what I deem a clearly defined scope: this should do this, and therefore we can fairly simplistically determine what it shouldn't do, and then test the boundaries of inputs and outputs and come up with a nice set of test cases.
It's a little more difficult when we start talking about AI. We talk about training, we talk about data sets, we talk about systems coming together to produce bigger and bigger sets of results. So how do we get our hands wrapped around not just what's in it, but what the actual ingredients mean for the results for our business, I guess, ultimately?
[00:11:42] Larry Pesce: Yeah, yeah. We talk about pen testing for AI, and this is still so new, and we've done this adoption of AI so rapidly, that even the pen testers are still figuring out the types of attack paths that they need to take. But I think something like an AI BOM going into some of that pen testing, and this is very derivative of some of the other work I've done around software bills of materials, is that this type of data can start informing how some of these pen testers might approach some of these problems, give them some initial starting points. And then they can evolve over time how they interact with the data they've acquired from those AI BOMs, to improve the overall testing and arguably make all of this better.
Helen, any thoughts?
[00:12:33] Helen Oakley: And I like that you really connected the AI BOM and the use cases of the AI BOM, right? So pen testing is actually one of the use cases that we can have, and there are multiple different types of risk assessments that we can extract from the data that is collected with the AI BOM. And I think it's very important for the industry to understand what those use cases are. Because the terminology of software bill of materials and AI BOM is surfacing in the industry, but understanding how to use it and how to optimize for risk management is very important, and I think that's what this session, and many further sessions that we're going to do, will focus on.
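As a sketch of the pen-test use case just described, an AI BOM can be walked mechanically to propose starting points. The mapping from component types to test ideas below is an illustrative assumption, not an official methodology, and it reuses the simplified BOM structure sketched in the episode notes above.

```python
# Candidate test categories per AI BOM component type (illustrative only).
TEST_IDEAS = {
    "machine-learning-model": ["prompt injection", "guardrail bypass",
                               "model extraction", "adversarial inputs"],
    "data": ["training-data poisoning", "dataset provenance and integrity"],
    "library": ["known CVEs in ML frameworks", "unsafe model deserialization"],
}

def pen_test_plan(ai_bom: dict) -> list[tuple[str, str]]:
    """Return (component name, test idea) pairs as a starting checklist."""
    plan = []
    for comp in ai_bom.get("components", []):
        for idea in TEST_IDEAS.get(comp.get("type"), []):
            plan.append((comp.get("name", "<unnamed>"), idea))
    return plan

# for name, idea in pen_test_plan(ai_bom):
#     print(f"- {name}: {idea}")
```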
[00:13:15] Marco Ciappelli: Can you give us some examples of case studies, or common risks that could apply to the supply chain?
[00:13:24] Helen Oakley: Yes, absolutely. So, for example, as part of the Tiger Team, the AI BOM Tiger Team under the CISA forum working group, we're discovering and writing down these use cases. And one of the highest priority use cases is compliance. We've done some research already that shortlisted the field mappings: from the compliance fields that we want to ask of ourselves, or of the third-party models that we're collecting in our environment, to the fields that we can collect from the AI BOM. So we can automate the compliance.
That's a very important use case, because as we see, regulations are evolving around AI and the supply chain, and we will need to collect this information in a more automated and scalable way. Right now a lot of companies are trying to do this manually, which is a very, very difficult task. And like you said, it's a black box, very difficult to understand. So we're trying to achieve it in a programmatic and automated way.
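A minimal sketch of that compliance-automation idea: check whether the fields a regulation asks about are actually present in a given AI BOM. The required-field list is a made-up stand-in for whatever mapping the working group ultimately publishes.

```python
# Hypothetical rule: every machine-learning-model component must declare these.
REQUIRED_MODEL_FIELDS = ["baseModel", "trainingDatasets", "fineTuningMethod"]

def compliance_gaps(ai_bom: dict) -> dict[str, list[str]]:
    """Map each model component name to the required fields it is missing."""
    gaps = {}
    for comp in ai_bom.get("components", []):
        if comp.get("type") != "machine-learning-model":
            continue
        props = comp.get("properties", {})
        missing = [field for field in REQUIRED_MODEL_FIELDS if field not in props]
        if missing:
            gaps[comp.get("name", "<unnamed>")] = missing
    return gaps
```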
[00:14:30] Sean Martin: Is that through frameworks and standards around how the data is formatted so it can be produced and ingested by different systems? Or what's that look like?
[00:14:42] Helen Oakley: Yeah, so the data would be collected through the generation of the AI BOM. The AI BOM would be the artifact, and then the upstream system would ingest that data and present the risks based on different criteria.
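Helen's generate-then-ingest flow might look like this in miniature, reusing the hypothetical compliance_gaps and pen_test_plan helpers sketched earlier; the file name and risk criteria are assumptions.

```python
import json
from pathlib import Path

def ingest_ai_bom(path: Path) -> None:
    """Load an AI BOM artifact and surface risks against simple criteria."""
    bom = json.loads(path.read_text())
    for name, missing in compliance_gaps(bom).items():   # sketched above
        print(f"[compliance] {name}: missing {', '.join(missing)}")
    for name, idea in pen_test_plan(bom):                # sketched above
        print(f"[test idea] {name}: {idea}")

# ingest_ai_bom(Path("ai-bom.json"))
```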
[00:15:04] Marco Ciappelli: Larry, what do you see, or what don't you see, happening today in the industry?
[00:15:16] Larry Pesce: Yeah, so I think the organizations are maybe not addressing the problem yet, but have really figured out that they need to address this problem. Again, back to that whole we're-doing-this-so-fast thing: it is evolving so quickly that the organizations are having to evolve just as quickly, and I think many of them are behind the ball about even knowing what they need to do about some of this stuff. So I think we're not seeing a lot of folks do it yet, and they're going to need to, really fast.
[00:15:49] Helen Oakley: Yes. And just to add to your comment, Marco, about the apocalypse, or how the ingredient is more positive than the apocalypse: the real point here is that by collecting ingredients, we can identify when the training went wrong, when something went wrong. So we can catch it, right? Because if we don't, this is how it's going to drift further and further, leading into some catastrophic events, perhaps, right? Smaller, bigger, we don't know. But I think it's key to understand what the system is doing, how it's progressing, how it's being trained, so that we can catch and mitigate those risks at an early stage.
[00:16:29] Marco Ciappelli: Catch it where you can.
[00:16:31] Sean Martin: Catch me if you can. Yeah.
[00:16:34] Larry Pesce: And Helen, those words so speak to the overall theming that we picked for that presentation: catch it early, before it goes bad.
[00:16:47] Sean Martin: So I want to just highlight that there's no lack of AI topics throughout the whole SecTor agenda, the two days of briefings, and there's a whole AI Summit on the first day of the event. Marco and I have had a couple of chats around some of the AI risks, like deepfakes and things like that.
One touch on the AI Summit: at this point, I'm going to cover that when I get up to Toronto. But Helen, I know you're also part of the advisory board for the Executive Summit, which is a collection of CISOs. You can't share too much, I'm sure, but I would imagine that AI is included in some of those conversations. Maybe you can highlight some of what you expect to cover during that summit, as best you can, and some other things that you think CISOs should think about, and maybe consider joining you for during the CISO summit.
[00:17:45] Helen Oakley: Certainly. The agenda is not a secret; everyone can see the agenda and the topics that will be presented. For the Executive Summit, we didn't want it to be focused only on AI, because we know that AI is now everywhere, but we wanted to kind of collect the story for the CISOs and leadership who will be joining us: what are the common risks? And of course, AI is one of them. So we'll definitely have some discussion on AI. For example, in the software supply chain topic that I will be presenting, I will touch on AI from the overall CISO perspective: what do we need to think about, right? Other presenters will also cover some aspects of AI, because it's a very impactful field, and definitely everyone needs to think about how they're managing those risks within their organizations.
[00:18:39] Marco Ciappelli: And is AI sitting at the summit, taking notes for AI?
[00:18:49] Helen Oakley: I don't think AI is allowed at the Executive Summit. So you have to be there in person.
[00:18:55] Sean Martin: Or me, I don't know. Maybe you trust AI more than me.
[00:19:00] Marco Ciappelli: I don't know.
[00:19:01] Sean Martin: I certainly would. But, uh, all right, well, let's get back to the session you're both doing. What level of detail do you expect to get into? Are we looking at program-level type things here? Are we looking at risk-level, executive-level conversation? Who do you want to see sitting in the chairs, and coming up to speak with you after it's all done?
[00:19:27] Helen Oakley: So I will start, and Larry, you add. From my perspective, we're trying to tackle all levels, because the topic is new, and we want to explain what it means and why it's important. So while we will cover some technical detail, we're not going to go into too many technical details, because I think it's important to explain the concept first. The concept: what it means for organizations, what it means for professionals, what they need to understand, and the steps that they need to take. And we will provide resources on where to follow up, what to do next, and things like that. And I know Larry is the master of hacking, and he'll cover some of that aspect.
[00:20:14] Larry Pesce: Absolutely. And I think, arguably, from my perspective of the hacking and the pen testing, operationalizing some of our AI BOMs is really targeted at all of those levels that Helen mentioned. Because you have to get adoption from the top, and then someone down at the bottom is actually going to have to perform something potentially technical with the results of that.
[00:20:39] Sean Martin: I love that word, operationalize. That's what I'm all about. It's a mouthful.
[00:20:44] Larry Pesce: It's a mouthful, it is. It's hard to say sometimes.
[00:20:47] Sean Martin: Your session, AI Apocalypse Prevention 101: Meet AI BOM, Your New Best Friend, and only one B in the AI BOM, is Wednesday, the 23rd of October, at 2:15 p.m. in room 714. So, hope to see everybody there.
And of course, if you're an executive, a CISO, that can clear the test, there's the SecTor Executive Summit on Tuesday, the 22nd. And I hope everybody has fun in there. In all honesty, joking aside, those are some of the best conversations, because people open up and share the real-world stuff that's going on, and they learn from each other.
So if you are an executive and you're available on the 22nd of October, please do attend that event. Helen, Larry, it's been a pleasure chatting with you. I'm looking forward to seeing you both in person. Any final thoughts before we, uh, photograph?
[00:21:46] Larry Pesce: Uh, we'll see you in Toronto. But Helen's got some better ones, I suspect.
[00:21:53] Helen Oakley: We'll definitely see you, and let's, uh, prevent the apocalypse.
[00:21:58] Marco Ciappelli: I never had so much fun talking about the apocalypse.
[00:22:02] Sean Martin: We won't let Marco come, that'll prevent the apocalypse.
[00:22:05] Marco Ciappelli: That already prevents a lot of problems. But for next year, everybody pay attention, because I want to be there. There you go. So, uh, this year you have a pass.
[00:22:14] Sean Martin: Yep. This year is October 22nd through the 24th: one day of summit, two days of briefings. And I will see everybody on location in Toronto. Helen, Larry, thank you so much. Everybody listening and watching, thank you for joining us. Please stay tuned for more coverage from SecTor 2024 in Toronto, and all of our On Location event coverage.
[00:22:35] Marco Ciappelli: Just subscribe. There's a lot coming up.
[00:22:37] Sean Martin: And subscribe as well.
[00:22:38] Marco Ciappelli: Yeah, right on. All
[00:22:39] Sean Martin: right. Thanks everybody.
[00:22:41] Marco Ciappelli: Thank you.
Thanks.