ITSPmagazine Podcast Network

AI BOMs, and other insights into the future of Cybersecurity and AI | An RSA Conference 2024 Conversation with Helen Oakley and Christina Stokes | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

This year, many conversations at the RSA Conference revolve around artificial intelligence. Yes, AI is becoming more prevalent and essential, even in cybersecurity.

Episode Notes

Guest: Helen Oakley, Director of Secure Software Supply Chain and Secure Development, SAP

On LinkedIn | https://www.linkedin.com/in/helen-oakley/

____________________________

Host: Christina Stokes, Host, On Cyber & AI Podcast, Founder of Narito Cybersecurity

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/christina-stokes

On LinkedIn | https://www.linkedin.com/in/xTinaStokes/

____________________________


This year, many conversations at the RSA Conference revolve around artificial intelligence. Yes, AI is becoming more prevalent and essential, even in cybersecurity. In ITSPmagazine's RSA Conference 2024 coverage, Helen Oakley and Christina Stokes shed light on the critical role of AI BOMs in safeguarding our digital ecosystems.

Introducing Helen Oakley of SAP

Christina Stokes sits down with Helen Oakley, Director of Secure Software Supply Chain and Secure Development at SAP, to learn about her journey from software development to cybersecurity. Helen discusses the importance of securing software supply chains in a global context where attacks can have far-reaching implications.

Unpacking the Significance of Supply Chain Security

Helen elaborates on the evolving landscape of cybersecurity, emphasizing the increasing focus on supply chain security as a prime target for attackers. She highlights the vulnerabilities present in open source components and the imperative to instill transparency and automation in securing software development processes.

The Intersection of AI and Security

As the conversation steers towards AI being used as a weapon in supply chain attacks, Christina and Helen explore the concept of weaponizing tools and the proactive measures needed to mitigate AI-related security risks. They underscore the need for vigilance in understanding AI systems and guarding against malicious manipulation.

The Role of AI BOMs in Cybersecurity

Helen connects the dots between the workshop's focus on AI BOMs and the imperative for comprehensive transparency in AI systems. She explains how an AI Bill of Materials (AI BOM) acts as a framework for understanding AI models, their development processes, and potential risks, allowing for effective risk assessment and response strategies.
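For readers who want something concrete: below is a minimal, hypothetical Python sketch of the kind of record an AI BOM might hold, loosely based on the categories discussed in this episode (model properties, training data provenance, process details, and open source components). The structure and field names are illustrative assumptions, not an official AI BOM schema.

```python
import json

# Illustrative AI BOM record (hypothetical fields, not an official schema).
# It captures the kinds of facts discussed in the episode: what the model is,
# how it was built, where its training data came from, and process metadata.
ai_bom = {
    "bomFormat": "AIBOM-example",  # assumption: example-only format name
    "model": {
        "name": "fraud-detector",
        "version": "2.3.0",
        "task": "binary-classification",
        "architecture": "gradient-boosted-trees",
    },
    "trainingData": [
        {"name": "internal-transactions", "provenance": "proprietary"},
        {"name": "public-fraud-corpus", "provenance": "third-party"},
    ],
    "process": {
        "mlSecOpsPipeline": "train -> evaluate -> sign -> deploy",
        "energyConsumptionKWh": 1250,  # training energy, as mentioned in the episode
    },
    "components": [  # open source ingredients, as in an SBOM
        {"name": "scikit-learn", "version": "1.4.2", "type": "library"},
    ],
}

if __name__ == "__main__":
    # Machine-readable output is the point: emit the BOM as JSON for tooling.
    print(json.dumps(ai_bom, indent=2))
```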

The Evolution of AI and Its Industry Impact

Christina reflects on the rapid evolution of AI in shaping industries and the need for professionals to adapt to AI technologies. She envisions AI as a collaborative ally in enhancing security measures, emphasizing the pivotal role of humans in monitoring and optimizing AI systems for accuracy and reliability.

Exploring Hypothetical Scenarios of AI Apocalypse

In a thought-provoking discussion, Helen and Christina speculate on hypothetical scenarios where AI could potentially pose existential threats. They stress the importance of training AI models with precision to align with human values and prevent catastrophic consequences.

Resources and Community Engagement in AI Security

Helen encourages listeners to follow her on LinkedIn for educational content and highlights the upcoming AI BOM forum hosted by CISA, inviting industry experts and enthusiasts to contribute to the dialogue.

As we navigate the complexities of cybersecurity and artificial intelligence, the insights shared by Helen Oakley and Christina Stokes illuminate the path towards a more secure and transparent digital future. From supply chain intricacies to the transformative potential of AI, the discourse echoes the need for collaboration and innovation in safeguarding our digital ecosystems.

Be sure to follow our Coverage Journey and subscribe to our podcasts!

____________________________

Follow our RSA Conference USA 2024 coverage: https://www.itspmagazine.com/rsa-conference-usa-2024-rsac-san-francisco-usa-cybersecurity-event-infosec-conference-coverage

On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS-B9eaPcHUVmy_lGrbIw9J

Be sure to share and subscribe!

____________________________

Resources

Learn more about RSA Conference USA 2024: https://itspm.ag/rsa-cordbw

____________________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Want to tell your Brand Story as part of our event coverage?

Learn More 👉 https://itspm.ag/evtcovbrf

Episode Transcription

AI BOMs, and other insights into the future of Cybersecurity and AI | An RSA Conference 2024 Conversation with Helen Oakley and Christina Stokes | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Christina Stokes: Thank you for joining us for ITSPmagazine's RSA Conference 2024 coverage. Today we have an opportunity to sit down and speak with Helen Oakley with SAP. 
 

Thank you. Earlier today, Helen did a workshop on AI, and we're going to talk more about that and learn about the workshop and what people learned there today. But first off, I'd like to introduce Helen. Helen, thank you for coming and joining us here at Broadcast Alley at RSA. I'd love to learn more about you, so tell us a little bit about yourself. 
 

[00:00:33] Helen Oakley: Absolutely. I joined cybersecurity... like, I've been in tech for a while, but I joined cybersecurity maybe a bit over 10 years ago, maybe 15 or so. Um, and I joined it out of curiosity. So I just got very curious. I was doing some pen tests to learn on my own. I had a pen test team, and it was just an experience of learning before we had all these degrees in cybersecurity, right? 
 

Um, so that's how I transitioned gradually from software development into cybersecurity. And, um, later on I, I got a full-time job in cybersecurity, working more on the application security side.

Christina Stokes: Tell me a little bit about what you're doing at SAP.

Helen Oakley: I am director of software supply chain security and secure development. 
 

So I have a team of experts, um, architects, um, and developers. So I'm part of the global team, which is a team of engineers who are implementing different practices around software supply chain security and secure development. And those practices and requirements spread across the whole engineering organization. SAP has over 40,000 employees in development, so every team, every product needs to follow standards and requirements. 
 

And we also do it in an automated way in many cases to help streamline the processes. So I'm part of the global team, but our work, uh, is distributed across the whole development organization.  
 

[00:01:58] Christina Stokes: Yes. We were just speaking about how supply chain is incredibly important, because if one area of the globe gets attacked, that impacts the supply chain for other countries and other regions. 
 

So, what you're doing, the work that you are doing is incredibly important. So, thank you for that.  
 

[00:02:15] Helen Oakley: So, supply chain is, uh, very interesting. Um, I mean, in the security world, we've been talking about supply chain for a while. Um, but right now, and in the past years, attackers are realizing, like never before, that it's the most efficient way to actually, uh, infiltrate: to get access into the system, uh, into the organization, and execute on their objectives. 
 

So this is, um... they find how many open source components have vulnerabilities and how easy it is to get into them. And there are plenty of examples, and it's only growing. Yeah. Yeah. So, a supply chain attack, uh, can, um, result in ransomware, for example, but it can also be data exfiltration or something else.  
 

[00:03:00] Christina Stokes: Are there any concerns about AI being used as a weapon in supply chain? 
 

[00:03:05] Helen Oakley: Oh, weaponizing tools. I mean, weaponizing tools... there is always some sort of a tool that's being weaponized. For example, pen test tools, right? Right. So we use them to test, but attackers use them to actually infiltrate, and weaponize them. So, similar with AI and any other technology that we may have in the future: they take advantage and they use it. And the, the most difficult part of this is that they are always a step ahead. While we're still trying to figure out how to build the technology, what features to build in, uh, attackers are already thinking about how to use this technology to actually abuse the system. 
 

[00:03:41] Christina Stokes: Absolutely, and so this workshop that you did earlier today is incredibly important, and it's coming absolutely at the right time. Can you tell me a little bit more about what brought this workshop together? And I know there are so many leaders in AI attending that workshop, helping to educate people and, and give them not just the highlights but, you know, extremely important key data that we need to help maintain safety.  
 

[00:04:08] Helen Oakley: So let me connect your previous question to this question. 
 

So, weaponizing... uh, why is this workshop important? It's not, of course, directly related to weaponization of AI, but rather security: understanding what is built into an AI system, how we protect it, and also how to keep attackers from, uh, manipulating our models, um, poisoning our data, and producing different results. 
 

So we need to look at different aspects. And, of course, it's important... uh, the separate aspect of fighting AI with AI, right? But right now, let's talk a little bit about AI BOMs, so, Artificial Intelligence Bill of Materials, and why this workshop that we had is very important. So, it's, it's one of the first ones, and there will be many more, and they are actually, um, also kicking off, um, a forum out of CISA, the government agency, to, uh, focus on AI BOM topics. 
 

So, the AI software supply chain is very important because, like any software, we have multiple... not we, um, attackers have multiple opportunities to, um, get into systems and abuse that and, and use it to their advantage. With AI BOM, we want to gain transparency on AI: how it works, what processes are done to make it, in a very high-quality, standardized, and machine-readable manner, so we can automate as much as possible. Absolutely. And extract data about the AI itself.

So what's an AI BOM, right? If, um, if you're familiar with SBOM, software bill of materials, which is like an ingredient list of the software, AI BOM is a subset. So an SBOM would also contain fields about the AI BOM; consider it like a scenario, yeah, of technology that's using AI. Maybe there are other scenarios too. But we're focusing right now on AI BOM.

So, uh, we're collecting not just the ingredient list of the AI BOM, because AI, uh, technology also has open source and other components, right? So we're, we're collecting that. But we're also collecting properties of the model. What does the model do? How is it built? What algorithms are used? But also, um, the process around that, if you're familiar with ML SecOps, Machine Learning SecOps. So it's a process on top of software development where we focus on specific activities that need to be done for machine learning or AI, AI models. So we're collecting the process details also and storing them in the AI BOM, so that we know how the model was trained and where the data is coming from. Is it proprietary data? Is it, um, is it third-party data? How has it been trained? And, and even up to the consumption of, uh, uh, energy, uh, that we're collecting.

So with that, we build transparency into how the AI is built, so we can assess risk effectively and react. Because over time, also, we can see the drift of models. Even if everything was done right in the beginning, there is a concept of drift. We need to figure that out, and we need to have transparency on this as well. And to kind of wrap it up, it's a way to use the AI BOM for risk assessment: when we, uh, use the data in analytics and different systems, that can provide it, um, effectively for us to respond, to see what's there, what risks are there, and to respond. 
 

Absolutely.  
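[Editor's note: to illustrate the risk-assessment idea Helen describes, here is a small, hypothetical Python sketch that scans an AI BOM record (shaped like the earlier example) for simple risk signals, such as third-party training data or missing provenance. Real BOM analytics would be far richer; this is only a sketch under those assumptions.]

```python
from typing import Any


def assess_ai_bom_risks(bom: dict[str, Any]) -> list[str]:
    """Flag simple risk signals in a hypothetical AI BOM record."""
    findings: list[str] = []
    for dataset in bom.get("trainingData", []):
        provenance = dataset.get("provenance")
        name = dataset.get("name", "?")
        if provenance is None:
            findings.append(f"{name}: missing data provenance")
        elif provenance == "third-party":
            findings.append(f"{name}: third-party training data, review licensing and poisoning risk")
    if "process" not in bom:
        findings.append("no ML SecOps process metadata recorded")
    return findings


if __name__ == "__main__":
    # Tiny demo with a minimal record shaped like the earlier sketch.
    demo = {
        "trainingData": [
            {"name": "public-fraud-corpus", "provenance": "third-party"},
            {"name": "scraped-web-text"},  # no provenance recorded
        ],
    }
    for finding in assess_ai_bom_risks(demo):
        print(finding)
```

Because the BOM is standardized and machine-readable, checks like this can run automatically across a whole portfolio of models, which is the automation Helen points to.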
 

[00:07:36] Christina Stokes: So tell me a little bit, and we were just talking a minute ago: how has AI changed things over the last few years? What is the impact that it's, it's having now? Not just in your area, but also, um, across the industry, and, and, and the areas of the industry that tie into the area that you're working in. 
 

[00:07:57] Helen Oakley: I see very accelerated change in the industry, like never before. Because with AI, um, especially when ChatGPT came out and everybody started to use it... Yes. Um, we see a lot, a lot of progress on that. You can see so many companies who now extend AI, AI models, open source or third-party, uh, to develop their own solutions on top of that, um, and how it changes the whole industry. 
 

So aside from, um, different companies, organizations building their tools with AI, um, optimizing the processes, learning from data... and we have a lot of data. I mean, it's a, it's a great time right now, I think, to implement those kinds of solutions. So, um, aside from, uh, tech, um, aside from organizations and companies, um, we did, um... myself and Dmitry Raidman, uh, who, uh, was co-organizing the, uh, AI BOM workshop with me... 
 

Uh, we did a little experiment. So we made AI, um, avatars of us, uh, and it was completely, um, done by AI. Uh, it generated the AI voice and also the text; we used ChatGPT to generate the text. And it took us very little effort to generate that video. It's literally, you know, 30 seconds of movement in front of the camera, and it generates your avatar, which is, um, such an amazing way to create even content. 
 

And you see a lot of content right now generated by ChatGPT, you know, postings. So I think it's shifting a lot. And what's also coming, uh, into the industry is: how do you identify that content has been generated by AI versus authentically by a human? Maybe we need to know, maybe we don't need to know, but I think it's good to know. 
 

Absolutely, yeah, I agree. So, right now, um, um, Adobe has released a little, um, marker to identify, to put, like, on a video or in any other artifacts that are generated by AI, to identify that this has been generated by AI.  
 

[00:10:02] Christina Stokes: Yeah. What, what do you see for the future when it comes to AI? And how is that going to impact what we do today, and how will that impact the work that you are doing? 
 

[00:10:14] Helen Oakley: I think we see... we will see AI, um, evolving to support us. I'd like to see AI become our ally, to really, uh, work with us to help secure our systems, and also our lives, right? Um, I know that, uh, some people are, uh, worried that jobs are going to be lost and so on. I see it a little bit differently. I think there will be a big shift in skill sets, and many, many, uh, professions that don't know AI, don't work with AI, will have to learn it somehow to optimize. 
 

So, AI can do a lot of activities that are kind of repetitive tasks, right? But we still need humans; we still need analytical minds to grow that, to grow AI and monitor it and make sure that it's still accurate. And I want to add one more fun thing about AI and the future, looking into the crystal ball. So there are a lot of fiction movies that are, um...  
 

[00:11:25] Christina Stokes: Yeah, we hear that a lot. 
 

It's just going to take over everything. Every aspect of our lives. Apocalypse. AI apocalypse.  
 

[00:11:32] Helen Oakley: And actually, this year they had a line like, you can't spell dystopia without AI, right? Right, yes. But, coming to that fun part, let's exercise, just for fun, how AI could take over the world. Right. 
 

And I really see, like, two examples. Two, two possible ways, uh, hypothetically. One is AI decides that, um: I don't like humans. They're actually not letting me do something right, and I want to eradicate humanity. Yes. Yeah. That's a fear. So let's think about why AI came to that decision. Mm-hmm. This means that somewhere in the beginning, at the very beginning, we didn't catch that drift. We didn't catch that drift in the learning, in the training process, where AI began thinking some other way. So it's really us, right now, when we're building technologies: we need to look at that drift. We need to look at the training very carefully to make sure that we direct it to support humans and to be our ally, not to eradicate us. 
 

And the second one is... nothing, um, nothing against the humans. But maybe AI wants to expand its data centers, and now it, uh, decided, um, AI made the decision that: okay, I like the geographical location of one of the big cities, and now I'm just gonna erase the city and build my data centers there, and humans are collateral. 
 

Absolutely. So, in a similar way: somewhere in the training model, in the training process, we missed that step. So, it's very important, as we build our technology, build our, um, models right now, to pay very precise, uh, attention to that. An AI BOM is exactly the mechanism that will help us with that transparency and build the security measures around it. 
 

[00:13:27] Christina Stokes: That is incredible. Um, as a final question, where can people go to learn more, either just about AI BOMs or about AI generally, if they want to come into the industry and work specifically with cybersecurity and AI?  
 

[00:13:44] Helen Oakley: So definitely, uh, follow me on LinkedIn because I post a lot about that, um, and, uh, that's my passion. 
 

Uh, yes, that's my work, but it's also my passion, yeah. Um, but also, we have, uh, created our, uh, GitHub, uh, at RSA, and I can provide a link that maybe you can put in the comments or, or, um, in the description of the video. Um, so in the GitHub, you'll have a collection. But also, monitor CISA.gov, yeah, the CISA government site, because the AI BOM forum will be kicking off very soon. 
 

So it will be published on their website, and people can register for that. So right now, registration is happening through a Google Doc, you know, for people who are in the SBOM community, but it will be published so that others who hear about it can also join. Great. And we need a lot of community input for that, because, yes, of course, there are experts, but we need more people to brainstorm this challenge and brainstorm the opportunities for AI. 
 

[00:14:47] Christina Stokes: Yes. There's power in numbers. Yes. Yes. Well, thank you, Helen, for joining ITSPmagazine here at RSA. I really appreciate it. And thank you, everyone, for tuning in. Follow Helen Oakley on LinkedIn, because she does provide some fantastic content. This has been absolutely fascinating. Thank you so much, Helen. 
 

Thank you for having me here. You're welcome. Thank you. Thanks. Bye. Thank you, Helen. That was awesome.