ITSPmagazine Podcast Network

It's Just Software, What Could Possibly Go Wrong? Exploring Deterministic GenAI and AI Trust Cards | An OWASP AppSec Global Lisbon 2024 Conversation with Isabel Praça, Dinis Cruz, and Rob van der Veer | On Location Coverage

Episode Summary

Join Sean Martin as he dives into a lively discussion with AI and cybersecurity leaders about the intersection of artificial intelligence and application security at the upcoming OWASP AppSec Global conference in Lisbon. Discover how AI is reshaping cybersecurity practices and why interdisciplinary collaboration is crucial for navigating this evolving landscape.

Episode Notes

Guests:

Isabel Praça, Coordinator Professor, ISEP - Instituto Superior de Engenharia do Porto

On LinkedIn | https://www.linkedin.com/in/isabel-pra%C3%A7a-07b86310/

At OWASP | https://owaspglobalappseclisbon2024.sched.com/speaker/icp

Dinis Cruz, Chief Scientist at Glasswall [@GlasswallCDR] and CISO at Holland & Barrett [@Holland_Barrett]

On LinkedIn | https://www.linkedin.com/in/diniscruz/

On Twitter | https://twitter.com/DinisCruz

At OWASP | https://owaspglobalappseclisbon2024.sched.com/speaker/dinis.cruz

Rob van der Veer, Senior director at Software Improvement Group [@sig_eu]

On Linkedin | https://www.linkedin.com/in/robvanderveer/

On Twitter | https://twitter.com/robvanderveer

At OWASP | https://owaspglobalappseclisbon2024.sched.com/speaker/rob_van_der_veer.1tkia1sy

____________________________

Hosts: 

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

Episode Notes

In this episode of On Location with Sean and Marco, host Sean Martin embarks on a solo adventure to discuss the upcoming OWASP AppSec Global conference in Lisbon. He is joined by three distinguished guests: Isabel Praça, a professor and AI researcher; Dinis Cruz, an AppSec professional and startup founder; and Rob van der Veer, a software improvement consultant and AI standards pioneer.

The episode kicks off with introductions and a light-hearted comment about Sean’s co-host, Marco Ciappelli, who is more of a psychology enthusiast while Sean delves into the technical aspects. Sean expresses his enthusiasm for the OWASP organization and its impactful projects, programs, and people.

Each guest contributes unique insights into their work and their upcoming presentations at the conference. Isabel Praça, from the Polytechnic of Porto, shares her journey in AI and cybersecurity, emphasizing her collaboration with the European Union Agency for Cybersecurity (ENISA) on AI security and cybersecurity skills frameworks. She underscores the importance of interdisciplinary expertise in AI and cybersecurity and discusses her concept of "trust cards" for AI, which aim to provide a comprehensive evaluation of AI models beyond traditional metrics.

Dinis Cruz, a longstanding member of OWASP with extensive experience in AppSec, brings attention to the challenges and opportunities presented by AI in scaling application security. He discusses the importance of a deterministic approach to AI outputs and provenance, advocating for a blend of traditional AppSec practices with new AI-driven capabilities to better understand and secure applications.

Rob van der Veer, founder of the OpenCRE team and a veteran in AI, elaborates on the integration of multiple security standards and the essential need for collaboration between software engineers and data scientists. He shares his perspective on AI’s role in security, highlighting the pitfalls and biases associated with AI models and the necessity of applying established security principles to AI development.

Throughout the episode, the conversation touches on the complexities of trust, the evolving landscape of AI and cybersecurity, and the imperative for ongoing collaboration and education among professionals in both fields. Sean wraps up the episode with a call to action for data scientists and AppSec professionals to join the conference, either in person or through recordings, to foster a deeper understanding and collective advancement in AI-enabled application security.

Listeners are encouraged to attend the OWASP AppSec Global conference in Lisbon, where they can expect not only insightful sessions but also vibrant discussions and networking opportunities in a picturesque setting.

Key Questions Addressed

Be sure to follow our Coverage Journey and subscribe to our podcasts!

____________________________

Follow our OWASP AppSec Global Lisbon 2024 coverage: https://www.itspmagazine.com/owasp-global-2024-lisbon-application-security-event-coverage-in-portugal

On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllTzdBL4GGWZ_x-B1ifPIIBV

Be sure to share and subscribe!

____________________________

Resources

Trust Cards for AI (Session): https://owaspglobalappseclisbon2024.sched.com/event/1VTaD/trust-cards-for-ai

Deterministic GenAI Outputs with Provenance (Session): https://owaspglobalappseclisbon2024.sched.com/event/1VTaO/deterministic-genai-outputs-with-provenance

AI is just software, what could possibly go wrong? (Session): https://owaspglobalappseclisbon2024.sched.com/event/1VTaI/ai-is-just-software-what-could-possibly-go-wrong

Learn more about OWASP AppSec Global Lisbon 2024: https://lisbon.globalappsec.org/

____________________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Want to tell your Brand Story as part of our event coverage?

Learn More 👉 https://itspm.ag/evtcovbrf

Episode Transcription

It's Just Software, What Could Possibly Go Wrong? Exploring Deterministic GenAI and AI Trust Cards | An OWASP AppSec Global Lisbon 2024 Conversation with Isabel Praça, Dinis Cruz, and Rob van der Veer | On Location Coverage with Sean and Marco

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And hello everybody you're very welcome to a new on location episode here on ITSB magazine. I'm your host Sean Martin, I, uh, flying solo, oftentimes Marco, uh, Chipotle, my co founder will join me, but he doesn't, he doesn't like AppSec. I don't know why I'm kidding. No, he, he's the, uh, he's the psych guy. I'm the tech guy. 
 

And, uh, and I get to do all the fun, all the fun technical bits. And I really wanted to do OWASP. I'm a huge fan of the organization and the. And the, uh, projects and the programs and the people and the outcomes, uh, that come from it. And, uh, I wanted to be in Lisbon as well. So, uh, so I decided let's, let's talk about AppSec and let's talk about OWASP AppSec Global. 
 

In Lisbon. And who better to do that with than, uh, some of the folks who are sharing stories as keynotes as part of the, part of the conference. And I'm thrilled to have Isabel Dennis and Rob on. How are you? Y'all?  
 

Isabel Praça: [00:01:00] Super,  
 

Sean Martin: super, super. Here we are. We're going to have some fun in Lisbon. We're going to do AppSec and DevSecOps and, and, uh, Web stuff and who knows what else we're going to get into Um, and that's what we're going to talk about now before we get into the topics though a few words from each of you about your role what you're up to and uh, And i'll say congratulations to all of you for uh, getting a getting a speaking spot there isabel. 
 

What are you up to?  
 

Isabel Praça: Hello Well, I'm, I'm happy to be here and we'll be more happy to be there, of course. Um, so I'm, um, I'm an academic, I am a professor and a researcher. I like open science and I like a lot AI that I'm, um, working since ever. And from several years up to now, I work on AI for cybersecurity solutions and on the security of AI. 
 

I'm also collaborating with [00:02:00] Anisa, the European cyber security organization on the topics of AI security and the cyber security skills framework. I come from the Polytechnic of Porto. Also nice to visit.  
 

Sean Martin: Yes, maybe I'll take a train ride up and take a tour. Well, thanks for taking the time, Isabel. Denis?  
 

Dinis Cruz: So, I was born in Lisbon, so I'm actually Portuguese, right? 
 

So this is like coming home to me. It's really nice. Although I've been in London for 28 years. Uh, I've also been doing AppSec forever. I was one of the early sort of, you know, members of OWASP, was a board member. I involved a lot of things. I think I still really helped with the last Portugal chapter for a while, some conferences there. 
 

So I've done a lot of stuff. And, uh, and now I'm actually doing a startup on the cyber, the cyber boardroom, uh, which is, you know, I, I, that my sister jobs. And now I really want to figure out how to do. Uh, AI and scaling and figure out how to get it under control and get a lot of value from it. [00:03:00] And I'm very excited to, you know, to go to the, uh, AppSec in Portugal. 
 

Right. Which is going to be an amazing conferences that we're going to talk about.  
 

Sean Martin: Absolutely. And, uh, good to have you on again, Vinicius and Rob, you've been on as well. Good to see you again. What are you up to?  
 

Rob van der Veer: Great to see you, Sean. Yeah, what I'm up to? Well, see, uh, see friends again at the AppSec conference. 
 

I look very much forward to meeting the SAM team of who I'm a sort of part, uh, meeting the OpenCRE team, uh, OpenCRE that I founded, uh, with the integration of multiple security standards in one resource, the AI exchange team, so three, all those projects that are dear to my heart. Um, a lot of today's work, of course, this has to do with AI and it also has to do with my background. 
 

I have 32 years experience in, in AI. So I was doing AI way before it was cool and way before you could earn any money with it. But it was a lot of fun. I learned a lot [00:04:00] and I like to dispense what I learned, uh, in standard writing, in, uh, doing keynotes and regarding standards, uh, I worked on the ISO 5338 on AI engineering. 
 

And currently I'm working on the AIX security standards. Pretty big, uh, big challenge. I have to say.  
 

Sean Martin: I can only imagine because I can never do it. So I'm thankful that you, that you are doing it, uh, Rob, along with the team there, um, it, it's not lost on me that, uh, the three of you, each of your sessions touches or leads with, or is all about AI, um, at an OWASP AppSec Conference. 
 

So, um, clearly you're all working in that field. But why do you? I don't know if you have any insight. Um, why? I have some ideas. But why? Why? A. [00:05:00] I is so prevalent as keynotes in this year's conference. 
 

Rob van der Veer: Well, uh, of course, AI is prevalent everywhere, so it's on everybody's minds. Everybody's struggling how to make sense of it, uh, how to work with it. So I guess, uh, when you organize a conference, you want to cater to the needs of the audience, which is guys help us. With how we need to deal with this topic of AI, give some guidance. 
 

And that's what we're going to do.  
 

Sean Martin: And I guess that's probably a better way to put my question. Are the, are the developers engineers asking for help or are we saying they need help? 
 

Isabel Praça: I think that the landscape is so huge and so emergent topics that come to the light so quickly and so much topics that everybody is in need of help, I would say. I, as a [00:06:00] researcher, I usually have much more questions than answers. And I think that all the community is like this at this point. And, uh, yeah, let's see what we can bring to the table. 
 

to, to dismiss, demystify and to, to try to organize some of the topics and some of the ideas that are around these. That's, that's what I hope at least.  
 

Dinis Cruz: Yeah. It's an interesting one because there's actually multiple ways to look at this. In one way, this is another technological revolution. And I think another massive one that is actually also been driven by the top. 
 

It's very weird, you know, for, uh, you know, to be an exec meetings and the actual senior leaders, I say, Hey, where's your strategy in a path? They're like, well, we don't want cloud or who cares about it? Who cares about mobile? Who cares about that stuff? Right. But I, I'm not, I'm not optimistic on this. I think there's an element that needs to be controlled. 
 

I think there's an element that, you know, people are not understanding the risks and I think we need to explore that, but I also feel that. This allows us to solve problems in [00:07:00] AppSec that in the past, I could never see how we scale. So the way, the way I kind of look at this is that I remember going from AppSec from nobody knew about it, that nobody and just cared about it. 
 

Nobody kind of knew why it was for to the, all the way that, yes, we know it's very important. And I think we got to the point where we know how to do it right. The problem was scale, right? There was a moment maybe about 578 years ago, where a lot of the key patterns were in place. The main problem was how to scale it, how to actually do this at scale, where there's more and more stuff to be developed. 
 

There's more and more systems interconnected, et cetera. And I think what GNI gives us, and I think on the understanding side of things, the interpretability side of things, it gives an ability to understand data, understand systems, understand how things work, that in the past. We're either impossible or too expensive to code on top of it, right? 
 

And then you have all the other stuff, right? That the attack surface is going to grow [00:08:00] dramatically. The, I think, deepfakes and the attacks are going to go another level. Um, although I do think that these will tip. the hands of the defenders. But from the attackers, this is also an insane capability, right? 
 

So it's kind of like it's a revolution that's happening at all levels. And then you add to the mix the fact that the biggest companies in the world are not ignoring it. They are investing on it, right? Where, if you think about the last revolutions, all the major companies in the world at the time Couldn't care more for it. 
 

Right. Where in fact, they actively discourage it, right? They almost view the new technology revolution as a threat where now the new major companies, yes, they see as a threat, but also they're investing heavily on it, and that means that the acceleration and the hype, of course, is even higher, right? And it actually works, which is another reason why it's so powerful. 
 

Rob van der Veer: Yeah. Sean, so I work for a software improvement group. We help clients, you know, create better software and they have a lot of questions or may I, but. [00:09:00] Almost only from clients that are already somewhat mature with AI. The ones that are just starting, they're just curious. They want to experiment. They don't need any help. 
 

And then they run into issues to take it to the next level, you know, to go into production where they. You know, you need to start worrying about, about security and maintainability and things need to move out of the lab. That's where they, especially are calling for help.  
 

Sean Martin: And I, I did a, I guess a panel, uh, on this topic the other day. 
 

And one of the, one of the things we talked about was kind of to the title of your session, Rob, AI is just software, right? And then the other point we talked about is it's really just data as well. Which is code now, right? With data. Exactly. It's code now. And so it raises, raises a question of, so are we able to apply what we [00:10:00] know to this world of AI enabled apps? 
 

Yep. So, or are we, are we like twisting our mind so much because it's so new? That we're kind of missing the point that it is just, and I'm interested to hear how you're going to speak to this, Rob, that it is just software and data. So we, if we just follow the general rules, we already know, and then to Dennis's point, if we scale it, scale our development in a safe and secure way, and maybe it might be, I'm not leaving yours out on purposes, but we're going to get there in a second, but I don't know, what are your thoughts on this? 
 

It's just software. It's just data. We know how to do this stuff. Maybe we have a chance to do it better now.  
 

Rob van der Veer: Yeah. It's what I often repeat to prevent the organization from going, going down a rabbit hole and building their own framework for AI, because they see this as something magical, weird. That requires something special. 
 

I also see standard makers write entire frameworks for AI where 90 percent is actually, you know, [00:11:00] the description of risk analysis and the description of red teaming as we know it, of course, things are weird in AI and they need special treatment, but. Part is relatively small. The rest is, you know, familiar stuff, risks, threats, controls, security awareness. 
 

Um, so yeah, try to start there and see what is special about AI and then focus on that. That would be my recommendation.  
 

Dinis Cruz: Yeah, I actually did a presentation to an OWASP crowd and about actually how to break and how to not, not to get apps break. I don't, HNI broken, but, and what I was basically saying is exactly that, is that You know, ultimately, even if you take the top 10, right, most of it is actually absent the top 10 for Jenny. 
 

I write, uh, most of it is this good application, security, good network security, good risk management, et cetera. In fact, I would actually argue that this needs everything we were doing at OS even more, right? Because if you think about [00:12:00] it, the AI as an API, right? You know, request goes in, comes out in at the same time is the most vulnerable API that you have. 
 

Right. Because literally data is code. It's like there's even no segmentation. Like we don't even know what a buffer overflow for, you know, a gen AI looks like. Right. But the worst part is that not only that, that is also your worst inside the threat, because when that thing pops, right, that is now attacking you from the inside. 
 

Right. So if you think about it, everything we're doing in AppSec, it's needed times 10. Right. I actually think that what, and Rob, I feel that There's a, again, a path there, which is like, we don't need to reinvent the wheel. Most, in fact, I would argue apart from prop engineering, which is actually a step backwards because we went from trying to separate code and data and everything we did all the way to the point where now data is code, right? 
 

Which is pretty problematic. Just about everything else is just pure AppSec. Information management, risk [00:13:00] management, cybersecurity, risk, you know, all, all the practices that are very mature right now from a, kind of a defense point of view, right?  
 

Sean Martin: So Isabel, I want to get your, your perspective on this. Cause I was chatting with Jim Manico the other day, he's doing training at, at uh, OWASP in Lisbon and many, many, many moons ago, I was a quality quality assurance engineer where I was responsible for. 
 

defining how the, how the thing should work and, and validate that it's doing what it's supposed to. And I was doing that white box, black box. And the point I was making in the conversation with Jim is I could create a fairly finite set of cases that I could test against. And to Dennis's point of data as code now, pretty much anything can happen right in an AI enabled app and you don't know what you're putting in is good if you're getting out as good if it's being manipulated along the [00:14:00] way, and we're to your session with the word trust in there are we able to trust. 
 

Our apps and I, I look at it not just from a app function perspective, but a business logic perspective, right? We're using this stuff to make business decisions, letting, letting people create accounts, letting people transfer money, letting people do whatever, um, based on this data. So your, your thoughts on all that. 
 

Isabel Praça: Yeah. Okay. So, so back to the previous topic. I, I, I believe we cannot for sure forget that AI is on the top of an infrastructure that for sure needs to be secured by default. Okay. So all the concerns we have should be, uh, insured. And then, uh, we focus on what exactly AI brings that is different, that is new and the new types of attacks it can bring. 
 

Um, and from the trust point of view, uh, trusting it's a process. It's the same when we are in a [00:15:00] new organization, when we meet some new people, there is a process to start trusting, uh, the, the others. And what I see about AI is that, um, it's a process that we need to look into. Um, Not just, uh, from the point of view of the traditional metrics we use to evaluate what we can get from AI, uh, that typically we wish to have a wonderful accuracy, the best recall, F1 score and, uh, et cetera, but also from the point of view of, um, And if, um, I can get, um, those good metrics, those standard metrics, um, in, in good values, but I can have a more robust models. 
 

And if I can get, um, a way to understand what the model is saying. So what I want to bring here is there are a lot of properties that we want AI [00:16:00] to show and, and we want it to respect to start and to, to. Be able to build this trust relationship on the on the models. And for that, there are some, uh, some, um, yeah, some research on the topic. 
 

And I will try to as much as possible, bring all the research that I know about it. And how can we really identify and what other metrics are? What other, um, classifications or what other ways do we have to classify how good is an AI model?  
 

Dinis Cruz: So Isabel, your, your, your thought, your talk is actually trust cards for AI. 
 

Can you talk a bit more about cards?  
 

Isabel Praça: I will, what do I want to bring here? Yeah, the, the, the cards will be like a, Uh, an identity card, not from the identity, uh, um, point of view from on cyber, [00:17:00] but, um, to show not usually we look into a model has, okay, it has, it is an ensemble learning. It is, it has these parameters. 
 

Um, I don't want to look at it just like that. I don't want to cross check a model with data and just say, we got this accuracy. We got this recall. I want to show. We have this level of robustness and this level of explainability and this level of privacy. And so based on this kind of data cards and this kind of data models that I intersect, I can bring more trust or I can more easily go into a trusted relationship with that or not. 
 

Rob van der Veer: Interesting. Isabel, will you also, um, go into data poisoning? Um, um, the relation between data poisoning and risk and, and trust in your trust card?  
 

Isabel Praça: Of course I need, I need to bring those [00:18:00] concepts to show what can we do, for example, to increase a model robustness. to check how robust it can be or not. Uh, so those concepts will I guess come naturally and and sometimes I also feel that we need to put some simple clarifications and simple organization on the landscape because I guess that all the people speak about the buzzwords related to AI and from an academic point of view um I believe that we need to have people that come with background that is strong on both domains. 
 

Like we have, um, super good people on cyber and we have super good people on AI. We need to have profiles and to educate the, the, those profiles to come with strong and solid background on the intersection of these two worlds.  
 

Rob van der Veer: Yeah.  
 

Isabel Praça: That's, that's, uh, also, um, a goal I have as a professor on [00:19:00] these fields. 
 

Rob van der Veer: We talked about this Isabel when I visited your, your institute.  
 

Isabel Praça: Yeah, exactly.  
 

Rob van der Veer: That this is, this is sort of a rare combination of, uh, of expectees. And I guess Sean, that also ties into your question. So why does this APSEC conference have so many talks about AI? You don't find. Uh, AI security people around very easily, uh, they're, they're quite rare. 
 

Uh, and it has to do with, um, that is relatively odd that these things are combined for historical reasons, but they're also different skill sets. Uh, I mean, from programmer to hacker, I mean, that's sort of. That, that's a shorter path, I would say then from a data scientist to, to a hacker, because data scientist is mostly interested in a working model. 
 

Whereas a programmer is also, you know, educated to make sure that a system is, is secure and reliable. [00:20:00] So the distance topic wise, I guess, at least as my theory is, uh, is bigger. Why? And that's why you probably don't see as many people combining these.  
 

Dinis Cruz: Which actually, I think, brings an interesting opportunity because I do think that when somebody grows in cybersecurity, there's a, there's a very healthy Skepticism. 
 

Very healthy, you know, sort of questions that you ask. Well, how does it work? How does the actual operate? What's really happening in the hood that you can bring into it? Right? And so it's about one of these. We very soon to connect. So my talk is an area that I kind of been evolving into, which is about, you know, having deterministic Gen AI outputs. 
 

So like the output of Gen AI with provenance. So what I'm now more and more is I'm staying away from the sort of creation element of NLMs and the sort of the hallucinations. So I kind of view that that is when you do want it. Uh, creative, a level of creativity, a [00:21:00] level of, you know, I would say hallucinations. 
 

That's fine. But you know, the practice and the use cases I'm after are very deterministic. So it's almost about taking away the models. The whole thing is just saying, I just want to understand how they actually work and also understand the parameters and also operate on them where you bring your own content, where you bring your own logic and you almost use the models for reasoning, not for content creation. 
 

And you also use the models for translation. Which they're actually really good at, right? But not just translation from a language A to B. They're very good at translating, you know, a culture from one culture to another. One personality to another personality. One, you know, type of individual to another type of individual. 
 

And also those are things again, we can feed good examples to it. So I'm kind of staying away from custom models, staying away from any kind of of that world, which I think is going to be a horror story because we nobody knows how [00:22:00] he actually learns. Nobody knows that he actually holds data. I think I think it's very risky when you when you rely on that black box to do a lot of your logic. 
 

So I kind of want to make sure my my Jedi world is deterministic and also the provenance is interesting because It's about having a chain of trust of where the data is. And the interesting thing about a gen AI model is that that breaks the chain of trust. If you think about it, right, you know, because we need to be able to connect the data with where it came from. 
 

So it's an interesting gap that you have with a gen AI, both on creating of images and printing of content where they break in a way, the chain of trust. So then how do you glue that back together, right? Where you can say, well, this output is verified by this, this, this. So, uh, and I think  
 

even on that one,  
 

I feel the gen AI has so much value that it can give to us that we don't, even in the short term, we don't, we don't need the rest, right, just getting that right, allow us to become [00:23:00] crazy, more productive. 
 

Especially on upset side of things and at other parts.  
 

Isabel Praça: Yes, there are several interesting use cases of Gen AI on the cyber domain. And to capture the context, I think is very important and useful. But I don't think we should focus just on Gen AI. I think, um, a good, um, view on the landscape and how, what are the concerns how to create the trust, uh, also needs to take into account the more, let's say, discriminative, uh, type of AI and not, um, just forget it exists because I, I believe a good combination of both can make a difference and we can still rely on them for several of the, the problems and the use cases. 
 

Rob van der Veer: Yeah. I will, I will also go into Trust issues with AI. I will talk about packets hallucinations, which I think are [00:24:00] quite fascinating. And they show that LLMs can have very different model of the real world based on, on the examples that they've seen. I'll talk about criminal profiling that I did in the nineties, uh, on, you know, volume crime, but also on, uh, individual criminals. 
 

Which was very successful then, but is going to be illegal, uh, within a couple of months, uh, with the, uh, EU AI Act. It almost was canceled because I did that in the nineties. Long story. Um, it's, uh, the cause, uh, AI of, uh, the, uh, the falling of the Dutch government. Uh, recently, uh, where there was, uh, an affair around an AI application. 
 

Um, there was a recent, uh, situation in Netherlands also with bias. That was actually unavoidable. So even when designing the whole idea, you could already, the trust card would show that you cannot trust the output of this. So [00:25:00] it was no use going ahead at all. Now we'll touch upon sort of an elephant in the room in, um, in cloud AI. 
 

Also about trust, uh, where nobody's speaking off, but there is a situation where client data actually leaves the virtual private cloud. Nobody talks about it. Nobody wants to know, but it's threatening security. So we're dealing with a situation where we find AI too good to dismiss, which makes us make some concessions and sacrifices that we think should do explicitly, and we're currently doing it implicitly, unfortunately. 
 

Dinis Cruz: But that will blow up, right? Like, you know, those things. You know, there's a ton of startups. There's a ton of companies who do they're doing that kind of stuff and you know You're gonna have a crazy amount of horror stories coming along, right? Um, and also I think these days the margin of error that you have is much lower, right? 
 

Like I [00:26:00] think five ten years ago You could do that You can go to the cloud and get away with crazy amount of mistakes because the business models We're not very evolved. I think these days there's such a sophistication on the commercial business models or the exploitation that I think a lot of people are going to find that very fast. 
 

That when one of those models go wrong, the exploitation goes from, from little to a lot, right? Because, and these are not even the attackers using AI to attack, which is, they're also going to do it right. But there's a, there's a much more sophisticated business models on this. And, and there's an interesting element here that I find that, you know, and Isabella, I think your point that. 
 

We need to look at all the models. Of course we do. Right. And I actually think that we really had a massive problem with bias and discrimination with normal models. The difference is that because they didn't hallucinate. That badly and because it was like small margins and it was almost like yes There were a lot of people affected but they all like voiceless They got away with it. 
 

And again, I want to get back to the point where [00:27:00] you know We we need to have the term in a deterministic and and provenance on how we use these models to be honest We need to do that with software Like, Rob, one of the things you say about is all software. You have to remember that most of our software is broken, right? 
 

Like most of our software hallucinates as badly as gen ai, like most of our software, that it's developed by people that are not over there. The companies don't maintain it. Is is, is a, is is a pageant of a patch. Of a patch. It cannot just works. Right. So, so I like, you know, the idea that we now have the opportunity to clean a lot of these, where we're going to not just swipe the new AI stuff that needs to be under control, but also a lot of our historical stuff that we need to understand because the world depends more and more and more on software. 
 

Sean Martin: And you use the word understand and I'll, uh, I'll start to bring us to a close here. I'm sure we could talk for multiple hours around each of our topics. Um, because. [00:28:00] What I love about OWASP are all the projects, and especially what we're talking about here today, we're talking about bringing data scientists and developers and And, uh, security folks and hopefully some, some business folks as well. 
 

Right. Interested in this stuff and in a world where we're abstracting everything and we're trusting the abstraction to deliver what we want and what we expect. It's important that we have these conversations and these presentations to unpack and expose and highlight and discuss and, and, and work through the differences and really get to a point where we know kind of to your point, Dennis, we know we're doing what we want with this stuff. 
 

We're not just taking it blindly. And, um, so each of your sessions. Clearly are going to present research and other work that you've each done and bring conversations together. So for me, it's incredible and [00:29:00] I'm happy to be a part of it as helping to tell the story of each of yours, each of your sessions. 
 

I'm going to give you each a moment to. Maybe do a call to action for who you want to be in your session and what what you hope they take away from it, Isabel.  
 

Isabel Praça: Hmm. I was wondering, um, how many, um, data scientists will we have there? Um, because I would like, uh, these different profiles to be in the session. 
 

I think all the sessions. I don't. Wanted to be just a cyber community because like I said before, I think we need, uh, these two topics and these two types of skills to be, uh, together in a more, um, strengthened way. So I hope that we can speak to people that have cyber concerns. As much as we will speak to people that are dealing with data [00:30:00] that are building models and they need the security of the work they do. 
 

So for me, it's really to speak about AI for cyber people and to speak about security for AI people. That's what I hope, uh, could be some, um, the audience there and. I hope to contribute to, to clarify some of these and not just to contribute also to challenge. Like I said, as a researcher, I'm never satisfied. 
 

I always have much more questions than answers.  
 

Sean Martin: I love it.  
 

Rob van der Veer: Rob. Yeah. I present a full day training on AI security the day before the conference. I have 50 people attending and three data scientists. So, yeah, I think, I think that's what you can expect at an APSEC conference. But I would indeed really would like to call data scientists, if they can, to come to Lisbon.[00:31:00]  
 

Um, and if not to watch the recordings of the sessions that are going to come out, because, uh, it's time that software engineers and data scientists, uh, you know, um, work together and learn from each other. That's what I hope. So  
 

Sean Martin: that's two calling all data scientists. Dinesh, do we have three? Well, I hope for everybody. 
 

Dinis Cruz: I think OWASP is an amazing community, right? OWASP is an amazing culture. It's an amazing environment. It's very welcoming. It's been doing, you know, basically this problem for 25 years, almost 30 years, I guess. And, uh, and I highly recommend anybody to come, right? Because you're going to meet some amazing individuals. 
 

You're going to learn a lot. It's great for your career. It's great for your professional, uh, networking is great personally. Some of our best friends I met through all of us, right? You can publish your research on it. It's, it's literally an insane, uh, Great community and it's still going strong. And he, it is in, in a way, it has kept its soul, which is really [00:32:00] cool. 
 

Right. And it's really great to see a huge amount of innovation coming out of OAS after all these years. And, uh, and I think, again, this is gonna be another big, but gonna be another big player in that, in the next phase. So I, I, I think anybody should come to the oas to the OS conference in Lisbon and it's a, it's a great conference, it's a great venue. 
 

It's in a great place. Great food, great people, you know, great presenters, right? And, uh, not just us, right? I think there's, if you look at the people that are presenting, there's some insane talks and, and actually, and the lobby con, which is what the people you meet in the lobby is still one of the best in the world, right? 
 

So, you know, I highly recommend anybody to come to Lisbon, uh, in two weeks from now. Right.  
 

Sean Martin: That's right for the pasta, the nata, and, uh,  
 

Dinis Cruz: Oh yeah, the best place to be invented is like 500 meters away, right? From, from the venue, right? That we, uh, we are in, right? It's, uh, and it's by the bridge, you know, the big bridge in Lisbon [00:33:00] is literally have a view of the bridge. 
 

You got, you got the sea in front. Uh, it's, uh, it's an amazing venue for us to be in. And the weather should be really good, right?  
 

Sean Martin: So it should be nice. Well, I'm, I'm super excited. And I want to, again, uh, congratulate each of you on, on, uh, having an opportunity to share your thoughts and insights and research and everything else. 
 

So hopefully a very diverse group. And I want to thank you for sharing your time here with me today. Hopefully the call to action is. Come all data scientists and, uh, and come all AppSec folks. Yeah, that's all. Let's all meet together. So let's do it. Let's have some fun. Have a good week. They're a couple, two days of training and, uh, three days of sessions. 
 

Let me, I closed the main page. Let me open the main page is 24th through the 28th, uh, and it's in the Lisbon [00:34:00] Congress center. So, uh, thanks everybody for. Listening and watching this episode. And I do hope to see you in Lisbon, or as Rob said, if you can't make it, please do connect with, uh, with folks, this group, first off and foremost, and then, uh, listen to all the other sessions as well as you can. 
 

Perfect. Thanks everybody.  
 

Isabel Praça: Enjoy. 
 

Rob van der Veer: Thanks, Sean. See y'all.