ITSPmagazine Podcast Network

AI-Fitness and AI-Wellness and Deploying an Effective DevSecOps Team – What’s the Recipe for Success? | An Infosecurity Europe 2024 Conversation with Kevin Fielder | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

Dive into a riveting discussion on the fusion of AI and software development with Kevin Fielder, CSO for NatWest Boxed and Metal, alongside hosts Sean Martin and Marco Ciappelli. Explore the ethical, technical, and futuristic implications of AI in the realm of information security and beyond, unveiling the delicate balance between innovation and responsibility.

Episode Notes

Guest: Kevin Fielder, CISO, NatWest Boxed & Mettle

On LinkedIn | https://www.linkedin.com/in/kevinfielder/

____________________________

Hosts: 

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

Episode Notes

In this episode of On Location with Sean and Marco, hosts Sean Martin and Marco Ciappelli engage in an insightful discussion on the intersection of artificial intelligence (AI) and software development, specifically in the realm of information security. The conversation features Kevin Fielder, CSO for NatWest Boxed and Metal, sharing his expert insights and experiences. The trio dives into the potential risks and rewards of integrating AI with software development, touching upon the inherent challenges and opportunities this fusion presents for the future of technology and security.

The episode opens with a dynamic exchange on what it means to combine AI and software development, sparking a debate on the potential of AI to improve or complicate software development processes. Marco Ciappelli humorously inquires about the concept of a 'black box' in AI, prompting a profound exploration of the reliability and transparency of AI systems.

Kevin Fielder provides a comprehensive overview of his current role and the innovative projects under his stewardship at NatWest boxed and metal. He eloquently describes the endeavors to leverage cloud-based banking and AI to deliver enhanced banking services to small businesses and non-banking businesses alike. Fielder's insights into 'banking as a service' and the ethical considerations surrounding AI deployment in the financial sector stand out as key discussion points.

A significant portion of the conversation centers around the ethical dilemmas and technical challenges posed by AI, including data integrity, the potential for AI-powered systems to exhibit biases, and the importance of designing AI with security in mind from the outset. Fielder articulates concerns about the rapid advancement of AI technologies outpacing the development of ethical guidelines and security measures, highlighting the critical need for a balanced approach to innovation.

The hosts and Fielder ponder the future of AI, reflecting on scenarios ranging from utopian visions where AI alleviates human toil to dystopian outcomes where AI autonomy leads to unforeseen consequences. This speculative dialogue sheds light on the philosophical and practical implications of AI's role in society and the importance of responsible AI development and deployment.

As the discussion winds down, the episode shifts focus to Fielder's upcoming presentations at the Infosecurity Europe conference in London. He shares his anticipation for engaging with the conference attendees and emphasizes the value of open dialogues about AI, security, and the future of technology. This episode not only provides a platform for thought-provoking discussion on AI and information security but also underscores the importance of community engagement and knowledge sharing in navigating the complexities of modern technology landscapes.

Be sure to follow our Coverage Journey and subscribe to our podcasts!

____________________________

Follow our InfoSecurity Europe 2024 coverage: https://www.itspmagazine.com/infosecurity-europe-2024-infosec-london-cybersecurity-event-coverage

On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllTcLEF2H9r2svIRrI1P4Qkr

Be sure to share and subscribe!

____________________________

Resources

 Deploying an Effective DevSecOps Team – What’s the Recipe for Success?: https://www.infosecurityeurope.com/en-gb/conference-programme/session-details.3783.219354.deploying-an-effective-devsecops-team-%E2%80%93-what%E2%80%99s-the-recipe-for-success.html

AI-Fitness and AI-Wellness: NatWest Boxed and Mettle CISO's Thoughts on Safe AI Use: https://www.infosecurityeurope.com/en-gb/conference-programme/session-details.3783.219536.ai_fitness-and-ai_wellness-natwest-boxed-and-mettle-cisos-thoughts-on-safe-ai-use.html

Learn more about InfoSecurity Europe 2024: https://itspm.ag/iseu24reg

____________________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Want to tell your Brand Story as part of our event coverage?

Learn More 👉 https://itspm.ag/evtcovbrf

Episode Transcription

AI-Fitness and AI-Wellness and Deploying an Effective DevSecOps Team – What’s the Recipe for Success? | An Infosecurity Europe 2024 Conversation with Kevin Fielder | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: . [00:00:00]  
 

Marco.  
 

Marco Ciappelli: Sean.  
 

Sean Martin: What, uh, what's it, what's it mean when you try to mix AI and software development together?  
 

Marco Ciappelli: I don't know. A black box?  
 

Kevin Fielder: Well,  
 

Sean Martin: to be honest with you, I'm going to take a wild guess that there's probably something on a plane that was generated by a machine that the plane uses to get us from A to B, which we'll be taking when we get to London, right? 
 

We're both flying to London for security.  
 

Marco Ciappelli: Are you trying to scare me to fly now? I am. No. No.  
 

Sean Martin: I think you did. It depends on how you think about it, right? Humans, humans may create errors. Machines maybe can check themselves and not create as many errors. Yeah. But then there's always the [00:01:00] humans created the data that the machine is using to. 
 

So I don't know. It's this, it's this infinite circle of.  
 

Kevin Fielder: But that's what it comes down to, isn't it? So much of AI stuff, you know, and you've seen, you've seen the disaster of some of the AIs they've put on the internet and said, go learn. And they've become very bad very quickly because there's so much rubbish on the internet. 
 

Um, so the data set you train with is always critical.  
 

Sean Martin: Exactly.  
 

Marco Ciappelli: Yeah.  
 

Sean Martin: Exactly. Well, for people wondering what the heck this is about, this is our, uh, this is our pre event coverage. Chats on the road to InfoSecurity Europe in London. Woohoo! We're, uh, just about a month away as we record this and, uh, as you know, for when Marco and I cover on location. 
 

We, we try to find the coolest people with the coolest topics. And, and today that's Kevin Fielder. And we're going to be talking about AI, if you hadn't guessed, of course, and a DevSecOps. So are we building the code the right [00:02:00] way and using technology to help us achieve the best outcomes? Or are we getting lazy, letting, letting it do it for us without any checks and balances in place? 
 

We're going to find out. Kevin, how are you? Good. Thank you. How are you guys doing? Doing great. It's gonna be good to see you in London again. Um, they're keeping you busy. You have two sessions it seems, uh, which we'll get into.  
 

Kevin Fielder: Yeah, really, really looking forward to it. I think it's gonna be a really good year this year. 
 

So, looking forward to, obviously, doing some talks, but also learning and networking and all the other good things.  
 

Sean Martin: And I think the, uh, yeah, the conference is growing tremendously. A lot, uh, a lot more going on this year. More folks joining too, I think. So, um, Kevin, it's been, uh, really good. Been a day or two since you've been on. 
 

Uh, I think you've, you've changed, changed roles and looking at things differently, uh, maybe perhaps since the last time we spoke. Uh, so what are you up to? Give her, give our audience a little bit of. Uh, who Kevin filler is.  
 

Kevin Fielder: Yeah. So I'm currently [00:03:00] CSO for NatWest boxed and metal. So, um, we're kind of a little spin off that's, that's part of NatWest, but like some of the, quite a few banks did when they wanted to do kind of DevOps cloud based banking, this sort of spun off a little unit on the side, um, rather than try and do it from when the, kind of the huge machine of the main bank. 
 

Um, so metal was created in about 2017. Um, and that's a. A banking app for small businesses. So you think like a Mondo or Starling, but aimed purely at small businesses with lots of kind of add ons like accounting and other things to help you run your small business. Um, and obviously to do that, we built a banking platform in the cloud. 
 

So we have a pretty much entire banking platform in AWS, um, which obviously available on the internet by APIs and whatnot. Um, and that then enables us to do the next thing, which is boxed. So NatWest in a box, um, and that is offering banking services to non banking businesses. Um, so you can then natively offer. 
 

You buy now pay later savings, loans, whatever else, but from within your site or app powered by us, but much more kind of still within your ecosystem. So the easiest example, I think is [00:04:00] obviously the buy now pay later thing. If you're a retailer, um, customer log into your site, browse around, you see what they're doing. 
 

You have a relationship with them. They'll, you know, maybe ask questions, all the things, whatever else, then they check out and they have a relationship with someone else for three to six months while they pay for their goods. Yeah. If you can natively offer that within your site or app, um, or even in, you know, even in, in physical things, you probably see like client or apps in shops recently as well. 
 

But then you maintain that relationship. So we will do the KYC. We'll do all the kind of like the things to make sure people can afford it and the money, the money side of it. But it's. It's kind of within your site and within your ecosystem. So you maintain that relationship with your customers. And it's, it's, I think it's a, you know, really big thing for a lot of businesses that want to be able to do banking services, um, you know, whether it's savings, loans, pensions, you know, buy now, pay later, save now, buy later, which is, I guess, a more ethical way of doing the same thing. 
 

Cause you say it first, um, all of those kinds of things. Uh, and we, we can power that for you. So it's, there's going to be huge growth and a big change for us from being. effectively a B2C, although, you know, metals for small businesses, because they're so [00:05:00] small, it's effectively business to consumer. 
 

They'll open up a bank account like any normal consumer. Now we're going to sell these capabilities to large businesses. So we're moving from B2C to B2B or B2B2C effectively, because we have relationship with a client who has a lot of customers. So it's going to be huge growth for us and it's big changes in how we work as well. 
 

So it's really exciting times. Um, and it's, it's, we had a moment recently when we're kind of Getting, you know, getting close to signing some of our first contracts and things, I realized how excited I was because it's, it's, I've not, it's kind of, I'm not usually that invested in the companies I work for, but this is like, because we've kind of built this and over the last year, I've kind of seen the banking as a service proposition come to fruition and become a real thing. 
 

It's actually really exciting getting to that moment when we're soon going to have an actual customers and actually do it in a marketplace and prove it works rather than it just being something we've built. So yeah, it's really exciting times. And. I guess, what's the space for sort of big news about of sort of growth from, from boxed. 
 

Sean Martin: So banking is a service real quick, Marco is, I can't get this out of my head. So I have to say it. Banking is a [00:06:00] service. Bass. Bass. Bass fishing. That's all. That's all I'm thinking.  
 

Kevin Fielder: Fish on the wall now. You know, the old fish that used to sing. That's right. The talking trout. Maybe you should think about it as embedded finance. 
 

Marco Ciappelli: You think bass, you think about fishing. I think bass, I think bass, I think about bass. I'm thinking about playing it. It's a little bit different. No, but where I was going with this is your excitement and how much is driven by AI. Having excitement driven by AI.  
 

Kevin Fielder: AI is interesting. I think it's, it is genuinely exciting because the amount of things that can happen. 
 

I think AI is, but it's also genuinely scary as well, right? About what's going to happen. And I think obviously there's, there's everything from small things to. You know, using it to help create better help pages and, and, you know, NLP, better responses to questions and, you know, things like get up co pilot and stuff that are effectively just, you [00:07:00] know, kind of helping you do stuff. 
 

And that's quite simple and not so scary. But then when you look at some of the stuff that's kind of, you know, the big things and, and whether it's, you know, the battlefield stuff or whatever else. AI and the potential future of it is, is. It could be utopia, or it could be disaster. Right? And we're at that point where there's going to be massive shifts in how we work, how we live, driven by the ability of AI to do so many things for us, especially when you look at, um, if you guys have seen the most recent Boston Dynamics stuff with their new robot and things as well, right? 
 

So, you know, you combine these super early Capable robots and yeah, I don't know if it's intelligent is the right word, but you know, intelligent AI stuff together and you've suddenly got things that can just go and do everything. Um, and yeah, so it's, it's, it's, yeah, I'm, I'm, I'm an optimist, but I think, you know, it could, it could go, it could go either way. 
 

Right. And that's, that's the interesting thing. And I think, um, one thing that's the really, one thing that's a bit scary about all of it is because no one wants to [00:08:00] lose, right. So whether it's at the nation state level, China and America vying for who's going to be best at AI or just at the business level, There's certain things where you might suggest being slightly more cautious and taking our time and make sure we train it properly and understand how it's going to evolve and everything else is more sensible. 
 

If your competitor is running full steam ahead to go really fast and get there first, you have to do it as well or you'll lose. So this is this kind of thing where it, I think we would, as a, as a species, as nations, whatever, be wise to be cautious and take our time with making sure we get the utopian AI outcome, not the disaster AI outcome, but no one's willing to take that step back and take the time because your competitors or the other nation is not doing that. 
 

So we're all rushing headlong into this thing, knowing we should probably take our time, but no one can take their time because you can't lose. Well, no one wants to lose. Right. So, so it's a really, I think it's near from the existential kind of thinking piece, it's really, really interesting because it's, it's, we know what we should do, but how many, how many countries would take a step back and go, we are [00:09:00] going to go slowly and let some other country get there first. 
 

Right.  
 

Sean Martin: I'm going to let the bots physical and software battle away and I'm going to go sailing and diving.  
 

Kevin Fielder: That would be amazing. But that's the utopia, right? When they do all of their stuff for us and we can be artists and philosophers and, and whatever else, because we don't have to do all the jobs we do now. 
 

And there's, you know, everyone is comfortably off and everyone has food and it's all great, right? And that would be amazing if we get to that. And it's just if we do or not.  
 

Sean Martin: So let's talk about your role as the CISO in that regard. Until it's built, we're in the process of building it. Hopefully we're building it secure by design. 
 

Hopefully we're building it so it can be monitored and audited and managed safely and securely, not just for yourself, but clearly in your case for customers and, and then their customers as well. So there's a, there's a chain of chain of events that can happen here. So a lot of organizations buy software [00:10:00] and use it to run their business. 
 

You're actually building. You probably do that too, right? To run your business, but you're actually building software that becomes part of your business. How, how are you looking at the development lifecycle, uh, such that you are building security in from the beginning, how are you building the products to ensure that. 
 

Your op, your own operations of that and the operations of your customers are, are maintained securely. I presume you're touching on some of this in, uh, in your talks that info security in London as well.  
 

Kevin Fielder: Yeah, absolutely. So the, the, um, I'm lucky enough to, I'm doing two talks and luckily I've moved to that, so they're both on the same day, which is convenient. 
 

But yes, I'm doing one with a company called sneak who do software development, security, um, around. So yeah, so I mean, a lot of the security principles are the same as anything else, right? It's just kind of adapting, adapting them to AI. So, you know, [00:11:00] instead of kind of SQL injections and that kind of thing, you're looking at prompt injection attacks, right? 
 

So it's a different type of attack, but it's still fundamentally putting things into it to make it do things it shouldn't do. But it gets more interesting because it's, it's how does it respond? Um, so yeah, sort of, obviously, you've got a whole range of things as use of use of other people's AI. And it's things like making sure your IP doesn't get into it, especially if it could then give it to someone else. 
 

So whilst in some ways, you could think about that as being the same as like traditional DLP and things in terms of not wanting to get certain data out there. The difference is obviously if you do put it into a form on a website somewhere, it probably doesn't end up becoming part of the answer to someone else. 
 

So there is kind of a slightly different kind of risk posture or don't write terminology or slightly different risks associated with your data potentially getting into an LLM because then it can give those that data to someone else. Um, so kind of, if you're using a A off the shelf third party kind of surface service. 
 

It's getting yourself comfortable that it's not going to take your data into the LLM if that's not what you want, or having your own instance [00:12:00] of the LLM, LLM, so that your data is safe in your own instance. Um, and obviously there's the. It's security adjacent stuff. So a lot of things become safety, right? 
 

So whether it's a third party, one of your own one, it are the output safe and correct. And can you demonstrate how they got there? And some of that's not necessarily traditional security, but it's obviously safe. It comes into the kind of safe use thing. So whether that's going to sit in security as we move forwards, or I think I saw a thing yesterday about C AI. 
 

Oh, chief AI officer becoming a role that may or may not exist. But yeah, is there going to be people dedicated to how we use AI? Right? Because at the moment, obviously you've got privacy people and security people and, um, you know, people in terms of saving financial services, you've got to make sure you can evidence how it got to a decision. 
 

So if you start doing AIs and making decisions that impact people, You have to be able to evidence certainly in the UK that the decision is the best decision for that person. And obviously if you've made a decision as a human, you've gone down a bunch of checks and whatever else you can easily evidence. 
 

We checked all these things. We did the sums, here's the output. Whereas when the [00:13:00] AI just goes, Oh, you should do this. And it's a black box. How do you build in evidence? So I think building into AI's, um, how they do things so they can tell you how they've come to an answer was going to become increasingly important. 
 

So we understand how a decision was made. So whether it's right or wrong, we know how it got there and we could evidence how it got there. Um, obviously when it's building itself, you've got all of those kinds of things. Um, and then obviously you've got the attack side of things as well. So making sure you're, that you can't, you're, you understand how people can poison AI and how they can do interesting prompts and things. 
 

Yeah. I think I'm sure they've got, they've rolled on from that, but they're really simple examples. You probably remember when chat GPT was in its early days of everyone using it. Yeah. You can ask it how to do a murder. And then they were like, well, you probably shouldn't answer that. So they stopped answering that. 
 

So you just said, I'm writing a book on how, on writing a thriller and the murderer Does this, how would I, how would I go about writing about this murder? And so that's, that's the kind of things as well, where it's not even, not necessarily poisoning, but kind of, you can make them give answers. They, they shouldn't by kind of working around how to get the answer out of them. 
 

So it's kind of social engineering, the AI, right. [00:14:00] Um, so there's a huge amount of, yeah, I'm probably going rambling a bit because it's such a big topic, right. But it's everything from how to build it safely, how to understand the answers, how to understand where your data is, how to understand. attacks against it and all of those things and obviously because it's a newish space for a lot of companies there's going to be understanding who's accountable for which bits right to make sure things don't fall through the cracks so with traditional software we'll do you know all the good stuff code scanning third party library scanning some sort of dusty automated scanning of a test environment um the four eyes checks on the code possibly as well and then kind of pen testing or something again you've got kind of quite a well understood sort of You know, SDLC and threat modeling and secure by design at the start. 
 

And we kind of have that fairly well understood. It's done less and less and better in different places, but we understand what that should look like. And there's some really good SDLCs out there or SDLC kind of designs out there, but we need the same value, right? And then you've got a whole load of new tools. 
 

So you've got, you know, one of the things we're, you know, I'm thinking about at the moment is how we secure things like Jupyter notebooks that [00:15:00] access your, um, Data Lake can do yes, you got people working ML and all your data and they've got whole new tools for doing that. And so it's not like, hey, throw this on your container or throw, you know, MDR and DLP on your laptop or whatever. 
 

It's a new thing and it doesn't, you can't run existing tools on it and existing pipeline things don't scan them yet because they're new. So where, how do we scan them? So there's a new set of kind of tools and processes to understand that thing. So there'll be kind of AI SDLC or something that we need to kind of standardize and understand. 
 

I'll stop there. So I'll keep rambling.  
 

Marco Ciappelli: So before Sean gets.  
 

Sean Martin: I want you to keep going, but Marco has something. 
 

Marco Ciappelli: No, no, no, no. This is one of those conversations that can go forever. Yeah. And it kind of reconnects. I'm envisioning this chief AI officer, and I'm wondering is this person a psychologist, a sociologist, a engineer, a computer scientist, because we go back to what you said at the beginning. 
 

It could go very well, [00:16:00] it could go very wrong, we should all be excited, but with caution, but then if we don't do things, somebody else is going to do it. So again, this hat, is it an ethical hat or is it a security and technology hat, or do we need? You know, uh,  
 

Sean Martin: no, it's a faster market. 
 

Marco Ciappelli: Faster market, you have the, you have the technology hat. 
 

Uh, the ethical is probably going to tell you now, dude, this is not going to go well.  
 

Kevin Fielder: It, it seems like those things are all things. Kind of. If you look at it now, when you're building something, you've got kind of product people need to be fast to market. You've got secure people need to help make sure it's secure. 
 

You got privacy people need to make sure you've got privacy by design and you're protecting people's privacy and everything else. Right? So it's, it's that, you know, even if there is a chief AI officer or whatever, there's going to be a combination of people need to make sure you've got that balance, right? 
 

Of how do we get there fast? But how do we get there fast and [00:17:00] safely? And that's always that kind of, and I think it can be a healthy tension, right? There's, there's always a healthy friction of how do we, and it's, it's where you find your balance and your, every organization has a risk appetite. People pretend they don't, and they're like, you know, I think I did work in one company where they had a statement around having no appetite for a risk or no tolerance of risk, and it's like. 
 

That's clearly untrue because everyone has some appetite for taking some risks. It's not a security risk, right? It's like taking a big bet, going into new market. You might spend millions to launch a new market. That's a huge financial risk, right? So companies all take some risks and it's just making sure we try to get to that point where the speed to market and the safety. 
 

And safety are in fair balance. Um, I think one thing as well as make it is bucketing it into different buckets. So kind of the way to be able to move quickly and safely in the main is to look at what you're trying to achieve with the AI and what it will do, and then start doing kind of, you know, it's as simple as low, medium, high risk, right? 
 

So something like GitHub copilot for me, that's super low risk because you know, my, my, my wife's an engineer and I joke that half her job is just [00:18:00] downloading. Sort of code snippets of stack overflow or whatever, right? Um, so I, so if, unless you block the internet from your devs, they're going to be downloading code that is untrusted and putting it into your. 
 

into your, your apps, right? And then you're going to have the other, all the things we spoke about for wise checks and all the SDLC stuff and scanning and testing to make sure it's safe. So something like that, where it's just an AI giving code chunks to developers, that as long as you've got a good process for making sure the code is safe, that's no more dangerous than what they do now. 
 

So. Yeah. Boom, go, go for your life, make sure you're comfortable. It's not going to steal your data. And you know that it's not putting any of your stuff into the LLM and whatever else. Right. But just go, then you'll have kind of some medium stuff where it might be customer facing, but it's very low risk. 
 

So it's probably, you know, things like helping customers, improving customer help, which is probably more like natural language processing and some ML but it'll be called AI by the vendor. Right. That's, that's customer facing, but it's potentially, it's pretty low risks. It's just giving them help page stuff, which they'll end up talking to an agent if they don't get the answer they want, right? 
 

So that's fairly low risk. [00:19:00] And then you'll have the higher risk. Obviously I'm talking to financial services. That's why I'm currently something that makes financial decisioning. Obviously that's super high risk in our world. Cause you could lose your house. Or whatever else, right? So then go really slowly and carefully here, because this impacts life. 
 

So anything where you can impact people's lives or, you know, privacy or health care, whatever else, let's be cautious and safe, but things where they're just going to make us faster and better. And there is very low risk from using them. Let's move quickly. And if we can get good at bucketing things quickly, you can enable organizations to get it. 
 

low risk benefits very quickly whilst we also manage the risk of kind of higher risk things where kind of life and limb and whatever else could be in danger.  
 

Marco Ciappelli: So it's like a little bit of the AI act, like high risk, low risk, medium risk.  
 

Kevin Fielder: Yeah, that kind of thing. And it's, it's sort of thing where we should Again, I think companies should get good at this because the worse we are at it, stronger the regulation will be. 
 

If bad things happen, you'll get stronger regulation. [00:20:00] And yeah, there'll be someone, you know, take, take, take my, obviously we're not doing anything in that space yet, but, but, you know, someone loses a house or, you know, Gets, you know, something happens and it gets in the press and it only has to be one or two examples and this was caused by AI and suddenly someone in government will come down with some terrible regulation to try and prevent it from happening again. 
 

That will make it really hard to do things well. So, so by us, by industry leading on doing it safely and well, we. Get to help make sure the regulation is better rather than just knee jerk. I think  
 

Sean Martin: so. Let me ask you this and for years I was a quality assurance engineer and we did white box black box test We used user stories We did functional tests and we had to document what how things were supposed to work so we could validate they did We identified the areas where they didn't, didn't work. 
 

Um, user stories was about the user experience [00:21:00] using the application. It seems, and I haven't done this for a long time, so I'm, I'm curious to your, your perspective on this. It, it seems that understanding what's right or what, what's correct is really hard to kind of capture, especially when you start adding the scalability and, and the broadness that AI can bring to the picture. 
 

How do, how do you document this is what it's supposed to do? If it strays from this, we're in trouble. Um, and then when we, when we talk about user stories, a lot of this stuff is machine to machine now. So, uh, there is an actual person sitting hitting the keyboard and getting the display back. Some of this is machine to machine doing all this stuff. 
 

So how, how do you look at the complexity of that as, as you're building these things to say, we're in good shape or this is, uh, this is something we need to pay attention to here.  
 

Kevin Fielder: I think I've had the full answer to that. I think, I think I'd be a lot more well off than I [00:22:00] am. Um, yeah, it's a huge question, right? 
 

It's really hard. And I think it wasn't some examples when AIs have talked to each other and started coming up with their own language and everything else, right? So understanding what they're doing is going to be increasingly more and more difficult. The more complex they are and the more they interact with each other and start influencing each other, us understanding it is going to get more and more difficult. 
 

So yeah, I don't have an answer other than I think it's really important and you need to kind of build into the AI as you're building ways of us understanding or it being able to tell us how it got to an answer. But I think there was, wasn't there a thing where they did it? I can't remember what the test was. 
 

I'm pretty sure I read somewhere that they did. A thing where the AO was supposed to tell her how it got to an answer. I was, I think it was a, a test of kind of court cases or would it get someone innocent, not guilty. And it learnt to give the right answer. So it would give a guilt, a verdict of guilt, innocent, whatever else, and then it would give an answer that it knew we wanted to hear about how it got to that solution. 
 

So it it, it learnt. to, to not tell us how it got to the answer, but to tell us what we wanted to hear about how [00:23:00] it got to the answer. So by giving away, you're happy with the response. It learns that that response works and we'll just give you that response. So even when you tell it to tell you how it got there, it can learn to give you the right answer. 
 

You know, I guess like, just like a little kid with, yes, I've done my homework and I watch telly now or whatever, right. Or, you know, whatever else, right. So they can learn. And that's really scary because we've told it to tell us how it got there and it. It will tell us what we want to hear. So yes, I think, yeah, that's gonna be a really interesting space. 
 

And again, I'm not gonna pretend I can answer it, but that's it's going to be very, very difficult as they get better and better at what they do for us to genuinely understand how they get to their outputs.  
 

Marco Ciappelli: There was actually another case where it was pretending to be. Less smart than what it actually is. 
 

Sean Martin: Pretending to be me.  
 

Kevin Fielder: Yeah. But yeah, so they're already learning to deceive and stuff now when they're comparatively, they're. [00:24:00] They're very good at some things, but they're still comparatively simple compared to where they're going to be in, in, in a month's time. Right.  
 

Marco Ciappelli: Yeah. I think that the biggest, the biggest fear is that, you know, to, to play with our mind, uh, that's, that's the thing. 
 

So that's the utopian,  
 

Sean Martin: this may be a philosophical engineering, engineering questions as a product manager, you'd kind of set the stage for, you know, Here's what we want this to do and here's the scope for what we're about to build. Do we, should we kind of frame the scope of and contain what's possible so that we know we're, we're achieving what we want it to and not letting it run wild and do other things. 
 

So if you're about giving a loan for a car, um, prevent it from giving loans that, where they could buy a house. This is a stupid example, but hopefully it makes my point. [00:25:00] So. In the requirements and then in the development define what that world looks like and don't let the AI run wild, I guess, is that a, that's something we should look at or, or do we want to let it run wild and see what we end up with and see what happens. 
 

Kevin Fielder: I think there's a problem that works for narrow use cases, right? Right. Everyone's trying to build more general AI that can do. So yes, if you've got something that it's only job is to look at your financial situation and go, yes, you're an a one or I don't know what they classify people now, but you've got a score on whoever scores it have some 900 and something wealth than 200 and something. 
 

Right? Then yeah, it can probably do quite a good job of, of doing that really quickly and looking at other factors and trying to get, you know, What's going in and out of your bank account if it's allowed access to that and whatever else to understand if you can afford it, but that's a [00:26:00] very narrow use case. 
 

It doesn't really get us the benefits of, of kind of gen AI. And obviously, um, you know, people are trying to get to the general purpose kind of AI that's awesome at everything. And that's when it gets, you can't contain it like that, right? Cause it's got to do lots of things. And I think to touch on the, on the scary stuff, it's over a year old now. 
 

So I always wonder, you know, you know, when this, um, you see stuff about what's really cool, like the Boston dynamic stuff or what, what Palantir are up to or whatever else you've, if you're kind of At all worried about people doing things that kind of weigh more. I think the stuff you see that they release probably isn't anywhere near where they're at now, right? 
 

Cause they're not going to be releasing his, here's the stuff that we give you. Palantir, for example, I think they're not going to be showing you what they're actually doing with NSA and people. They're going to reveal a little thing. And it's a year or so ago. Now, if you. Google it, um, battlefield AI, and it's basically, it will look at satellite images and everything else and it will work out where the enemy is likely to be. 
 

It will work out what ordinance they have and what weapons they've [00:27:00] got and what tanks there are in the field and that kind of thing. And then tell you, you'll pick something and it will tell you what you should use to destroy it, give you options. And then the only reason it doesn't just do it is because they've coded in, you've got to have a human decide what to do, but that just take that step out and suddenly it's running it for you and it's, it's destroying stuff on the battlefield for you. 
 

So it's already there that it can do this stuff. It's only because we programmed in, you're not allowed to, and there's got to be humans and failsafes. So at some point, someone messed that up, right? And then it'll just be. Um, so yeah, but you know, and so I think with all these things, even at night, you know, there's going to be stuff that's way ahead of what we've seen because they don't show us all the secret stuff. 
 

Right? So, um, yeah, and that that's the scary stuff is when it's going out to do very general AI things or battlefield based AI stuff. Not the use case of. Should someone have a loan or not, because that's a very narrow case that you could probably do now with just as well with ML, right?[00:28:00]  
 

Dramatic pause.  
 

Marco Ciappelli: No, I'm looking at the clock and I'm like, We could talk a long time. And what we want to do now instead is to present and with the presentation of your, uh, Yeah, there are two of them. Maybe we'll grab you there and we sit down and we have another conversation. That would be fun. Right? 
 

Kevin Fielder: Yeah, it'd be really good. 
 

We might get some good. I think we'll get some good. Hopefully good. All these interaction as well. Right? So for me, I'm not. I want, I don't hate because I enjoy talking, but I'm not a big fan of the talk at the audience kind of presentation. I'm much keener when it's a discussion and it's, what are you concerned about? 
 

Ask us questions because I'd rather we, even if it goes a bit off a tangent, it wasn't what you thought you'd originally talk about. You're talking about what the audience wants to, wants to know then, rather than you're standing up. Here's the stuff I want to tell you. And it's, I'm a huge fan of audience interaction and getting interrupted and people asking questions. 
 

So hopefully we'll have a [00:29:00] fair chunk of, of interaction and debate. And then obviously we can talk about that afterwards.  
 

Sean Martin: That'd be fantastic. So the two sessions, both on Tuesday, the 4th of June. Uh, AI Fitness, AI Wellness, NatWest, Boxton, Metal, CESOS, THOTS, and SAFE AI use. So you're going to dig into that a little more. 
 

That's at, uh, 11, I'm sorry, 1030 local time. And then later that afternoon at 2 local, it's deploying an effective DevSecOps team. What's the recipe for success? I believe that's a panel.  
 

Kevin Fielder: That's a TLDR that one on you. It's much more about people in process than it is about tools, but we can talk about that on the day. 
 

Sean Martin: Perfect. So clearly, uh, lots of fun stuff to think about, talk about, take action on, and, uh, encourage everybody to catch both of those sessions with you, Kevin and the others. That, uh, joining you and yeah, hopefully we can have a chat after and other people are thinking on this on this topic as well. Thank you guys. 
 

Look forward to it.  
 

Marco Ciappelli: Good to [00:30:00] see you and looking forward to see you in London.  
 

Sean Martin: Yeah, so thanks everybody for watching this chats on the road to InfoSecurity Europe in London. We're just getting started. We have a few episodes published already, lots more coming up with keynotes and speakers and panelists and other cool people like Kevin. 
 

And, uh, so we'll see you all there. Please stay tuned. ITSP magazine on location with Sean DeMarco.