ITSPmagazine Podcasts

AI Development: Can Ethics Keep Up with Innovation? | A Conversation with Aric Perminter, Pam Kamath, Darrell Hawkins, and Taiye Lambo | Redefining CyberSecurity with Sean Martin

Episode Summary

This episode of Redefining CyberSecurity Podcast features a dynamic panel discussing the transformative potential and inherent risks of AI across various industries. Join Sean Martin and experts Aric Perminter, Pam Kamath, Darrell Hawkins, and Taiye Lambo as they explore the balance between leveraging AI for productivity and ensuring ethical considerations and privacy.

Episode Notes

Guests: 

Taiye Lambo, Founder of Holistic Information Security Practitioner Institute (HISPI), Founder and Chief Technology Officer of CloudeAssurance, Inc.

On LinkedIn | https://www.linkedin.com/in/taiyelambo/

Pam Kamath, Founder, Adaptive.AI

On LinkedIn | https://www.linkedin.com/in/pamkamath/

Aric Perminter, CEO, Lynx Technology Partners, LLC.

On LinkedIn | https://www.linkedin.com/in/aricperminter/

Darrell Hawkins, Cyber Chief Technology Officer, Otis Elevator Co.

On LinkedIn | https://www.linkedin.com/in/darrellhawkinscissp/

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

View This Show's Sponsors

___________________________

The latest episode of the Redefining CyberSecurity Podcast brought together a distinguished panel of experts to delve into the intricacies of artificial intelligence, its benefits, and its risks. Host Sean Martin was joined by Aric Perminter, Founder and Chairman of Lynx Technology Partners; Pam Kamath, Founder of Adaptive.AI; Darrell Hawkins, an IT industry veteran with extensive experience in cybersecurity; and Taiye Lambo, who established the Holistic Information Security Practitioner Institute in Atlanta, Georgia. One of the primary topics discussed was the pervasive influence of AI across industries, particularly the dichotomy between generative AI and traditional AI.

Pam Kamath highlighted the overlooked capabilities of traditional AI in fields like healthcare, where it already shows significant advancements in areas such as radiology. This underscores the point that while generative AI, epitomized by models like ChatGPT, garners much of the public's attention, traditional AI applications continue to evolve and solve complex problems efficiently.

Darrell Hawkins brought a commercial perspective into the discourse, emphasizing the balancing act between leveraging AI for profitability versus ensuring societal safety. The key takeaway was that AI's role in enhancing productivity and creating new opportunities is undeniable, yet it is imperative to remain vigilant about its societal implications, such as privacy concerns and job displacement.

Taiye Lambo shared insights from his experience with AI's practical applications in cyber operations. He underscored the diversity of AI's utility, from improving threat intelligence to automating secure responses, demonstrating its potential to transform cybersecurity protocols dramatically. Lambo also provided a thought-provoking view on privacy, suggesting that with the integration of AI into daily operations, the traditional concept of privacy might inevitably evolve or even diminish.

Aric Perminter, focusing on sales and operational efficiencies, shared his insights on how AI-driven analytics can profoundly impact sales strategies, enhancing proposal effectiveness and positioning high-value services. This reflects AI’s broader potential to revolutionize internal business processes, making organizations nimbler and more data-driven. A common thread throughout the discussion was the emphasis on learning from past technological advances, like the adoption of cloud services, to guide AI implementation.

Sean Martin and the panelists agreed that clear use cases and identified outcomes remain critical to leveraging AI effectively while managing risks thoughtfully. In doing so, organizations can harness AI's strengths without repeating past mistakes. Ultimately, the episode revealed that the journey with AI entails navigating both opportunities and risks. By focusing on practical applications and maintaining a vigilant eye on ethical and societal concerns, businesses and individuals can find a balanced approach to integrating AI into their ecosystems. This nuanced conversation serves as a valuable guide for anyone looking to understand and leverage the power of AI in a meaningful and responsible way.

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

Beyond the hype: Capturing the potential of AI and gen AI in tech, media, and telecom: https://www.mckinsey.com/~/media/mckinsey/industries/technology%20media%20and%20telecommunications/high%20tech/our%20insights/beyond%20the%20hype%20capturing%20the%20potential%20of%20ai%20and%20gen%20ai%20in%20tmt/beyond-the-hype-capturing-the-potential-of-ai-and-gen-ai-in-tmt.pdf

AI Summit Roundtable Topics Summary: https://watech.wa.gov/sites/default/files/2024-04/AI%20Summit%20roundtable%20summaries.pdf

Washington State IT Industry Forum & AI Summit: https://watech.wa.gov/washington-state-it-industry-forum-ai-summit

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: 

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring this show with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Episode Transcription

AI Development: Can Ethics Keep Up with Innovation? | A Conversation with Aric Perminter, Pam Kamath, Darrell Hawkins, and Taiye Lambo | Redefining CyberSecurity with Sean Martin

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And hello, everybody. You're very welcome to a new episode of Redefining CyberSecurity here on ITSPmagazine. This is Sean Martin, your host. And as you know, if you listen to the show, I get to talk to all kinds of cool people about cool topics. And there's one in particular you can't seem to get away from.
 

It's on everybody's mind, everybody's tongue, and, uh, everybody's tech stack, I think, as well, like it or not. And, of course, I'm talking about artificial intelligence and all the variants of that, machine learning and whatnot. And, uh, I'm thrilled for this. It's a panel series we've been talking about for a while, and we've been able to pull this first episode together, and we'll have many more following this, looking at different topics. So depending on what we talk about today, that might trigger the next one.
 

And of course, if you're listening and you have thoughts, uh, on what we should cover next, just let us know in the comments. Um, I want to introduce the [00:01:00] whole panel. It's a large group of us today, so we're going to have a good conversation; we'll see where this goes. And, uh, I'm going to first pass it to Aric Perminter.
 

He's going to say a few words about himself, and then I'll bring the rest of the group into the conversation.
 

Aric Perminter: Great. Thank you very much, Sean. Uh, I appreciate, uh, spending the time with you and this distinguished panel today. Uh, my name is Aric Perminter. I'm the founder and chairman of Lynx Technology Partners.
 

I also have the privilege of being one of the co-founders of Cyversity, which is a nonprofit focused on bridging the gap for women and minorities in cybersecurity. And I've also been fortunate enough to, uh, be tagged to work alongside one of our other distinguished members, who will introduce themselves shortly, as part of the HISPI think tank, Project Cerebellum.
 

So really excited to be here. Uh, and I'd like to turn it over [00:02:00] to my friend Pam to introduce herself. And thank you for joining us, Pam.  
 

Pam Kamath: I'm looking forward to being part of this panel. Thank you, Aric. My name is Pam Kamath. I'm the founder of Adaptive.AI; I provide technology consulting services in AI, risk management, and data. I've been in technology for 25-something-odd years, worked across a number of different roles and, uh, more importantly in healthcare, where my focus has been over the last 10 to 12 years, um, supporting a number of leaders in addressing their, uh, business challenges around data, AI, regulations, and whatnot.
 

So very excited to be part of this, uh, group, and I look forward to speaking on, uh, AI, which is, uh, the topic of the day.
 

Aric Perminter: Awesome. Thank you very much, Pam. I appreciate it. Uh, Darrell, my friend, welcome. Uh, please [00:03:00] introduce yourself, sir.
 

Darrell Hawkins: Hey, thank you, Aric. I'm Darrell Hawkins, and I come in as an IT statesman, if you will; I've been in the industry for a few years,
 

over three decades, working in both IT and cybersecurity. Um, I've had the wonderful experience of being able to work in healthcare, transportation, manufacturing, and, for most of my career, financial services. So I'm happy to be here and join this panel. Thank you.
 

Aric Perminter: Thank you, Darrell. And Ty, welcome, sir.
 

Please introduce yourself.  
 

Taiye Lambo: Okay. Hello, everyone. I'll give you the short version. Just like Darrell, I've been in technology over three decades, and in information security almost three decades as well. I founded the Holistic Information Security Practitioner Institute here in Atlanta, Georgia, about 20 years ago.
 

And my claim to fame is we've trained a lot of, um, aspiring CISOs and [00:04:00] CISOs for, I would say, 60 percent of the Fortune 500, um, going back almost 20 years. Um, so last year we founded a think tank called Project Cerebellum. Um, HISPI is kind of incubating that think tank. It's made up of CIOs, CISOs, you know, great entrepreneurs like Aric, who's not only a friend but also a mentor. And we published our first deliverable, I'd say, within 90 days of launching the think tank: we created a trusted AI model, which is pretty much based on the NIST AI Risk Management Framework, but crosswalked to about 20 different, I'll say, areas, including standards like the six ISO standards around AI risk management, as well as the White House executive order and some of the regulatory compliance like HIPAA, [00:05:00]
 

GDPR, etc. So that's pretty much me in a nutshell. And that's the short version, by the way.  
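
For a sense of what a crosswalk like the one Ty describes can look like in practice, here is a minimal sketch in Python. The control ID, framework names, and clause mappings below are illustrative placeholders, not the actual Project Cerebellum deliverable.

    # Hypothetical crosswalk: one AI risk control mapped to the clauses
    # that address the same concern in other frameworks. All IDs below
    # are examples for illustration only.
    crosswalk = {
        "GOVERN-1.1": {
            "description": "AI risk management policies are in place",
            "mappings": {
                "ISO/IEC 42001": ["5.2"],
                "ISO/IEC 23894": ["6.1"],
                "White House Executive Order on AI": ["Sec. 4.1"],
                "HIPAA Security Rule": ["164.308(a)(1)"],
                "GDPR": ["Art. 35"],
            },
        },
    }

    def frameworks_covering(control_id: str) -> list[str]:
        """Return the frameworks a given control crosswalks to."""
        return sorted(crosswalk[control_id]["mappings"])

    print(frameworks_covering("GOVERN-1.1"))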
 

Sean Martin: You've done, you've done a few things, Ty. This is going to be fun. So I want to start off with this question, and I'm going to pass it to Pam first, and I'd like everybody's viewpoint on this. Because in the last few weeks, I've had a chance to talk to a lot of people, and two things come up:
 

data and applications. We have a lot of guidelines, frameworks, laws, regulations, what have you, around how to manage data for privacy. We have a lot of standards and whatnot around how to build apps and use apps securely, and a lot of conversation around supply chain. So my question is: why do we put this special label around AI, [00:06:00] when AI, to me, from what I've heard the last few weeks,
 

is a lot of data through an app? So, Pam, to you: is that being too simplistic? Why do we need to have this special view on AI, or do we?
 

Pam Kamath: It's, it's a lot more complex than that. When we take a step back and look at how artificial intelligence as such has evolved over the years, it was predominantly data.
 

It was predominantly just building code. If you look at machine learning, right, it was all about building massive amounts of source code, and then the way you would crunch that. If we take a step back, um, even before, uh, deep learning or generative AI, um, it was mostly rule-based algorithms.
 

And the way these algorithms worked was pretty much crunching data. [00:07:00] And when you layer in machine learning on top of that, what you're really doing is letting the code itself adapt and develop on its own, based on the results obtained through this data. So, for at least a decade, there were really no applications that stood up these machine learning techniques, meaning we didn't have IoT; we did not have, um, you know, the applications that we build today on our mobile phones.
 

Um, it was basically standalone; it was just these programs running and crunching data. So it made so much more sense to focus on regulations related to data. And if you look deeply, um, the programs developed were mostly for financial services, these rule-based applications. We did not see [00:08:00] AI as much
 

in supply chain or in healthcare, in other places where typically you need capabilities to host data. And in these other places, manufacturing, healthcare, there are other regulations, the FDA, and IEEE was very much focused on, um, regulations that mostly comprised both applications and hardware. So at some point, we kind of converged on this deep learning and generative AI.
 

Now you need applications to host data. So it's pretty, um, what do I say, sticky, you know, in terms of how you would layer in the necessary safeguards that take the best of both without, um, repeating, meaning you're not creating duplicates. It has been a pretty interesting ride from that [00:09:00] standpoint, because you have privacy laws that focus on data.
 

Then you have, of course, the GDPR, and then you have other laws within manufacturing and other places. And then you're looking at how do we create a framework that brings in the best of the two realms we are looking at, that is, both applications and, um, of course, data. And that's the reason why I say we need AI governance, and under that sits data governance and other things.
 

It's a long story, but that's really how I think we found ourselves where we are today.  
 

Aric Perminter: Well said. Well said. I'd love to hear Darrell's perspective on that as well.
 

Darrell Hawkins: No, I think, I think Pam is right. I think it's, it's a little more complicated than, I mean, we look at it like, Hey, you know, what's the big deal, right? We've been doing something that looks and smells like AI for the last, you 10, 15 years, right?[00:10:00]  
 

It's gotten a little more complicated than that, in that when we start to use these, you know, like Pam said, you start out with machine learning, then you go to predictive analysis, then, um, start analyzing that data. And then you start using the output of that data for things like, in healthcare, I think that's where you were going, Pam, in healthcare, the
 

eligibility of folks, for instance, to get certain types of operations based on that data that you have, right? So it gets a lot more complicated when you look at where we were, which was, hey, we're just going to analyze things in the past and say, yeah, this must be right, so let's do it too.
 

Hey, we're going to take that and say, well, Darrell, based on this... I think in one of the popular movies they called them precogs, right? The people that predicted whether you were going to commit a crime before you did it. Right. So I think that's what we're nervous of. Like, how do we use that data in a socially sensible manner?
 

Pam Kamath: That [00:11:00] makes perfect sense. So, just to layer in another comment, right? I talked about healthcare, and I also talked about supply chain and manufacturing, particularly where you use industrial IoT. But when you take healthcare, when you look at medical devices, um, and I have worked in the past in a number of different places that use medical devices, um, it brings its own layer of FDA requirements in terms of how you want to document and protect those devices from both a safety and efficacy standpoint, which is pretty much the core of any regulation.
 

And when you start peeling apart different domains of that regulation, there is data protection, then there is privacy, and within that are embedded data requirements. Now not only do you have to address how the device needs to be protected, but also the data [00:12:00] within the device.
 

And then came other regulations. I was challenged at, um, one of the healthcare companies that I was working for, a global pharma company with a massive presence in Europe, and they were now having to comply with GDPR. But they also had to stay in compliance with a number of other regulations and requirements around these medical devices.
 

Now, how do you take the two and marry them in a way that you're providing proper documentation for GDPR and still making sure that you're not leaving any stones unturned? Because some of the nuances between certain points within the regulations can be pretty tricky to navigate. Um, and I'm starting to see how this can be so painful for companies who have
 

poured money into supporting other regulations, and then we are layering in the AI Act and [00:13:00] GDPR. I mean, it's crazy, the amount of compliance and the money that gets spent just trying to keep sanity across these multiple different regulations. I just wanted to throw that in as well.
 

Aric Perminter: Yeah, so, Ty, is it the unknown? Because I think you started to pull on the unknown, right? And, Pam, that gray area that you're referencing is: what do we do about that area that we haven't figured out yet? Is it, is it the worry of the unknown, Ty?
 

Taiye Lambo: Wow, it's like you were reading my mind, Aric.
 

So, at the risk of oversimplifying this, and I agree with everything Pam and Darrell said, I'm not going to, um, get into the technical details. I'm going to look at things from the consumer perspective. So back 20, 30 years ago, we, unfortunately, preached the security message using what they call [00:14:00] FUD.
 

Fear, uncertainty and doubt, um, at the risk of sounding really colorful, like, you know, replace the D with C, so you can translate that in your mind, but I can't say it. So the F is for fear, there's a fear. The U is for unknown, because it's almost like uncharted territory, right? Um, and then the C is, it's cool. 
 

So when you have those three things combined, nobody wants to be left behind. So on the fear side, nobody wants AI to potentially take over, right? Humans don't want AI to completely take over, right? We still want to have our own autonomy, right? Um, at least most humans, right? And I'd probably say most reasonable humans still want to have some level of control.
 

They don't want the technology to control them. So there's that fear [00:15:00] that if I don't jump on the bandwagon, this may actually end up destroying me or destroying us. I know that's an extreme. It's irrational, but it's still a fear, right? You know, going back to Terminator, um, things like that. And then the unknown is, What is going to be the impact in terms of our jobs, our livelihoods, right? 
 

Everybody has to be thinking about that, right? Whether you're retired, if you, if you're not thinking about that for yourself, you're probably thinking about that for your loved ones, you know, the younger folks. So it's, there's the unknown of what is going to be the impact. Both positive and negative. And then you have the cool factor. 
 

We had the same thing with cloud. It became very cool to be in the cloud. You know, the U. S. federal government was the first to say cloud first. You know, the former White House CIO, who was also the CTO for Washington, D. C. Vivek Kundra said, we're going to go cloud first. [00:16:00] And that really spurred a lot of cloud adoption. 
 

Not only in government, but in the commercial space as well. So AI has all those three things combined. You know, it's not us trying to preach, we're not pushing it. I mean, as security folks, we're actually probably the ones who are most on the fence, especially security leaders, because we just don't realize, you know, if we think about AI, we think about how can we use it to automate, you know, maybe more threat intelligence, maybe more offensive security. 
 

You know, maybe more detection, response, and recovery capabilities. But we're not thinking about it from: oh, wow, how can this make the company better in general? That's the job of the CTO or the CIO, and in some cases now, with the federal government's White House executive order on AI, the chief AI officer's responsibility: to figure out how to do it safely and securely and to get [00:17:00] benefit from AI.
 

So, but I think, Aric, to your point, the unknown is probably the biggest factor, but the cool factor is also kind of overshadowing the unknown factor. And there's still a genuine fear.
 

Darrell Hawkins: I think you're right. When I talk to the tech people like myself, it's cool, right?
 

This AI thing is cool. You know, you talked about offensive security and being able to, you know, prepare, um, offensive measures against threats in our environment. But when I talk to folks that are not in a technical industry like us, there's that fear. They're like, oh, this thing is going to take my job.
 

I'll be unemployed, right. I think there was a, uh... I don't want to misrepresent the school, but I think D'Youville University in Buffalo, New York had a commencement this week, and they used [00:18:00] an AI-generated commencement speech, with a robot talking to the kids, right? And, you know, the first thing that comes to mind is: oh, wow, they're preparing us that in the future this AI thing is going to take our jobs, right?
 

Cool, but scary.  
 

Aric Perminter: Well said. I'm sorry, Pam, I have to chime in on this, because over the weekend I was walking, and, um, I happened to walk past a fast food restaurant that everyone was, um, tickled to death about, right? And they're going through the drive-through, and I can hear them in their cars. Well, lo and behold, the fast food restaurant had deployed AI to take orders.
 

So no humans were involved whatsoever. By the time they pulled up to the front, a human did the transaction with the money, but AI was [00:19:00] taking the orders. So I had to throw that in there, because that kind of piggybacks off of what you're saying. Is it, is that a positive or a negative
 

in that scenario, right? Because it's probably helping the company deliver orders more accurately, all right, but it's replacing, um, someone's job at the end of the day, or a portion of someone's job. Sorry, Pam, over to you now.
 

Pam Kamath: Kind of going back to the question Sean raised about applications versus data, something else triggered a
 

thought for me as well. I know we started talking about how, you know, these two came about differently and we are converging on them in the realm of AI. But then again, I ran across a really great article from McKinsey which made me think further, because we keep talking about AI in the way that it's unfolding today, [00:20:00] but that's very much focused on generative AI.
 

Again, if you want to put more distinction on that, we are talking about large language models. And now we are even focused on multimodal, looking at GPT-4o, which is multimodal. So within generative AI, we are looking at transformer models, which is now everything around, um, you know, ChatGPT and all the competition that we are seeing around that.
 

But then there is also traditional AI, the AI before ChatGPT was released, which had made significant inroads in healthcare. I mean, we cannot forget that AI, um, you know, because those systems, especially the ones used in radiology and other places, um, had matured significantly. I mean, even before large [00:21:00] language models came about, uh, there was big talk about how what we knew back then as AI would bring a significant shift in the dollars it would add to the economy.
 

And the risks of that are kind of different from the risks of generative AI. And I still hope that in the midst of this craziness, we don't forget that domain, the AI domain that was maturing, especially in healthcare, manufacturing, and other places, um, also built on deep learning algorithms, which were kind of the next generation.
 

Um, but those have very distinct risks related to model drift, and not just data poisoning. When you think about AI, there's federated learning, and as a result of that, you have data poisoning, uh, backdoor data issues, [00:22:00] uh, in the way the data is pieced and weaved and fed, uh, to the model from a testing standpoint and a training standpoint.
 

But model drift is pretty significant. I mean, and I kind of get nervous sometimes that we get so carried away with generative AI that, um, we forget about the AI that we were using and knew very dearly before, you know, ChatGPT was released. I just wanted to make sure, going back to what you were saying, Sean, on applications versus data, there were also those AI models
 

that, uh, we cannot forget.
 

Sean Martin: Yeah, and I want to go to, I want to bring those three things. So data, apps, the models. Ty, you mentioned cloud, because I think there's the delivery of this stuff. And Pam, you mentioned we have it on the phones now. I mean, with 5G and 6G coming, this stuff's going to be distributed everywhere. 
 

And I'm just... [00:23:00] So the reason I asked that question up front was: are we throwing everything we know away because we think this is so new and we have to look at it completely differently? Or are there things we can leverage, experiences we can learn from? The cloud is the best example I think I can draw upon, and you noted it: the government said, let's go here first, and really put a strong plan around how we do that.
 

Um, yeah. And yes, you have to train differently. The systems look different. We have new technologies that emerged from that, containers and the way we deliver apps to those environments as well. But a lot of it's coming back on premise today as well. So I guess the question I have is:
 

because of the F.U.C., do we kind of throw everything we know away and start looking at this fresh? Or is that what we're doing? And should we perhaps [00:24:00] find out where the deltas are between what we know and where we think we might be going?
 

Taiye Lambo: I would prefer the latter, right? I would prefer that we learn from experience. So if you think about cloud first, the biggest driver for Vivek Kundra at the time, and I literally read almost every article, every presentation he did, I became a big fan of his, because I'm like, if somebody can figure out a way to convince the government to be innovative, they have to be a rock star, or either that they've scammed everybody. This guy is either a genius or, you know, the biggest scam artist, but he was able to pull it off.
 

But the biggest driver was that the federal government had something like 3,000-plus data centers, and they weren't in the business of building and managing data centers. So he was like, first of all, we need to consolidate all these data centers. Hence the whole concept [00:25:00] of data center migration.
 

I mean, it felt like the federal government had the biggest need for that, because some data centers were really bad and some were really good. So I think that was the biggest driver: efficiency and effectiveness of the IT spend, right? And I can't remember the exact number, but it was in billions of dollars.
 

And the whole idea was that the federal government could move to the cloud, public cloud services, you know, which obviously created the need for FedRAMP, you know, and stuff like that, to make sure it's done safely and securely. I think we can borrow from that. But to your point, some of it is moving back on prem.
 

So maybe we can say, okay, what didn't we get right in the cloud first strategy, and we can apply the same concepts to AI, because it's technology at the end of the day. You know, again, at the risk of oversimplifying this: cloud is technology, AI is technology, right? I mean, [00:26:00] there are different facets to it.
 

There are different aspects to it, but at the end of the day, it's about managing technology risk. So maybe we can learn from: what did we do right with the cloud? The cloud is now mainstream, right? Everybody is in the cloud one way or the other. Every consumer, every business in the world, every nonprofit is in the cloud in one shape or form.
 

Small businesses tend to use even more cloud services; the average SMB, per the research I did, has at least 50 cloud services they use, and in many cases free ones, which means they're giving their data away without any form of, you know, protection from a legal standpoint. But I think we can borrow from what we did with the cloud, you know, 10, 15 years ago, and some of the lessons learned.
 

So we're not reinventing the wheel and we're not repeating the same mistakes.  
 

Pam Kamath: You know, [00:27:00] I have a slightly different take. I think you're right that from the risk standpoint it makes perfect sense, but I think the way we sold cloud to corporate leaders was somewhat misguided, because, you know, when you look at where we landed today, it's more expensive to manage your capabilities in the cloud, along with your data, compared to, you know, on prem. But we could have sold it in a different manner, through different optics: our cybersecurity risks are getting more and more sophisticated, and data risks are getting even crazier.
 

There is so much unknown with the data itself in terms of how data can be misused at scale and what it may do to society at large. Um, [00:28:00] you know, if we were to look at hosting all of those capabilities today in data centers on prem, we would not have a mechanism or the governance to manage the sophisticated cybersecurity risks and data risks that we have today.
 

And I think in so many ways, cloud answers that. But it was unfortunately sold under the umbrella of cost optimization, which it is not. And, you know, sometimes leaders get clouded by that, right? Now they're going back at their technology leaders saying, what the heck, you guys sold this, and I'm paying more for this nonsense than I ever did. If I had put all this
 

And I'm paying more for this nonsense than I ever did. If I had put all this. In my data center, I would have had a better chance of managing my cost more optimally. Right. But then if we had sold that under the risk standpoint, we would have been in a [00:29:00] much better place. Um, but then to twist that story, obviously, is always a challenge because getting them to agree on one thing. 
 

thing, right? Getting them to align on one thing is, uh, almost always very problematic, and it's somewhat dramatic, actually. Now, to go back and reshape that story is the challenge. But I really think cloud is the answer, even when you look at the way large language models are coming about in terms of what they can, uh, solve from a problems standpoint.
 

And between cloud and each of these big techs having their own language models, which I have no problems with, by the way. I know we talk a little bit about the digital divide, and that's my favorite subject, and the energy consumption and all of that. I would rather they have it and own it and manage it. Um, and smaller models
 

for very specific needs. But I think it's going to become that ecosystem, that platform [00:30:00] where each big tech will have fantastic cloud infrastructure with all the capabilities we need to build great applications, with the backdrop being the large language models that they're able to host and provide.
 

Um, that's kind of how I'm seeing it. It may come through in the future for us, but I think it's going to be a challenge again to go back to our leaders and say, by the way, we want to do this, because risks will get more and more sophisticated; geopolitics and everything is starting to emerge as the big problem, right?
 

So I just kind of wanted to bring that back, Ty. I know you were talking about risk. You're absolutely right. But there has been a bit of a misnomer in how this was sold to begin with.  
 

Darrell Hawkins: That's that's interesting, Pam, because you're right. I think everyone was sold on cost optimization. Um, for cloud and that's not really true. 
 

They got the first bill, and then [00:31:00] everybody knew that wasn't really the case. But I think the introduction of risk, um, as Ty talked about... when you have your own data center, you're containing your risk to whatever those four walls are. But the risk has grown with large AI and machine learning, and not just through your own company risk; now you have a service provider to you, and they have a third party to them.
 

Now, from a risk standpoint, your risks are four layers deep, right? You're not only looking at second party; you're looking at third party and fourth party and what they're doing with your data. And if they're developing code and developing things based on this large data set of information that you put together, where's the borderline of that risk?
 

And I think that is really the challenge. You can no longer say that, well, [00:32:00] uh, I gave the data to Aric, and Aric gave the data to Pam, and Pam gave the data to Ty, therefore I'm no longer responsible for it. It just doesn't work anymore.
 

Pam Kamath: You know, there are only three ways you can manage a risk:
 

transfer, accept, or mitigate, right? Transfer by far is the best way if you're looking at managing risks with a third party that has better systems. They're in a better position, better equipped to manage it, right? If you look at all the capabilities that they have, Azure or, um, AWS,
 

transfer is a far better choice than accepting and mitigating in your own landscape. It's just how well you do that, I think, that is important. It boils down to that.
 

Darrell Hawkins: When you talk about risk versus privacy, though, right? You're right. Right. You can transfer it. But when you talk about [00:33:00] privacy, us having a conversation is a transference, right? 
 

We can be on a call with somebody in Europe, and then we violate GDPR rules because we've said something on a call that constitutes a transfer of data. So that's where it really gets tricky, right?
 

Pam Kamath: So there is data at rest, data in transit, and now data in process is the new risk, right? So who owns what?
 

You know, um, what I call roles and responsibilities between the third party and the owner of the data, or the custodian of the data, I think is really what it boils down to. Data at rest, as you know, most cloud providers are supposed to take care of that; they're accountable for that because they provide you the guardrails, all of them.
 

But once the data is [00:34:00] in transit, um, it's just a handoff, right? The roles and responsibilities become a bit of a challenge, and how well you define that matters. Um, I go to different companies, and one of the first things I ask them is: how do you have your roles divided between your cloud provider and yourself?
 

And obviously there are many other parties in between, uh, their customers and their third parties. The transfer of data: where are these handoffs, uh, managed and, um, taken care of, contractually or through other means?
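
As a concrete illustration of the handoff split Pam describes, here is a minimal sketch in Python using boto3 against AWS S3, assuming a placeholder bucket name: the provider enforces encryption for data at rest once it is configured, while the customer's own policy insists on TLS for data in transit. This is one example pattern, not a complete shared-responsibility program.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "example-bucket"  # placeholder name

    # Data at rest: have the provider encrypt every object server-side.
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
            ]
        },
    )

    # Data in transit: the customer's side of the handoff; deny any
    # request that does not arrive over TLS.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))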
 

Aric Perminter: You know, Ty, I'm curious; I know you're trying to get your way in, um, because of the cloud
 

security background plus the work that we're doing over on, uh, Project Cerebellum. So please, I'd love to get your thoughts on what, uh, [00:35:00] Darrell and Pam were just, uh, talking about as well.
 

Taiye Lambo: Sure. So, I know I opened up a can of worms when I mentioned cloud, and that led to you mentioning privacy. So I'm gonna say something very controversial.
 

When it comes to AI, I think we can kiss privacy goodbye. 
 

In another five years, I think with the way things are going with the adoption of generative AI, I think we can kiss privacy goodbye. Um, that's my prediction. That's the worst case scenario. The best case scenario is that we do it so well that we actually leverage AI to strengthen privacy. But I doubt if that's going to happen. 
 

Aric Perminter: And Ty, are you seeing that based off of, kind of, the harmonizing of the various control frameworks and kind of what you're seeing? Why? I don't [00:36:00] understand the why.
 

Taiye Lambo: So in the NIST AI Risk Management Framework, we have folks at NIST that are way smarter than all of us. Well, at least me, you know, 50 of me combined.
 

And those folks have only one control out of 72 that speaks to privacy. One. Not because it's not important, but because it's like the train's already left the station. I mean, AI's lifeblood, for lack of a better word, is data. So if an AI engine or platform doesn't have the data, it's useless.
 

So its ability to take in data, good data, right, is what makes AI very unique: being able to crunch that data and present it in ways that we love and, you know, that make our jobs and our lives easier, and help us make decisions better, even sometimes make decisions for us, like self-driving cars, right?
 

You wouldn't catch me in one of those. But anyway, that's a different, um, conversation.  
 

Pam Kamath: I agree with you, Ty, but there is a big problem with that. For me, you have to distinguish between what privacy is and what privacy is for. Those are two completely distinct philosophical views. Because, you know, when you talk to our kids and you ask them what privacy is, it's completely different for them.
 

They probably will not even think about it. But when you ask a mother of two what privacy is, she will have a different answer. And when you talk to somebody in their sixties who has a lot of money in many different retirement funds and you ask them what privacy is, they'll have a different answer.
 

So privacy is a very personal [00:38:00] thing. But what privacy is for is something we all can agree upon: when it shifts at large scale, it unfortunately leaves society vulnerable, and power is transferred. When power is transferred, there is so much imbalance that society, at a significant scale, will lose its voice in how the data can be handled.
 

And given what AI can do in terms of its ability to nudge, to manipulate people into doing things, you know, over-nudging, ultra-nudging or hyper-nudging is what it's called, I think, um, now that it can even sense sentiments, uh, with GPT-4o, we have to fight hard for privacy. I'm not sure, probably it won't be me, but [00:39:00] like I said, what privacy is for is very, very important for society.
 

And, um, I agree with you. Um, but it's a fight that citizens and communities should not give up on.
 

Darrell Hawkins: I think you're right, Pam. When we talk about privacy... all right, let's say the bad word: TikTok, right? You talk about the implications of social media and the AI engines in the back rooms that tell us who we like, what we should eat, where we should go.
 

Right. And then you multiply that on a global scale, right? And you talk about: where is the line of safety? Um, where is my data private, versus giving it to someone in China? Do they have a right, because I signed the license agreement, do they have the right to use my data?
 

And that goes back to what we talked about [00:40:00] in the past. Like, where's the human element in this that's auditing the data sets to say what's real? So if you let that genie loose and say we're going to have it all automated, with nobody auditing that data, then I think that's where we really get in trouble.
 

Pam Kamath: And I think even with that Scarlett Johansson situation, I'm not sure, I would love for somebody to layer in on this particular one. The voice of GPT-4o, um, when it was released and Sam Altman and his people had that massive demo of what it can do, the fact that it can sense the environment, the fact that it can speak to where you are and how you want AI to integrate into your own personal life.
 

Um, the voice that was used was a really, really sweet woman's voice, right? And it really sounded like, um, Scarlett Johansson's. She actually made a big deal about it. And I think those are the kinds of things that I want to [00:41:00] highlight as the problems that we may face as a society at large. So
 

Sean Martin: I want to touch on one thing here, and it's this idea... I'm going to go back to the cloud.
 

Sorry. Misguided or not, I think we had a clear use case: you do this; that may be tricky to do, and that might look slightly different, but you end up with something, and we knew what that looked like. Now, certainly with generative AI, it's any use case for anything across any number of partners in the supply chain. I've had a conversation about a signature SaaS service
 

that's eating data for who knows why; uh, cloud storage services eating data so you can find information in your storage service easier. Where are those services running? In a data center, [00:42:00] in somebody else's cloud? Who has access to those, to enable more services on the services that they provide? It gets really tricky, and everybody's looking for: how can I use
 

this technology for my own benefit, to serve other people, but to make more money as well. And I think it becomes kind of the highest point. I think all of that, all the data, somehow, somewhere, ends up in a bunch of places and becomes accessible to many at scale. No question, just a thought.
 

Taiye Lambo: So, I think on a positive note, the pros outweigh the cons for AI.
 

Um, so I've been doing these panels where I've invited state CISOs and CIOs. Um, we have one tomorrow in Atlanta at SecureWorld. Um, it's actually being hosted by the Cyversity Atlanta chapter, but we're [00:43:00] using the SecureWorld events as kind of the platform. So we're going to have the State of Georgia CIO, who I've known 20-plus years; we're going to have Joy Persa, who's the former Region 4 director for CISA, and she now works for Veritas; and Angela Hinton, who was my attorney when I was with the City of Atlanta as their first CISO.
 

So, I always like to ask the audience first: do you think it's a threat, an opportunity, or both? And then I'll ask the panelists: you have to pick one, more of a threat or more of an opportunity. I know they hedge their bets, because I know they're thinking it's both, but I actually prefer that they pick one.
 

And nine times out of ten, I'd say, in most panels I've done, they think it's more of an opportunity. And this is CISOs, right? So that's good. So they see the opportunity. Because with cloud, I knew CISOs who said, I'll never move to the cloud, and some of those CISOs ended up being pushed out, [00:44:00] not fired, but pushed out, or they just had to retire, take early retirement, and now they do work with cloud service providers.
 

One of them actually is a principal consultant at AWS, right? And 10, 15 years ago, he said, I'll never move to the cloud. So anyway, I've seen that. So the good news is the security community seems to be open, you know, unlike with cloud, where we were like, we're never going to move to the cloud.
 

I've been in CISO roles where we had data centers in the basement, right underneath the water fountain. And we had bomb threats every week, and we had metal detectors going into the building; it was a public building. And I said, we should never be in the business of owning our own data center.
 

We need to move everything to the cloud. Two years later, I was long gone, and this organization had a ransomware attack. And guess what? [00:45:00] Workloads were still there working, the ones in the cloud. All the on-premise stuff, because of all the technical debt... you know, you fix potholes or you patch systems, right?
 

You know, it's competing dollars. A lot of the on-premise workloads were down for weeks because there was so much technical debt. But the stuff in the cloud stayed up. I say all that to say, we may have missold the cloud, you know, some CIOs may have done that, but we can learn from those lessons and say, okay, for this
 

new technology... it's not really new, but ChatGPT makes it sound like it's really new, because now consumers have a seat at the table, right? They can use it and they can experience it. So that's why it's become mainstream, and, you know, the media has done a great job. But I think we can borrow again from those lessons and say, okay, yeah, we're going to put the guardrails in.
 

We're going to try our [00:46:00] best to make it safe and secure. We're going to protect privacy. For me, I think it's the unknown for privacy: as long as I know how my data is being used, I'm fine with it. That's my risk tolerance from a privacy standpoint right now. 20 years ago, I had a different perspective, because I lived in the UK, and the European mindset is: my data is mine, right?
 

My data is mine; nobody has the right. I never got any junk mail through the post, you know, because privacy is a big deal in the UK, right? It was a big deal back then. Now I have a different perspective, because I've seen the benefit of organizations actually being able to use our data for good use, right? Leveraging our data for good use. So my perspective has changed.
 

Or you leverage our data for good use. So my perspective has changed. So to Pam's point, privacy means different things to different people and different generations. And I think if we can change that on known to known, you know, like [00:47:00] given the consumer, um, the visibility into how their data is being used, but I don't think organizations like open AI can do that without being transparent and without being accountable. 
 

Right? So the privacy principles: transparency, accountability, you know, explainability, all those things, they have to be able to do that. And for me, those are the kind of guardrails that we feel everyone needs to have. Still do it, but still have those guardrails, and make them top of mind.
 

Darrell Hawkins: I think you're right. 
 

I think, though, you know, you're absolutely on target that the mantra of the CISO has changed, right? In the past, the CISO's job was, you know, to paraphrase: never let a good disaster go to waste, right? If something happened, you got budget, right? And that's how you built your program. But I think today, if I talk about the commercial space, right?
 

Well, if you talk about healthcare, you say: do no [00:48:00] evil, we're going to use generative AI for the better of mankind. But in the commercial space, it's all about, what do I like to say, Mr. Perminter? The answer is money, what's the question? Um, when you talk about the commercial space, you have to show the return on investment, because big enterprises are not going to use it unless it generates revenue.
 

So where's the line between what's commercially profitable and what's bad for society? And I think that's our challenge.  
 

Pam Kamath: And I love the way you put it, Darrell, because that is where I think use cases come into play. And I think some of the big companies are doing a pretty good job in outlining the use cases for generative AI.
 

Um, and that should provide us with some level of guardrails around, um, risk acceptance or risk tolerance. Um, you know, when you look at retail, when you look at entertainment and other [00:49:00] places, um, by far, risk is the last thing that they think about. And obviously we can shape that, have, you know, a constructive conversation around that.
 

But when you look at healthcare, um, obviously our tolerance is a lot different, um, especially when it comes to the data itself. And we have made some really good headway around, um, privacy-enhancing techniques, right? Um, and we could layer in some of that, but how these models get trained, unfortunately, is still a big question mark for all of us.
 

Um, and also, that's where, um, custom models make a whole lot of sense: small models that are very specific to that use case, because your risk tolerance is practically zero for some of those things. You obviously want to say, well, if you're [00:50:00] using OpenAI, you want to think twice for this particular use case.
 

So I like use cases as a way to filter how we want to manage risks, and hopefully that'll, kind of, in a way, help us, um, you know, integrate these models, integrate AI, into other business capabilities based on use cases and the risk levels that each use case holds: commercial versus healthcare and other sensitive areas.
 

Aric Perminter: So, Pam, I have to ask, and this is for everyone: what is each individual's personal use case for AI today? How are you using it today, if at all?
 

Pam Kamath: That's a great question. For generative AI, there is a great appetite in sales operations, marketing, virtual assistants. [00:51:00] People seem to be very open to wanting to learn how to use it, and I've been helping some of my own customers with marketing strategies and how they can layer in generative AI.
 

But I still see a lot of hesitancy in other areas, in financial services and, um, healthcare, um, for good reasons, especially around privacy and security. Um, but if you move away from generative AI, um, AI seems to have a pretty significant hold, um, in financial services and healthcare.
 

Aric Perminter: How about yourself, Ty? How are, how are you using AI today?  
 

Taiye Lambo: Um, so I've actually sat down with CISOs, believe it or not, um, some state CISOs, to help them refine their strategic plan, [00:52:00] their vision statement, mission statement, using ChatGPT, believe it or not. So that's one clear use case.
 

Um, from an institute standpoint, HISPI actually moved to a proctor-free platform. Um, I'm not going to mention the name of the vendor, even though it's in the description. Um, about five years ago, we were proctoring exams in person. You know, it was a five-day class; at the end of the five days, I walked around the room, and you would still find folks that potentially may have been cheating, right?
 

Because, you know, you couldn't watch their screens all the time. And then we started doing a Zoom type of thing where you watch them, you know, they share their screen, but you just don't know if they have another screen going, right? And so we tried this platform, uh, already integrated with our learning management system, which is a proctor-free platform.
 

And it's AI-proctored. This [00:53:00] started four or five years ago. So now all our exams are delivered online. Whether you take an in-person, instructor-led class or not, everything is delivered online and it's proctor-free; it's actually proctored by AI. The cameras have to be on, so it knows when they're sight reading.
 

It knows when they're talking to somebody in the background. It actually flags that as, hey, that's something you need to review. So it's proctored. And you get a report once they've completed the exam, it doesn't stop them from completing the exam, but you're going to get a report and it's going to highlight the areas where there were red flags. 
 

So that is a perfect use case, because I've had to proctor exams halfway around the world. I actually did a trip to Taiwan once to proctor an exam for five auditors. I flew halfway around the world to spend 12 hours in Taiwan. I'll never do that again, thanks to proctor-free.
 

Aric Perminter: Yeah. [00:54:00] Good stuff. How about yourself? 
 

Darrell Hawkins: So I'm going to throw the fun factor in there, right? Um, in my spare time, as an amateur DJ, right, I'm using AI models to put together playlists, right? So that I never have the same playlist again. It examines my entire, um, library of music, and I give it parameters based on what type of audience it's going to be, and it puts together a playlist.
 

Of course, the human factor is that I still have to understand when people are dancing or not dancing, right? You still need, again, going back to my statement, a human auditor. So AI takes the music, looks at the library, I give it the parameters, it pulls together a nice playlist based on beat matching and pitch, and it says, here, play this. And then I still have to perform the function.
 

Just a different spin on all of the heavy duty AI techie [00:55:00] talk.  
 

Sean Martin: Pun intended. Spin 
 

Taiye Lambo: So Dar, you're not worried about that AI replacing you as a dj?  
 

Darrell Hawkins: Well, you know, it's funny, because that is a big concern of a lot of DJs, that you can put AI in a, you know, studio, right? You can put AI in a recording studio or a radio studio, or in a DJ booth, and just
 

feed it parameters and it plays automatically. Uh, but there's still that human factor. So, right, people are different, right? We know what we like and
 

Sean Martin: Yes, until we become numb to it, right? Except that I have this theory of the lowest common denominator: we all end up there and we're okay with it, and it sucks.
 

So maybe you can have  
 

Taiye Lambo: Darrell.ai or DarrellHawkins.ai, that is like a clone, a digital clone of you.
 

Darrell Hawkins: Exactly. [00:56:00] How do you know I'm really here now?  
 

Sean Martin: Nice. I want to share quickly, Aric, on this point, because one could imagine one use case for us, which I'll go ahead and share. We use AI to kind of summarize and find the key points
 

we have during these conversations, to summarize and find interesting points. We also use it because we have a network of shows where we have a very strict rule of not having promotional content in thought leadership conversations. So we check for somebody trying to pitch their company or product in a show when they shouldn't be.
 

So we can identify those. A number of months back, and I've talked about this in other episodes, we started to build an AI bot to help people find conversations that we've had [00:57:00] and then find key points from those conversations to help them learn. Um, we chose not to do that for two reasons.
 

One, um, errors. It was not accurate. It was attributing quotes to people that were not quotes, and there's no me in between that engagement with the, uh, end user to verify it. So I didn't want that inaccuracy. And the other was cost, going back to the cloud. Um, if somebody misuses or abuses my bot, who pays for it? This guy.
 

So those two risks made me decide not to pursue that particular project.
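
The first risk Sean names, quotes attributed to people who never said them, is the kind of thing a simple guardrail can catch. Below is a minimal sketch of that idea in Python: before publishing a generated quote, verify that it actually appears in the source transcript. This illustrates the concept only; it is not how ITSPmagazine's tooling works.

    import re

    def normalize(text: str) -> str:
        """Lowercase and strip punctuation/extra whitespace so cosmetic
        differences don't cause false rejections."""
        text = re.sub(r"\s+", " ", text.lower())
        return re.sub(r"[^a-z0-9 ]+", "", text).strip()

    def quote_is_grounded(quote: str, transcript: str) -> bool:
        """True only if the quote appears verbatim, after normalization,
        somewhere in the source transcript."""
        return normalize(quote) in normalize(transcript)

    transcript = "We can kiss privacy goodbye in another five years."
    print(quote_is_grounded("kiss privacy goodbye", transcript))     # True
    print(quote_is_grounded("privacy is dead forever", transcript))  # False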
 

Aric Perminter: Interesting. You know, Sean, I did have a question for you specific to this. Um, you know, a lot of the actors and folks in the entertainment business worry about AI kind of replacing them, uh, sort of along the theme of replacing the DJ as [00:58:00] well.
 

But, Sean, is there a risk of AI kind of outthinking you, right, the content that you have been successful generating over the years, by just training itself on your content? Is there a concern for you there? And if so, what are you thinking about that?
 

Sean Martin: So I'm not too concerned. 
 

I, uh, I think, because it is multiple people... I don't know, AI will eventually, perhaps, but this is a conversation with five people thinking on the fly, responding to each other. Um, I almost never prepare for a conversation, and therefore my thoughts are very random, and sometimes they're way out there.
 

So you can try to figure out what the heck I'm thinking; good luck with that. [00:59:00] Um, it may, over time, of course, be able to do that. But I think multiple people... um, and for the, uh, actors and actresses, I think, um, yeah, Marco had direct experience with that, uh, with his spouse being part of that whole strike.
 

And I think it comes to your point, uh, was it you, Darrell, that said: money's the answer, what's the question? Right? I think it, you know, all boils down to who has control and where the money flows. Well said, well said. And I know we're, uh, kind of at time here, and I'm going to summarize in a second, but for you, Aric?
 

Aric Perminter: Ah, for me, we are actually utilizing it in, uh, sales operations, as Pam indicated.
 

Um, our specific use case is around, [01:00:00] uh, looking at order-winning, uh, proposal language and, uh, pulling those winning themes forward into future responses, but building upon those responses by leveraging AI to lean forward on top. And it's been a true needle mover for us, both from the perspective of positioning higher-value services with, um, our customers,
 

but, more importantly, better defining how we operate and what we're positioning as a company. So it's making us a bit smarter, if you will, from a sales operations perspective.
 

Sean Martin: Love it. And I think if you're going to have a conversation about that proposal, you have to be able to speak to it. You do, you do.
 

Pam Kamath: And I think that's probably where we will see us all heading with this: how can [01:01:00] we be smart as a result of this tool? Um, but I think there will always be a spot for the human element. It's just how we are made. And that gives me confidence that, no matter what, we won't have a situation where, um, AI will practically rule over some of the things that we've been doing for the longest, because we as human beings, we value, no matter what, uh, that human touch.
 

Um, and also, we hear more about soft skills, soft skills being, um, the most important factor in future leadership and things of that nature. And I think, um, I cannot emphasize that enough.
 

Aric Perminter: Yeah. You know, and for me, Pam, I think this is the first time that I've personally been willing to relax [01:02:00] some of the
 

controls that I place on my data, for the benefit of the efficiencies that it's going to deliver. Um, and I think the more value it continues to deliver, the more relaxed I'm going to continue to be. Um, but so far it has probably been able to increase my personal productivity by about 25 percent, and our team's overall productivity probably by about 40 percent, realistically.
 

Pam Kamath: That's pretty much what I hear as well from some of the people that I speak with.
 

So you're very spot on with that, because people are starting to see that they are working smarter and not so much harder. Um, and obviously that's the shift we want to make. Um, the reason why I [01:03:00] say that, there are two things. One is, not only will it make us smarter in, uh, you know, how we will be positioned, um, in front of our own customers and vendors or providers,
 

but also, it's the creator's economy right now. We can create, you know, amazing things with this tool. Um, I use it because I love cooking, and it's amazing to see that I can say: I only have these three ingredients, give me five different options, and this better be a global cuisine.
 

And it baffles me each time. You know, it's like, oh my goodness. Because I've been cooking for the longest time, that's my passion, I always keep thinking that, you know, of course I have all these cool things I can do, but I am actually amazed at what it's able to do for me. And I'm like, oh, this is amazing.
 

So, it's fun. It's fun, actually.
 

Darrell Hawkins: Something that [01:04:00] was probably inedible to anybody but myself.
 

Sean Martin: I'd have to have the technique; baking is different than barbecuing, I think. Well, this has been fantastic. I think what I took away from this: clear use cases. I think we kind of closed with that as well,
 

and, uh, with a view to what the outcomes are: what are we trying to achieve with it? And to your point earlier, Pam, kind of scoping that, taking it directly to something that we can really focus on, so that we're not trying to boil the ocean there, I think really helps. And then we can switch to looking at risk tolerance.
 

And then I think some of the other points we made were: leverage what we already know, right? And what can we learn from the past, where we've encountered something like this before? I think with that, we have some guardrails, and I think we're going to have many more conversations. I think some ideas on what [01:05:00] guardrails might look like would be a good one next, right?
 

So, um... Indeed. Indeed. Well, Pam, Aric, Darrell, Ty, thank you so much for a fantastic conversation. I hope everybody enjoyed it. And, uh, the goal is to get everybody to think, right? I'm certain we did that today. So thanks, everybody, for listening and watching this episode of Redefining CyberSecurity here on ITSPmagazine.
 

Thank you, panelists. Thank you, everybody. Uh, please do subscribe, share with your friends and enemies and, uh, stay tuned for more. Thank you all.  
 

Aric Perminter: Thank you, Sean. 
 

Sean Martin: Thanks, everyone.