In this Brand Story episode on the ITSPmagazine Podcast Network, Terry Ray, SVP of Data Security at Imperva, discusses insider threats, AI in data security, and the importance of human intelligence and oversight.
In this Brand Story episode, hosts Marco and Sean discuss data security and insider threats with their guest Terry Ray, field CTO and senior vice president for data security strategy at Imperva. The conversation covers a range of topics related to data security and the challenges organizations face.
Terry highlights the need for clear policies and strategies to detect and prevent insider threats. He points out that while organizations may trust their employees and contractors, people are not always security-minded, which can lead to trouble. He also mentions the presence of malicious individuals, although they are fewer in number.
Terry shares statistics that reveal a gap between organizations' perception of their data security and the reality that many lack comprehensive strategies. The trio also explores the potential of AI in data security, with a focus on the limitations of AI in making complex decisions.
Terry emphasizes the importance of human intelligence and oversight, arguing that AI is not yet capable of determining the best course of action in certain scenarios. He gives an example of using AI to compare web application firewalls and points out that AI may not have the context or intelligence to identify what is missing if it hasn't been done before.
The group also discusses the balance between security and convenience, particularly in areas such as the medical field. They consider the advantages and risks of feeding AI with medical data and the potential for AI to find solutions that humans may not have considered.
The conversation sheds light on some important strategies and best practices as well. To dive deeper into this topic and gain valuable insights from industry experts, we encourage you to listen to the full episode.
Note: This story contains promotional content.
Guest: Terry Ray, SVP Data Security GTM, Field CTO and Imperva Fellow [@Imperva]
On Linkedin | https://www.linkedin.com/in/terry-ray/
On Twitter | https://twitter.com/TerryRay_Fellow
Resources
Learn more about Imperva and their offering: https://itspm.ag/imperva277117988
Press Release: Shadow AI set to drive new wave of insider threats
Blog: 7 Facts About Insider Threats That Should Make you Rethink Data Security
Research: Forrester Insider Threats Drive Data Protection Improvements
Are you interested in telling your Brand Story?
https://www.itspmagazine.com/telling-your-story
Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.
_________________________________________
[00:00:00] Sean Martin: Marco,
[00:00:02] Marco Ciappelli: Sean,
[00:00:03] Sean Martin: you have 30 seconds.
[00:00:04] Marco Ciappelli: I have three seconds? What? Look around?
[00:00:07] Sean Martin: Oh. Three. I said 30. You said three. If you wanna go with three, that's fine by me. You're really gonna wanna use some new tools if you have to do something in three seconds,
[00:00:17] Marco Ciappelli: I don't know. Um,
[00:00:19] Sean Martin: and, and be creative and, and have it sound real.
[00:00:22] Marco Ciappelli: I think there is just one choice.
[00:00:24] Sean Martin: One choice?
[00:00:25] Marco Ciappelli: Yup, but it makes me really nervous. It's using AI. And I, you know, I look around and I feel like there's a lot of eyes looking at me and shadows lurking in the back of the office and, um, I don't know. I'm kind of nervous to use this AI thing.
[00:00:42] Sean Martin: There are shadows in the presence of things.
It's nothing new either. And we're going to talk about that today. The history of shadows in technology and people running around. What the heck is this all about? Well, thankfully we're going to have our good friend, Terry Ray, join us on the show. Good to have you on, Terry. You're going to help us understand what shadows are and, specifically, shadow AI.
[00:01:09] Marco Ciappelli: I heard, I heard you're an expert in shadows.
[00:01:12] Terry Ray: Well, I have this little cool game where you put like a flashlight and you put things on there and you got to hide your guys behind it. I play with my kids. So yeah, it's a very shadow oriented game.
[00:01:22] Marco Ciappelli: Very cool.
[00:01:22] Sean Martin: Shadow Hunter.
That's right. All right. Before we get into it: you've been on the show before, so folks likely have heard your voice. Well, let's refresh it anyway. Terry, tell us what you do at Imperva, and maybe the catalyst behind this topic today. How did this come up?
[00:01:42] Terry Ray: Yeah, sure. Sure. So for those of you who aren't familiar with me, I've been with Imperva for about 20 years.
I think going on 20 and a half or so years at the same place, focused strictly on data and any access to data. It doesn't matter what kind of data it is, whatever is relevant to you. And so my role has ranged from being somebody who goes out in the field and chats with customers, which I still do today, all the way to executive staff, CTO. My formal title today is field CTO and senior vice president for data security strategy, but it just means that I spend a lot of time talking with
radio personalities and end users and customers and everybody who wants to have a chat. So I spend a lot of my time talking about things like this. Shadow AI is one of these things that has been top of mind for a lot of people. Anytime you want to be part of any event over the last nine months, all you have to do is use
AI, and I promise you, your presentation will be accepted. Everybody wants to hear about it. It's so new. And of course, there's the flip side of that, which is all of the AI you simply don't know about, what's going on in your organization. And I think that's what we're going to have a little bit of a chat about today.
[00:02:56] Marco Ciappelli: Yeah. And when we were kind of preparing for this, not that we needed to, because we already knew what we were talking about. Because history repeats itself, as Sean said at the beginning, right? When we were talking, I thought about, you know, bring your own device, bring your own shadow, bring your own AI.
And now we're back here a few years after. And it's not just that. So, you know, let's look back a little bit into this trend, which may be a cool one or not, depending on what side you're looking at it from. Maybe from the corporate security side, not so much.
[00:03:32] Terry Ray: Yeah, if I look back at what I was talking about a year ago, I was talking about shadow APIs.
And API security is super hot right now. So many organizations have a gap around APIs. Not what we're talking about today, but two of the three letters are still there, right? So the reality is it hasn't changed a lot. It's just what you're really trying to protect. And I think that's the big case.
You know, if you rewind a couple of years ago, we were talking about shadow data, shadow databases, shadow whatever. There's always this: I've got something new, and this is what the organization wants to use, the way they want to use it. And then there's this other stuff that either I want to use, or makes my job easier, or benefits me in some other way, or the organization.
And if I use it, maybe I don't have to tell everybody, because I don't think it's a risk. I don't think it's a big deal. I just do this to make my job easier. I'm benefiting the company. And, you know, I'm introducing some element of risk that wasn't there previously and has now been introduced into the organization. Because let's face it:
most organizational employees, contractors, and partners are not security focused first. They're functional focused, they're efficiency focused, they're "help me do my job," "wow, that's easy" focused. And so I think organizations are starting to wake up to all of these shadows that exist in their organization.
And I promise you there'll be one after AI. I don't know what it'll be yet, but it's the flavor of the day for sure.
[00:05:02] Sean Martin: I know, but I'm not going to tell you, Terry. I'm saving it for the next one.
[00:05:06] Terry Ray: Okay. Thanks, Sean.
[00:05:07] Sean Martin: Well, I want to go back to some examples, where we look at the world today and we say, of course we use that. Shadow IT, a number of years back: we can look at Dropbox and Box, for example, and the driver behind some of those tools and services that end up putting data from within the organization out on a web server somewhere that isn't on your own premises. The need for those was driven by business, right? Users needed to share data with each other and with their partners and with their customers and with their prospects. And sometimes those files were too large to send through email.
So existing systems couldn't cope with the new information being created and developed and shared. How does that relate to what we're talking about here in terms of AI? Are we seeing similar drivers? Are current systems not capable, or is it really just that it's the shiny new thing and everybody wants to play?
[00:06:17] Terry Ray: I think across the board, the answer is probably yes, but the examples that I've seen have been around efficiency, making my job easier, or saving the company money in some cases. Take the example, there was an article about a marketing department, and the marketing department had presentations
that they traditionally had to outsource to third parties to translate, or even transcribe and translate into other languages, or just transcribe down to some format. To do that, I need to put that presentation and all of its content into somebody else's environment and have them go do that work and pay them. Now with AI,
you may have to pay a service fee for the AI, but you don't need to pay the transaction fee or the "have them go do this process" piece. Just give it all the data. And without realizing it, all of a sudden you have a marketing department who feels like they've done a great thing: I've saved potentially thousands of dollars of all of these transcribing fees and translation fees, and now I've got exactly what I need. But now I've also taken what may very well have been a very private, or maybe internal, or maybe an investor or pre-investor presentation,
translated it, and now it's available to anybody who's going to be able to have access to it. So I've now exposed my entire organization to risk. It was about efficiency. It was about making my job easier and about saving the company money. But at the same time, I didn't realize that by putting my information out into an AI world, that now becomes learning material. That now becomes material that the AI can use to answer back when somebody says, tell me about this business and what they're doing today.
And it comes back and says, well, here's the business, here's what their numbers are for this quarter, and they haven't even been released yet. And now you've got an issue.
[00:08:03] Marco Ciappelli: So here's the thing. We talk about this in education. We talk about this in work, movies, writing. People are on strike right now. And the big question is always: are we gonna use it? Are we gonna not use it? Are we gonna ban it? Maybe we can't ban it, because it's there anyway. You're not gonna stop this train; it already left the station. So what does the corporate environment need to do right now, without being too late? That's what I'm saying. And we know that they're gonna adopt it.
I mean, I think there's no question about it. But right now, before they're ready, what should the policy be?
[00:08:48] Terry Ray: Yeah, I think organizations, and this is my opinion, obviously, right? But I think organizations really need to have a policy in place that dictates how that organization wants to leverage, or not leverage, AI.
And I think one of the things they have to be careful of is saying you just absolutely cannot use it. If you want to say that, that's fine. There's your policy, and if you have proof of a violation, then you can take what action you want to. I think most organizations today are taking a little bit of a gray-area approach, saying:
we would prefer you not to use it without coming to a specific group or specific person or team or what have you first. Tell us how you want to use it. Tell us what information you intend to put into it. Let's work together, between what you're trying to achieve and the risk, and make certain the organization is safe.
We're happy to have you do it. That's perfectly fine. But let's have that conversation first. Don't do it on an island. Let's come together and have that conversation. I think that's what most organizations are looking to achieve today. And I can't say that that's what successful organizations are doing today,
'cause I think it's too early to say that, but that's what I'm seeing from most organizations, my own organization included, right? Being able to say we're open, we're innovative, we want to use new things, and we want to do it safely and as risk-averse as we can.
[00:10:11] Sean Martin: We keep saying API. I think, one, it's on our tongues, and two, it is connected to this as well.
And maybe some thoughts on perhaps using APIs. Clearly, somebody can go directly to a web interface, enter the prompt, share information with it, and get a response back. But there are also APIs to do the same thing, which could be driven by some other tools or automation. And I'm just wondering, is there an opportunity for organizations to say, let's not use the open interface?
Perhaps we build our own interface that leverages the API, using our own API calls, and we then add some filters or some rules on that that say: if we see anything that looks weird, we're going to either prompt to confirm, or prompt to say we're not letting this fly, or we've redacted it, or some other action in there.
Do you think there's an opportunity for organizations to use it in a way with some guardrails, or is it too early to do that yet?
[00:11:21] Terry Ray: Yeah. Those are the words I would use, right? It's guardrails. It's a lot like the geofencing and the other things you would use when you want to keep things in an area.
And I think the same thing happens here, right? So as an organization, if you're going to create a contract, if you're going to say, yes, let's use some AI, then to your point, either have your own corporate access interface tool, or go through VPN or SSO or however you want your people getting to that asset or that resource, and let them go do that.
Your contract with whatever AI vendor or large language model provider you're paying needs to set certain guidelines in place. You need to make certain those guidelines are in place, such that if I have employees putting data in, I do not want my data opted into your large language model.
I want to be able to do what I need to do and use what your large language model has learned and what it knows from the public domain, possibly, in my answers. Fully recognizing that, regardless of what my limitations or my guardrails are, those answers may be questionable, and we have to validate answers regardless. But I want to make sure that whatever we put in does not go into the bucket of public domain information.
It stays within our bucket. I think as long as you have those kinds of controls, and you trust your AI vendor to make certain that if that information is brought in, it's not added to the model, then I think there's some safety that could be there. But that trust has to be built, and built in, and agreed upon, I guess. Or paid for.
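To make that concrete, here is a minimal sketch of the kind of corporate gateway Sean describes and Terry endorses: screen each prompt against guardrail rules before it ever reaches the contracted model. Everything here is hypothetical for illustration; the patterns, the `forward_to_llm` placeholder, and the policy choice (block, confirm, or redact) would all come from the organization's own contract and DLP rules, not from any specific vendor's API.

```python
import re

# Illustrative guardrail patterns only; a real deployment would reuse the
# organization's own classifiers and DLP rules.
GUARDRAIL_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_codename": re.compile(r"\bPROJECT-[A-Z0-9]+\b"),  # hypothetical naming scheme
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact anything matching a guardrail pattern and report what was hit."""
    hits = []
    for name, pattern in GUARDRAIL_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

def forward_to_llm(prompt: str) -> str:
    """Placeholder for the call to whatever contracted LLM API is in use."""
    return f"(model response to: {prompt!r})"

def gated_completion(user: str, prompt: str) -> str:
    """The corporate interface: screen, audit, then forward the cleaned prompt."""
    cleaned, hits = screen_prompt(prompt)
    if hits:
        # Policy decision point: block outright, prompt the user to confirm,
        # or forward the redacted text. Here we log and forward the redaction.
        print(f"audit: {user} tripped guardrails: {hits}")
    return forward_to_llm(cleaned)

print(gated_completion("alice", "Summarize PROJECT-X9 pricing for card 4111 1111 1111 1111"))
```

Whether a tripped rule blocks, asks for confirmation, or silently redacts is exactly the policy conversation Terry says has to happen up front, in the contract and in the approval process.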
[00:12:55] Marco Ciappelli: So let's feature some scenarios, some case studies, let's say, of what could represent this kind of hybrid line where it's public, it's not public, meaning we've got to use it somehow. Because we always go back there: you can't just lock everything, right? There are certain things you need to use to run your business.
Your people need to use it. So a department may need to say, can I have a special permit to access this and use it with AI? Do you have some examples in mind of the way this is presenting itself right now?
[00:13:35] Terry Ray: Well, I mean, the thing that pops into my head, coming from a product side of the business, is code review.
So I need to use the expertise, I need to use the experience of the AI having looked at a lot of other code, a lot of other people's development tools and everything that they've done. And I want you to look at my code; therefore, I need to give you my code, but I definitely don't want you putting my code out there.
I just want you to use your expertise from other people's code and apply it to mine, right? And so I can see, I have seen, and I continue to see more and more development organizations taking advantage of this, especially for bug fixes and other pieces that can normally be very time consuming. They can put that code in directly and let the large language model, the API, the GPT, bring information back and say, here's the corrected code.
It's probably not something as simple as syntax, right? But it's being able to make my code more efficient by doing the following things, or leveraging this algorithm versus the one you have. I think those are pieces that have to be there, but it goes back to that trust. And I think
one of the things on the security side of this, there's the whole: do I use it? Don't I use it? And what information do I put in there? But on the back side of it, from an organizational perspective, it goes beyond just saying, I'm going to give you permission, or I'm going to give you a policy or a process.
All of those are fine, and those are actionable if somebody violates them in one way or the other. But it takes us down this path of: how do you know when somebody actually violates your policy? Or says, I didn't go to the council, I used it anyway, and I did put information out there that I shouldn't have. I shouldn't even have had access to this information in the first place.
How do you try to identify that? And that's where we really start to have the problem. We hope that we can trust all of our employees and contractors and everyone else. But history has given us multiple examples: it's not that you don't trust them, it's that people are not security minded, and therefore they can get you into a lot of trouble just by trying to do something great for the organization.
And then you have the malicious people that you do have to worry about, but frankly, there are fewer of them. The problem you run into is some statistics, and these statistics came from Forrester and a few other services. One of them is that 64% of organizations today already believe that they have enough in place to be able to detect this kind of stuff, to be able to detect what we would consider an insider threat, an insider risk, if you will.
The other piece that comes with it is that 84% of organizations, in fact, have no insider risk management strategy or program. So I've got 64% of organizations that say they've got exactly what they need, they don't need anything else to go solve this problem, yet 84% of organizations admit they actually don't even have a strategy or a program.
These just don't add up, right? The math doesn't add up when you start to look at that.
[00:16:35] Sean Martin: Well, it does, but not the way you want, or we would hope, I think. Let's talk about, and this may be a little philosophical, 'cause you mentioned there is the option to use the shared data and not contribute to it.
And then you also touched on the point that you have to validate the response or the answer that you get back. And all I'm thinking here is garbage in, garbage out. If we have a common denominator that everybody's using and not contributing to, we end up with this world where everything looks the same. That's my philosophical view on technology in general: everything looks the same.
We end up in a world where everything comes and goes like everything else, good or bad. So I think, to Marco's point earlier, you need to leverage technology if you're going to get ahead. Those that don't are going to fall behind. But if the technology hits a plateau, everybody who's advancing hits that plateau with the technology, and we end up not moving forward.
So I don't know, back to the business context and the use cases you're experiencing: do you find organizations trying to figure out a way to leverage and advance, but recognizing that it might have some of these limitations? Not just errors in the response, but limits in what they can do with it?
[00:18:04] Terry Ray: Well, the three of us were talking about travel earlier, and you mentioned it's pretty good at being able to say, go to points A, B, C, D, and E, go look at these things. But the first question you ask is, which is the best route? Which is the best hotel? Which is the best one?
Well, that's an opinion, right? That's maybe being creative, and it may change day to day, and it's based on each individual's rating, whatever, right? I think the same thing applies here to business. When you're thinking, okay,
I want you to compare my web application firewall or my data security to my competitor's data security program, and then I want you to tell me what we could do to be better. What feature am I missing that we need to create to improve security, not just for my customer, but over my competitor? I don't think AI is remotely ready for that yet. It's not there yet. Maybe AI creators would disagree, but from what I've seen,
that's not where it's at. I think it would come back, and as you guys noted, if you say, tell me the best path on this trip, it's going to come back and say, can't do that. I think it's going to come back and say, I don't have the level of context, or even the intelligence, I think, to be able to say, here's what you're missing in that environment,
if what is missing is something nobody's ever thought about before. It's only missing if somebody else has already done it and you simply don't have it. And that's fine. That's a comparison. That's different.
[00:19:30] Marco Ciappelli: To be honest, and I don't want to drop a bomb here, but I don't think us humans really know what is best in general.
So let's put that in there too. So that's a lot of pressure on AI here, in my opinion. Exactly.
How about security versus convenience? And not just convenience in terms of, you know, buying stuff or being able to do things faster, but actually crossing the line. Look, I'm thinking of the medical field, right?
I mean, the more you feed the AI, and we already know it's way better than humans at aggregating and finding the data, the more it's probably going to give solutions that we never even thought about. So again, what is best? What is the level of risk versus the advantage that you get by taking this risk?
So, some thoughts on that.
[00:20:33] Terry Ray: Yeah. Well, I think healthcare is going to be like any other industry where, you know, they're going to use it. They are using AI, and they're using AI to do some very interesting things. I think one of the things about the health field, unlike, say, financial services, what's the word?
I suppose in financial services, you have to be able to bring all the information together and interpret that information in ways to understand the economy and everything else, and how that's going to impact dollars and cents. In the healthcare industry, and I'm simplifying it, I get that, so if you're in healthcare, I apologize, there's a lot of information that we already know.
It's about putting these pieces together and recognizing where they fit, and where they don't fit in some cases. And there's just such a vast amount of information. If you look at the genome, I should say, not genealogy, but the genome, all of these different pieces and all of the elements that go in there,
I think that's a great model for analytics and, more importantly, AI to go in and figure out how each and all of those pieces fit together and where the similarities are, whether it be antivirals, epidemiology, or whatever it happens to be in that space. So I think there's a lot of use for it.
There's a lot of research out there that also is very private research. But a lot of that research is pretty well shared, too. Most medical research is very private right up until you need it to be peer reviewed, and then you really want everybody looking at it. You want everybody to see it. You just don't want to show it to them ahead of time. Pharmaceutical is a separate thing, right? How do I solve and make a drug and all that sort of thing? So each different facet and element of healthcare is a little bit different in that regard. I know physicians for sure that are already using it.
Not just for research, but for simple tasks like helping them make their email sound a little bit smarter than it did before. And it's not a simple task, and there may be nothing private in there, but at the end of the day, there may be something HIPAA related, especially if you're doing something where you have patient information, names, or other things. They simply have to be aware of it.
And I think one last statistic, which is interesting as we start to think about these controls that we have, and recognize that it really doesn't matter which industry you're in: AI has the ability to streamline some aspect of someone's job or someone's performance in that organization. The statistic is that 55% of insiders
have created ways to circumvent data protection. And so we know, organizationally, there are people who are going to say, I'm going to use this, because otherwise there just isn't enough time in a day for me to do what I need to do. And I think what I'm doing with AI isn't risky, because I know better. And I think you're going to find people doing that.
And that's why it's so important for organizations: they have to be looking at their data. If I had to summarize what organizations need to be targeting here: you need your policy, but also your data security program. Those
64% of people who say they have everything, yet fully admit at the 84% level that they actually don't have a plan or strategy,
need to recognize that if you don't know where your most critical assets are, i.e. the data that you don't want put into AI, if you don't know where that is, you'll never know if somebody uses AI. If you're not watching where people gather that data and pull that data, wherever that data is, again, you won't know if they're using AI.
And lastly, if you don't have some ability for those things to pop up and say, hey, by the way, security, somebody just took a large amount of, let's say code is the example, a large amount of code; they're fully authorized to use it, they look at it every day, but they don't typically pull it down to their system,
and they pulled it down to their system and used it somewhere, we saw it go out somewhere. How do you know about that? How do you go find that? You have to be watching the activity on the data. And this is, I think, the big gap that organizations have today: even before AI, organizations couldn't answer this.
This is just another reason, another driver, another pain point, which I talk about all the time. Organizations typically don't do data security unless data risk is painful for them. And AI is going to begin to make it painful. Has it been painful for a lot of organizations yet? Not as far as they know.
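As a rough illustration of what "watching the activity on the data" can look like in practice, here is a toy sketch: build a per-user baseline from access events and flag pulls that sit far outside it. The event shape, field names, and threshold are invented for the example; a real data security platform would feed this from database activity monitoring and use far richer peer-group analytics.

```python
from collections import defaultdict
from statistics import mean, stdev

# Toy access-log events: (user, table, rows_pulled). In practice these would
# stream from a database activity monitor, not live in a list.
history = [
    ("alice", "source_code", 40), ("alice", "source_code", 55),
    ("alice", "source_code", 35), ("bob",   "source_code", 50),
]

def build_baseline(events):
    """Collect each user's historical pull volumes."""
    per_user = defaultdict(list)
    for user, _table, rows in events:
        per_user[user].append(rows)
    return per_user

def is_anomalous(user, rows, per_user, sigmas=3.0):
    """Flag a pull that sits far outside the user's own historical volume."""
    seen = per_user.get(user, [])
    if len(seen) < 2:
        return True  # no baseline yet: surface it for human review
    mu, sd = mean(seen), stdev(seen)
    return rows > mu + sigmas * max(sd, 1.0)

per_user = build_baseline(history)
print(is_anomalous("alice", 5000, per_user))  # True: ~100x her normal pull
print(is_anomalous("alice", 45, per_user))    # False: routine access
```

The point is not the math; it is that the alert can only exist if the access is being recorded at all, which is the gap Terry describes.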
[00:25:10] Marco Ciappelli: Sean, can I use an acronym for once?
[00:25:13] Sean Martin: Go for it.
[00:25:14] Marco Ciappelli: D.L.P.
[00:25:14] Terry Ray: Look at that.
[00:25:16] Marco Ciappelli: Ah, I'm prepared.
[00:25:18] Sean Martin: Data leak. Data loss prevention.
[00:25:19] Marco Ciappelli: Data leak prevention. Let's talk about that.
[00:25:23] Sean Martin: Well, let's, yeah. And maybe, I certainly don't want to expose any names or scenarios that would put any particular company at risk, but in some of the conversations you have, Terry, where the executives realize they need to
take a stand on data protection, what are some of the drivers for that? I mean, you mentioned HIPAA in the healthcare space. If they recognize they're gonna get a huge fine, that might be a driver. If something pops up where IP is exposed and they find out about that, that might be a driver. What are the conversations like with some of the companies you're talking to when they say,
all right, enough is enough, we need to do something, help us get a handle on this? Where do you take them?
[00:26:14] Terry Ray: So it's interesting. This gets right down to the core of every industry's data security problem. It gets down to the core of why we still see organizations that we think are spending tens of millions of dollars on security.
Why do we still see them lose data? It just doesn't make sense. Why do we see organizations that we trust with our private data, that are multi-hundred-billion-dollar companies, still lose data? We see that because there are really two primary reasons why an organization says, I need to do something different
in data security than I'm doing today. The first is the most primary, most primal reason an organization would do it, which is: somebody else told me I had to. Compliance. So the only reason I'm doing it is because somebody else walked through my door and said, if you don't do this, mom and dad, here's what's gonna happen to you.
You're not going to take credit cards anymore, or it's going to cost you more money, or I'm going to fine you, or whatever. If you don't go do these basic, rudimentary things, you're going to get slapped pretty heavy. Or we won't do business with you. That's right.
Sadly, that one's actually a little less effective: I'll still do it, because if I don't do business with you, I'm going to lose money, right? So that's a whole separate issue. But yeah, you're right. And I think the other piece, which follows pretty largely behind compliance, is "I got hacked." And "I got hacked" can include
I got hacked, or my neighbor got hacked and I don't want to be like him. My board is asking me, how do we make sure we're not like company XYZ, because they're all over the paper, they happen to be literally down the street, and they're in the same industry. What are we doing not to be like them? And then the security department all of a sudden gets a little bit of budget.
They get a little bit of energy, and they go and do a little bit of work. What I will say, just about across the board, for every industry and every organization today that's not doing it right, and there are some that do it right, is that the ones that don't do it right say: compliance.
What do I need to protect to be compliant? Is it credit cards? Okay. Well, I have two credit card servers, and I'm simplifying this, two servers that house credit cards out of the thousands of databases that I have. And so I'm going to put my controls where? On the two servers where I have credit cards. And everywhere else doesn't really matter, because at least the guy or the lady who walked through the door and said you have to go do it, they're going to be happy, because they're not going to look at the other 998.
They're looking at those two. From a security perspective, they're going to say, what kind of data did they lose next door? They lost, I don't know, names, addresses, and phone numbers. What are we doing about names, addresses, and phone numbers? Let's go protect where we think, or we know, we have names, addresses, and phone numbers.
Point is, they're very, very siloed in terms of where they apply their controls. And in 20 years, it still drives me nuts; it's probably why my hair's gray. You can't see it on the podcast, but it is. And this is the thing that drives me nuts:
organizations will spend a ton of money to solve for compliance and for these knee-jerk "I got hacked, let me go solve a problem" moments. And that's why it looks like they spend a lot of money on security, but why they still get hacked is because they have holes all over their infrastructure when it comes to data.
And that doesn't happen in network security. You mentioned DLP. In network security, they have a network firewall at the perimeter of every single segment and every single network. They have endpoint security, DLP, anti-malware, everything. Why would you put it on five
and not the other 10,000 laptops? You put it everywhere. You put it on everything. And then we get to data security, we get to the most critical asset in the organization, and we say, eh, we'll just kind of put it on a few little things over here. That's good enough. And that's the problem in data security, and what we're trying to achieve here.
That won't cut it if you're trying to detect shadow AI, because you have no idea what people are going after. It's not just about private, sensitive, or otherwise regulated information. It can be IP. It can be just an email that somebody's writing. You just don't know. You have to be looking at what people are using when it comes to the information in the organization.
[00:30:41] Sean Martin: Which, by the way, has access APIs, and there are tons of no-code app builders out there where pretty much anybody can build their own app tapping into your data sources and the AI to do their own thing.
[00:31:00] Marco Ciappelli: And I have a feeling, Sean, that the AI is gonna find stuff that you don't even know you lost under the couch.
So it's kind of like the data police show up: oh, you called us, what did they steal? And you're like, I don't know. I know that the thieves were here, but I have no idea what they took.
[00:31:19] Terry Ray: That's the worst answer of all .
[00:31:21] Sean Martin: I'm missing my half penny.
[00:31:24] Terry Ray: Because if you say, I don't know, you know, the next question is, what do you have?
I have a billion records. Then if you can't tell me it was one, two, or five, we assume it's a billion. And that becomes a very expensive fine. And in fact, it takes me, I know we've got to go here, but it takes me to this: there used to be these databases out there where you could see breaches, and you still can in some cases, and the number of records.
You will almost always see a beautiful round number on the breaches. It wasn't 1,555,226 records. It was 2 million records. Why? Because they honestly don't know.
[00:32:02] Sean Martin: Well, Terry, I'm going to make you do this, because you so eloquently described the failures of organizations. I'll summarize it: they trim the scope to only what matters to check the box, kind of leaving everything else exposed.
So that's the companies that kind of missed the mark, even though they ticked the box. Describe to us what a scenario looks like where they get it right. You said there are some that do. I presume some of the work that you offer to help them get it right is how they actually do that.
So describe those scenarios and how Imperva, and what you do, helps them protect data and all paths to it.
[00:32:49] Terry Ray: So look, at the end of the day, it comes down to three really simple things. It's not hard anymore. I get it, 20 years ago data security should, and accurately would, have been perceived as pretty difficult.
Data security today is, really, without comparison, very simple. All you need to do is leverage automation, and automation exists. Yes, we, Imperva, certainly offer these solutions. There are certainly others out there, if you, you know, don't want best of breed, and that's okay. When it comes to Imperva, from an Imperva perspective, at the end of the day, it's three simple things.
First, know where your data is. That's called classification and discovery. It's automated. Our system, anybody's system, is going to scan through your environment and say, you've got credit cards here. I know you knew you had credit cards there, but did you know you had credit cards in these other 25 places, and in dev and in test as well?
Did you know that? Great. Well, a credit card in dev and test, by the way, spends just as well as a credit card from production. They're the same credit card number. So: same scope, same security. Know where your assets are, and data is an asset. Secondly, everywhere you have those types of sensitive information, certainly you need to monitor.
Yes, absolutely you need to monitor that. But just because you, or even Imperva or someone else, says something is defined as sensitive information by a government or by a regulatory entity does not mean it's the only information you need to be monitoring. And I'll tell you a story about that in just a second.
There's a lot of other information you have to think about, and decide what else you need to monitor. And if you don't want to think about it, that's okay. Have a technology like Imperva's that can scale, and just simply say: monitor all access to all of my data. I don't even want to decide.
Just look at all access to it. And the last, third thing: tell me when somebody is using that data in a way that their peers are not using it, in a way that the APIs are not using it, or in a way that they don't normally use it. And let me give you an example. This is not necessarily super recent, and I'm obfuscating the organization and the country and all that.
Consider this. It's not considered private data when you think about the prison sentence of a prisoner. Okay, somebody got sentenced to 20 years in prison. And now somebody says, what if I were to pay a DBA to decrease that sentence by a year? What would somebody pay to have their sentence decreased by a year or two?
That is stored in a database, by the way. And if you've got 10,000 or 100,000 inmates, is one person really going to pay too much attention to a year that doesn't matter until 18 years from now, or 15 years from now? Maybe they will. Maybe they won't.
Will somebody pay for it? I will tell you, without question, somebody will pay a DBA to make those changes. And if you're not looking at something that was perceived as not really sensitive, a year, how many years this person is in, it's not even personally identifiable in any way, but by changing it, somebody was able to monetize it.
And this goes back even to phone plans years ago. Remember, you had your cell phone and you had a plan that was 300 minutes a month. Well, there were people in the phone companies who, if you or your buddies paid them a little bit of money, would just unlimit your phone number.
I'll give you another 500 minutes for five bucks, right? And so it's all that kind of stuff. Things the organization doesn't perceive as sensitive information, but the organization's losing money, or people are getting out of prison years earlier than what they would have otherwise.
I think it's hard for a security organization to wrap their heads around: well, what should they protect and what shouldn't they protect? My opinion, best practice? You can classify your data all day long. It is useful to know what's regulated and what's not, but at the end of the day, you have to put your controls around all of it, because you're not going to predict what in fact is important, not only to you and your organization, but actually monetizable to anybody else inside or outside the organization.
So it's better, and the successful organizations just say: I'm going to look at all of it, and I'm going to dump it all into analytics, and let analytics weed it out and tell me when somebody is doing something weird. Nobody changes those years. Those years pretty much stay the same unless the change goes through a specific federal law enforcement API.
And when it goes through that API, it's approved, because they're on parole or they've done something or otherwise. But a direct database user, an admin, changing a year? That doesn't happen. Nobody does that. That should be easy to see. And in fact, it is easy to see, if you're looking.
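Terry's "nobody changes those years" rule reduces to a very small check once the audit trail exists. A minimal sketch under invented assumptions: hypothetical audit-event fields, and a hard-coded approved service account standing in for the law enforcement API path; real analytics would learn the normal path rather than hard-code it.

```python
# Hypothetical audit events from database activity monitoring. Only the
# parole application's service account is expected to touch sentence_years.
APPROVED_ACTORS = {"svc_parole_api"}

events = [
    {"actor": "svc_parole_api", "action": "UPDATE",
     "column": "sentence_years", "via": "api"},
    {"actor": "dba_jsmith", "action": "UPDATE",
     "column": "sentence_years", "via": "direct_sql"},
]

def sentence_change_alerts(events):
    """Yield an alert for any sentence change outside the approved API path."""
    for e in events:
        if e["action"] == "UPDATE" and e["column"] == "sentence_years":
            if e["actor"] not in APPROVED_ACTORS or e["via"] != "api":
                yield f"ALERT: {e['actor']} changed sentence_years via {e['via']}"

for alert in sentence_change_alerts(events):
    print(alert)  # flags the direct DBA edit, not the approved API path
```

Easy to see, as Terry says, but only if the column is being watched in the first place.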
[00:37:43] Sean Martin: And I would imagine, pick your favorite data source.
You talked about code earlier. I'm sure there could be some really interesting use cases there, where the code acts a certain way when it's presented to a certain interface or device or person or identity or whatever. I'm thinking of shopping examples. I'm thinking, I guess, if somebody wants to be malicious, of the use of AI to determine, and perhaps even change, the way things work within the organization: which ways could benefit them the most?
The opportunities are endless, right?
[00:38:24] Terry Ray: They are. You mentioned guardrails earlier, right? So all of these tools that we've talked about today, your ChatGPTs and GPT-4s and all that kind of stuff, they all, as we all know, have guardrails already in place, right? To not give you certain kinds of information that they certainly have access to.
And then you have FraudGPT, and you've got some other GPTs that exist on the dark web as well, right? And those guardrails are off, so you can ask those whatever you want to, and it can get pretty hairy in terms of the response that you can get back. Now, a lot of those tools and the interfaces to them, full caveat, have their own malware and everything associated with them.
So you have to be very, very careful. I don't recommend anybody going there and doing that. All I'm saying is that the information, if AI has access to it, can be very damaging, I think, in a lot of ways. I could ask AI: give me the list of websites that might be susceptible to cross-site scripting or cross-site request forgery, or that have this very specific set of code in their API, and it can go do that. I mean, frankly, you can do what used to be called, I guess it's still called, Google dorking, right?
I could go into Google, do a Google dork, and type exactly what I'm looking for, and it will go pull it up. There's no reason AI can't do that on steroids and go tell you exactly where the vulnerabilities exist. And then you can target exactly what you want to target, in the way you want to target it.
[00:39:47] Sean Martin: Yeah, Shodan comes to mind there too. I think it's a super serious topic. I mean, we've had a few jokes here and there, but I think the point remains that the technology exists, people will use it, and companies likely need to embrace it to stay ahead. It might introduce hallucinations in the data, and limitations that they need to be aware of.
The bottom line is awareness. And to your point on some of the stats, to me that's an equation of being unaware and ill-informed. So being informed and aware of what could happen and how it could happen, and understanding that some guardrails need to be put in place regardless of the level at which you embrace it, is critical. And I think this isn't a call to be afraid of your own shadow, right?
It's a call to recognize you have a shadow. Know what it looks like in different shades of light, from different perspectives, and know that others can see your shadow. You have to know that. So, great conversation, Terry. I think we've given folks a lot to chew on here, and a few tips and some advice on how to start taking steps to put those guardrails up.
So any, uh, any final thoughts before we wrap?
[00:41:21] Terry Ray: The only thing I would say is, you know, don't be late to the game. If you haven't started, I'll tell you, you're late to the game. But it's never too late to get started. You've got to start looking at your data. You've got to know what's going on. It's that simple.
[00:41:36] Sean Martin: Very good. And speaking of data, we'll include some notes, or links, I should say, in the show notes for resources that Terry and the Imperva team think might be useful for those who want to have that visibility into what and where and how their data is being accessed, and help put those guardrails in place.
So thanks, Terry, for a great convo. Thanks, Marco, for shadowing me on this one, or me shadowing you, whatever. And thanks, everybody, for listening to this Brand Story here on ITSPmagazine.
[00:42:11] Terry Ray: Thanks y'all.
[00:42:12] Marco Ciappelli: Bye everybody.