ITSPmagazine Podcasts

Building Resilient Applications and APIs: The Importance of Security by Design to Ensure Data Protection | An Imperva Brand Story with Lebin Cheng

Episode Summary

Join Sean Martin and Marco Ciappelli as they engage with Lebin Cheng from Imperva to explore the complexities and solutions surrounding API security. Discover how real-time monitoring, behavioral analysis, and innovative technologies are transforming how organizations protect their sensitive data.

Episode Notes

In this Brand Story episode, hosts Sean Martin and Marco Ciappelli welcome Lebin Cheng from Imperva to discuss the ever-important topic of API security. As the head of the API security team at Imperva, Lebin Cheng offers a nuanced view into the challenges and solutions involved in protecting sensitive data facilitated by APIs. A central theme of the discussion revolves around API security's complexity due to APIs' role in digital transformation, cloud migration, and data integration. APIs act as a gateway for data interaction and integration, offering flexibility but also introducing significant security risks.

Cheng underscores that as APIs provide open access to critical data, they become prime targets for sophisticated cyber threats. These threats exploit vulnerabilities in API deployments, making robust security measures indispensable. Cheng highlights the importance of securing APIs not as a one-time effort but as an ongoing process. He discusses how Imperva employs real-time monitoring and behavioral analysis to enhance API security. By establishing a baseline of what constitutes normal behavior, Imperva can quickly detect and respond to anomalies. This approach goes beyond traditional, static security measures, which often fall short against dynamic threats that evolve alongside technology.

Additionally, the conversation touches on the notion of 'security by design.' Cheng advocates for integrating security considerations from the earliest stages of API development. This results in more resilient applications capable of withstanding sophisticated attacks. The discussion also notes the growing trend of DevSecOps, which emphasizes the collaboration between development, security, and operations teams to embed security throughout the software development lifecycle. Real-world applications of these principles are evident in various sectors, including open banking.

Cheng explains how open banking initiatives, which allow smaller financial institutions to access larger banks' data via APIs, highlight the necessity of strong API security. A breached API could expose sensitive financial data, leading to significant financial and reputational damage. The hosts and Cheng also explore how Imperva's innovation in API security involves leveraging artificial intelligence and machine learning. These technologies help in identifying and mitigating potential risks by analyzing vast amounts of data to detect unusual patterns that might indicate a security threat.

In closing, Cheng emphasizes the importance of continuous innovation and vigilance in the field of API security. He invites organizations to adopt a proactive stance, continuously updating their security measures to protect their data assets effectively. This episode serves as a compelling reminder of the critical role API security plays in today's interconnected digital world.

Learn more about Imperva: https://itspm.ag/imperva277117988

Note: This story contains promotional content. Learn more.

Guest: Lebin Cheng, VP, API Security, Imperva [@Imperva]

On LinkedIn | https://www.linkedin.com/in/lebin/

Resources

Learn more and catch more stories from Imperva: https://www.itspmagazine.com/directory/imperva

Are you interested in telling your story?
https://www.itspmagazine.com/telling-your-story

Episode Transcription

Building Resilient Applications and APIs: The Importance of Security by Design to Ensure Data Protection | An Imperva Brand Story with Lebin Cheng

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] Marco.  
 

Marco Ciappelli: Sean,  
 

Sean Martin: I remember that one time,  
 

Marco Ciappelli: that time,  
 

Sean Martin: that one time where, uh, you were in a museum and you, uh, you decided you wanted that painting.  
 

Marco Ciappelli: I wanted to touch it.  
 

Sean Martin: You wanted to touch the painting. I think, I think you had superglue on your finger or something. You walked off. They tried to take it back from you, but, uh, you  
 

Marco Ciappelli: know, that just brings me some memories when I was a kid in Florence, where we have a lot of museums, and, uh, I've been yelled at a couple of times by the security guards of the time, where maybe at the time there weren't even, like, many cameras or, or who knows what kind of, uh, technology that we have nowadays, but you had actually the guy or the, or the security person that would just yell at you. 
 

Sean Martin: So there's sometimes security technology, and I've, I've been yelled at, uh, of course I'm not there maliciously, but you're curious, you're, you're looking [00:01:00] in and you cross the line, the laser yells at you, or the person across the hall says, get away from there.  
 

Marco Ciappelli: Yeah,  
 

Sean Martin: voice of God comes up on the, on the, because they're watching from behind the scenes too. 
 

Anyway, I,  
 

Marco Ciappelli: The point is that you need to be there to watch the art that they promised you is going to be there, and you want it to be the real one. I don't want to go and see, that's right, the Botticelli that is a fake just because of security reasons. I'm going there to see the real one.  
 

Sean Martin: It's B-o-t-t-i-c-e-l-l-i. Botticelli here 
 

It's the real ones. Oh boy. Alright, so we're having too much fun here. Which we'll continue to have, but we're going to make some sense as well along the way. And that's why we have our guest, Lebin Cheng from Imperva. How are you?  
 

Lebin Cheng: Fine,  
 

thank  
 

you. Thank you for having me.  
 

Marco Ciappelli: It's good to have him on, he's the one that brings the, the, the, the [00:02:00] seriousness to this,  
 

Sean Martin: the, the, uh, the reality to, uh, to our craziness. 
 

Yeah. So yeah, for folks who are trying to figure out what the heck we're talking about, we're, we're looking at APIs and apps and how do we know that the, they're operating properly and not manipulating data or leaking data or being misused. And it's very difficult to do that. Um, and the, the reason we're talking about the, the museum is. 
 

The whole purpose of an app is to give people access. You want them to come in and explore and experience and be part of it. But that doesn't mean they have free rein and have a free-for-all, uh, ticket to do whatever they want in that environment. So, um, we're going to talk about what that means from a technical perspective. 
 

Before we do that, though, uh, Lebin, a few words about your role at Imperva and all the cool things you get to, uh, be part of there.  
 

Lebin Cheng: Sure, sure. My pleasure. So, uh, my name is Lebin Cheng. I, uh, I'm the [00:03:00] head of API security, part of the Imperva app security team, and Imperva actually is now part of a big organization called Thales Group. 
 

And I also came into Imperva through another acquisition. So it's almost like, you know, Imperva acquired my company and then it got acquired. And it's called CloudVector, which was a specialist in API security in the early days. And so I joined Imperva about three years ago, integrating API security as part of the Imperva app security suite. 
 

Sean Martin: You've seen a few APIs.  
 

Lebin Cheng: Yes, I do. And talking about a museum, actually, I myself actually like Botticelli and Michelangelo and those, and I was actually planning a trip to Italy and France, and I was exploring and I found out there was an app, uh, for you to actually look at, you know, art pieces and where they are, and, you know, give you a kind of a [00:04:00] virtual tour. 
 

And funny enough, that, uh, that app is running completely on web APIs. 
 

Sean Martin: Interesting.  
 

Lebin Cheng: Yeah.  
 

Sean Martin: Yeah. Cause I, I would imagine. I don't even know. Maybe some of, some of the data is, uh, sourced from public sources. Maybe some of the images and whatnot, but then all organized through an app that brings it all together, I suspect.  
 

Lebin Cheng: Exactly.  
 

Sean Martin: Interesting. So let's talk about, first we're going to level set here. 
 

Uh, I imagine a lot of the folks that listen to ITSP Magazine, certainly the, uh, the Redefining Cybersecurity show, know that apps are primarily or heavily driven by APIs and data sources that may or may not be your own and be on premises or not. Um, let's talk about API security and how that looks, uh, the current state of [00:05:00] things, um, given the explosion of the number of APIs and the services that are available for organizations to pull together to give that experience, right. 
 

To their partners and their customers.  
 

Lebin Cheng: Yeah. So, uh, you know, as we start talking things about, you know, museums and all that, you know, an API is a little bit like, you know, opening up the museum for people to get closer to your data. And, and a lot of times actually people adopt APIs for integration, for third parties to be able to come in and, you know, create their own experience, right? 
 

So that's why API is part and parcel of the digital transformation or digitization or the cloud migration. So that's why API, because API is so flexible, it's kind of like the lingua franca for, uh, for the automated components to talk to each other. You know, as I found out, that, uh, you know, even the mobile app for you to appreciate art was actually built on top of APIs, because you can, you know, fetch images, you know, [00:06:00] fetch, uh, you know, the museum's location or exact location from, you know, Google Maps, and all those interactions, it's almost like the museum, and you have different pieces, but now everybody puts it together. 
 

Right. And just like museum pieces, you know, a lot of the APIs enable you access to very precious data, just like, you know, you know, a multimillion-dollar painting, and you can get extremely close. But then the problem is, because, you know, there's a conflict of priorities here, right? The reason you open up the museum, the reason you open up the API, is for people to get flexible access, to get, uh, you know, um, get value out of it, but at the same time, bad actors can come in and take advantage of that environment. 
 

If you're not careful, and you want to keep the maximum flexibility, and that sometimes runs in conflict with, you know, how to secure that valuable piece, you know, uh, which is data in the digital world. [00:07:00]  
 

Marco Ciappelli: So I love this, this metaphor of the museum, because I'm also imagining in my head those old movies where you see the, the person that wants to steal the painting or the jewelry, right? 
 

That goes mixed with the crowd a few days before to plan, exactly, the day of, how am I going to execute that? Exactly. And it's the same thing, like you, you have this open environment where the bad guys are going to poke around as well. So how do you draw the line there, and what, what is the main, let's say, you know, the main rule of executing a control of the app and be sure that it's protected, that it's doing the job, and at the same time leave the door open for people to come and use the API? 
 

Lebin Cheng: Exactly. Exactly. And, and, you know, I, I've been doing a lot, in my past life [00:08:00] before the API security, we were looking at web application firewalls, etc. Right? So it's kind of, in the old days, you know, uh, the robber, the, the bad actors were very, uh, very apparent, right? You know, like, uh, say a SQL injection attack, you know, you would look at the request, it's very apparent, you know, it's a bad request, and you just need to collect signatures. 
 

Almost like, you know, the, the bad guys look like the bad guy, you know, but. Of course, you know, if you watch movies and not just using the museum analogy, you know, the bad guy is smart enough,  
 

you know,  
 

they, they sometimes do kick down the doors or brute force it, trying to grab the pieces, but the successful ones, usually they, they now learn, they want to blend in. 
 

Right. They, they always appear as if they're just a tourist or just a curious, you know, art, uh, uh, you know, uh, somebody who appreciates the art, and, uh, so that's the danger of APIs, is that APIs share some of these characteristics, because it opened up the access, [00:09:00] and you cannot actually tell, you know, a certain actor, sometimes they just probe the API, come in, you know, they find, you know, specific vulnerabilities to that application. 
 

You know, usually, even you can think about certain movies, right? You know, no matter what kind of, uh, you lock the door and you, but if there's any vulnerability to that specific location for that specific piece, you know, sometimes, you know, the bad guys can blend in. And just like a tourist or whatever, they, you know, snatch and, and to get your data away. 
 

And this is actually, you know, not an empty threat. You know, we actually know about a lot of incidents. You know, you can hear it from the news. Oftentimes bad actors come in and make normal calls, API calls, just like, you know, people coming in as a normal person, but find a vulnerability and get there. And that's the challenge for the security person, because you cannot just close down the door. 
 

You know, if you completely close down the door, then nobody comes into the museum, they cannot see anything. But when you [00:10:00] open it, then how can you tell? How can you tell what is the right way to appreciate it, and what is the wrong way, to kind of get too close? And that's the balance. And I think that's, uh, that's the unique challenge where, you know, I, myself, a few years ago, saw the challenge. 
 

And then, you know, that's, that's when we came in and we actually, uh, um, uh, identified a few places where, you know, uh, we potentially can make a difference by introducing a new, new way to address security, you know, not just trying to identify who's the bad guy and, and, and stop the bad guy at the door.  
 

Sean Martin: Yeah. 
 

And I, I'm, uh, in New York. I'm a couple of blocks away from a fairly, fairly, uh, well-known museum. And I spend a lot of time there. I'll go for an hour one day, just because I feel like getting filled up with, with goodness. And I look at the environment, because I'm going to connect it back to APIs, because back in the day when I was [00:11:00] testing software, you could easily define what the objectives of the application were, the paths to achieve that objective, what the transactions are, who's going, who's using it. 
 

And you could write stories and test cases, pretty finite, right? And then you could look for some of the edge cases, either through actual hands-on-keyboard or through, uh, software testing in the back. And so I look at the museum and I'm thinking, there's one or two entrances in, you kind of get funneled through the place where you're supposed to enter, but there, there might be other doors, like the VIP door or the accessible door, or, and then once you're in, you might not go through the main ticket checking, you might go through the store instead, right? 
 

So what's normally the exit, but they want you to go to the store. So they let you in the store, and there, there's some, some spots that are marked off with some, some ropes and things like that. I guess my point is, [00:12:00] uh, in a closed environment like the museum, you could even perhaps social engineer and move your way around the museum without ever paying. 
 

And similarly, with APIs, you can maneuver throughout the application and enter spots that you're not supposed to without paying, right? The challenge is, with the explosion of APIs, it's almost impossible to get a handle on all of the paths, all the entry points, all the checkpoints along the way. So what are you hearing from customers as they realize this is the case for them building their apps? 
 

 

Lebin Cheng: Yeah, so actually, that's a great point about, you know, the testing and then building things, right? Because, uh, you know, a very natural reaction to that is, well, if I have valuable things and I open them up, I want to kind of build checks and balances around them, right? And when I talk to a lot of customers who [00:13:00] expose APIs and are concerned about API security. 
 

The root cause for some of those challenges actually comes from a very interesting phenomenon, which I kind of want to stress with the museum analogy a little bit. It's not just, uh, you know, it actually may be easier if you put a piece into a museum purpose-built to display this piece, you know, maybe a Botticelli or the Statue of David, right? You know exactly what that Accademia, uh, environment is. But a lot of, uh, the, the customers I talk to. 
 

You know exactly where that academia, uh, environment is, but a lot of, uh, the, the customer I talk to. Using this analogy, it's like their, their, their API is not built on completely brand new applications. Actually, a lot of the API actually majority customer who actually become our customer, uh, because they have that challenge is they actually, the API was introduced when they, uh, embark on something called the digital, uh, digitization journey, or digital transformation or cloud migration.[00:14:00]  
 

So a lot of those applications, they were private applications at the beginning, you know, they were built internally for, you know, some internal CRM applications. And now, because they need to move to the cloud, they need to open up for partners, right? You know, everybody's doing the cloud thing or mobile access. 
 

They need to enable mobile applications. I imagine maybe that, you know, museum application I use, maybe it's the result of that, right? You have something in the back. So now imagine it was a private collection at the beginning, yeah? So all the checks and balances are just limited to maybe, you know, in my, in my room, you know, you, you have guards, and maybe only a few, you know, guests can come in, you know, who are trusted guests, right? 
 

You know, and look at this piece. And now I donate it to a museum, exposed to a completely new environment. Now, you know, the, all the checks and balances, all the, uh, uh, um, uh, security measures are inadequate, right? Because it's a completely new environment. You almost, almost like open your door up, you know, almost turn the private collection immediately into a museum, into, into a public museum. 
 

And that is the biggest challenge, because now you don't even know your application and how many of those exposures actually potentially can be dangerous. So always the first step, uh, uh, and that's one thing that we help our customers do, it's just like before you put a private piece into a public display, is discovery. 
 

You know, you actually have a survey of the land, you know, what is coming in, what's the entry point, and what kind of, you know, what we call API endpoints, what you expose. But that's just not enough. I think the analogy kind of stops there, because now, you know, with a museum piece, you know exactly where you are, but now you have to know the flow as well. 
 

Because you know what customers come in, what API calls tend to, tend to extract [00:16:00] sensitive data, right? Not all transactions are touching the sensitive data, right? So not, not all museum visits are touching the most valuable pieces. Maybe you concentrate there. So we first start with discovery, and then we actually collect real behavioral data on the applications. 
 

And then we go back and use that data. And a lot of the security testing folks now are interested in that, because their testing used to be based on, uh, static information, almost like, based on that private collection, what your house looks like, and trying to just see whether all the windows are locked. But now it's impossible to check that all the windows are locked. 
 

So now let's simulate. Let me simulate, you know, who can get closer. Like, like your experience, you know, maybe we can put a laser there, and when you get too close, it, it alarms you, right? So, so that is the thing that, uh, uh, is kind of the state of the art today, was, you know, trying to [00:17:00] address that problem, trying to kind of have a survey of the land, kind of like creating a kind of security camera or a CCTV system around your, around your application. 
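[Editor's note: the discovery step described above — surveying the land to inventory which API endpoints are actually exposed — can be sketched roughly as follows. This is a minimal illustration, not Imperva's implementation; the sample log entries and the rule of collapsing numeric IDs into a placeholder are simplifying assumptions.]

```python
import re
from collections import Counter

# Hypothetical sample of raw access-log entries: (method, path).
LOG = [
    ("GET", "/api/v1/artworks/102"),
    ("GET", "/api/v1/artworks/417"),
    ("GET", "/api/v1/artworks/417/location"),
    ("POST", "/api/v1/tickets"),
    ("GET", "/api/v1/artworks/88"),
]

def normalize(path: str) -> str:
    """Collapse purely numeric path segments into {id} so many
    concrete calls map onto one logical endpoint."""
    return re.sub(r"/\d+", "/{id}", path)

def discover_endpoints(log):
    """Build an inventory of logical endpoints with call counts."""
    return Counter((method, normalize(path)) for method, path in log)

inventory = discover_endpoints(LOG)
for (method, endpoint), n in sorted(inventory.items()):
    print(f"{method:4} {endpoint}  ({n} calls)")
```

Run against real traffic, an inventory like this is the "survey of the land": it surfaces every exposed endpoint, including ones the team had forgotten were public.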
 

Marco Ciappelli: So clearly, it's, I love the analogy again of turning something that you meant for private consumption into something that then becomes public, not only public, but extremely popular, where you're allowing so many people in there, and all of a sudden you're like, this wasn't, this wasn't supposed to be a museum, where, if we start it from scratch, an entire new building, you can somehow make it secure by design. 
 

So you guys go in there, you need to assess, like in real life, what are all these problems. But what I'm understanding here is that while you can now eventually secure [00:18:00] the consumers that come in and consume the art, then you have another level of problems that you need to look at, which is, how do you trust the pen tester, the people that come in and are accessing data that are not necessarily data that they should? 
 

Right. So explain a little bit more.  
 

Lebin Cheng: Yeah. So, uh, yeah, using that analogy, it's almost like somebody coming in to just, you know, in the past, pen testing is based on very static data, based on like, okay, your API exposes these, you know, uh, endpoints, and I just bombard these endpoints with known attacks. And maybe somebody comes in with a hammer and tries to smash, smash the door, and sees how many times you can hit the door and, and see if the door is breakable. 
 

So that is taken care of. But again, you know, using that analogy, the smart, you know, smarter or more sophisticated bad actor may not [00:19:00] just use a hammer to try to smash down the door and smash the, the, the enclosure of an art piece and grab it, right? That's a brute-force way. And there will be more sophisticated ways people try to touch it and try to test the system. 
 

And those are behaviors where there's almost an infinite amount of possibility. So the best way, actually, we, you know, as a runtime application, you know, almost as the CCTV, the monitor, and the actual guard walking around, is to actually provide that baseline of normal and abnormal behavior of the API, and that's what we can provide. 
 

And there's also a fine line here, right? In actual cases for API security, you cannot just say, okay, here's the footage, and run with it and simulate it, because the footage can contain private information, you know, can contain, you know, actually in real life, you know, your face and all. You don't want to give the raw footage to the [00:20:00] tester for fear of privacy concerns, and in, in the API world, the same thing, right? 
 

There are actually a couple of, uh, uh, kind of, a little bit like brute-force kind of, uh, API security testing, uh, proposals out there. They sound okay, but you know, you, you, you capture runtime data and just directly use it to simulate. It kind of works in a simulation, but you know, in reality, a lot of customers find it difficult to adopt. 
 

So one thing I found actually useful is, uh, to take that data, but digest that data, you know, because you actually do not need to simulate exactly somebody's face, right? You know, uh, because that's private. But you can simulate the behavior, so there's a fine line. So, uh, if you, uh, do it correctly, then your runtime system will collect the behavior data. 
 

But then you, um, hash it, or you kind of mask it in such a way that you will not mask [00:21:00] away the actual behavior, you know, the dangerous behavior or the normal behavior, so you can baseline and at the same time provide enough data to the tester, so that the tester can use that, almost like use a robot or use something to simulate it and test the system, then do it in a cyclical manner, right? 
 

Because people's behavior, access behavior, changes, and even the application itself changes. And when the application changes, you see the behavior, you see the interaction change, then you can, you know, keep feeding the data and feed it into a very natural CI/CD loop. Then we have a hope of something that can be, you know, uh, secure by design and, and, uh, safe from those sophisticated, you know, exploits that reach valuable data. 
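[Editor's note: the hash-or-mask idea described here — giving testers the behavioral shape of captured traffic without the raw private values — might look something like the sketch below. The field names and the masking scheme are illustrative assumptions, not the product's actual design.]

```python
import hashlib

# Assumed set of field names treated as sensitive in captured records.
SENSITIVE_FIELDS = {"account_number", "ssn", "email"}

def mask_value(value: str) -> str:
    """Deterministically pseudonymize a value: the same input always
    maps to the same token, so behavioral patterns (e.g. repeated
    access to one account) survive, but the raw value does not."""
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_record(record: dict) -> dict:
    """Mask only the sensitive fields; keep endpoint, status, etc."""
    return {
        k: mask_value(str(v)) if k in SENSITIVE_FIELDS else v
        for k, v in record.items()
    }

captured = [
    {"endpoint": "/accounts/balance", "account_number": "111-222", "status": 200},
    {"endpoint": "/accounts/balance", "account_number": "111-222", "status": 200},
    {"endpoint": "/accounts/balance", "account_number": "999-000", "status": 403},
]
masked = [mask_record(r) for r in captured]

# Same account still yields the same token, so a tester can replay the
# access pattern without ever seeing the real account number.
assert masked[0]["account_number"] == masked[1]["account_number"]
assert masked[0]["account_number"] != masked[2]["account_number"]
```

A production system would salt or key the hash to resist dictionary attacks on low-entropy values; the point here is only that deterministic masking preserves the behavioral baseline.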
 

Sean Martin: Can we, can we pull in a bit of real-world scenarios on, uh, open banking? It's one that we talked about previously, [00:22:00] and I think, uh, it might shed some light on this, because it's, it's fun to talk about the museum, but now let's talk, you touched on a few things here. The context changes how the app works; who's using it changes how the app works, perhaps how they're using it. 
 

It changes how the app works, where it's running; countries, different regulations might determine the flow of, of logic and whatnot. So, I don't know, a few examples of what happens today, how APIs and the growth and some of the challenges you talked about? Oh yeah, certainly. Yeah. So  
 

Lebin Cheng: I can use one real example. 
 

I, I will not name names, but, uh, you know, uh, there's, uh, a very active initiative called Open Banking APIs. Everybody, uh, probably heard about it, uh, or at least in the industry. And one of the things those initiatives enabled was that, uh, for the large financial institutions, right? You can think about a major financial institution, uh, banks, or the multinational banks, right? 
 

And they, [00:23:00] they have a lot of customers. And through these open banking initiatives, uh, some of the smaller, you know, you can think about them as boutique, you know, investment firms, right? Uh, or consultants, or somebody's advisor helping a smaller subset of those customers, and they can, through API integration, integrate, you know, extract information, for example, enable a transaction, right? 
 

Allow the smaller advisors to be able to manage the accounts on behalf of the customer, with the customer's permission. So there was such a requirement that, um, if a customer consents, right, authorizes their, uh, their own account to be, you know, accessed by the, uh, small advisors, the small financial institutions, then the bank is kind of obligated to open up API access for those customers. 
 

But then it puts a lot of burden on the [00:24:00] larger financial institutions, because they have thousands or millions of customers. And now let's say the API is exposed in such a way that, you know, uh, it doesn't keep the boundary closed enough. Then an advisor having, say, 10 customers can come in, but somehow breach the boundary and get to other people's data. 
 

So in the API security world, uh, that's called broken object level authorization, where you've broken the object-level boundary. And that's, that's a very common threat to, to, to, uh, to this API world. And it's very apparent, and, you know, it kind of worries the big financial institutions. You can think about it: if one rogue advisor or smaller institution takes customers' data or does some bad things and runs away, the customer, the, the end consumer, has no [00:25:00] recourse against that small guy, and oftentimes they come back and they look, they try to catch the bigger guy and say, hey, you know, how come your API, you know, exposed the data in the wrong way? 
 

So that is a real threat. And that's, that's actually, uh, that drives a lot of the demand from larger organizations. They're trying to protect themselves while they try to support the open banking initiative, and especially the open banking API initiative. And that actually is a, is a very, very, uh, real use case, where the banks want to actually track the behavior, track the behavior not only of the end consumer, but, once authorized, what is this, you know, smaller financial institution, what kind of data can they access, and what kind of data should they not have access to? 
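[Editor's note: the broken object level authorization (BOLA) failure described here — an advisor authorized for ten customers reaching across the boundary into other accounts — comes down to checking ownership of the specific object on every call, not just validating the caller's identity. A minimal sketch, with a hypothetical consent model and function names:]

```python
# Hypothetical map of advisor -> accounts the customers have consented to share.
CONSENTS = {
    "advisor_a": {"cust_1", "cust_2"},
    "advisor_b": {"cust_3"},
}

class Forbidden(Exception):
    """Raised when a caller requests an object outside their consent set."""

def get_account(advisor_id: str, customer_id: str) -> dict:
    """Object-level authorization: a valid advisor identity is not
    enough; the specific object requested must be in that advisor's
    consent set, or the call is rejected."""
    if customer_id not in CONSENTS.get(advisor_id, set()):
        raise Forbidden(f"{advisor_id} has no consent for {customer_id}")
    return {"customer": customer_id, "balance": 1000}  # placeholder data

# Authorized access works:
assert get_account("advisor_a", "cust_1")["customer"] == "cust_1"

# Crossing the boundary -- the BOLA case -- is rejected:
try:
    get_account("advisor_a", "cust_3")
except Forbidden as e:
    print("blocked:", e)
```

The vulnerable version of such an endpoint checks only that the advisor's token is valid and then trusts the `customer_id` parameter; the per-object check is exactly what runtime behavioral monitoring tries to verify is actually happening.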
 

Sean Martin: And of course, the, the organizations that are building the platform and the APIs, and then, and then hopefully the, the people building their apps using those APIs, do the same. But [00:26:00] are there well-known test methods for looking at whether the API is secure, whether the app that's using or presenting the API is, um, secure?  
 

Lebin Cheng: So from what I, uh, what I can see, unfortunately, it's not as streamlined, because a lot of the pen testing in the past, right? 
 

It's very manual-driven. And some of the ones that can be automated are very simplistic, right? Because sometimes you can just say, do a fuzzing test, and that can be fully automated. So there's this, uh, uh, contradiction: if you have, um, if you have something that is so simple that it can be automated, then it must be a brute-force attack. 
 

Then it's actually quite easy to fend off. At the same time, the sophisticated ones are usually very targeted, and very specific to the API, and very hard to automate. So those, uh, unfortunately, because it's hard to automate, [00:27:00] then sometimes it's costly and sometimes it's time-consuming, because oftentimes it's manual-driven. 
 

So that balance, when it's not struck, then oftentimes it, it gives, uh, some of those, uh, um, you know, customers or enterprises a false sense of security. You just run some basic fuzzing tests or some automated tests, or a little bit of, you know, kind of good-enough scanning, and you think, oh, my API is safe. 
 

But unfortunately, the most sophisticated, you know, broken object level authorization type of, uh, uh, uh, exploits sometimes can, can catch people, uh, by surprise. And that is when these types of new, new methods, uh, you know, uh, should be tried. And, and, you know, to be completely honest, this is because API application, uh, development itself is also picking up pace. 
 

You know, a lot of organizations now talk about a monthly release cycle, or sometimes some [00:28:00] more aggressive ones, like a biweekly or even daily release cycle. So you're also talking about a target that also changes all the time. And because of that, it calls for new methods and new automation to be tried, and, and I don't think there's a standardized one yet, or that, that's even, like, common best practice being followed at this time. 
 

Marco Ciappelli: so I'm going to go back on the museum because I can't visualize in this. 
 

Well, what you just said, and talking about the runtime. So if you do a scan and an assessment at a certain period of time, right, between the times that you do the assessment or the test, you don't know what happened. So I'm thinking, it's like, yeah, every night and every morning we have somebody walk around the museum and check that all the paintings are where they're supposed to be. 
 

But maybe they stole it three hours before. Exactly. The runtime automation is exactly that, the camera that you [00:29:00] talked about at the beginning, right? But you also, again, I go back to, you need to have access to everything that happened during that period of time. Is there an alternative to this, like in artificial intelligence, training the machine learning with artificial data? 
 

Can you eventually do something like that? Is that going to be as effective as real data? What's your thought on that?
 

Lebin Cheng: Yeah. Actually, I don't want to oversell AI, because sometimes customers tell me they're tired of people saying, hey, I use AI, therefore it's better.
 

Marco Ciappelli: But,  
 

Lebin Cheng: But there are some promising signs there, because a lot of this is behavioral.
 

A lot of this is not deterministic. And each API is different; just like art pieces, each one is different. That's why it requires a little bit of [00:30:00] adaptiveness. One thing that AI, or even advanced AI or machine learning, is good at is taking some general rule
 

and then adapting it to a specific application. I see some success there. One success I'm encouraged by is this: you collect the real-time data and you classify it, so you know what the data is, sometimes using machine learning or a statistical model to properly classify it, and then you properly baseline the user behavior.
 

The security terms are anomaly detection and baselining, and those are being used heavily. Once you establish that baseline model, the anomalies actually come out really quickly, without human intervention. And then you can look at the opposite.
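The baselining idea Cheng describes can be sketched with a simple statistical model. This is only an illustrative sketch using a z-score over per-client request rates, not Imperva's actual implementation; the data is synthetic.

```python
from statistics import mean, stdev

def build_baseline(requests_per_minute):
    """Learn 'normal' behavior from observed per-minute request counts."""
    return mean(requests_per_minute), stdev(requests_per_minute)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag an observation more than `threshold` std devs from the baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Historical per-minute request counts for one API client (synthetic data).
history = [48, 52, 50, 47, 51, 49, 53, 50, 48, 52]
baseline = build_baseline(history)

print(is_anomalous(51, baseline))   # → False: within the normal range
print(is_anomalous(400, baseline))  # → True: a sudden burst stands out
```

Real products use far richer features (endpoints hit, payload shapes, parameter values), but the principle is the same: once the baseline exists, deviations surface automatically, without human intervention.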
 

[00:31:00] If you can detect anomalies, then the next step, in terms of more effective testing, is to take that normalized data and use it to actually generate anomalies. That's almost better than somebody walking around, or even a passive camera: you have something that can simulate the behavior and suggest
 

new kinds of anomalies, and simulate them before they even happen. Then your system is kind of immune. It's almost like an immunization idea, making your security by design better: before your application's next revision is released, you simulate those anomalies against it.
 

And that's where statistical models and AI are sometimes being used.
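One way to picture this "immunization" step is to mutate recorded legitimate traffic into probes for the broken object level authorization pattern mentioned earlier, and replay them against a staging build before release. This is a hypothetical sketch; the request format and field names are assumptions, not any vendor's actual tooling.

```python
import itertools

def generate_bola_probes(recorded_requests):
    """From recorded legitimate calls, synthesize probes pairing one user's
    session with another user's resource -- the classic broken object level
    authorization (BOLA) pattern. In a pre-release test, any probe the API
    answers with 200 instead of 403/404 points at a missing access check."""
    probes = []
    for a, b in itertools.permutations(recorded_requests, 2):
        if a["user"] != b["user"]:
            probes.append({"user": a["user"],    # requester's session
                           "path": b["path"],    # someone else's resource
                           "expect": "deny"})    # the correct API behavior
    return probes

# Recorded (synthetic) legitimate traffic: each user accessing their own order.
recorded = [
    {"user": "alice", "path": "/orders/1001"},
    {"user": "bob",   "path": "/orders/2002"},
]

for probe in generate_bola_probes(recorded):
    print(probe)
# Two cross-user probes: alice → /orders/2002 and bob → /orders/1001
```

The point is the direction of travel: instead of waiting for an attacker to produce the anomaly, the baseline of normal behavior is used to manufacture it first.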
 

Sean Martin: All I know is, every time we get to chat with somebody from Imperva, I'm always impressed by the level of [00:32:00] innovative thought and action. And I know you've also been taking some of the things you're doing, working with others in the industry and directly with a lot of customers, to find a new way to
 

look at this problem: with the simulation, with the hashing and the abstraction of private data, so you can still identify the behaviors without putting anybody's privacy or sensitive data at risk, and ultimately give the customer a view into what's going on, a view into how to build better products, secure by design, based on runtime, real-world examples of behaviors and activities taking place across all the organizations you work with, right?
 

Not just one in particular. I'm always impressed. Is there anything you can share with us about the continued innovations you're working on, [00:33:00] and how customers, or even non-customer prospects, could engage with you and help shape this future with you?
 

I think you're driving a lot of things forward, but I think continuing engagement with the community of customers would be fantastic as well.  
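The hashing and abstraction of private data Sean mentions can be illustrated with keyed hashing: sensitive values are replaced by stable, irreversible tokens, so the same user still correlates across requests while the raw identifier never leaves the customer. The field names and the choice of HMAC-SHA-256 here are illustrative assumptions, not a description of Imperva's actual pipeline.

```python
import hashlib
import hmac

SECRET_KEY = b"per-customer-secret"  # hypothetical per-tenant key

def tokenize(value, key=SECRET_KEY):
    """Replace a sensitive value with a stable, irreversible token.
    The same input always yields the same token, so behavioral analysis
    can still correlate activity without seeing the raw data."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

def redact_event(event, sensitive_fields=("email", "ssn")):
    """Return a copy of an API event with sensitive fields tokenized."""
    return {k: tokenize(v) if k in sensitive_fields else v
            for k, v in event.items()}

event = {"email": "alice@example.com", "path": "/orders/1001",
         "ssn": "123-45-6789"}
safe = redact_event(event)

print(safe["path"])                     # unchanged: /orders/1001
print(safe["email"] != event["email"])  # → True: raw value is gone
print(tokenize("alice@example.com") == safe["email"])  # → True: stable token
```

Because the token is deterministic per key, behavioral baselines still work on redacted data; because the key stays with the customer, the token cannot be reversed by whoever analyzes it.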
 

Lebin Cheng: Yeah, to conclude, I just want to really quickly share a trend I'm pleased to see. In the past, developers developed something, somebody tested it and threw it over the wall, and then the security guy was responsible for locking the doors, that kind of separation of duties.
 

Sometimes developers never talked to the security guys. A few years ago, if you talked to a CISO about DevSecOps, about developers working hand in hand with security and having security by design, a lot of the CISOs we talked to would say, oh, that's wishful thinking, we're not getting there.
 

But actually, it's being very actively pursued nowadays. I'm seeing it in more and more organizations. [00:34:00] This kind of automated testing is actually one byproduct of that: developers are now more security-minded, and security people are willing to work closely with developers in the development cycle,
 

building in security and development at the same time. So the DevSecOps trend is definitely alive and kicking, and there's an increasing appreciation for organizations working in that manner. I see that, and at Imperva we continue to provide automation tools and best practices to help our customers on their journey to achieve
 

a very smooth DevSecOps practice, so that they can secure their very dynamic applications.
 

Marco Ciappelli: Well, we are very proud to have been part of the storytelling of Imperva for years at this point. And I agree with Sean: you guys are always thinking ahead, thinking innovation, and not afraid to ask [00:35:00] "what if," unlike a lot of companies out there that will say, hey, we've got the solution.
 

Everything is cool, we don't need to think anymore, we've solved the problem. But in cybersecurity, if we've learned anything, it's that the problems keep coming, because as we evolve, so do the bad guys. So that attitude is very, very important in this industry, I think. Sometimes it just blows my mind.
 

I learn so much when I talk to people like you.
 

Lebin Cheng: Thank you. Yeah, glad to be here. I enjoyed our conversation very much.
 

Sean Martin: Always a good time chatting, and a pleasure to have you on, Lebin. For those listening and watching, of course, we'll link to some resources to learn more about Imperva, the full application security product line, and the API security products within it,
 

and everything else Imperva does that rounds out the full [00:36:00] picture of protecting data everywhere it travels. I'm excited to keep chatting with you and the rest of the team to learn more, and hopefully people will stay tuned, follow us, connect with you, and have that conversation too,
 

to understand how they can leverage real-time data in a safe way to better design apps through a secure-by-design DevSecOps program. So thanks, Lebin.
 

Lebin Cheng: Thank you.  
 

Marco Ciappelli: Lebin, I'll see you in a museum in Florence one of these days. Oh yeah, that would be great. We'll walk around together.
 

Sean Martin: You should certainly do that.
 

You're welcome in New York as well. I'll take you through; I know a secret path there.
 

Lebin Cheng: Uh oh, sudden.  
 

Sean Martin: Alright, thanks everybody. See you on the next one.