ITSPmagazine Podcasts

Leveraging AI for Effective Healthcare Solutions | A Brand Story Conversation From HITRUST Collaborate 2024 | A HITRUST Story with Walter Haydock and Steve Dufour

Episode Summary

In this new HITRUST Brand Story we explore advancements in healthcare through AI technology. Broadcasting live from HITRUST Collaborate 2024, Sean Martin leads a conversation with Walter Haydock of StackAware and Steve Dufour of Embold Health.

Episode Notes

The Emergence of Innovative Partnerships: As AI becomes increasingly integral across industries, healthcare is at the forefront of adopting these technologies to improve patient outcomes and streamline services. Sean Martin emphasizes the collaboration between StackAware and Embold Health, setting the stage for a discussion on how they leverage HITRUST to enhance healthcare solutions.

A Look into StackAware and Embold Health: Walter Haydock, founder and CEO of StackAware, shares the company's mission to support AI-driven enterprises in measuring and managing cybersecurity compliance and privacy risks. Meanwhile, Steve Dufour, Chief Security and Privacy Officer of Embold Health, describes their initiative to assess physician performance, guiding patients toward top-performing providers.

Integrating AI Responsibly: A key theme throughout the conversation is the responsible integration of generative AI into healthcare. Steve Dufour details how Embold Health developed a virtual assistant using Azure OpenAI, ensuring users receive informed healthcare recommendations without long-term storage of sensitive data.

Assessment Through Rigorous Standards: Haydock and Dufour also highlight the importance of ensuring data privacy and compliance with security standards, from conducting penetration tests to implementing HITRUST assessments. Their approach underscores the need to prioritize security throughout product development, rather than as an afterthought.

Navigating Risk and Compliance: The conversation touches on risk management and compliance, with both speakers emphasizing the importance of aligning AI initiatives with business objectives and risk tolerance. A strong risk assessment framework is essential for maintaining trust and security in AI-enabled applications.

Conclusion: This in-depth discussion not only outlines a responsible approach to incorporating AI into healthcare but also showcases the power of collaboration in driving innovation. Sean Martin concludes with a call to embrace secure, impactful technologies that enhance healthcare services and improve outcomes.

Learn more about HITRUST: https://itspm.ag/itsphitweb

Note: This story contains promotional content. Learn more.

Guests: 

Walter Haydock, Founder and CEO, StackAware

On LinkedIn | https://www.linkedin.com/in/walter-haydock/

Steve Dufour, Chief Security & Privacy Officer, Embold Health

On LinkedIn | https://www.linkedin.com/in/swdufour/

Resources

Learn more and catch more stories from HITRUST: https://www.itspmagazine.com/directory/hitrust

View all of our HITRUST Collaborate 2024 coverage: https://www.itspmagazine.com/hitrust-collaborate-2024-information-risk-management-and-compliance-event-coverage-frisco-texas

Are you interested in telling your story?
https://www.itspmagazine.com/telling-your-story

Episode Transcription

Leveraging AI for Effective Healthcare Solutions | A Brand Story Conversation From HITRUST Collaborate 2024 | A HITRUST Story with Walter Haydock and Steve Dufour

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Sean Martin: Here we are. You're very welcome to a new brand story here on ITSP Magazine. This is Sean Martin, host of the Redefining Cybersecurity podcast. I'm coming to you live from HITRUST Collaborate 2024. An amazing week of great conversations and collaboration, of course. A ton of innovation, ton of partnerships, and we're going to talk about one of those partnerships today. 
 

And I'm thrilled to have Walter Haydock from StackAware and Steve Dufour from Embold Health, and uh, we're going to talk about how they came together and leveraged HITRUST to deliver great outcomes for the healthcare community. So before we get into what you actually did and how you did it and why it matters, a few words about, uh, what StackAware does and then your role in the company as well. 
 

[00:00:47] Walter Haydock: Sure. So StackAware helps AI powered companies measure and manage their cybersecurity compliance and privacy risk. I'm the founder, CEO, head of engineering. Janitor for Stack Aware. We are, uh, all the above. Yes, we are. We currently are a one employee show. You're looking at 'em. And we, we have contractors that we work with. 
 

Some advisors, uh, external the company, but yeah, that's us. And, and you use ai? We do. We do. We, we are heavy users of AI and we are one of the first ISO 40 2001 certified companies in the world. Look at that. So our AI management system has been formerly audited and certified.  
 

[00:01:24] Sean Martin: Fantastic. Fantastic. Steven. 
 

Steve Dufour: Yeah, I'm Stephen Dufour. I'm the Chief Security and Privacy Officer of EmEmbold Health. Um, EmEmbold Health rates physician performance. We rate providers on the appropriateness and effectiveness of care. The idea is if we can get, um, patients to higher performing doctors, they will have better outcomes. Uh, and the idea is to save money, save time, and save, um, And to stop, uh, needless procedures. 
 

[00:01:58] Sean Martin: And remissions and all the other crazy stuff that comes along with it, right? That we all pay for. Exactly. The other thing we pay for indirectly is getting compliant and raising the security posture and all this other stuff that's necessary because as a, as a patient, we want our information protected. 
 

We want, but we also want to get the care that you're talking about. Um, in your presentation. It struck me that you led off with what are the outcomes we're trying to achieve with this partnership. So can you, can you start us off there? What, I think you touched on a little bit here, but, um, if you can maybe expand on what the objective was by coming together and leveraging HITRUST to deliver this solution. 
 

[00:02:43] Steve Dufour: Sure. Um, at the very beginning, um, and it's mission to improve health care outcomes, right? And we wanted to leverage. Generative AI to help in that mission, um, and we wanted to do it responsibly. So part of the, um, business.  
 

[00:03:03] Sean Martin: Can I pause you there? Sure. Because there's, we want to use technology to do something or there's, we need to do something and the best way is to use technology. 
 

Can you describe that thought process?  
 

[00:03:18] Steve Dufour: As far as like the business problems versus objectives? Okay. Yeah. So the original business problem we had, we solved it through technology and data. So we developed a methodology for rating providers. We developed a product called Provider Guide to allow people to log in and find highly rated positions. 
 

But we also realized there was another problem to be solved. Our end users needed to know what type of provider that they need. So. We wondered if we could use generative AI to help them find providers to solve that business problem.  
 

[00:03:58] Sean Martin: Yeah, cause there's tons of different practices, right? Right. Which one's going to help solve and maybe you need more than one perhaps. 
 

Right. And they all have very funny names. That's right. That's right. So there's, on my show, I have this vision of 
 

What we want to do, we can actually leverage security to achieve it, versus not understanding security and looking at it only from a risk perspective, we might not ever achieve the outcome we want because we're too afraid. But it sounds like in your case, you saw the desired outcome. You found a way to not just leverage technology, but in a way that's secure, which will come to you as Walter here. 
 

Um, to embrace it in a way that could get you to that outcome in a rapid time. So I don't know, can you talk to your engagement with Enbold and how that process came together for you?  
 

[00:04:59] Walter Haydock: Sure. So Steve as Chief Security and Privacy Officer has an incredibly wide remit. He's responsible for everything, security, privacy at Embold. 
 

And, you know, for, for a small team, that's a huge, huge domain of things that he needs to tackle. He's responsible for HITRUST certification. He's responsible for relationships with clients. Um, so what Stackware does is we are specialists in AI powered technologies. So when they were developing the Embold virtual assistant, which is their, their AI product. 
 

Steve got in touch with me to discuss doing a risk assessment and penetration test of their application. So what we did was do a top to bottom review of its security and make sure that it aligned with their business objectives and risk appetite.  
 

[00:05:53] Sean Martin: Steve, can you, can you paint a picture of, I don't need the, the sausage making description, but kind of a view of, of this AI enabled app? 
 

Um, did you? You can roll your own LLMs, uh, what's the data source, uh, that maybe a picture there so folks can kind of get a grasp of what you actually delivered. So maybe somebody who's looking at this as well can line up with what you're looking at.  
 

[00:06:18] Steve Dufour: Sure. Um, our data source is our data, our proprietary data. 
 

So we subscribe to a national data set that allows us to use our proprietary methods to rate all of those physicians. Thanks. I was going to say on top of that, but it really it's a separate thing. This large language model using Azure OpenAI, our product, the Embold virtual assistant. It's an interactive tool that an end user can use. 
 

Like, so for example, Hey, my knee hurts or something like that. Then we'll ask it questions until it gets enough information. to recommend a provider. And that's really important, uh, wording. Recommend. Like you might want to go see this particular, uh, physician based on the symptoms that, um, that we just, just talked about. 
 

[00:07:11] Sean Martin: Does it, does it keep the history of the conversation? I'm assuming it's conversation like with the, with the app. Does it keep a history? So a collection of queries might change the recommendation path. Based on, oh, there's a new set of symptoms, this actually leads us to a different, different area of treatment or  
 

[00:07:32] Steve Dufour: Yeah, one of the things of concern is a history of conversations, as well as interactions within our tool. 
 

Um, we chose to use Azure OpenAI and a particular model that doesn't learn. And it doesn't store that information long term. So, protecting our patient's confidentiality as far as like the conversations go, Very, very important in the development of this product. So our tool does not learn.  
 

[00:08:00] Sean Martin: And so was that, to me, that's a product requirement. 
 

Yes. Yes. So was that something you defined or did StackAware help kind of frame the scope of what's possible? 
 

[00:08:13] Steve Dufour: In the beginning, we had a discussion and we ended up talking about risk. Yes. Yes. Right? Like, is it worth it for these particular, um, models that can learn versus can't, uh, can learn? And what does that look like? 
 

What kind of third party dependency does that create?  
 

[00:08:36] Sean Martin: Anything to add there?  
 

[00:08:37] Walter Haydock: Yeah, I think the approach that Embold takes for, from a data security perspective, makes a lot of sense. They, they don't train on the inputs from the patients who are seeking providers, while at the same time using the Azure OpenAI service, they can keep the context window open for the conversation. 
 

So within a conversation, the AI model can, can have short term memory like Steve mentioned, you know, but not long term. Essentially, you know, after, well, Azure has a 30 day retention period after that, it's gone, completely wiped. And, uh, I think, you know, I agree from a, from a security and compliance perspective, that's the way to go. 
 

[00:09:17] Steve Dufour: And it's really about managing your risk and what you want to deal with. So when companies gather information and data that don't meet or are justified by business requirements, it makes no sense. It puts the company at risk.  
 

[00:09:33] Sean Martin: So how and where did the risk conversation come in? It sounded like early on, which is great. 
 

Um, did that continue throughout the development and delivery process? And your role, um, at the beginning and throughout the assessment as well.  
 

[00:09:48] Walter Haydock: Yeah, so we took a two level approach. One, we did a comprehensive AI risk assessment for the entire company, looking at policies, procedures, vendors. I mean, today, every vendor is trying to integrate some sort of AI technology into its products, which can introduce risks. 
 

Productivity gains, but also risks if they're not managed appropriately. So we did that high level review across the entire company to make sure that Embold's stance with respect to AI matched those business objectives and was within their risk appetite. And then more specifically for Ava itself, we conducted a penetration test against it. 
 

Attempting to inject malicious prompts, attempting to misuse it from a, from a requirements perspective. So, you know, making it do things it wasn't designed to do, evaluating some of the functionality. For example, um, there are off ramps built into the system for folks who may be facing a medical emergency. 
 

You know, that's not the type of thing you want to set up an appointment for. If you have numbness in your left arm, shortness of breath, blurred vision, you know, that's, that's an emergency. So the system is designed to end the conversation and say, this is an emergency. Call 911. Don't, don't try to set up an appointment with your doctor. 
 

Uh, if, if that were to happen. So we evaluated those situations, um, you know, from kind of a, a malicious attack perspective, but also from a, you know, innocent, uh, user who simply didn't understand the gravity of what he or she was facing at the time. And then we also did a, uh, application penetration test of the underlying infrastructure to make sure that there were. 
 

No vulnerabilities that could be exploited easily there. And 
 

[00:11:30] Sean Martin: so I want to get to the uh, the reporting and the demonstration of protection and security that you did this work in a second. But I want to stick with the product requirements for a second because every time I have a conversation around AI and we're going to build something using it, I used to be a product manager, and I could fairly well define the scope of what I wanted my thing to do. 
 

And if you just blindly throw AI on top of that, it's almost like an endless set of scenarios, right, user scenarios. So how did you come up with just even that example of, depending on the symptoms, you need to off ramp them and get them, get them some help immediately, right? Are those scenarios that you had in mind, and how do you contain them? 
 

The now AI enabled app to not stray from scenarios that you don't want to exist, or I don't know if I'm making sense or not, but yeah, to me it seems super complex.  
 

[00:12:31] Steve Dufour: You know, like with any product, despite being artificial intelligence or not, you have to define what it is there to do. So first and foremost, you have to describe the problem that it's trying to solve. 
 

Second, Okay, we're going to build a product and it's going, it's going to be designed to do, let's say three things, and then you should be able to measure if it's doing those three things. And then an outside of those three things should probably stop, right? It's going beyond the scope of what it was designed for. 
 

So in the, for the case of the mobile virtual assistant, um, we have, uh, two scenarios and only two scenarios. Anything outside of that. Um, the large language model on the backend is designed to redirect the user back to its intended purpose. The two scenarios that we, um, grade ourselves on are emergency versus specialty. 
 

So the tool is designed to recognize emergency scenarios, right? My left arm is numb. My vision is blurred, right? And it will prompt you. to seek emergency help and end the conversation. The next are specialty based. So if you describe headaches, blurred vision, but it doesn't recognize it as a scenario, it might recommend that you go to a neurologist. 
 

And so based, based on those two specific scenarios, one emergency recognition, and two specialty recognition, we can actually grade the effectiveness of the tool. And because we did that, we can also improve over time.  
 

[00:14:08] Sean Martin: So how much of this did you test and validate internally? How much of this, um, clearly you did some of the risk stuff as well, but, uh, how much of this were you involved in, in kind of the user scenario? 
 

[00:14:21] Walter Haydock: So what we did is Steve and the engineering team gave us a detailed set of their business requirements and that's a hard requirement from, from Stackware's perspective to do any sort of engagement, because if we don't know what the app should look like, then it's going to be very difficult to test it. 
 

So he gave us a very clear set of requirements from a business perspective as to what the outcome should look like. And then we ran it through scenarios attempting to push and break the limits of the system given these constraints.  
 

[00:14:55] Sean Martin: And talk to me about HITRUST's role in all of this. From your perspective, I don't know how much. 
 

I trust you have involved in your organization, but maybe you want to describe that too, but specifically around the app and what you guys worked on together.  
 

[00:15:11] Steve Dufour: Sure. And Embold Health is a HITRUST certified company. Well, we do the risk based assessment, so we have much larger assessment one year, and then we have the interim assessment the next year. 
 

So, uh, this is our fifth year. And because of the nature of our new product, In the announcement that came last year, which is the AI Risk Management Framework from HITRUST, we wanted to include this product in our scope. So I'm not sure when, when this is going to air, but on Monday we submitted and Wednesday, um, they, they approved it so they can start going through the, uh, certification process. 
 

So we might be one of the first, uh, AI companies.  
 

[00:15:49] Sean Martin: Right,  
 

[00:15:52] Steve Dufour: right. Exactly. With that in our scope, uh, which is a big deal. So Last year when they announced all the risk based items for managing AI risk, we had a very long list to go through. So we added all of those risk assessment requirements to our risk assessment and started to take those requirements for the in built virtual assessment. 
 

And we used those, um, to have conversations. Like internally, um, within engineering as well as, um, the executive team.  
 

[00:16:30] Sean Martin: And what are those? I'm always interested in the conversations with engineering. How did, how did the HITRUST risk assessment help you have that conversation? Did it include the language to help guide them on how to architect and design and code and whatnot? 
 

[00:16:49] Steve Dufour: Um, as from a technical perspective. Not as much, but, um, from a risk perspective, yes, because some of the HITRUST requirements are, um, identify the internal and external, um, just forgot the word, so I'm going to start again, uh, internal and external context. There we go. All right. Um, from a technical perspective, not as much, but from a risk perspective, absolutely. 
 

Where one of the requirements is the identification of internal and external context For your AI system, right? Um, as well as some requirements for bias testing like the bias testing, um Really boils down to uh underrepresented groups, right? We're talking about age ranges like from very high like elderly Where they're very young or African American or any of our underrepresented groups and I gave them the criteria to test for. 
 

Interesting.  
 

[00:17:56] Sean Martin: And, um, let's talk about the ultimate outcome here. And I don't know, you're involved in some of the conversations, but getting buy off from executive. Thank you. Leadership, and perhaps even the board to make an investment to do security by design from the beginning and to make the investment to know you want to achieve the certification and demonstrate with reporting that this is part of our scope now. 
 

Difficult conversation in a lot of cases, right? To make that investment in money, time, people, partners. Um, What was that conversation like internally? And how did you help perhaps, um, make that case?  
 

[00:18:44] Steve Dufour: And so, so internally it was very easy. Um, yeah, I didn't have like a doomsday, um, story or anything like that. 
 

It's easy.  
 

[00:18:54] Sean Martin: It sounds like you have a mindset and a culture of doing this. Yeah, absolutely.  
 

[00:18:59] Steve Dufour: So Embold Health is a relatively newer company. They've been around, I want to believe like six years or so. Um, the executives are open to conversations about risk and how to mitigate it, right? And it's pretty easy whenever you classify something as, let's say, a 200, 000 risk, and then you lay out the plan to reduce that risk by half. 
 

It's like a very easy conversation to have whenever we're talking about dollars. And also, I take the methodology, I don't present risk to the executive team unless I have a plan to remediate it. You know, then they can, they can potentially approve or ask questions or, um, for the, for the most part, it's just informational, um, me conveying information to the executive team. 
 

[00:19:44] Sean Martin: Common story for you, uh, working with Enbold or is, do they differ, uh, kind of the cultures and whatnot?  
 

[00:19:51] Walter Haydock: I'd say uncommon, uncommon, but good, but uncommon in a good way because something that Steve is very good at is mapping. security projects and outcomes to business outcomes. I've literally seen him do the mapping. 
 

And the risk quantification piece comes into play because if you're talking to a CFO, a CEO, that person understandably is concerned about the bottom line. And that makes sense. You know, that's, that's his or her job to optimize for that. And risk management plays into that calculation because. Obviously, these people understand that, you know, there can be damage from a customer trust perspective, a regulatory impact. 
 

Uh, you know, even if you're completely heartless and don't care about the individuals whose data might be impacted, there's still a business impact to be had. So, um, you know, that's of course not the case for Enbold, but there is a way to express the risk and the risk mitigation achieved quantitatively, which I think is very important. 
 

And then also something that Steve said in expressing You know, talking to executives about risk and then presenting options. But in my opinion, and I think Steve, Steve can weigh in here, I think those executives who own, uh, you know, a business unit or, or, or a, a profit and loss statement, those are the people who, who should be making the calls about risk management, not security, uh, and compliance cooks, because they have, you know, they have expertise. 
 

They are advisors and can make recommendations about the most cost effective way to tackle a risk. But at the end of the day, they're specialists in a niche, whereas CEO has a much broader remit worried about cost, worried about, you know, dealing with clients, partners, the public, things like that. So those people are best equipped to make risk decisions, in my opinion, with the support of experts who can quantify and explain those risks. 
 

[00:21:50] Sean Martin: You want to weigh in on it? I agree. That's a very good explanation. All right. As we wrap here, I want to speak to, uh, directly to the folks watching and listening. Um, perhaps we'll start with you, Walter. Organizations looking at AI, um, maybe a, either a piece of advice you've had, you can share based on conversations you've had or a lesson learned from, from this experience for a good way to take that first step. 
 

[00:22:22] Walter Haydock: The basics apply. whether you're using AI or not. And if you have a firm foundation in terms of data governance, classification, transparency, you know, vulnerability disclosure programs, things like that, that are not even, they don't require fancy tools. They don't require a lot of real expertise per se. 
 

They just require discipline and, um, follow through and commitment from business leaders. If you get those things right, then the AI governance piece comes much more easily. So, bottom line, focus on the basics, and when you move into deploying AI systems, the security compliance privacy picture will be much easier. 
 

[00:23:11] Sean Martin: I love it. I want to be a little more pointed with you, Steve. We're focused on the AI security cert. And your experience working with that and with the HITRUST team, you described a two day turnaround on approval to begin the formal process. Um, having worked with HITRUST and getting a certification for the company at large, what was the experience like with the AI cert? 
 

Did it, like, kind of fit in well? Were there some disconnects? How does that look?  
 

[00:23:44] Steve Dufour: So, are we talking about the new AI certification they just, um So, uh, in Embold Health, that certification is different than the one that they started doing last year. So the AI Risk Management Framework that HITRUST released last year, that's what we're doing there. 
 

I'm excited about the HITRUST. Because we haven't started the CERT yet. Correct. Okay. Sorry, sorry, sorry. Yep. Um, I'm excited about the certification because it does present a very specific opportunity to get our product, uh, certified in the context of our artificial intelligence. That's pretty cool. And if, um, HITRUST and Microsoft partner together, then we could potentially use, uh, that certification, uh, to help our, our customers. 
 

So then the risk management piece then, how was that experience? Microsoft Mechanics www. microsoft. com It was good. It was very clearly defined, um, from the very beginning, uh, the wording such as the internal and external context, the individuals, um, society, the business and the groups, uh, that are at risk for artificial intelligence bias. 
 

Very clear cut, very clear path forward and really helped, uh, in Embold Health improve our artificial intelligence product.  
 

[00:25:05] Sean Martin: So, safe to say you got the outcome you wanted.  
 

[00:25:07] Steve Dufour: Yes.  
 

[00:25:08] Sean Martin: With the app, and the two of you, along with HITRUST, achieved that outcome together.  
 

[00:25:13] Steve Dufour: Yes.  
 

[00:25:14] Sean Martin: I love it. Well, thank you both for joining me and sharing this story. 
 

I, uh, really appreciate it. I'd love to hear how technology can be used in a safe way to deliver better health. I mean, that's ultimately what we're all trying to do here, right, for our customers. So, thank you everybody for listening and watching, and, uh, good luck. Please connect with these two gentlemen and their organizations and stay tuned for more here on ITSPmagazine. 
 

Thank you.