This episode explores the critical role of human factors in secure software development, featuring Dr. Kelsey Fulton’s insights on integrating security into the development lifecycle through collaboration, education, and thoughtful tool design. Discover how developers can overcome common challenges and shift towards a “security by design” mindset to build safer, more resilient systems.
The latest episode of Redefining CyberSecurity on ITSPmagazine featured a thought-provoking discussion about integrating human factors into secure software development. Host Sean Martin was joined by Dr. Kelsey Fulton, Assistant Professor at the Colorado School of Mines, and Julie Haney, a computer scientist at the National Institute of Standards and Technology. The conversation explored how human-centered approaches can strengthen secure software practices and address challenges in the development process.
A Human-Centered Approach to Security
Dr. Fulton shared how her research focuses on the human factors that impact secure software development. Her journey began during her graduate studies at the University of Maryland, where she was introduced to the intersection of human behavior and security in a course that sparked her interest. Her projects, such as investigating the transition from C to Rust programming languages, underscore the complexity of embedding security into the software development lifecycle.
The Current State of Secure Development
One key takeaway from the discussion was the tension between functionality and security in software development. Developers often prioritize getting a product to market quickly, leading to decisions that sideline security considerations. Dr. Fulton noted that while developers typically have good intentions, they often lack the resources, tools, and organizational support necessary to incorporate security effectively.
She highlighted the need for a “security by design” approach, which integrates security practices from the earliest stages of development. Embedding security specialists within development teams can create a cultural shift where security becomes a shared responsibility rather than an afterthought.
Challenges in Adoption and Education
Dr. Fulton’s research reveals significant obstacles to adopting secure practices, including the complexity of tools and the lack of comprehensive education for developers. Even advanced tools like static analyzers and fuzzers are underutilized. A major barrier is developers’ perception that security is not their responsibility, compounded by tight deadlines and organizational pressures.
Additionally, her research into Rust adoption at companies illuminated technical and organizational challenges. Resistance often stems from the cost and complexity of transitioning existing systems, despite Rust’s promise of enhanced security and memory safety.
The Future of Human-Centered Security
Looking ahead, Dr. Fulton emphasized the importance of addressing how developers trust and interact with tools like large language models (LLMs) for code generation. Her team is exploring ways to enhance these tools, ensuring they provide secure code suggestions and help developers recognize vulnerabilities.
The episode concluded with a call to action for organizations to support research in this area and cultivate a security-first culture. Dr. Fulton underscored the potential of collaborative efforts between researchers, developers, and companies to improve security outcomes.
By focusing on human factors and fostering supportive environments, organizations can significantly advance secure software development practices.
____________________________
Guests:
Dr. Kelsey Fulton, Assistant Professor of Computer Science at the Colorado School of Mines
Website | https://cs.mines.edu/project/fulton-kelsey/
Julie Haney, Computer scientist and Human-Centered Cybersecurity Program Lead, National Institute of Standards and Technology [@NISTcyber]
On LinkedIn | https://www.linkedin.com/in/julie-haney-037449119/
____________________________
Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/sean-martin
____________________________
View This Show's Sponsors
Imperva | https://itspm.ag/imperva277117988
LevelBlue | https://itspm.ag/levelblue266f6c
ThreatLocker | https://itspm.ag/threatlocker-r974
___________________________
Watch this and other videos on ITSPmagazine's YouTube Channel
Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:
📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
ITSPmagazine YouTube Channel:
📺 https://www.youtube.com/@itspmagazine
Be sure to share and subscribe!
___________________________
Resources
Kelsey Fulton Biography: https://kfulton121.github.io/
___________________________
To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-cybersecurity-podcast
Are you interested in sponsoring this show with an ad placement in the podcast?
Learn More 👉 https://itspm.ag/podadplc
Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.
_________________________________________
Sean Martin: [00:00:00] Here we are. You're very welcome to a new episode of Redefining CyberSecurity here on ITSPmagazine. I'm Sean Martin, your host. And as you know, I get to talk to all kinds of cool people about cool topics designed to help security teams and leaders build better programs and practices and teams that, uh, that support the business in generating and protecting revenue and, and customer information, and everything else that goes into running a business, and doing that as a forethought, not as an afterthought.
And I'm very thrilled to continue this series, the human centered cybersecurity series with my good friend, Julie Haney. Julie, good to see you.
Julie Haney: Good to see you, Sean.
Sean Martin: Yes, we, we get to, uh, dig a little deeper into one specific area around human uh, human-centric research and how, how that work impacts programs and the technologies and the teams that, uh, are responsible for all this stuff.
So it's great to have you on. You brought another [00:01:00] special guest today.
Julie Haney: I did, another, another research friend. Um, so we're very excited to talk today with Dr. Kelsey Fulton, who is an assistant professor in the computer science department at the Colorado School of Mines, which I'm very jealous about, because Kelsey gets to live in the beautiful Rocky Mountains.
Although she is reporting quite a bit of snow falling right now. So maybe I'm not quite as jealous at this moment. Um,
Sean Martin: It is beautiful, the Golden area. The Golden area is just spectacular. Absolutely.
Julie Haney: Um, but Kelsey's research centers on the human factors that impact secure software development. Very important topic. Um, so welcome to the podcast, Kelsey.
So to start, how did you become interested in researching secure software [00:02:00] development in the first place?
Kelsey Fulton: Yeah, so, um, I owe my entire interest in this, uh, to the, um, human factors in security and privacy course that my advisor actually taught during my graduate studies. I knew I liked security when I came out of undergrad, the kind of puzzly, problem-solving nature of it. But I was not even aware that you could study people and security at the same time.
And so when I took the course at University of Maryland with my, with my advisor, she pitched a bunch of really cool projects, and one of them was around looking at, um, this transition from C to Rust. And I was like, that sounds super cool. I love the C programming language. Um, at the time I had done some operating system development, so I had spent some substantial time in C, uh, and I was very much a C believer and a Rust non-believer. Um, and so I took on that project almost out of spite, in a way, to prove that I was correct. [00:03:00] Um, and I've been converted. I am a Rust believer. Um, but, uh, I, I started to realize that it's really cool to be able to not only do the fun security puzzle part, but actually be able to think about the way people think about those things.
Um, especially as it pertains to software development, where you have a lot of moving factors, and it's not just about how good someone is at writing code or thinking about security, but there's a lot of other factors at play that make this a very hard problem to solve.
Sean Martin: So in one part of my role I was responsible for what we now affectionately refer to as AppSec, but basically I was, I was responsible for the quality of the outgoing product, but I had to [00:04:00] look at the code and the inputs and the outputs and the APIs and all the functions and things like that.
And we had different development models, a lot of different technologies. This is 20, 30 years ago now, 20 years, at least 25 years ago. Um, so you talk about C and Rust, and there are a lot, a lot of other languages as well. How have we as an engineering entity, if you will, um, transformed over the years? And maybe, maybe you can look at the last few years that you've really been involved.
Do we have the environment and the culture to do this, even if it's hard, which we're going to get into, but do we, do we have a mindset that says we want to do better in this area?
Kelsey Fulton: Yeah, I think, and it's hard for me to say too, um, because I've only really been around for, for seven years in this space. It's not a long, long time. Um, but [00:05:00] I think, I think things are getting better. And I think part of it comes from the just sheer magnitude of security incidents, uh, on a regular basis.
Right. I mean, 15 years ago, while there certainly were security incidents, I think the everyday person was less exposed to it. Um, but now, like, you'll be hard-pressed to find somebody who hasn't been impacted by some type of security incident, whether it's something like Equifax, right. Um, uh, and so I think that the general mindset is getting better.
Um, but I do think, from an organizational perspective, we still lack a lot of support for making something like this happen. So I always say my, my tagline, if I had to say one thing about the work that I do, is that I don't think developers don't want to do the right thing. I just think they don't always have the support or knowledge to do the right thing.
Um, and so I don't know that it's a lack of wanting to, just the lack of being able to [00:06:00] actualize the stuff that they care about.
Sean Martin: Yeah. And one, one, one follow up here, cause the, the, my experience, and I'm sure it's changed a bit as well over time, but my experience by function was in quality assurance, which typically came after delivery from engineering. So they've tossed something over the fence, you pound on it a bit, you send it back over, and hopefully, hopefully we get through the big, huge list before, uh, before release date, uh, time for the holidays, whatever, whatever it is you're trying to reach.
Um, the, I guess, that back and forth, there is a lot of tooling now, um, that helps move some of these things closer to the front end of development and hopefully shorten some of the cycles and the timing of the cycles, uh, between the test and engineering. Do you feel that engineering is also picking up some of the quality assurance [00:07:00] aspects in their development, such that they're not relying on another team to, to toss stuff back and forth?
Kelsey Fulton: The research that I am aware of says no, um, for the most part. Uh, when you ask developers if security is their responsibility, oftentimes they will say, no, that is the responsibility of somebody else. I am here to make the product and get it out the door, right. Um, and in studies like live coding studies and things of that nature that we've done, um, I have found that people will prioritize functionality over security, even when they know that security is something that they should possibly care about.
Um, so I think that, ultimately, some of this has to do with the mindset of, like, it's not my job, and I have 1,000 other things on my plate, so why would I add, [00:08:00] you know, the thousand and first thing to my plate that I just don't have time to do.
And I think another piece of it is, is that people don't think about security from the beginning, usually. What'll happen is part way through or at the end is when you're like, oh, we should have done this in Rust, for example, but now it's all written in C, and that is a non-trivial problem, right? To go backwards and do it in another language.
Um, or, I fundamentally designed my system in this way, and now I found out that, well, the better option would have been to design it the other way, but I've already built the entire system, and that's going to undo eight months of work, and I just can't do that, right? And so one of the things that we found, and I think would help with this problem, but I don't see happening yet, is kind of security by design.
So from the very beginning, the first day you start designing your product, you have someone who's security knowledgeable there, right? To tell you, oh, you know what, you shouldn't do it this way, you should do it this way. Use this [00:09:00] language. Don't use that API. Right. Things of that nature, um, which then softens that process along the way.
And you get less of that ping ponging back and forth that you talked about where quality assurance is like, Hey, you can't do that. And the dev team's like, but I don't know how to not do that. And then you end up in this weird spot. Right. And the deadline's approaching. And sometimes the deadline, unfortunately does overshadow the importance of security, right?
So my inclination is no, I don't think that that is inherently like getting better. Um, but I, again, I don't know that it's like, for a lack of not wanting to, like, they don't sound like they don't want to do it. I just think that right now, the way we've built software and continue to build software and products just does not necessarily support that model.
Um, I don't have a good 100 percent solution how to change the model of how we build software because I don't have that kind of power in the world. Uh, but you know, ultimately, I think if we kind of start to shift that dynamic, it'll help improve things.
Julie Haney: Yeah, I wanted, [00:10:00] I wanted to, you mentioned some of the research that you've done, Kelsey, and I wanted to get a little more into that. Um, how are you and your team going about studying some of the issues that, um, developers are having? Are you, uh, I think you mentioned like live, live coding or, you know, what are some of the methods that you're, you're really getting at trying to identify the challenges?
Kelsey Fulton: So the ideal is for a company to let me embed, uh, either myself or one of my PhD students, and they watch everything that happens and they get to take really good notes. And then we get to talk about that later, um, in a paper. And ideally we can do this at like every company ever.
Right. And that would be the perfect world. Um, unfortunately, uh, companies are not super amenable to that. Uh, it has happened. It doesn't mean it won't ever happen. Um, but ultimately from [00:11:00] their perspective, I'm sending in someone from the outside and then we're going to publish a paper that everyone's going to read.
That's going to talk about how bad they are at security and privacy. Right. Um, even if we don't mention their name. That's just not a great look to them. And I, I understand their, um, hesitancy when it comes to things like that. So we do our best to try to emulate these environments, um, as closely as possible.
So sometimes we might interview professional developers or security professionals to kind of ask them, Hey, how does this work at your company or the environment that you work in? What types of experiences have you had? And these are primarily going to be more from a retrospective sense. So like, what have you seen?
What has worked well? What has not worked well? Um, you know, in your ideal world, what would you change? Things of that nature. This allows us to kind of really drill down and ask really good follow up questions. Um, and really good at like the nitty gritty of what's going on on a small perspective. Sometimes might want to be able to make [00:12:00] very generalizable statements about the things that people are are and are not doing. So we might use a survey to do that again with developers or security professionals. Um, where again, we're going to ask them those retrospective questions, but we lose a little bit of that flexibility to kind of ask the really cool why and why not.
Um, and we just get more of the data of like, here's the things that happen. Um, and you know, we might be able to say with some statistical significance, here's the things that happen. The other option that we can do, um, is something akin to a lab study. Uh, so traditionally, pre COVID especially, um, folks would literally come into a lab and sit down and do some type of task, and you would watch them and collect a bunch of data about the things they did, um, the security mistakes they made or didn't make, and then you'd be able to analyze all of that and, and write, um, papers about it.
And then, uh, with the kind of explosion of, uh, the internet [00:13:00] being more widely available, and things like COVID, and wanting to be able to recruit people outside of the area that you live in, because obviously lab studies are very prohibitive. For me, I'd only really be able to talk to people in Colorado, right.
Um, we started designing, like, remote infrastructure to conduct these. Um, and so this allows us to be able to collect a lot of that data, but someone anywhere in the world can sit down and take our study. Um, and we often try to do this with, uh, as diverse a population as possible. So again, in the ideal world, it's developers, it's security professionals, but they're not always easy to reach.
Uh, they're hard to recruit because they don't just exist, generally speaking on things like Amazon Mechanical Turk or Prolific, which are crowdsourcing platforms for, for study participants. And so we can try to blend a little bit of a mix. So we'll often talk to like freelance developers and some professionals and sometimes students, because it turns out they're actually a pretty good proxy for early [00:14:00] career software developers, because they're about to be early career software developers.
I guess it's not all that surprising, but there is research to back that up. So we can have them complete a number of tasks. We can collect their interactions with the system. Um, and then we can write some really cool papers. Um, so I actually had a student build a system that does this, um, while I was at University of Maryland, and, uh, we have effectively created a plug and play study environment where we can kind of change what the study looks like, but the infrastructure remains the same.
Sean Martin: That's super cool. I'm going to take this moment, um, as a call to action to everybody listening and watching: connect with Kelsey, be part of the research and the work that she and her team are doing. I want everybody to bring their programs to you and their, their teams to you, and help you, help you do this work.
Human factors is so [00:15:00] fascinating. Can you share some of the nuggets of what you look for as characteristics, characteristics of how people interact with the system, how they might think about the steps that take them to this, what's, what's innate, where people pause and think? What, what's some of that work look like?
Kelsey Fulton: Yeah, for sure. So, um, this is gonna vary widely depending on what kind of question you're looking to answer. Um, and so I'll pick some of the, I don't know, cooler things I think that I've done, um, and, and talk about those. So one of the, um, kind of first, uh, um, classification papers that we wrote with my colleagues at University of Maryland, we wanted to look at, we know developers struggle with security, but there wasn't a good understanding of what exactly they struggle with.
Is it, like, they just do not understand at all that they need to implement security, or is it [00:16:00] they try and just don't do a good job? What does that look like, right? So, uh, in 2016, uh, my colleagues at University of Maryland built this infrastructure called Build It, Break It, Fix It. And so the core of Build It, Break It, Fix It, uh, was this idea of: we have lab studies where we have a lot of control of what goes on.
And then we have things like field studies where we go out and look at what people are doing. We have very little control as to what goes on. They both yield interesting insights. But what if we took the two and kind of blended them and we made a lab field study, basically. So what they did is they built this infrastructure, uh, where people would come, they would work in teams, which is pretty normal for software development, and we'd give them something to build. Some type of, uh, system, could be, uh, a secure electronic health records database.
Uh, we also had, like, an ATM bank communication system where you had to kind of protect communication between the two entities, [00:17:00] something small and standalone, but not so small that it's, like, inconsequential, right. Um, and then they could do it however they wanted: any language, any tools, we didn't care. Um, and so things we were looking for is, like, one, what choices do they make?
We didn't tell them what language to use. We didn't tell them what tools to use. Do they make good choices? It turns out no. Uh, a lot of people, a solid number of people wrote their systems in C, uh, and then produced less secure code than everybody else who wrote it in other programming languages, which is not a surprising result, but it's cool to be able to quantify those, those types of things.
Did they use fuzzers to test anything? Did they use static analyzers to test anything? No, they just used the tests that we provided them, and that was about it. Um, and so those are the types of things of, like, given free rein to make decisions, what decisions do they make? Are they good for their development?
Who knows? And it turns out, generally speaking, no, they're going to go towards convenience for them, of what they [00:18:00] know, which is reasonable, um, and not make necessarily good decisions about security. Now, why would they make good decisions about security? Well, part of this, uh, infrastructure is that once you build it, you give it to everybody else, and they look for vulnerabilities in your system, and the more vulnerabilities they find, the more points you lose.
People with the most points at the end won a cash prize, so there was motivation to do well. Um, so, So they were motivated to write secure code, we explicitly told them like communication needed to be secure, for example in the ATM bank situation, or in the electronic health records version we told them you need to employ some level of access control like everybody shouldn't have access to everything right.
And part of that was to see, if we told them they needed security, do they implement it? The answer is sometimes. So what we saw, uh, all said and done, we were able to take all of these submissions and look for, we looked for the vulnerabilities in them. Uh, it took us quite a while. [00:19:00] We looked at 900 different vulnerabilities, I believe, somewhere in there.
Um, and so it took us a while to look for these. We did it all manually. Um, and, uh, we classified them kind of into three broad categories. We basically wanted to see, does it fall into the category of, I didn't try security at all? Does it fall into the category of, I tried but didn't know what to do? Or does it fall into the category of, I made a programming mistake, and is it like, I messed up my control flow, right?
Um, and the reason we wanted to do this is that, feasibly, the last category you can find with tools, right? Like control flow issues, relatively easy to find with tools. Well, decent testing, but also things like fuzzers, right? Not too hard. Um, the first category is technically fixable, because if we teach them they need to know security, right, in theory we should be able to get them to implement security. And if we know data needs to be encrypted and we test it and it's in plain text, pretty easy to figure out. The middle category is [00:20:00] where things start to become difficult. And so one of the things we wanted to know is, like, are the things fixable by what people think is fixable? Because for a long time people have just said, well, we just educate them better, or we build them new tools, and we won't have any problems, right? It'll solve all of our security problems. Obviously not the case, or we wouldn't still be here.
Um, and so the middle category is where we were particularly interested, because it shows people at least know they need security, but didn't know what to do, or failed to do it correctly, right? And so examples might be, I know I need encryption, but then I pick an insecure algorithm, or I pick an insecure mode for an algorithm, right?
So I tried, but I just couldn't get there all the way. Or I used encryption, didn't randomly generate my IV, right? And so now it becomes more breakable. Um, so a lot of really cool stuff came out of that paper. Uh, we also did a follow up study where we wanted to see kind of the whole process that people pursued when they build software, um, [00:21:00] but we largely picked looking at the vulnerabilities because we wanted to see what went wrong. On the inverse, on an interview-side study, I became very passionate, and still am, relatively speaking, about Rust. I am certainly not a good Rust developer, but I think it's an awesome language, and since my background is in systems, I'm really, like, stoked on it being a viable replacement.
And why are they still using C? Because it is still far more popular than Rust. Um, and in theory, if Rust is supposed to be this, like, catch all replacement, we shouldn't have this problem anymore, right? Um, and so clearly something's going wrong. So, uh, one of the things we wanted to look for is basically what, where are those inflection points where people either turn away or keep going?
Um, and why are those things like the, the mediating factors? Um, and so we actually ended up interviewing people at large companies that, [00:22:00] uh, adopted Rust or tried to get Rust adopted at their company. Um, and the things we wanted to measure there were like, what was your experience like? What issues did you run into?
Are they technical? Are they organizational? Turns out it's a mix of both. Um, and are there things we can fix, right, as a security community? And what do those look like? Um, it's a really long winded answer to your question, is that the way we decide what to measure largely depends on what we want to measure, right?
Um, and what we look for. So, if we're looking more for, like, experiences, then we might go about asking them questions, and those questions are going to be largely directed towards, um, what kind of experiences they had, right? Um, whereas if we want to measure what they're actually writing, we might actually make them write something, um, and then we can collect the end result, um, the tools they used, the language they used, right, things of that nature, uh, to be able to kind of correlate those results together.
And I'm really sorry for that very long winded answer. I hope that answered your question.
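For readers who want a concrete picture of the middle category Kelsey describes, the sketch below is a minimal, hypothetical illustration (not code from the study, which let teams pick any language), written against Python's cryptography package. Both halves "use encryption," but the first picks a mode that leaks patterns, while the second uses an authenticated mode with a fresh random nonce.

```python
# Hypothetical sketch of the "tried security, but got a detail wrong" category.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)
record = b"patient_id=00042"  # invented 16-byte field, standing in for a health record

# Mistake: AES in ECB mode is deterministic, so identical plaintext blocks
# produce identical ciphertext blocks and structure leaks to anyone watching.
ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
leaky = ecb.update(record + record) + ecb.finalize()
assert leaky[:16] == leaky[16:32]  # the repetition is visible in the ciphertext

# Closer to right: an authenticated mode (AES-GCM) with a fresh random nonce
# per message; the nonce travels with the ciphertext, and tampering is caught.
aead_key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(aead_key)
nonce = os.urandom(12)  # never reuse a nonce under the same key
ciphertext = aesgcm.encrypt(nonce, record, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == record
```

The hardcoded-IV variant of the same mistake looks almost identical except that the nonce is a constant, which is exactly the kind of small slip the middle category captures.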
Sean Martin: It's, it's so super [00:23:00] cool for a nerd like me. So I love it. Julie.
Julie Haney: That's fantastic. Um, so, you know, I was thinking, um, when you're describing some of the, the, the research findings, um, and the different buckets. So some of it is, is kind of about motivation or attitude, right? Are they, are developers motivated to even put security in in the first place? Some of it is about, uh, uh, kind of education, or do they just, they don't know.
Um, some of it is about, uh, availability of tools that can help them. Um, and I, and I know that some of the recent, um, kind of big vulnerabilities that have come out have been embedded in, in, uh, libraries, right. Kind of these, these kind of open source libraries. And I was, you know, wondering, like, what, what is the breakdown?[00:24:00]
When people are using these different libraries, or, um, you know, are these usable for, for people? Um, do the libraries have vulnerabilities, or are they allowing the developers to introduce vulnerabilities?
Kelsey Fulton: Yeah, I think about this a lot. Um, because I think traditionally this field, which is human-centered software development and human-centered security for professional workers, and, I don't need to tell you, Julie, is relatively young, right? There's not a long history of people exploring this, and I think, rightfully so, we've spent a lot of time building foundations for why stuff happens, what people think about it. Um, and I think just now we're starting to a little bit make the transition to, like, how do we fix it? Um, so how do we build better libraries? How do we build better documentation? What does that look like? Right? Um, and it's something I've been spending a lot of time thinking about, uh, recently.
And I, I think [00:25:00] the answer is that, ultimately, um, for better or worse, I think when we're taught to code, and especially in a university, I'll pick on universities 'cause I'm in one, um, we've really prioritized: here's a set of tests. If your code passed the tests, great, 100%. If not, you lose points. Right. And so we are very, um, test motivated, I will say, right, as developers, or functionality motivated as developers. And where this comes into play with things like APIs, um, and I'll also pick on, like, large language model generated code or Stack Overflow suggestions, is that if I can get it and it runs on the first try, like, I set it up, everything compiles, I passed my tests.
I think there's just no encouragement to look deeper. Um, which sounds weird to security people because we're always looking deeper. That's like the whole part of this job, right. It's like find new things, um, and go past what is, what is required. [00:26:00] But I think that there's just little motivation to kind of explore that.
And so when you have things like, um, the software supply chain, where I have a library that calls a library that calls a library that calls a library, right, and then three libraries down is vulnerable, like, I as a developer am never gonna go three libraries down to look at that code and vet it, right.
And frankly, like who's responsible for that? I don't know. Is it the library that calls the library or is it me as the developer who's now using this? I don't have an answer for that. Um, and so I think some of it's that, but there's no motivation to dig deeper and when you have this kind of nested structure, you're not going to dig deeper, right, because it works.
It does the thing I want it to. And unless someone at my company's making me look deeper for compliance reasons or anything like that, then I'm, I'm good. Right. I'll move on. Um, and then the other piece of that, where is it that people use [00:27:00] libraries and don't think about it, or the library is kind of almost setting people up for failure?
There's been a long history in this community of encouraging libraries to remove insecure defaults, and I do think it is getting better. I think that inherently things are improving. Um, but I don't think we're all the way there yet. And I think it's very hard, um, to, again, make, make developers dig deeper when they know, oh, well, I can use this crypto function, I'll just use the defaults, because I'm assuming that whoever made this knows more about crypto than I do.
Uh, and then it turns out the default values are, you know, bad modes or bad algorithms, right? Um, and it still stems back to that, well, I'm not going to dig any deeper than I have to, uh, because why would I, right? I am using this thing that somebody else made, presumably somebody else vetted, um, and so why would I spend the extra time that I don't have, because I'm already super busy, [00:28:00] right? A thousand other things going on. Um, so I do think it's getting better. Um, but I do, yeah, I think a lot about how can we automate this process, to almost remove some of the human factors in some sense, right?
Like, they can still make mistakes. I still have to give them room to use the API however they see fit. Um, but maybe I just, like, increase that barrier a little bit, and now they have to read my official documentation to do that, which they're probably not going to do, right. Uh, versus I make them read my official documentation to do the right thing, which they're definitely not going to want to do, right, 'cause official documentation is a drag most of the time.
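To make the "insecure defaults" point concrete, here is a hypothetical sketch of the kind of wrapper Kelsey alludes to, where the convenient path is also the safe one. The seal and open_sealed names are invented for illustration, not a real library API; the caller never chooses a mode or a nonce, so there is no unsafe default to reach for.

```python
# Hypothetical secure-by-default wrapper; seal()/open_sealed() are invented names.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt with AES-GCM; a fresh random nonce is generated here and prepended."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Split off the nonce and decrypt; raises InvalidTag if the blob was altered."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)
blob = seal(key, b"session token")
assert open_sealed(key, blob) == b"session token"
```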
Sean Martin: Yes. I mean, my head's exploding here. So how, I guess the question I have is, can we expect the pure engineers to actually, from, from a human [00:29:00] psychology perspective, get this? I mean, we, we wouldn't expect somebody in, in HR to come in and test a product, right. And so are we pushing on the wrong thing here?
And so you touched on, both of you touched on this a bit in terms of, can we, can we leverage a platform or systems and tooling to help provide guardrails and a nice base that finds these insecure APIs for them, so we take some of that burden off with some of the automation you spoke to? And, or are we relying on the wrong role to do this work?
Sean Martin: [00:30:00] Somebody who does the work with the knowledge first, does the work with the knowledge to fix, and then the functional team does the function stuff.
I don't know. Are we pushing on the wrong pegs here? Trying to fit the, fit the square in the round.
Kelsey Fulton: Yeah, no, I, I, I think, um, this is also something I spend a lot of time thinking about, because I think often the way, myself included, um, that we talk about software developers is that we make some base assumption that they have almost, like, a formal CS education, um, that I can assume they've taken, you know, programming languages and that they've taken algorithms, and if I explain something to them in security, they at least have the background knowledge to utilize that and understand, like, oh, this language versus that language, garbage collection versus not, right, things like that.
And I think that that's incorrect, because if anything we know that, first of all, many people who write code for a [00:31:00] living do not have CS degrees, right. And in fact, some of them do not have degrees at all, and they're very good at writing code; they learned a different way, and that's, that's fine.
But I think, you know, pushing them to think about, I don't know, understanding why you might use asymmetric versus symmetric encryption and what that means from a mathematical standpoint, and why your key has to be large enough from a mathematical standpoint, and, um, when they took, you know, a six-week coding boot camp,
I don't know that I can make that expectation that they have the formal math right there to understand that. Maybe they do, maybe they don't. I don't know what their background is. Um, and so, um, I think, ultimately, there is room for automation, and certainly there are lots of things we can automate, um, and I, I think that we should use that, because the less work that we have to do as people is probably better for everybody, right?
If we can, we can take some of that and automate it. Now, [00:32:00] ultimately, what it comes down to is, even with automation, automation's really good at telling you when something's wrong. It's less good at telling you how to fix it. And it really just doesn't fix it, right? Like, I don't, I don't know for 100 percent sure, but I'm almost certain there are not a lot of automated bug fixing tools in the world, right? They can often tell you what's wrong, but they can't change the code for you, because there is some need to understand how the code works, right? It's not just about changing one line. A lot of the times, if that's the line that's vulnerable, it's about figuring out how that line interacts with the rest of the system and understanding all of that. And that requires context, which is why, um, AI is not particularly good at this, because context is not its area. It's better at pattern recognition, right?
It's better pattern recognition, right? Um, And so I think there's room for automation for finding things. But ultimately all the tool can do is tell me something's wrong. I still have to address it. And it's much easier to address things when I know what's going on in the [00:33:00] code base. Right. If I know what's going on in my code, I know this line interacts with line A, B, C, and D.
So I need to change all of those lines. Um, and I, as a security engineer who does security for the whole company may not know that. I may have enough knowledge to know like, oh, this should do whatever, like the functionality requirements, but I'm not going to know how the code was written, how it was engineered, what lines interact with each other, unless I sit down and learn the entire code base, which I don't have time to do, because there are already not enough security professionals, right?
Um, And so I think, I think that we can teach the engineers enough to make good decisions from the beginning. I'm not expecting that they're going to be expert vulnerability hunters. I think we can leave that to the security professionals. But if we put someone with security knowledge from the second we start building the code and they work with the team to build the code and think about security stuff, the next time an engineer that was on that team goes to write code.
I don't have data to back [00:34:00] this up, but I, I guess, or I, I, um, propose that they would think more about security than had they never worked with that security engineer, and there's a little bit of data to back this up. Um, there's a paper that did what they call a co-creation model. So basically they embedded a security person in the software team.
Um, and they found that the people started to think a little more about security, basically, at the end of working with this person. So there's some data to back this up, right? Just not over a long period of time. Um, and so I think the, I think the engineers are trainable, is my, is my, uh, hypothesis. Um, but I think without giving them direct support, or we put someone side by side with them to work with them over time, we're not going to see improvement, um, over time in their security understanding and in thinking about security. As I think we all relatively universally agree, it's much easier to bake in security from the beginning than try to add it at the end.
And so if we have them thinking about it from the beginning, um, with the help of automation, uh, [00:35:00] hopefully, by the time they get to the end, the stuff that needs to be addressed is relatively small, and not, like, you need to redesign this entire system because you didn't think about how the security pieces should interact, or things of that nature.
Julie Haney: Right? Like, I don't know, maybe it's part of, like, the security culture of the organization, this, this, you know, this mindset that we're going to build secure code and our products are going to be secure.
Um, and that doesn't exist everywhere, obviously. Um, so, I mean, do you have any recommendations for organizations to, to kind of shift into that mindset, and also, um, create this value proposition to their leadership so that they can, you know, decide to [00:36:00] allocate resources and, and see the importance of trying to, to promote this secure development throughout the whole process?
Kelsey Fulton: Yeah, I think ultimately that might be the hardest problem to solve in this space. Um, I, I, you know, I joked, I think the developers and the engineers are trainable. I'm not so sure about the CEOs. But, um, I think, uh, one of the things that we saw, so I'll specifically call back to the, the Rust research that I mentioned, um, was that they had to get buy-in, right, from upper management.
If you want to convince your upper management, you're going to switch this entire code base that's in C to a completely different language that already works feasibly, right? Um, there's some legwork there. And one of the things that they said that I think makes a lot of sense is you have to demonstrate value, right?
Ultimately, you have to show them that it's better, but that can be inherently difficult in [00:37:00] security, because things aren't just better, right? It's not like I'm taking them a vulnerable code base with a thousand vulnerabilities and then I'm addressing it. And usually your code base isn't vulnerable till it is, right?
And so if you say to yourself, well, I have this code base in C and I want to switch it to Rust, for example, um, demonstrating that Rust is better is going to be hard. Right, without finding vulnerabilities in the code base to be like, Oh, this is exploitable. Right. And then we'll lose all this money or things of that nature.
Um, so one of the things that, uh, folks did in the Rust paper is that they took Rust to their upper management, upper management said no. So they, on their own time, rewrote parts of the code base, took it to them, and showed that it was, like, faster, the testing was much quicker, because the unanimous view is that once Rust code compiles, you're pretty sure that it's, like, at [00:38:00] least memory safe, right?
Um, and so they basically just went rogue and rewrote something to demonstrate that it was inherently better than the thing that they already had. Not that I'm suggesting everybody go rogue. That's just one, one solution that we saw. Um, but I think ultimately it comes with bringing data, some form of data, and backing it up, um, which is hard to do, uh, because we don't necessarily have a lot of quantifiable metrics for, like, how much security is going to cost you if something goes wrong, right. Or how many people hours security is going to cost you when something goes wrong versus investing in it up front.
Um, and so I think it's a calculus of talking about, um, what will go wrong or could go wrong versus what we could invest up front, and what that looks like and why that would be better.
And balancing that equation is a hard problem. Um, but taking that motivation in and showing them like the demonstrative [00:39:00] value of it, whatever that may be. Um, even if Rust is slightly faster than the current language you're using in the example that I'm giving, or it turns out we test much faster if we employ fuzzers and static analyzers than having to do it manually.
Um, sometimes you don't even have to mention the security part, if you can sell it in another aspect, right, that they are going to care about, which is like time to delivery, right, total cost, um, quality of the product in a non-security sense, right. And so, um, I think trying to figure out what those other pieces of value are, other than, like, our stuff will be more secure, um, because that's kind of an abstract concept to a lot of people, and what does that mean, uh, can kind of help sell, sell security.
Um, and then just be loud and annoying enough that eventually they listen to you. Sometimes you just got to keep trying until somebody listens to you, right. Um, and so, you know, keep advocating, um, keep pushing, and eventually maybe they'll tell you, give you something to get you to shut up, uh, which is, you know, a step better than it was [00:40:00] before, before you had that option.
Julie Haney: Yeah, that's, that's great. Great advice. And, uh, and hopefully the research you and your team have done can also provide some evidence for people trying to make that value proposition; they can point to some of that. Um, so, um, this has been a fantastic conversation. Um, one more question to wrap up, and, and also just to let everyone know, we'll, we'll provide some ways to reach Kelsey, um, so you can, uh, follow up with them and, and hopefully participate in some studies, um, and, and get some more, um, information on the great research they've done.
Um, so, uh, last question for you is what's next for your, for your research? What are, what's kind of the big topic that, uh, you, you want to look at next?
Kelsey Fulton: Um, yeah, [00:41:00] so I'm going to be cliche and say large language models, uh, which is what everybody is thinking about right now. Um, but I'm more on the side, uh, of almost resignation. They're here, they're not going anywhere. People are definitely going to use them to write code. Um, and so how do we improve them?
Right. How do we make them better for developers as far as security? Uh, they like them. They feel more productive with them. Um, and we know that they provide vulnerable suggestions. Um, and so right now we're looking at, um, how developers think about trust when it comes to security and privacy suggestions from large language models.
So do they trust them? How does this trust compare to other resources we know have been problematic, Stack Overflow, um, things of that nature? And then how can we leverage what they do and do not trust to actually build better suggestions and build better, um, mechanisms? So whether that's [00:42:00] highlighting possible vulnerable snippets, right,
in a color that's bright and obnoxious so that they pay closer attention to it. Um, so I think, uh, just ultimately trying to improve large language models. Um, and also I, I have a couple of other tools on my radar of things we can possibly improve, uh, such as Stack Overflow. Uh, that is, is where I kind of see my research heading next and, and things that I'm currently working on.
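As a hypothetical example of the kind of vulnerable suggestion Kelsey is describing (not drawn from her study data), a code assistant might happily generate a password-reset token with Python's random module, which is not cryptographically secure; the standard-library secrets module is the safer swap:

```python
# Hypothetical illustration of a risky code-assistant suggestion and a safer fix.
import random
import secrets

def reset_token_risky() -> str:
    # random is a general-purpose PRNG for simulations, not security:
    # tokens built this way can be predicted if its internal state is recovered.
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

def reset_token_safe() -> str:
    # secrets draws from the operating system's CSPRNG and exists for this use case.
    return secrets.token_hex(16)  # 32 hex characters

print(reset_token_risky(), reset_token_safe())
```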
Sean Martin: I love it. We didn't, we didn't talk about OWASP top 10 and how the top 10 is always the same over and
Kelsey Fulton: Right. Buffer overflows forever.
Sean Martin: So there's a lot of lessons to learn and hopefully we can, we can solve some of those. I love the research you're doing, Kelsey. I would encourage everybody to connect with them to be part of it.
Help, help them help you do this work. That's so, so important. And, um, yes, we'll link to your page where some of your research is available. And hopefully people reach out to you. And Julie, thank you for bringing another [00:43:00] fantastic conversation. Um, I really love this topic personally, and I think it's an important one beyond my personal preference.
And great conversation. Hopefully people take a lot with them today, and, uh, we can continue to redefine cybersecurity in a way that's better for business. So thank you both.
Kelsey Fulton: Thanks for having me.
Julie Haney: Thanks.
Sean Martin: And thanks everybody for listening and watching. Do stay tuned for more on Redefining CyberSecurity and more episodes of the Human-Centered Cybersecurity series with Julie.
Uh, thank you all. See you soon.