ITSPmagazine Podcast Network

AI's Role in Cybersecurity and Society | An Infosecurity Europe 2024 Conversation with Ian Hill | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

Explore the transformative impact of AI on cybersecurity and society through an in-depth conversation with Ian Hill at Infosecurity Europe 2024.

Episode Notes

Guest: Ian Hill, Director of Information and Cyber Security at Upp Corporation [@getonupp]

On LinkedIn | https://www.linkedin.com/in/ian-hill-95123897/

____________________________

Hosts: 

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

Episode Notes

At Infosecurity Europe 2024, conversations were electric, diving deep into the intersection of AI and cybersecurity and its profound impact on society. Hosts Marco Ciappelli and Sean Martin sat down with Ian Hill to explore these pivotal changes, offering sharp insights into the digital revolution.

A Casual Start

The conversation kicked off light-heartedly with Marco Ciappelli and Sean Martin, setting a relaxed, talk-show-like atmosphere. Despite minor technical hiccups, this informal start paved the way for an engaging discussion.

“We’re messing with physical technology and digital technology,” remarked Sean Martin, perfectly capturing the complex interplay between human users and their increasingly advanced tools.

From Keynotes to Key Concerns

Ian Hill shared his journey from Director of Information and Cyber Security at Upp Corporation, now part of Virgin Media O2, to his current advisory role. He emphasized the freedom and reduced stress of stepping back from frontline cybersecurity.

Hill’s keynote at the event centered on AI’s implications for the future of work and society, countering the exaggerated narratives often associated with AI.

The Mislabeling Issue: AI vs. Automation

Marco Ciappelli voiced a common frustration: the overuse of “AI” to describe mere automation. Hill stressed the need to differentiate true AI from sophisticated automation systems that lack adaptive learning capabilities.

“We need to distinguish between what is automation and what is AI. There’s a lot of automation going on at the moment,” Hill noted.
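To make Hill's distinction concrete, here is a minimal, illustrative sketch, not from the episode; the field names, thresholds, and learning rule are invented for the example. The first function is plain automation, a fixed rule applied to input criteria; the second keeps a feedback loop and adjusts its behaviour when told its verdict was wrong.

```python
# Illustrative contrast between "automation" and a system that learns from feedback.
# Hypothetical, minimal example; not drawn from the episode or any vendor product.

def automated_filter(event: dict) -> bool:
    """Automation: a fixed, predefined rule. It never changes its behaviour."""
    return event["failed_logins"] > 5 or event["bytes_out"] > 1_000_000


class FeedbackFilter:
    """A toy 'learning' filter: one weight per feature, nudged by analyst feedback."""

    def __init__(self, features):
        self.weights = {f: 0.0 for f in features}
        self.bias = 0.0

    def score(self, event) -> float:
        return sum(self.weights[f] * event[f] for f in self.weights) + self.bias

    def predict(self, event) -> bool:
        return self.score(event) > 0

    def learn(self, event, is_malicious: bool, lr: float = 0.01):
        """Adapt: shift the weights toward the label an analyst (or outcome) provides."""
        error = (1.0 if is_malicious else -1.0) - self.score(event)
        for f in self.weights:
            self.weights[f] += lr * error * event[f]
        self.bias += lr * error


if __name__ == "__main__":
    event = {"failed_logins": 7, "bytes_out": 120.0}
    print("automation says:", automated_filter(event))

    model = FeedbackFilter(["failed_logins", "bytes_out"])
    print("before feedback:", model.predict(event))
    model.learn(event, is_malicious=True)   # the feedback loop: the model changes
    print("after feedback:", model.predict(event))
```

The point is not the specific algorithm; it is that the second system changes with feedback, which is the adaptive quality Hill reserves for the term AI.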

Western Society’s Dependency

Hill warned of AI’s subtle yet significant impact on Western societies, likening it to the industrial and agricultural revolutions but with a more profound effect due to AI’s ability to replace cognitive tasks.

“AI is different because AI is actually replacing our thinking, our creativity,” Hill cautioned, highlighting the potential for job displacement and challenges to human creativity and learning.

The Drive for Profit

A recurring theme was the economic drivers behind AI advancements. Hill critiqued the relentless pursuit of profit and efficiency, which risks lowering the quality of services and products in favor of mass production.

“With all these technological developments, the primary driver is profit and money,” Hill asserted, reflecting on the commercialization of AI.

The AI Arms Race in Cybersecurity

Hill and Martin discussed the escalating AI-driven war between cybersecurity defenses and attacks. They emphasized the need for rapid, machine-learning-based responses to evolving cyber threats, as traditional human-led security operations struggle to keep up.

“You need machine learning, lightning-fast machine learning, to predict and react to events before the human even knows about it,” Hill stated, hinting at a future where automated systems dominate the cyber battlefield.
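As a rough sketch of what that machine-speed triage could look like in practice: the host names, telemetry fields, and thresholds below are hypothetical, and this toy baseline model stands in for whatever detection logic a real product would use.

```python
# A toy sketch of machine-speed triage: score events as they stream in and act
# immediately, instead of queueing them for a human analyst. All names, fields,
# and thresholds are invented; real SOC tooling would sit behind this idea.
import statistics
from collections import defaultdict, deque

WINDOW = 50           # events of history kept per host
THRESHOLD_SIGMAS = 4  # how unusual an event must be to trigger a response

history = defaultdict(lambda: deque(maxlen=WINDOW))

def handle_event(host: str, bytes_out: float) -> str:
    """Return the action taken for one telemetry event."""
    past = history[host]
    action = "allow"
    if len(past) >= 10:
        mean = statistics.fmean(past)
        stdev = statistics.pstdev(past) or 1.0
        if (bytes_out - mean) / stdev > THRESHOLD_SIGMAS:
            # React at machine speed: contain first, let the analyst review later.
            action = "isolate-host-and-open-ticket"
    past.append(bytes_out)
    return action

if __name__ == "__main__":
    for i in range(40):
        handle_event("laptop-17", 1_000 + (i % 5) * 50)   # normal baseline traffic
    print(handle_event("laptop-17", 1_200))               # allow
    print(handle_event("laptop-17", 250_000))             # isolate-host-and-open-ticket
```

A real SOC pipeline would use far richer models and telemetry, but the shape is the same: score, decide, and contain in milliseconds, with the analyst reviewing afterwards rather than sitting in the loop.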

The Trust Dilemma

The conversation turned philosophical as the speakers pondered the reliability of AI-generated content and the impact of deep fakes and misinformation. Hill addressed the issue of AI “hallucinations”—erroneous outputs—and the dangers of blindly trusting AI.

“We’re losing a sort of grip on reality… because it’s becoming harder to distinguish between what’s real and what isn’t real,” Hill commented, expressing concerns about a future rife with misinformation.

Concluding Thoughts

Infosecurity Europe 2024 highlighted AI’s dual nature: its potential to revolutionize industries like healthcare and cybersecurity contrasted with its capacity to disrupt societal norms and personal authenticity.

As Hill put it, “Who controls the AI controls the future,” warning of the influence, political or otherwise, that those who own and sponsor the AI could exert to bias its answers.

The dialogue underscored the need for evolving our understanding and ethical governance of AI to ensure these powerful tools enhance rather than undermine our societal fabric.

Be sure to follow our Coverage Journey and subscribe to our podcasts!

____________________________

Follow our InfoSecurity Europe 2024 coverage: https://www.itspmagazine.com/infosecurity-europe-2024-infosec-london-cybersecurity-event-coverage

On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllTcLEF2H9r2svIRrI1P4Qkr

Be sure to share and subscribe!

____________________________

Resources

Learn more about InfoSecurity Europe 2024: https://itspm.ag/iseu24reg

____________________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Want to tell your Brand Story as part of our event coverage?

Learn More 👉 https://itspm.ag/evtcovbrf

Episode Transcription

AI's Role in Cybersecurity and Society | An Infosecurity Europe 2024 Conversation with Ian Hill | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Marco Ciappelli: Can I get comfortable?  
 

Can I get comfortable, like, pretending to be on a talk show? Sit back, have our mug with water, or whatever in it.  
 

[00:00:14] Sean Martin: I don't need those on at the moment. I can't see with my glasses on. Ah, here we are.  
 

[00:00:22] Marco Ciappelli: Here we are.  
 

[00:00:23] Sean Martin: Technology's fun, isn't it?  
 

[00:00:25] Marco Ciappelli: I'm addicted to it. I'm addicted to that, to the AI. Any blinking light I see, I'll just follow it.  
 

[00:00:32] Sean Martin: We're messing with physical technology and digital technology. Trying to get it to work. Sometimes I blame the cables for this stuff.  
 

[00:00:42] Marco Ciappelli: I think you gotta blame your head.  
 

[00:00:44] Sean Martin: You blame me, I know. You like to blame me.  
 

[00:00:46] Marco Ciappelli: It's a human thing.  
 

[00:00:48] Sean Martin: Ian was blaming me too.  
 

[00:00:48] Marco Ciappelli: Who is this person in the middle here? 
 

Oh, you're me.  
 

[00:00:51] Sean Martin: He's fallen asleep.  
 

[00:00:52] Marco Ciappelli: I heard this before. I heard this before. Well, Sean, let's make a proper introduction here where we are, which is clearly Infosecurity Europe.  
 

[00:01:03] Sean Martin: Well, Ian and I are, you're in the black hole, it looks like. 
 

[00:01:08] Marco Ciappelli: I'm not sponsored. It wasn't long enough. Uh,  
 

[00:01:13] Sean Martin: yeah. No, we are in London and, uh It's a great event every year. 
 

Great event every year. A lot of people. The hall is bigger, it seems. Uh, more vendors, more people. Much more spread out. And, uh, I hear the conversations are good. I know we've done quite a few already.  
 

[00:01:30] Marco Ciappelli: Mm hmm.  
 

[00:01:31] Sean Martin: Before, uh, talking about ransomware and cyber insurance. We're gonna have more chats in the next couple days. 
 

I'm thrilled, though. This guy, Ian Hill. Thank you. When was the last time, was it two years ago? Yeah, we had a good chat. We didn't believe each other existed.  
 

[00:01:50] Ian Hill: No, no. When I got here I just needed to make sure you weren't a deep fake.  
 

[00:01:54] Sean Martin: That's right. I am fake. I'm not deep. I'm very shallow. Very shallow. Ah, so Ian, uh, I know you had a, you had a keynote this morning, you have a keynote tomorrow if I'm not mistaken. 
 

And, uh, lots of fun conversations you're having on topics, and then we put a few bullet points together. We'll see where we go with those. But for those who didn't listen to our chat two years ago and haven't heard you or seen you speak and don't know who you are, maybe share with our audience what Ian is up to at the moment. 
 

[00:02:25] Ian Hill: Ah, so if I remember right, when I was talking to you the last time I was working for a company called BGN Insurance. Since then, I was Director of Information and Cyber Security for Upp Corporation. That's right. Which was a very fascinating story, because Upp Corporation, um, uh, was owned by sanctioned Russians. 
 

And so the UK government decided that they must sell Upp Corporation. Um, after doing a very intrusive security audit to evidence that there hadn't been any infiltration or anything like that. So I was on the receiving end of that as Director of Information and Cyber Security. So I had to sort of see all that all the way through. 
 

It all went relatively successfully. It got sold to Virgin Media O2. Uh, I was out of a job, job done. So I'm now, um, working really as an advisor and consultant. Part of a collective, we call ourselves, of sort of senior security people who sort of just work together and do advisory and consulting. That's where I am at the moment. 
 

So not a full time job and I get to turn my phone off at night. You know, that was the big thing. And I could actually turn my phone off at night and not worry about getting calls in the middle of the night saying we've got ransomware or something. I should have had those before.  
 

[00:03:49] Sean Martin: Sure. Sure. So you have, you had a talk this morning on AI. 
 

Yeah. And tomorrow's talk is AI. As well. Look at that.  
 

[00:03:59] Marco Ciappelli: Hmm, I wonder what the buzzword is this year.  
 

[00:04:02] Sean Martin: I don't think AI was part of this conversation.  
 

[00:04:05] Ian Hill: You can't get away from it. It's, yeah, it's um, it's, it is quite fascinating. It was really good this morning, some good questions. Um, but I'm, maybe I'm just cynical or old or something. 
 

But I take a more realistic view of the expectations of machine learning. I'm one of those that doesn't like to call it AI. It's not intelligent. It's machine learning. You know, um, GPT-4 and those, you know, those language models. They're glorified word association models.  
 

[00:04:45] Sean Martin: Automated content generation. Automated content generation. 
 

[00:04:51] Ian Hill: That's it. I mean, they have the uses. No two ways about it. What's been called AI, whether you call it AI, whether you call it large scale neural net, it is going to significantly change work. It's going to change society, certainly Western, certainly developed society. It's going to change developed society radically in so many ways. 
 

I'm not really sure anybody quite knows where it's all going to go. At this stage, we can see now, even at this level, the impact it's having, if you go into the future, as it starts to evolve, it's going to impact all elements of our lives in the developed world. It's just the nature of the beast.  
 

[00:05:42] Marco Ciappelli: So, you said we don't know where it's going, but we're going to all jump on board anyway. 
 

That's right. Because it looks, looks fun. It's a trip we want to have. And I think it's very human to do something like this. We've done it in the past. I mean, any innovation, we don't look into the future, we don't know. But this time, we were talking a little bit before we started, it's different. When we talk about automation from industrial revolution perspective, reallocation of jobs, this time you said it's different. 
 

[00:06:11] Ian Hill: It's interesting. I had this conversation this morning, because a lot of people compare it, particularly in the context of the concerns that, you know, AI is going to cause job losses because it's going to replace jobs. Yes, it will. Absolutely. And I had this conversation this morning and I said, well, isn't that just the same as the industrial revolution or the agricultural revolution, where machinery, you know, replaced humans? 
 

Doing a lot of manual work. AI is different because AI is actually replacing our thinking. Our creativity. Um, so, back in the industrial revolution, yes, and the agricultural revolution, yes, a tractor could replace X number of men or women, um, with horses and plows. But that was a very specific thing. Um, and those people went on to do other things, admittedly, because there are other things, uh, you know, other industries and other work that they could do. 
 

But where AI is so different is that it is going to be inventing and creating and replacing things that we would have naturally moved on to in a traditional sort of agricultural revolution pathway, if that makes sense. Um, so, you know, when you look at, I'll still use AI just for the point of it. AI is writing books now, creating pieces of art, creating music. 
 

Um, you know, now we've got people using, you know, ChatGPT to write management reports, to do their homework, to do, you know, all sorts of things. They're not having to think for themselves. The machine is thinking for them and telling them what to write, and that brings this whole philosophical question that's highly concerning: you know, it has the potential to dull or dumb down our abilities to learn and create, because we don't have to. 
 

'cause there's a thing that will do it for us.  
 

[00:08:34] Sean Martin: Yeah. Well, I dabble in some music, and Marco and I just got some new technology that allows us to create some music. He has his own, I have my own. Um, I was messing around with some lyrics. And I, for a split second, I thought,  
 

Should I get some assistance in the lyrics that I wanted to write? 
 

And two things immediately happened. One is, no, because I'm feeling the lyrics that are starting to come into my mind. And two I didn't want to not be authentic. I didn't want not to be authentic. And I think, and I don't know how this plays out in all of business, but I think, uh, we've had a lot of conversations where humans will want something that makes them feel that there's a connection to it. 
 

Now there may be some things where it doesn't matter. I think we were talking before, like, do you get the handcrafted artisan shoes that are made of leather, or do you get the mass-produced ones, right?  
 

[00:09:42] Ian Hill: I think you're right, but I would argue as well that humans potentially are inherently lazy. Um, and why do it yourself if you can get something else to do it? 
 

Why think for yourself if you don't have to? I mean, it's not everyone, I mean, being very, very generalistic is very unfair, being very generalistic.  
 

[00:10:03] Sean Martin: Well, that's another point. Um, that everything is generalistic. What's good enough, right? If it's good enough to wipe everything with that one  
 

[00:10:13] Marco Ciappelli: For the majority, it may be good enough. 
 

And then there's still going to be the artist that is going to want to do that. But you also said before, and I'm kind of thinking about that, because I've always said, you know, money is the real evil in general. But you also pointed out that money is what is driving all this transformation. And maybe when it's a business, we are okay to settle for something. 
 

Less crafty, less artisanal, and you pay a premium for that, eventually. And for the mass production, we're just going to lower the standard.  
 

[00:10:51] Ian Hill: Well, and, and yeah, I think that's a very valid point there, because it is the nature: with all these technological developments, the primary driver is profit and money. 
 

So how that profit and money is achieved, even if it's, you know, uh, compromising, it's, it's about money.  
 

[00:11:17] Marco Ciappelli: Yeah, we've got to go there as a society.  
 

[00:11:20] Ian Hill: Yeah, yeah, and that will take us, so we will, we will be led by the money. And when you look at who owns OpenAI, um, you know, the Microsofts of the world, they have vested interests, they have shareholders to please, they have profit to make. 
 

So they will, you know, monetize as best they can all of this technology, in their interest to make money. It's, uh, it's an interesting world. 
 

[00:11:50] Sean Martin: As a provider of the services, and then obviously there's the users of the services to provide their own services that are enabled by them. Um, I'm wondering. I gave an example of, uh, music. 
 

You have your own example if you want to share it. You've used a, uh, a large language model to help with something. Um, but I'm interested in what the conversations with business leaders sound like. Because I think when we say money, um, there's revenue generation, there's cost, there's the profit. And I, I think it can touch all of those things, where mundane tasks, where we don't need a level of quality or artisanship, whatever, creativity. 
 

Uh, we can reduce costs, right? Yeah. Um, in other areas we might find ways to innovate. Yeah. And create new things that we weren't able to do without it, or at least at scale. And, um, then I think in the middle is where everybody's going to kind of battle it out. And if you're not using AI, you might lose out on things. 
 

Um, 'cause you can't, you're not using it to cut costs. You're not using it to, I think what's happening is we were  
 

[00:13:08] Ian Hill: talking about, um, I think AI is slowly raising the bar as far as the, um, levels of, um, skill and capability that it replaces. You know, we're here at an information security event, and obviously using AI tools from a defensive perspective to counter what will certainly be AI-based attacks. 
 

You know, we are seeing more and more tools being developed, you know, and this is part of a necessity, you know, I was saying this morning at one of my talks that, you know, As the threat actors get more involved with using AI tools to enact attacks, the human response is just not going to be fast enough. 
 

You know, the old school days of SOCs, where you got a bunch of analysts sitting in front of screens waiting for something to pop up so that they can react to it, by which time now it's far too late. You, you know, you need machine learning, lightning-fast machine learning, to predict and react to events before the human even knows about it. 
 

And as this evolves, the need for cyber analysts, I predict, is going to become less. You know, you won't get rid of cyber analysts altogether, no way. But you won't need so many cyber analysts sitting in SOCs, um, watching their SIEM system, and doing sort of the level one, possibly the level two type tasks. 
 

Because the machinery will be reacting and doing that all automatically as it evolves. And there is this argument that says where you potentially will end up is effectively a war of the machines. You're going to have threat AI systems battling with defensive AI systems, each vying to get one up on the other. 
 

And I think that's a reasonable argument. That will evolve there.  
 

[00:15:15] Marco Ciappelli: So, here's what I'm thinking. When I talked, on my show, Redefining Society, about the application of AI in medicine and healthcare. And, the point there is that when you do a narrow task, when it's AI focused on a specific task, like it could be in automation, in this case detecting ransomware, it does an amazing job. 
 

And it does it better than, I mean, it doesn't need to be creative. Right? You just need to have the data and based on that data, this is a diagnosis for your disease, or this is the potential cure. Then there could be the person, the doctor, that can dedicate himself to, or herself, to more complicated things. 
 

So that's great, because we're missing people there. Right? I see a parallel with what you're saying now for infosecurity. Like you, you can automate. And in that case, it's very, very useful. So I think we need to make a distinction between “AI is bad for the future” and “AI is good for the future.” It depends what you use it for. 
 

[00:16:21] Ian Hill: I think one point you brought up which I think is very relevant is we also need to distinguish between what is automation and what is AI. There's a lot of automation going on at the moment.  
 

[00:16:32] Speaker: Yeah.  
 

[00:16:33] Ian Hill: And One of the problems I have at the moment with a lot of the vendors, security vendors here, they're jumping on the AI bandwagon and saying that they've got some, you know, AI engine. 
 

When actually, when you dig down to it, it's really just automation. Yeah, yeah, yeah, yeah. It's not AI.  
 

[00:16:50] Sean Martin: Not even, not even large language model.  
 

[00:16:52] Ian Hill: Not even large language model.  
 

[00:16:53] Marco Ciappelli: It's not even A.  
 

[00:16:56] Ian Hill: No, because for AI to work, it needs the feedback. Yeah. You know, it, it needs constant feedback so it can learn and adapt. 
 

[00:17:03] Marco Ciappelli: Right?  
 

[00:17:04] Ian Hill: Automation really is just following a series of predefined tasks. Mm-Hmm. Based on, uh, input criteria.  
 

[00:17:11] Speaker: But if you put AI, you sell it.  
 

[00:17:13] Ian Hill: Yeah. Well,  
 

[00:17:14] Speaker: yeah. That,  
 

[00:17:15] Ian Hill: Yeah. If you, if you put the AI and you sell it. But no two ways about it, you know, what we're seeing in all aspects of medicine, you know, is a fantastic one. 
 

You know, I, I, I, I genuinely believe that AI systems, machine learning systems, could be a fantastic benefit to medicine and people's well-being. But then you have to be careful again, because it's that old profit thing, you know. There's an old saying, um, I see it crop up every so often, you know, uh, I think it goes something like this: 
 

Those that profit from you being ill are not trying to cure you. Right.  
 

[00:17:55] Speaker: Yeah. Perpetuating the problems so that you can keep selling things. Yeah. And I don't like to think that way, but I mean, I hope that we don't, we're not like that, but I know we are.  
 

[00:18:09] Ian Hill: So it's  
 

[00:18:09] Speaker: alright.  
 

[00:18:09] Ian Hill: And AI will just be fed into that same equation. 
 

Right. Um, but it's a, you know, it is going to make a huge difference to all aspects of our lives. Um, I live in a, I live in a rural community. Um, when you look at the automation now of, um, tractors and things like that, I've started seeing, um, they're now producing AI or machine learning, uh, capabilities in tractors and agricultural machinery whereby, you know, just drop the thing at the, at the, at the field and let it get on with it. 
 

Mm hmm. Um, you know, and, uh, again, you know, it's, it's You could, you could potentially envisage a second agricultural revolution. Where the AI takes over the farming and food production.  
 

[00:19:06] Sean Martin: Where I think, where I think this gets really interesting is, cause we talked about the industrial revolution. Where innovations in farming were dedicated to farming. 
 

And yes, maybe some, maybe some things started to reach out and affect some other industries. Um, but when we look at the innovations in farming now, especially with AI, it's, it's using technologies that are touching all other types of industries. So, we're talking about tractors that, in order for that tractor to be set off in the field and do its thing, it needs to know where it is. 
 

It needs GPS, satellite tracking, what's going on, where's the field, what crop it is. It has to be fed a bunch of data. Um, it's using analysis from, from weather reports and things like that for when it should start to take off. I guess my point is the, the spread of the technology is cross industry, cross sector. 
 

Each one leveraging innovations and knowledge and learning from the others. And so my, I love your perspective on this. So we're also talking about scaling and automating. Taking the human out of it, or enhancing the human to scale and do things better, faster, uh, larger.  
 

[00:20:32] Sean Martin: And so, sorry for the rambling, but to cyber security specifically, do you think we've been kind of focused on the 80/20 rule? 
 

We're tackling 80 percent of the problem with the technology we have now. And the threats are really focusing on Attacking us at the 20 percent where we haven't been successful in protecting ourselves. And is AI a way for us to kind of scale out our protections and our mitigations and our response and everything? 
 

[00:21:06] Ian Hill: I think, absolutely. I mean, if we look at the classic, um, attack vector being phishing, um, we know the AI, I mean, there's a lot of evidence that AI systems or LLMs are being used to generate phishing attacks. We know, particularly from very, very strong evidence published recently in a defence paper, that the Russian threat actors are using the likes of ChatGPT to create more convincing phishing emails in English, which is not their natural language, because a lot of the time, you know, because it's not their first language, there are certain nuances in these emails that don't feel right. 
 

So they're using these AI or LLM models to write much more convincing phishing emails. And at the same time, on the defensive side of things, you know, using large language model, AI, whatever you want to call it, to detect phishing emails, it's going to become more critical. As they get more convincing, you're going to need that. 
 

layer of defense, something that's reading them and doing cross-referencing and checking and things like that before it even gets to the human, because the human very much is the weak link. You know, we still, you know, the number of successful attacks as a result of somebody clicking on a phishing link or whatever is still way, way, way too high. 
 

But using machine learning capability to learn the nuances of phishing emails and understand, um, you know, particularly spear phishing and things like that. Understand the nuances of how they're worded and what they're asking for and what they're saying.  
 

[00:23:09] Sean Martin: Let me press you a little bit because that's an existing problem. 
 

Yes, we're still challenged by it. Um, and there are a lot of technologies that are incrementally getting better at protecting us against some of those phishing and ransomware attacks. But that's still an existing challenge that we face. And I guess I'm wondering, is there something big that we'll see, a real revolution in cyber security? I'm asking you to predict the future here. 
 

[00:23:45] Marco Ciappelli: You could use AI for that if you want.  
 

[00:23:48] Ian Hill: Well, you say that interestingly. There's some interesting work going into a concept called causal AI, which is future predicting. Given enough information and powerful enough capability, being able to predict, or predict, create a probability prediction on future attacks and events and things like that, but that's a whole different subject. 
 

Um, but, I think where the problems are going to be, when you look at it all, the problem that is evolving is going to be all around this whole, um, fake, deep fake, um, misinformation, malinformation. We're getting to a point now where we just don't know what's real and what's, what's fake and what's real. I'm almost semi-likening it to the Matrix trilogy. 
 

In that the internet, the world that we live in now, obviously, the virtual world, we're losing a sort of grip on reality, if you like, because it's becoming harder to distinguish between what's real and what isn't real. What's, you know, what's real. What's fact and what's fake? What's real? What's unreal? And I've really no idea where it's all going to go, but it's sort of leading towards some sort of weird, dystopian future whereby we become sort of subservient to it. Yeah, we could end up talking for hours on that subject. 
 

That's something I've sort of been thinking about. Um, but I think the big problem is going to be, um, the whole, can you trust the information? I mean, AI, AI, there's a lot of issues with, um, uh, hallucinations. There's a lot of issues with, you know, it's, it's, the information it gives is based on its learning. 
 

Where does it get its learning from? How was that learning influenced? If you remember, I think a few years ago when, um, Yandex had some problems where, um, it was learning from people's input and it became slightly, um, nationalistic in its views because it was just learning from people. 
 

[00:26:27] Sean Martin: And this troubles me, because you say hallucination and when I see them it's very obvious, right? 
 

Right. I'll ask it to, say, analyze this conversation or whatever, and it'll, it'll say that in this chat, Jim said, and I'm like, I don't know who Jim is, he's not part of this; this is Marco and Ian and Sean. Um, so that's, to me that's a hallucination, it's obvious. Where I get a little more worried is when it says you said something, and it's creating a sentence that isn't exactly what you said, or the context is different or missing. 
 

Um, so Or the intent is not understood. Um, so yes, the words were put together. It becomes more difficult for me to understand. But you see, you,  
 

[00:27:13] Marco Ciappelli: I'm gonna make a point here, is that you, you catch it right away because you're knowledgeable of the conversation that you had. Right? Right. But the people that use it to just gain knowledge, I mean, like if they were reading a book, right, or going to the library, or going to the dictionary and saying, what's the meaning of this word, and, and, uh, generative AI is just gonna tell you something that is completely off, but you have no clue, you think it is true. 
 

[00:27:41] Ian Hill: Well, this is very true and I'd like to, um, actually, you know, it does get it blatantly wrong. We had the conversation earlier on, uh, one of my hobbies is brewing. I brew beer. I've got the paid-for GPT. I asked GPT a very, very simple question, which was: how much does 5 millilitres of dextrose powder weigh? 
 

So, volume, weight, straightforward, 5 millilitres. And it came back with 62 kilograms. Rather than about 4 milligrams. It literally just came back 62.5, I remember it now, I've still got it on my phone. 62.5 kilograms. I asked it very simple, very plain. Uh, there was no ambiguity in the question. 5 milliliters of dextrose powder. 
 

How much does 5 milliliters of dextrose powder weigh in grams? It's about 62 kilograms.  
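For a sense of scale: using an assumed bulk density for dextrose powder of roughly 0.8 g/mL (a typical figure, not one quoted in the episode), 5 millilitres should weigh on the order of a few grams, which makes the 62.5-kilogram answer off by a factor of more than ten thousand.

```python
# Quick sanity check on the units. The 0.8 g/mL bulk density is an assumption
# (typical for dextrose powder), not a figure given in the conversation.
volume_ml = 5
density_g_per_ml = 0.8
expected_g = volume_ml * density_g_per_ml
print(expected_g, "g")                      # 4.0 g
print(62_500 / expected_g, "x too large")   # the 62.5 kg answer is ~15,600x off
```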
 

[00:28:35] Marco Ciappelli: And that's weird because that's actually, that's a fact. That's math. That's conversion that you expect a computer, not even an AI, like a computer will be able to do.  
 

[00:28:46] Sean Martin: What was it fed that gave it  
 

[00:28:48] Ian Hill: that? That's the point. 
 

[00:28:50] Marco Ciappelli: It's like when sometimes you find that, it wasn't that long ago that, I don't know, he came up with something completely wrong, and, oh yeah, um, he's telling people to eat pizza with stones, or making cake with glue, because it was something where some people were just making fun, but he didn't get the nuance that it was somebody being sarcastic and funny, and he took it as, if it's on the internet it must be true, and then he's regurgitating it to everybody.  
 

[00:29:22] Ian Hill: But isn't it dangerous, though, if you blindly believe everything that comes back from GPT-4? 
 

My little incident with the dextrose powder. If you were using that, say, to calculate, um, medicine or something like that, or a chemical that you're using for an experiment, things like that, and you blindly accept the, the answer.  
 

[00:29:44] Marco Ciappelli: Yeah,  
 

[00:29:44] Ian Hill: you could have all sorts of problems No bad beer  
 

[00:29:50] Marco Ciappelli: The answer there, and maybe that applies to cyber security, to go back to that, is that I think you need to create a knowledge base that is vetted for this kind of thing. 
 

You can't just ask it everything, it's not a generalist thing. No, I mean, I like the search engines, for example, that just consult academic papers that are verified, and that's the knowledge base that they get. Right? It doesn't get the knowledge base of somebody that just made up a joke, a joke article. So you just, again, uh, you give it proven content that is true. 
 

Otherwise you're going to put garbage in, and garbage is going to come out.  
 

[00:30:38] Ian Hill: You say that, but you know, yes, I mean, to a certain extent, there are the proven, you know, academic papers and things like that. But my concern is, you know, because AI is going to be such a big part of the future, who controls the AI controls the future. 
 

[00:30:58] Speaker: This sounds very George Orwell, right?  
 

[00:31:00] Ian Hill: Very. So, those that own the AI, you know, OpenAI and all their sponsors, um, and what influence could be exerted on AI, political or otherwise, to bias,  
 

[00:31:19] Marco Ciappelli: to  
 

[00:31:19] Ian Hill: bias, to bias its answers That are, you know, and this is, because this goes back to the whole philosophical, you know, question over there. 
 

People's interpretation of the truth very much depends on their own ideologies and biases and context and things like that. Same thing with AI, you know, AI, those AI systems, those machine learning systems will learn from the input they're given. If you manipulate. I wouldn't use the word poison. I mean, that's a, you know, we often talk about poisoning AI, but just subtle influences on the AI could have a potentially a butterfly effect through it.