ITSPmagazine Podcasts

Hello From the Dumpster Fire: Real Examples of Artificially Generated Malware, Disinformation and Scam Campaigns | A SecTor Cybersecurity Conference Toronto 2024 Conversation with Ashley Jess | On Location Coverage with Sean Martin and Marco Ciappelli

Episode Summary

In this episode, Sean Martin and Marco Ciappelli explore with Ashley Jess, Senior Threat Intelligence Analyst, how AI is being weaponized for disinformation and scam campaigns, and the pressing need for stronger regulations and detection methods. Ashley sheds light on real-world examples, including deepfake propaganda and AI-generated scams, underscoring the complexities and urgent challenges in protecting against these evolving threats.

Episode Notes

Guest: Ashley Jess, Senior Intelligence Analyst, Intel 471 [@Intel471Inc]

At SecTor | https://www.blackhat.com/sector/2024/briefings/schedule/speakers.html#ashley-jess-48633

____________________________

Hosts: 

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________


As part of their Chats on the Road for the On Location series during SecTor in Toronto, Sean Martin and Marco Ciappelli had an engaging conversation with Ashley Jess, a Senior Threat Intelligence Analyst from Intel 471.

The discussion centered on the intricacies of artificial intelligence (AI), its uses, and its abuses in the realm of cybersecurity. Ashley's upcoming presentation titled "Hello from the Dumpster Fire: Real Examples of Artificially Generated Malware, Disinformation, and Scam Campaigns" sets the stage for an in-depth exploration into the dark side of AI. Ashley gives a glimpse into how AI is being utilized for nefarious purposes, highlighting the connection between generative AI and disinformation campaigns. She explains how AI has been used to create politically motivated fake graffiti, deepfake videos with celebrities, and even entirely fabricated news websites.

She emphasizes that the barrier to entry for generating such content is lower than ever, making it easy for bad actors to create and spread false information swiftly. She mentions a particularly interesting case during the Olympics, when an entire propaganda movie starring a deepfake Tom Cruise was produced for political purposes. This example underscores the potential of AI to convincingly spread disinformation on a massive scale. She also points out how scam campaigns are increasingly leveraging AI, making them more believable and harder to detect.

One crucial topic Ashley touches on is the matter of responsibility in combating these threats. She discusses the need for more robust government regulations and the role of various technology vendors in detecting and preventing the misuse of AI. She highlights the importance of technologies like Web3 and blockchain for content provenance.

According to Ashley, integrating such measures into platforms used by everyday people can help mitigate the risks posed by AI-generated disinformation. Marco Ciappelli adds to this by reflecting on how easy it is to create misleading content and target vulnerable populations. He points out that ordinary citizens, who are not as vigilant or technologically savvy, are at greater risk. On this note, Sean Martin questions who should be responsible for protecting individuals and organizations from AI-based threats.

The discussion also touches on the ethical aspects of AI and its dual-use nature, where technological advancements can be both beneficial and harmful. Ashley emphasizes the need for a balanced approach that considers both the legitimate applications of AI technology and its potential for abuse. She is enthusiastic about her upcoming talk at SecTor, where she promises to delve further into these critical issues.

The session aims to provide a realistic, frontline view of how AI is being used maliciously and to encourage more proactive measures to combat these emerging threats. For those attending SecTor, her insights promise to be both enlightening and essential.

Be sure to follow our Coverage Journey and subscribe to our podcasts!

____________________________

This Episode’s Sponsors

HITRUST: https://itspm.ag/itsphitweb

____________________________

Follow our SecTor Cybersecurity Conference Toronto 2024 coverage: https://www.itspmagazine.com/sector-cybersecurity-conference-2024-cybersecurity-event-coverage-in-toronto-canada

On YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllSCvf6o-K0forAXxj2P190S

Be sure to share and subscribe!

____________________________

Resources

Hello From the Dumpster Fire: Real Examples of Artificially Generated Malware, Disinformation and Scam Campaigns (Session): https://www.blackhat.com/sector/2024/briefings/schedule/#hello-from-the-dumpster-fire-real-examples-of-artificially-generated-malware-disinformation-and-scam-campaigns-41161

Learn more about SecTor Cybersecurity Conference Toronto 2024: https://www.blackhat.com/sector/2024/index.html

____________________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage

Are you interested in sponsoring our event coverage with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Want to tell your Brand Story as part of our event coverage?

Learn More 👉 https://itspm.ag/evtcovbrf

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Episode Transcription

Hello From the Dumpster Fire: Real Examples of Artificially Generated Malware, Disinformation and Scam Campaigns | A SecTor Cybersecurity Conference Toronto 2024 Conversation with Ashley Jess | On Location Coverage with Sean Martin and Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Marco Ciappelli: [00:00:00] Arca, Sean,  
 

Sean Martin: Robert, Rodigan.  
 

Marco Ciappelli: Yeah, you are. You're going. I heard the news. You're going to  
 

Sean Martin: Toronto. So, I'm going to get, get to the East Coast first and then make my way up to Toronto.  
 

Marco Ciappelli: You're not driving from, uh,  
 

Sean Martin: I'm not going to drive from LA. I might, I might drive from Manhattan. We'll see. I don't know. It could be a fun, uh, fall color drive. 
 

Marco Ciappelli: Well, it's a lot more feasible, and it could be beautiful, actually. I'm sure it's beautiful. 
 

Sean Martin: The camera will be, um, uh, at the ready. 
 

Marco Ciappelli: It'll take you forever to get there.  
 

Sean Martin: I think it's about eight hours.  
 

Marco Ciappelli: No, but you'll stop.  
 

Sean Martin: Oh, true. It's eight hours driving, eight hours photography, probably.  
 

Marco Ciappelli: All worth it. All worth it. 
 

Sean Martin: It is all worth it. And the best part, fall colors are cool, but the best part is to meet amazing people in Toronto. And today we get to meet, virtually, an amazing person, Ashley Jess, who has a speaking spot there [00:01:00] in Toronto at SecTor, uh, led by Black Hat. Ashley, thanks so much for joining us.  
 

Ashley Jess: Yeah, thanks for having me. 
 

Sean Martin: This is gonna be fun. It's a topic that, uh, caught my attention as I was going through the agenda of, uh, SecTor, which, by the way, is in Toronto, if you haven't figured that out, October 21st to the 24th. And, uh, yeah, we'll be covering it, and we have a few chats we're lining up, and Ashley's, uh, our first one. 
 

So, uh, excited to have you on. And just to get the session out there as well: it's called Hello from the Dumpster Fire. We love that dumpster fire, don't we?  
 

Marco Ciappelli: I know  
 

Sean Martin: the title makes, makes a big difference.  
 

Ashley Jess: I mean, title makes a big difference. Yeah.  
 

Sean Martin: Uh, Real Examples of Artificially Generated Malware, Disinformation and Scam Campaigns. Uh, lots of fun stuff to talk about. Um, to start, though, maybe a few words about what you're up to, the research you do. 
 

How this [00:02:00] whole thing... I'm sure it's informed by all the fun stuff you get to see day in and day out. You get to interact with the dark web, you get to cruise it, and all that fun stuff.  
 

Ashley Jess: Yeah. So my organization, you know, we monitor the dark web, we do cyber threat intelligence, and, you know, me being a senior threat intel analyst with Intel 471, we... 
 

You know, it's a wide range of things that you can research. So I've got this unique little pocket that kind of landed on my particular desk, that is gen AI, but also disinformation, propaganda, elections. I had the Olympics while those were going on, which also, you know, these particular areas have a lot of overlap. 
 

Um, so I've done a lot of research into where those specific, um, kind of topics intersect with each other. So namely, for this presentation, how AI is being used in these campaigns. And then of course, there's also the financially motivated, you know, threat actor side of it all, with scam campaigns and malware lures. 
 

And are they using it for malware [00:03:00] development? And what are we seeing on the front lines? And, you know, spoiler alert, it's a, it's a hot mess. Hence the, uh, the title of the presentation.  
 

Sean Martin: I have to ask this question. I know, Marco, you want to jump in, but you mentioned the Olympics, and the one thing, and Marco and I play around with, 
 

him more, more so than me, I think, with, uh, multilingual, uh, Gen AI. And I think the, just the language thing. So I'm thinking about Paris, and obviously their language is French, um, scams would be in French. And so I'm wondering how, how things change, and I don't know if you have any nuggets from that. 
 

Interesting things that you saw during the Olympics that that were specifically AI oriented that caused concern or perhaps even some havoc.  
 

Ashley Jess: Yeah, so we saw, um, like, especially in the disinformation space, we saw some generative AI images. We saw some that had to do with, like, politically motivated graffiti [00:04:00] on walls. 
 

that didn't exist, that had to do with, like, the Israel-Palestine conflict, trying to perpetuate a narrative there. Uh, there was also an entirely, um, generated film called Olympics Has Fallen, after the film Olympus Has Fallen, you know, they're getting punny, I guess. Um, that was, you know, Russian propaganda. 
 

And that one had an entire narration, with a deepfake Tom Cruise as the main narrator there. Um, it had, you know, an entire, you know, the Netflix logo was actually pretty well produced, um, but used Gen AI to kind of have that sort of product. And that, that really showcases, even now, how Gen AI is getting better. 
 

Um, and you know, there's all these tools out there. There's one I'll talk about in the talk that can, you know, deepfake you with a single image. Now, you know, that threshold used to be that you needed at least 500 images to have even something remotely believable. And even then, you know, the mouth still looked funky. 
 

So as this technology improves, the ones that get even stronger [00:05:00] are still those people that have really high, um, amounts of source material online, which is high-profile individuals like celebrities, where there's a lot of samples of Tom Cruise's voice, face, all angles, photos. Um, so it shows: as it gets better to deepfake, you know, your average Joe Schmo, it gets 
 

10, 15, 20 times better to deepfake a celebrity. So  
 

Sean Martin: I'm waiting for Marco to get deepfaked. He's out there all over the place.  
 

Marco Ciappelli: I'm a deepfake right now, I don't know. I have a joke, I've said it a few times: it wasn't a deepfake, but I had my, my voice cloned in one of the first, uh, uh, opportunities I had, just to give a deepfake a 
 

try. You know, it's easy: they get the podcast, just dump a few episodes in there. I mean, of course, um, I'm not a target, or we all are targets, but, um, definitely, if they want to get my voice or Sean's voice, it's pretty easy. Um, [00:06:00]  
 

Sean Martin: I like to say I'm not a deep fake, I'm just real shallow.  
 

Marco Ciappelli: Yeah. But the funny story is that it was too perfect, because I have this Italian accent and the fake actually had perfect English, and my wife said, yeah, it sounds like you, but it's not you. 
 

Um, so, you know, kind of a joke, and I'm going somewhere with this: does it become so perfect that we can actually spot things? Or, I mean, have we passed that threshold? You know, maybe we could spot the imperfections before, and now it's too perfect and unreal at the same time. 
 

Ashley Jess: I do think it depends. I feel like the sphere of deepfakes is pretty split in its capabilities right now, with video deepfakes versus audio deepfakes. Um, audio is definitely the easier, cheaper, quicker one to do with fewer source materials, still; um, that will change over time. 
 

It might even change by the time SecTor actually comes around. Who knows? It's, it's so quickly [00:07:00] evolving; it's every day's news story. But, um, at least on the threat actor side, though, that perfection is actually what makes it pretty powerful: not necessarily when they're impersonating someone you know well, but when they're doing a cold call, pretending to be from, you know, a particular company. And, you know, some of those red flags you might have looked for before, that you get trained on, you know, accents, or hearing a lot of chatter because they're in this big call center. 
 

All that's gone. So, um, it is possible. Um, uh, they are now starting to add in those vocal imperfections as well, you know, those ums and hmms and, you know, a quick little, like, ha ha as they're talking, um, to try to put that human inflection back in. Because when you don't have those pauses, it does sound a little too perfect. 
 

So now they're adding it back in to be even more human. Because the whole problem, when we dive into this in the presentation, is, you know, there are legitimate [00:08:00] uses for all this technology, too. It's a two-sided coin. It's the same with any sort of technological development. There are legitimate reasons why we're moving this way, but as it gets better, the other side of the coin is that there are illicit uses of it as well, right? 
 

So, um, you know, as they make it more realistic, that's great for its legitimate purposes, but illicitly, it means it's still getting easier and easier. And  
 

Marco Ciappelli: I'm going to, I want to say something about this, because I finished reading, not too long ago, The Singularity Is Nearer by Ray Kurzweil. And one of the things is, like, the moment that you get there, AI is going to have to dumb itself down if we want it to pass the Turing test, because otherwise it will be 
 

too good and it will be detected as an artificial intelligence. So it's kind of like we're kind of there, I guess, because you see some videos now on social media where they, they show and say, well, I can't believe it: this person, it's, it's created by an artificial intelligence, as a video, and it is [00:09:00] flawless. And I am there: it's probably too flawless, it's more like too perfect. But again, I'll go the other way around. 
 

Ashley Jess: Yeah, and it's this interesting space right now, because it is constantly improving, but then you'll go and you'll beta test these things and there are obvious flaws. And there's the issue of hallucinations with AI right now, which is where it will just confidently tell you something incorrect. And, um, there was a computer scientist who just gave a TED talk, um, a couple of weeks ago, that was excellent, in which she discussed testing some models, and the best one she had, I think, still had a 17 percent failure rating. But it's so confident in telling you something that's incorrect, and it will make up sources and make up data, that unless you're going and manually checking, you're going to miss it, because those answers do look realistic. 
 

It is modeling the expected answer. Um, so it takes a certain level of critical thinking to, um, you know, make sure that it is actually factually correct. So it is this weird [00:10:00] dichotomy: it is getting used, and it's more impressive than it's ever been, but it's still not this infallible technology. 
 

But then the third kind of aspect is that, at the end of the day, cybercriminals are still using it no matter what. Um, so, you know, this presentation is hopefully diving into just a very realistic, objective view from the front lines of how all those three aspects play into each other and what it's looking like out there. 
 

Hence, you know, the dumpster fire, because it's more complicated than it might seem, so.  
 

Sean Martin: Yeah, and I, I think, for me, what makes it complicated is who, who owns, who's responsible for, protecting us: individuals, businesses, communities, societies, nations, whatever level. And I think if, if we're putting the onus on the individual to understand and spot and know how to react properly, or 
 

whatever, I think that's a very hard problem to scale a solution around. And so then, is it an organization? [00:11:00] Which organization? Do they use technology? And Dinis Cruz, I don't know if you know him, he, uh, presented at OWASP AppSec in Lisbon about this idea that you could use a prompt to generate responses and use, uh, different LLM models to validate those responses in multiple ways, and so basically using the technology, and the, and the 
 

AI, to validate itself. So you get a better result. But I don't know how we get that into the hands of whomever needs it. So maybe your thoughts on that, in terms of responsibility, and maybe connect that to your presentation: who are you speaking to, and what are some of the things you hope to share with them? 
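
To make the cross-validation idea Sean describes a bit more concrete, here is a minimal sketch in Python. The function names (ask_model_a, ask_model_b) and the prompts are hypothetical placeholders, not a real API and not Dinis Cruz's actual implementation; it only illustrates the shape of the approach: draft with one model, check with a second, revise on disagreement.

```python
# Sketch of cross-model validation: one LLM drafts an answer, a second,
# independent model is asked to fact-check it, and the draft is revised
# until the checker agrees. ask_model_a / ask_model_b are hypothetical
# placeholders, not a real API; wire in your own provider clients.

def ask_model_a(prompt: str) -> str:
    raise NotImplementedError("call your first LLM provider here")


def ask_model_b(prompt: str) -> str:
    raise NotImplementedError("call your second LLM provider here")


def validated_answer(question: str, max_rounds: int = 2) -> str:
    """Draft with model A, verify with model B, revise on disagreement."""
    draft = ask_model_a(question)
    for _ in range(max_rounds):
        verdict = ask_model_b(
            "You are a strict fact-checker. Reply PASS if the response is "
            "accurate and well supported, otherwise FAIL plus a reason.\n"
            f"Question: {question}\nResponse: {draft}"
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft
        # Feed the critique back to the drafting model and try again.
        draft = ask_model_a(
            f"Question: {question}\nYour previous answer: {draft}\n"
            f"A reviewer objected: {verdict}\nRevise the answer accordingly."
        )
    return draft  # best effort after max_rounds; flag for human review
```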
 

Ashley Jess: Yeah. So there's a lot of different aspects of responsibility. There's responsibility for, you know, the output, and making sure that it's accurate. There's the question of regulation: some people are more advocating on the side of government regulation and, you know, regulating this content getting shared online. Or is [00:12:00] it, you know, whoever shares the content, is it more their responsibility to have a system for content provenance in place, using things like Web3 to really track the original source of that data? 
 

Do we have the infrastructure to do that? Now there's vendor spaces out there, who are trying to be the ones claiming responsibility, to be able to detect deepfakes and be able to detect, you know, artificially generated content. And I've tested a few of them. I've literally taken a paragraph straight out of OpenAI's ChatGPT, copied it straight into one of those vendor tools, and it told me I was 100 percent human. 
 

If you're using that tool, and it's telling you that, then who does the responsibility lie with, as well? So, um, there's a couple different things, and I really do think government regulation of this space is the first spot that's pretty severely lacking, especially when it comes to certain uses that 
 

criminals are using this tool for. Namely, like, CSAM is a great, um, example there; artificially generated CSAM regulation is lacking there. So, um, you know, this is a presentation in Canada, so I am trying to keep it, um, I'm [00:13:00] discussing more of the vendor space and, and, you know, the content provenance space of it all, with a brief, you know, sort of touch on, um, regulation. But there's so many people internationally, I can't possibly touch on 
 

all of their possible government regulations, and I want to keep it as appealing to a wide audience as possible. So, um, yeah, I'll be discussing, um, a little bit more on, um, you know, content provenance, detection and prevention at, like, an individual level, um, and then, you know, where regulation should go. So 
 

Marco Ciappelli: And how about the fact that, besides the quality, I see the quantity being an issue, and how easy it is for everybody to jump in. 
 

And now you just need a picture to create something, you know. So, as you said, it's going to get better and better, but also more and more accessible. So while we can create firewalls for the big companies internally, the everyday people... you know, it used to be the one like grandma: somebody knocks on the door dressed like a cop. 
 

Oh, it must be a cop. [00:14:00] Um, social engineering like that. I feel like those are going to be even more and more targeted, because they don't have their guard up. What's your thought on that?  
 

Ashley Jess: Yeah. And it is, it is a lower, a lower barrier overall. It's a lower barrier to even use this type of technology. It's a lower barrier to use it for more technical capabilities. 
 

It's cheaper than ever before. You need less source material than ever before. Um, yeah, there's a wider use of it. And then there's also that it's more widespread in internal systems to begin with, and now there's actors exploiting, you know, developers who have legitimately used AI to code, you know, bits of their backend. 
 

They're now finding vulnerabilities in AI-generated code to then target, and then there's also specific targeting of AI systems. So, yeah, it's generally just overall a lower barrier, as I mentioned, you know, even with social engineering. You know, you need such little material, [00:15:00] you know, there's the scam where people call someone's grandmother and pretend their grandkid has been kidnapped. 
 

Like, it's very easy now to deepfake a grandkid's voice to make it even more believable. It's very easy to, um, you know, use a single image, or even a video, to make a false profile that's then used for, I don't know, like, pig butchering, you know, on some social media accounts. So same with 
 

Sean Martin: Maybe explain that for folks. 
 

Ashley Jess: Butchering, yeah. Pig butchering is a crypto investment scheme. Usually they'll reach out to you on social media or WhatsApp or via text message, and they might even not use your real name. They might not say, hey, Sean; they might just say, like, hey, John, I've got what you need. And then you reply and say, like, this isn't John, this is Sean. 
 

And then, um, they start speaking with you. Sometimes it's romantic in nature, sometimes it's not, but typically they do pose as, like, a woman if they're speaking to a male. Um, and then they'll eventually pivot to, um, cryptocurrency [00:16:00] discussions and get you to invest in cryptocurrency. Um, they might even start you on a legitimate website and then they'll pivot you to an illegitimate website, where they're showing you a very large, unrealistic return on your investment. 
 

Uh, the term comes from, or from the, sorry, the Chinese term. Um, it means, like, pig killing, but it's because they are fattening up their victim before they slaughter them by taking their money. It's really a terrible system that has, yeah, has a lot of links into human trafficking and, um, 
 

exploited labor on the other end. You know, these scammers are often held in really atrocious camps in places like Malaysia and Cambodia. So, um, very complicated scheme, but it's easy for them to make appealing false profiles where they're not even necessarily using an image of somebody who exists anymore. 
 

So,  
 

Marco Ciappelli: Yeah. So now, I wanna, I wanna go back to the talk, because reading on the website the presentation of your, [00:17:00] of your talk, the final sentence is that, for you, content provenance, um, and the methods for detecting artificially generated content are kind of key, no? And, and I'm, I'm looking at this thinking, are we trying to tell the people that are using the content to do their homework? 
 

Or is it something where you are thinking the big social media, uh, the Gmail, the Apple, the people that are at the top as the providers of everyday people's email and social media, should be the ones to create the filters?  
 

Ashley Jess: Yeah, the latter in this case is really the ideal sort of situation. It gives users a way to confirm, you know, that the provenance of that published content 
 

is really there, using information that can't be removed from that content. So, um, Adobe has done some research in this [00:18:00] area. They've done the Content Authenticity Initiative. And then there's also the Coalition for Content Provenance and Authenticity, or C2PA. Um, they have a standard that marks content with provenance information and uses cryptographic algorithms 
 

to insert hashes at, like, set intervals during a video, that would change if that video is altered. And then, you know, obviously that needs to be integrated in a way where it's streamlined in the user experience, where they don't have to go out of their way to, you know, check it. It should just be a very easy way to verify that, you know, the video that was sent to you was sent by this person. 
 

It wasn't altered anywhere in between; or, if someone downloads it and reposts it on some X account with a very similar name, that's not verified or has the paid blue check, you know, system now, that you'll be able to tell the difference on the user end. But there's a long way to go, um, in that. You know, there's a lot of different research going on. 
 

MIT's got some research in the space. There's a lot of space in, like, um, you know, Web3 and using the blockchain to do that sort of thing. So, um, lots of [00:19:00] interesting research happening in the content provenance space, but it is something that is going to become, or needs to become, front of mind for a lot of people consuming media. 
 

Um, and it needs to be easy because the cognitive load of trying to verify every single image that you would see as you're scrolling Twitter or Instagram or Facebook, it's impossible. Like, it's, it's impossible. It's not going to  
 

Marco Ciappelli: happen.  
 

Ashley Jess: Yeah. Um, so it needs to be user facing, so it needs to be integrated into the back end by these providers. 
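
To make the interval-hashing idea Ashley describes concrete, here is a toy sketch in Python. It is an illustration under stated assumptions, not the actual C2PA specification or its manifest format: it hashes a media file in fixed-size chunks and chains each digest to the previous one, so altering any part of the file invalidates every digest from that point forward. In a real provenance system, the publisher would also cryptographically sign the result and carry it in the content's metadata.

```python
# Toy illustration of provenance-style interval hashing (NOT the real
# C2PA format): hash a media file in fixed-size chunks, chaining each
# digest to the previous one, so any edit breaks the rest of the chain.
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MiB "intervals"; invented for this example


def hash_chain(path: str) -> list[str]:
    """Return chained per-chunk SHA-256 digests for a file."""
    digests: list[str] = []
    prev = b""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(prev + chunk).hexdigest()
            digests.append(digest)
            prev = digest.encode()
    return digests


def verify(path: str, published: list[str]) -> bool:
    """True if the file still matches the digests the publisher recorded."""
    return hash_chain(path) == published

# Usage sketch: a publisher would sign hash_chain("clip.mp4") and ship it
# in the file's metadata; a platform recomputes and compares on upload.
```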
 

Sean Martin: And that makes me think about, Marco, we've had conversations about, uh, the bad bot report, where, what is it, more than half of the internet traffic is nonhuman, right? So, some machine-driven, bot-enabled traffic. So we've crossed the point where machines have taken over the network, and now I can see a world, if we're not there already, 
 

I don't know if anybody's tracking this, where the content itself is machine-driven.  
 

Ashley Jess: There's a lot of research in the concept of pink slime [00:20:00] journalism, which is, you know, these news agencies that are posing as, you know, your local newspaper, but that actually are funded, typically, by political entities. But I mean, there's foreign pink slime. 
 

There's a couple different versions of pink slime, um, that are usually using gen AI to generate filler content to kind of establish the newspapers as having a bunch of articles, and then, in fact, are pushing articles that usually have some sort of, um, ulterior motive. Whether that's, you know, on the political side, getting you to vote one way or the other, or, on the foreign side, interference one way or the other, um, they're using gen AI to supplement the rest of their website. 
 

And it's been estimated now, um, there's a couple of great organizations that track, um, some of these websites, and they have now tracked a number of pink slime websites that outnumbers the local newspapers still available in the United States, using the US as an example, meaning that the false websites now outnumber the legitimate ones. 
 

Sean Martin: Fascinating. [00:21:00]  
 

I hope you, I hope you get to investigate and explore and experience other things besides this, to kind of lift you up and out of the dumpster.  
 

Ashley Jess: There's a lot of other great talks in the AI track that have, you know, proposed solutions. So I'm excited to go sit in on all of those for sure.  
 

Sean Martin: Absolutely. Well, it's been a, an absolute treat. 
 

Chatting with you, Ashley. And of course, I encourage everybody, hopefully I'll see you there in Toronto, October 21st through the 24th, at SecTor. It's a Black Hat Informa event, and Ashley's session, Hello from the Dumpster Fire: Real Examples of Artificially Generated Malware, Disinformation and Scam Campaigns, is on... oh, is there a date and time? 
 

We don't know.  
 

Ashley Jess: I don't think they've given us a schedule yet, but I'll be on one of those days, so.  
 

Sean Martin: You're there, one of those three days. Everybody should go for the three days and be sure to connect with Ashley. And you mentioned a few resources, so maybe, uh, maybe you'd be kind enough to share those, so we can, um, [00:22:00] share some of those things with folks listening and watching this episode. 
 

Ashley Jess: Yeah, of course.  
 

Sean Martin: And, uh, yeah, good stuff. I appreciate you taking the time to share. Congratulations on, uh, getting a spot to speak there. I know it's a, a fun, a fun deal submitting, uh, presentations, and look at that. Oh, I hope everybody gets to, uh, to meet you and enjoy your session, have a good chat with you afterwards. And everybody listening, watching, please do stay tuned for more coverage on the road and on location. 
 

Subscribe. Subscribe. Is it up or down or sideways? I don't know. It's right here. I'm gonna click the nose  
 

Marco Ciappelli: Somewhere at the end you're gonna see it. I'm watching the video.  
 

Sean Martin: Very good. All right. Thanks everybody. Thank you. Ashley.  
 

Ashley Jess: Yeah. Thank you.