ITSPmagazine Podcast Network

The Misinformation Crisis: Navigating Technology and Truth in Modern Society | A Conversation with Joy Scott and Andrew Edwards | Redefining Society with Marco Ciappelli

Episode Summary

Join me, Marco Ciappelli, as I explore the complex relationship between technology, society, and the escalating crisis of misinformation, featuring insights from Joy Scott and Andrew Edwards.

Episode Notes

Guests: ✨ 

Joy Scott, President, Scott Public Relations [@Scott_PR]

On LinkedIn | https://www.linkedin.com/in/scottpublicrelations/

Andrew Edwards, Founder and CEO, Verity7

On LinkedIn | https://www.linkedin.com/in/andrewvedwards/

On Twitter | https://x.com/AndrewVEdwards

On Instagram | https://www.instagram.com/andrewvedwards1/

____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

Bugcrowd 👉 https://itspm.ag/itspbgcweb

_____________________________

Episode Introduction

Welcome to another episode of the Redefining Society podcast on ITSPmagazine. I’m your host, Marco Ciappelli. Today, we’re tackling a critical issue that affects all of us: the intricate relationship between technology, society, and the burgeoning crisis of misinformation.

As we approach the upcoming elections in the United States and reflect on recent political events in Europe, it’s clear that the lines between digital and real life have become increasingly blurred. To explore this topic, I’m joined by two esteemed guests: Joy Scott, the head of Scott Public Relations, and Andrew Edwards, an entrepreneur and author of ‘Army of Liars.’

We’ll start by reflecting on how modern technology, particularly social media, has drastically altered the landscape of information consumption and trust. Joy Scott will introduce her professional focus on disinformation, sharing insights from her initiatives, including a specialized newsletter and her book, ‘Magenta Nation.’ Andrew Edwards will bring his expertise on technology, audience tracking, and the significance of Section 230 of the 1996 Communications Decency Act, highlighting the unique responsibilities, or lack thereof, faced by social media platforms compared to traditional publishers.

Our guests will recount anecdotes illustrating the stark differences in media consumption from past decades to today, shedding light on how fact-checking has evolved in the age of algorithms. We’ll discuss the unchecked spread of propaganda by malicious actors, including foreign entities, and explore both technological solutions and the need for stricter legislative measures.

We’ll also examine the role of AI in misinformation, acknowledging its double-edged potential: while it can generate impressive, creative content, it can also facilitate the rapid dissemination of falsehoods. Joy and Andrew will underline the importance of media literacy and of fact-checking tools and resources like Fakchex, Ad Fontes, and NewsGuard.

As we conclude, I’ll emphasize the necessity of a multi-faceted approach to mitigate misinformation’s impact, advocating for both user education and systemic change. This episode serves as a call to action for more conscientious consumption of information and urges society and policymakers to address the legal and technological gaps that allow disinformation to thrive.

Tune in and join the conversation as we redefine society together.

_____________________________

Resources

Fakchex (Substack newsletter): https://magenta-nation.com/fakchex/

The Consumer's Guide to Spotting Fake News: https://magenta-nation.com/a-consumers-guide-to-disinformation-ebook/

____________________________

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Episode Transcription

The Misinformation Crisis: Navigating Technology and Truth in Modern Society | A Conversation with Joy Scott and Andrew Edwards | Redefining Society with Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Marco Ciappelli: Well, hello, everybody. Welcome back to another episode of Redefining Society podcast here on ITSPmagazine. This is Marco Ciappelli, and as you know, I like to talk about society, and I like to talk about technology, and you cannot distinguish the two nowadays. As I like to say, the line between online life, or digital life, and real life is just something that we made up, because it's still life, and that's how we get our information. 
 

We talked a little bit before we started here about reading a newspaper old school on the tube in New York, or any other place in the world, and how it may seem so weird to someone, when in reality that's just life as it is, the same as reading it on your phone. I hope you guys are going to enjoy this conversation. 
 

It's going to be quite interesting, especially because we're getting close to election time here in the United States. There have been a lot of elections and political turmoil, if I can say that, in Europe in the past few days, and we're going to talk about all of that. Mostly about misinformation and how that plays into this, with a little sprinkle of AI on top, because, you know, we have to talk about AI. Yeah. 
 

So here we are. We have Andrew and Joy joining me today, and I would like for them to introduce themselves. Let's start with Joy. Tell us a little bit about yourself, and why are you so passionate about this topic? 
 

[00:01:43] Joy Scott: Sure. My name is Joy Scott. I am head of Scott Public Relations, which is a communications firm. 
 

And as a communicator, I found a professional interest in disinformation. And as I looked at what was going on, particularly in the U.S., it seemed that disinformation had a major impact on our political and civil unrest. So to that end, I developed a newsletter to help people cut through the obfuscation of disinformation, wrote a book called Magenta Nation, and just recently produced The Consumer's Guide to Spotting Fake News. 
 

[00:02:32] Marco Ciappelli: All right. Good enough. Andrew, how about you? 

[00:02:39] Andrew Edwards: Uh, hi. Well, first, thank you, Marco, for having me on. I'm really thrilled to be here. 
 

And you can see right over there, there's a picture of a book called Army of Liars. That is the book that I have coming out in September. It's my second book, and it is about disinformation in social media and about Section 230. My background runs to technology, entrepreneurship, and audience tracking. 
 

I was one of the founders of the Digital Analytics Association, which is the organization that made a home for people who were watching the behaviors of people who came to corporate websites. This was somewhat before social media. And so I have a background in internet technologies and audience tracking. 
 

And so this has given me some insight into how audience tracking and social media operate. 
 

[00:03:53] Marco Ciappelli: Yep. Very good. Very good. It's going to be a good conversation. And I want to start with, actually, let's start with what you were telling me about reading a newspaper on the train the other day, because I think that's important. It takes me back to my past as a student of sociology of communication and political science, and at the time, well, the Internet was pretty new. 
 

I am getting pretty old, obviously. On the other hand, we were talking about, you know, the history of mass communication, mass media, and all of that, and how, in order to spot the truth between, let's say, a newspaper from the left or a newspaper from the right, you had to kind of read three newspapers, maybe one from each side, one in the middle, and then make up your mind 
 

and decide what sounds like common sense and the truth. For me, it's not that complicated, but apparently we have come a long way from doing that, given that most people wouldn't have done it anyway, maybe they would read only one newspaper, to now, where we have all these bubbles of information, we're letting people clearly lie 100 percent, and nobody's fact-checking anything. 
 

So, Andrew, tell me about this thing you mentioned about the newspaper, maybe, and how. 
 

[00:05:25] Andrew Edwards: Yeah, so, just very briefly, the little anecdote that I was telling you was that I was recently in New York City, I was on the subway, and I happened to buy a now rather difficult-to-find copy of that day's issue of the New York Times. 
 

Which I now call a conservative newspaper, by the way, but that's a whole other topic. So I was reading it, much the way everyone used to read a paper on the subway, and some random guy just said to me: you are the last person in the world to be reading the newspaper on the subway. And I looked around me and I noticed, that is true. 
 

And so what I think has happened, the reason why this is impactful, is that we have gone from a place where we, as a society, outsourced fact-checking to journalists. This was an assumption we made about journalists at real newspapers like the New York Times and its many now-departed cousins across the country, for instance, 

the Columbus Dispatch, which was the first newspaper to have an electronic version. It had over 250,000 circulation when it did that; it now has 20,000. So that just gives you an idea of the decimation of newspaper publishing in the United States. But what we did was assume that these newspapers had truth standards. We didn't expect them to be perfect, or to have no point of view or any bias, but we would by and large assume that if it was in the newspaper, it wasn't a total fabulism. 
 

That is, it wasn't just something they completely made up with zero evidence whatsoever. And so that became the standard for what you read in public forums. Well, along comes, you know, social media, curating your feed so that you only see what they think you should see, based on what you told them and what you looked at, and you're unaware of what they're doing. So they're curating your feed, and some of it is totally not fact-checked, and yet it's not flagged that way. It's just put there as if it's just another article, and people, over several generations, I think, have been trained to believe that what you read in a public forum is more or less true. 
 

And so, you know, the cynical manipulators who look to disseminate disinformation are taking big advantage of that. And they're being very successful at it. 
 

[00:08:17] Joy Scott: That's very true, Andrew. And we also have seen groups and individuals who have taken advantage of this. We saw actors such as Russian disinformation experts, who are extremely effective propagandists, actively feeding misinformation and knowing whom to target and who would spread it. 
 

We've also seen actors who are taking advantage of the rant factor, where you can make a lot of money broadcasting misinformation that gets people upset and yet may agree with their values. We've seen wealthy individuals contributing to what looks like legitimate media but is actually a biased source of disinformation. 
 

And I read recently that there are over 1,200 fake news "newspapers," if you would. That is, sites that are masquerading as newspapers and look like newspapers, but they aren't; they are totally propaganda engines. So it's a matter of taking advantage of what people assumed were true or fact-checked sources, and an active industry has grown up around it. 
 

You can even hire an agency now to promote disinformation, just as you used to hire a public relations firm to promote your company or your position. So now you can hire experts to flood the zone, if you would, with disinformation. 
 

[00:10:10] Andrew Edwards: And much of it is bot-driven, I believe. Yeah, there was just a story 

that came out, I think it was yesterday, where the FBI discovered over a thousand Russian bot accounts on Twitter, pretending to be from places like Minneapolis and Wisconsin, you know, places in the U.S. But what they really are are Russian propaganda machines. They are masquerading as local people, and a lot of them have been taken down off of the platform. 
 

[00:10:47] Marco Ciappelli: Russian, I mean. A large portion of the traffic on the internet is bots. There are the good bots and the bad bots. There is a company that I work with sometimes, Imperva. It's not advertising, they are a sponsor of ours anyway, but they look at that. They do a report every year on the good and bad bots. 
 

And, you know, they're the ones that scrape the prices on tickets. That's why you pay so much for Taylor Swift tickets. That's why you don't find a flight to go somewhere over the 4th of July; there is an algorithm now that controls all of that. And yeah, you're correct. I mean, it goes on and on and on in terms of what they can do. 
 

But where I want to go with this, because it's about technology and society: a lot of people point the finger at technology and say, yep, it's all screwed up because of technology. Now it's going to be all screwed up because of AI. Truth is, it's a tool. So does this come with the technology, or is it something else? 
 

There are the bad actors that take advantage of it. And maybe we haven't done much about it because even the Internet was insecure when we started, because it was a conversation between three universities, and now we get hacked all over the place. So what's your take, both of you, on this? Joy, let's start with you this time. 
 

Sure.  
 

[00:12:24] Joy Scott: The technology had capabilities that simply weren't anticipated. For example, in the United States, we have such a firm allegiance to free speech that the attitude was that any efforts to curb it were unconstitutional and should not be allowed. However, we do have regulation of traditional media, where they are held accountable for the accuracy of what they're reporting. 
 

But social media was originally, back in the nineties, seen as a bulletin board, and you can't be held accountable for posting something on a bulletin board. Those days are long gone, and we find most people getting their information online and via social media today. In Europe and other countries, there are different standards, and there seems to be 

a higher level of understanding of the dangers of lies, propaganda, and hate speech, and the media engines are more strictly policed. In fact, many companies come to the U.S. because we have lower standards for liability and veracity, and they have a much greater freedom to broadcast disinformation. 
 

So I think it's a huge social problem in America that we need to face up to. We need stricter laws. We need to look at not only people's rights, but also the damage that is caused when those rights are exercised and people are harmed. And we need balance in our legislation and in our society to account for that and to really protect us. 
 

[00:14:34] Andrew Edwards: Yeah, I agree. I would say that the specific law you're referencing, Joy, if I'm not mistaken, is the thing called Section 230 of the 1996 Communications Decency Act. For those who don't know, and I'll be very brief, it was created in order to protect what were then things like a CompuServe bulletin board, which was not being monetized. 
 

There were no recommendation engines; none of the sophisticated things that are part of the landscape now existed then. It was really very simple, a very flat bulletin board. You know: do you want to buy a bicycle? Call me. And that was it. So these protections made sense, because the dichotomy that was being addressed then, and now, actually, is that if you're a bulletin board, you're essentially a carrier, kind of like a phone company, right? 
 

Where you're not at all responsible for anything anybody says or does, because you have no control over it. The phone company can't shut your phone call down if they don't like it. And yes, you could be plotting murder on the phone; the phone company will never be liable for that, because they have no control over what is being said or done on the phone. 
 

Okay, that makes sense. Whereas publishers, and let's leave social media out of here for a minute, publishers are those who can decide what is going to be promoted by their vertically integrated platform of content distribution. In the real world, in publishing, in mainstream media, you know, cable and television, there is a responsibility 

to not lie, to not tell damaging lies about other people or companies. Perfect example: Fox News, not a digital platform, lied about Dominion Voting Systems and had to pay out 750 million. If they had been Meta or X, they wouldn't have had to pay a nickel. So what's happened is that the social media companies are engaging in a sleight of hand, in my personal opinion. 
 

They're trying to say: we're not publishers, we're platforms. And they do this by claiming what I call a legal quantum superposition, which means, I don't know if you know about Schrödinger's cat, it's dead and alive at the same time. 

It's kind of a Newtonian impossibility, right? And in law, I feel like this is also frankly an impossibility, because they're claiming that at once they're blind, they can't see, 
 

can't do anything about what anybody does on their platform, and at the same time not blind: they can take it down if they want to. So they have these two rights, which puts them in a unique position, which is, frankly, above the law. They can allow beheading videos on their sites. They can allow ISIS to recruit people to murder people, and they have. 
 

And they have control over it. They could take it down if they want to. Now they'll say, oh, it's really hard, and we can't find most of it. To which I say: you have a bunch of computers there, I heard, and your payment systems are awfully complicated; I bet you could figure this out. But that's kind of beside the point. 
 

The important thing is that they're claiming to be something that's actually impossible, except, you know, in the fantasy land where they live. If we were to be fair about it, they'd either be a carrier, like the phone company, in which case they wouldn't be able to censor anything, or they're a publisher, where they can have anything they want, but it can't be libelous. So what I would say is not that we need a bunch of new laws or regulations, but that we need to get rid of just one, which would mean that they then could get sued, which is really what keeps the other publishers in line. 
 

It's not that the government's going to come after them. It's that someone who's been harmed by their speech will come after them. 

[00:19:11] Marco Ciappelli: A lot to unpack right there. And I like the reference to quantum superposition, because in a way, if we want to break it down, they want to have their cake and eat it too. 
 

Exactly. Right. Exactly.  
 

[00:19:32] Andrew Edwards: It's a sleight of hand. They know they're media companies, and, you know, I know that if we got rid of Section 230, they'd still be massive media companies. They just wouldn't be irresponsible massive media companies. 
 

[00:19:48] Marco Ciappelli: Right. They found a loophole. They took advantage of it. 
 

Huge. But the technology to control this probably exists; what's missing is the legal framework and the enforcement. So something needs to change. A lot of people say, well, it's not going to change, and we need to educate the users. And I think that's important. I'm a big fan of education. 
 

I think that a lot of problems in our society come from ignorance, to be honest with you. 
 

Oh, yeah.  
 

But at the same time, we need to accept the fact that it's a shortcut for all these other people to not take responsibility and say, well, it's not my problem. You know, you have to say: don't put the candle near the drapery, otherwise it will catch on fire. 
 

Yeah.  
 

So why not make an advisory, a note, or a disclaimer saying this may be bullshit and maybe you shouldn't put it into practice? So, Joy, is there a way out of this, in your opinion? I mean, I think you mentioned legislation would probably do something, and maybe Europe is kind of leading the way on this. 
 

Yes.  
 

[00:21:17] Joy Scott: Holding these companies accountable for the content of what they are publishing, I think, is the first step, and things are not going to change until that happens. When we talk about educating people to be able to tell what's real and what isn't, that's very important, but in a sense we are also trying to lock the barn after the horse has taken off and is over in the next state, if you would. 
 

There has been so much disinformation, and so many millions of people are completely invested in it now. This is going to be hard, but we do need to see a lot more examples of how to push back against disinformation. How to raise the question: is there any evidence that supports this? To educate people about things like whataboutism. The most famous example may be the welfare queen of the 1980s, which convinced so many people that there are a lot of people who are living off welfare and doing very, very well. 
 

When this was actually just one example. It's the fallacious thinking of: well, if this has happened, if X is true, then Y and Z must be true. And that's not the case. So it's very difficult to identify these nuances when you live in a bubble, as we all tend to do, and everybody talks and thinks the same and we read the same things. So it's going to be an uphill battle. But the best place to start doing that, I think, is in politics, which we're all watching right now. At least in the U.S., we're still having our election activities going on, and we'll see how some experts can start pushing back and stop this flow from going on unchecked. 
 

[00:23:30] Marco Ciappelli: Well, Andrew, let me add one thing to this. I do a lot of cybersecurity conferences, not from the cybersecurity practitioner side; that's my partner, Sean. He's the cybersecurity expert. I look at the social engineering. I look at things that have an effect on society. And where I'm going with this is that I feel like there is always somebody, obviously, that takes advantage of the psychology of humans, and it can be for political reasons, for stealing your money, for running romance scams, or whatever it is. The point, like what Joy was saying, is that at a certain point you need to say: freedom is cool, but now everybody's going to wear the helmet. 
 

Freedom is cool, but there are traffic lights with red, green, and orange, and if you run one you're going to get a ticket, and then we're going to take your car; you can't drink and drive. So freedom, yes, but there are certain things that you can't do, because they damage other people and infringe on other people's freedom, and that's not good. 
 

Again, not that complicated. But you have a book behind you that you wrote, Army of Liars, and who is going to stop this army is a big question. For me, it's more about: is there the will to do that in this country? 
 

[00:25:01] Andrew Edwards: Well, I certainly hope there is. I'm always looking to expand the constituency of people who are fighting back against disinformation. 
 

Because disinformation is a tool of war. And I do feel like, if you put all these things together, the massive amount of disinformation that's being disseminated, plus things like Project 2025, and the comments coming out of the right wing now about how this will be bloodless if the left wing lets it be, 
 

and things like this: I feel like we're entering a tumultuous time where almost anything could happen. And so, right now, I sure hope there's a will to do this. It's going to take the efforts of all good people to try and keep us from falling off the cliff. 
 

I feel like education is a large part of it. I think, you know, media illiteracy is a huge problem. You know, I was watching a ball game the other day, and it was one of those stadiums that has a water feature outside. It wasn't San Francisco; I think it was Pittsburgh, actually. 
 

And I saw there were people on, like, floaties, floating on rubber mats in the water. And they were just staring at their cell phones. 
 

And I was like, okay, we've now entered a dimension where you don't even enjoy hanging out on the river anymore. Now that's not good enough; it's just a place for you to park while you stare at your cell phone. So clearly the cell phone has become the center of life, and therefore we need to pay more attention to what's coming out of it. 
 

And it can't just be whatever the corporate giants feel like putting out there, because it's too dangerous. It's kind of like, you know, when I first got started in the tech industry, we used to say, oh, it's like 1905, when the first cars were being put on the road. Okay, well, now it's 20 years later, now it's 1925, and there's millions of cars. And I don't know if you guys know this, but in the 1920s there were millions and millions of cars and no traffic rules. There were no one-way streets. 

There were no traffic lights. There were no cops to direct traffic. It would just be a big free-for-all. 
 

[00:27:51] Marco Ciappelli: And the car killed it? 

[00:27:57] Andrew Edwards: Oh, yeah. Well, the story is that there were two cars in the state of Ohio and they collided. 
 

[00:28:05] Marco Ciappelli: I thought it was a pedestrian. I mean, anyway, there were a few factors that made this happen. 
 

[00:28:11] Andrew Edwards: Yeah, there were almost no cars in existence, and then somebody got run over. So it started quickly. And cars are a wonderful thing, and they're also dangerous. And I think we're all very much happier today that we have safety glass and crumple zones, seat belts, anti-lock brakes. And we have traffic regulations, and I don't see anybody complaining about these things. 
 

And we are simply entering a phase where the technology has become, you know, defining for society, to the point where it just needs to be looked at like all the other technologies that have been brought under regulation. They actually became better and more useful. And I think that's kind of where we have to look. 
 

[00:29:02] Marco Ciappelli: Well, you gave me an idea for it. I write a newsletter, together with my AI, here and there, and I haven't even read it. When you told about the cell phones, the people looking at their cell phones while they were out on the water: I like to go to concerts and watch rock music and other kinds of music. 
 

And one day, at a Guns N' Roses concert, I realized I had watched one third of the concert through my freaking phone, taking videos and taking pictures. From the front row, instead of seeing, you know, Slash shredding guitar live, because I am here, I'm watching it through the lens of my phone so I can share whatever I'm watching with other people. 
 

So you gave me this idea: we're watching our life through the lens and the filter of technology, which is the phone. Very philosophical thoughts that I want to develop as a piece. But based also on this, let's touch on AI, because the filter is social media, the cell phone, where social media lives and where all the, you know, generative AI is happening nowadays: 
 

ChatGPT, and Apple is going to put it in as well very soon. It seems to me that every technology, you bring the example of the car, but every technology is a little bit or a lot more powerful than the one before. Cars, hammers: a hammer is technology, and a hammer is dangerous if I hit you in the head, but you can't hit 600 people in the head at the same time, or 6 million people. 
 

With disinformation, I'm hitting people in the head in very large numbers, and with AI, really fast too, and really accurately. Yes. 
 

Yes.  
 

Let's take a couple of minutes, before we wrap this very interesting conversation, for your thoughts on how AI is going to help or... 
 

[00:31:17] Andrew Edwards: It's a double-edged sword. 
 

[00:31:18] Marco Ciappelli: Yeah.  
 

[00:31:19] Andrew Edwards: So, my experience with it is twofold. 
 

I have a good story and a bad story. The bad story is being stuck at an overseas airport, trying to get an idea of when my flight would actually leave and trying to get a human being on the phone, but discovering AI-empowered bots that cannot actually answer any questions. 
 

They just keep cycling through standard questions, and they don't tell you anything. So I feel like that part of AI, where it's supposed to do a job, I liken it to the dot-com bust 25 years ago. It's not there yet. There's probably going to be a bust. It's not making any money, and they can't figure out how to really make it work. 
 

It's good enough to be a problem, and that's where I think it is today. It needs a lot more work before it gets to where it's going. On the other hand, I would say that I have been trying to create memes for my articles, and I discovered this tool, and I cannot even believe that this is going on, with generative AI. 
 

I think generative AI is truly scary and wonderful at the same time. And I'm very conscious of, you know, people's content being stolen, which is a whole other issue. But there's this tool out there where you type in what you want, and in real time, as you type in each word, it gives you this fully rendered picture that looks like somebody spent a thousand hours on it. 
 

I mean, these are extremely detailed, dimensional, beautiful pictures, and it just changes second by second as you type. You know, if you say, lizard monster fights King Kong, well, within half a second, that's going to be there. 
 

[00:33:30] Marco Ciappelli: Okay. And so you need to tell me what that is, because I've never seen it. And I want to see it. 

[00:33:33] Andrew Edwards: It's called imagined dot AI. And it is extraordinary. So I think that for the creative arts, generative AI is going to be very game-changing, especially in movies and television and stuff like that. But as far as its usefulness in society as an arbiter of truth, or a purveyor of helpful information? 
 

Long way to go. Long way to go.  
 

[00:34:07] Joy Scott: And this also brings in the media literacy question, because it's going to be even harder to tell what is genuine and true from what is created. So, learning to look at things: if it's outrageous, look into it, do your own fact-checking, find other legitimate sources, know the media and which are factual and which are partisan. 
 

There's a chart called Ad Fontes, A-D F-O-N-T-E-S, that is an objective rating of media resources. And if you find one that's using AI incorrectly and/or that is broadcasting disinformation, stop listening to it. Stop reading it. But AI right now has not demonstrated that it is accurate. There are a lot of people who've used it to try to write and found that it came up with 

information that's just totally not viable. Articles were cited that don't exist. So it's very dangerous in that regard to use it to vet sources of truth. So the problem here is many-faceted. And again, it calls for initiatives on media literacy and pushing back on AI-generated propaganda. 
 

[00:35:46] Marco Ciappelli: Yeah, I always have this joke that the fact that generative AI hallucinates makes it even more human, because humans hallucinate constantly too. By hallucination, meaning: I'll just come up with something. 
 

I don't know the answer, but I'm going to, I don't even know how many times, yeah, I've got to come up with 
 

[00:36:05] Andrew Edwards: something.  
 

[00:36:06] Marco Ciappelli: Yeah, many times I've caught, you know, the AI that I use, and said: are you sure about that? Oh, yeah, Marco, I'm sorry, I misspoke. And, well, yeah, then it comes up with some other thing that is... 
 

Yeah. Anyway, I think we are at the point where AI is super powerful, as you mentioned, Andrew, especially generative AI. It's a full, cool Pandora's box to open and experiment with. I'm one of those that actually like AI, even for the creative part. I know a lot of people are not going to like what I say. I don't like the fact that it steals content, but I think there is a lot that you can do with it. 
 

But to use it as the ultimate truth? No. But we also said at the beginning that the newspaper, despite an editor, despite the journalists' ethics and their oath, in a way, to the truth, is also not something that you can trust one hundred percent. So my point is, you always need to kind of double-check things. 
 

And that is true with AI and everything we deal with. So I agree, too. We need to find a way to live with it. And I think that the people who can make the rules need to help us a little bit. They may not resolve the whole problem, I'm kind of summarizing our conversation here, but if they take a really good stab at making these platforms responsible, then we will deal with a lot less misinformation, I think. 
 

[00:38:02] Andrew Edwards: Oh, I agree. I'm all about a level playing field, and I don't think it's a level playing field right now. 
 

[00:38:09] Joy Scott: There's another resource that I should mention, called Fakchex, F-A-K-C-H-E-X. It's a free Substack newsletter that has articles that are fact-checked about topics that have a lot of disinformation around them. 
 

And another service called NewsGuard that helps to vet news information. So if we can get more people to start using some of these tools, and sharing them, and sharing what they're finding out is actually true, it would help. It may not be the total answer, but we need truth advocates, if you would, who can help us all fight back against this. 

[00:38:59] Marco Ciappelli: And I think, to conclude with a piece of advice that follows what you just said, Joy, and what you just said, Andrew: to the users, unless you are okay with buying into the lie of one side or the other, or whatever it is, AI or any other tool, like a fact-check tool, is a good cleanup system. It's not going to take everything out and give you something perfect, but it's a good help to mitigate the damage. 
 

Maybe out of 1,000 news items, you know that 95 percent are lies, and then you're left to decide what the other 5 percent is. I think it gets a little easier. So that's where technology could help. 
 

[00:39:59] Andrew Edwards: If we could get AI to do that, yeah. I haven't seen that yet, but it'd be nice. 
 

[00:40:05] Marco Ciappelli: Hopefully. Yeah. 
 

[00:40:07] Joy Scott: That would be an ideal solution, to automatically correct disinformation at the source. 
 

Yeah.  
 

[00:40:16] Marco Ciappelli: I think there are people working on that. 
 

Yeah, search engines are kind of the same thing, I might say. You know, there are search engines nowadays that are using AI to get contextual results, but also based on, like, Consensus, for example, for academic papers and things that have been vetted by the academic community, and so on. 
 

So we need to remember that it's still very fresh. The Internet hasn't been around for as long as cars, and it's probably even more powerful for our brains. So I think these conversations are very important, and I'm very glad that Joy and Andrew, you joined me for this. Of course, we could have gone so many different places, but we are at 41 minutes. 
 

So I'm going to call it off. I'm going to thank you, and I'm going to thank all the people that have listened so far. I hope that you have more questions now than when you started, because questioning is actually a really good thing. So thank you so much, Andrew and Joy. Thank you, Marco. Thank you very much.