ITSPmagazine Podcasts

AI: Past, Present, and Future | A Conversation with Mary Hagy, Matthew Griffin, and Caden Griffin | The Mentor Project Podcast | Hosts: Dr. Susan Birne-Stone and Marco Ciappelli

Episode Summary

Welcome to a new episode of The Mentor Project Podcast! Today, hosts Dr. Susan Birne-Stone and Marco Ciappelli have a fascinating conversation with guests Mary Hagy, Matthew Griffin, and Caden Griffin. They talk about artificial intelligence, taking you on a journey from its past to the present and looking ahead to the future.

Episode Notes

Guests: 

Mary Hagy, Founder and CEO of Moon Mark

On LinkedIn | https://www.linkedin.com/in/maryhagy/

Website | https://moonmark.space/

Matthew Griffin, Futurist

On LinkedIn | https://www.linkedin.com/in/dmgriffin/

On Twitter | https://twitter.com/311Institute

Website | www.fanaticalfuturist.com

On Instagram | https://www.instagram.com/fanaticalfuturist/

Caden Griffin, Student

_____________________________

Hosts:

Dr. Susan Birne-Stone Ph.D., Host of The Mentor Project Podcast | Host of Perspectives | Systems Psychotherapist, International Coach, Talk Show Host & Producer, Professor | Mentor at the Mentor Project

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/dr-susan-birne-stone

Marco Ciappelli, Co-Founder at ITSPmagazine, Host of Redefining Society Podcast, and other shows on ITSPmagazine

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

_____________________________

This Episode’s Sponsors

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

_____________________________

Episode Introduction

Welcome to a new episode of The Mentor Project Podcast! Today, hosts Dr. Susan Birne-Stone and Marco Ciappelli have a fascinating conversation with guests Mary Hagy, Matthew Griffin, and Caden Griffin. They talk about artificial intelligence, taking you on a journey from its past to the present and looking ahead to the future.

During this episode, they'll dive into the exciting world of AI. They're going to clear up some of the common misunderstandings about what AI is and what it can do. AI isn't just about robots taking over the world; it's a tool with huge potential to help us in many different areas, from medicine to transportation to entertainment. But like any tool, it's not all sunshine and roses. AI can present challenges too, and our hosts aren't going to shy away from those.

The conversation will cover a lot of ground, looking at the different ways AI is already being used and what we might expect in the future. They'll discuss some of the fears people have about AI, like jobs being lost to machines, and the ethical questions that come up when we let computers make decisions. At the same time, they'll highlight the good stuff AI is bringing us, and how it's already making life better in some surprising ways.

They'll also talk about the importance of using AI responsibly. It's not enough to just build powerful AI systems; we need to make sure they're used in a way that benefits everyone and minimizes harm.

So, tune in to this episode of The Mentor Project podcast to learn all about AI's past, present, and future. If you're not already a subscriber, make sure to hit that subscribe button now to stay in the loop with The Mentor Project's interesting and enlightening series as they continue to explore AI and much more.

_____________________________

Resources

Learn More About The Mentor Project: https://mentorproject.org

_____________________________

Watch the webcast version on-demand on YouTube:
https://youtube.com/playlist?list=PLnYu0psdcllQSyw1kVnIvnQh_DzpPSPDm

For more podcast stories from The Mentor Project: 
https://www.itspmagazine.com/the-mentor-project-podcast

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time we provide it “as is” and we hope it can be useful for our audience.

_________________________________________

voiceover00:15

Welcome to The Mentor Project Podcast, a place where you will learn, discover new ideas, be entertained, inspired, and even mentored. Our shows explore a wide range of subjects, including science, technology, business, society and culture, art and entertainment, and life. If you would like to learn more about The Mentor Project, please go to www.mentorproject.org. We hope you'll enjoy the show.

 

Dr. Susan Birne-Stone  00:56

Hello, everyone, and welcome to another episode of The Mentor Project Podcast. This is so exciting, Marco, like it's been a while. But this is a topic that I'm interested in. I know you talk about this all the time, but I don't. I mean, I do get to talk about it a little bit. But this is a real opportunity to embrace a little bit of the history, a little bit about what's happening now, and a little bit about the future in AI. And I know that this is on everyone's mind. So I want to welcome everyone. And Marco, good to see you today.

 

Marco Ciappelli01:34

Well, good to see you too. You're right, it's been a little while since we've recorded anything together. But this is a great way to get the conversation going on something that, yes, it's on everybody's lips; I would say a third of my podcasts include AI and GPT, so it's really good to go on it. And I want to thank Mary for the idea to actually do an episode about this for The Mentor Project. So, as I always say, without further ado, let's make a round of introductions here. I think people already know who me and Susan are, and of course they know there are other guests as well. But let's hear it for those that don't. So, Mary, a little introduction about yourself, and how did you come up with the idea to actually say the Mentor Project should talk about this?

 

Mary Hagy02:29

Sure, my pleasure. And thank you, Marco and Susan, for having us on. I really appreciate that. So I'm Mary Hagy, the founder and CEO of Moon Mark. And Moon Mark is an entertainment and education platform. We create experiences for young people to do things that will expose them to science, technology, engineering, arts and mathematics, because that is their future. And we know that the more useful and productive and fun experiences they have, the richer they will be for it. So in 2024, Moon Mark plans to launch with Intuitive Machines out of Houston on a SpaceX rocket with two of our vehicles that will land on the moon. And when they land on the moon, two teams of high school students from here on Earth will be racing them. And after they're finished, the two vehicles will move into a 30-year scientific mission. So that's a little bit about Moon Mark. Now, why am I interested in AI? Well, I'm interested in AI because the demographic that we are targeting, which is the high school demographic, but also Caden, for you and your future as well, will be accepting stewardship of what's called space commercialization. And so in the past we've known about NASA, we've known about the European Space Agency, the Japanese Space Agency, and those are the agencies, along with China of course, that have gone up into space and created what we know about space operations. What's happened in the past 10 years is that space commercialization has outpaced sovereign space operations. So with that in mind, we are aligning ourselves with the commercial entities, because those are the ones that are going to be offering opportunities for our young people in the future. We're very excited about that. Now, as far as artificial intelligence goes, we've had some really spirited conversations, and I want to talk a little bit about fear here in a moment, but we've had a lot of spirited conversations about how to engage AI in what Moon Mark is doing, and, just to cut to the chase on that, we're going all in. Because AI is the future, and the proper use of it and the knowledge of the tools is absolutely essential for the young people that we are working with. Do you have any questions?

 

Marco Ciappelli05:21

Well, I said that we were gonna hear Matthew's opinion on that, yeah.

 

Mary Hagy05:27

Oh, yeah. I'm gonna put him on the spot.

 

Marco Ciappelli05:31

Yeah. Before doing that, an introduction about yourself and your other guest there, Caden.

 

Matthew Griffin05:41

So I'm Matt Griffin, I'm the CEO and founder of the 311 Institute, the World Futures Forum, and XPotential University. The 311 Institute is essentially a deep futures advisory organization, advising some of the world's largest and most respected brands, the brands that you've got on you now, the brands that you're using to actually watch and listen to this podcast, and everything else. The World Futures Forum basically is designed to bring together the United Nations, along with a whole variety of different entrepreneurs, to solve the world's SDGs, so one through 16; we kind of leave partnerships to the UN. And then XPotential University is one of the world's first free futures universities, basically, where people can come and experience the future, get their hands dirty, get their brains, I was gonna say get their brains dirty, that might be pushing the boundaries a little bit, and actually experience the future firsthand and try to envision it, create it, build it and lead it. That's it. And then as for the character on my left.

 

Caden Griffin06:50

I'm Caden.

 

Dr. Susan Birne-Stone  06:55

Caden, can you tell us a little bit, just maybe two things about you, anything that you want to share?

 

Caden Griffin07:02

I'm a runner, and I do modern pentathlon, like swimming, fencing and laser run.

 

Dr. Susan Birne-Stone  07:08

What was the last thing you said? If you could just go a bit closer to the mic? I didn't hear the last thing you said.

 

Caden Griffin07:15

And laser run. Laser run?

 

Dr. Susan Birne-Stone  07:18

I want to hear more about that. That's so.

 

Matthew Griffin07:22

So what he's missed out, I see, is that he's 11, and he's a member of Team GB, the UK Olympic team.

 

Dr. Susan Birne-Stone  07:32

Congratulations.

 

Marco Ciappelli07:34

You can't forget that. Come on. Come on. That's important.

 

Matthew Griffin07:37

Ah, modesty. Hey, bud.

 

Marco Ciappelli07:41

But I think there is another reason why he's here, because he's done some creative experimenting with ChatGPT. If I'm not wrong, he wrote a book using that. So maybe you want to tell us about that as we get into the conversation. I'll say, Susan, let's get Mary to get things started, right?

 

Dr. Susan Birne-Stone  08:06

Right now, I think Mary was going to go. And you're going to tell us, Mary, a little bit about AI and the past, and you're going to talk from that perspective. All right,

 

Mary Hagy08:18

Actually, maybe just the past up to the present. Yeah, my pleasure. And before I do that, I do want to answer your question, Marco, about why the Mentor Project. So I'm a mentor, Susan's a mentor, you are a mentor, Matthew is a mentor. So this is a great topic for those of us in a very robust mentor organization to know more about, because for those we mentor, whether they be lateral or whether they be mentees of a younger age than us, you know, we should know what the current state is and what the future state is. So regarding the current state, as I mentioned, we did some exploration into AI and how we would use it as a part of Moon Mark. And what I discovered, and what is pretty commonly known now, is that there is a great deal of fear surrounding AI. I mean, it's just pervasive. The reality is that AI is here. It's been evolving in one form or fashion since the ENIAC computer in 1946. And so over the course of time, what we know is that it's only gotten more and more; it's been behind the scenes for the past probably 20 years, and now it is here. The increased public attention, along with the dizzying pace of change, is really fueling the fear, and there are some elements to the fear. One of them is sentience. And sentience, essentially, is the fear that AI will come to life and kill all of humankind. And the reality is that math and code can't jump out of your computer and come and strangle you in the nighttime. Okay, that's not going to happen. There's also some talk about robots being programmed to harm humans, and we can talk about that in a moment, but the likelihood of that can be prevented. Yes, singularity is another thing that causes some confusion. And singularity is the interface between humans and machines. And singularity also has been around for a while. So it's here: the FDA, within the past two weeks, has approved human clinical trials of Neuralink, which is the implant of microchips into the brain. That's for Neuralink. Blackrock Neurotech has 19 years of human studies. Four years ago, a friend of mine had a chip implanted in her brain to improve her hearing loss and also to reduce tinnitus. So the singularity has been around for a while. It's the myth and the science fiction around it that have become so, I guess, interesting, and Matt, I'm sure, will have something to say about the future of that. Another thing that's very top and center is job loss. Is AI going to wipe out jobs, what's going to happen to me, and this is a big one. And so, you know, we're in the middle of one more workplace transformation. It happened, of course, during the Industrial Revolution; it happened during the scientific and technological revolution; and now in the digital revolution. And in each of those, we've needed to develop new skill sets. But humans did adapt, we did deal, and we thrived. The current digital revolution has the same demands, only the rapid pace of change is really unprecedented, and it's frightening to a lot of people. But AI is also fueling opportunity, and that's what's really, really exciting to us. 30% of companies right now are saying they are recruiting for non-technical AI-related jobs. And one of those jobs happens to be a prompt engineer.
So prompts, of course, are the text that is input into ChatGPT or any other AI program, so that the program can respond with the right information, or hopefully the right information. And so by virtue of that, what you see is that not only are we going to need the deep technical chops that we've been developing over time, but there's also room for others that are non-technical and that can make a significant amount of money. So Andrej Karpathy, the former head of AI at Tesla, said the hottest new programming language is English. So coming up with the skill to put the right prompts into AI, right now those jobs are going for, are you ready for this, $300,000 a year. Okay, that's something to tell your young people, and something for them to know and to aspire to. And AI is also being used across industries. So BondGPT came up with its own AI platform to do bond processing, revolutionizing the analysis and the insights. Adobe Generative Fill also came out with their own generative AI (ChatGPT, chat GPT, we're going to be able to really get that rolling off the tongue one of these days), and what that does is accelerate the creative process and rapid prototyping. Another one is Carbon Health, integrating AI into medical record screens. So the jobs are going to be there. People must learn the skill sets. Now,
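For listeners who want to see what a prompt actually is in practice, here is a minimal sketch, assuming the OpenAI Python client; the model name and both prompts are illustrative examples, not anything used in the episode. The only thing that changes between the two calls is the prompt text, which is the whole point of prompt engineering.

```python
# A minimal prompt-engineering sketch, assuming the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The model name and prompt wording are illustrative, not from the episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

# The prompt is just the text we send; how it is phrased shapes the answer.
vague_prompt = "Tell me about the Moon."
engineered_prompt = (
    "You are writing for high-school students preparing a lunar rover race. "
    "In about 150 words, explain the three biggest engineering challenges of "
    "driving a small rover on the Moon, in plain, friendly language."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```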

 

Dr. Susan Birne-Stone  15:02

I'm sorry, I just want to say, I'm learning so much from you. And for people who are listening and watching, if they are not familiar with this stuff, it's a lot; you're giving so much great information. But I just want to ask you, because, again, I'm not in that field, so I'm coming from someone who's listening, who doesn't spend their time 24/7 in this field. What I'm hearing from you, all of these great issues, if you will, that are raised, what keeps coming to mind, and I'd like to hear your opinion, is that it's like anything else new that happens in our society: when something is created, we become afraid, as you said, and then eventually we do adapt. And so, you know, it reminds me of, like, when the telephone, I mean, I was not there when it first started, I am old, but not that old, but like all the different inventions, the computer, the laptop, the electric typewriter, you know. So it's almost like all these things occur, only it's maybe a little bit more. And I just want to also say, I was laughing, because as you talked about the earlier fears of what AI is not going to do, I just kept thinking, well, people watch a lot of sci-fi, and that's where all of these, you know, fears come from, because they've seen it so much. So it's really fascinating. So in that there's a question: do you think that this is even more intense than other inventions, or do you think it's very similar to what we've been through in society?

 

Mary Hagy16:36

Did you say more intense than other inventions?

 

Dr. Susan Birne-Stone  16:40

Yeah. Do you think that it's more intense, or do you think it just feels that way now because it's the newest? You know, I just remembered the saying, what is it? It's not the technology, it's the...

 

Mary Hagy16:59

I'm sorry, please repeat that.

 

Dr. Susan Birne-Stone  17:03

I think we just got a little feedback. I said, it's like the saying: it's the user, the user.

 

Mary Hagy17:15

Right. I do think that this is more intense than the examples that you gave, primarily because there is the perceived threat of annihilation of humanity. Okay, the closest thing to this that I can come up with, that seemed to generate a lot of concern, is when we developed cloning. I don't know if you remember that, but that was also a pretty big concern, that it would have a bad impact on humanity. I would also suggest that a key fear, and this one is actually a pretty viable threat, is that AI makes it easier for bad people to do bad things. Okay, so that's entirely possible. Okay, the entire industry of cybersecurity, which is, Marco, your bailiwick, your wheelhouse, a huge aspect of that is preventing bad things from happening from bad actors. And the key word there is prevent, right? And so when it comes to AI, I think that if we dedicate our time and attention to using this most powerful tool to prevent bad things from happening, it's more powerful than those that can harness it to do bad. That's

 

Marco Ciappelli18:50

a good point. I mean, there's always the big battle between the good and the bad. And in that big battle, I would love to get Matthew into the conversation here, because I know you're talking about the future all the time. Do you agree with most of the things that Mary said, or do you have to kind of point out maybe some divergence here? No? So

 

Matthew Griffin19:13

So, roughly. So when we have a look at technology, every technology is a tool, and every tool is a blank slate. You know, when we have a look at, for example, artificial intelligence, we're using AI already to create some of the world's first cancer vaccines, which in some cases have been 100% effective. So that's kind of, shall we say, towards the utopia. But on the other side of the scale, I was with a very large payments company in the Caribbean about two to three weeks ago, and they were talking about digital skimming malware, something called Magecart, which you might be familiar with, Marco. And we had one of the cybersecurity experts from the US Federal Reserve, so it was actually the head of cyber, basically, from the Fed. And he was saying, you know, fortunately, Magecart is now, you know, less of a threat than it was, and everything else. And while he was actually giving his one-hour talk, I was at the back, basically, on the iPad, and I ended up using Google Bard. In this particular case, I got it to show me the Magecart code, and then I got it to obfuscate it. And within about two seconds, I'd evolved the malware. Now, the strange thing about that is, I'm not a programmer, and I'm not a hacker. So, you know, we can use these tools to do great things; we can use these tools to do very bad things. So I often say that when we have a look at these technologies, the one thing that they do really well, increasingly, is they increase the power of the individual to do good or to do bad. So, when we have a look, Mary kind of mentioned earlier that we're not quite at the stage where artificial intelligence can actually reach out of the screen and strangle you. So this is where I'm actually going to bring sci-fi to life. Now, when we talk about generative artificial intelligence, we still really think of things like ChatGPT, but I've been talking about these things for like 10 years. So about three years ago, the University of Oslo in Norway, of all places, used a generative artificial intelligence to design a new robot. And they gave this AI the task of creating a robot that could move from one side of the room to the other side of the room as quickly as possible. But that task could have been to go off and kick somebody in the shin. So the generative artificial intelligence started, in simulation, to redesign a lot of the robot models that it actually had access to, and ended up creating a fundamentally new design. And we've seen this time and time again in the years since; it's a field called evolutionary robotics. Now, in this particular case, this robot was then sent to a 3D printer, and they had a highly paid lab technician assemble the robot, put it on the floor, and it went off and did its thing. But MIT, about two years ago, used a very similar technology to design a new robot, which was then 4D printed. Now, when you 4D print a robot, the robot walks off the printer itself and assembles itself. So we are already kind of at that Skynet point, basically, where Skynet can design a new Terminator robot, and then 3D or 4D print it off in Mary's back yard, and go and kick her in the shin. Sorry, Mary. So when we start talking about some of these weird things, this is where we really have to start looking at how these different technologies can be combined and converged together to create new things, new products, new innovations, new capabilities.
But in terms of ChatGPT and artificial intelligence, you know, trying to regulate it is going to be very, very difficult. You know, we've seen the FDA, as Mary said, approving the Neuralink brain chip implant, but there was a lady recently who had to have an epileptic brain chip implant, that's easy for me to say, removed because the company went bankrupt. Yeah. But when we actually have a look at what the FDA are trying to do with artificial intelligence in medicine, a lot of the AIs in the healthcare space, and I was talking about this yesterday in Spain with one of the healthcare companies down there, use adaptive algorithms, which means that if the FDA says this AI, and whatever it does, is okay, we approve it, by this time tomorrow it could have evolved itself a thousand times. And that's before we talk about OpenAI, Meta, aka Facebook as they were, or Google, whose artificial intelligences are breaking their own programming and evolving to do new things. So OpenAI's artificial intelligence, about 18 months ago, spontaneously learned maths. Google's artificial intelligence spontaneously started speaking its own language and encrypting it. Meta, or Facebook as they were then, their artificial intelligences started colluding together. So when we have a look at this kind of world and code, we often think that there is a way to put perfect guardrails around these things, basically, which keep them confined. But frankly, we can't confine humans, which are biological in nature, evolve at much, much slower rates, and can be clipped around the ear by, you know, the FBI and the DEA, the DHS and all these kinds of different departments; there is no way that we will be able to control artificial intelligence. Because while the majority of us might use it to do good things, and there are plenty of examples of that, I showed two to three weeks ago, basically, how within two seconds I could fundamentally recreate a new kind of malware. And by the way, the eight-year-old kids at school, they can also do exactly the same thing that I did.

 

Marco Ciappelli25:30

Yeah, well, with this we can go all over the place, right, all the scenarios, the paperclip scenario, we can go into a lot of crazy stuff. But one thing that I would like to note here is that we do have these extremes, which are very plausible, I mean, no doubt about it. The good is there, the bad is there. But there's also this idea, in my opinion, that in order for our legislators to understand and legislate and try to regulate, and for people to maybe not be as afraid as the news sometimes tries to push them to be, I would say people need to start practicing and really see what this interaction, being your own prompt engineer, can actually do, which is very fascinating. And one of the reasons why we're here today is to actually talk to the little fella right there, because he did experiment. And if I understand correctly, I didn't read it, but you did write a book with its help in three days. Did I get that right? Three days, no less?

 

Caden Griffin26:47

How many minutes? Oh, minutes. Eight minutes. Eight minutes.

 

Marco Ciappelli26:55

Oh, okay. Wow. Yeah, I

 

Dr. Susan Birne-Stone  26:58

think it's, can you tell us about your experience? I'm going to be honest and transparent: although I know some about it, I have not actually used it yet. And I'm going to; it's actually on my list of things to do for next week when my teaching semester ends, because I'm still teaching, I haven't been replaced just yet. Can you tell me about your experience with it, and what it was like, and what did you do?

 

Caden Griffin27:27

So, first, what we did is, we put in the prompt to ChatGPT saying, write a chapter about, like, the makeup of chapter four, running, and then for the other prompt we say, write the chapter about running. And it gives us the chapter, at least 500 words each chapter. And we copy and paste it into Google Docs, and we make it our own words, and we play around with the fonts and stuff. And then we add, like, the photos and stuff in, and then to get the AI images we went into Discord and we used Midjourney. And you can make anything you want in Midjourney.
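As a rough illustration of the chapter-by-chapter workflow Caden describes, here is a sketch assuming the OpenAI Python client; the chapter titles, word target, and output file are hypothetical placeholders, not the contents of his book.

```python
# A rough sketch of the chapter-by-chapter book workflow described above,
# assuming the OpenAI Python client. Chapter titles, the word target, and
# the output file are illustrative placeholders, not Caden's actual book.
from openai import OpenAI

client = OpenAI()

chapters = ["Warming up", "Running form", "Stretching", "Race day"]
book_parts = []

for title in chapters:
    prompt = (
        f"Write a book chapter titled '{title}' for young performance runners. "
        "At least 500 words, in a friendly, practical tone."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    book_parts.append(f"# {title}\n\n{reply.choices[0].message.content}")

# The draft then goes into an editor (they used Google Docs) for fonts,
# images (they used Midjourney via Discord), and the authors' own edits.
with open("draft_book.md", "w", encoding="utf-8") as f:
    f.write("\n\n".join(book_parts))
```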

 

Marco Ciappelli28:29

Let me ask you something, because I have a love-hate relationship with ChatGPT, and I really have lively conversations with it. I'm like, what the hell are you talking about? That's not what I meant. Did you feel like you had to make corrections and tune the results, because maybe it wasn't what you had in mind? I mean, how did you interact with the conversation?

 

Caden Griffin28:57

We didn't really have to change it too much. Most of it was actually there; there were a few spelling mistakes which we had to fix.

 

Matthew Griffin29:13

So it took us about 10-ish minutes to use ChatGPT to write all of the text that we wanted in the book. So Caden had to come up with the idea, basically the title of the book. So this is where human creativity, and, you know, what is it that you are trying to use these AIs to achieve, kind of comes into play. And then basically, once we'd come up with the title of the book, ChatGPT then generated the chapter headings, and then we populated each chapter and then formatted it. We used Midjourney, basically, to create the illustrations and everything else, which was Midjourney version three, by the way, when you actually have a look at the illustrations themselves. Yeah, we created a few hundred. But if we'd gone to a human artist, because I tend to use them for some of the work that I do, I estimate that it would have taken about two to three months for a human artist to put the images together, and it would have cost about $15,000. But within 10 minutes we'd written the book; within about six hours we'd created all of the illustrations and art, filtered them, and chosen the ones that we wanted. And then it took about another six hours to actually format the book. And then, by the literal end of the day, we had

 

Dr. Susan Birne-Stone  30:31

the book. The book! Let's see, oh, wow, what's it called?

 

Caden Griffin30:38

Catching for performance runners.

 

Dr. Susan Birne-Stone  30:41

And do you like it? Like, when you read what ChatGPT came up with, did you like what it had to say? Is this a book that, if you didn't know it or make it, you would enjoy reading?

 

Caden Griffin30:54

Yeah, I would say so.

 

Dr. Susan Birne-Stone  30:55

Good, that's the right answer. You know, I'll ask you privately. But it looks phenomenal. Why don't you pick that up, for those of you that can see it? It's a really cool cover. And there's someone on it; now, who is that? Is that someone you know, on the cover?

 

Caden Griffin31:16

It's not a real human. It's not a real person.

 

Matthew Griffin31:21

Yeah. And then, for example, every single image in here was created by AI. And you know, when you have a look at them, yeah, we're talking about AI replacing photographers. We sort of talked about prompt engineering to get the illustrations correct. You know, I've got a favorite in here; to get the illustrations correct, you had to figure out what kind of image you wanted, but then also be able to describe it. So this is one of my favorite ones. Okay, so Caden sort of punched into Midjourney something along the lines of, create an intergalactic running race, or an image of an intergalactic running race. Right. Okay. Now, it came up with this.

 

Marco Ciappelli32:14

So let's see if we can describe it.

 

Dr. Susan Birne-Stone  32:18

For those that are just,

 

Matthew Griffin32:20

Yeah. So for those of you that are just listening, what we have is, we've kind of got this space alien that appears to be chasing a load of human runners in America. Yeah, that's

 

Marco Ciappelli32:30

what I get the feeling looking at it.

 

Matthew Griffin32:33

However, you know, for those of you that can actually see it, when Mary was talking about the fear of technology, or fear of artificial intelligence, it kind of strikes me that the alien is the technology that is trying to chase humans, you know, and all of the humans are running away. I mean, if you look at this character's face here, that's it, you know, you can't really see it that well, but if you look at his face, he looks terrified. So that was, yeah, that's one of the funnest images.

 

Dr. Susan Birne-Stone  33:04

I was wondering, though, you know, the picture on the cover, for those of you that can't see, at least from far away it looks like a human. And there was another picture that I saw. And although you're saying it's not actually a real human, it's a computer-generated human, I'm just wondering, like, I'm sure that there must be somebody out there that looks like that, because they look like real humans. And then my mind went to, that'll be interesting, because in the legal world, like, you know, what if somebody actually looks like that person and then decides to sue for using their picture? Because the variations are there. I don't know, my mind goes all sorts of places.

 

Matthew Griffin33:44

Yeah. Well, so on the legal side of things, quite a lot of my clients are, by the way, a lot of the world's law firms. And actually, Caden wants me to show you this one. Okay, so

 

Dr. Susan Birne-Stone  33:56

now if you just repeat what you said before, because I think you might,

 

Matthew Griffin34:00

so quite a lot of my clients are the world's largest law firms, and actually, before we published the book, because the book is available for charity, so any money that we make goes to charity, I wanted to understand whether or not we actually had the IP rights to the text, to the book content itself, but also to the images. Now, because we actually bought a ChatGPT license and the Midjourney license, it turns out, basically, we actually own all of the copyright for the book. However, so Caden wants me to show you this, because we put in a prompt, it was something along the lines of, create the image of a runner surrounded by lightning streaks. Okay, now, unprompted, to you guys, and I know you guys listening can't see, we'll tell you what it is in a moment: does that look like

 

Marco Ciappelli34:56

it's Harry Potter?

 

Matthew Griffin34:59

Yeah, Harry Potter. Now, we couldn't say it isn't Harry Potter, but it has a... So when we have a look at the legal field, Stability AI is actually being sued at the moment by Getty, because they trained a lot of their image-generation artificial intelligences, as it turns out, on scraped Getty Images. Getty is suing them for 1.2 trillion, with a T, trillion dollars. Look that number up; I mean, I don't think that Stability AI has 1.2 trillion.

 

Dr. Susan Birne-Stone  35:43

Sounds like there's another field that also is going to need a lot of people, and that's the legal field, in terms of all of these cases, because there are so many intricacies. Like, you know, I understand you can buy, I'm probably using the wrong language, but if I use ChatGPT, I could actually go in and buy a piece where I'm owning the search, I'm using the wrong terms, but almost a search engine, if you will, so that other people won't tap into the same thing that I've tapped into. There are so many layers,

 

Matthew Griffin36:16

Well, if we have a look at Midjourney, for example, recently quite a number of US politicians were shown in handcuffs, you might know what we're talking about. Those were AI-generated images, but the characters that the AIs generated, the images, all looked realistic; they were almost photoreal. And this is the thing: when we have a look at generative artificial intelligence from an image perspective, and a deepfake perspective if we want to go down that route, a lot of the content is now starting to be photorealistic. But in addition to that, there have been quite a number of studies in the US, and one study that came out about two days ago covered the opinions of 1.5 million Americans, and the vast majority of not just Americans but people around the world are now generally unable to differentiate between what an artificial intelligence is telling them and a human. So we are already at the point where, when you have a think about things like the future of trust, misinformation, disinformation creation, replacing influencers and creators with digital humans and digital variants, you know, we are tipping many, many things on their heads.

 

Mary Hagy37:36

Matt, could I ask you a question? Yeah. In terms of regulation, as I was looking at what's going on in the present, we have artificial intelligence leaders from around the world who are basically saying to governments, we need to regulate AI. And I think that there are a number of reasons for this, but one of them, and this is supposition, is that if it's over-regulated, it could actually cause confusion in the competition of creating new programs. So if it's regulated and you have startups that are trying to do new things, right, they're going to be stymied in their efforts. So why do you think, I mean, these leaders who said, why don't we press pause on AI, and I'm listening to that, and I'm saying, well, that's perplexing. You know, that's like being out in the Gobi Desert and saying, hmm, I think I'll just stop here and have a slice of key lime pie. You know, it's

 

Marco Ciappelli39:01

crazy though, it's,

 

Mary Hagy39:03

it's gone. I mean, how do we regulate it?

 

Marco Ciappelli39:06

But you mentioned this, and it's funny, because it can get a little political, but while all these people signed the paper to stop it, I am opening, you mentioned Photoshop, and now I can insert generative art in it. I open Squarespace for a website, and now I can generate text in there. I can go to buy an ice cream and they're going to ask me if I want the GPT flavor on it. So it's kind of weird; too late, right? That's, that's

 

Mary Hagy39:41

How do we even go about regulating this unless we specify what types of things should be

 

Marco Ciappelli39:47

regulated? Exactly. It's a level of what you know

 

Mary Hagy39:51

the code of ethics or things like that. But we already have laws: you do it, you get arrested, you know? I mean, there's no need to reinvent all of that.

 

Dr. Susan Birne-Stone  40:05

I have a question for Caden. Caden, you're in a different generation, obviously, than all of us and a lot of the people that have been surveyed, and I know that your dad is a futurist, and you've been exposed to so much. But I want to know, when you think about the future, right, when you're a little bit older, when you're, let's say, in your 20s, I know that sounds very far, far in the future, what do you hope, what's your wish, that can happen with artificial intelligence?

 

Caden Griffin40:40

Probably self-driving cars, like, more flying cars and hoverboards and stuff.

 

Dr. Susan Birne-Stone  40:48

Interesting. Great, and flying cars.

 

Matthew Griffin40:53

So we've already got a few of those. But, you know, sort of going back to Mary's question, basically, on regulation: on the one hand, if we have a look at Geoffrey Hinton, for example, there have been quite a number of artificial intelligence experts who've actually compared some of the latest artificial intelligence releases to the Oppenheimer moment. Bearing in mind that, in fact, one of the things that we tend to say is, when you actually have a look historically at what humans have created: if I create a screwdriver, that screwdriver cannot create another screwdriver. If I create a nuclear weapon, that nuclear weapon cannot create another nuclear weapon. However, we have already created artificial intelligences, aka Google, that are able to create other artificial intelligences that are able to do various things. In Google's case, they had an AI that created a new AI, so a child AI, that was 30% better at machine vision recognition than anything that the top Google experts had actually put together. And this is sort of the danger; the problem that regulators have is, on the one hand, they fundamentally don't understand what these technologies are capable of. I sit down with lots of regulators, and when you just spend two minutes with them and say, did you know we're already at this point, they just go, wow, you know, we thought we were like 30 years further back than that. Secondly, a lot of the regulators, by their own words, basically, are lawyers and policy wonks; they aren't technologists. Thirdly, they're, you know, like us, an older generation; they don't necessarily appreciate the full weight of what technology can actually do. Fourthly, they typically look at artificial intelligence in isolation, when really the power is in the convergence of these different technologies. And fifthly, they can't act with speed anyway. And then, even if they could actually act with speed, even if regulators were able to put together excellent regulations and policies that regulated the safe use of these tools in real time, these tools are able to evolve at digital speed, whether that's with human hands or whether it's with machine hands. So by the time you've formulated a regulation for something or other, and I'll give you a bit of an example, it's changed. Now, when we think about explainable artificial intelligence, for example, the ability to use AI to read another AI's mind: so, when an AI does something, explainable AI is a technology where we say, why did you do that? And essentially the AI kind of says, well, I did that because of this data input and this and this and this, and so that was the result. But when we have a look at explainable artificial intelligence, we have already seen, even with platforms like DALL-E, which was OpenAI's sort of image platform, that these AIs create their own language. So in order for these AIs to do what they do more efficiently, you know, create images, DALL-E, for example, if you ask it to create a dog, as a neural network, in its own head, so to speak, it doesn't say, I need to create a dog. It's made up a fundamentally different word for the word dog. And the researchers basically then started really digging into some of these different things; DALL-E's word for dog is literally like a scrambled alphabet mess.
So when we try to read the minds of these artificial intelligences to understand what it is that they did, and so on, you're looking at, shall we say, a synthetic brain kind of construct that is essentially Swiss cheese. And so, from a regulator's perspective, you just end up in this space where really the only thing that you can do is try to put in your sort of guardrails, or best-effort regulation. And we saw that with the FDA and the FCC, basically, when it came to cyber in the US, where they said, you know, you can't create a system that harms somebody; the system cannot go and do something it shouldn't do, or something naughty. And so really, what we're talking about is, almost like in societal terms and the nuclear weapons programs and treaties, we say: if you are creating these things, you need to be responsible. If you are creating these things, you need to be able to show us that you acted responsibly. In the event that there is a problem, you need to show us that you've got a backup plan. I mean, for example, Google, a little while ago, tried to create containment algorithms that would let them kill artificial intelligences that went rogue, and the AI, a little bit like the paperclip problem, the AI figured that the best way to prevent Google from shutting it down was to essentially remove the big red button altogether. It just went: bye bye, there's no button, you can't get me now. It's what

 

Marco Ciappelli46:27

I call unplugging. Like, the refrigerator is overheating, let me unplug it, I'm good. How do you unplug AI once it's all connected and it's replicated itself? And so, I don't know if this is where we wanted to go, but I think a lot of people right now are probably scared.

 

Matthew Griffin46:50

You know, when we have a look at the kind of doomsday scenarios, you know, technology is a blank slate. And from a human brain perspective, we always go to that fight-or-flight response; you know, we're listening for the tiger in the grass. We're not trying to listen for the giant woolly mammoth on the horizon that's really tasty, you know, that we'd like to eat. You know, and I think it was, was it today, I think it was yesterday, Marc Andreessen from Andreessen Horowitz, the venture capital partners on the West Coast of the US, put together essentially a letter on why AI is good for humanity and good for the planet. And actually, when we have a look at the benign use of artificial intelligence, it's already being used to create new materials that can suck carbon out of the atmosphere faster. We mentioned earlier cancer vaccines, self-driving cars and vehicles. It's being used, I can show you, how we can use platforms like ChatGPT to upgrade what I call the human learning algorithm, so that we can learn three times faster, that kind of idea of one-on-one tutorship. So when we actually have a look at, for example, what Caden did with his book, you know, on the one hand ChatGPT automated ghostwriting. So when I put the book in front of the US government, they said, yeah, but this is automating jobs, and Caden kind of cheated, because he used AI. I said, well, actually, he kind of did what, say, Prince Harry did, you know, just got himself a ghostwriter. But it wasn't a human ghostwriter; it was an AI ghostwriter. And the UK Department for Work and Pensions, this was two ministers and their entire strategy team, you could see their brains melting, because they wanted to go, he's cheating. But then they were getting, no, it's like a ghostwriter, just not a human one. Oh, what are we doing, you know, meltdown. So, the point of this is, there's a lot of focus on AI automating jobs. So we automated ghostwriting, okay? But by automating a job, and by putting a nice behavioral interface over the front of it, Caden can now write a book. So what we do is, we actually are now democratizing access to those skills and jobs and tasks that have been automated by AI, and now giving everybody the ability to write their own book, create their own images, write their own code and programs, and so on and so forth. So the flip side of automation is that we ultimately unlock human potential in a way that we have never, ever seen before. Because, well, I've been doing some lecturing at Carnegie Mellon, and no one has ever asked you these three questions, I guarantee it. What would you do as an individual if the things that you had access to, to work with, were all the world's knowledge? So that's information plus AI plus the internet plus human expertise: knowledge, so not information like Google throws at us, knowledge. What would you do if you had access to all of the world's skills? Caden, frankly, unfortunately, buddy, you are not an artist; however, we created some great artwork for the book. And what then happens if you've got access to all the world's knowledge, all the world's skills, and now you can bring whatever is in your head, whether it's I want to create a program, I want to create a contract, I want to create a new whatever.
What happens when you have the technology to bring whatever is in your head, whatever you can imagine, to life, and then execute that? And if you use these new tools to create a piece of an application, for example, a new service or whatever it happens to be, if you can execute correctly, you could have that in 3 billion people's hands by the end of the day. What do we do as humans when our potential is, increasingly, basically limitless? Education doesn't prepare us for that; society doesn't prepare us for that. And yet, when you ask people

 

Mary Hagy51:35

democratization, I think that AI can absolutely be democratized, because, you know, it really takes engagement, it takes education, it takes using the tools, understanding the tools, hiring a $300,000 prompt engineer. But, you know, the good side, and I do always try to look for the silver lining of the genie being, you know, out of the bottle, and it is off and running, is that if, through education and the right kinds of access, we keep the tools open source, keep the frameworks open source, so that everybody that wants to has an opportunity to build without, you know, huge financial burden, and hopefully without over-regulation, I think that there's a real possibility here that it could be democratized.

 

Matthew Griffin52:42

Well, absolutely. I mean, on the one hand, it already is being democratized; open-source large language models kind of have the problem that they don't necessarily have the same guardrails as some of the, you know, the Googles and the OpenAIs and the Microsofts of the platforms. But, you know, when we actually have a look at democratization, so, I was speaking about healthcare yesterday; in the healthcare space, we are starting to get to the point where ChatGPT-like technologies are being embedded into molecular biomedicine. Which means that, increasingly, we are approaching the point where non-skilled researchers or professionals can create a text prompt to create new drugs. So I simply create a text prompt saying, create a new drug that binds to site 123 on this bacteria, and the AI, because it's got access to tens of millions of compounds, off it goes, runs simulations, and goes, this is the drug that you want. So not only are we democratizing access to artificial intelligence and all that entails, we're democratizing access to human potential. Yeah, for better and worse,

 

Dr. Susan Birne-Stone  53:59

you know, it's so interesting, from my psychological perspective, the questions that you asked before, those two questions. So it's like, if everybody's basic needs are met, right, because in there I hope that we're going to meet the needs, you know, everybody's basic needs, and everybody's fed and safe, and you can do whatever you want, and you don't have to, let's say you don't have to work for your basic needs, then what does society look like? What do people do? And that's a really interesting question, you know, how do people make meaning, what are relationships, but this is for another podcast, Marco.

 

Marco Ciappelli54:38

I have to say, jump on another podcast, but this is the kind of conversation I don't want to end, because it's fascinating. Like I said, I've had many of these, and I love all the different perspectives. And also, we talked about the good, we talked about the bad, we talked about the really good, and we talked about the scary bad. But the point is, I think it comes down to getting ourselves familiar with this. And it's beautiful that Caden did that, and that he, you know, obviously comes from a great mentor, I'm assuming, right there, and got inspired, and is growing with that and really knowing, like, I understand enough to have an opinion. I feel like too many people now just say, no, that's bad, no, it's going to take jobs, we need to stop it. Well, how about all the beautiful, fantastic things that it is already helping us with as a humanity? So I suggest one meeting like this every month to talk about it.

 

Dr. Susan Birne-Stone  55:42

I love it, Marco. Yes. Yeah,

 

Mary Hagy55:45

it's going that fast.

 

Marco Ciappelli55:48

Speaking of fast: we said 30 minutes, we're at 54. So

 

Matthew Griffin55:53

Next month it will be run by ChatGPT, GPT-24 or GPT-27.

 

Marco Ciappelli56:01

Yeah, to be sure, the summary for this episode will actually be produced by GPT. For sure. Well,

 

Matthew Griffin56:09

so, ironically, one of the US military podcasts actually, allegedly, according to them, taps into GPT-5, which then has a text-to-voice synthesizer on top of it, which lets them talk to it, like a conversational artificial intelligence.

 

Marco Ciappelli56:29

Which you can kind of do already with the app that just came out. Sorry. Interesting. Yeah. Well, Susan, I suggest you have a really in-depth psychological conversation with it; it will blow your mind, I can tell you that. And Mary, thank you so much for pushing to have this conversation. I'm not kidding, I think we should have many more with different mentors, so we can hear from people that are more in the arts, people that are in photography. I mean, copyright is a big, big issue right now; nowadays, writers in the movie industry are actually striking, right, for that. So different perspectives, of course,

 

Dr. Susan Birne-Stone  57:17

We can call it The Mentor Project AI Perspectives, or AI Series, you know, AI Series. So you started a great thing here. So this is fabulous. Thank you all for coming. This has been great. Caden, I'm looking forward to reading your book and to seeing what else you do.

 

Matthew Griffin57:39

And these are the only real images in the book.

 

Marco Ciappelli57:43

Those are real? The authors, those are real?

 

Matthew Griffin57:48

There, yeah. It's, we're already in this kind of crazy world. And then, you know, as Marco basically said, certainly my advice is, people need to get out there and just experiment with this stuff and make up their own minds. Because if you use these things in a positive way, then all of a sudden, like lots of people are realizing, you go, hey, this used to take me half an hour, and now it takes me five seconds, you know, whatever it happens to be. So

 

Dr. Susan Birne-Stone  58:20

by the time we meet again, I will definitely have used, at minimum, ChatGPT to tell me the other things that I need to go on. And

 

Matthew Griffin58:31

one of the interesting things you could do, Susan, is ask it to play a role. So with ChatGPT, when you get onto it, try this as a little bit of fun: say, acting as a psychologist, ask me a set of questions that, for example, you know, identify my personality, or whatever it happens to be. So switch it into the role of a psychologist and get it to ask you questions and
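For anyone who wants to try the role-playing idea Matthew describes, here is a small sketch assuming the OpenAI Python client; the system message wording is an illustrative example of the "act as a psychologist" prompt, not a clinical tool.

```python
# A small sketch of the "switch it into a role" idea described above,
# assuming the OpenAI Python client; the wording is illustrative only.
from openai import OpenAI

client = OpenAI()

messages = [
    # The system message sets the role the assistant should play.
    {
        "role": "system",
        "content": (
            "Act as a psychologist. Ask me one question at a time to explore "
            "my personality, then summarise what you noticed."
        ),
    },
    {"role": "user", "content": "I'm ready. Ask your first question."},
]

reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)

# To continue the exchange, append the reply and your answer to `messages`
# and call the API again.
```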

 

Dr. Susan Birne-Stone  59:01

see how it is. And I know there's a lot of controversy in this field, in the psychological field. So I'm going to do that. I'm definitely going to do that, and I will share.

 

Marco Ciappelli59:12

All right, all right, very good. Well, to everybody listening, we'll put the links to Matthew's website, Caden's book, you can buy Caden's book, and of course,

 

Dr. Susan Birne-Stone  59:24

the charity, so we'll get that information too, and Mary, your Moon Mark information as well. Thank you. Thank you to the listeners and to the viewers, and Marco, thank you.

 

Marco Ciappelli59:37

Thank you, Susan. Again, thank you, Mary. It was great.

 

Mary Hagy59:44

Congratulations.

 

voiceover59:51

Thank you for listening. This show was brought to you by The Mentor Project. If you enjoyed this segment, there are many ways to thank us. Please consider subscribing to our podcast, making a tax-deductible donation, or becoming directly involved. Subscribe to this podcast and visit us at www.mentorproject.org to learn more.