ITSPmagazine Podcasts

Book | Luna and the Magic AI Paintbrush | A Lesson in Technology, Modern Society, Creativity, and Staying in Control of the Tools We Create | A conversation with Author Rob van der Veer and Sean Martin | Redefining Society Podcast With Marco Ciappelli

Episode Summary

When I first came across Luna and the Magic AI Paintbrush by Rob van der Veer, it struck me as more than just a children's story. It’s a modern metaphor for the delicate balance between humanity and technology—something I explore often on my Redefining Society & Technology Podcast. In this latest episode, I had the pleasure of diving into this topic with Sean Martin and Rob himself, using Luna's magical AI paintbrush as a springboard for a much bigger conversation about our hybrid analog-digital world.

Episode Notes

Guests: 

Rob van der Veer, Author, Senior Principal Expert, SIG

On LinkedIn | https://www.linkedin.com/in/robvanderveer/

On Twitter | https://twitter.com/robvanderveer

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

_____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast & Audio Signals Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

_____________________________

Episode Introduction

We live in a world where the boundaries between the physical and digital are fading fast. Technology isn’t just something we use—it’s becoming an integral part of our lives. And while this offers incredible potential, it also forces us to question how we interact with it and where we draw the line.

During the episode, we asked: Is technology steering us, or are we steering it? Too often, we follow the "blinking lights" without stopping to think about their impact on our values, creativity, and even our identity. These are the kinds of conversations we need to have if we want to define—not just adapt to—our hybrid society.

Lessons from Luna: AI as a Tool, Not a Master

Rob’s book offers a refreshing take on technology, packaged in a way that even kids can grasp. Luna, the protagonist, discovers that her AI paintbrush is a tool, not a creator. It’s not perfect; it makes mistakes. And that’s the whole point. The story teaches that while AI can enhance what we do, it doesn’t replace the magic and creativity that only humans bring to the table.

This is a lesson many adults need to hear, too. AI is powerful, but it’s still a tool. The beauty, the meaning, and the intent behind its use come from us, not the machine.

The Human Touch: Creativity AI Can’t Replicate

We dug deeper into the limits of AI. Sure, it can create stunning visuals, write convincing text, and even mimic human expression, but it doesn’t feel. It doesn’t know what makes a piece of art moving or why a story resonates. That’s uniquely human, and it’s why creativity will always be ours to own.

As Rob put it, AI might help us go faster, but it can’t replace the soul behind the work. If we let it, it can be a partner—but we must stay in control.

Staying in Control of the Tools We Create

One of the key takeaways from our conversation was a reminder to pause before jumping headfirst into using technology. Whether it's AI or something else, we need to understand it, question it, and think critically about the long-term implications.

Over-reliance on tools like AI can erode skills, creativity, and even decision-making. As we embrace these innovations, it’s up to us to ensure they serve us, not the other way around.

Redefining Society: Our Role in Shaping the Future

As I reflected on the conversation, it became clear that redefining society isn’t a one-time decision—it’s an ongoing process. Each technological leap gives us a choice: to let it shape us or to actively shape our relationship with it.

That’s what makes Rob’s book such a powerful metaphor. It’s not just a children’s story; it’s a call to action for all of us to pick up the paintbrush—whether literal or metaphorical—and decide what kind of world we want to create.

Wrapping Up

In Luna and the Magic AI Paintbrush, Rob van der Veer reminds us of an essential truth: tools are only as magical as the hands that wield them. In this hybrid analog-digital age, we need to embrace technology thoughtfully, never forgetting that it’s human creativity and control that drive progress.

I invite you to listen to this episode of the Redefining Society & Technology Podcast to reflect on these themes. And if you have kids, maybe share Luna’s story with them—it’s never too early to learn about the power of imagination and responsibility in our increasingly connected world.

About the Book

Can an AI-powered paintbrush create an artistic masterpiece on its own?

This is a story about understanding how AI works through the experiences of a creative young artist. Meet the little girl who discovers that, when working together with an AI paintbrush, she can create great art, but only by combining her imagination with the brush’s technical ability.

Seriously Simple Series Books - Book 1 of Sub-Series: AI Made Simple
Big subjects made simple for kids. All stories are created in collaboration with experts who are in the digital space.

_____________________________

Resources

Luna and the Magic AI Paintbrush (Book): https://www.amazon.com/Luna-Magic-AI-Paintbrush-Seriously/dp/9083414477

_____________________________

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

Book | Luna and the Magic AI Paintbrush | A Lesson in Technology, Modern Society, Creativity, and Staying in Control of the Tools We Create | A conversation with Author Rob van der Veer and Sean Martin | Redefining Society Podcast With Marco Ciappelli

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Marco Ciappelli: Hello, everybody. Welcome to a new episode of Redefining Society and Technology Podcast. I need to change the title for you guys watching this, but because we ended up always talking about technology and society, I said, why not? Let's just put it in the title and let's get over with that. And it's always about which one is influencing the other more, or are we just, as Sean has heard me say many times, following blinking lights and funny noises, so that technology drives us wherever we're going now, and we do things just because we can. Um, I like to think that we are not, but I'm not a hundred percent sure. 
 

So that's why we have these conversations. And today I have Sean acting as my co-host and guest.  
 

[00:00:56] Sean Martin: Look at that. You invited me back. It's been a while.  
 

[00:00:59] Marco Ciappelli: I didn't invite you. You just said, I'll show up. And  
 

[00:01:02] Sean Martin: I heard, I heard he was going to be on. I'm like, I'm, I'm just gonna.  
 

[00:01:05] Marco Ciappelli: All right. All right. Well,  
 

[00:01:06] Sean Martin: I love Rob. 
 

[00:01:08] Marco Ciappelli: The truth is that that's the reason why Rob is here. So Rob is right in the middle here. If you're watching the show, if you're listening to it, he's in the middle. And, uh, yeah, Sean and Rob, they met at a cybersecurity event, and I forbade them to talk about cybersecurity today, but we're going to talk about artificial intelligence. 
 

I don't know if I'm able to do that, but we'll see. We'll see. Rob, uh, I'm glad that you met Sean and that he didn't scare you. And then he said, hey, you should talk to Marco, and here you are. So welcome.  
 

[00:01:41] Rob van der Veer: Glad to be here. Thank you guys.  
 

[00:01:42] Marco Ciappelli: That's very good. So I would like to start with a little introduction about who you are, and, uh, kind of throw in there that you work in cybersecurity, but you're also, and especially, an artificial intelligence expert. You participated in writing regulation for the European community, but also you wrote a book called The Future of Cybersecurity. 
 

And we're going to be talking about a book for kids called Luna and the Magic AI Paintbrush. And being also a person that writes kids' stories, I was fascinated by that. So we're going to talk about that as well. So let's start with you. Who is Rob?  
 

[00:02:22] Rob van der Veer: Thank you. So yeah, who's Rob? Rob is a computer nerd. Did computer science, studied AI. 
 

In the beginning of the nineties, I started in the AI industry. And I was there for, for two decades as a data scientist and hacker programmer, uh, CEO for, for nine years. And then I switched to Software Improvement Group, because I made so many mistakes in IT that I wanted to help companies try not to make the same mistakes. 
 

So I became a consultant. Uh, and also did a lot of innovation around AI security and privacy in the company, helping organizations all around the world, and recently, uh, got involved quite intensely in, in AI regulation. So I'm the co-editor of the AI Act security standard. And I also founded the OWASP AI Exchange, which is a sort of open-source worldwide collaboration to help mobilize a group of experts to get these standards done in time. 
 

So, yeah, that's, that's me. Oh, yeah. And I wrote a children's book. We'll get to that.  
 

[00:03:34] Marco Ciappelli: We'll get to that. Sean, how about we get to that? How about you? You met Rob and heard about the book, and that sounded like a good idea, so  
 

[00:03:43] Sean Martin: We, yeah, Rob and I met at, uh, OWASP in Lisbon, and his presentation there is amazing; if it's online and available, 
 

I encourage everybody to catch that. Rob, he speaks all over the place. I'm sure there's, there's different versions of that. Um, since I'm forbidden to speak about cybersecurity, I'll just say that morals and ethics matter. And cybersecurity feeds that. So if you're not building something that's going to function safely for people, you might not be doing the best thing for folks. 
 

So I think there's a lot of that and you see that in the AI Act that you champion there, Rob. So, met Rob, great conversations, great presentations I've seen, um, kept in touch and, and, uh, noted, noted the book and here we are.  
 

[00:04:38] Marco Ciappelli: And here we are indeed. Here we are. So, Rob, why don't we start with, with that? Like, you, you write all the legislation, you talk at a high level, 
 

Uh, with experts of all sorts, but you felt like maybe I should talk to kids.  
 

[00:04:55] Rob van der Veer: Yeah, it started with, uh, an increased importance of talking about more than the technology. Um, and instead talking about how we as humans, uh, adopt it, and how do we collaborate with it? This was increasingly a theme with our clients. 
 

We help a lot of software engineering teams, and they increasingly started to use AI. And the most important topic to really help them with was how to not over-rely on it and how to not under-rely, uh, on AI. So to sort of understand what it is and how you can best, uh, let it augment your workflows. And that has led me to write a couple of pieces on it, and those got a lot of traction. 
 

Uh, some of them more than a hundred thousand views, which is to me extreme on LinkedIn. So it resonated. Uh, people want to talk about how to build a future with AI. Uh, and then, uh, Bessie Schenk from Seriously Simple Books approached me and said, I want to write, uh, books explaining technology to kids. And, uh, I know you from the field. 
 

How do you fancy being my expert and writing this book together? And I immediately said yes, no negotiations whatsoever. I, I felt I need to do this, because the kids from ages four to eight, in this case, they are forming their brains, they're forming, you know, the way they view the world. 
 

And it's, I think, an ideal moment to instill some ideas and thoughts. And it's, it's not heavy stuff, but it is showing that AI can make unexpected mistakes and how you can best collaborate with it. Uh, yeah, we hope they pick it up and, uh, this is, you know, another way to try to make a difference, uh, to, uh, to as many people as possible when it comes to AI. 
 

[00:06:52] Marco Ciappelli: Yeah, that's really cool. And I think Sean and I have been talking about this for a while. We've never been shy about the fact that we do use AI to help us in writing things. It doesn't mean that AI writes things for us; it writes with us. It helps us to do things a little bit faster. But I think the core of the story here is the fact that there is a combination of the imagination of the girl that works with the AI and is able to do things using the technical capabilities of the AI. Which, for me, we could stop the podcast right here because I already said it all, but I'm sure you can elaborate a little bit on the, on the concept of using it. 
 

[00:07:45] Rob van der Veer: Yeah, I think, um, there are many ways to use it. Um, the wrong way to use it is to use it too much and use it to finish a complete task without you being involved. That's the wrong way. Uh, compare this to, uh, driving a car that's almost autonomous, but makes serious mistakes in, you know, 1 percent of the cases. 
 

If you drive such a car, that's a very attractive thing to do, because you can scroll on your phone and read your newspaper and whatever, eat your food in the car. Uh, it's attractive, but the problem is that you get lazy and you're not alert anymore. So you don't notice the mistakes, and the moment that you have to intervene, you may not have noticed what's going wrong. 
 

You need the skills to intervene. You need to be able to drive a car, but you haven't been driving a car, so you lost those skills. And this is, I think, a good analogy with how we sometimes tend to use AI. Let it generate code. Look at the code a little bit. Think it will probably be fine. What could possibly go wrong? 
 

And then put it in production. Or doing research, and, uh, increasingly relying on the answers to be true. And we all know that they tend to not be true because of hallucinations, which are not side effects of generative AI, but are just part of the game. Uh, and when we over-rely, that's when there are problems. So key is to strike the right balance between being completely uninvolved and not using AI at all. 
 

And that means dissecting your workflow, finding the tasks that are boring to you, uh, where you can apply AI, and keeping the tasks where you add your ability to make, uh, great decisions and use your creativity and your understanding of what the end result should be. But that requires some effort, you know, breaking down your workflow into a combination between you and the AI. 
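Rob's "dissect your workflow" idea can be sketched as a simple triage: tag each task, delegate the boring, low-judgment ones to AI, and keep the judgment-heavy ones human. The task list and its fields below are purely illustrative assumptions, not anything from the episode.

```python
# Hypothetical workflow triage: boring, low-judgment tasks are AI candidates;
# anything needing human judgment stays with the person.
tasks = [
    {"name": "draft boilerplate code", "boring": True,  "needs_judgment": False},
    {"name": "summarize long notes",   "boring": True,  "needs_judgment": False},
    {"name": "review generated code",  "boring": False, "needs_judgment": True},
    {"name": "decide the design",      "boring": False, "needs_judgment": True},
]

# Split the workflow into the AI-assisted part and the human-kept part.
delegate_to_ai = [t["name"] for t in tasks if t["boring"] and not t["needs_judgment"]]
keep_human = [t["name"] for t in tasks if t["needs_judgment"]]
```

The point of the split is Rob's balance: the AI augments the workflow without ever owning the decisions.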
 

But it is the best way. And  
 

[00:10:03] Marco Ciappelli: Sean, you raised something before we started recording about actually using AI without knowing that you're using AI. I think that that's a problem. Well, you want to elaborate on that?  
 

[00:10:17] Sean Martin: Yeah, I was just, just thinking of all the places, and AI is not new, right? It's been in things for many decades, I believe. 
 

Um, we just now see it as a tool with an interface in, in what many people know as ChatGPT. Um, but it's, it's been in many places, continues to expand in many more places. And it just struck me when, when, uh, talking about the book, uh, for kids. Now, I believe the book is specifically looking at using AI as a tool to, to create new things. 
 

Um, but we have devices, we have phones, we have cars, we, we fly on planes that, that have AI. In many cases, we don't know that we're using it. Um, and what I want to touch on is, is this idea that, I mean, there's making ourselves more efficient, so automating our life and leveraging AI to help with some of those things, which you just described. 
 

And then, in, in creation, um, it's about finding new things that maybe my, my brain, as one example, can't think of naturally, or maybe not fast enough, or in certain ways; an AI might open, open up things for me in that way. In one sense, you want some guardrails, right? And in other cases, maybe you don't want the guardrails, to see what comes, come what may. 
 

But to your point, in both of those cases, whatever I choose to do with the results, I better know that I have a hand in what happens with it, right? Do I trust what it created? Do I trust the path that it took, if it was an automated type of thing? And where I get a little concerned is when that's buried in another application or a device that I don't know exists or how it works. 
 

And therefore I'm blindly trusting the results of this stuff. So I've said a lot there, but that's kind of the thought process that I was going through with this.  
 

[00:12:24] Rob van der Veer: Yeah, it is. It is a concern. I agree. Um, some AI is integrated in such a way that it's not really a problem, like, for example, uh, face recognition on your phone to get authenticated. 
 

Once we're looking at content that's generated by AI, there can be risks involved. And the AI Act is very clear about this. Uh, you need to be transparent about it. So if you are creating solutions and you have European users, the AI Act forces you to be transparent about that use. So that sort of addresses that problem, Sean, that you're, you're pointing out. And it's very tempting for vendors not to do so. I've seen vendors with, for example, um, remediation advice for software engineers generated by AI. 
 

And I noticed some mistakes, and they didn't put the disclaimer, uh, there. Uh, because once you put the disclaimer there, you're actually saying what you're reading here could be completely false. And that doesn't sell well. So it, you know, counters some, some business models, uh, to, uh, to be transparent, and that's something that we need regulations for. 
 

[00:13:41] Sean Martin: And I know, I'll interject quickly, Marco. Just, there's a large retailer online that I might buy some things through and might need some support with some of those things on occasion. And it's clear, only because I'm thinking about it, that I'm interacting with an AI-enabled bot. And, um, I try to have fun with it. 
 

So I use specific language that I wouldn't necessarily use with a human person as I engage with it, but I'm, I'm very precise and I use the same language over and over and over. And I haven't really seen it yet, but I'm expecting it to start to respond in a similar language or tone to the one I'm engaging with it in. 
 

So I haven't seen it quite yet, but I've, but I keep feeding it this, this stuff to see. But they, to this point, they don't disclaim it. They, uh, sorry, disclose it, that that's there. But you can tell, and only until you actually connect with a real person.  
 

[00:14:47] Rob van der Veer: I guess if they call it a chatbot, Sean, then that, um, I don't know what the AI Act says about it, but in this case, I think if they call it a chatbot, uh, it should be clear. But if it's a chat interface that could just as well be a human, 
 

That's actually against the, uh, European regulation.  
 

[00:15:07] Sean Martin: Yeah. And I don't know that it actually, well, it doesn't give it a name. So it's just an interface that you interact with. And then eventually a name is given if you need, if you head down a path that requires that. But, um, and who knows, I, I sometimes feel that even the human one. 
 

is being supported by AI. They're being fed, fed the stuff to say. Well,  
 

[00:15:34] Marco Ciappelli: I, I actually had the experience where I think that the AI was actually more human than the human itself. But, uh, no, I, I get the point. I mean, I mean, I think the disclosure is, it's very, very important, especially if we want to go back to, to the kids that, that interact with things and don't think that 
 

All of a sudden it's magic, right? But at the same time, I feel like, and I want your opinion on this, Rob: what's the difference between trying to teach these basics in a very simple language to a seven-, eight-year-old kid, versus trying to explain exactly the same thing to someone that has probably the same knowledge of AI as a kid? 
 

That means they don't know anything about it, but it's actually a well-formed brain that is used to doing things in a certain way, seeing things in a certain way: an adult or a teenager. Do you, do you think you need a different approach there?  
 

[00:16:42] Rob van der Veer: No. And that has been demonstrated by the reactions that we got from guardians and, and, you know, people who were reading this book to, to kids, uh, that they were the most enthusiastic. 
 

The kids, you know, they, they benefit from it, but they can't see the perspective of how important it is for their development. They say, nice colors and funny story, and that's, that's great, because we want them to engage. But, uh, the people who read it to them, uh, they really said, wow, yeah, this, this made me see some, really some, some good points in there. 
 

And sometimes when I do a training on AI, I include, uh, stills from the, uh, from the book to get the points across.  
 

[00:17:29] Marco Ciappelli: So give me, give me some examples of the things that you, that you talk about in the, in the book.  
 

[00:17:36] Rob van der Veer: First of all, we didn't want to depict the AI as something with a soul. People do that a lot in, in, in pictures, uh, storybooks on AI. 
 

Nice pupils and smiles. But we didn't want to anthropomorphize it, because that's, that's already going on too much. Uh, we didn't want the kids to believe that it has a soul, and the AI is actually, uh, a paintbrush. Now, let me show you how we, how we pictured it in the book. Um, I have it right here. 
 

So Luna, the main character, wants to paint, and then she finds this paintbrush, this AI paintbrush, and she starts to paint, and it turns out that the painting has severe mistakes, and she doesn't notice, but her parents do. So that's lesson number one: it looks great, it looks really certain about what it's doing, but it can have mistakes. 
 

That's a really important thing to get across: mistakes that you don't expect. And then the AI explains, listen, yeah, I'm just a machine. I'm not a person. I've seen a lot of pictures in my life. Uh, people trained me, and that's what I base my work on. And that's why I cannot paint a horse that's lying down, because I've actually never seen one. 
 

Um, and then Luna realizes this, and, uh, learns that the AI cannot read her mind, uh, and cannot understand what she wants, also because it is not human, although it appears human. And then she learns, and that's actually the moral, that she needs to stay at the helm and hold the paintbrush, and, um, together they can make great results, but stay in control. 
 

That's, I think, the main message of the story. And the funny thing is that that moral, or the, the way we wrote it down, was actually created by ChatGPT. So I fed the story, uh, into ChatGPT and I said, listen, I want to get across that, uh, as a person, you need to stay at the helm when you create something with, uh, with AI. 
 

And then ChatGPT figured, listen, you call this book Luna and the Magic AI Paintbrush. Why don't you make the whole point that the magic is not about the paintbrush, but the magic comes from Luna? And I had goosebumps when ChatGPT, uh, proposed this. Uh, I was not surprised by its creativity, but in this case, sort of ChatGPT, you know, finishing the book. 
 

I think that was the icing on the cake.  
 

Yeah,  
 

[00:20:25] Sean Martin: I, I love that. I mean, the, the picture you showed, uh, the paintbrush, and it has the AI label on it, to me that really, that demonstrates that you have different tools in life to do different things, and it could be a hammer, it could be a saw, it could be a guitar, right? 
 

And it can also have an AI label and create new things. So it, AI, is in, in the tool, in the thing. And I love what you just said, that the magic comes from the person using the tool with the AI. Uh, I love it. Cool.  
 

[00:21:03] Marco Ciappelli: That's really cool. So on my, on my side, I got to tell you that when I, I write the stories, it's actually my mom, she writes the first draft, and then I get on it and I kind of help her. And, um, she, she brings her 75 years of, um, past into it. 
 

So many stories are very traditional and very Tuscan and Italian in a certain way. But because she deals with me, uh, quite a bit, she does stories about robots, and she throws, you know, science in there, and technology. And, uh, and when I feed it to AI, uh, ChatGPT, I say, hey, what do you think about this? This is for, for kids, or the young at heart. 
 

The target will be 8 to 14, let's say, but it's enjoyable by people of all ages. And, um, it's really good at finding the moral of the story. I, I share that with you because, ultimately, I, I think it's pretty much human. I mean, it's learned all the stories that have been written by us as humans. And what's more important in a kid's story than trying to teach something that they can use? 
 

[00:22:24] Rob van der Veer: It definitely has human abilities. It is not human, of course. Uh, it's, it's, it's made to appear human. You know, when they, when they train a large language model like, like ChatGPT or Copilot or Gemini, they use a technique called reinforcement learning from human feedback, which lets people pick the favorite answer. 
 

And then it learns how to make a favorite answer, and favorite answers are really confident and really human-like, really kind. And that, uh, sells the best, but it sets the wrong expectations. Because you expect it to make mistakes like humans do, and it doesn't. And you expect it to have certain values, uh, and act like a human, an ally. 
 

But AI doesn't project everything that it's read about what is right and what is wrong onto itself. It's not an entity in that sense, and therefore not a human, but you may expect it to be. And if you expect it to, you tend to get, uh, uh, disappointed, and it does things that you, that you didn't expect. But indeed, uh, it can do fantastic things. 
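Rob's description of reinforcement learning from human feedback boils down to pairwise preference: people pick a favorite between two answers, and a reward model learns to score answers so the picked one wins. A minimal, purely illustrative sketch of that comparison follows; the toy reward function and its weights are assumptions for illustration, and the sigmoid is the Bradley-Terry form commonly used in preference-based reward modeling.

```python
import math

def preference_probability(reward_a, reward_b):
    """Bradley-Terry style probability that answer A is preferred over B,
    given scalar reward scores: sigmoid of the reward difference."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

def toy_reward(answer):
    """Toy reward model: confident, human-like style scores high,
    which is what repeated human preference picks tend to reinforce."""
    score = 0.0
    if answer["confident"]:
        score += 1.0
    if answer["human_like"]:
        score += 1.0
    if answer["accurate"]:
        score += 0.5  # accuracy helps, but style can dominate
    return score

hedged = {"confident": False, "human_like": True, "accurate": True}
confident = {"confident": True, "human_like": True, "accurate": False}

# The confident-but-inaccurate answer wins this preference comparison,
# illustrating Rob's point about wrong expectations.
p = preference_probability(toy_reward(confident), toy_reward(hedged))
```

With these toy weights, p comes out above 0.5: style beats accuracy, which is exactly the expectation gap Rob warns about.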
 

[00:23:38] Sean Martin: things. 
 

I want to ask you this, Rob. It's something that's been on my mind for, I don't know, maybe eight, nine years, when we started ITSPmagazine. One of the, one of the first things I wrote was around, we move into a world where everything is digitalized, technologically driven, and we kind of leave behind a lot of the physical aspects of life, and at some point they'll become nostalgic and we want those things. 
 

I, I, I hold that position because I find, and this is where I want your opinion, I, I feel that technology will lead us to the best answer, and that best answer will be what I call the common denominator. So it'll, it'll kind of narrow us down into what it thinks we want to hear. And that's where it leads us all the time. 
 

And we end up in this world where everything looks the same, everything sounds the same, everything smells, you know, all that, whatever it senses. And I'm afraid that it's not just a common denominator, but perhaps the lowest common denominator, depending on where society takes things. So I don't know your, your thoughts on that. 
 

[00:24:50] Rob van der Veer: It's already taking place, right, in the bubbles that algorithms create by selecting the content that we engage with the most, which is, um, economically, uh, the best model, but, uh, society-wise not, uh, because, um, well, we got the internet and we got algorithms to help us select the information that we have indicated that we want the most. 
 

And it creates a situation where we can't agree on the simplest things, like, uh, who won the election, or, or should we get a vaccine or not? It's not helping our knowledge position, and it's creating bubbles. It's, it's polarizing. So, uh, the common denominator that you mentioned, we have multiple of them, and they're all local, local, uh, optima. 
 

Um, and it polarizes, uh, society, unfortunately. Uh, I do believe, and we've seen it in the past, that, uh, technology adoption goes to some extremes, and then we take some steps back, because we learned how things can go wrong. And we set some rules, sometimes by regulations, sometimes by culture. 
 

Uh, when we realized that a certain use of a, of a mobile phone just is, you know, something you can't do. But that is because we've learned that it has negative, negative effects. And the same I predict to, uh, happen for, uh, that whole bubble effect, where algorithms will take more into account, uh, that it's important for people to step outside of their bubbles and to, uh, learn to see, uh, more. 
 

And the same goes for how we work with AI. For example, um, using AI to summarize, uh, online meetings. Uh, some organizations say that is the biggest productivity gain from AI. Others have forbidden it, uh, because things can go really, really wrong. We need to find a way to work with this technology and set some guardrails, indeed, make sure that we use it in the right way. 
 

Uh, and yeah, um, try things, learn from them, and then, uh, and then change it. And the most difficult things to change typically are the ones where there's a business model behind it. I mean, don't get me wrong, Big Tech is bringing us these, these new innovations, uh, which, which can, you know, um, make our lives greater. 
 

At the same time, they have a business model, and that can be counter to what is best for us humans. Um, and we'll, we'll realize this step by step.  
 

[00:27:30] Marco Ciappelli: Yeah, I don't know how you guys interact with this, but me being Italian, English is not my first language, as people can tell. Um, I welcome ChatGPT or any grammar corrector. 
 

But as a writer, I do not like when ChatGPT changes the way I write. So it's always a battle between fix the grammar, and then I see that and I usually say, tell me what you changed, because I'm very, very protective about that. And then eventually I say, you know what, just fix the grammar. I don't, I don't want you to change that. 
 

So I feel like what I'm saying is: you keep that active role. You're still the one writing, you're getting an editor, and it may suggest something that maybe you didn't see. Which is kind of like the same thing that Luna learns: she needs to keep the prompt and, and imagine what she wants the brush to, to do, but also be able to say, you didn't do it the way I told you. 
 

You know what I mean? So, yeah, your, your feedback on, on this, maybe on the way that you use AI, 'cause I know you use it.  
 

[00:28:43] Rob van der Veer: Yeah. Yeah. I think the, um, the end of the book where, where Luna is holding the brush is an analogy for us finding the right way to work with the technology. Like you found your way saying, just fix the grammar. 
 

I'm also, and I think many of us are, exploring this and finding ways to improve our writing. For example, sometimes I'm really stunned how ChatGPT is able to make my writing a lot clearer. Um, and, uh, what I've done recently, I started to look into prompt engineering and to find out how I can, uh, set my own style as an example. 
 

And you can do that. So you can set up a prompt. It's more work, it's elaborate, but you can give some examples of your own writing style. And if you start with your own first draft, you ask it to improve it for clarity. I always say, don't change it too much, and use English for people for whom English is not their mother tongue, which makes it easier to read.
 

You have a couple of those instructions that are likely to stay in there. And that way I've managed to stay authentic while improving structure and clarity and, of course, grammar and spelling. So it's about finding the right way to work with the technology. And it is complex.
 

We haven't all learned this yet, because it's so new and we're all, you know, exploring our best way to interact.
 

[00:30:26] Marco Ciappelli: Sean, easy button. Just do it.  
 

[00:30:29] Sean Martin: I am lazy. I'm lazy. No, what I find is, I don't have a standard prompt that sets my style. So I just let it kind of go with some initial direction and see what it comes back with.
 

And I find that I reprompt, reprompt, reprompt, giving it regular feedback until it sounds like me and I say, that's what I wanted to say. Good strategy.
 

Yeah, I think it takes more time, so it's not reducing as much time as maybe if I did it in a different way. But it's also not just limited to that initial example. It's starting with its own creative juices, if you will, and then I'm tailoring it to what I really wanted to say.
 

[00:31:25] Marco Ciappelli: Yeah, I think this is a good example of what Sonia said at the beginning, and Rob, you reinforced it.
 

If we all do things in the same, whatever, perfect way, it's a common denominator that just lowers our standard. And I'll give you an example: I was recording my other show with a musician, a classical guitar player, and we were talking about what it means to be the standard, let's say, for playing classical guitar.
 

And he said, well, if you think you're the standard, you're not doing it right, because you need to keep making mistakes in order to grow. He also brought up the example of somebody like Yo-Yo Ma playing the cello, who spent his whole life trying to be perfect, and then realized: I need to be imperfect to be perfect.
 

And I think that's a huge, huge lesson that we need to apply when we use the technology: we need to retain our imperfection.
 

Any comments?  
 

[00:32:31] Sean Martin: That's a deep one. You heard this yesterday, Marco, I'm going to unplug it, but, I mean, granted, this is basically technology, right? A computer-controlled synthesizer. It has its limits. So it is a box, right? It is boxed and it has its limits, but inside it I can tweak it and adjust it, and if you turn this knob a little one way and the other knob a little another way, it sounds different than with those knobs turned the opposite way, and the speed and the tone and everything I can get.
 

And it's not perfect, right? What is perfect and what's imperfect? There are so many dimensions inside there. And I love music, so I'm always looking for that odd sound, something that sounds like maybe something I've heard before, but with my own unique spin on it. And so this is not AI, but it goes to the point that it has all that it needs to help me create something that's unique to me to share with others.
 

[00:33:44] Marco Ciappelli: So, but you need to break it.

[00:33:45] Sean Martin: Some people do break them.

[00:33:48] Marco Ciappelli: No, no, you need to make it. Meaning, you know, you need to find that blue note or you're not going to play the blues.
 

[00:33:54] Sean Martin: Yes, I need to make mistakes. Most people may not put together that particular combination, but ultimately, in the grand scheme of things, it may work in what I'm putting together.
 

[00:34:08] Rob van der Veer: Yeah. There are mistakes and there are mistakes. Some mistakes are beautiful, and that is because the model that defines what is perfect is not the model for beauty. If you look at, for example, Stewart Copeland, the drummer of The Police: he changes tempo, as many drummers do, but in such a way that you can't figure out how and why, and it doesn't follow any specific models that we have. It looks like a mistake, like he's slowing down, but there's actually beauty behind it.
 

And the same goes for, you know, all art, including writing, where things that don't follow the rules, which you could call imperfections, are actually what makes it great.
 

[00:35:05] Sean Martin: Very true. Very true. I was thinking about that the other day. In fact, "Spirits in..." I can't remember the name of the song, the "Spirits" song. What it does is it draws you to that moment, that change, versus just letting the song roll. At least to me, it pulls me in, like, oh yeah, I'm listening to something really cool here.
 

[00:35:29] Marco Ciappelli: All right, I guess after this I have to open one of my Police vinyls, because yesterday we were talking about "Walking on the Moon." Remember, Sean? So I think the lesson here is that you still need to know the rules if you want to break them. I think I'm quoting Picasso here, but it's like: you can't just throw paint on the wall.
 

You need to know how to paint, and then you can play around with it. I mean, Copeland knows how to play the drums, he knows the music; he's not just... But sometimes there can be something completely random that creates something completely beautiful. And I don't think artificial intelligence thinks that way, because it tries to follow certain rules.
 

Who knows?  
 

[00:36:15] Rob van der Veer: Yeah, I'm not certain. I do think that...

[00:36:20] Sean Martin: I'll be more direct. I disagree.

[00:36:26] Marco Ciappelli: So, five minutes to bring it.

[00:36:27] Rob van der Veer: It's interesting. So what makes human imagination, where is it different from AI? At least I think we can agree that humans are the consumers of what is created, so they are more in touch with what is beautiful and what is needed than AI is. AI has to pick up some of the, you know, the rules or the patterns from existing material.
 

So if it picks up some patterns of drummers slowing down certain parts of a musical piece, it can really learn these rules that go beyond the theoretical models and create some of that beauty. Humans will be better at this, I think, because they are actually the consumers; they're more in touch with what people appreciate.
 

But I don't think there is a real structural difference between the type of creativity in AI and in humans. I know that's an unpopular opinion.
 

[00:37:38] Marco Ciappelli: Question. Do you think that if AI doesn't listen to that pattern from Copeland that you're talking about, it will ever think to just slow something down?
 

Or is it just doing it because it heard someone doing that, and so it copies a mistake and reproduces it? Something that the human did either purely by mistake or purely by imagination.
 

[00:38:12] Rob van der Veer: Yeah, I think it can sort of skip the mistakes and see the patterns in how humans play music. And it is mimicking, right?
 

It is mimicking; it is not feeling it. And I think that's where the difference is. But it can mimic things pretty well.
 

[00:38:31] Sean Martin: And that's why I disagree, because for me, I want to feel it first. And a lot of times, well, Marco, you know I'm picking up the guitar, I'm messing around with this synthesizer.
 

A lot of times I'll start with not knowing anything. I'll just see what happens, see what I can do and how far I can take it. And then I might look for examples. For the guitar, for example, it might say: this is the way I suggest the strings be strummed, and I'll try that.
 

And I'm like, yeah, but if I do a pick here instead of a full strum, I might get a little different sound. So in that sense, I'm following an example and then tweaking it to my own. But 99 percent of the time I like to lead with: I don't know what the rules are. What am I feeling, and what comes from that first? And then maybe shape it and transition into something, or lean on an example of something that I know I like, right?
 

So I'll feel it for myself first, then listen to something else, in the music example, and see if there's a way to mix the two things together.
 

[00:39:46] Marco Ciappelli: So, Rob, how about we finish with a moral that you learned from writing Luna and the Magic AI Paintbrush, but one you would use to give the moral of the story to an adult.
 

Would you use a different metaphor, maybe, to do that?  
 

[00:40:15] Rob van der Veer: Control your natural tendency in working with AI. Take a step back and think about the long-term effects of what that natural tendency results in, which could be you losing your skills, or you over-relying and AI messing up really serious things in your life. Those are the risks that we talk about.

So don't trust your instincts when you work with AI, because it has been shown that those instincts are typically wrong, partly because AI is designed to behave like a human. But it's actually alien, and because it's alien, we need to be more careful in how we embed it in our way of working, take that step by step, and also listen to what others have learned, instead of, you know, messing up first and learning from that.
 

[00:41:18] Marco Ciappelli: You don't need to hit the wall if I've been telling you there is a wall there, right? All right. 
 

Well, this was a great conversation. I sure am glad that Sean brought Rob to the show. We talked about the things I love the most: music, storytelling, technology and society, looking into the future, and getting a bit philosophical. So I'm happy. I hope you guys had a good time, and I do hope that the audience had a really good time too.
 

[00:41:51] Sean Martin: Fantastic. Good to see you, Rob. Always good chatting with you. I love it, gets the mind going. Let's make some music sometime. Ah, Marco and I are looking at some ways to do that, sitting in different places.

[00:42:04] Marco Ciappelli: Just found a little jam app we need to test. Oh yeah. Music together. Yeah.
 

So there you go.  
 

[00:42:11] Sean Martin: That'll be fun.  
 

[00:42:12] Marco Ciappelli: We got that. And then another thing: whenever you want to come back, Rob, and have another conversation about AI, I'm more than happy to have you back. This has been fantastic, and I really hope the audience enjoyed it. If you did, check the notes; there'll be links to Rob's LinkedIn and, of course, to the book as well.
 

Sean, you know how to find it, so I'm not going to repeat it. Subscribe; there'll be many more conversations that I hope will leave you with more questions than answers when we're done. Because questioning gets you thinking, and that's important. Thank you very much. Thank you. Thanks, guys.