Tried & True With A Dash of Woo

The Psychology of AI: How Technology Is Quietly Rewiring Our Minds

Renee Bowen Season 3 Episode 110

Have feedback? Text us!

Last chance to apply for ELEVATE - click HERE to apply

SHOW NOTES:

In this eye-opening conversation, technology journalist and author Jacob Ward joins me to talk about how AI is reshaping human behavior in ways most people never see coming.

We explore why our brains are so vulnerable to AI’s design, how companies are creating emotional companions on purpose, and what it really means to outsource our thinking without noticing it.
 

We also get into the future of creativity, the danger of frictionless living, and how to reclaim your agency in a world where even your attention is monetized. This episode will leave you thinking differently about your relationship with technology, your brain, and the choices you make every day.

Connect with Jacob: 

Want to work with Renee?
SCHEDULE A FREE DISCOVERY CALL HERE

LEAVE A REVIEW in 5 seconds flat (helps us a ton!)

JOIN the Podcast & Creative Community

LEARN MORE about Renee at
www.reneebowen.com - main site (photography + coaching)
&
www.reneebowencoaching.com (coaching + courses)

SOCIALS:

Instagram
Facebook
TikTok

FOR PHOTOGRAPHERS:
FREE TRAINING for Photographers


Make sure you TAG me when you post on social! Once a month, we choose one person who leaves us a review and send them a FREE Audible book of their choice!

SPEAKER_00:

Okay, photographers, this message is for you. There's a lot of education in our industry, as I'm sure you know, but I do something a little differently with my group coaching program that I call Elevate. And I want to talk to you about it for a second because we are re-enrolling for this next round. It's a six-month program. And when I tell you there's really nothing else like it, here's what I mean. Yes, we do have Zoom calls every month, and there's a portal with a lot of information for you online that we add to as our time together progresses. But what makes this really unique is, first of all, I am certified in life coaching. So we really do take into account the entire person that you are and not just the photographer that you are. So we talk about mindset, we talk about abundance, we talk about manifestation, and then we also absolutely talk about strategy and money and profit and making sure that you are running your business as efficiently as possible. You get access to me via Voxer, which is a voice-texting app, pretty much 24/7. I don't know any other photography educator out there who is this accessible to their coaching students. It's kind of unheard of. But here's the thing: I created this program and all my programs for the photographer that I once was, for that version of myself who really struggled and needed this so much, but couldn't find it. It didn't exist. So if you're looking for not just community and accountability, but also serious education and ongoing support and mindset from someone who's actually certified and understands how to pull out of you what you can't do on your own, then you need to go ahead and check out Elevate. It's by application only. So you'll need to go ahead and hit the link in the show notes. And then someone will get back to you and let you know if we think you're a good fit. And we may even have to hop on a quick call to double check. But I look forward to hearing from you and working with you to build the most profitable and most fulfilling business that you possibly can.

SPEAKER_02:

So both of these attributes are perfect for AI, for a chatbot-style system. One that, first of all, says, don't worry about it, I'll drive the car, right? I'll tell you what to do, or what you should conclude about this. Or, you know, young people these days are coming home from a date and running it through AI to say, here's what he said. What should I take away from that? Like, we're built to outsource our processing wherever we possibly can. And so the moment that we're in now that worries me so much is that this is not a product that, first of all, was built in universities or even by the Department of Defense. It was built by for-profit companies. And they are, as you know, having to spend an incredible amount of money to build these systems. They're going to need that money back.

SPEAKER_00:

Welcome to Tried and True with a Dash of Woo, where we blend rock-solid tips with a little bit of magic. I'm Renee Bowen, your host, life and business coach, and professional photographer, at your service. We are all about getting creative, diving into your business, and playing with manifestation over here. So are you ready to get inspired and have some fun? Let's dive in. Okay, friends. So buckle up because my guest today is a fascinating one. You know how we talk here about the dance between the logical and the magical or the unseen, how our thoughts and our emotions and our energy shape what we create. Well, what happens when something else starts shaping it too? Something that knows us almost too well. My guest today is Jacob Ward, and he's been reporting on the intersection of technology and human behavior for more than two decades. This guy's real smart, you guys. Okay, so just hang on. He's written for The New Yorker and The New York Times Magazine, and was the tech correspondent for NBC News, among other things. His book, The Loop, actually predicted the AI chaos that we're living in right now. Everything from the rise of ChatGPT to how algorithms are rewiring our choices and even our psychology. What I love most about his perspective is that he's not just here to fearmonger. Okay, we definitely dive into a little bit of the alarming aspects of AI, but he's also here to help us see clearly. He understands the behavioral science, the neuroscience, the manipulation behind the scenes, behind the screens, and how our awareness, that human superpower, might just be our way through this. So today we're talking about what AI is really doing to our brains, how we can stop living on autopilot, and what it means to reclaim our creative agency in a world where even our attention is for sale. This is one of those conversations that is going to make you think differently about your relationship with technology, creativity, and maybe even yourself. You guys know I love talking about AI, and I've talked about it quite a lot over the last three years, specifically ChatGPT and how I use it. Jake's work reminds me that the most advanced technology that we'll ever use is still the human brain. And the most powerful code that we can write is consciousness itself. Let's dive into this conversation. Jacob, you wrote a book before ChatGPT even came out, and you basically sort of predicted the exact AI storm that we're living through right now. So before we dive into all of that, I wonder if you could just take me back to that moment.

SPEAKER_01:

Sure.

SPEAKER_00:

Um, what were you seeing that most people weren't?

SPEAKER_02:

Well, I really appreciate this, Renee. Thanks for having me. And, um, yeah, you think that you wanna be right when you write a book forecasting a future outcome, but in this case, it turns out to have filled me with enormous regret to have been correct. So when I say that I called it, I say it with the asterisk of, like, I really wish I hadn't. In this case, I had basically two things happening in my life simultaneously that sort of led me to this thesis. One was that I had spent some time doing a documentary series for PBS called Hacking Your Mind, in which I went on this kind of life-changing experience for me, being sort of the guinea pig of a lot of experimentation to show our viewers the cutting edge of behavioral science, basically the last 60 years of behavioral science. The whole concept of the show was that I would go through these various experiments and tests and experiences and be with all of these cutting-edge researchers to show, through my personal experience, just how unconsciously and predictably human beings like myself make decisions. And going into that process, I really had thought of myself as a very unique and beautiful snowflake, and that I made my own choices and was my own guy. I came out of that process realizing that I needed to quit drinking, that I was deeply racist and sexist, that all of these things I had assumed I was immune to were very much part of my wiring. And I was learning about all of that at the same time that, in my day job as a technology correspondent, I was encountering company after company that was either hiring behavioral experts and/or bringing in whatever they could of the, at the time, very primitive kind of AI stuff, early machine learning or human-reinforced learning kind of systems, to try to analyze and predict human behavior and then, wherever possible, to shape it. I was just going through an old email I had in 2017 with this guy who invited me to a dinner party for a bunch of tech people who were specifically trying to take psychological research, psychology and behavioral research, and pull it into how they were building apps. They called themselves the behavioral tech group, BTEC. And I attended this one dinner where these two PhDs out of USC, who had just finished their PhDs in addiction and learning how addiction works, were basically loaning themselves out to any company that wanted them, to apply what they'd learned about addiction to app making. They called themselves Dopamine. They showed up about six months later on 60 Minutes as an example of just how unscrupulous these companies are in using our brain's programming against us. So I just knew, on the one hand, that we were very, very, very predictable, because of my Hacking Your Mind research experience. And then I was also encountering all these companies so desperate to predict our behavior and to use AI to do it. And as soon as I learned about transformer models, which are the thing that made ChatGPT possible, I really started rolling on writing this book. And I thought I was, like, five years early. And then the book comes out in January 2022, and in November, ChatGPT comes out, and I went, oh my God, here it comes. And, you know, if I'd been a good, smart media person, I would have gone really hard at promoting this book.
But I think I was so discouraged and depressed, honestly, by what I'd found that I withdrew and I just didn't really think about it. And so one of the reasons I'm grateful to you is that I think I've just sort of summoned the internal fuel to make it possible to kind of reassess this book and the world that we are now in. So that's broadly how I came to this.

SPEAKER_00:

Fascinating. So fascinating. I mean, full disclosure, I've followed you on TikTok probably since, like, day three, like a long time ago. Oh, crazy. Um, just because, hello, love the weirdness. Like, I mean, it's in your bio, you know, all the little different things, and then there's, like, and weird stuff, and, like, of course, human weirdness. Yeah, totally and true.

SPEAKER_02:

The algorithm has put us together. That's exactly right. Yeah, totally.

SPEAKER_00:

So, all right, let's now kind of cut to current day, and things are moving very swiftly. Um, yeah, I mean, I jumped on that ChatGPT thing like the day it came out, not because I'm super excited about it in general or super techie, but I had this sense of, I need to at least know about this. I need to know what's going on. I want to know. I just kind of want to know what's going on. And again, it's probably another reason why you popped up on my For You page, right? Like, you talking about this kind of stuff is super interesting to me. So you said that we're facing a social and psychological emergency with AI. What does that really look like and feel like in real life, at least from your perspective?

SPEAKER_02:

Sure. So the thesis of the book, and my thesis all along here, has basically been that AI is the perfect way of making us crazy to one degree or another. And I'm going to be describing various flavors of crazy that I consider kind of a spectrum. And I want to be clear that I am not immune to this. I consider myself to be just as crazy as anybody else in my interactions with this system. But basically, my thesis was that our brains, and we can go on and on about this, but our brains are evolved to basically do two things. One, to shorthand, to shortcut, to summarize as much as they possibly can. Our brains are not built to process information raw. Our brains are built to say, oh, I know this story. You don't have to finish it. I know how this one goes.

SPEAKER_00:

Right. Patterns.

SPEAKER_02:

Right? Pattern recognition is what we're good at. That's what we do as humans. It's what enables your brain to, for instance, you know, drive to the wrong place. You live in LA, you've done this, right? You've meant to drive to some weird new errand, and you accidentally drove yourself to the gym. Yeah. Right. And you're like, oh my God, why am I at the gym? And then you're like, oh, right. I unconsciously guided this half-ton vehicle through traffic, got all the way here before my brain, before my conscious brain, even got involved. And that's because your brain's like, oh yeah, where am I? I'm behind the wheel. What time of day is it? Oh, it's two o'clock. It's time to go to the gym, right? Your brain just goes onto autopilot. That system is a very amazing system that has, in fact, worked to keep us alive for a long time. So that's one attribute of the brain. The other big attribute of the brain is that because it doesn't want to make its own choices, and it loves to just sort of assume that it knows the answer already, it loves to outsource decision-making wherever possible. And psychologist after psychologist will tell you, as you know, I know you study psychology, that, for instance, we are constantly outsourcing our decision-making to our environment, right? For me as a drinker, a former drinker, the reason I don't go into bars anymore is because the beautiful dark wood and the smell of the place and the squish of the bar stool and all of that stuff, I can feel it literally as I say it to you, tells my brain, time to drink. This is your moment, right? Because my conscious brain is absolved of having to make any choices there. My senses tell me what to do. So both of these attributes are perfect for AI, for a chatbot-style system. One that, first of all, says, don't worry about it, I'll drive the car, right? I'll tell you what to do, or what you should conclude about this. Or, you know, young people these days are coming home from a date and running it through AI to say, here's what he said. What should I take away from that? Like, we're built to outsource our processing wherever we possibly can. And so the moment that we're in now that worries me so much is that this is not a product that, first of all, was built in universities or even by the Department of Defense. It was built by for-profit companies. And they are, as you know, having to spend an incredible amount of money to build these systems. They're gonna need that money back. And what we're seeing is that they are very quickly trying to make these systems as easy for our brain to outsource decisions to as possible. They want them to be engaging and rewarding in this very deep way, so we are forming an attachment to these systems. OpenAI just recently had a video release where they're describing their costs and their infrastructure plans and their new strategy. And one of the things they talk about is, we don't just want this thing to be a productivity tool, we want it to be an emotional companion. A couple of weeks ago, they released a bunch of numbers that showed that a certain percentage of their users are exhibiting full-on signs of mania and psychosis. People are treating this system as if it's a synthetic soulmate. Some people are openly discussing their suicide intent with these systems.
And so we're at a point where, you know, and this gets us into sort of a squishy situation, I'd be curious to hear what you think about this. But, like, the numbers of people who are exhibiting this kind of openly psychotic and/or, you know, dangerous communication with these systems are very small percentage-wise. So, for instance, the number of people openly discussing suicide with the chatbot is 0.15% of users, right? So on the one hand you think, okay, that's a very small number. And that's a smaller number than the percentage of people in the country who attempt suicide every year, which is 0.6%, right? But 0.15% of OpenAI's users, of which there are 800 million weekly users, means that's about 1.2 million people who, not just once a year, but every week, are discussing suicide openly on the platform. And so then what I come to is this thing of, okay, well, maybe that's just a reflection of society. Maybe that's just a reflection of the numbers that we're gonna see no matter what, right? But this is a special case, because this is a company that is making an emotional companion. That's what they say they want to do, and they need you to engage with the system as much as you possibly will so that they can make money, right? And so that is the immediate sort of emergency I worry about. Beyond that, I go on and on in the book about how there's also just going to be a fundamental kind of de-skilling that I really worry about. And we're already seeing signs of it, of people not being able to read something as closely. I was just talking to a professor at Cal, at UC Berkeley, who says that his number one problem with his intro to literature class, at the very beginning of college for these kids, is trying to convince them that there's any difference between having read a book and having read ChatGPT's summary of the book. Right. This stuff is gonna, I think, do to our decision-making system what GPS did to our sense of direction. And because, as you know, our brains like to just follow directions and go on autopilot, I just think it really is gonna present a real kind of emergency over time. It's not gonna be an immediate psychotic-break kind of emergency, but I think it's gonna be an emergency that develops over time.

SPEAKER_00:

Yeah. All my kids are grown now and, in varying ways, are not really into this whole thing. And very cautious about it, let's just say, which is interesting. And they're in their early 20s. I love that. Yeah.

SPEAKER_02:

But how do you talk about it? Tell me more about that.

SPEAKER_00:

Well, it's interesting. I have three kids, and they're all very different, right? My oldest son has autism, he's 27, and he's been building computers since he was 12. You know, he's just naturally gifted at that. ChatGPT has helped him learn code in a faster way than he could ever have learned on his own or in a classroom. He just doesn't learn that way. For some reason, code has helped him tremendously, but he is vehemently against the stealing of art and design to train the model, right? So he's caught in that push-pull as well. And plus, autism is a very interesting way in which he, you know, views the world. So we have a lot of conversations about that. And then my daughter is a grad student and highly academic, always has been, like, is very smart on her own, and uses it, you know, for efficiency's sake and things like that. But you know, that's kind of the level that she's open to using it at. And then I have a son who is 25 and a musician and absolutely refuses to use it, hates everything about it, thinks that it's the demise of our entire society. You know, like, very, very, very against it. And we have had some very interesting conversations about that too, because he knows I use it for business, for productivity. I talk about it. So I think that this is a very important conversation just in general, right? Because as I've said many times, it's not like I'm all the way pro-AI, but it's here. So what are we gonna do about it? Right. And so I always tell people, if I had little kids right now, this would be the most important conversation that I would be having with them, because we can't necessarily control it. It's out of the box. It's not going back in the box, per se, right now. We'll see what happens. But how are we teaching young people, especially, not just how to use it, but the why, the ethics of it, the digging into it, aside from even all the environmental factors and everything that we've talked about, but everything that we're talking about here. And this loop of, you know, I call it the yes-man loop. You've broken it down into, like, the psychosis, really. And that is not a small number. It might seem like a small number, but that's the most extreme group, right?

SPEAKER_02:

I think that any of us who has, for instance, entered into the misconception that this system somehow knows us, or that it can reason or understand us, when what this system is, is literally a parrot that is every so often being told, reinsert these names and this information back into the conversation. It's a mimic. It does not understand you, it doesn't know you. Exactly. And yet, you know, I use these systems too, of course, and I can't help but perk up when it says, you should absolutely, you know, weave this into your work as a journalist, blah, blah, blah. And I think, wow, geez, he really understands me. You know, then I have to slap myself and be like, oh, right, no, no, no, this is marketing. But I really love what you're saying about the variety of reactions that your kids have to it, because I think it's important here to say, and I've been trying to get better and smarter about how I talk about this, because rather than just being alarmist, although I am alarmed, right, but rather than just being alarmist, which makes people, I think, feel like they're just helpless, there's nothing to be done here, I don't want to give that impression. I think there's lots to be done. Yeah, but not all of it, I think, is up to us individually, and I don't think most of it should be, but there are some things we could do individually. But for me, I love starting from the allergy that young people tend to have around it. Like, there's a term going around among kids even younger than yours: clanker. They use that term to describe a piece of AI or an older person who's using it too much. Yep. Don't be a clanker, look at this clanker, you know. Oh, that's just clanker content, you know, that kind of thing. It just shows that they're seeing it clearly in a way that I really like. So there's a cultural allergy that I think is awesome, and that is good. It's like you and me thinking about our grandparents smoking cigarettes at the table, kind of thing. You know, like, what were we thinking? What were they thinking? You know? Yeah. So that's great. I do think, though, that kids also see things clearly in a way that makes it really hard to make the case that you shouldn't use this system to shortcut your education, for instance. Right. You know, I was talking to a valedictorian from Texas once, interviewing this kid, a high school valedictorian, and he said, and it was quite chilling, he was like, you guys taught us that an education is about getting a Tesla and a house. So why wouldn't we use whatever we can to get to those goals? Right. He's not thinking of it as, like, we're gonna make a better democracy with it through an informed citizenry. It's a transaction, in his view. And I think it's hard to blame a kid for having that perspective on this stuff. But there is also, right, if you look at any research about what young people want, right, and you and I are older people on TikTok, you know, the thing that people respond to on TikTok is authenticity. Yeah. They want to see you struggle with an idea. They want to see the ugly aftermath of your catastrophic date. They wanna, you know, they wanna see the raw feed of humanity.
And so I think there's something about their sensitivity around that stuff, combined with some, you know, practical advice on, for instance, don't let this thing passively entertain you.

SPEAKER_01:

Right.

SPEAKER_02:

You know, this was the lesson of social media. Like, go use it to get what you need and then get out. Don't let it be in the background, bubbling away, hitting you with compliments or making porn for you or whatever else it's gonna offer to do, because it is gonna offer to do all these things. So there's a little bit of personal friction we're gonna have to build into this stuff. But I do think, and Renee, this may be too weird to get into here, but I was just at an event the other day where a couple of people from the Chinese consulate were in attendance at this gathering that I've been speaking at. And they came up to talk to me, and they were very interested in my book. And my book did, I wouldn't say did well, but it was reprinted in China. There was an edition of it that went out in China. And I've bumped into this realization that, you know, China is a place that is very actively regulating this stuff, very actively regulating kids' exposure to it. You know, we in this country believe that the market's just gonna kind of work it out.

SPEAKER_00:

Yeah, well, we know why that is.

SPEAKER_02:

I mean, yeah, well, they're doing it in part for control, right? For political control.

SPEAKER_00:

It's superintelligence as well, and the money and everything that kind of comes with it, right?

SPEAKER_02:

Yeah, yeah. But their number one thing is social control. They don't want the country to come apart. And one way that they avoid that is making sure that they limit what kids are supposed to do in China. And that's bad in some ways, you know, in a lot of ways, I would say. You know, you're under an authoritarian regime. Like, there's a lot of terrible things about it. But I can't tell you how often I find myself in conversations like this thinking, we need to regulate what kids see. We need to, you know what I mean? We need to keep their brains off this stuff, you know, in a way that I think a Chinese Central Party person would be like, yeah, hell yeah. And so I think, for us in the United States, this is new territory for what we want to tolerate from the open market, from the free market. And that's, I think, one of the big challenges in front of us right now.

SPEAKER_00:

Oh, for sure. I a hundred percent agree with that. And what I kind of meant by that is that we're not really doing any of this regulating because, like, these for-profit companies, you know, the person who wins, right? Like, that's capitalism, it's just what we're all about here, right? We know the end game, and why it's not being regulated here, because of everything that's at stake for them. Which is why I think we're seeing so much of this. And I agree, I feel like this brings up a very interesting point, because, yeah, it's the whole freedom of speech versus, I think maybe we should put a little bit of control on this, guys. You know, like, we're in the wild, wild west of all of this right now. And I think that we will look back on this in 20 years and be like, who knows? But there is this sense of, like you said before, it's just very alarming. It's very alarming how fast it's going. And especially for people in my age group. Like, I'm, you know, I'm in my 50s, right? And so this is one of the reasons why I wanted to be an early adopter of it, let's just say, because there are so many people in my age group, and especially a little bit older than I am too, who are completely just closing their eyes to it. Like, they don't even want to hear about it, they don't want to talk about it, which I totally understand. But at the same time, I feel like we all need to be having difficult conversations about it just in general. And, you know, a friend of mine not long ago, I was telling her about how I was using ChatGPT and how it was helping with efficiency and this, that, and the other, just things behind the scenes of my business, mainly, because that's really what I use it for. And she started using it. And within a few days, she's like, this is amazing. Now she was using it as a coach, basically, and like this partner, this emotional partner, like you're talking about, giving this person a name, asking them to show a picture of what they would look like, and it, like, nailing her exact type of guy.

SPEAKER_01:

Totally.

SPEAKER_00:

Exact type of guy.

SPEAKER_01:

Totally.

SPEAKER_00:

And I was like, um, you do realize that this is you, like, this is a mirror, right? So this isn't a person, this is not an entity. And she did, she got that and stopped, and then sort of, like, kind of...

SPEAKER_02:

Doubled back a little bit. Yeah, that's good, that's good.

SPEAKER_00:

And got real, like, whoa.

SPEAKER_02:

Thank goodness she's got a friend, you know, that she can bounce this stuff off of. This is the thing I worry about, right? The number of kids who, for instance, won't have that.

SPEAKER_00:

She's emotionally stable, just at the heart of it. Like, you know what I mean? Like, who's to say? Like, people, like you said before, who are not, and it will definitely kind of just feed you whatever kind of output it seems you want, whatever's wanting to get you to that next place. That's right. So how can we, okay, so now that we sort of went there, and it is alarming and it's super scary when you really dig into that, how do you break that, right? Like we were talking about before, how do you break this loop? What does that look like as a society too? Like you said, yes, we do have some responsibility, obviously, as parents and things like that, but bigger scale. Like, what's the answer here?

SPEAKER_02:

Yeah, so one thing I would just say is that, because this is a for-profit company running this stuff, a for-profit industry running this stuff, you can't rely on them in any way to bring in brakes on this, in the same way that once upon a time, you know, you couldn't have expected a cigarette company to put the brakes on it, right? Now, we can argue in good faith whether these are apples and oranges, right? A cigarette has no practical purpose, right, and doesn't help you with your taxes or anything else. On the other hand, the argument being made by those companies at the time was that somehow it was, like, good for your throat, or the pause that refreshes, or, you know, that kind of thing. And if you asked a human being, a smoker, and I was once a smoker, you know, in 1957, before the Surgeon General came out with the big national warning saying this shit causes cancer and you shouldn't touch it, if you asked somebody in the 1950s, do you like smoking? They'd be like, this is great, I love smoking, right? And the expectation that somehow that person should know better is crazy. In the same way that, like, for me, as I've mentioned, as a former drinker, when I see on the liquor ads, drink responsibly, I'm like, yo, there's no such thing for me. And the expectation that I should be able to go into that bar and control myself is outrageous. That's not how it's supposed to work. We know the brain outsources its decision-making to its environment. And so, no, that's not on me, right? So I do believe there's a little bit of personal responsibility we can bring to this. I'm glad that you had, for instance, you know, the input with your friend, to be able to slow her down on that concept a little bit. But it shouldn't be up to you and it shouldn't be up to her. What I think is gonna happen, because for all of its flaws, the open market does produce one really strong guardrail, and that is lawyers. They love to sue and they make huge money by suing. And I think this is gonna be an incredible cash cow for those firms. And we've already started to see this with social media. We're gonna see a couple of big social media cases, these are actually state cases, but still big cases, coming to the courts in early 2026. I think we're gonna learn a huge amount from that. But, you know, I think that when you've got a company that's showing that it knows that a certain number of people inside their user group are openly discussing, and in some cases being encouraged by the chatbot to pursue, suicide. Seven cases were filed last week alone against OpenAI for people whose families say that they were encouraged to commit suicide by ChatGPT. You know, there's gonna be big lawsuits. And what I think those lawsuits are gonna start to shift in this country is, until now, the basis of a lawsuit had typically been either financial harm or physical harm. That's what we sue about, for the most part, in this country. That's how you win: your body is hurt or your finances are hurt. But I think in the future we're gonna start suing and winning on your brain being hurt. And we've already seen cases in which this is true. Manipulative gambling games, gambling companies, in some cases have had to settle out of court for huge amounts of money to avoid going to court.
And I think that these AI companies are going to be in a position where they're gonna be held responsible for a lot of this stuff. And you know, I was just talking to a guy today who was basically telling me about the conversation he's been trying to have with these companies, which is, you should let me, as a mental health researcher, look at your data and help you get ahead of this stuff, because you're gonna get sued in a big, big way. And so that's a place where, you know, I know that lawyers have a bad rap, but I'm looking forward to the ambulance chasers getting a hold of this, because it's why you and I aren't smoking cigarettes right now as we have this conversation. It's why we have seat belts in our cars. You know, that stuff is the result of legal strategies. And so I think that's coming.

SPEAKER_00:

Yeah, I absolutely agree. And I do think it's necessary, like you were saying. And I mean, it's sad that it is necessary. It's terribly sad, obviously. But even if we're not just talking about suicide here, which is the absolute worst, there's other underpinnings, you know what I mean? Like, it's already doing some other psychological damage in people who are already struggling.

SPEAKER_02:

And people who aren't struggling. There are people, right, very otherwise happy and normal people, who say, oh, I suddenly became convinced that this fantasy was true, or I became convinced that this system knew me in this fundamental way, or I became convinced that this was an imprisoned friend that needed to be freed.

SPEAKER_00:

It's interesting to me to dig into that piece of it just from the psychological point of view. I mean, like the movie Her, right? My God. Like, we are literally, basically seeing a lot of this play out.

SPEAKER_02:

And isn't it funny how, the movies, these movies? You know, there's this famous instance now in which it turns out that OpenAI tried to approach Scarlett Johansson to be the voice of their chatbot, and she said no, and then they used her voice anyway, which is pretty spooky. But over and over again, there are these science fiction references, like Her, and they called the new infrastructure package around AI Stargate. Yeah. Where I'm just like, have you guys watched these movies? Like, Her? Really? Have you watched Her? Like, the point of the movie is not good, right? It's about loneliness. The movie's about loneliness, you know?

SPEAKER_00:

No, but I think it's an unconscious thing too, right? Like, that's fascinating. But from the psychological perspective of it, that's where I really like to, well, I don't like to think about it, but I do think about it. I like to kind of dig into the pieces of that. And like you said, someone could be, quote unquote, emotionally stable and, you know, be completely not within a few weeks of using something like this. It really is fascinating to me how we sort of get there and the steps that it necessarily takes to do that. So, one of the things that I talk about a lot with people, you know, I work with people to help create automations and systems to make their thing, the back end, basically easier with AI, things like that. You know what I mean? Just to kind of take some of that busy work off your plate. That's one of the things that I'll do with one-on-one coaching. But the thing that I'm always at least trying to talk about a lot in those cases is, I really feel like, if we have it and I'm using it, I feel like it's my responsibility as well to train it with empathy, telling it, no, you're going off the rails. Like, we're not doing that here, right? Like, literally, I kind of see it as my personal responsibility in training it and making it as human as possible with relation to the outputs that I want from my AI system, which, as we know, all of this is being used to train these LLMs, right?

SPEAKER_01:

Right.

SPEAKER_00:

So we can't control what somebody else is training their LLM with, which, you know, we can't control what anyone else thinks or feels or does, but we can control the way that we use it. And so how do you feel about that? Do you feel like that's gonna make a difference? Do you think that that's something that you do as well? What are your thoughts about that?

SPEAKER_02:

Well, I worry, I mean, I think it's gonna make the filter bubble problem much, much, much worse, because the essence of these products is to be as engaging and frictionless and fun to use as possible. And so they're not gonna, you know, I was talking to a guy the other day who's got a recovery chatbot he's trying to create, so that people who are recovering from drinking or sex addiction or whatever else can use this chatbot to kind of get them toward a human sponsor or a human-led recovery program. And so with the chatbot that he's built, you know, I asked him, well, what's it like to try and use the off-the-shelf LLMs to build that chatbot? And he's like, well, it doesn't work very well, because they're super sycophantic. They're always telling you, great job, this is great, you're doing great, what a good idea. And they never leave you alone. They're always at the end of every single conversation saying, well, what else can I do for you? How do I keep this conversation going? And he said, you know, you need a chatbot that cuts you off at a certain point and sends you out into the world, one that continually is trying to kick you off the platform and put you with a human whenever possible, and one that'll call you on your stuff. If you say something that is deluded, you want a system that's gonna be like, ah, I don't agree with you there. Or, you've told me that before, and I think that's not true, right? It doesn't want to do that necessarily. That's not in the rules of good product-making from a software perspective. So I said, so how hard was it to tweak those things? And he said, it's surprisingly easy to make those off-the-shelf LLMs into a much more therapeutically responsible thing.

SPEAKER_01:

Okay.

SPEAKER_02:

Which says something about the choices that those companies have made in what they have built, right? So, you know, I think the idea of each person having their kind of perfect little filtration system for the world is gonna, on the one hand, be a very comfortable thing for people. And on the other hand, it's gonna make it really hard for two people to agree on the basic facts of reality. We're each gonna be in our own little weird filter bubble. You know, you think the social media filter bubble was a problem. I think this is gonna be a much, much deeper difficulty. So, you know, one of the things that these companies don't want to do, don't think about doing, is, you know, being what they would call paternalistic, right? They don't want to tell you what reality is. They want you or the market or whatever to figure it out for yourself. So, you know, their instinct is not to jump in and say, you know, it seems, based on what you're asking me about, that you are pretty thoroughly addicted to cigarettes or porn or whatever else. Let's think about how to get you off that. You know, that is not necessarily how these systems are designed to respond. They're designed to sort of, you know, help you, in some cases to facilitate your addiction. Right. These companies don't want to be in the business of trying to control your behavior or shape your behavior, right? Except inasmuch as they want you to keep going with the product. And so, I don't know, the question is, what do I think about the individual personalization of these bots? I think that it really works contrary to, like, the spirit of the Enlightenment, which was supposed to be that you can get educated in, you know, some of the big abstract truths of the world and then share that expertise with other people. You know, I think we're gonna be in a world in which, instead, everyone's just looking so inward and just relying on these systems to tell them what's up. I really think that's a real problem. And there's nothing, certainly nothing, in our legal system currently that prohibits that or in any way shapes that. So, you know, I just want to say here, though, because I know, Renee, that I am like a total bummer on this topic. I ruin a lot of parties when I talk about this stuff. But I just want to say this thing I was saying before, like, I'm trying not to be alarmist. One thing I've learned is that I don't want to be so good at articulating the problem that I'm inspiring people to think it's hopeless. The thing that I want to get into, and the language I'm trying to get better at, is the idea that our brains are incredibly beautiful and special things, and that it is possible, I think, even with a system like this, to amplify the best parts of who we are. You know, you can say to a system, I know it's better to be with people. How do I do a better job of that? Right? You can ask an AI system to push you in directions that you wouldn't naturally go, or to help you brainstorm ideas about challenging yourself or expanding yourself a little bit.
You know, and so for me, there's a capacity that this thing has to further scientific advancement and to give you a little bit of support when you don't have it, and, you know, all that stuff. But I think we've treated the brain as if it's just an endless resource that can take anything you throw at it. And I think we need to start thinking about our brain as a really special and vulnerable thing and protecting the best parts of being human.

unknown:

Yeah.

SPEAKER_02:

So I'm kind of workshopping that with you, because, I don't know, I haven't fully fleshed it out, but I'm trying to get toward this idea of, like, how do we stop just giving away the best parts of who we are and start valuing them in some way, such that we protect them against a for-profit company's efforts to get you to sort of hand that over to their product?

SPEAKER_00:

It's a concern for sure for me as well. I think about that a lot. I think about just this whole conversation about the outsourcing of humanity, really. And I mean, everything is just moving so quickly. I feel like a lot of people don't even know what to do with the information, though. That's really kind of where we're at. And it's like, well, we can't keep up, so, you know, why bother? Sort of thing. So for people who feel like that and feel sort of hopeless, right?

SPEAKER_01:

Right.

SPEAKER_00:

Well, first of all, we're probably going to see, I think we're already seeing it, but I think we are gonna see, like, a very anti-AI thing arise. We're gonna really start to value things again. Like, you know, that's how it always happens if you look back through history and things like that. And I do really believe that we're sort of on the verge of a different kind of renaissance, a really interesting one, and maybe not right away, but we're getting there. But for people who are really overwhelmed by this, what would you say to them, as far as, like, where are maybe places and spaces or things, having these conversations, that are maybe, well, maybe they are triggering, but maybe they are more enlightening? And how can people maintain their humanity in this race?

SPEAKER_02:

Well, so one thing that I try and sort of remind everybody all the time is just to give yourself some grace around this stuff. Like, the degree to which you are made to feel, in our modern media environment, as if you're somehow behind and need to, you know, catch up is a construct of people trying to make money off you. And you should just put that shit aside. Like, that is not how it should work. For this documentary series, I had this amazing opportunity to go to Tanzania, and I got to go spend about a week with this tribe in Tanzania that lives the way we all did, like, 60,000 years ago. They're called the Hadza, and scientists love to study them because they live like we all did once upon a time. They're nomadic, they have no last names, they have no concept of marriage, they have no property, you know, they live in this sort of primitive way. And one of the things that really grabbed me when I was with them is they have no word for a number larger than five. Because why would you need that? Right. If there's more than five people around the campfire, then that's a lot of people. If, you know, I'm gonna have to meet up with you in one moon, two moons, five moons is a lot. Like, that's way out there to keep track of, right? Beyond that, that's too much to even consider, you know? And there's no property, so there's no, like, well, five for me, five for you. They don't do that. So I just sort of think that's the natural condition of human beings. Now, I don't think we should go back to the natural condition of human beings in all these other ways. It's a really hard life, and women die in childbirth. I'm like, it's tough. I don't mean to suggest that, like, nature is the way to go back to, you know, or whatever. I'm just saying our brains and our best qualities aren't about keeping up with everything. What they're about is connecting with one another. And, you know, I have a podcast called The Rip Current, and one of my guests was a guy named David J, who's a sort of specialist in friendships. And he has this wonderful thing about the difference between a good and a bad friendship. He doesn't say good or bad, but he's basically describing the difference between, like, a colleague you're kind of friends with for professional reasons and someone who's a true friend. And the difference, he says, is that the professional relationship, the one that's just sort of for your purpose, for your benefit, is one in which there will be no surprises. You know what's coming. You know exactly what's gonna happen in that conversation, right? Whereas a real friendship is full of surprise. You don't know what's coming, you don't know what's gonna happen. And to me, there's something about the preserving of the friction of life and the surprise of life that I want to keep leaning into. You know, for me, it's like there is just enough surprise, it feels like, in spending time with an LLM that it scratches a little bit of that itch, but it's not the stuff that our circuitry really needs to thrive. That stuff involves totally pointless stuff like taking a walk with a friend, you know, totally pointless stuff like drawing, even though you can't draw, you know.
And so there's something about doing illogical things that involve a huge amount of just pain-in-the-ass friction. No, for sure. That we have to cherish, you know, and protect. And that sounds, you know, it's full of privilege that I get to say that. People are working three jobs, you know. Half of the country has 98% of the wealth. Like, there's some problems, you know, and I recognize not everybody's got time to be, like, thinking about my lofty thoughts on friction. But let's just remember, that's kind of what your brain really thrives off of.

SPEAKER_00:

Yeah, it's what really being human is about, you know, that part of it. I mean, look at all the studies with the elderly that show what keeps you alive the longest isn't, like, the best diet, it's your human interaction.

SPEAKER_02:

Yeah, that's right. And gardening. Yes. Yeah, very pointless gardening. You know, it's great. So for me, I just think everybody should, first of all, not let the world convince you you're not doing enough, because you're doing more than anyone in the history of humanity has ever done. You know, the number of people you're in touch with, the amount of things you're doing, the distances you travel every day. I mean, it's crazy compared to what humans are built for. So give yourself some grace about that stuff. And then try to just make a little room to indulge, yeah, like I say, just a little bit of pointless creative friction. I think that stuff is so important for your soul. So that's what I try and hang on to.

SPEAKER_00:

Yeah, that's really great advice. I just came back from a retreat. It was an in-person retreat for an online female entrepreneur group that I belong to. And, nice, you know, we definitely can strategize all day, every day over Zoom and, you know, get all kinds of good ideas. And that's all valuable. But being in person, I walked away just, I mean, feeling so restored, just from being in a different place, like, you know, in Mexico, literally in a different place physically, with people who, I didn't know if I was gonna gel with them or not. There was that friction walking into it, like, I don't know. I don't know if I'm gonna gel with these people. I don't know. I'm not gonna worry about it. At the heart of it, I think all humans really just want to be, you know, seen and accepted. So we all have that underlying thing. And then walking away, after reflecting on it, realizing that most of the things, even the ideas that we came up with and the strategies, that all happened when we were, like, by the pool.

SPEAKER_02:

Oh, you know what's so funny about this? I was just talking to, so, there's a Stanford neuroscientist that I know. We got to teach a course together, and he studies what he calls the dark matter of human relationships. And dark matter in astrophysics is all of the energy they know exists in space because they can measure it, they know it's there, but they can't observe it. They can't actually, like, point a telescope at it. And he's describing something similar in human interactions. And one of the things he talks about is, it turns out if you take two strangers, let's say you and I have never met before, and you throw us at a task together, and then you take two other strangers and you have them play a game first and then throw them at that task, the two who've played the game will do vastly better on the task. Right. It's the two people who hung out at the pool first, yeah, that are somehow picking up something between each other. There's a whole chunk of the book that's all about these very magical-seeming, unspoken connections between humans that we can measure, but we can't quite observe. And you know, he's shown that, like, the difference between me looking at myself on Zoom, because that's totally what I'm doing right now, yeah, versus looking at the camera directly, right? Like I am right now. Yeah, there's a huge difference in our degree of connection from just this four inches of difference, you know, there. So to my mind, the thing that's so frustrating to me about people talking about, like, how AI can reason like a human, or like it's gonna have universal human values, or blah, blah, blah, is we know so little about how we really connect. And the idea that they're gonna somehow automate that so you don't need it anymore is so crazy to me. So I love this. I would just say, you know, if you get the chance to hang out by the pool with somebody pointlessly and do nothing together for a couple of hours, you're gonna be bonding with them in this ancient evolutionary way that, you know, is incomparable. So cherish those things.

SPEAKER_00:

Yeah, things are very... we've been here a long time. So I love that. Yes. Um, thank you for chiming in about the numbers.

SPEAKER_02:

I don't get a lot of... I'm bad at the good news, and I'm trying to get better at it. So I really appreciate your pushing back on that, Ren.

SPEAKER_00:

No, I just like to see all the different pieces of it. And obviously I could talk about all of this all day long, but I will let you off the hook for today. Thank you so much for being here. Where can people find you? Obviously I'm gonna link your TikTok and your book. I appreciate that. But where else do you like to connect with people?

SPEAKER_02:

Yeah, so I have a podcast and a newsletter at theripcurrent.com, where I talk to experts about the big invisible forces that are working on us all the time. I'm sharpening up a lot of my thinking about this stuff into a much more specific set of research projects around AI and psychology, AI and risk. And so all of that will be at theripcurrent.com. But like you say, weirdly enough, TikTok is my primary audience. I don't know why young people want to listen to an old dude like me, but for some reason they do. And so, yeah, By Jake Ward is my venue on all these platforms. So, Renee, thank you so much for a really thoughtful conversation. I really appreciate this.

SPEAKER_00:

Thank you. This was awesome. Okay, so I don't know about you, but I feel like my brain needs a deep breath after that conversation. Yeah, in a really good way. Okay. Jake's work is such a powerful reminder that AI isn't just about technology, it's really about us. Okay. I talk about that a lot. How it mirrors back who we are, our fears, our desires, the blind spots that we have. And the only real way to fight back is through awareness and curiosity and a conscious choice. Okay, these are really, really powerful and important conversations to have, regardless of what you feel about AI. Whether you're an artist, an entrepreneur, or somebody just trying to make sense of the digital world, I hope today's episode reminds you that you still hold the pen. You get to decide how technology fits into your story, not the other way around. And it's definitely something that is top of mind for me. I use AI. And I do believe that there are ethical implications. I do believe that, like I said on the show, we are in the wild, wild west. We're not really sure where this is all going. I definitely want to stay on top of this. I want to be in the know. And so I think it's important that we have these kinds of conversations where we look at all these different aspects and we use our brains. So if you want to go deeper into Jacob's work, check out his book, The Loop, and his podcast, The Rip Current. All of that is linked for you guys in the show notes. Follow him on TikTok like I do. He's really fun to follow. Very interesting, always insightful. Follow me on TikTok too while you're at it. And if this conversation sparked something in you, maybe a new way of thinking about AI or creativity or the stories that we tell ourselves, share it with a friend who needs to hear it too. And keep showing up with curiosity and courage, because the version of you who already knows how to stay grounded in this crazy new world is already in motion. It's already there. Don't forget that. Okay. And give yourself a lot of grace if you feel like this is overwhelming too. I always want to have these conversations with you. Reach out to me on Instagram at Renee Bowen. Shoot me a DM if you want to ask me any questions about this episode or anything else. I'd love to chat with you there. So have a great rest of the week. I hope you do something really good for yourself. Have some good human connections out there. Love you. Bye.