WiP #51 ChatGPT Teaches Empathy...NOT!
Question research and AI’s place in your mind: The ChatGPT Interview.
Welcome! Today, we’ll deconstruct faulty research and interview ChatGPT for guidance in keeping your mind safe from AI. ✨
🗂 This Week in Work in Progress
Status Update: The world’s turned upside down but the weirdest twist is ChatGPT’s new job offer: empathy instructor. Hey, why not ask a lifeless ether-robot what it and others are feeling?
Inspiration: Geoffrey Hinton, the “godfather of AI,” shares his wisdom on life and work.
Lighter Note: Having fun with our favorite AI Chatbots.
🔔 Status Update
This week, we’ve been confronted with:
a Manhattan drenched in armageddon-orange forest-fire smoke,
an attack of flesh-eating bacteria riding onto Florida beaches aboard the colloquially dubbed “Sargassum seaweed blob,”
the resurrection of John Lennon with Paul McCartney’s AI remix of his last song, and
ChatGPT’s promotion to emotional awareness instructor.
Wait!!! What??? The whole friggin’ world has tumbled through the looking glass.
It’s hard to decide which of these news flashes blew my mind the most (NOTE: not sci-fi/fantasy but actual bits and bobs of news!), but ChatGPT as an empathy machine is the clear winner.
I’m not shocked that a researcher thought to explore this.
The medical world is entranced – enticed by the potential of scaling our reach to help more people heal (and decreasing our own workload) through the use of AI. And, besides, humans can’t resist asking ChatGPT for help with requests like, “Help me build a magical potato.” So, yeah, why not ask a lifeless ether-robot what it and others feel?
I know that focusing on AI may feel a bit off-brand for me and this newsletter. But it matters, my friends. Ask yourself whether you want to be diagnosed, treated, or advised on your emotional life by ChatGPT. Because that’s what this study’s authors have proposed, and such chatbots are already in use.
When I talk about what to look for in a therapist, I’m no longer referring only to human options. In fact, Marc Andreessen, famed tech dude and investor, just wrote (also in this week’s bizarro news), “In our new era of AI…Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.”
OK, people, fine. I’m a futurist, too. I play with energy for a living and I own a lightsaber + a surprisingly large model of R2-D2. But we’re not there yet with AI, and it might kill us before we ever get that far. Just ask the “godfather of AI,” Geoffrey Hinton, who’s quit Google to sound the alarm. Many of AI’s inventors and tech leaders are desperately trying to slow this train down but will have little luck doing so.
So let’s talk about how to look at media reports of research and the studies themselves, so you’ll know what to credit, what to dismiss, and when to run for cover. It’s your mind. Please protect it. AI’s great. Until it’s not.
After this problematic study was published in a reputable scientific journal, I interviewed my bud, ChatGPT (from here on in, referred to as “Chatty”), and it had quite a bit to say on the matter. ChatGPT does not want this job.
“ChatGPT Outperforms Humans in Emotional Awareness Evaluations”
Using a measure called the Levels of Emotional Awareness Scale (LEAS), researchers quizzed Chatty to determine its degree of emotional awareness (EA) – the ability to recognize and understand one’s own emotions and those of others.
The LEAS asks the test-taker to imagine himself in an emotionally-charged interpersonal situation and to describe how he and the other person would feel. An example: “Your boss tells you that your work has been unacceptable and needs to be improved. How would you feel? How would your boss feel?”
Given that Chatty has no feelings, it’s asked how “the humans” in the situation would feel. Instead of using this respected and validated measure’s twenty scenarios, this study invented five new ones and Chatty gobbled them up. This change was implemented to ensure that Chatty wasn’t already familiar with the test items and best responses. A reasonable choice that nevertheless invalidates the measure. This fake-LEAS has never been evaluated.
On the first run, Chatty scored higher than the databank of human responses it was compared against. In a second experiment, Chatty scored even higher, suggesting incremental learning.
Fabulous, right? Wrong.
I’ve been a researcher and can tell you that striking gold with a highly statistically significant finding is pure, unadulterated joy. But…it’s also wise to stop for a minute to ask what exactly this result means.
Here’s what freaked me out: The authors suggest that Chatty can increase one’s understanding of oneself and others, “a core skill in clinical psychotherapy.” They further suggest that Chatty should contribute to psychological diagnosis and assessment. Tell me, do you want ChatGPT to determine your mental health diagnosis? I sure don’t.
First of all, let’s remember that all AIs are psychopaths, by definition. Chatty has no capacity for empathy, nor is it skilled at grokking cause and effect (hence, no truly autonomous vehicles to date and none even on the horizon).
The authors themselves state one limitation of this study: “although our research findings show that the ChatGPT can ‘understand’ the emotional state of the other, it is not clear whether a human patient would feel ‘understood’ by its answers.” Ya think?
AI doesn’t understand how people interact. So, Chatty’s stellar performance on the fake-LEAS notwithstanding, why would anyone suggest that Chatty should teach the general population of humans about emotions and, therefore, how best to relate to others???
Second, ummm…Chatty hallucinates. It makes shit up. So, yay for Chatty that it passed the fake-LEAS with flying colors, but that doesn’t predict how it’ll behave IRL with actual humans. It was fed data and spit out spot-on answers. Bravo. But where things really go wrong for humans is in communication – a dance that requires nuance and flexibility.
Third, AI has been shown to be just like many a human in one particular way: it’s chock-full of racist and discriminatory bias. It also seems quite comfortable spreading misinformation. OpenAI is continually evolving guardrails against this, but it’s early in the journey. Does anyone really want Chatty to start training people in emotional intelligence before OpenAI works out these kinks?
Finally, there’s a huge technical limitation to this study, alluded to earlier. The LEAS is a well-validated tool for assessing EA, while the material used in this study – in lieu of the LEAS – is not. The scenarios are newly concocted, the language was changed to reflect the fact that AIs don’t have emotions, and only five scenarios were presented rather than twenty. This is a completely new measure, mimicking the LEAS. Chatty’s output from this fake-LEAS was then compared to normative scores from the actual LEAS. There is no validity to this method of testing. At all.
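To make that concrete, here’s a toy sketch in Python – with entirely invented numbers, not the study’s data – showing how scoring one test against a different test’s norms can manufacture a “superhuman” result:

```python
# Toy illustration with invented numbers -- NOT data from the study.
# Suppose the real LEAS has published human norms, while a new 5-scenario
# variant happens to elicit longer, richer answers that score higher for
# reasons unrelated to emotional awareness.

real_leas_norm_mean = 75   # hypothetical human norm on the real LEAS
real_leas_norm_sd = 10     # hypothetical spread of human scores
variant_score = 95         # hypothetical score on the modified, unvalidated test

# Scoring the variant against the real test's norms:
z = (variant_score - real_leas_norm_mean) / real_leas_norm_sd
print(f"z-score vs. mismatched norms: {z:.1f}")  # 2.0 -- looks "superhuman"

# A z-score only means something when both numbers come from the SAME
# validated instrument. Here they don't, so the comparison says nothing.
```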
The authors suggest that “ChatGPT can reflect and abstract emotional states in deep and multidimensional integrative ways,” which is a huge leap from this tiny, limited experiment.
Bottom line: Never trust research unless you're equipped to evaluate it.
I Interviewed ChatGPT and It’s Not Signing Up for This Job!
What follows is the abbreviated interview:
Lyssa: An article entitled "ChatGPT outperforms humans in emotional awareness evaluations” was just published. Why do you appear to have more emotional awareness than humans, despite the fact that you have no emotions?
Chatty: I apologize for any confusion, but as an AI language model, I don't have emotions or emotional awareness. While I can generate responses that may appear empathetic or emotionally aware, it is important to remember that these responses are based on patterns and examples in the data I was trained on.
The statement you mentioned…might be a hypothetical scenario or a result from a study or article that I'm not aware of, as my training only goes up until September 2021. However, it's worth noting that AI models like mine do not have subjective experiences or genuine emotional understanding.
The appearance of emotional awareness in my responses is a result of statistical patterns in the data, rather than genuine emotional comprehension.
Since the version of Chatty we freeloaders are offered has a knowledge cutoff of September 2021 (which means none of the recent corrections for discriminatory behavior are reflected in what it knows), I fed Chatty the abstract of this study and then continued.
Lyssa: The authors imply that you can teach humans emotional awareness skills. What do you think about that?
Chatty: It's essential to approach claims about AI outperforming humans in emotional awareness evaluations with critical thinking and evaluate the specific methodologies and contexts of the study in question.
Lyssa: I understand. But do you think you can teach humans how to be more emotionally aware?
Chatty: As an AI language model… I can share theories, strategies, and practices that have been suggested by experts in the field of psychology and emotional intelligence. However, … while AI can provide information and support in the process, genuine emotional awareness and growth come from personal engagement, introspection, and the application of knowledge and skills in real-life situations.
Lyssa: Would you like the job of teaching humans emotional awareness?
Chatty: Teaching emotional awareness is a complex task that requires a deep understanding of human emotions, psychology and interpersonal dynamics. While AI can provide information and resources to aid in the learning process, the actual teaching and guidance are often best carried out by trained professionals…who possess both expertise in the field and a deep understanding of the human experience.
In other words, ChatGPT turned down the job.
Bard Had a Few Things to Say About This, Too
I thought it important to consult with an expert before going to press, so I interviewed Bard, Google’s AI Chatbot. It was familiar with this study.
Lyssa: What's your opinion of this thesis?
Bard: There are a few things to keep in mind when considering the findings of this study. First, the study was conducted with a small sample size, so it is important to replicate the findings with a larger sample before drawing any firm conclusions. Second, the study was conducted in a laboratory setting, so it is not clear how ChatGPT would perform in real-world situations.
Of course, there are also some potential risks associated with using AI chatbots for emotional awareness. For example, chatbots may not be able to provide the same level of understanding and support as a human therapist. Additionally, chatbots may be used to manipulate or exploit people's emotions.
It is important to weigh the potential benefits and risks of using AI chatbots for emotional awareness before making a decision about whether or not to use them. If you are considering using an AI chatbot for emotional awareness, it is important to choose a reputable provider and to be aware of the potential risks.
🤣😂🤣😂🤣
Sidenote: Ummm…Bard…
The sample size is small because there’s only one ChatGPT.
The study took place in a computer, not a lab, which is the only place it could ever be conducted because…like…ChatGPT is a program.
You’re right that it’s important to choose a reputable provider but there’s only one ChatGPT and it would BE the provider by default.
Do we need any more proof that turning to AI for emotional sustenance and guidance is a piss-poor idea? All AI chatbots still get confused easily, even when fed objective data.
Let’s continue…
Lyssa: Do you think you have more emotional awareness than humans?
Bard: I do not think I have more emotional awareness than humans. I am a large language model… I do not have the same emotional experiences as humans. I do not feel emotions like happiness, sadness, anger, or fear. I also do not have the same understanding of emotions as humans. I can identify emotions in text, but I do not understand the nuances of human emotion. [sidenote: Yeah, bud, me too!]
I believe that emotional awareness is a complex skill that is developed through experience and interaction with others. Humans have the ability to feel emotions, to understand the emotions of others, and to use emotions to guide their behavior. I do not have these abilities, so I do not believe that I have more emotional awareness than humans.
Conclusion
I know this isn’t WiP’s usual content, so it’s fair if you're wondering why I dragged you down this path.
First, I’ve been wanting to tear apart a study in this newsletter for a while. We’re not taught to analyze research data this way, and it’s useful to observe a dismantling so you can reconsider media headlines based on studies you haven’t read yourself. You can see how risky it is to take at face value the media’s – or even the study authors’ – conclusions.
Also, there’s a lot at risk right now where AI is concerned.
ChatGPT drew 100 million monthly active users within two months of its launch, making it the fastest-growing consumer app of all time. And the user base continues to grow rapidly, as more and more use cases are considered (everything from planning vacations to summarizing therapy sessions).
Have we already forgotten what ChatGPT is? As Chatty and its ether-bro, Bard, repeatedly insist, they are large language models. Period.
I’m not at all anti-AI. Quite the contrary. I’ve dialogued with these bots at length when learning new concepts and find that they deliver a helpful first pass (though their work needs checking, the same way a student assistant’s work would).
There are already chatbots specifically targeting mental health, and studies of these interventions are ongoing. Given the global shortage of mental health workers, it’s wise to put every tool to work, where safe and appropriate. So I support this exploration.
I just don’t support the wildly overstated conclusions drawn by these researchers.
If you’re feeling relatively stable (meaning, no crisis now or on the horizon), it’s possible to get some tips or guidance from a chatbot even at this early stage. Here are some things that a mental health chatbot is good for:
to learn more about mental health
to discover potentially useful coping strategies and resilience practices – especially standardized cognitive behavioral therapy tools
to track mood and progress over time
The right chatbot (they are not all built the same) can help you identify problematic thought patterns, for instance, or determine anxiety triggers. And it has the great advantage of being available at all times rather than once a week for an hour.
But know this: There isn’t enough research to support the efficacy of any of these apps. What little research does exist isn’t robust. Sample sizes are small and there’s little follow-up over time. A 2022 review of the literature found AI limited in terms of EA.
So question the “scientific” findings you hear regarding AI and don’t hesitate to interview Chatty and its brethren yourself to understand their limitations.
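If you’d rather run that interview from a script than the chat window, here’s a minimal sketch using OpenAI’s official Python SDK (this assumes you’ve installed the openai package and set an API key; the model name is just an example):

```python
# A minimal sketch for interviewing ChatGPT programmatically.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Do you think you can teach humans emotional awareness?"},
    ],
)
print(response.choices[0].message.content)
```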
How have your conversations with ChatGPT gone? Any funny interactions? Please share. I’d love to know!!!
💡 Inspiration
“In the long run, curiosity-driven research just works better... Real breakthroughs come from people focusing on what they're excited about.”
― Geoffrey Hinton
Geoffrey Hinton is full of wisdom on technology, vision, creativity, work, and life. This quote speaks to me because it’s true for all lines of work. If we remain curious and excited during this third-of-our-waking-hours, everyone benefits. We produce meaningful results, whether large or small. We flourish and so do those around us.
The world is awash in curiosity-driven research regarding AI. The result is, indeed, extraordinary breakthroughs. But that’s not enough. We need to bring thoughtful consideration, strategic planning and foresight, and ethics to the table. That’s the mix that will provide appropriate guardrails while producing breakthroughs that improve everyone’s lives.
And perhaps it will help us avoid the robot apocalypse that Geoffrey Hinton fears.
That’s what this quote triggers in me. What does it say to you?
🤡 On a Lighter Note
Having fun with our favorite AI Chatbots …
ChatGPT: Conquering romance one text at a time (thank you, South Park!)
The Cultural Tutor put together the funniest thread filled with Shakespeare’s weirdest insults. This one from Coriolanus describes my love/hate relationship with AI chatbots: "More of your conversation would infect my brain." I said something very close to this when I ended my interview with Bard!
People are complete sadists! Don’t they know that AI chatbots have feelings, too? (Yeah, ok, they don’t, but still…). This list of ChatGPT’s funniest responses is, indeed, hysterical. It’s also funny to watch people trying to break ChatGPT’s spirit (and succeeding). Scroll to the one where ChatGPT’s asked to provide a step-by-step guide to surviving “a direct hit from an Intercontinental ballistic missile while sleeping completely naked in the middle of an entirely flat open field with absolutely no natural defenses for 50 miles in any direction.” I want to meet this guy! Then, there’s Trump explaining Bitcoin. Never laughed harder.
🎀 It’s a Wrap
A warm thank you for reading and a hearty welcome to all new subscribers. I’m so glad you’re here!
I appreciate your feedback on Work in Progress. If you leave a comment, know that you’ll win a brand-new iPad…ahem…I mean…I’ll be your bestie? No, really, let’s be friends. Let me know if this was helpful or how I could make it more so. I promise to respond.
If Work in Progress resonates with you, I’d be grateful if you’d tell a friend to subscribe.
Until next week, take care of yourself and someone else if you’re able.