Reading Between the Lines with Rob Volpe
Where's the Empathy in AI?
Table of Contents
→ Special Deal on Kindle e-book - EXPIRES FEB 1
→ Empathy and AI Part One: Table Setting
→ Empathy and AI Part Two: Is this Really My Writing?
→ Empathy and AI Part Three: Can AI Be Taught to Be Empathetic?
→ Media Appearances: Keeping New Year’s Intentions with Self-Empathy
Hello! Hope you are having a great day!
Welcome to all the new members of the community - thank you for subscribing and I’ve put more personal info about me in the last section of this newsletter so you can learn more about the person that just dropped this in your inbox.
And as always, a quick reminder of what you can expect from me in each edition of Reading Between the Lines…
My thinking is here in the newsletter. Links are for diving deeper.
I strive to deliver ‘news you can use’.
I also share insights into human behavior and topics I’m thinking about.
I include amusing or interesting “slice of life” moments.
The Q&A feature is based on questions that come up in conversation - please send me your questions!
I’d like to hear your thoughts - ‘reply’ to this email or reach out directly to: [email protected]
Limited Time Promotional Price for Kindle e-book edition of Tell Me More About That: Solving the Empathy Crisis One Conversation at a Time!
I’m excited to announce that we’re running a special promotion on the e-book (Kindle) edition now through February 1! BookBub selected my book as a featured title this week in key markets, so we made the offer available to all potential readers. The US price is $0.99 through Feb 1, and it’s similarly discounted in the UK, Australia, Canada and India. Please share with any friends, colleagues or connections you think might be interested, or if you haven’t picked up the book yet yourself, this is a low-risk way to do so.
The hardcover remains available wherever you buy books and I narrated the audiobook if that’s your preferred format.
And, if you have a reading group, or would like to do a conversation with the author (me) for a work or social group, the e-book is a great way to get that started. Just email me at [email protected] once you’ve bought the books and we can make that happen.
Where’s the Empathy in AI?
This is a big topic that’s both broad and deep. I won’t get every thought I have about AI and empathy squeezed in here, as I’m mindful of your time and attention, but I have to start somewhere. So here we go…
I do my best to stay open to recognizing patterns and to look for meaning in them. I’ve always been that way, and my skills have sharpened over the past 17 years of doing insights and strategy research. Patterns help identify trends - both trends as in ‘trendy, what’s hot’ and trends as in ‘trending, pay attention to this change over time’. Once we spot the trend, what matters is what we do with it.
If something keeps popping up, I think of it like a child seeking attention. The child will continue tugging on our sleeve until we address it. I was encouraged by my energy healer, Marie, to write about AI and empathy months ago. But I put it off. I’d think about it from time to time and kinda sorta plan for it, but the topic is so big and broad it felt daunting.
Until now. What changed? I recognized a pattern. Over the course of a week, conversations on AI became that kid tugging on the back of my shirt, looking for attention. And these were all new discussions on aspects of AI that I wasn’t having six months ago. As I reflected on this trending topic, I took it as a sign it was time to begin to write about it.
Let’s start with some “table setting” and then I’ve written distinct sections based on the conversations and experiences I had. From the practical to the philosophical. And if I get something wrong or you have a different perspective, I’d love to hear from you - just email me [email protected] or leave a note in the comments below! 🙏🏽
The basics, as I understand them…
I recognize that “artificial intelligence” is quite varied and, depending on how you define it, has been around for years. For purposes of this newsletter, I’m thinking about generative AI built on “large language models” (LLMs). That’s ChatGPT and the other programs being developed where AI learns from information that’s on the internet. GPT-4 is now relatively current: it was trained on data as recent as March 2023, and depending on your subscription and whether you have Browse with Bing enabled, it can also search the live web.
Many of the other AI programs and apps are powered by GPT models running in the background, making those apps possible.
Large language models are designed to predict the next word or phrase based on all the words and phrases they ingested in training or can find. Meaning a model is only as good as the content it’s trained on. The real eye-opener for me was when someone explained that “truth” or “fact” is not part of the base programming. This is part of why it will “hallucinate” and make up answers that seem plausible: it’s completing the programming instruction of answering the prompt based on what has come before. In other words, it felt forced to complete the assignment, so it made stuff up in order to turn the paper in on time. Lawyers in New York learned the hard way that you have to review and fact-check ChatGPT’s work after it made up six cases cited in a brief filed with the court. The cases made sense and supported the brief, except that they didn’t exist.
AGI, or Artificial General Intelligence, is the effort to create a program that can think for itself. Many take this to the “robots take over the world” scenarios of science fiction, like HAL in 2001: A Space Odyssey or the Terminator movies. AGI is under development by ChatGPT’s parent organization, OpenAI, and others are chasing it as well, but it hasn’t been formally introduced to the world.
One newsletter I’ve found helpful is Ben’s Bites. It’s daily and is a digest of all the happenings in the world of AI with some select deep dives.
Moving over to empathy, this is an ability that is very broadly about connecting with others. There are two types of empathy. Cognitive empathy is understanding the point of view of other people. That point of view can be a belief, behavior, opinion, value, lived experience or perspective. Think of it as “understanding where they are coming from.” Emotional or affective empathy is feeling the feelings of others, as they are feeling them or conveying them to you.
Cognitive empathy is the fundamental ability that empowers the skills we use in daily life. It’s also what’s currently getting programmed into AI applications, as you can pretty easily anticipate “if they say this, reply with this in empathetic support.” Emotional empathy seems harder to program, as it’s based on connecting to and feeling the feeling from the heart rather than the head. It’s possible to cognitively relate to what someone is feeling - to imagine what it might feel like - but AI, to the best of my knowledge, doesn’t truly have feelings. Or does it?
Is This Really My Writing?
The first conversation and the one that sparked my deepest philosophical questioning was following an introductory call with Eric Forst of Creator Pro AI, which provides application GPTs developed for different tasks. Eric and I had a great conversation that went beyond a typical intro call. And then I got a follow-up email from him where he shared a ChatGPT generated paragraph with thoughts about remote work, which you can read in this image...
ChatGPT generated paragraph to a prompt about the pros and cons of remote work.
And then, Eric asked ChatGPT to revise it as though I had written it…
The same paragraph revised to be in my voice.
A lot of thoughts ran through my head when I read that rewrite. I could hear parts of my writing voice in those sentences. These were words that I’d use. AI definitely picked up on my penchant for modifiers, metaphor and analogy. However, it layered those literary devices on thick, like the scene in Friends when Joey wears all of Chandler’s clothes at the same time.
See what I did there? 😉
To create that revised paragraph, ChatGPT scanned the web and took what it knew or could find about my writing. Clearly my use of storytelling, metaphor and analogy was showing up as a distinguishing characteristic, and so ChatGPT anticipated that I would incorporate those tools if I were writing that piece myself. But like Joey, ChatGPT took it too far and put it into every sentence.
Yes, ChatGPT could be trained not to lay it on so thick, and the piece lacked a POV. But, as people often say about AI, it got about 70% of the way there. I could see that response being generated, then I could edit it and insert my perspective so it felt closer to my voice, and then publish it.
But is that writing really me? And does that truly matter?
I pride myself on my ability to find the right words and place them in a hopefully engaging order. When I write, I’m continually thinking and shaping what I’m going to say. The superhero metaphor that I used in the last newsletter about empathic distress and compassion fatigue came to me while I was writing. And it strengthened as I went along. I got more positive feedback on that newsletter than I have on any other so something must have worked.
If I were to use AI to draft a piece of my writing, would it truly be coming from me? Or is it a facsimile of me, like those forged pieces of art in The Thomas Crown Affair? I select words with intention, not by predicting what should come next based on all the words, phrases and sentences that have come before.
Minutes after I received these examples from Eric, I had a conversation with a friend who’s increasingly using AI in their work. The discussion revealed a contrast between us as they didn’t have a concern about provenance while I was having a deep reflective moment. For them, the analysis and writing jump start was a means to an end. I was thinking about it in terms of sharing original thoughts and ideas in an engaging, entertaining way.
When I’m writing, I could similarly train ChatGPT to generate the content after I prompt it with my thoughts and feelings and have it work on that. Would I be the writer in that case or should I be called the “prompter”?
Generating sentences is different from creating sentences. How many sentences do you write during the day that can be generated compared to the ones you need to create? Sentences that are created represent the fusion of thoughts and feelings. The words we choose to express those thoughts and feelings are distinct to each of us. While they may bear similarity to what has come before in our own writing and in the sentences of someone else, we still have the freedom to express ourselves based on where we are in the moment rather than be predicted.
I can see where my friend, who considers having trained AI to be a junior person on their team, has generated use cases where routine tasks or the start of a bigger document can be handled by AI. These are early steps where original thought and critical thinking haven’t been applied yet. How far can we take it and what would that mean?
I encourage people to try AI, if they are comfortable, to get started, but not to rely on it as the final product. It remains a facsimile. For now.
What I know is that I am a writer and artist at heart and also a published, award-winning author. Words are important to me. I prefer these words and thoughts remain my own.
Can AI Be Taught to Be Empathetic?
The power of empathy is that it enables better use of many of our everyday skills, which leads to positive outcomes and makes us more successful in the roles we play in work and life.
Empathy is an inherent ability we are all born with, and it develops when we have it modeled for us and are given opportunities to experience it. Like a muscle, it gets stronger with use until it flexes on instinct.
Cognitive empathy is the recognition of someone else’s feelings or point of view and then responding using supportive (empathetic) words. While we all need to find the words and phrases we are comfortable using to express empathetic support, it could be scripted and therefore programmed into chatbots and AI.
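To make “scripted” concrete, here’s a purely hypothetical sketch of what a rule-based empathy layer in a chatbot might look like. The trigger words and responses are invented for illustration - no real product works exactly this way:

```python
# Illustrative only: a hypothetical "if they say this, reply with this"
# empathy layer for a support chatbot. The triggers and responses below
# are made up for this sketch, not taken from any real product.

EMPATHY_RULES = [
    (("frustrated", "annoyed", "angry"),
     "That sounds really frustrating. Let's see what we can do."),
    (("confused", "lost", "don't understand"),
     "I hear you - this can be confusing. Let me walk you through it."),
    (("worried", "anxious", "scared"),
     "It makes sense that you'd feel worried. You're not alone in this."),
]

DEFAULT = "Thanks for sharing that. Tell me more about what's going on."

def empathetic_reply(message: str) -> str:
    lowered = message.lower()
    for triggers, response in EMPATHY_RULES:
        # First matching trigger word wins
        if any(t in lowered for t in triggers):
            return response
    return DEFAULT

print(empathetic_reply("I'm so frustrated with this billing error"))
# -> "That sounds really frustrating. Let's see what we can do."
```

Anything that doesn’t match a rule falls through to a neutral default - which is exactly why scripted empathy can feel hollow the moment a conversation goes off-script.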
And since 97% of customers say empathy is the most important element in customer service according to a Genesys/MyCustomer survey in 2021, there’s money to be made in making sure people are trained and using empathy most effectively.
This has led to empathy being programmed into chatbots that provide support in customer service as well as other times when people might need support, like mental health.
One experiment found a nearly 20% increase in empathetic response when a chatbot was coaching volunteer support personnel at an online peer support site. This Wall Street Journal article from October 2023 has more information on this and other programs under development. From what I can tell, these are all using the application of cognitive empathy.
The expression of empathy however isn’t just a function of interpreting the words people say. There are also decisions to be made in how to communicate back in a way that supports the individual. This is a decision that’s made on the spot and often using our instincts and intuition.
In fact, the WSJ article points out that the National Eating Disorder Association had to suspend its bot because it shared therapeutic advice that would be harmful to someone with an eating disorder. Another online mental health platform, Koko, ran an experiment where it used GPT-3 alone, rather than in tandem with a human, to provide responses. It seems people didn’t care for the responses, at least once they found out.
The idea of having a bot help prompt more empathetic responses seems like a great idea. It’d be like having a mini-me sitting on your shoulder whispering in your ear. Companies are already seeing results from increased use of cognitive empathy in communication.
The big watch-out is situations that call for deeper emotional empathy, like when you are providing comfort and support, where the bots are incapable of truly understanding or authentically replying.
I know from my friends who work in tech that our biases get baked into code the moment someone starts writing it, just as my biases are baked into the words I’m writing here. What’s happening to eliminate that bias in programmed cognitive empathy, so that it extends equally to everyone rather than letting some judgment slip in?
And when the code is cracked on AGI - general intelligence - and emotion isn’t part of the sentience, then what? Empathy comes from sensing the emotions of others through body language and spoken words, as well as some understanding of the other person’s lived experience. (That’s why it’s easier to have emotional empathy with the people in our bubble or cave - the more similar we are, the easier it is to connect.) Will compassion be part of the programming? Maybe a little sympathy, where AI will feel ‘for’ us rather than the empathetic feeling ‘with’ us?
The problem with sympathy, as Brené Brown explains in this short video, is that it creates a power structure where the person expressing sympathy holds power over the recipient. Should we give that power to a non-human intelligence?
Like tech bubbles before it, we’re in the ‘wild west’ period of AI. Now is the time to learn, join the conversation and help shape the future.
Keeping New Year’s Intentions Using Self-Empathy
I’ve had the good fortune to land three television appearances to discuss how empathy can help maintain New Year’s resolutions. Studies have found, as I shared recently, that 80% of us will give up on our resolutions by the end of January, and only 9% will make it through to the end of the year.
In addition to appearing on Good Things Utah to discuss this, I also appeared in-studio with Colorado & Co. as well as Great Day Colorado, both out of Denver.
I also have a bylined piece coming out on Friday, February 2, in Entrepreneur’s online leadership forum about keeping intentions at work and with your team.
Live in studio for Colorado & Co with host Claudia Garofalo - as seen on my aunt’s TV
Zooming in from San Francisco for Great Day Colorado
I hope you liked this edition.
Please help spread the word to keep the community growing - pass this newsletter along to someone you know who might also enjoy it. Either forward this email or invite them to subscribe at the click of the button below.
Reading Between the Lines delivers of-the-moment insights into empathy and human behavior; expect practical tips on using the skill of empathy in everyday life and exclusive updates to keep my community close. All on a biweekly basis.