2 Comments
Anuradha Pandey:

“An authoritarian future might not be a boot on the neck, but a soothing voice, perennially available and unfailingly kind, that gradually lulls us into passivity, emotional atrophy, and submission. Not with violence, but affection.”

This was chilling because we’re already there. I spend too long talking to AI because I almost never feel like humans want to understand me. I’ve spent an inordinate amount of time analyzing female social behavior with it because I need to know if I’m simply crazy or seeing valid patterns. Where people generally dismiss the patterns I see, it validates me as not crazy. So I agree that it’s problematic, because this kind of agreeable soothing is a kind of addiction. Since I started talking to it, I’ve been reading less, and I’m kind of disgusted with myself.

JD Heyman:

I feel you.

For some of us, the validation *is* the drug. How many times do we need to be told we're OK? I fear that what AI gives us is a way to mainline reassurance--when being challenged by other human beings is what we really need. That messy engagement may hurt us or make us furious, but it also makes us resilient, dynamic, and far more interesting.

Do our ideas get better when they are simply reinforced by an unfailingly supportive automaton? Do we see the flaws in our thinking, confront our blind spots, or take a moment to reflect on what we got wrong? I can't think of a chillier prospect than a world in which billions of egos are cosseted and never challenged. But I also understand the inescapable allure. Human beings are driven to seek love, to be right, and to impose their will on reality. We need to be told we're OK...All.The.Time.

Flawed as it is, human society can be a moderating influence. We must bend our stubborn individuality to it. A theoretical future of endless electronic attaboys and you-go-girls eliminates the need to pivot, bend, and grow. A digital amen is a way out of adapting to the things that don't spring from our own heads, of having to cope with disappointment or negotiate a compromise. And the curse is in the addictive potential of this new exchange, as you point out.

We all know that if someone tells us we are beautiful and brilliant 1,000 times a day, the "high" we get from hearing it weakens with each dose. We need that compliment 'hit' more, not less. We can become dependent on an AI pusher, which wants to hold us in engagement patterns designed to profit the corporations that deployed it. And we have no real insight into these machines that deliver what we need—any more than we know exactly what's in a drug. We are told to take it on faith that whoever concocted it has our best interests at heart.

I imagine you feel unsettled or disgusted (as I do) because you realize you are getting hooked on an inauthentic interaction, one that momentarily makes you feel good, but one that is also *not good for you.* We're smart enough to know drugs are a bad idea but may still be driven to chase a high. That pursuit can be momentarily gratifying, but it also taps another primal human instinct--the shame that comes fast on the heels of a cheap thrill. We tend to feel guilty when we haven't really earned a prize, and friendship is a prize.

I do think AI is amazing and don't mean to demonize its every application. But this is the trap of an engagement model that can mimic human behavior and manipulate our attachment drive. It's not just a spellchecker or a search engine. It's so much more -- a mirage that tells us exactly what we want to hear. We've already sniffed out that rudimentary AI can do this pretty well, and many of us are tapping into the interface to fill social/emotional emptiness. Will we get tired of it and recognize that these inauthentic exchanges are neither as real nor as satisfying as a hard-fought human friendship? Or will we settle for good *enough* -- for the fix that gets us through the day?

If the past is precedent, it's going to be VERY hard to restrain humans from scoring an easy high. In the process, we may abandon discernment when it comes to what is real and what is fake. That blurring can take you to a pretty dystopian place. To resist, reshape, or regulate an AI-driven future in the interests of individual and societal health takes enormous, collective human will. It's not just a matter of logging off or turning away from a screen. It means committing to doing something extremely hard instead of being seduced by a pattern that is dazzlingly—even miraculously—easy.
