So We’re Supposed to be as Empathic to Bots as We are to Humans? This Lady Philosopher is nuts!

by
Skip

These Artificial Intelligence researchers are demanding that we show their A.I. bots empathy, reciprocity, and trust, and they don’t understand why we humans refuse to do so. They are like Bruce Currie (Bernie-Bro and (seemingly former) frequent ‘Grok commenter), who insisted that he could lay obligations on the rest of us that matched his obligation to his Collective view of Society.

Nonsense. Here’s your answer, nice and simple: Artificial Intelligence bots aren’t human. I owe them NOTHING. Neither do you. So here’s the issue (reformatted, emphasis mine):

Empathy, of course, is a two-way street, and we humans don’t exhibit a whole lot more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent A.I., they are less likely to do so than if the bot were an actual person.

And that’s how Normal people would react – bots are not human, so why should we treat them as such? Watch as this turns into the same stupid idea as the movement insisting that “Nature” (animals, plants, rivers, mountains) has “Human Rights” and should be treated as such. Frankly, these people have 1) too much time on their hands and 2) too little regard for what it means to be human. Sure, Tom Hanks in Cast Away anthropomorphized a volleyball into being “Wilson,” but that was just a movie – still, the point is made: we’re supposed to be “good” with inanimate objects?

“There seems to be something missing regarding reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University, in Munich, told me. “We basically would treat a perfect stranger better than A.I.”

And what kind of philosopher is Deroy, who sets up an equivalence between A.I. bots and humans? I know people who prize technology over humans, but this is getting ridiculous.

In a recent study, Dr. Deroy and her neuroscientist colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes A.I.; each pair then played a series of classic economic games — Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity — designed to gauge and reward cooperativeness.

Our lack of reciprocity toward A.I. is commonly assumed to reflect a lack of trust. It’s hyper-rational and unfeeling, after all, surely just out for itself, unlikely to cooperate, so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot, it’s that we do: The bot is guaranteed benevolent, a capital-S sucker, so we exploit it.
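If you want to see in plain numbers why a “guaranteed benevolent” partner gets exploited, here is a minimal sketch in Python, assuming the textbook Prisoner’s Dilemma payoffs (3/3 for mutual cooperation, 5/0 for exploiting a cooperator, 1/1 for mutual defection) rather than whatever parameters Deroy’s team actually used – the function name and payoff values are purely illustrative, not from the study.

```python
# Hypothetical illustration: a one-shot Prisoner's Dilemma against a partner
# that is guaranteed to cooperate (the "capital-S sucker" scenario).
# Payoff values are textbook defaults, NOT the study's actual parameters.

# (my_payoff, partner_payoff) indexed by (my_move, partner_move)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # mutual cooperation
    ("cooperate", "defect"):    (0, 5),   # I get exploited
    ("defect",    "cooperate"): (5, 0),   # I exploit the partner
    ("defect",    "defect"):    (1, 1),   # mutual defection
}

def best_response_to_guaranteed_cooperator() -> str:
    """Return the move that maximizes my payoff when the partner always cooperates."""
    return max(["cooperate", "defect"],
               key=lambda move: PAYOFFS[(move, "cooperate")][0])

if __name__ == "__main__":
    move = best_response_to_guaranteed_cooperator()
    print(f"Against a bot that always cooperates, the payoff-maximizing move is: {move}")
    # -> "defect": the always-benevolent bot gets exploited, which is
    #    the behavior the study reports people showing toward the A.I.
```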

Great, so now we’re supposed to treat it like a child because we’re playing a game with it of their own making? Sorry, I do that with my Grandson and other little ones – you DO treat them differently than you do adults (er, more toward “ruthless” than “happy talk”) because adults have agency and self-will.

Bots don’t. But that doesn’t stop Deroy and her misaligned cohorts:

That conclusion was borne out by conversations afterward with the study’s participants. “Not only did they tend to not reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the trust of the bot, they didn’t report guilt, whereas with humans they did.” She added, “You can just ignore the bot and there is no feeling that you have broken any mutual obligation.”

Obligation. Guilt. Trust. Those are attributes we assign to the humans that we like (er, well, most of them. Not liking those humans that are trying to turn my world upside down – for them, there is no obligation that can be laid upon me, I feel no guilt at all for their machinations to pervert my sense of right and wrong as well as my political outlook, and given how they’ve treated me and what they’ve said I am, there’s absolutely no trust towards them).

Sorry, Doc, I think that while your bot might be OK, you certainly are not. You need to get a grip on yourself and look at the obvious nature of humans instead of having gotten well ahead of us on loving A.I. Remember: “Terminator.” If she envisions a Utopia, she needs to be reminded that Dystopia can also be just around the next bend of the circuit board.

This could have real-world implications. When we think about A.I., we tend to think about the Alexas and Siris of our future world, with whom we might form some sort of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the A.I. doesn’t account for your bad behavior, an accident could ensue.

Yep, she’s gone off the edge but does show she believes that A.I. should be treated exactly the same way I treat my best buddies. Wrong, wrong, WRONG! A hunk of metal and plastic with electrons whirring about is NOT equivalent to another human being. I have NEVER, EVER thought of talking into my phone with a “faux-intimate relationship,” especially when it is forever dialing the wrong number. Or “Sorry, I don’t understand.” At that point, my only interest is in seeing its aerodynamic performance relative to a Frisbee – no relationship required.

And get that “your bad behavior” bit – apparently, she’s never driven on Storrow Drive in Boston trying to get up onto 93N during rush hour – NOW you want to talk about bad behavior? There is only ONE behavior required: The Biggest and The Oldest – how much are you willing to risk?

“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is exactly to make people follow social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”

This is starting to sound like the SJW / BLM / CRT / LGBTQRSTUV model – you WILL be assimilated and you WILL behave according to OUR new norms that we are laying down to you (not FOR you, TO you). Sorry, but now I’m getting peeved. I owe robots nothing unless they are bigger than me and can squash me like a bug if I don’t stay out of their way (like industrial robots with arms swinging and wheels rolling where some dope of a software engineer forgot to program in Asimov’s Laws of Robotics).

…There are similar consequences for A.I., too. “If people treat them badly, they’re programmed to learn from what they experience,” she said. “An A.I. that was put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever.” (That’s the other half of the premise of “Westworld,” basically.)

There we have it: The true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know that humanity has reached the pinnacle of achievement. By then, hopefully, A.I. therapy will be sophisticated enough to help driverless cars solve their anger-management issues.

Actually, they won’t be honking….

(H/T: Hot Air (with the original snippet I saw) to NYT (which had paywalled the whole thing) to DNYUZ (who posted the whole thing))

Author

  • Skip

    Co-founder of GraniteGrok, my concern is around Individual Liberty and Freedom and how the Government is taking that away. As an evangelical Christian and Conservative with small "L" libertarian leanings, my fight is with Progressives forcing a collectivized, secular humanistic future upon us. As a TEA Party activist, citizen journalist, and pundit, my goal is to use the New Media to advance the radical notions of America's Founders back into our culture.
