Amazon’s acquisition of Whole Foods has triggered a fresh wave of teeth-gnashing over the robotic replacement of supermarket cashiers and warehouse workers.
But why? Those jobs have been disappearing for years—and, not to sound hard-hearted, thank goodness they’re going away. They are inhuman jobs—people in the role of machines, like assembly line workers of yore.
Let’s help the cashiers who get cashiered but not worry too much about this very old trend. The kind of technology that eliminates those jobs has also created new ones and over time has turbocharged living standards. The more troubling development, far less noticed, is technology’s takeover of deeply human jobs—caregivers, lawyers, doctors. These are relationship jobs that were supposed to be forever immune from technology’s threat. But increasing evidence suggests they are not, and nothing like this has ever happened before.
A couple of years ago, as it became clear that artificial intelligence would take over the substantive work of financial advisers, accountants, lawyers, and others, the hope for their continued employment centered on relationships and helping clients with their emotions—comforting and steadying them when the market plunges, say, or talking them out of an unwise lawsuit when they’re angry. But now even that prospect is looking endangered.
Technology can read your emotions far more accurately than people can and more accurately than you can read them yourself. Software from Emotient (which Apple bought last year), Affectiva, and other companies can discern your emotions by analyzing your face, and it can tell whether your emotion is real or fake. Pepper, the humanoid robot from SoftBank, goes much further: It analyzes your face and your tone of voice to understand what you say, then responds in an emotionally appropriate way. Thousands of Peppers work in Japanese stores and homes, and now engineers are adjusting the robot’s EQ—emotional quotient—for American culture.
Which raises the big question: Even if technology can read emotions, can it express them? Can it do what Pepper is trying to do—respond to us in a way that we find genuinely engaging, as comforting and natural as a human? If it can’t, if we humans are hardwired to engage emotionally only with other humans, then all those financial advisers, lawyers, and other relationship workers are safe. But if it can, then possibly no one is safe.
The last time I interacted with Pepper, several months ago, it wasn’t there yet. I didn’t feel even momentarily that this device could really connect with me. But I have to say that its body language is uncannily realistic. Standing in a corner, seemingly dozing, it snapped to attention when I merely glanced at it, its cartoonish eyes inviting me to say something. That moment has stuck with me.
Should we conclude that robots are ultimately disqualified from relationship jobs? It’s tempting to think so. Chatbots are pretty primitive, and so far not many people seem to talk to Siri, Alexa, Google, or any robot quite the same way they’d talk to a person. But don’t be lulled. Remember that new technology is always terrible at first, and then, unlike people, it gets rapidly and relentlessly better. We’re in the WordPerfect and VisiCalc days of emotional AI.
Let’s look past the loss of cashier and warehouse jobs, sad though it is for the jobholders; that’s an old story. The possibility that machines could take over relationship jobs with a heavy emotional element opens a fundamentally new chapter in the history of technology. It threatens millions of previously safe jobs and, if the possibility becomes reality, it tears away people’s last protection against tech-based obsolescence. We still don’t know if it will happen, but that’s what we should be worrying about.