Technology mimics humanity

The Uncanny Valley

In 1970, the robotics professor Masahiro Mori articulated the concept of the uncanny valley, an emotional reaction against non-sentient objects that have been designed to mimic human beings. While people can feel quite comfortable playfully anthropomorphizing objects that are obviously inhuman, they begin to feel a growing sense of unease when they encounter machines that are devoid of the subjectivity at the core of human experience, yet are capable of actions that closely imitate human behavior, creating the superficial appearance of a subjective, human-like identity.

Popular writing about digital technology often provokes the emotion of the uncanny valley by asserting that digital devices are beginning to have minds that can think and feel the way a human mind does. Two weeks ago, for instance, Fast Company breathlessly published an article claiming, “There’s a new AI that can tell how you feel just by watching you walk”, as if artificial intelligence were capable of conceptualizing what a feeling is. A recent “paid program” at Forbes goes even further, claiming that Affectiva is “creating a new world of machines that can feel”.

Such assertions by Emotion AI, the field that claims to use machine learning algorithms to “understand” human emotion, or even to create “feeling machines”, provoke immediate suspicion, and with good reason. A new scientific review of the foundations of Emotion AI concludes that technologies claiming to detect human emotion are thoroughly flawed, relying on poorly designed studies that suffer from limited reliability and generalizability, as well as a lack of specificity.

The emotion of the uncanny valley doesn’t arise from the shoddy character of the scientific work that’s been used to prop up the Emotion AI industry, however. It arises from fear of the human motivations driving the development of Emotion AI systems. We fear that the corporations deploying systems that scan our emotions automatically, and without our permission, are seeking to manipulate us into behaviors that are profitable to the business but detrimental to our own wellbeing.

What kind of person, we wonder, would purposefully invent a machine that deceives people with false displays of simulated emotion, orchestrated by automated electronic scans that reduce our subtle feelings to mathematical formulas? The answer that unnerves us, and provokes the uncanny valley, is that such a person regards emotion more as a tool of control than as an opportunity to build authentic connection.