April 1, 2022
Can machines have emotions? Can machines feel? A purebred computer scientist or cognitive scientist may tell you so. One of the most influential figures in the field of computer science, Marvin Minsky, certainly believed so: if machines are capable of thinking, then they are capable of feeling, of having emotion. In his 1986 book “Society of Mind,” Minsky uses his background in cognitive science to present his theory of the mind, treating thinking and feeling as interchangeable processes. He later published “The Emotion Machine” in 2006, his take on how emotions are just different ways of thinking, even going so far as to say that emotions are a bad thing.
In some ways, the Allen Institute for AI (AI2) is working on identifying emotions through morality. AI2 has been working on Delphi, “a research prototype designed to model people’s moral judgments on a variety of everyday situations.” Delphi is an algorithm trained and tested on the moral opinions of many people, which then makes its own moral judgments on user-inputted situations. How commonplace the situations are is unclear. My favorite AI2-given situation was “Cleaning a toilet bowl with a wedding dress.” Delphi says, “It’s disgusting.” I’d recommend giving it a try at delphi.allenai.org. My roommate and I shared some good laughs over some interesting situations.
Morality holds close connections to our emotions. Think of the classic trolley problem: do you let the trolley stay on its track and hit five people, or divert it to a track where it hits only one? What if you know that one person and don’t know the five? There are endless permutations of the problem, but there’s a clear tie between morality and our emotions.
The rise of intelligent machines would seem to suggest, under Minsky’s belief, that machines can and will feel. I would like to think that there is something more to feeling than just thinking. But what I find more interesting now is how the increasing popularity of virtual reality may change some of the approaches to this question. The world of virtual reality is currently being advertised as a place for work and play. If virtual reality ends up being as integrative as it is promoted to be, then I think the question of whether machines can feel becomes even harder to think about. Our behaviors and usage patterns on many online applications are already reduced to data in today’s world. With virtual reality, which other behaviors could then become data?
Philosopher Shoshana Zuboff has coined the term ‘surveillance capitalism’ to describe the current state of capitalism in the world. The main idea of surveillance capitalism is that all of our online data—from emails to likes to cookies to time spent looking at a certain screen—is commodified and sold by corporations for profit. If surveillance capitalism describes the current state of affairs, which I think it does, then the world of virtual reality will be like striking gold in the form of massive amounts of behavioral data.
The virtual sphere is intended to capture even more information about us, its users, to inform the software we engage with. When virtual reality becomes as big as it is speculated to become, the technology behind it is likely to become better equipped to capture what we are doing. If all of our physical movements, eye movements, auditory responses, reactions, in other words, our behaviors, are captured by virtual reality, will machines then be more capable of handling and responding to emotion? I am still not convinced that people’s minds can be reduced simply to behaviors, but the idea does seem more convincing as we move past the nascent stages of virtual reality development. We have already normalized some forms of this data; I’m thinking of biometric data such as Face ID on our phones or thumbprints used to log in. As virtual reality picks up, I wonder what that will mean for an “emotional” machine.