Appropriate emotions for machines

Alistair Edwards asks: What emotions would be appropriate to a machine?

Having neither a biological reality (body) nor an evolutionary history, a machine cannot feel the basic primal emotions, including fear, anger, and sexually related drives. If the machine cannot feel these emotions, then it cannot use such feelings in decision-making (action). Therefore, it is inappropriate for the machine to express (or, more accurately, to pretend to express) those emotions.

On the other hand, some of the social emotions may be within bounds for a machine. For example, with stochastic processes now used for pattern matching (speech recognition, facial recognition, voice biometrics), the concept of uncertainty, or degrees of certainty, comes into play. When the machine's model assigns high probability to its representation of the current state, we can allow the machine to exhibit confidence. Indeed, the term "confidence" is used to describe high-probability "beliefs" on the part of the machine.
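As a minimal sketch of this idea, a recognizer's posterior probability can be binned into coarse confidence labels. The threshold values and label names below are illustrative assumptions, not drawn from the post:

```python
def confidence_label(posterior: float) -> str:
    """Map a recognizer's posterior probability (0..1) to a coarse
    confidence label. Thresholds are hypothetical, chosen for illustration."""
    if posterior >= 0.85:
        return "confident"
    if posterior >= 0.5:
        return "uncertain"
    return "low-confidence"
```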

A confident machine can be expected to exhibit certain behaviors -- in the form of shorter turn-taking cues, faster machine speech, and particular prosodic inflections. As confidence goes down -- that is, the machine's "belief" in its own model of the current state is low -- then machine behaviors change. Emotions along this axis lie on a continuum from annoyance to embarrassment. If the context of the current state implies lack of confidence in the machine's own performance, then embarrassment is the appropriate emotion. The vocal cues are designed to extract cooperative behavior from the user aimed at "helping" the machine. If the context of the current state implies lack of confidence in the user, then annoyance is the result. The vocal cues are designed to induce embarrassment or shame in the user, leading to a "correction" of errorful or inattentive user behaviors.
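The confidence-to-behavior mapping described above might be sketched as a simple function from a confidence score to speech-synthesis settings. The parameter names and numeric ranges here are hypothetical; only the direction of the relationships (higher confidence, faster speech, shorter pauses) comes from the text:

```python
def speech_parameters(confidence: float) -> dict:
    """Illustrative mapping from machine confidence (0..1) to prosodic
    settings: higher confidence yields faster speech and shorter
    turn-taking pauses. All numbers are assumed for the sketch."""
    return {
        "speech_rate": 0.8 + 0.4 * confidence,               # relative rate, 0.8..1.2
        "turn_taking_pause_ms": int(800 - 500 * confidence), # 300..800 ms
        "pitch_variation": 0.5 + 0.5 * confidence,           # flatter when unsure
    }
```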

Input/output maps could be designed in a 2-dimensional space that captures dialogue (and other internal and external) events to construct a representation of the machine's confidence in its own performance and in the user with which it is interacting. Behaviors along the triangle of emotions derived from confidence, annoyance (at the user), or embarrassment (at its own mistakes) would in turn modulate certain artificial speech parameters. With some careful experimentation, we may be able to find out which emotional behaviors cause the desired change in the user's emotion and subsequently in the user's speech behavior, creating a feedback loop of conversational stability.
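The 2-dimensional map might look like the following sketch: two scalar confidence estimates (in the machine's own performance, and in the user) select one corner of the emotion triangle. The thresholds and the decision rule are illustrative assumptions:

```python
def machine_emotion(self_confidence: float, user_confidence: float) -> str:
    """Toy version of the 2-D map: confidence in the machine's own
    performance vs. confidence in the user, each in 0..1.
    Thresholds are hypothetical."""
    if self_confidence >= 0.7 and user_confidence >= 0.7:
        return "confidence"
    if self_confidence < user_confidence:
        return "embarrassment"  # machine doubts its own performance
    return "annoyance"          # machine doubts the user
```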


Bruce Balentine
Posted 20th May 2011 at 11:07 AM

Comments

Whilst I agree that 'confidence' is a worthy internal (mental) state for a machine to express externally, I'm afraid I don't agree with the logic of your opening paragraph. First, it's very important to distinguish 'emotions' from 'feelings': according to the psychologists, the former is the consequence (output) of an appraisal process, whereas the latter is the conscious awareness of an emotion. The jury is out as to whether a machine can ever be conscious (and thus exhibit 1st-person experiences), but for sure it needs to conduct appraisals in order to go about its tasks.

If a machine makes a comparison between an internally specified 'intention' and the perceived state of the internal/external world in order to select an action that brings the two together, then the difference between the two is the driving force behind behaviour, and we can call that 'emotion' (in this case, 'valence' or 'positive/negative'). If the machine invests significant effort in bringing the two together, then it would exhibit high 'arousal/motivation'; if it invests low effort, then it would exhibit low 'arousal/motivation'. It is well established that valence and arousal are the two prime dimensions of emotion, and that many subtle emotions correspond to particular regions of the V/A space.

Now, just because an organism (or machine) has these internal affective states doesn't necessarily mean that they are manifest externally. Obviously, for a living organism, some affective states have physiological consequences, so they may become observable by a 3rd party. Other organisms may choose whether to allow such internal states to become observable. In my view, much of this applies to machines.
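The appraisal process described in this comment can be sketched in a few lines. Representing the intended and perceived states as single scalars is a strong simplifying assumption, as are the exact formulas; the sketch only captures the direction of the argument (small intention-perception gap, positive valence; invested effort, arousal):

```python
def appraise(intention: float, perceived: float, effort: float):
    """Toy appraisal: valence reflects how closely the perceived state
    matches the intended state; arousal tracks the effort invested in
    closing the gap. Scalar states and formulas are assumptions."""
    gap = abs(perceived - intention)
    valence = 1.0 - 2.0 * min(1.0, gap)   # match -> positive, mismatch -> negative
    arousal = max(0.0, min(1.0, effort))  # invested effort -> motivation
    return valence, arousal
```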

Roger Moore
Commented 20th September 2011 at 6:05 PM


Thanks, Roger, a very good comment, and it reminds me of the importance of definitions in these discussions. You are correct that emotions and feelings are very different phenomena, and a good working definition for this group is essential. I won't contribute here in this comment, but I will make a stab at some possibilities in the glossary section of this web site. Certainly all of the words that you mention are at the top of the list:

* emotion
* feeling
* affect
* valence
* qualia
* consciousness
* self

They are all interrelated, and many psychologists and neurophysiologists have their own (sometimes conflicting) definitions. If we can all agree on working definitions here, then the discussion should become well-grounded.

Bruce Balentine
Commented 3rd October 2011 at 2:48 AM