« Social » robot GRETA. © Catherine Pelachaud, CNRS/ISIR, Sorbonne University.

COMETS publishes an Opinion, “The phenomenon of attachment to ‘social robots’. A call for vigilance among the scientific research community” (n°2024-46), approved on July 1st, 2024.

Working group chaired by Christine Noiville
Rapporteur: Catherine Pelachaud
COMETS members: Patrice Debré, Christine Noiville, Catherine Pelachaud
Guest members: Raja Chatila, Emeritus Professor of Artificial Intelligence, Robotics and Ethics at Sorbonne University, Jean-Gabriel Ganascia, Emeritus Professor of Computer Science and Artificial Intelligence at Sorbonne University.

Read the interview with Christine Noiville

Download the Opinion

SUMMARY – COMETS wished to address the tools known as ‘social’ robots, which it felt needed greater consideration by the public research community. The members decided to focus on the cognitive and psychological impacts of their increasing use in everyday life.

Chatbots, conversational agents and other ‘pet’ robots programmed using artificial intelligence (AI) techniques are embedded in a whole range of connected objects such as computers, telephones, watches and cars, and have thus become an integral part of our everyday environment. However, a growing number of them are designed to be emotionally interactive in order to act as companions, confidants, friends, health or well-being coaches, or even—in the case of deadbots—to simulate a deceased loved one. They often use attributes specific to humans (language, appearance, attitude), are capable of interacting with users in the same way as humans (through voice, intonation, gestures, facial expressions), and—using audio sensors or cameras—claim to detect a user’s emotions (“Are you sad?”, “You look worried!”) and express sentiments by crying or laughing with users or congratulating them, for example. Users then tend to attribute human qualities to the machine, considering it intelligent, conscious, benevolent and empathetic. They may also imagine that they can interact emotionally with it, developing the illusion that a close, trusting relationship is being established between them and the machine, leading to a bonding phenomenon.

While COMETS is aware of the benefits that may stem from such attachment, it is concerned about the individual and collective impacts that may occur, particularly in terms of emotional dependence, addiction, control, manipulation, reduced interaction with other people or even de-socialisation.

It endorses the recommendations that have already been put forward in various contexts (CNPEN, CERNA, legal and ethical literature, etc.), aimed at the manufacturers and engineers who design social robots on the one hand, and at public authorities on the other. In particular, these devices need to be developed in a carefully thought-out, responsible way right from the design stage: among other things, to avoid the manipulation of users, to inform anyone communicating with a robot that they are in fact conversing with a machine, to rule out the technical possibility of malicious manipulation or threats by the robot, and to prevent the exploitation of emotions to the detriment of people’s integrity and autonomy.

COMETS furthermore considers it necessary to specifically call for vigilance on the part of researchers, learned societies and public research institutions, for two reasons.

Firstly, whether at the CNRS, INRIA, CEA or various universities, a number of studies in computer science, robotics, behavioural sciences and language processing are helping to reinforce the phenomenon of user attachment to social robots, without giving sufficient thought to the intended purposes and effects. While it is laudable to seek to improve human-agent interaction (HAI) for greater user ‘engagement’, we need to look more closely at the drawbacks associated with the anthropomorphising of robots (imitating human appearance and behaviour) and the associated emotional and psychological impacts.

Secondly, public research has a key role to play as a watchdog in monitoring and measuring the long-term consequences of the use of social robots. Now that these are being used on a large scale, we need to gauge their impact on the cognition, psyche and behaviour of users, as well as on users’ relationships with others and the world. We also need to build the knowledge foundation needed to respond to the challenges of using these tools and ensure that they are used both responsibly and freely.

COMETS therefore recommends that the public research community (particularly researchers in computer science and robotics, learned societies and research bodies):

  1. develop training in ethical issues (in scientific and technical courses and for the research staff concerned), become more familiar with the international literature on these issues and debate them collectively;
  2. examine the aims of the research, applications and design choices in this field, as well as the advantages and disadvantages of giving robots a humanoid form or behaviour, or the ability to detect and simulate emotions;
  3. conduct large-scale, long-term scientific studies in realistic situations and contexts on: (a) the relationship that users form with ‘their’ social robots and the implications for cognition, psyche, attachment, autonomy of action and decision-making; (b) the effects of the widespread use of social robots on relationships between humans;
  4. strengthen interdisciplinary and independent research to this end, combining work in computer science, robotics, behavioural sciences, language processing, etc. with research in psychology, neuroscience, linguistics, sociology, law, ethics, philosophy and anthropology;
  5. as part of an observatory, collect large-scale, long-term data on the use of social robots, how users appropriate them, and their impact on users’ emotional states and decisions; the aim is to provide input for scientific research and, in the longer term, to enlighten users and decision-makers on the conditions for the free and responsible development and use of these tools.