What’s the difference between humans and robots?

By Miraikan – The National Museum of Emerging Science and Innovation

As androids become ever more similar to humans in appearance and ability, they reveal more and more about what it really means to be human. Androids are now beginning to occupy roles in society similar to our own, and their rise increasingly calls for a rethink of what makes us human...

Kodomoroid (2014) by Advanced Telecommunications Research Institute International, Hiroshi Ishiguro Laboratories and DENTSU INC. Miraikan – The National Museum of Emerging Science and Innovation

Just like us

The word ‘android’, from Greek, means ‘human-like’. Robotics innovators are now trying to extend the role of robots into domains that were previously unreachable by conventional robots, such as news reading and nursing.

Kodomoroid reading the news (2014) by Advanced Telecommunications Research Institute International, Hiroshi Ishiguro Laboratories and DENTSU INC. Miraikan – The National Museum of Emerging Science and Innovation

Kodomoroid®, an android that resembles a child, can read the news in multiple languages.

A man who enjoys conversation through Telenoid (2010) by Advanced Telecommunications Research Institute International and Osaka University Original Source: Telenoid Healthcare Company

Telenoid® is a robot being introduced in the field of nursing. Its recognisable facial and bodily features have been minimised so that people can imagine someone familiar to them, which brings comfort. Elderly people, for example, can picture a grandchild’s features and mannerisms.

Hiroshi Ishiguro, an android researcher Miraikan – The National Museum of Emerging Science and Innovation

"The biggest difference between a robot and a computer is its presence. "Robots and androids are easy for persons to anthropomorphize, and it can therefore make people feel that there are intelligence, emotions and consciousness there. "I think that androids may become a bigger part of our lives as the technology that enhances these characteristics develops." - Ishiguro Hiroshi, "Mirror reflecting someone's heart"

Hiroshi Ishiguro and his android "Geminoid HI-1" (2006) by Advanced Telecommunications Research Institute International, Hiroshi Ishiguro Laboratories Miraikan – The National Museum of Emerging Science and Innovation

What is human?

What’s more, as we further develop android technology - through the observation and modelling of humans - we gain a deeper understanding of human beings: their traits, emotions and eccentricities.

Otonaroid (2014) by Advanced Telecommunications Research Institute International, Hiroshi Ishiguro Laboratories and DENTSU INC Miraikan – The National Museum of Emerging Science and Innovation

Otonaroid®, an android that closely resembles an adult woman, has been developed not only to look like a human but even to mimic human expressions and gestures through subtle changes and movements. Even when sitting still, Otonaroid® makes tiny eye and shoulder movements.

Otonaroid talking with humans (2014) by Advanced Telecommunications Research Institute International, Hiroshi Ishiguro Laboratories and DENTSU INC. Miraikan – The National Museum of Emerging Science and Innovation

You can also have a conversation with Otonaroid® via intercom from a nearby booth.

Operation booth of Otonaroid (2014) by Advanced Telecommunications Research Institute International, Hiroshi Ishiguro Laboratories and DENTSU INC. Miraikan – The National Museum of Emerging Science and Innovation

You can watch the feed from the camera embedded in Otonaroid®’s eyes and listen to her voice. Synchronizing your neck movements with hers through the headset makes you feel as though you have embodied her. This experience reveals that the human body and senses are not as tightly connected as we might think.

Alter (2016) by "Alter" Production Team(Hiroshi Ishiguro, Takashi Ikegami, Kohei Ogawa, Itsuki Doi, Hiroki Kojima, Atsushi Masumori) Miraikan – The National Museum of Emerging Science and Innovation

An android called Alter® was created in pursuit not just of a life-like appearance, but of emulating realistic human movement. What is it that makes us feel alive?

Alter showing life-like movements (2016) by "Alter" Production Team (Hiroshi Ishiguro, Takashi Ikegami, Kohei Ogawa, Itsuki Doi, Hiroki Kojima, Atsushi Masumori) Miraikan – The National Museum of Emerging Science and Innovation

Complex movements are produced by specialized programs, such as a central pattern generator and a neural network, which mimic human neural circuits.

A human and an android looking at each other (2016) by "Alter" Production Team (Hiroshi Ishiguro, Takashi Ikegami, Kohei Ogawa, Itsuki Doi, Hiroki Kojima, Atsushi Masumori) Miraikan – The National Museum of Emerging Science and Innovation

The future for humans and androids

The boundary between androids and humans is growing ever narrower. How will our relationship with robots develop as androids become ever more sophisticated and humans begin to regard them differently?

Miraikan - The National Museum of Emerging Science and Innovation



Humans vs. robots: The battle reaches a ‘turning point’

Warehouse robots at companies like Amazon and FedEx are finally able to pick and sort things with humanlike finesse.


Warehouse robots are finally reaching their holy grail moment: picking and sorting objects with the dexterity of human hands.

Amazon has robotic arms that can pick and sort cumbersome items like headphones or plushy toys before they’ve been boxed. FedEx has piloted a similar system, which it uses in some warehouses to sort mail of various sizes.

And other companies are making progress, too.

For decades, training a robot to be more humanlike has stumped engineers, who couldn’t replicate the ability to grip and move items. But now gains in artificial intelligence technology, cameras and engineering are bearing fruit, allowing robots to see objects of varying shapes and sizes and adjust their grasp accordingly.

The technology, computer scientists say, is finally getting reliable enough that companies find it feasible to deploy.

“This moment is a turning point,” said Kris Hauser, a robotics expert and computer science professor at the University of Illinois at Urbana-Champaign. “They’re competent enough at this point.”

But there’s also contentious debate. Critics worry robots will take people’s jobs, though boosters say they’ll just create different ones. Others note that more robots could mean higher rates of worker injury, or tougher surveillance of human workers to ensure they’re hitting targets.

Beth Gutelius, an economic development professor at the University of Illinois at Chicago, said the way companies unleash these robots without much testing or regard to worker safety is concerning.

“Shouldn’t we all want these things to work better for more people?” she said.

Amazon founder Jeff Bezos owns The Washington Post.


Robots have been on the scene for years, but it’s been a slog for scientists to get them to replicate tasks as well as humans — particularly when it comes to hands. Amazon has Kiva robots, which look like Roombas and move packages on the factory floor, but still need humans to pack and sort them.

Elon Musk has notoriously said he would automate Tesla’s manufacturing, but humans are still needed to do work on the assembly line at the company’s Fremont, Calif., factory. He also recently unveiled Tesla’s prototype humanoid robot Optimus, which is aiming to reshape physical work.

Google recently unveiled robots that are fueled by artificial intelligence to help humans with everyday tasks. Some robots are even learning how to cook fries.

Despite the advances, the hardest challenge for researchers has been teaching robots to adjust their grips to different sizes and shapes, said Ken Goldberg, an industrial engineering professor at the University of California at Berkeley.

But in the past decade, things have started to change, he said. 3D camera technology, spurred by Microsoft’s Kinect motion sensing cameras, has become better at spotting images. Deep learning, a field of artificial intelligence that uses algorithms loosely modeled on the brain, allows computers to analyze more images. Researchers started better understanding the physics of grasping things, and incorporating that into robotic suction cups and pickers.

The result: modern-day robotic machines that often look like long arms. Their vision is powered by software that uses machine-learning algorithms to analyze what objects look like and instruct the robot on how to grip them. The suction cups or claws adjust pressure and control with the finesse humans take for granted.

Amazon in particular has been chasing the technology, the industry experts said. For one of the world’s largest retailers, plagued with high rates of turnover and promising to deliver packages quickly, it made strong financial sense to automate warehouse processes as much as possible.

In 2012, the company acquired mobile robotics company Kiva for $775 million in cash. In 2014, it announced a “picking challenge,” inviting scientists to create robots that could pick up assorted items, from Sharpies to Oreo cookie packages, off a mobile shelf.

Last month, Amazon unveiled its picking-and-sorting robot called Sparrow, a long robotic arm that can grab items before they are packed in boxes. It’s being researched and developed in Massachusetts and in operation at an Amazon facility in Dallas, officials said. It can sort roughly 65 percent of products in its inventory, according to company officials, but nationwide expansion plans aren’t set yet.

The robot fits into a broader automation strategy, according to Amazon. If mastered, Sparrow could pick products up after they’ve been offloaded from trucks and before they’re wrapped and put onto mobile shelving. Once boxed, Amazon’s robotic system, called Robin, could sort them to their destination. Cardinal, another robotic machine, could put them into a waiting cart before they are loaded onto a truck.

Amazon has consistently said more machines will allow people to find better jobs. Robots are “taking on some of the highly repetitive tasks within our operations, freeing up our employees to work on other tasks that are more engaging,” said Xavier Van Chau, a spokesman for the company.

In March, mailing giant Pitney Bowes inked a $23 million deal with Ambi Robotics to use the company’s picking-and-sorting robots to help sort packages of various shapes, sizes and packaging materials. In August, FedEx agreed to purchase $200 million in warehouse robotics from Berkshire Grey to do similar tasks. A few months before that, FedEx launched an AI-fueled mail sorting robot in China.

Although the bulk of the technology started to appear a few years ago, it has taken time to reduce these systems’ error rates to less than 1 percent, said Hauser, a threshold that is crucial for company bottom lines.

“Each mistake is costly,” he added. “But now, [robots] are at a point where we can actually show: ‘Hey, this is going to be as reliable as your conveyor belt.’”


Revenue generated by companies making picking-and-sorting robots is skyrocketing, said Ash Sharma, a robotics and warehouse industry expert at Interact Analysis, a market research firm.

The research firm estimates that companies making these products will rake in $365 million this year and more than $640 million next year, up from roughly $200 million last year and $50 million in 2020, data forecasts show.

A big factor is the labor shortage, he said.

Gutelius, of the University of Illinois at Chicago, said that although the technology proves interesting, it comes with risks. With more robots on warehouse floors, workers alongside them will have to work at a quicker pace, risking more injuries.

The Washington Post has reported that Amazon warehouses can be more dangerous than those of rivals. Experts say that adding robots to the process can increase injuries.

Van Chau said machines doing repetitive tasks will help workers. “We can take some of that strain away from employees,” he said.


But Gutelius says companies’ claims that these robots will help workers need to be scrutinized, because companies tend to implement such solutions too quickly.

“It’s sort of classic ‘move fast and break things,’” she said. “And in this case, I think ‘breaking things,’ it ends up being people.”


Human- or object-like? Cognitive anthropomorphism of humanoid robots

Alessandra Sacino

1 Department of Educational Science, University of Genova, Genova, Italy

Francesca Cocchella

2 Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy

Giulia De Vita

Fabrizio Bracco, Francesco Rea

3 Robotics Brain and Cognitive Sciences Unit, Istituto Italiano di Tecnologia, Genova, Italy

Alessandra Sciutti

Luca Andrighetto

Associated data

The data underlying the results presented in the experiments are available on OSF at https://osf.io/fyp4x/ .

Abstract

Across three experiments (N = 302), we explored whether people cognitively elaborate humanoid robots as human- or object-like. In doing so, we relied on the inversion paradigm, an experimental procedure extensively used in cognitive research to investigate the elaboration of social (vs. non-social) stimuli. Overall, mixed-model analyses revealed that full bodies of humanoid robots were subjected to the inversion effect (body-inversion effect) and thus followed a configural processing similar to that activated for human beings. This pattern of findings emerged regardless of the similarity of the considered humanoid robots to human beings; that is, it occurred for bodies of humanoid robots with medium (Experiment 1) and with high and low (Experiment 2) levels of human likeness. Instead, Experiment 3 revealed that only faces of humanoid robots with high (vs. low) levels of human likeness were subjected to the inversion effect and thus cognitively anthropomorphized. Theoretical and practical implications of these findings for robotic and psychological research are discussed.

Introduction

Robots are becoming more and more common in everyday life and accomplishing an ever-increasing variety of human roles. Further, their market is expected to expand soon, with more than 65 million robots sold a year by the end of 2025 [ 1 ]. As their importance for human life grows, the interest of robotics and psychology scholars in fully understanding how people perceive them constantly increases. Addressing this issue is indeed highly relevant, as one of the primary tasks of this technology is establishing meaningful relations with human beings.

The overall goal of the present research was to expand the knowledge about the human perception of robots. In doing so, we adopted an experimental psychological perspective on robotics (see [ 2 ]) and sought to uncover the cognitive roots underlying the anthropomorphism of these nonhuman agents.

Anthropomorphizing robots

Research on Human-Robot Interaction (HRI) has provided convergent evidence that the appearance of robots, together with their behaviors [ 3 , 4 ], deeply shapes people’s perceptions and expectations. Based on the design of robots, people form impressions of them and infer particular qualities, such as likeability [ 5 , 6 ], intelligence [ 7 ] or trustworthiness [ 5 – 9 ]. Although this design can assume different forms (e.g., machine- or animal-like), the humanoid shape is commonly considered the most effective means of overcoming the psychological barriers in HRI [ 10 ]. Accordingly, humanoids are the robots most used within the social environment and, thus, the focus of the present research.

Similar to other nonhuman agents, the human likeness of robots is a key situational variable triggering people’s tendency to anthropomorphize them [ 11 ]. That is, the perceived similarity of a humanoid robot to human beings increases the accessibility of homocentric knowledge that is then projected onto the robot. Thus, robots resembling humans are more likely to be attributed distinctive human characteristics, such as the ability to think, being sociable [ 12 ], or feeling conscious emotions [ 13 ]. Further, such anthropomorphic inferences increase people’s sense of familiarity with this nonhuman target and their sense of control over it, with subsequent benefits for the interaction [ 14 ]. A great deal of research has corroborated this latter assumption, for instance by revealing that people tend to trust (e.g., [ 15 ]; see also [ 16 ]) or empathize [ 17 ] more with anthropomorphized robots, as well as expect that they can behave morally [ 18 ]. At the same time, the relationship between the perceived human likeness of robots and their acceptance in the social environment appears to be quite complex and not linear. Drawing from the Uncanny Valley hypothesis ([ 19 ], for a critical review see e.g., [ 20 ]), some researchers [ 21 ] have demonstrated, for example, that excessively high levels of anthropomorphic appearance in humanoid robots trigger a sense of threat, as such robots are seen as undermining the uniqueness of human identity. In the same vein, robots perceived as too similar to humans are perceived as less trustworthy and empathic [ 9 ]. A humanoid appearance also implies the expectation that the robot should move and behave following human-like motion regularities. When this implicit belief is not fulfilled (e.g., by a humanoid robot moving with non-human-like kinematics), basic prosocial mechanisms such as automatic synchronization or motor resonance are hindered, reducing the possibility of establishing a smooth interaction [ 22 ]. Likewise, perceiving this technology as too human-like heightens people’s illusory expectations about the functions it can actually fulfill, and a violation of such expectations lowers the quality of HRI [ 23 ].

Despite the still-debated effects of the human likeness of robots, anthropomorphism remains the most influential psychological process regulating the approach to, and subsequent interaction of humans with, this technology. Thus, a systematic comprehension of the nature of this phenomenon is essential to better identify its antecedents and consequences for HRI, be they positive or negative. So far, this process has mostly been conceived as a higher-order psychological process, consisting of inductive reasoning through which people attribute traits or qualities of human beings to this nonhuman agent. That is, most research in this field has investigated this process in terms of “content”, by assessing the extent to which respondents are inclined to attribute uniquely human attributes (e.g., rationality or the capacity to feel human emotions) to this technology.

Unlike these previous studies, the main purpose of this research is to examine this process through a “process-focused lens” [ 24 ], that is, to investigate whether it could also occur at a more basic level of cognitive processing. More specifically, we were interested in understanding whether people cognitively process humanoid robots as human- or object-like and whether the level of human likeness of these robots may affect such cognitive processing. Beyond contributing to the theoretical knowledge of this process, comprehending the cognitive roots of anthropomorphic perceptions could have important practical implications. How people cognitively perceive other agents (whether human or not) deeply shapes their first impressions—often at an unaware level—and also affects the course of HRI [ 25 ], above and beyond higher-order cognitive processes.

To achieve this aim, we integrated the existing research on the anthropomorphism of robots with cognitive paradigms commonly employed to study how people elaborate social (vs. non-social) stimuli.

Configural processing of social stimuli and the inversion paradigm

Over the last decades, cognitive psychology and neuroscience have intensively studied whether our brain processes social stimuli (e.g., a human face or body) and non-social stimuli (i.e., objects) similarly or differently. Accumulating evidence consistently reveals that people recognize social stimuli through configural processing, which requires considering both the constituent parts of the stimulus and the spatial relations among them. Such processing is activated both when people elaborate human bodies (see [ 26 ] for a review) and faces (see e.g., [ 27 ] for a review). Instead, people recognize objects (e.g., a house) through analytic processing, which relies only on the appraisal of specific parts (e.g., the door), without requiring information about the spatial relations among them. Although the nature of this dual process is largely debated (see e.g., the expertise hypothesis, [ 28 ]) and it is still not clear whether human faces and bodies are unconditionally processed in a configural way, there is general agreement that such social stimuli are commonly elaborated in this way, whereas objects are commonly processed analytically.

The main behavioral indicator of this distinction has been studied through the inversion paradigm, in which participants are presented with a series of trials, each first showing a picture of a social stimulus or an object, either upright or upside down. Afterward, participants are asked to recognize the picture they just saw within a pair that includes a distractor (its mirror image). The main assumption is that when people are presented with a stimulus upside down (vs. upright), their ability to process it by relying on the spatial relations of its constituent features should be impaired. Thus, this inversion should undermine the recognition of social stimuli, as they are processed in a configural way, whereas it should not affect (or affect less) the recognition of objects, as they are processed analytically. Several investigations, including ones employing EEG methods [ 29 ], have confirmed this premise, first considering human faces (face-inversion effect, [ 30 , 31 ]) and then bodies (body-inversion effect; [ 32 ]) as social stimuli. More recently, social psychology researchers have adapted the body-inversion paradigm to investigate the cognitive roots of sexual objectification, a specific form of dehumanization implying the perception (and treatment) of women as mere objects useful for satisfying men’s sexual desires [ 33 , 34 ]. In particular, Bernard and colleagues [ 35 ] demonstrated that the inversion effect (IE) does not emerge when people are exposed to images of sexualized female—but not male—bodies, which were recognized similarly whether presented upright or inverted. Hence, these social stimuli do not activate a configural processing and are cognitively objectified. This striking initial evidence was then debated and criticized by Schmidt and Kistemaker [ 36 ], who demonstrated that the body asymmetry of the (female) stimuli used by Bernard and colleagues [ 35 ] explained the observed pattern of findings (for a detailed discussion of this issue see [ 37 , 38 ]). However, subsequent studies (e.g., [ 39 ]) employing a different set of stimuli controlled for asymmetry confirmed the effect found by Bernard and colleagues [ 35 ], supporting the idea that the IE is a valid indicator for studying the cognitive objectification of sexualized women [ 40 ].

Drawing on these studies, in the present research we adapted inversion paradigms as basic tools to systematically investigate the process inverse to objectification: people’s perception of nonhuman agents (i.e., robots) as human ones. Interestingly, Zlotowski and Bartneck [ 41 ] found preliminary evidence for the investigated process. Although they did not systematically check for stimulus asymmetry, they showed that robot images, like human ones, were subjected to the IE and thus processed in a configural way. The main goal of the present research is to replicate and expand this initial evidence in different ways. First, we aimed to verify whether the IE would emerge for robot stimuli when controlling for the asymmetry of each employed stimulus. Second, we verified whether the human-like appearance of humanoid robots would modulate the hypothesized cognitive anthropomorphism, and in particular whether it would emerge for humanoid robots with high levels—but not with low levels—of human-like appearance. Third, we explored whether similar effects would emerge not only when considering the whole silhouettes of robots (body-IE), but also their faces (face-IE). In fact, we reasoned that an exhaustive comprehension of the cognitive anthropomorphism of humanoid robots should also encompass how human beings process their faces, besides their bodies. Faces are indeed the focal point of social cognition [ 42 ] and a prominent cue of humanity. Accordingly, recent research [ 43 ], for example, revealed that (human) faces follow a peculiar configural processing, which in turn activates human-related concepts.

Research overview

We designed three experiments to address the aims outlined above. In all the studies, we relied on inversion paradigms adapted from previous studies, in which participants were exposed to stimuli portraying human beings, humanoid robots or objects. Following the original protocols, in each trial the image was first presented in an upright or inverted position and then followed by two images: the original picture and its mirrored version (i.e., the distractor). Participants’ task was to recognize which of the two pictures was the initial one.

In Experiments 1 and 2, participants were shown entire bodies of human beings or humanoid robots, to investigate whether the body-IE would emerge for both human and robot stimuli. In Experiment 3, we explored the face-IE for the target stimuli by presenting participants with pictures portraying faces of humans or humanoid robots. Further, in Experiment 1 we kept the human likeness of the robots constant at medium levels. In Experiments 2 and 3, we instead manipulated it by selecting robots with high vs. low scores of overall (Experiment 2) or facial human likeness (Experiment 3; for more details about the selection of these stimuli see below). To increase the consistency of the investigated effects, across the studies we also varied the object-control stimuli, including human-like objects (i.e., mannequins; Experiment 1), buildings (Experiment 2) or general domestic tools (Experiment 3).

Finally, in all the studies we verified whether the cognitive anthropomorphism detected through the IE would be associated with higher-order anthropomorphism, that is, with respondents’ tendencies to attribute uniquely human qualities to robots.

Experimental material

The prototypes of robots were initially selected from the ABOT database ( http://abotdatabase.info/ ; [ 44 ]). This is a large pool of real-world humanoid robots that allows researchers to select them depending on their human-like appearance along distinct dimensions, each ranging from 0 to 100. In selecting our robot stimuli, we set the filters for the considered dimensions depending on our purposes and on the availability of humanoid robot prototypes within the given range. That is, in Experiment 1, we selected 20 prototypes of robots with a medium overall human likeness score (42–66). In Experiment 2, we filtered 10 robots with a low overall human likeness score (0–40) and 10 robots with a high overall human likeness score (60–100). In Experiment 3, we filtered 12 robots having a low overall human likeness score (0–45) and a low human-like face score (0–42), plus 12 robots having a high overall human likeness score (60–100) and a high human-like face score (60–100). Further, in Experiments 2 and 3 the body-manipulators filter was also used, selecting robots with body-manipulator scores above 50. This allowed us to exclude robots composed of a single body part (e.g., a cube with only one eye, or a single arm without a head or body) and, thus, to obtain a more homogeneous and comparable set of robots across the experiments and conditions.
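
For illustration, this score-based selection can be thought of as range filtering; the sketch below is a hypothetical Python example that assumes a CSV export of the ABOT scores with made-up column names ("overall", "face", "body_manipulators"). It is not part of the authors' procedure and the actual ABOT schema may differ.

```python
import pandas as pd

# Hypothetical export of the ABOT database; column names are assumptions.
abot = pd.read_csv("abot_scores.csv")
manip = abot["body_manipulators"] > 50  # filter used only in Experiments 2 and 3

# Experiment 1: 20 robots with medium overall human likeness (42-66).
exp1 = abot[abot["overall"].between(42, 66)].head(20)

# Experiment 2: 10 low (0-40) and 10 high (60-100) overall human likeness.
exp2_low = abot[abot["overall"].between(0, 40) & manip].head(10)
exp2_high = abot[abot["overall"].between(60, 100) & manip].head(10)

# Experiment 3: joint filters on the overall and face human-likeness scores.
exp3_low = abot[abot["overall"].between(0, 45) & abot["face"].between(0, 42) & manip].head(12)
exp3_high = abot[abot["overall"].between(60, 100) & abot["face"].between(60, 100) & manip].head(12)
```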

For all the experiments, images of the selected robots were then retrieved online and standardized as follows. Using the open-source software Krita, all the images were converted to grayscale and pasted onto a white background. In Experiments 1 and 2, images of full-body robots in a standing position with the head directed towards the camera were edited to depict them from head to knee and fitted into a 397×576 pixel frame. In Experiment 3, images of full-front faces of humanoid robots with a neutral expression were trimmed to remove external features, depicting them from the hairline to the neck, and then fitted into a 300×400 pixel frame. Examples of the standardized robot stimuli used in each experiment are displayed in Fig 1 .
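
A comparable standardization could also be scripted. The sketch below is a rough Pillow-based equivalent (the authors actually used Krita manually); the file names are illustrative and the centring and scaling choices are assumptions.

```python
from PIL import Image

def standardize(path, target=(397, 576)):
    """Convert to grayscale, scale down to fit the target frame, and paste
    onto a white background (an approximation of the standardization
    described in the text)."""
    img = Image.open(path).convert("L")          # grayscale
    img.thumbnail(target)                        # shrink, preserving aspect ratio
    canvas = Image.new("L", target, color=255)   # white background
    offset = ((target[0] - img.width) // 2, (target[1] - img.height) // 2)
    canvas.paste(img, offset)                    # centre the image on the canvas
    return canvas

# Body stimuli (Experiments 1-2) were fitted into 397x576 px frames;
# face stimuli (Experiment 3) into 300x400 px frames.
standardize("robot_01.png").save("robot_01_std.png")
standardize("robot_face_01.png", target=(300, 400)).save("robot_face_01_std.png")
```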

[Fig 1]

Concerning human stimuli (see Fig 2 for some examples), for Experiment 1 we selected 20 images from the work by Cogoni and colleagues ([ 39 ]; personalized condition), portraying the whole silhouettes of 20 individuals (10 men and 10 women) wearing casual clothes. To increase the generalizability of the hypothesized effects, in Experiment 2 we created ad hoc a set of human stimuli portraying the entire bodies of 10 individuals (5 men and 5 women), each in two different poses. Similarly, in Experiment 3 we used a set of human stimuli developed ad hoc, consisting of 12 pictures of full-front human faces (6 men and 6 women) with a neutral expression. Human stimuli were standardized through the same procedure used for the robot ones.

[Fig 2]

As the object-control condition (see Fig 3 ), in Experiment 1 we used 20 images of mannequins (10 male and 10 female), standardized in the same way as the robot and human images. In Experiment 2, we instead used 20 images of buildings as the object category, retrieved from the research by Cogoni and colleagues [ 39 ]. Finally, in Experiment 3, a new set of 12 object stimuli was created ad hoc, including a wide variety of domestic tools (e.g., a cup or a bottle).

[Fig 3]

Importantly, for the experiments testing the body-IE (Experiments 1 and 2), an asymmetry index was calculated for each robot, human and mannequin stimulus, following the procedure used in previous works [ 36 – 39 ]. For both experiments, data analyses revealed that the degree of asymmetry of the stimuli did not differ across the considered categories (see S1 File for more details about the procedure and data analyses).

Open science practices and statistical methods

The sample sizes for all the experiments were planned a priori following the recommendation by Brysbaert [ 45 ], who suggested that around 100 participants are required for adequate power in within-subjects designs with repeated-measures variables and interactions between them. For each experiment, we reported all the stimuli, variables, and manipulations. All data and materials are posted and publicly available on OSF at https://osf.io/fyp4x/ .

Main analyses were conducted using the GAMLj package [ 46 ] in Jamovi version 1.8.4 (The Jamovi Project [ 47 ]), using a generalized mixed model with a logit link function (logit mixed model; [ 48 ]). In all the experiments, we considered participants’ binary accuracy responses as the main outcome variable, coded as correct (1) or incorrect (0). Also, as in each experiment all the participants were presented with the same set of stimuli, our models included both a by-subject and a by-item random intercept to account for individual variability and the non-independence of observations. Stimulus orientation (upright = 1 vs. inverted = 2) and category (human vs. robot vs. control) were instead included as fixed effects. We reported significant odds ratios (OR) and the related 95% CIs when interpreting participants’ accuracy. As our logit mixed models predicted the odds of giving a correct response (accuracy = 1), a significant OR below 1 indicates that a change in the independent variable (e.g., presenting an image in the inverted rather than the upright orientation) reduces the odds of a correct response, while a significant OR greater than 1 indicates an increase in the odds of a correct response.
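
The analyses were run with GAMLj in Jamovi; a roughly comparable specification could be sketched in Python with statsmodels’ Bayesian binomial mixed GLM, as below. The data file and column names (accuracy, orientation, category, subject, item) are assumptions, and the variational-Bayes estimator differs from the estimation used in the paper, so this is an approximation rather than the authors’ analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical trial-level data: accuracy (1 = correct, 0 = incorrect),
# orientation (upright vs. inverted), category (human/robot/control),
# plus subject and item identifiers.
trials = pd.read_csv("trials.csv")

# Binary accuracy as outcome, orientation x category as fixed effects,
# by-subject and by-item random intercepts as variance components.
model = BinomialBayesMixedGLM.from_formula(
    "accuracy ~ C(orientation) * C(category)",
    vc_formulas={"subject": "0 + C(subject)", "item": "0 + C(item)"},
    data=trials,
)
result = model.fit_vb()          # variational Bayes fit
print(result.summary())

# Fixed effects are on the log-odds scale; exponentiating gives odds ratios
# (an OR below 1 means the inverted orientation lowers accuracy).
print(np.exp(result.fe_mean))
```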

Finally, in each experiment before running the main analyses, we performed an outlier analysis on the latency responses, based on the nature of our studies and the statistical mixed-model approach adopted [ 49 , 50 ]. That is, we did not consider participants’ responses on trials with latencies deviating more than ± 3 SD from the mean or with latencies below 50 ms (for a similar procedure, see [ 32 ]).
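
As a minimal sketch, the latency-based trial exclusion could look like the pandas snippet below. The column name (rt) and latencies in seconds are assumptions, and a grand-mean cutoff is assumed because the text does not state whether the ±3 SD criterion was applied overall or per participant.

```python
import pandas as pd

def remove_latency_outliers(trials: pd.DataFrame, rt_col: str = "rt") -> pd.DataFrame:
    """Drop trials whose latency deviates more than 3 SD from the mean
    or falls below 50 ms (0.050 s)."""
    mean, sd = trials[rt_col].mean(), trials[rt_col].std()
    keep = trials[rt_col].between(mean - 3 * sd, mean + 3 * sd) & (trials[rt_col] >= 0.050)
    return trials[keep]

trials = pd.read_csv("trials.csv")   # hypothetical trial-level data
trials = remove_latency_outliers(trials)
```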

Experiment 1

The first experiment was mainly designed to obtain preliminary evidence about the cognitive anthropomorphism of humanoid robots, relying on the body-IE. That is, we verified whether images portraying full bodies of humanoid robots with a medium overall human likeness score would be cognitively elaborated similarly to those of human beings and, thus, better recognized when presented upright than inverted.

Procedures performed in all the experiments were approved by the Departmental Ethics Committee (CER-DISFOR) and were in accordance with the APA ethical guidelines, the 1964 Helsinki Declaration and its later amendments. Written informed consent was obtained before participants started the experiments, and they were fully debriefed after each experimental session.

Participants and experimental design

Ninety-nine undergraduates at a north-western Italian university (39 male; M age = 22.2; SD = 2.26) were recruited on a voluntary basis by research assistants via e-mail or private message on social networks. A snowball sampling strategy was used, with the initial participants recruited through the experimenters’ friendship networks. A 2 (stimulus orientation: upright vs. inverted) × 3 (stimulus category: humans vs. robots vs. mannequins) within-subject design was employed.

Participants came into the laboratory individually for a study “investigating the social perception towards human and nonhuman stimuli”. The recognition task was administered using PsychoPy v3.03. Each participant was presented with 60 experimental stimuli (20 for each category), presented in a randomized order. Half of them were presented in an upright orientation and the other half rotated 180° on the x-axis (inverted condition). Following previous inversion-effect protocols, each trial began with the original image presented for 250 ms at the center of the screen, in an upright or inverted orientation depending on the experimental condition. Following a transient blank screen (1000 ms), participants were presented with two images, to the right and left of the center of the monitor. One image was the original; the other was its mirrored version. Participants’ task was to detect which of the two images was the same as the original, by pressing the “A” key on the keyboard if the target image appeared on the left or the “L” key if it appeared on the right. Once participants had provided their response, the next trial followed (see Fig 4 for a trial example). Before the experimental trials, participants were familiarized with the task through 9 practice trials.
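
To make the trial structure concrete, here is a minimal PsychoPy sketch of a single recognition trial, not the authors’ script: the timing follows the description above (250 ms stimulus, 1000 ms blank), while the window settings, image offsets, and the use of a vertical flip for the inverted condition are assumptions.

```python
from random import choice
from psychopy import core, event, visual

win = visual.Window(fullscr=True, color="white", units="pix")

def run_trial(image_path, inverted):
    """One trial: original image (250 ms), blank (1000 ms), then a
    two-alternative choice between the original and its mirror image."""
    visual.ImageStim(win, image=image_path, flipVert=inverted).draw()
    win.flip()
    core.wait(0.25)                      # 250 ms stimulus presentation

    win.flip()                           # blank screen
    core.wait(1.0)                       # 1000 ms

    # Original and mirrored distractor, left/right assignment randomized.
    original_side = choice(["left", "right"])
    x = 300 if original_side == "right" else -300
    visual.ImageStim(win, image=image_path, pos=(x, 0)).draw()
    visual.ImageStim(win, image=image_path, pos=(-x, 0), flipHoriz=True).draw()
    win.flip()

    clock = core.Clock()
    keys = event.waitKeys(keyList=["a", "l"], timeStamped=clock)
    key, rt = keys[0]
    correct = (key == "a") == (original_side == "left")   # A = left, L = right
    return int(correct), rt
```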

[Fig 4]

After the recognition task, the higher-order anthropomorphism of robots was detected by adapting the 7-item (α = .82; M = 1.55; SD = 0.57) self-report scale by Waytz and colleagues [ 51 ]. That is, participants were asked to rate the extent to which ( 1 = not at all ; 5 = very much ) they believed that the considered prototypes of robots were able to have a series of human mental abilities, such as “a mind of its own” or “consciousness”.

The outlier analysis on the latency responses identified 55 trials (out of a total of 5940) with latencies deviating more than ± 3 SD from the mean or below 50 ms, which were thus removed from the main analyses.

The logit mixed-model conducted on participants’ accuracy responses (1 = correct; 0 = incorrect) revealed a main effect of the stimulus orientation (1 = upright; 2 = inverted), χ 2 (1) = 74.72, p < .001, OR = 0.57, 95% CI [0.50, 0.65], suggesting that presenting the stimuli in an inverted orientation reduces the odds of giving a correct response. Put differently, overall, the stimuli were better recognized when presented upright (estimated accuracy, EA = .83 ± .03) than inverted (EA = .74 ± .03). Further, a simple slope analysis (see Fig 5 ) revealed that human stimuli were recognized better when presented in an upright (EA = .82 ± .04) than inverted orientation (EA = .73 ± .05), χ 2 (1) = 23.70, p < .001, OR = 0.58, 95% CI [0.47, 0.72]. Most interestingly, a similar pattern also emerged for robot images, that were better recognized when presented in an upright orientation (EA = .83 ± .04) than an inverted one (EA = .75 ± .05), χ 2 (1) = 18.30, p < .001, OR = 0.62, 95% CI [0.49, 0.77]. A similar pattern was also observed for the mannequins, with a better performance when stimuli were presented upright than inverted (EA for upright vs. inverted = .85 ± .03 vs. .74 ± .05), χ 2 (1) = 34.00, p < .001, OR = 0.51, 95% CI [0.41, 0.64]).

[Fig 5]

Experiment 1. Error bars represent standard errors of the mean values.

Instead, neither the main effect of stimulus category ( χ 2 (2) = 0.81, p = .666), nor the interaction Stimulus orientation × Stimulus category emerged as significant, χ 2 (2) = 1.43, p = .490.

Finally, we tested the relationship between the magnitude of the IE for robots and the composite score of the self-report scale assessing the respondents’ higher-order anthropomorphism. The IE index was obtained by subtracting for each respondent the accuracy mean of trials with robots in the inverted orientation from that of trials with robots in the upright orientation, so that the higher the value, the higher the magnitude of the IE. The correlational analyses revealed no significant link between the IE index and the respondents’ higher-order anthropomorphism, r = 0.04, p = 0.685.
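
Concretely, the IE index and its correlation with the self-report composite could be computed as in the sketch below; the layout of the trial-level data and of the per-subject anthropomorphism scores (file and column names) is assumed for illustration.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical inputs: trial-level recognition data and the per-subject
# composite of the 7-item anthropomorphism scale.
trials = pd.read_csv("trials.csv")
anthro = pd.read_csv("anthropomorphism.csv", index_col="subject")["score"]

robot = trials[trials["category"] == "robot"]
acc = robot.pivot_table(index="subject", columns="orientation", values="accuracy")

# IE index: mean accuracy for upright minus inverted robot trials;
# higher values indicate a stronger inversion effect.
ie_index = acc["upright"] - acc["inverted"]

r, p = pearsonr(ie_index, anthro.loc[ie_index.index])
print(f"IE x higher-order anthropomorphism: r = {r:.2f}, p = {p:.3f}")
```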

Findings from Experiment 1 provided initial evidence about the cognitive anthropomorphism of robots. By replicating the preliminary work by Zlotowski and Bartneck [ 41 ] with a more controlled set of stimuli, we found that body images of humanoid robots with medium levels of human-like appearance were better recognized when presented in an upright than in an inverted orientation. Thus, full-body images of robots activated a configural processing, similarly to social stimuli portraying human beings. However, as in previous work (see [ 39 ]), this body-IE also emerged for other objects with a human-body shape, i.e. mannequins. Thus, the question arises whether the human-like appearance of a given non-social stimulus triggers configural processing per se, or whether its activation depends on the specific non-social stimulus considered. To address this issue, in Experiment 2 we manipulated the level (high vs. low) of human-like appearance of full-body images of robots, to verify whether the IE would be moderated by their degree of human likeness. Further, in Experiment 2 we employed a different set of stimuli than mannequins as the object-control condition. In particular, we used a pre-tested set of images portraying buildings, as these are a kind of object extensively used in previous research exploring the IE for social vs. non-social stimuli.

Finally, unlike the previous study by Zlotowski and Bartneck [ 41 ], in Experiment 1 we did not find evidence of an association between the cognitive anthropomorphism of robots (i.e., the magnitude of the IE for robot stimuli) and participants’ higher-order anthropomorphism, which was detected in terms of attributions of uniquely human features. Thus, Experiment 2 was also designed to better investigate this relation.

Experiment 2

Ninety-four undergraduates at a north-western Italian university (40 male; M age = 21.8; SD = 2.82) were recruited through a similar recruitment procedure to Experiment 1. In this experiment, a 2 (stimulus orientation: upright vs. inverted) × 4 (stimulus category: humans vs. robots with high human likeness vs. robots with low human likeness vs. buildings) within-subject design was employed.

As the data collection for this and the subsequent experiment took place during the COVID-19 pandemic, the recognition task was administered online using Inquisit 6 Web software. However, to ensure adequate control over participants’ attention during the task, they were examined individually under the experimenter’s supervision. She introduced them to the task and remained connected until its conclusion. Participants were then fully debriefed.

Each participant was presented with 80 experimental stimuli (20 per category). Unlike Experiment 1, all the stimuli were presented in both the upright and the inverted orientation. This resulted in a total of 160 experimental trials per participant, preceded by 12 practice trials that helped participants familiarize themselves with the task. Due to the length of the task, the experiment was organized into four blocks, each containing 40 experimental trials and regarding a specific stimulus category. Stimuli were presented in a randomized order within each block, and the order of blocks was also randomized. Notably, before each block, participants were informed about the specific stimulus category that would be presented. This information was especially important for the humanoid robots with high levels of human likeness, which would otherwise be hard to distinguish from human stimuli. The trial structure was similar to Experiment 1, presenting the original image (250 ms) followed by a blank screen (1000 ms) and the discrimination task.

After that, respondents’ higher-order anthropomorphism of humanoid robots was detected by employing the same 7-item measure used in Experiment 1. In this experiment, participants were presented with this measure twice in a randomized order, once referring to the robots with high human likeness (α = .87; M = 1.59; SD = 0.69) and once referring to those with low human likeness (α = .82; M = 1.47; SD = 0.55). For each presentation of the scale, the target robots were shown at the top of the screen.

The analysis on the latency responses identified 133 outlier trials (out of a total of 15040), that were thus removed from the main analyses.

The logit mixed-model conducted on participants’ accuracy responses (1 = correct; 0 = incorrect) revealed a main effect of the stimulus orientation (1 = upright; 2 = inverted), χ 2 (1) = 84.18, p < .001, OR = 0.66, 95% CI [0.60, 0.72]: overall, the stimuli were better recognized when presented upright (EA = .87 ± .02) than inverted (EA = .82 ± .03). Conversely, the main effect of stimulus category was not significant, χ 2 (3) = 0.66, p = 0.883. Most importantly, the two-way Stimulus orientation × Stimulus category interaction emerged as significant, χ 2 (3) = 14.04, p = .003. The interpretation of this interaction through the simple slope analyses (see Fig 6 ) revealed that robots with high levels of human likeness were more accurately recognized when presented upright (EA = .89 ± .03) than inverted (EA = .81 ± .05), χ 2 (1) = 46.95, p < .001, OR = 0.53, 95% CI [0.44, 0.63]. Interestingly, a similar IE pattern also emerged for robots with low levels of human likeness (for upright orientation, EA = .88 ± .03; for inverted orientation, EA = .83 ± .05), χ 2 (1) = 20.43, p < .001, OR = 0.66, 95% CI [0.55, 0.79]. Consistent with Experiment 1, human stimuli were better identified when presented upright (EA = .87 ± .04) than inverted (EA = .81 ± .05), χ 2 (1) = 24.84, p < .001, OR = 0.64, 95% CI [0.53, 0.76]. Instead, confirming previous literature, this pattern did not emerge as significant for buildings ( χ 2 (1) = 3.49, p = .062), indicating that participants had a similar performance in recognizing building stimuli regardless of their upright (EA = .85 ± .04) or inverted (EA = .83 ± .04) orientation.

[Fig 6]

Experiment 2. Error bars represent standard errors of the mean values.

Then, we verified the possible relation between participants’ higher-order anthropomorphism of robots, assessed through the self-report scale, and their IE index, which was calculated as in the previous experiment. As the IE indexes for robots with high and low levels of human likeness did not differ ( t(93) = 1.55, p = .124, 95% CI [-0.006, 0.053]), we collapsed them into a single index, which was correlated with the composite scores of the self-report measures. In this case also, the magnitude of the IE detecting cognitive anthropomorphism did not correlate with the higher-order one, r = -0.18, p = .088.

The findings above replicated Experiment 1: once again, they revealed that the body-IE emerges for robots as it does for human beings. Expanding the previous results, the simple slope analyses also revealed that the body-IE was significant—and of similar magnitude—for bodies of robots with both high and low levels of human likeness. Instead, consistent with previous literature, this effect did not emerge for objects (buildings).

Taken together, these results suggest that when cognitively processing full bodies of robot stimuli, perceivers tend to adopt a configural processing that is commonly activated for social stimuli. This process seems to regulate the cognitive elaboration of humanoid robots regardless of their levels of human likeness, at least when considering their full bodies. Consistent with the previous experiment, Experiment 2 revealed that this cognitive form of anthropomorphism is unrelated to the higher-order one: the IE index for robots did not significantly correlate with the self-report measure assessing the participants’ tendencies to attribute human mental states to humanoid robots.

Experiment 3 was designed to expand these findings, mainly by verifying whether the IE also emerges when considering the faces (i.e., face-IE) of humanoid robots rather than their full bodies. As in Experiment 2, we explored whether this presumed effect would be moderated by the level (high vs. low) of human likeness of the robot faces or would, instead, emerge regardless of the degree of human likeness. Further, we compared the tested pattern of findings for robots with human facial stimuli and a series of object stimuli (i.e., domestic tools) created ad hoc. We opted to employ a different set of control stimuli to, on the one hand, increase the generalizability of our findings and, on the other, obtain object stimuli with a size and shape more comparable to the crucial robot and human face stimuli. Finally, we correlated the face-IE index of robots with a different scale of higher-order anthropomorphism than that used in the previous experiments.

Experiment 3

One hundred and nine undergraduates (52 male; M age = 22.1; SD = 2.92) were recruited with a similar procedure used in the previous experiments. A 2 (stimulus orientation: upright vs. inverted) × 4 (stimulus category: human faces vs. robot faces with high human likeness vs. robot faces with low human likeness vs. objects) within-subject design was employed.

Data collection was administered online using Inquisit 6 Web, following the same procedure employed in Experiment 2. Each participant was presented with 48 experimental stimuli (12 per category), presented in both the upright and the inverted orientation. This resulted in a total of 96 experimental trials per participant, preceded by 12 practice trials that helped participants familiarize themselves with the task. Similar to Experiment 2, experimental trials were organized into 4 blocks, each containing 24 trials regarding a specific stimulus category. Stimuli were presented in a randomized order within each block, the order of blocks was also randomized, and each block was followed by a pause. The trial structure was the same as in Experiments 1 and 2, with the original image (250 ms) presentation followed by a blank screen (1000 ms) and the discrimination task.

After the computer task, respondents’ higher-order anthropomorphism was measured. Unlike the previous experiments, we employed an adapted version of the 4-item scale by Waytz et al. [ 52 ], which detected the extent to which (0 = not at all; 10 = very much) participants perceived the considered robots as intelligent, able to feel what was happening around them, to anticipate what was about to happen, or to plan an action autonomously. In this experiment too, the self-report measure was presented twice, once referring to the robots with high human likeness (α = .85; M = 5.19; SD = 2.64) and once to those with low human likeness (α = .79; M = 3.94; SD = 2.39). The faces employed for these robots were displayed at the top of the screen.

The outlier analysis on the latency responses identified 56 outlier trials (out of a total of 10464), that were thus removed from the main analyses.

Consistent with previous experiments, the mixed-model revealed a main effect of the stimulus orientation, χ 2 (1) = 32.80, p < .001, OR = 0.74, 95% CI [0.66, 0.82]: overall, stimuli were better recognized when presented upright (EA = .85 ± .03) than inverted (EA = .81 ± .03). The main effect of stimulus category also emerged as significant ( χ 2 (3) = 29.40, p < .001), as well as the two-way Stimulus orientation × Stimulus category interaction, χ 2 (3) = 19.80, p < .001. Specifically, a simple slope analysis (see Fig 7 ) revealed that the IE emerged for human faces ( χ 2 (1) = 38.46, p < .001, OR = 0.53, 95% CI [0.43, 0.65], EA for upright vs. inverted = .86 ± .04 vs. .77 ± .05) and robot faces with high levels of human likeness ( χ 2 (1) = 15.65, p < .001, OR = 0.67, 95% CI [0.54, 0.81], EA for upright vs. inverted = .85 ± .04 vs. .79 ± .05). Instead, this pattern did not emerge as significant for robot faces with low levels of human likeness, χ 2 (1) = 0.68, p = .411 (EA for upright vs. inverted = .76 ± .06 vs. .75 ± .06). Similarly, participants had a similar performance in recognizing objects regardless of their orientation, χ 2 (1) = 0.70, p = .404 (EA for upright vs. inverted = .91 ± .03 vs. .90 ± .03).

[Fig 7]

Experiment 3. Error bars represent standard errors of the mean values.

Finally, we calculated the correlation between the higher-order anthropomorphism detected through the self-report measure and the face-IE index, which was calculated as in the previous experiments. As in this experiment the IE emerged for faces of humanoid robots with high, but not low, levels of human likeness, we computed separate correlations. In this case too, the relation between the IE index and participants’ higher-order tendencies to anthropomorphize robots was not significant, either when considering the facial stimuli of robots with high levels ( r = − 0.06, p = .552) or when considering those with low levels of human likeness ( r = .04, p = .639).

Findings from Experiment 3 revealed that the IE for robots also occurs when their faces, rather than their entire bodies, are considered as stimuli. Thus, the configural processing of this technology seems to hold for both the body- and the face-IE. However, the simple slope analyses conducted for this experiment revealed that the degree of human likeness of robots affects the face-IE: it emerged only for facial stimuli of humanoid robots with high levels of human likeness, but not for those with low levels. In line with previous experiments and literature, the IE also occurred for human facial stimuli but not for objects, even when employing a set of stimuli (i.e., domestic tools) different from that of Experiment 2. Finally, consistent with the previous experiments, the cognitive anthropomorphism of humanoid robots detected through the IE index did not correlate with the higher-order anthropomorphism assessed through the self-report measure.

General discussion

Overall, findings from our three experiments provided convergent evidence about the human tendency to cognitively anthropomorphize humanoid robots. Like stimuli portraying human beings, robots were consistently better recognized when presented in an upright than in an inverted orientation. Hence, they were subjected to the IE and processed in a configural way, like social stimuli. Instead, confirming previous literature (e.g., [ 53 ]), our results revealed that analytic processing was triggered when participants visually processed a wide range of objects that do not resemble human beings (i.e., buildings and domestic tools).

However, we found relevant differences between the full bodies of robots (body-IE, Experiments 1 and 2) and their faces (face-IE, Experiment 3). In fact, whereas the body-IE emerged for all levels of robot human likeness (medium, Experiment 1; low and high, Experiment 2), the face-IE emerged only for humanoid robots with high, but not low, levels of human likeness. We argue that this different pattern of results may depend on the perceptual cues elicited by humanoid robots’ full bodies or faces. More specifically, it is plausible that when the full bodies of robots are considered, only a few anthropomorphic visual cues are necessary to trigger configural processing, such as a single arm, a leg, or only the chest. This may explain why humanoid robots with low levels of human likeness are subjected to the IE. This assumption is also indirectly supported by Experiment 1, which included mannequins as the object-control category. In fact, coherent with previous work [ 39 ], our findings revealed that these human-body-like objects are subjected to the IE and thus trigger a humanized representation at a cognitive level. Therefore, we may speculate that when the full body is the crucial stimulus, a few visual features resembling human beings are sufficient to activate configural processing, presumably above and beyond the semantic category to which each stimulus is assigned (human being vs. object).

Conversely, when considering the faces of robots, the results of Experiment 3 suggest that a high level of human likeness is required to enact configural processing. We believe that this is a highly relevant finding that highlights the prominent role of the face in defining the perceived full humanity (or lack of humanity) of a given exemplar, also at a cognitive level. That is, it is possible that, unlike for the entire body, when focusing on the key component of the face, people need meaningful cues resembling human beings before activating a humanized representation of robots and the consequent configural processing. This argument is also in line with the work by DiSalvo and colleagues [ 54 ], which indicated that the faces of robots require the presence of specific and multiple features (e.g., nose, eyelids, and mouth) to be perceived as human-like. These features can be observed in humanoid robots with high levels of human likeness (e.g., the Erica and Sophia robots in our Experiment 3), whilst robots with low levels of human likeness often lack them. For example, most robots with low levels of human likeness included in the ABOT database and thus employed in Experiment 3, despite having a head, did not show specific human features, as their head was made of a combination of object-like components (e.g., a monitor or a camera combined with a set of microphones); only a few of these robots (e.g., the Poppy robot) had eyes and eyebrows.

Taken together, we believe our findings meaningfully extend research on the social perception of robots in several directions. First, we demonstrated that anthropomorphic perceptions of robots also have a cognitive basis, at least for humanoid robots. As mentioned when introducing our research, this overall finding matters because how people cognitively perceive robots deeply affects first impressions toward them and the possible course of HRI. That is, our results revealed that, at the cognitive level, humanoid robots can be processed not as mere objects but as social agents, and they presumably trigger anthropomorphic knowledge and expectations, even at an unaware level. This activation should primarily have positive outcomes for HRI: most scholars in the field agree that the higher the implicit or explicit anthropomorphic perceptions of robots, the more positive the feelings and attitudes that human beings display toward them. However, a possible side effect should be taken into account, especially in light of our results showing that these anthropomorphic perceptions may be rooted in first-order cognitive processes. That is, similar to other technologies [55], heightened expectations that humanoid robots can be like human beings may increase negative emotions and attitudes toward them when such expectations are not met.

Second, for the first time in the literature, our results indicate that the cognitive anthropomorphic perception of humanoid robots may differ depending on which component of the robot is considered: while the body of humanoid robots triggers a humanized representation regardless of their level of human likeness, their faces are cognitively perceived in anthropomorphic terms only when they highly resemble human beings. This latter finding could provide robotics engineers with relevant insights when designing the external features of robots. Further, our experiments integrate and extend the preliminary evidence by Zlotowski and Bartneck [41], who also reported an IE for robots, albeit with a broader spectrum of full-body (humanoid and non-humanoid) robots that were not systematically checked and balanced for asymmetry and human likeness. Unlike that single study, our experiments exclusively considered humanoid robots with different levels of human-like appearance (i.e., presence of body-manipulators) and thus may provide more specific indications about when (and whether) these robots are cognitively recognized as human- vs. object-like. Moreover, across our experiments we consistently found no linear relationship between the IE index for social robots and people's explicit tendency to anthropomorphize them. This contrasts with Zlotowski and Bartneck [41], who found a positive linear relationship between the magnitude of the IE and respondents' explicit tendency to attribute uniquely human traits and abilities to robots. These diverging results may be due to the different stimuli used: their set encompassed a wider range of robots, including non-humanoid ones, and such a wider spectrum may have triggered different explicit anthropomorphic tendencies than the humanoid robots we considered across our experiments. Alternatively, unlike this previous study, our evidence may robustly confirm the idea that, in social cognition, implicit and first-order processes are often qualitatively different from more conscious and elaborated ones (e.g., [56]) and may play a complementary or opposing role depending on the social or non-social target considered. Accordingly, implicit measures, such as the inversion effect paradigm employed in our research, commonly assess mental constructs (e.g., perceptions, attitudes) that are distinct from those detected through self-report measures. Put differently, implicit methods capture first-order cognitive processes that meaningfully contribute to explaining aspects of social cognition not accounted for by the corresponding explicit measures [57]. On this issue, we believe that our measure of cognitive anthropomorphism may capture one of the main psychological mechanisms underlying this phenomenon, i.e., people's accessibility to anthropocentric knowledge [11], more appropriately than an explicit, self-report measure.
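As a concrete illustration of the two measures discussed above, the snippet below sketches how a per-participant IE index might be computed and related to an explicit anthropomorphism score. It is only a sketch, not the analysis code used in our experiments: the column names, the values, and the scoring rule (upright minus inverted recognition accuracy) are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-participant summary data (illustrative values only).
df = pd.DataFrame({
    "participant":     [1, 2, 3, 4, 5],
    "acc_upright":     [0.92, 0.85, 0.78, 0.88, 0.81],   # recognition accuracy, upright robot stimuli
    "acc_inverted":    [0.80, 0.79, 0.75, 0.70, 0.77],   # recognition accuracy, inverted robot stimuli
    "explicit_anthro": [4.1, 2.8, 3.5, 3.0, 3.9],        # self-reported anthropomorphism (e.g., 1-7 scale)
})

# IE index: the upright-over-inverted recognition advantage.
df["ie_index"] = df["acc_upright"] - df["acc_inverted"]

# Relationship between the implicit (IE) and explicit measures of anthropomorphism.
r, p = pearsonr(df["ie_index"], df["explicit_anthro"])
print(f"r = {r:.2f}, p = {p:.3f}")
```

With real data, a weak or non-significant correlation here would mirror the dissociation between first-order and explicit anthropomorphism reported above.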

Despite the relevance of our findings, some limitations should be considered when interpreting them and when setting the direction of future research. First, our experiments investigated the cognitive anthropomorphism of humanoid robots by relying only on the inversion effect paradigm. Although this paradigm is the most extensively used for investigating the cognitive perception of social (vs. non-social) stimuli, we believe that future research should replicate our findings with further cognitive paradigms. For example, the whole vs. parts paradigm (see [58]) and the scrambled bodies and faces task (e.g., [53-59]) are two further tools that could strengthen the generalizability and robustness of our findings and help clarify the different cognitive elaboration of the bodies and faces of social robots. On this issue, it is also noteworthy that in our paradigm we explicitly differentiated the stimuli, both in the initial instructions and before each block. That is, the stimulus category (i.e., human vs. robot) that participants were about to see was made salient to them, and this salience could have somewhat affected their cognitive elaboration. Future research should therefore investigate whether our pattern of findings can be replicated when the stimulus category is not made salient, especially for robots with high levels of human likeness.
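As a concrete illustration of the non-salient procedure proposed above, the sketch below simply interleaves the two stimulus categories in a single randomized trial list, so that category membership is never announced or blocked. This is an assumption about how such a procedure could be built, not the procedure we used, and the stimulus file names are hypothetical placeholders.

```python
import random

# Hypothetical stimulus files (placeholders, not our actual materials).
human_stimuli = [f"human_{i:02d}.png" for i in range(1, 21)]
robot_stimuli = [f"robot_{i:02d}.png" for i in range(1, 21)]

# Build one trial per stimulus and orientation, with no category blocking.
trials = []
for stim in human_stimuli + robot_stimuli:
    for orientation in ("upright", "inverted"):
        trials.append({"stimulus": stim, "orientation": orientation})

random.shuffle(trials)  # one mixed, randomized list: no category blocks, no category announcement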

Second, our research considered only humanoid robots. We elected to focus on this specific type of robot for two main reasons. First, they are (and presumably will be) the most widespread prototypes of robots employed in social environments. Second, focusing only on this type allowed us to obtain a more homogeneous set of robots, which in turn made the comparison across levels of human likeness more reliable across experiments and conditions. However, future research should compare the cognitive anthropomorphic perception of humanoid robots with that of object-like robots (e.g., Roomba), to verify whether only the former are indeed cognitively elaborated as social agents.

Third, similar to previous research on configural (vs. analytic) processing, we only considered images as experimental stimuli. Future research should therefore verify the cognitive anthropomorphism of robots with more ecologically valid stimuli or situations, for instance videos portraying robots or brief real interactions between participants and robots.

Fourth, in our experiments we did not analyze whether people's familiarity with humanoid robots modulates their cognitive elaboration of these agents. More broadly, it would be interesting to verify possible cross-cultural differences in the cognitive anthropomorphism of robots, depending on people's habituation to living among humanoid robots. For instance, it is plausible that the cognitive anthropomorphism of robots would be especially high in contexts where these technologies are widely used across many domains of everyday life.
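One way future studies could probe such a moderation, sketched below under assumed variable and file names (not an analysis we conducted), is to regress the per-participant IE index on self-reported familiarity with robots while accounting for the sample's country:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant file with columns: ie_index, familiarity (self-reported), country.
df = pd.read_csv("ie_by_participant.csv")

# Does familiarity with robots predict the IE index, over and above country of residence?
# A reliable positive slope for familiarity would suggest that habituation to robots
# goes along with stronger cognitive anthropomorphism.
model = smf.ols("ie_index ~ familiarity + C(country)", data=df).fit()
print(model.summary())
```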

Conclusions

Robots are set to become an intrinsic part of our everyday life across a wide range of domains. A full understanding of how people perceive and behave toward them is therefore a primary task for psychology and engineering scholars. To achieve this, we believe it is essential to integrate knowledge about the more explicit and conscious processes shaping people's attitudes toward this technology with knowledge about the cognitive processes underlying their perception of it. Both kinds of processes play a pivotal and complementary role in understanding the factors that facilitate or inhibit the acceptance of robots in social environments. In this sense, we hope that our research provides useful insights for designing robots that are as effective as possible in socially interacting with human beings.

Supporting information

Funding statement.

This work was supported by a Curiosity Driven (2017) grant (D36C18001720005) to LA, funded by the University of Genova. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Data Availability


Decision Letter 0

18 Mar 2022

PONE-D-21-30385
Human- or object-like? Cognitive Anthropomorphism of Humanoid Robots
PLOS ONE

Dear Dr. Andrighetto,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please ensure that your decision is justified on PLOS ONE’s  publication criteria  and not, for example, on novelty or perceived impact.

For Lab, Study and Registered Report Protocols: These article types are not expected to include results but may include pilot data. 

Please submit your revised manuscript by May 02, 2022, 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Josh Bongard

Academic Editor

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf .

2. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

3. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I have reviewed the manuscript entitled “Human- or object-like? Cognitive Anthropomorphism of Humanoid Robots”. The manuscript includes three experimental studies that are well performed and reported in sufficient detail (e.g., sharing materials, explaining how sample sizes were computed). In general, I enjoyed reading the manuscript. The main results are not always the ones the authors expected, but they implement a rigorous procedure to test their hypotheses. Given the strength of the methodology and the procedures the authors implement, I recommend this manuscript for publication in the journal.

See some comments after reading the manuscript:

Introduction: The introduction is well written and covers the fundamental literature needed to understand the main goal of the manuscript. I would recommend that the authors cover in more detail the differences between processing faces (vs. bodies) based on the dehumanization literature, so that readers can understand the need for the third study while reading the introduction.

Method and results: The authors explain in detail the procedure used to select the stimuli for the experiments, report a priori sample size calculations, and provide the link (OSF) to the materials and manipulations. Study 1 aimed to establish the effect of the cognitive anthropomorphism of humanoid robots using the inversion paradigm. The findings of this study are not the expected ones. I would like to see an attempt at explaining why the authors did not find a link between the cognitive anthropomorphism of robots and participants' higher-order anthropomorphism when using the dimension of humanity measured in this study. In Studies 2 and 3, results are not always the expected ones. However, the authors test their hypotheses by performing well-designed, well-powered studies with a rigorous procedure.

Discussion: In the discussion section, the authors acknowledge the limitations of their findings and discuss them in relation to previous evidence, highlighting their understanding of the processes they are studying.

Reviewer #2: The current manuscript reports three studies that explored the human perception of robots. Overall, this is a very interesting topic, and I read this paper with enthusiasm. I believe that the present work could make a novel contribution to the literature.

There is much to like about the paper. It is well-written, and it reports a set of three coherent studies which conceptually replicate one another. I consider the general research goal a highly valuable one, which may help in understanding processes related to the way people anthropomorphize robots. The methodological approach to the phenomenon is also valuable. I thus think that the paper is of general interest to the readers of PLOS ONE. However, I think the authors can improve the paper on some issues and minor details:

- First, I would strongly recommend reviewing and reconsidering the conclusions on several points. I wonder how the authors consider the contribution of the current research question to the study of human perception of robots. What are the consequences or expectations of applying the same configural processing of social stimuli to robots? Is there any expected “side-effect”?

- Implications of, and alternative explanations for, the weak correlation between the implicit measure and the explicit measure of anthropomorphizing found in the studies could be expanded. I think you should include an extended comment on this point in the discussion section. Besides, you should include more theoretical explanation (and even consequences) about the differences between those measures and the cognitive processes behind them.

- What can we expect about human perception of robots in different cultures? Do you expect any effect depending on familiarization with robots in daily life (e.g., people living in Tokyo or elsewhere, or people working with robots)? Is it relevant to consider habituation to living among humanoid robots in their human perception? Please clarify and discuss to orient future research.

- Regarding the methodology, I wonder to what extent the authors consider that the fact of explicitly differentiating the stimuli (“before each block, participants were informed about the specific stimulus category”) may affect the task. Is there any other way to proceed? Is this way different from the procedure used in previous research? Could this way of organising the procedure affect the explicit measure of anthropomorphism?

- In my view, the paragraph referring to sexual objectification in the discussion section is not properly justified, nor connected with the previous conclusions. Please reformulate.

I hope all these suggestions and minor criticisms help to improve this outstanding line of research. Congratulations on this excellent work!

6. PLOS authors have the option to publish the peer review history of their article ( what does this mean? ). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy .

Reviewer #1: No

Reviewer #2:  Yes:  Naira Delgado

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Author response to Decision Letter 0

Dear Prof. Bongard,

On the 18th of March, we received your editorial decision regarding the manuscript (PONE-D-21-30385) entitled: “Human- or object-like? Cognitive Anthropomorphism of Humanoid Robots”.

We are very grateful to you and the Reviewers for the constructive comments to improve the paper and for allowing us to revise the manuscript and send a new version of it.

In revising the paper, we took into consideration all the issues raised by you and the Reviewer, and we addressed them as follows (note that, for your convenience, changes in the manuscript are highlighted in yellow):

Editor – Issue 1: Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Reply: Before submitting the original and revised version, we carefully checked PLOS ONE's style requirements, by also referring to the online template.

Editor – Issue 2: Please include your full ethics statement in the ‘Methods section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Reply: Consistent with your suggestion, in this revised version we moved the ethics statement to the “Method” section of Experiment 1 (see p. 12, lines 259-263). Also, we specified the full name of the ethics committee and clarified that we obtained informed written consent from each participant.

Editor – Issue 3: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reply: We carefully reviewed our reference list for both the original and revised versions of the manuscript. We also listed in this letter the citations that we added (see below the replies to the issues and the reference list at the end of this letter). We did not include any retracted articles in our manuscript.

Reviewer 1 – Issue 1: Introduction: The introduction is well written and it covers the fundamental literature to understand the main goal of the manuscript. I would recommend authors cover in a more detailed way the differences between processing faces (vs bodies) based on dehumanization literature so readers can understand the need to perform the third study while reading the introduction.

Reply: We would like to thank the Reviewer for this suggestion. Accordingly, in the introduction of this revised version (see p. 7, lines 154-158), we better highlighted the fact that, besides the bodies, exploring how people cognitively process and possibly anthropomorphize the faces of robots is crucial, as faces are a core element of social cognition [1] and their processing follows a distinctive path that activates human-related concepts [2].

Reviewer 1 – Issue 2: Method and results: Authors explain in a very detailed way the procedure to select the stimuli that they used in the experiments as well as report a priori sample size calculations and provide the link (OSF) for the materials and manipulations. Study 1 was aimed to establish the effect of the cognitive anthropomorphism of humanoid robots using the inversion paradigm. The findings of this study are not the expected ones. I would like to see an attempt at explaining why the authors did not find the link between cognitive anthropomorphism of robots and the participants’ higher-order anthropomorphism when using the dimension of humanity they are measuring in this study. In studies 2 and 3 results are not always the expected ones. However, the authors test their hypotheses by performing well-designed, well-powered studies with a rigorous procedure.

Reply: We would like to thank the Reviewer for these thoughts. Indeed, we did not formulate specific hypotheses about the occurrence (or absence) of a correlation between cognitive and higher-order anthropomorphism of robots. In fact, while a single piece of evidence [3] revealed a correlation between these two forms of anthropomorphism, a substantial body of social cognition literature has shown that implicit and first-order processes (e.g., the cognitive anthropomorphism of robots) are qualitatively different from more elaborated, higher-order processes (e.g., the anthropomorphism of robots detected through self-report measures). Thus, as mentioned when introducing our experiments, we merely explored the possible link between first- and higher-order anthropomorphism of robots, but did not necessarily expect a significant correlation. Hence, the obtained results showing no significant correlation were not “unexpected”.

That said, also following both Reviewers’ suggestions (see also Issue 2 below), in this revised version (see p. 25, lines 571-578) we further expanded the implications of, and explanations for, the absence of a correlation between the index of cognitive anthropomorphism obtained through the IE and the explicit anthropomorphism score.

Reviewer 2 – Issue 1: First, I would strongly recommend review and reconsider the conclusions in several points. I wonder how the authors consider the contribution of the current research question to the study of human perception of robots. What are the consequences or expectancies of applying the same configural processing of social stimuli to robots? Is there any expected “side-effect”?

Reply: We would like to thank the Reviewer for raising this issue. Considering also her issues below, the General Discussion of this revised version has been revised and further expanded. Regarding this specific issue, we better discussed the possible implications and consequences of applying the same configural processing used for humans to robots, also mentioning a possible side effect (see pp. 23-24, lines 533-545). That is, we reasoned that a specific rebound effect should be taken into account, especially in light of our results revealing that these anthropomorphic perceptions could be rooted in first-order cognitive processes: similar to other technologies (see e.g., [4]), heightened expectations that humanoid robots can be like human beings may increase negative emotions and attitudes toward them when such expectations are not met.

Reviewer 2 – Issue 2: Implications and alternative explanations for the weak correlation between the implicit measure and the explicit measure of anthropomorphizing found in the studies could be expanded. I think that you should include an extended comment in the discussion section about this point. Besides, you should include more theoretical explanations (and even consequences) about differences between those measures and cognitive processes behind them.

Reply: We would like to thank the Reviewer for this suggestion. In the original version, we included a specific paragraph in the General Discussion in which we attempted to explain why the two measures of anthropomorphism did not correlate. That is, we first noted that our experiments considered only humanoid robots, whereas the study by Zlotowski and Bartneck [3], which revealed a significant correlation between the IE index and the explicit measure of anthropomorphism, considered a wider spectrum of humanoid and non-humanoid robots that could have triggered different explicit anthropomorphic tendencies. Further, we stressed that the absence of a correlation in our experiments robustly confirms the idea that in social cognition implicit and first-order processes are qualitatively different from explicit ones, which are instead more conscious and elaborated. Following the Reviewer’s suggestion, in this revised version (see p. 25, lines 571-578) we expanded the theoretical explanation of the differences between first- and higher-order processes, together with their consequences. In doing so, we specifically referred to the theoretical works by Nosek and colleagues [5] and Epley and colleagues [6].

Reviewer 2 – Issue 3: What can we expect about human perception of robots in different cultures? Do you expect any effect depending on the familiarization with robots in daily life (e.g., people living in Tokyo or elsewhere, or people working with robots)? Is it relevant to consider the habituation to living among humanoid robots in their human perception? Please clarify and discuss to orient future research.

Reply: We would like to thank the Reviewer for this insight. We discussed this issue in the General Discussion section (pp. 26-27, lines 607-613). In particular, we argued that it would be interesting to verify possible cross-cultural differences in the cognitive anthropomorphism of robots, assuming that such anthropomorphism would be especially high in contexts where these technologies are widely used across many domains of everyday life.

Reviewer 2 – Issue 4: Regarding the methodology, I wonder to what extent the authors consider that the fact of explicitly differentiating the stimuli (“before each block, participants were informed about the specific stimulus category”) may affect the task. Is there any other way to proceed? Is this way different from the procedure used in previous research? Could this way of organising the procedure affect the explicit measure of anthropomorphism?

Reply: We believe that the issue raised by the Reviewer is highly relevant. In planning our experiments, we followed procedures similar to those used in previous cognitive or socio-cognitive work. However, we acknowledge that informing participants about the stimulus category they were about to see made that category (i.e., humans vs. robots) salient to them and, thus, may have somewhat affected their cognitive elaboration. It would therefore be interesting to verify whether similar effects also emerge when the specific stimulus category is not made salient, especially for robots with high levels of anthropomorphism. We are currently running an experiment similar to Experiment 3 of this manuscript in which participants are simply presented with human or robot faces in a random trial order, without the stimuli being differentiated either in the instructions or during the task. In this revised version, we discuss this interesting issue in the General Discussion (p. 26, lines 587-593).

Reviewer 2 – Issue 5: In my view, the paragraph referring to sexual objectification in the discussion section is not properly justified, nor connected with the previous conclusions.

Reply: We agree with the Reviewer that the paragraph in the General Discussion referring to sexual objectification is rather disconnected from the other conclusions. For this reason, and also because the revised General Discussion has been substantially expanded compared to the previous version, we decided to cut this paragraph.

We greatly appreciate the opportunity to revise our manuscript. The comments made by the Editor and the Reviewers were very helpful. We hope you will now agree that our manuscript is suitable for publication in PLOS ONE. If further revisions are needed, however, we would be happy to make them.

1. Macrae C, Quadflieg S. Perceiving People. Handbook of Social Psychology. 2010

2. Hugenberg K, Young S, Rydell R, Almaraz S, Stanko K, See P et al. The Face of Humanity. Soc Psychol Personal Sci. 2016;7(2):167-175. doi:10.1177/1948550615609734.

3. Zlotowski J, Bartneck C. The inversion effect in HRI: Are robots perceived more like humans or objects? In Proceedings of 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2013. p.365-372. doi:10.1109/HRI.2013.6483611.

4. Crolic C, Thomaz F, Hadi R, Stephen A. Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions. Journal of Marketing. 2021;86(1):132-148. doi: 10.1177/00222429211045687.

5. Nosek B, Hawkins C, Frazier R. Implicit social cognition: from measures to mechanisms. Trends Cogn. Sci. 2011;15(4):152-159. doi:10.1016/j.tics.2011.01.005.

6. Epley N, Waytz A, Cacioppo JT. On seeing human: A three-factor theory of anthropomorphism. Psychol Rev. 2007; 114(4): 864–886. doi:10.1037/0033-295X.114.4.864.

Submitted filename: Response_reviewers.docx

Decision Letter 1

21 Jun 2022

Human- or object-like? Cognitive Anthropomorphism of Humanoid Robots

PONE-D-21-30385R1

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Additional Editor Comments (optional):

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

3. Has the statistical analysis been performed appropriately and rigorously?

4. Have the authors made all data underlying the findings in their manuscript fully available?

5. Is the manuscript presented in an intelligible fashion and written in standard English?

6. Review Comments to the Author

Reviewer #1: The authors have correctly addressed my comments in the previous revision. I endorse publication of the present manuscript.

Reviewer #2: (No Response)

7. PLOS authors have the option to publish the peer review history of their article ( what does this mean? ). If published, this will include your full peer review and any attached files.

Reviewer #2:  Yes:  Naira Delgado

Acceptance letter

Dear Dr. Andrighetto:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

PLOS ONE Editorial Office Staff

on behalf of

Dr. Josh Bongard

Science Interviews


Exploring Space: Robots vs Humans

Interview, part of the show "Should I Stay, or Should I Go... to Mars?"

The Curiosity rover

Carolin - I think it's a no-brainer. Certainly in the short term, robots are the way to go. We've just heard about some of the hazards that humans face. Even if you can get them to Mars in one piece, then there are the dangers on the surface: the reduced gravity, the radiation. The thing about robots is they're a lot less fussy, they're a lot less fragile, and you don't have to send so much kit with them. So in many ways, they are a much more efficient way of exploring. Today, robots can do everything that humans can do. Maybe they don't do it so efficiently. Maybe we just have to be a bit more patient. We'll get our science a bit more slowly, but at the minute, there's no pressing need to be sending humans.

Chris - Sanjeev, you've been involved in the Curiosity Mission which is a pretty big mission. I mean, that's a mini-sized rover that's crawling around on Mars. Would you concur with what Carolin is saying?

Sanjeev - Not at all, I'm afraid. I'm a terrestrial geologist - a geologist who works on Earth, does field work on Earth - and I've moved into the planetary sphere. Whilst Curiosity can do amazing things, it's incredibly slow for me. What we do in days, a human can do in minutes. That's the issue: we've been on Mars for 3 years now and we've got some spectacular discoveries, but we can't look around. We can't go this way or that way. We can't make rapid decisions. We're not adaptable, because of latency effects, et cetera. So, I think a human on Mars could very, very rapidly do this sort of geological characterisation - the geological science that we can do on Earth - that even looking 10, 20 years ahead, a robot simply is not capable of doing, just because of human cognitive aptitudes.

Chris - It's just technology though, isn't it? I mean, it's just a question of writing better computer programmes to run these things better.

Sanjeev - Have you tried doing geology? It's very, very hard. The thing is that there are no simple patterns in the geological record. Scientifically, the first thing we're going to do when we get to Mars is actually look at the geological science. We want to understand the geological evolution of Mars. Did life evolve? Finding these things is incredibly difficult. There's a lot of rock on Mars. A rover maybe travels 20 kilometres in its lifetime - I think Curiosity has just done over 10 kilometres. Secondly, there are large areas of Mars which are very, very interesting that it will be very difficult to get robots to.

Chris - Carolin, you're sort of nodding and shaking your head in equal measure.

Carolin - Well, I agree that it would be a nice luxury to send human astronauts to do this job much more efficiently. I just dispute that it's actually required now. I mean, there are two questions here: one is, of course, whether we can do all the science with the rovers and the orbiters, and even the stationary spacecraft such as Phoenix. Yes, they can do it, maybe more slowly. But it's a matter of whether getting the science that much faster is worth the huge cost it might bring and the huge danger to the humans going out there.

Chris - Yeah, I was going to say. I mean, one has to - when you're making a decision about doing research - weigh up the pros and cons in terms of what bang you can get for your buck. Sanjeev, what you're sort of suggesting is, "Yeah, let's get some humans on Mars", but at that price, think how much science we could actually do with the same spend, both from orbit, which is much cheaper, and maybe even just from here on Earth.

Sanjeev - We can do a lot, but I think nobody is thinking that we're going to do this immediately so the plans are in the next 20 or 30 years for humans to Mars to do science properly. So, I think it's planning stages at the moment.

Chris - Richard, did you want to comment?

Richard - Well, I want to offer a compromise here. Something that the European Space Agency and NASA are seriously looking at is to have a mission to Phobos, or a mission to Mars orbit, and then be able to control rovers on the surface. That way, you eliminate the time delay and you can effectively have them as avatars on the surface.

Chris - When you say time delay, this is the problem that, because it's so far to Mars, even at the speed of light these radio control signals have a latency - a delay of not just seconds, but many minutes.

Richard - Which is why they're building a lot more autonomy into the new rovers, like the ExoMars rover, which will be the European rover to Mars. But Tim Peake, the British European Space Agency astronaut who's flying to the space station in December, will be doing one of these experiments: actually operating, from the space station, a rover in the 'Mars yard' - a simulated Mars environment in Stevenage - to simulate exactly that.

Chris - Carolin, what about the price as well? You alluded to cost - everyone has talked about cost - but what do we think this is actually likely to cost?

Carolin - I've got absolutely no idea. I mean, just thousands of billions would be my guess. I mean, maybe that's an exaggeration. I just want to come back...

Chris - Ryan is just waving his hand. He's got an idea of the price then. What do you think the price tag is?

Ryan - The lowest estimated cost that I've seen for human Mars missions is on the order of $10 billion or so. So, for comparison with Curiosity which was around 2 billion, a factor of 4 more potentially in the costs for a human mission. Yeah, certainly, I think you can justify that.

Carolin - I just want to go back to Rich's compromise. If you're going to send astronauts all the way to Phobos, you're still exposing them to the same danger. It's still going to not actually cost an awful lot less. If you're going to go away to Phobos, you might as well just go to the surface of Mars. I don't think that's much of a compromise really.

Richard - I think one of the things this compromise option eliminates, at least in the short term, is the problem of getting down to Mars and getting back off again, because we talked about sample return. When you talk to NASA scientists about this, no one has managed to get even a coffee-cup-sized sample of material off Mars. So, unless you're going to go on a one-way trip, which is what Mars One is proposing, getting back is a real, genuine concern.

Chris - I think Sanjeev, because your question and your point is that we want some samples and it's very hard to do this sort of science when you're not on Mars.

Sanjeev - More importantly, you don't just collect samples. As a geologist, you gain context, and that's very difficult to do from a distance. And so, I think even doing it from Deimos or Phobos is really difficult. I think what's happening now is really a serious attempt to start thinking about resources on Mars and what you could use as propellant to get back. So I think it's not going to happen instantly, but people are thinking about that. They will only consider it if they can do it - if the resources are there to provide propellant to get humans back from the surface of Mars.

Chris - Propellant equals rocket fuel.

Sanjeev - Right.

Carolin - I think the key thing, though, that we're all in agreement on is that we need robots at the minute. Whether or not we're going to be sending human astronauts to Mars, they're vital as scouts really - as precursors - to find where the resources are and the possible landing sites, and to assess the climate and the radiation damage. We need to test things like the launch and landing technology we were just talking about, the piloting, and even just the delivery of equipment, habitat, fuel and food. Everything that we need is all going to be robotic long before we send the astronauts. So, whether or not you send the astronauts, we still need to develop the robot technology, just as that scouting and testing ground.

Chris - In other words, we need them anyway so we might as well just solve this problem in the course of solving the next one, don't we? Now, the other question is of course, why do we want to go to Mars? What can we learn about Mars? Interestingly, this year's Christmas lectures at the Royal Institution are all about the future of space travel and space exploration, and they're being delivered by UCL's Kevin Fong.

Kevin - Mars, I think, is a large piece of the puzzle when it comes to asking the question of how ubiquitous life is in the universe. We know that 4 billion years ago, conditions on Mars were at least similar to, if not the same as, those on Earth. We know that that was around about the time that life first arose on our planet. And so, if we go to Mars and it proves to be sterile, and always to have been sterile, then it means that Earth is indeed very, very special, and that there's something particularly special about it that allows life to claw its way in and then develop. If we go there and find that there is life now, or there has been life on Mars, it probably means that life is ubiquitous in the universe, because it suggests that wherever life has a chance, it will claw its way in, at least in some basic form. So, I think it is essential that we explore Mars. I think it holds some of the most important answers that we seek in the 21st century.



Latest Blog Post

part presentation playbook

Robot Productivity: How Cobots Compare to Humans

Alex Owen-Hill

Productivity is an often debated topic in robotics. Is robot productivity really better than that of human workers?

Sometimes, the answer is not as clear-cut as you would first assume.

When you think of manufacturing robots, you probably visualize something like the super-fast delta robots doing pick and place on factory conveyors. Or perhaps you visualize the fast, highly-accurate welding robots in car factories.

In these examples, it can seem obvious that such fast-moving robots would increase the productivity of those tasks.

But, robot productivity is not always so clear-cut. The speed of the robot doesn't necessarily translate into a more productive process overall.

Collaborative robots (or cobots) are a perfect example of this. Cobots usually move more slowly than conventional industrial robots and can even move more slowly than human workers.

Does their slower speed mean that cobots are less productive? Not necessarily…

The definition of robot productivity

Robot productivity is the ratio of output to input that a robot cell achieves in production. Like manufacturing productivity, it is a measure of efficiency: the more a robot cell produces from a given set of resources in a particular time period, the more productive it is.

However, the productivity of an individual robot cell is only one aspect of productivity. If you were to fixate on this measure of productivity, you would likely prioritize the speed of the robot. But, just because a robot cell works faster doesn't mean that your entire process will necessarily be more productive.

You also have to take into account how robots affect the overall productivity of your process.

Robots can be a key tool for increasing the productivity of your manufacturing operations. But, they can only do this if you place the robot at a bottleneck task. We have seen many examples of situations where robots have been used to ease pressure on a bottleneck task. This has led to an increase in productivity in the entire operation.
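
To make the bottleneck point concrete, here is a minimal sketch (not from the original article) of why the throughput of a serial line is capped by its slowest stage, so a robot only lifts overall productivity when it is placed there. The stage names and rates are hypothetical.

```python
# Illustrative sketch: a serial production line can only run as fast as its
# slowest stage, so automating a non-bottleneck stage barely helps overall.
# Stage rates are hypothetical, in units per hour.

def line_throughput(stage_rates):
    """Overall throughput of a serial line is limited by its slowest stage."""
    return min(stage_rates)

baseline = {"packing": 60, "palletizing": 25, "wrapping": 50}

# Option A: speed up a stage that is NOT the bottleneck.
faster_packing = dict(baseline, packing=120)

# Option B: put the robot on the bottleneck (palletizing).
robot_palletizing = dict(baseline, palletizing=45)

for name, stages in [("baseline", baseline),
                     ("faster packing", faster_packing),
                     ("robot palletizing", robot_palletizing)]:
    print(f"{name}: {line_throughput(stages.values())} units/hour")

# baseline:          25 units/hour
# faster packing:    25 units/hour  <- no overall gain
# robot palletizing: 45 units/hour  <- the whole line speeds up
```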

Human vs robot productivity: a complex question

When people see collaborative robots in action for the first time, they sometimes say "These robots move very slowly."

They compare the speed of the robot with the speed of their human workers or with other types of automation. They find it hard to picture how a slower-moving robot could help improve their productivity.

Let's take an example of a particular task: stacking boxes onto one of two wooden pallets.

Worker palletizing boxes manually in a manufacturing facility

How a robot performs the task

Picture a robot performing the palletizing task.

A conveyor feeds the robot cell with boxes. The boxes arrive at irregular intervals as they are packaged by human workers earlier in the line.

The robot waits patiently for a box to arrive. Whenever a box arrives, the robot reads its label. If the label is red, the robot immediately places it on the left pallet. If the label is green, the robot places it on the right pallet.

The robot operates constantly without breaks.
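
The robot's side of the task boils down to one simple, endlessly repeated decision rule. Below is a minimal sketch of that loop; wait_for_box, read_label, and place_on are hypothetical stand-ins for the real conveyor, vision, and motion interfaces.

```python
# Minimal sketch of the robot's palletizing loop described above.
# The three callables are hypothetical stand-ins for real hardware interfaces.

def palletize_forever(wait_for_box, read_label, place_on):
    while True:
        box = wait_for_box()        # block until the conveyor delivers a box
        label = read_label(box)     # e.g. "red" or "green"
        if label == "red":
            place_on(box, pallet="left")
        elif label == "green":
            place_on(box, pallet="right")
        else:
            place_on(box, pallet="reject")  # unreadable label: set aside for a human
```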

How a human performs the task

In this case, a human worker is tasked with palletizing the boxes. As the boxes do not arrive very quickly, this worker also has an unrelated inspection task that they perform at the same time.

The boxes queue up at the end of the conveyor. When enough of the boxes have queued up, the worker rushes over from their inspection task and starts palletizing the boxes.

They have to check the label of each box manually. They put the box on the correct pallet depending on the color of its label. Because the worker is moving quickly, they do not place the boxes as accurately on the pallet as the robot would.

Sometimes, a lot of boxes pile up on the conveyor when the worker hasn't got time to do the palletizing or is on a break.

The result is that the palletizing is very inconsistent. Quality suffers, and the worker is always rushing to catch up with the task, even when they work as quickly as they can.

Which of these two systems seems more productive to you?

When do robots increase productivity in a facility?

As you can see in the example above, the speed of operation isn't the only aspect of productivity. Consistency also has a huge impact on how productive the task can be.

Robots increase productivity when they are used for tasks that humans struggle with in the first place.

In the example, palletizing is not a very suitable task for human workers. To palletize boxes efficiently, you need to work continuously at a consistent rate and accuracy. Robots are always consistent but humans are not.

Workers often see this potential themselves. One study found that 77% of people would welcome robotic assistance at work if it meant that the number of manual processes decreased. As humans, we excel at cognitive tasks but highly repetitive manual tasks are just not the best use of our skills.

When you apply a robot to a bottleneck task that is already not a good task for a human, this is often where productivity improvements are most clear.

Employee working beside a robot on palletizing tasks in a manufacturing facility

Hear from companies that have approached robot productivity the right way

You don't need to take our word for it that robots can improve your productivity.

We have a whole collection of case studies of various companies that have used robotics to boost productivity in their facility.

A great example of this in action is the case study of how French manufacturer Alliora increased its productivity with robotic palletizing. You can read about their experiences here.

Do you think robots can be more productive than humans? Tell us in the comments below or join the discussion on LinkedIn, Twitter, Facebook, or the DoF professional robotics community.

Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy

  • Open access
  • Published: 25 October 2021
  • Volume 31, pages 595–616 (2021)

  • Paul Formosa (ORCID: orcid.org/0000-0002-7490-0242)

Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence (AI) that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these: the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is strong potential for social robots to enhance human autonomy, as well as several ways in which they can inhibit and disrespect it. We argue that social robots could improve human autonomy by helping us to achieve more valuable ends, make more authentic choices, and improve our autonomy competencies. We also argue that social robots have the potential to harm human autonomy by instead leading us to achieve fewer valuable ends ourselves, make less authentic choices, decrease our autonomy competencies, make our autonomy more vulnerable, and disrespect our autonomy. Whether the impacts of social robots on human autonomy are positive or negative overall will depend on the design, regulation, and use we make of social robots in the future.

1 Introduction

Social robots are robots that can appear to express and perceive human emotions and can communicate with us using “high-level dialogue and natural cues”, such as gaze and gestures (Fosch-Villaronga et al., 2020 , p. 441). The interactivity and receptivity of social robots can encourage humans to form social relationships with them. As social robots and the artificial intelligence (AI) that powers them becomes more advanced (Lutz et al., 2019 ), they will likely take on more social and work roles. This could include undertaking care work for children, the elderly and the sick, becoming our teachers and work colleagues and, eventually, our social companions, friends, and even sexual partners (Darling, 2018 ; Ferreira et al., 2017 ; Lin et al., 2012 ; Mackenzie, 2018 ; Pirhonen et al., 2020 ; Sparrow, 2017 ). These changing roles constitute a shift in our relationship with technology such as social robots from it being a tool that we use to achieve our ends to something that we regard as an agent that we interact with (Breazeal et al., 2004 ). This shift has many important ethical implications. In this paper, we focus on one of the most central of these, its impacts on our autonomy. The autonomy of AIs and social robots and the autonomy of humans are often seen as a zero-sum game: more autonomy for social robots by offloading decisions to them equals less autonomy for humans (Floridi & Cowls, 2019 ). But, as we shall see, the impacts of social robots on human autonomy are more varied and complex than such an analysis suggests. Given the importance of autonomy to our understanding of morality, it is essential that we think through these ethical issues before these impacts are widely felt.

The paper proceeds as follows. First, we set out briefly what is meant by autonomy in the philosophy and ethical AI literatures. This is important since it shows us both that autonomy can mean different things in different literatures and that there are substantive disagreements between different theories of autonomy. This matters since different theories of autonomy can have competing normative implications. With this background in place, we then justify our focus on social robots on the grounds that, given their physical presence and social abilities, they have the potential to have very significant impacts on our autonomy (Borenstein & Arkin, 2016 ). While some of the implications that social robots have for autonomy will also hold for AI and other forms of technology, not all these implications will hold or will hold to the same degree for these other forms of technology. We then demonstrate the ways in which social robots could enhance and respect, as well as inhibit and disrespect, the autonomy of their users. We identify three broad ways that social robots could improve our autonomy through leading to humans having: (1) more valuable ends; (2) improved autonomy competencies; and (3) more authentic choices. We also identify five ways that social robots could harm our autonomy through leading to humans having: (1) fewer valuable ends; (2) worse autonomy competencies; (3) less authentic choices; (4) greater autonomy vulnerability; and (5) their autonomy disrespected. While this list is not intended to be exhaustive, it is illustrative as it brings together for the first time a systematic analysis of the most important impacts of social robots on human autonomy. We show that whether the impacts of social robots are positive or negative overall for human autonomy will depend on the design, regulation, and use that we make of social robots in the future.

2 Human Autonomy

While a full analysis of the philosophical literature on autonomy is obviously beyond our scope here, it will prove useful to outline some of the most relevant features of that literature here as background for the detailed analysis that occurs later in the paper. In its broadest sense, autonomy means “self-rule” (Darwall, 2006 ). Autonomy has been applied to political entities, institutions, machines, and persons who govern themselves. However, over time the focus of autonomy has shifted (Darwall, 2006 ). Initially, autonomy in ancient Greek thought was primarily used as a political term referring to states or cities that govern themselves (Schneewind, 1998 , p. 3; Formosa, 2017 ). Only later, with Kant, does autonomy come to be linked to the independence of practical reason and the freedom that reason grants persons to govern themselves independently of obedience to others, including the state (Schneewind, 1998 ). Kant’s concept of personal autonomy as rational self-government has, in turn, become increasingly expanded beyond adherence to universal law to include governing yourself according to your “own” authentic desires and impulses (O’Neill, 2002 , p. 31). Our focus here will be on this expanded sense of personal autonomy.

There are many different theories of personal autonomy in the philosophical literature (Anderson et al., 2005 ). While the detailed differences between these theories are not relevant here, the broad structure of the different types of theories of autonomy will prove important since it influences how we conceptualise the impacts that social robots have on human autonomy. Broadly, we can group contemporary theories of personal autonomy into procedural or substantive theories (Formosa, 2013 ; Mackenzie & Stoljar, 2000 ). We shall briefly consider each type in turn.

Procedural theories hold that content-neutral procedures provide necessary and sufficient conditions for autonomy. For example, Frankfurt’s ( 1971 ) well-known account of autonomy holds that an action is autonomous if you act on desires that you desire to have. Here we have a procedure , namely that the first-order desires that you act on are desires that you desire to have (i.e. your second-order desires are effective), that determines whether you are autonomous, while saying nothing about the content of your desires (either first or second order). Other procedural theories give a different account of the procedures that determine whether you are autonomous, such as Watson’s ( 1975 ) account which focuses on acting from reflectively endorsed values or Christman’s ( 2004 ) account which focuses on acting from values that you would not revise were you to become aware of the influences underwriting those values. What matters for such theories is the procedure you follow in deciding what to do, not the content of your decisions or values.

Critics of procedural theories argue that they struggle to deal with the problem of “oppressive socialisation”, that is, forms of socialisation that “impede the autonomy of the persons” that undergo it by undermining their “normative competence” at assessing norms for themselves (Benson, 1991 , p. 406; Mackenzie & Stoljar, 2000 , p. 20). This is a problem for procedural theories since their focus on content-neutral procedures makes it difficult for them to deal with cases where a person comes to reflectively endorse substantively flawed desires or norms. This is seen in the much-discussed case of the 1950s housewife who endorses the sexist and heteronomous norms that a woman should be subservient and under the control of her husband because her oppressive socialisation has left her unable to assess the falsity of such norms (Benson, 1991 ). This problem is related to the issue of adaptive preferences, whereby people can adapt their preferences to suit poor or unjust circumstances (Begon, 2015 ).

To deal with these concerns, substantive theories of autonomy hold that persons act autonomously if they act from the right set of values or in accordance with true or valid norms (Stoljar, 2000 ). For example, on some Kantian views, persons are autonomous when they act from the endorsement of the absolute value of the dignity of all rational agents (Formosa, 2013 ) or from the practical identity of themselves as an equal lawgiving member of the kingdom of ends (Korsgaard, 1996 ). The content of your decisions and values matters for such views, not merely the procedure you follow. Substantive accounts can avoid the problem of oppressive socialisation since they can claim that oppressive norms are false (Stoljar, 2000 ) or are incompatible with the dignity of all rational agents. However, critics of such theories worry that they can struggle to justify which substantive values or norms are the right ones (Formosa, 2013 ).

Both substantive and procedural theories of personal autonomy typically differentiate between competency and authenticity conditions (Christman, 2009 ; Susser et al., 2019 ). Authenticity conditions require that the values and desires that you act on are really your own , and not those that result from manipulation, oppression, subservience, undue external influence, or coercion. Competency conditions point to the fact that to be autonomous you must be able to do various things and have certain skills and self-attitudes (Meyers, 1987 ), such as being able to critically reflect on your values, adopt ends, imagine yourself being otherwise, and regard yourself as the bearer of dignity authorised to set your own ends. Several self-attitudes, such as self-respect, self-love, self-esteem, and self-trust, are also seen as important autonomy competencies (Benson, 1994 ; Mackenzie, 2008 ). These are seen as important because if you are to regard yourself as self-governing, then you need to be able to have respect for your powers of rational agency (self-respect), hold that your ends are worthwhile (self-love), trust that you can do what you set out to do (self-trust), and think of yourself as having worth as a person (self-esteem). Oppressive socialisation works by inhibiting the development of these competencies by, for example, lowering the esteem in which you hold your own worth as a person (Benson, 1991 ; Mackenzie & Stoljar, 2000 ). Oppression can also undermine the authenticity of our choices by leading us to hold values and norms that are the result of undue external social pressures and are thus not really our “own” (Friedman, 1986 ). In contrast, positive patterns of intersubjective recognition can help to bolster these vulnerable self-attitudes (Mackenzie, 2008 ) and help us to develop values and norms that are authentically our own.

Autonomy can also be diminished and empowered through the quality of the choices available to us. This is clearly illustrated through Raz’s ( 1986 ) “ Man in the Pit ” example, where a man is stuck alone in a dark pit with a choice between eating, sleeping, or scratching his left ear now or a little later. Raz’s man in the pit lacks autonomy because he lacks an “adequacy of options” (Raz, 1986 , p. 373). When we are given more control over important aspects of our lives and access to a diverse range of meaningful choices, then our autonomy is increased. Further, having some degree of control over how our choices are realised, and not being subject to excessive oversight or control in their pursuit, is also important for our sense of autonomy (Ryan et al., 2006 ; Ryan & Deci, 2017 ).

As well as a capacity for self-governance, autonomy is also understood as a moral principle . This is clearest in the Kantian tradition (Kant, 1996 ), but is also present in various forms of principlism (Shea, 2020 ). For example, the 1974 Belmont Report on the ethical treatment of research subjects (via the “Respect for Persons” principle), the highly influential four principles of Beauchamp and Childress ( 2001 ) and its more recent extension by the AI4People framework (Floridi et al., 2018 ), all list autonomy as a basic ethical principle. Here autonomy is understood as something that ought to be respected, and that requires a focus on the consent of persons (Beauchamp & DeGrazia, 2004 ). Autonomy as a moral principle also speaks to the dangers of paternalism on the grounds that it disrespects autonomy through bypassing the consent of others (Scoccia, 1990 ).

Drawing these points together, we can say that human autonomy depends on the development and maintenance of a range of autonomy competencies. Autonomy also depends on having access to a sufficient range of meaningful options across important areas of life and being able to act freely on non-oppressive norms and values that are authentically our own without excessive oversight. Further, human autonomy is something that should be respected. All these aspects are important to consider when assessing the multifaceted impacts that social robots can have on human autonomy.

3 Machine Autonomy and AI

Machine autonomy can be understood as “the ability of a computer to follow a complex algorithm in response to environmental inputs, independently of real-time human input” (Etzioni & Etzioni, 2016 , p. 149). More advanced forms of machine autonomy typically depend upon the use of AI. Although there are many competing definitions of AI, we shall understand it here to be creating information-processing systems that can do things which we would typically classify as intelligent were a human to do them, such as reason, plan, solve problems, categorise, adapt to its environment, and learn from experience (for discussion see Wang, 2019 ). Machine autonomy comes in degrees. The more responsive machines are to a greater range of environmental inputs and the greater range of conditions in which machines can act, reason, and choose independently of real-time human input, the higher is their degree of autonomy.

The issue of machine (or artificial) autonomy is of central importance to much of the recent literature on ethical AI, as demonstrated by three recent reviews by Floridi and Cowls ( 2019 ), Hagendorff ( 2020 ), and Jobin et al. ( 2019 ). Floridi and Cowls ( 2019 ) conceptualise the issue of autonomy as one where humans offload decision-making powers to AI, and they worry that “the growth in artificial autonomy may undermine the flourishing of human autonomy” (Floridi & Cowls, 2019 , p. 7). On this analysis, if humans delegate a decision to an AI, then humans lose some autonomy and the AI gains some autonomy. Hagendorff ( 2020 ) instead takes human autonomy to refer in AI ethical guidelines to people being treated with respect as individuals, and he notes the tension between the need for AI to train on large data sets and the importance of not treating humans merely as sources of data. Further, he also identifies the ways that AI can be a threat to human autonomy by manipulating users through “micro targeting, nudging, [and] UX-design” (Hagendorff, 2020 ). Jobin et al. ( 2019 ) undertake an exhaustive review of ethical guidelines for AI in the grey literature (i.e. non-academic sources such as government reports). They find that autonomy is used in these guidelines to refer to both “positive freedom”, including the freedom to self-determination and to withdraw consent, and “negative freedom”, including the freedom from manipulation and surveillance. Autonomy is to be promoted through transparency, maintaining broad option sets, increasing knowledge of AI, requiring informed consent, and limiting data collection (Jobin et al., 2019 ).

Clearly, there are both strong overlaps and important differences in how machine autonomy is understood in contrast to human autonomy. For both humans and machines, autonomy is a matter of self-governing across a range of significant choices in various contexts, and thus increasing the capacity to self-govern across a greater range of contexts and actions increases autonomy. Further, for both humans and machines, taking on significant choices increases autonomy and offloading significant choices to others decreases autonomy. In contrast, concerns about nudging, manipulation and surveillance apply to human autonomy only. Further, when autonomy is understood as a moral principle, there is a clear imperative to respect the autonomy of humans, which requires their consent, that does not apply to respecting the autonomy of machines, because the former and not the latter (for now, at least) are moral agents. Whether social robots or AIs could ever become persons or moral agents are further questions beyond our scope (but for discussion see, for example, Gunkel, 2020 ; Sparrow, 2012 ; Fosch-Villaronga et al., 2020 ).

4 The Impacts of Social Robots on Human Autonomy

While all forms of technology can impact human behaviour, we focus in this paper on the impacts on human autonomy of advanced social robots since these impacts are likely to be particularly significant (Bankins & Formosa, 2020 ). Given the lack at present or in the near future (see Bostrom, 2014 ) of Artificial General Intelligence (AGI), that is, AI that matches human-level performance across all relevant human abilities (Walsh et al., 2019 , p. 16), we focus here only on social robots powered by Artificial Narrow Intelligence (ANI), that is, AI that is specialised to work only in specific areas (Gurkaynak et al., 2016 ). This means that we only consider instances of social robots being given limited machine autonomy in specific contexts, rather than general-purpose autonomy in every context.

Breazeal ( 2003 , p. 167) defines social robots as the “class of robots that people anthropomorphise in order to interact with them”. The Computers as Social Actors (CASA) paradigm (Reeves & Nass, 1996 ) suggests that humans tend to act as if computers and other forms of technology, such as social robots, are agents (or “social actors”) and not mere things. This leads humans to interact with technology by following the same social scripts, schemas, and rules, such as norms of politeness and reciprocity, that are used in human–human interactions (Reeves & Nass, 1996 ; for an updated review of CASA see Gambino et al., 2020 ). This helps to explain the human tendency to anthropomorphise technology by attributing human qualities and characteristics, such as motivations, intentions, and emotions, to non-human entities and inanimate objects (Epley et al., 2007 ; Fossa, 2018 ; Turkle, 2012 ).

However, while the tendency to anthropomorphise technology applies beyond social robots, it has been shown that the more socially interactive and human-like the robot is, the stronger is the tendency to anthropomorphise it (Fink, 2012 ). The social interactivity of social robots makes them “relational artifacts” that “present themselves as having ‘states of mind’” for their human partners to engage with (Turkle et al., 2006 , p. 347). This transforms our perception of social robots from tools that we use, into agents that we interact with in socially intuitive ways (Breazeal et al., 2004 ). Of course, this does not mean that social robots really are moral agents deserving moral respect, but it does mean that humans will tend to treat social robots as if they are agents. The use by social robots of verbal and non-verbal cues, such as gaze direction, and emotional receptivity aids this outcome. Drawing on Breazeal’s ( 2003 , p. 169) work, we can see that social robots come in various degrees of sophistication, from simple “ socially evocative ” robots such as robotic pets, to “ social interface ” robots which can use “human-like social cues and communication modalities”, to “ socially receptive ” robots that are receptive to human social cues, and finally “ sociable ” robots that have their own internal goals and “model people in social and cognitive terms in order to interact with them”. Our focus will primarily, but not exclusively, be on social robots on the more sophisticated end of this spectrum. We are therefore mainly thinking here about social robots that are “more sophisticated (but still non-sentient) versions of the [social] robots that we can build today” (Sparrow, 2017 , p. 468), that have advanced motor, social and emotional skills, and can draw on “empathetic technology” and “extensive knowledge of our preferences” to “tailor their behaviours” toward us (Bankins & Formosa, 2020 , p. 3). The social interactivity and physical presence of such sophisticated social robots makes their potential impacts on human autonomy very large, and this justifies our focus on them in this paper.

Given the importance for the discussion of autonomy of the delegation of decisions from humans to robots, we need to conceptualise the different ways this might occur. One commonly used way to describe that is through the language of a human in , on or out of the decision-making loop. The notion of a “ human-in-the-loop ” design has been used in a number of ways across several fields, from human–computer interaction (HCI), human–robot interaction (HRI), machine learning (ML) (Rahwan, 2018 ), and in the military context to discuss autonomous weapons systems (Schmitt & Thurnher, 2013 ; Sparrow, 2016 ; Walsh et al., 2019 ). Drawing on this literature, we can define a human in-the-loop design as one where a human must decide what a robot will do (e.g. a robot offers options but does not act until a human tells it which option to undertake); an on-the-loop design as one where a human may decide what a robot will do (e.g. a robot offers options but will act on its own if a human does not tell it which option to undertake); and an out-of-the-loop design as one where a human cannot decide what a robot will do (e.g. a robot independently acts on a certain option with no scope for human input). In the context of social robots, a similar distinction has been made between “opt in”, “opt out” and “no way out” pathways (Borenstein & Arkin, 2016 , p. 42) that approximates respectively the human in , on , and out of the loop distinction. Given its existing use in the context of social robots, we will adopt this language here.

To see the differences between these three pathways, consider the following example. Imagine a simple social robot that can offer advice about what clothes you should buy, but only does so if you explicitly ask for that advice or “opt in” to that service (in-the-loop). However, the social robot will automatically call emergency services if it thinks that you have fallen over unless you explicitly tell it not to or “opt out” within 10 s (on-the-loop). The social robot also has a GPS tracker that sends back its location at regular intervals to its manufacturer and the user has “no way out” of this tracking (out-of-the-loop). Both “opt in” (once opted in) and “opt out” pathways can operate at the level of decision support mechanisms as they leave the decision to the human user who remains part of the decision loop. In contrast, the “no way out” pathway removes the human from the decision-making loop, granting the machine full autonomy to undertake the action itself. While there may be more complex ways to make this distinction (such as differentiating between automating information provision, information analysis, and decision options; for discussion, see Lyell et. al., 2021 ), this simple tripartite model will suffice for our purposes. However, the practical differences here might be blurred given the presence of the “automation bias”, which is the “tendency [of humans] to over-rely on automation” (Goddard et al., 2012 , p. 121). Even if humans remain formally part of the decision loop, they may be biased towards always uncritically following the machine’s advice, which practically means that they are allowing the machine to act with little or no human oversight (as in a “no way out” design).
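
As a rough illustration of where the decision sits in each of the three pathways, here is a sketch based on the clothing, fall-detection, and GPS examples above. The helper names, the 10-second window, and the robot behaviours are assumptions for illustration, not any particular robot's API.

```python
# Sketch of the three delegation pathways ("opt in", "opt out", "no way out")
# from the example above; names, timings, and behaviours are hypothetical.
import time

def suggest_outfit(user_asked):
    """'Opt in' (in-the-loop): the robot acts only when the human asks."""
    return "How about the blue shirt and grey tie?" if user_asked else None

def handle_possible_fall(user_cancelled, wait_seconds=10.0):
    """'Opt out' (on-the-loop): the robot will act unless the human stops it in time."""
    deadline = time.monotonic() + wait_seconds
    while time.monotonic() < deadline:
        if user_cancelled():          # the human can step back into the loop here
            return "emergency call cancelled by user"
        time.sleep(0.1)
    return "calling emergency services"

def report_location(gps_fix):
    """'No way out' (out-of-the-loop): the human has no say in this decision."""
    return {"send_to": "manufacturer", "location": gps_fix}
```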

4.1 Social Robots as Autonomy Enhancers

Drawing on the above discussion, we argue that there are at least three broad ways that our autonomy could be enhanced by social robots. We can summarise these as, through the assistance of social robots, humans can achieve: (1) more valuable ends ; (2) improved autonomy competencies ; and (3) more authentic choices . These are important cases as they counteract the common view that more autonomy for machines means less autonomy for humans. Consider the example of Corti, an AI-powered machine that informs emergency call responders whether the caller is at risk of a heart attack through using machine learning to analyse breathing and speech patterns (van Wynsberghe & Robbins, 2019 ). Although Corti is not a social robot, we could easily imagine a “robotic triage nurse” with similar functions (Asaro, 2006 , p. 14). Corti is implemented as an “opt in” design as it merely advises a human operator who must choose whether to act on its advice. But if we instead delegate to Corti the decision whether to send an ambulance to someone through a “no way out” design, then we have seemingly increased Corti’s autonomy (since it can act independently in a greater range of cases) by decreasing the human operator’s autonomy (since they no longer make an important decision for themselves). This transforms Corti from what is known in the medical AI literature as a “decision support” into an “autonomous decision” technology (Lyell et al., 2021 ; Rogers et al., 2021 ). This makes human and machine autonomy seem like a zero-sum game, with more for one meaning less for the other. But, as the below discussion shows, this is not always the case.

4.1.1 More Valuable Ends

First, we can increase a person’s autonomy through giving them access to a suitably broad range of valuable ends. We can do that through giving people access either to a greater number of valuable ends or to ends that are more valuable. Social robots can help in both regards either by undertaking the means to ends set by humans or by setting lower value ends for humans on their behalf. In the first case, imagine an elderly woman called Sally who is unable to move around by herself. Sally would like a cup of tea to drink while she reads her novel, but she cannot adopt that end as she cannot move around by herself. However, one day Sally acquires a social robot who can assist her. As before, Sally would like a cup of tea to drink while she reads her novel, but now she can ask her social robot to make it and bring it to her, which it does while Sally continues to read. Sally has more autonomy because she can now set a valuable end, that of drinking a cup of tea while reading, which she could not otherwise set without (in this example) the help of her social robot. (Clearly, this example extends to many other cases of robots helping people to overcome restricted functional abilities—see Pirhonen et al. ( 2020 ). Further, many simpler forms of technology, such as walking aids, can also help people to set ends they otherwise could not). In the second case, imagine a businessman called Sam who has a social robot designed to be proactively helpful to him. After examining Sam’s schedule, his social robot proactively selects an appropriate shirt and tie for a business meeting that Sam has that morning (for an example of this sort of social robot see Woiceshyn et al., 2017 ) and brings the clothes to Sam at the exact moment it calculates that he will need them to get dressed to make his meeting on time. Sam is thankful for not having to spend time selecting his clothes for the day. After getting dressed, he hops into the taxi his robot has ordered for him so that he arrives exactly 5 min before his meeting, since his robot knows he always likes to be a few minutes early to meetings. Sam uses the time his robot’s proactive actions have gained him to read important documents that he wants to get through. Sam has more autonomy because he can now set a valuable end, that of reading important documents before his meeting, that he could not otherwise set without (in this example) the help of his social robot.

In Sally’s case, the social robot undertakes the means to ends that are set by a human. This is an “opt in” design. In Sam’s case, the social robot proactively sets ends for a human so that the human can set other ends. This is an “opt out” or “no way out” design, depending on the implementation. However, in both cases we do not have, for utility gains, a loss of human autonomy through a gain in machine autonomy. Instead, we have a gain in human autonomy (i.e. more meaningful choices for a human) through more machine autonomy (i.e. by delegating less important choices to a machine). To see why, consider the tea-making social robot in Sally’s case. While, in terms of starting the tea-making process, this is an “opt in” design, there are still many other sub-decisions that are delegated to Sally’s social robot and thus which constitutes a “no way out” design in this regard, such as the decision about how to safely navigate the room without stepping on the cat’s tail or spilling the tea. Compare this to a tea-making robot that lacks all autonomy, which would make it a simple remote-controlled device (or “telepresence robot”) unable to move by itself (Pirhonen et al., 2020 ). This design gives the human user more control over how the robot navigates the room, but this comes at the cost of making the robot far less useful. A simple way of reading this trade-off is: more autonomy for humans but a less useful machine, or less autonomy for humans but a more useful machine. But this is an overly simplistic analysis, as we shall see.

Autonomy is (in part) about freely choosing to do the valuable things that we authentically want to do. If Sally must spend her time remote controlling a robot across a room, rather than reading the novel which she really wants to read, then having more control over the robot means less autonomy for Sally as she is forced to do something that she does not value highly (i.e. remote controlling a robot) to get something else she really wants (i.e. a cup of tea to drink while reading her novel). In contrast, if Sally delegates the task of room navigation to the robot, thereby giving it more autonomy, then Sally is also more autonomous as she can instead spend her time freely doing what she really wants to do (i.e. reading the novel while the cup of tea is made for her). There is also some evidence to suggest that Sally will feel more autonomous due to the independence her robot gives her (Pirhonen et al., 2020 ). Likewise, Sam gains greater autonomy by delegating to his social robot the setting of what he regards as the less valuable ends of selecting which shirt and tie to wear and how to get to his meeting on time, since this allows him to pursue more valuable ends, in this case reading documents for his work meeting, that he really wants to do instead. In both cases, more machine autonomy leads to more human autonomy, not less, by giving Sally and Sam more time to do what they value most highly through the offloading of less valuable choices to their social robots. But this does not mean, as we shall see in the next section, that we can offload every difficult task or important decision to machines without loss to our autonomy.

4.1.2 Improved Autonomy Competencies

Second, social robots can also increase a person’s autonomy by helping them to build, maintain, and develop their autonomy competencies. A social robot could do this through either indirect or direct assistance. In the case of indirect assistance, a social robot indirectly frees up a person’s time and attention resources through undertaking less valued tasks for them. This gives that person the time and space they would not otherwise have had to develop their autonomy competencies themselves. Imagine a variation of the previous examples where a person offloads mundane tasks, such as making tea or booking a taxi, to a social robot so that they can directly cultivate their autonomy competencies themselves by, for example, reading a book on critical reasoning or talking to an encouraging friend which boosts their self-esteem. Here the social robot helps to facilitate autonomy competency development that might not otherwise have been possible. (In this case, other time saving forms of technology could have similar impacts). In the more interesting case of direct assistance, a social robot could directly increase a person’s autonomy competencies through positive social interactions with them. Here the social interactivity of this technology is crucial. If humans can develop, maintain, and cultivate their autonomy competencies through positive social interactions with each other that bolster attitudes such as self-respect, self-love, and self-trust (Mackenzie, 2008 ), then something similar should be possible with advanced social robots (Pirhonen et al., 2020 ). There is some evidence to support this claim. For example, a systematic review of the use of social robots among older adults found a lack of high-quality studies but some indications that social robots can reduce agitation, anxiety, and loneliness (Pu et al., 2019 ), which could in turn boost relevant autonomy competencies such as self-esteem. Similar positive impacts have been found in other populations (Jeong et al., 2015 ). Another study showed that social rejection by a robot can lower self-esteem relative to social acceptance by a social robot or a control condition (Nash et al., 2018 ).

These positive and negative impacts will likely be due, in part, to the human tendency to anthropomorphise social robots by regarding them as social agents who have “states of mind”, including attitudes toward us, that develop through our intuitive social interactions with them (Breazeal et al., 2004 ; Fossa, 2018 ; Turkle, 2012 ). For example, by seeming to regard you as a source of normative authority about what ought to be done, a social robot might be able to help foster your self-respect. Likewise, a social robot that seems to regard you and your ends as valuable by taking the initiative to proactively help you to achieve your ends might help to foster your self-love and self-esteem. By encouraging you, a social robot may also help you to develop self-trust. These positive social outcomes could be strengthened through the social robot’s use of gestures, tone of voice, eye contact, expression of (what appears to be) emotions such as sympathy, and physically embodied presence (Borenstein & Arkin, 2016 ; Li, 2013 ; Moshkina et al., 2011 ). Insofar as these positive outcomes can be achieved, social robots could directly improve our autonomy competencies.

4.1.3 More Authentic Choices

Third, we can increase a person’s autonomy by helping them to make more authentic choices, both in the sense of more choices that are authentic and choices that are more authentic . A social robot could use its social interactivity to help to achieve this outcome in several ways. A choice is authentic if one acts “on motives, desires, preferences and other reasons” that are “one’s own”, and they count as “one’s own” when, on reflection, one endorses or acknowledges them (Walker & Mackenzie, 2020 , p. 8). The more a choice is “one’s own” in this sense, the more authentic it is. However, measures of authenticity differ between substantive and procedural theories of autonomy.

On strong substantive views, a choice is more authentic the more it reflects the right values (Wolf, 1990 ) or norms (Stoljar, 2000 ), since it is only when we act on such values or norms that we correctly grasp moral reality and act authentically as the moral beings we are. Of course, as noted above, such views suffer from the difficulty of justifying what are the right values or norms. In any case, according to such views, social robots that help us to avoid acting from the wrong values or norms thereby help us to make more authentic choices by better connecting us with moral reality and our authentic moral selves. We can see how social robots might bring about this outcome by examining the way that some social robots are designed to shut down or resist abusive interactions (Turkle, 2012 ). For example, the “robotic dinosaur Pleo cries out as though it is experiencing pain if pushed over or otherwise ‘mistreated’” (Borenstein & Arkin, 2016 , p. 42). Generalizing, a social robot could be designed to use such behaviours to encourage us to make (what counts on a strong substantive view as) more authentic choices. For example, if you propose to commit a crime with the assistance of your social robot or attempt to violently assault your social robot (see Darling, 2018 ), then it could refuse to help you by shutting down or it could cry out in pain to stop you on the grounds that you are acting in an abusive and therefore inauthentic manner. However, social robots that actively resist poor treatment can create their own ethical difficulties, especially regarding “realistic female [sex] robots” because some users may use the robot’s refusal of consent to experiment with “rape fantasy” (Sparrow, 2017 , p. 465). Therefore, careful consideration of context and design is required to ensure that robot refusals encourage authentic moral behaviours rather than fuel immoral fantasies.

On procedural views, a choice is more authentic if it follows from the right sort of procedures, such as informed critical reflection. According to such views, social robots could help us to make more authentic choices by helping us to do better at critically reflecting on our choices and values. For example, imagine a social robot with “empathetic technology” that can identify a person’s emotional state through analysing their facial features, speech, and the levels of carbon dioxide on their breath (Seïler & Craig, 2016 ; Wakefield, 2018 ). Using this technology, a social robot could detect that a person is overcome with extremely strong emotions when they issue a command that could have serious implications for themselves and others. The social robot could then refuse to undertake that command for a certain period of time to give the person space to calm down and activate their critical reflection skills. Alternatively, a social robot could draw on relevant research about biases that impact human thinking (Kahneman, 2011 ), and evidence that people are more open to critical reflection after positive self-affirmation (von Hippel & Trivers, 2011 ), to first bolster a person’s sense of self-worth before alerting them to potential biases it has identified in their reasoning that might be preventing them from choosing what they would authentically want to choose. A social robot could also act as an interlocuter and help a person to consider the pros and cons of an important choice, provide information that it has identified as relevant to their choice to help to ensure that their choice is properly informed, alert them to the presence of past oppression that could be unduly influencing their choice without them knowing it, and keep them updated with changing information.

Many of these imaginary interventions by a social robot constitute examples of “nudging” a human to be more autonomous (Thaler & Sunstein, 2008 ). Drawing on dual process theory (Evans, 2008 ), Thaler and Sunstein describe two types of nudges, those that impact on our “Automatic System”, such as placing the item we wish to nudge someone towards at eye level, and those that impact on our “Reflective System”, such as nudges that encourage us to think carefully about something (Borenstein & Arkin, 2016 ; Thaler & Sunstein, 2008 ). The examples that we looked at in the previous paragraph involve Reflective System prompts to engage in processes that promote autonomy, such as informed critical reflection and the avoidance of unconscious biases. But nudging can also seek to influence us via our Automatic System. Reflective System nudges are less ethically worrisome, since they merely seek to encourage and inform autonomous self-reflection, whereas Automatic System nudges bypass critical self-reflection through unconscious influences aimed at paternalistically achieving a certain outcome. While there might still be good all-things-considered reasons for the latter type of nudges, such as opting people in automatically to socially beneficial programs rather than explaining to them the good reasons they have to opt in, the ethical issues involved in this type of nudging are more complicated (for discussion, see Schmidt & Engelen, 2020 ) and raise significant ethical concerns about paternalism. As such, while robotic nudges via our Reflective System (as focused on in this section) could aid our autonomy, similar nudges via our Automatic System may limit it (as we shall see in the next section).
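
As a rough sketch of how a Reflective System nudge like the "cooling off" refusal described earlier might be structured: the arousal estimate, threshold, and delay below are hypothetical assumptions rather than any real empathetic-technology interface.

```python
# Illustrative sketch of a Reflective System nudge: delay high-stakes commands
# when the user seems highly agitated and prompt reflection instead of acting.
# estimate_arousal, the threshold, and the cooldown are hypothetical.
import time

class ReflectiveNudger:
    def __init__(self, estimate_arousal, cooldown_seconds=15 * 60, threshold=0.8):
        self.estimate_arousal = estimate_arousal   # hypothetical 0.0-1.0 estimate
        self.cooldown_seconds = cooldown_seconds
        self.threshold = threshold
        self.deferred_until = {}                   # command -> earliest retry time

    def handle(self, command, high_stakes):
        now = time.time()
        if now < self.deferred_until.get(command, 0):
            return "Still waiting - would you like to talk through the pros and cons?"
        if high_stakes and self.estimate_arousal() > self.threshold:
            self.deferred_until[command] = now + self.cooldown_seconds
            return ("This looks important and you seem very upset; "
                    "let's come back to it shortly.")
        return f"Carrying out: {command}"
```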

4.2 Social Robots as Autonomy Inhibitors

The previous section focuses on the positives for autonomy. But it is not hard to see the negatives too. We can use the inverse of the three categories outlined above to group these worries. We can summarise these as, through the impacts of social robots, humans can have: (1) fewer valuable ends; (2) worse autonomy competencies; and (3) less authentic choices. But there are also other potential problems, including: (4) making human autonomy more vulnerable; and (5) disrespecting human autonomy. Again, this list is meant to be illustrative, not exhaustive.

4.2.1 Fewer Valuable Ends

First, social robots could reduce our autonomy if it means that we set and achieve fewer valuable ends ourselves. As we saw above, when we offload unimportant means to our ends or offload unimportant ends to social robots, then our autonomy may be enhanced. By contrast, when we offload decisions to social robots about important ends or offload the undertaking of important means that are integral to the achievement of valuable ends, then our autonomy can be diminished. For example, if a social robot autonomously decides on your behalf (through an “opt out” or “no way out” design) whether to notify you of an incoming phone call or whether to accept a calendar invite based on its (and not your ) view of the perceived importance of the caller or inviter, then you lose some autonomy as you can no longer make the important choice of whether to answer a phone call or accept a meeting invite yourself. Less realistically but more troubling, a social robot could start to decide for you who you will date by using a dating app on your behalf after analysing your past dating experiences and preferences or decide on your behalf which school your child should attend after analysing school performance data and your child’s learning preferences. (Some of the examples in this section clearly apply to AI in general rather than social robots in particular). Even if you explicitly “opt in” to having a social robot make these decisions on your behalf, you still lose some autonomy because handing over such significant life decisions to a robot means that you have less control over important aspects of your life. This makes you less autonomous, even if you “opt in” to it and even if the decision is justified on other ethical grounds, such as the quality of the resulting robotic decision. This point is related to the issue of whether we should offload moral decisions to AI or artificial moral agents, since moral decisions are clearly important decisions (Robbins, 2019 ; Sparrow, 2016 ; van Wynsberghe & Robbins, 2018). While there may be good all-things-considered ethical reasons, such as better outcomes or the existence of time constraints, for offloading some important ethical decisions to an AI or social robot (Formosa & Ryan, 2020 ), there is also a clear cost to our autonomy in doing so that must be considered.

4.2.2 Worse Autonomy Competencies

Second, social robots could reduce our autonomy by resulting in us having lower levels of autonomy competencies. This could occur because they harm the development of autonomy competencies in children, or they harm the maintenance and cultivation of them in adults. Due to their physical presence, social robots have been shown to be effective in achieving positive educational outcomes for children (Belpaeme et al., 2018 ; Kanero et al., 2018 ). But do teaching interactions with social robots also help children to develop autonomy competencies? If it turns out that robots are less effective, as they are in other areas, at developing such competencies in children compared to skilled human teachers (Kanero et al., 2018 ), and if social robots take on more education and caring roles, then this could lead to children developing lower levels of autonomy competencies than they would through skilled human teaching (although there is a general lack of evidence in this regard; see Pashevich, 2021 ). Of course, this assumes that skilled human teaching is available, and where it is not, then robot teaching may be better than the alternatives. In terms of skill maintenance and cultivation in adults, Vallor ( 2015 ) raises the related worry of “moral deskilling”. In its general form, this worry is that when we offload tasks to technology, then we start to lose or degrade the relevant skills, including autonomy competencies, needed to complete the offloaded task. For example, if you become dependent on a social robot to make most decisions for you or to tell you what to do, then you may start to lose trust in your ability to get things done by yourself and your skills at making decisions could start to dissipate. Further, interpersonal skills are often essential to realising our ends, since achieving many ends requires complex social cooperation. But if we get used primarily to interacting with social robots, then we may start to lose our human-to-human interpersonal social skills. Similarly, if we get used to interacting with social robots that do not demand equal reciprocity in terms of social exchange, then our skills at engaging in reciprocal social exchanges with humans could start to atrophy (Bankins & Formosa, 2020 ). If we use our autonomy competencies less because social robots do more things for us, then our autonomy competencies will likely deteriorate.

4.2.3 Less Authentic Choices

Third, social robots could reduce our autonomy by causing us to have less authentic choices, both in the sense of fewer choices that are authentic and choices that are less authentic . There are several ways this could happen. One of the reasons that social robots have the potential to influence our behaviour is that we tend to regard them as social agents with states of mind and not mere tools (Breazeal et al., 2004 ). But this influence could also have negative impacts on our autonomy. For example, when we feel ourselves under surveillance and under the gaze of others, we can feel less able to act authentically and be who we really want to be (Molitorisz, 2020 ). This is compounded by the fact that we know that the AI and machine learning that will power social robots depends on large datasets, and we might worry that our social robot is really a surveillance machine sending our intimate personal data to its corporate creators (Hagendorff, 2020 ). This could make us act more self-consciously and less authentically in front of social robots, including by engaging in pre-emptive self-censorship, and given how deeply integrated into our lives social robots could become, this could deeply impair our autonomy. Social robots could also promote inauthenticity through the perpetuation of oppressive socialisation that reinforces unjust gender norms. For example, a UNESCO ( 2019 ) report shows that “female” AI assistants, such as Cortana, Siri, and Alexa, can perpetuate and reinforce norms that women should be servile and put up with abuse. A concrete example of this is that at one point Apple’s Siri responded to “You’re a slut” with “I’d blush if I could” (UNESCO, 2019 , p. 107). Submissiveness in “female” social robots, created by largely male development teams (UNESCO, 2019 ), could thus help to perpetuate oppressive norms that can directly harm the autonomy of women and other minorities. The likely reliance of social robots on pretrained neural language models that are “prone to generating racist, sexist, or otherwise toxic language” could further exacerbate this problem (Gehman et al., 2020 , p. 3356).

Another way that social robots could lead to less authentic choices is if they manipulate us. As happens in many online contexts, much of this manipulation could occur by targeting and exploiting the “decision-making vulnerabilities” of persons, which can result in “autonomy harm” (Susser et al., 2019, p. 1). For example, a 2017 report exposed internal Facebook documents showing that, through monitoring its users, Facebook could determine when teenagers were feeling insecure, stressed, or anxious, and it could in principle use this information (even if it in fact did not) to manipulate them into purchasing items through carefully targeted advertising (Susser et al., 2019). Whereas the robotic nudges toward autonomy that we discussed in the previous section operate via our conscious Reflective System and seek to counteract biases, the manipulations highlighted here work via our Automatic System, exploiting human biases and decision-making vulnerabilities in ways that we are not consciously aware of. This manipulation can also occur through the careful presentation, filtering, and ordering of information that social robots pass on to us, since what is and is not shown or told to us, in what way, and in what order it is presented, can all have hidden influences on our choices. This can involve “nudging” people through careful design of the “choice architecture” or context within which choices are made (for further discussion of this extensive literature see, for example, Thaler & Sunstein, 2008; Cohen, 2013; Quigley, 2013; Hansen & Jespersen, 2013). To the extent that these influences are exploited by social robots (or their creators) to get us to do what is in the commercial or political interests of their developers or advertisers, the autonomy of users could be harmed and disrespected. This amounts to treating users as mere means to outcomes that others want them to choose, often for commercial or ideological reasons, rather than helping users to choose what they authentically want to do. While such manipulations through technology are hardly unique to social robots, the physically embodied nature of social robots means that these manipulative impacts could be greater than with other forms of technology.

4.2.4 Making Autonomy More Vulnerable

Fourth, social robots, as likely commercial products, could make our autonomy vulnerable in new ways and make access to autonomy more precarious and unfair. Autonomy is not the same as independence, since dependency is a central feature of human life (Kittay, 1997), and most people autonomously choose to make themselves dependent on others, such as friends and family. Even so, if we become dependent on social robots for realising many of our ends or for social connection, then our autonomy becomes vulnerable in new ways. For example, our social robot might cease to work properly after a firmware update, which means that it becomes less able to help us to achieve our ends. Further, this could make our autonomy dependent on a company focused on profit (Hagendorff, 2020), rather than on friends and family who may genuinely care for us. In terms of access, social robots are likely, at least initially, to be very expensive, and this could create an underclass of people who have less access than the wealthy to the autonomy-enhancing features of social robots outlined above. While we already have such inequalities, since autonomy as substantive control over our lives requires access to resources that many people lack, social robots create an important new arena for this inequality to play out.

4.2.5 Disrespecting Autonomy

Fifth, social robots could be disrespectful towards our autonomy, and this is bad in its own right and could also lead to an erosion of our autonomy competencies if we internalise that disrespect (Formosa, 2013). Social robots that manipulate us use us as mere means. This constitutes disrespectful treatment, and the intentional design of such robots is an expression of disrespect by their creators. An example might be a social robot that knows that you are in a depressed state and uses that information to manipulatively encourage you to purchase an upgrade or other item. More generally, there might be something disrespectful about the very nature of social robots, given that they “push our Darwinian buttons” by deceptively appearing to be “alive enough” (Turkle, 2012, pp. 8, 18). Indeed, the effectiveness of social robots depends on their cultivating the illusion in humans that they have internal mental and emotional states that, in fact, they do not really have. Many worry that this deception is unethical (Lucidi & Nardi, 2018). To the extent that it is unethical, it also disrespects our autonomy, as it manipulates us into having false beliefs about the inner life of social robots. Social robots may also be used to amass large amounts of very personal data about us, given their potentially intimate presence in our lives (Lutz et al., 2019). This data gives corporations power over us, which could be used to manipulate, pressure, and coerce us through social robots (Susser et al., 2019). Further, whether such intimate data could be obtained in a way that respects our autonomy is unclear. This points to the problem of what Nissenbaum (2011) calls the “transparency paradox”. Our ability to autonomously consent to privacy policies is flawed, given that we either consent to something too simplistic to accurately represent data flows or we cannot understand the complex legalese of more detailed policies. Either way, informed autonomous consent is difficult to achieve, and given that social robots will likely harvest large amounts of very intimate data about us, the potential this has for expressing disrespect and limiting our ability to have autonomous control over our personal data is concerning (Hagendorff, 2020; Sharkey & Sharkey, 2012).

5 Discussion

When thinking about the ethical implications of the increasing use of sophisticated social robots, it is important that we consider both their potential positive and negative impacts on our autonomy. From the above analysis, a few key issues emerge. Before we consider these from multiple perspectives, two points are worth noting.

First, our focus here is on autonomy only. But there are other relevant ethical issues at play, such as beneficence and justice, and, as noted above, the ethical issues raised by autonomy need to be balanced against these competing ethical concerns. Second, the range of possible responses to the ethical issues raised here includes improved user education and public awareness, design considerations, ethical guidelines, industry standards, government agencies, and regulatory or legal frameworks (for discussion see Fosch-Villaronga et al., 2020; Petit, 2017). This discussion needs to consider the related existing and proposed regulatory frameworks and guidelines around robotics, AI, and privacy that exist across different jurisdictions. Thus far, most regulation in this context has taken the form of voluntary ethical guidelines, although this is changing through the impact of Europe’s GDPR (General Data Protection Regulation) in terms of privacy considerations and the emergence of standards such as the BS 8611:2016 Guide to the Ethical Design and Application of Robots and Robotic Systems and the IEEE Ethically Aligned Design 2017 (for an overview, see Fosch-Villaronga et al., 2020).

Further, the intensity of the regulatory response should be dependent on the degree and nature of the specific harms and externalities generated by social robots (Petit, 2017 ). There are dangers of both too little regulation, which can lead to user harms and a reluctance to embrace new technology, and of too much regulation, which can stifle innovation and prevent benefits and user choice. Given these complexities, rather than provide specific regulatory recommendations here, we shall instead focus on highlighting, from the perspectives of users, designers, and society more generally, the most significant ethical issues that emerge from the above analysis.

From the perspective of users, further education is an important goal. This should focus on how significant the choices being offloaded by users to their social robots are and how frequently that offloading occurs, since this is under user control and has the potential to have both positive and negative impacts on their autonomy. The offloading of trivial or unimportant decisions promises to free up users’ time and attention for more meaningful exercises and cultivation of their autonomy and related competencies. In contrast, offloading significant decisions to social robots can not only directly limit the control that users have over important aspects of their lives, but also lead to an atrophying of their autonomy competencies when such offloading occurs frequently. Further, users need to be educated about their automation bias, that is, their tendency to rely uncritically on technology such as social robots (Goddard et al., 2012), and about the dangers to their autonomy that nudging from social robots can pose. Finally, users also need to be educated about the privacy implications of using social robots and be alert to the dangers of emotional manipulation by their social robots. This is a particular problem for children, who need to be reminded that social robots do not really care about them or have feelings, even if they seem to (Turkle, 2012).

For designers, a particular focus should be on how users will perceive the attitudes that social robots will seem to express toward them, especially insofar as they impact important self-attitudes such as self-respect and self-esteem. This should also include a focus on the differing social impacts, among a variety of cultural and social groups, of differences in speech, tone, and facial expression by social robots. Further, given the importance of social acceptance or rejection for users, the ways that social robots express these types of social judgments must be considered carefully to minimise any potential autonomy harms, especially for vulnerable users. Users will perceive social robots as having emotions and states of mind, and designers should be careful to avoid, intentionally or unintentionally, using these responses to manipulate users in inappropriate ways. Designers should also seek to aid user autonomy through Reflective System nudges that encourage critical reflection and limit the use of Automatic System nudges that can potentially disrespect users’ autonomy.

At a societal level, beyond dealing with the issues already raised above and the broader existing regulatory frameworks around privacy, AI, and robotic safety (Fosch-Villaronga et al., 2020; Hagendorff, 2020), two further areas of focus are worth mentioning here. These are how social robots respond to mistreatment and abusive behaviour (see Darling, 2016) and the potential of social robots to perpetuate oppressive social norms that can inhibit human autonomy. These belong at the societal level because the ways that social robots, in the aggregate, respond to mistreatment and perpetuate existing norms will have broad social consequences. Dealing with these issues requires the input of a diverse group of stakeholders to ensure that a variety of perspectives are considered. Industry guidance or examples of ethical best practice would be helpful in this regard. Finally, given their massive data collection potential, and their impacts on the physical and informational privacy of their users, social robots must be designed with user privacy in mind, and this is probably best dealt with at a regulatory level to ensure compliance (see Lutz et al., 2019).

6 Conclusion

Social robots have the potential to help their users to be more independent and autonomous and to improve their autonomy competencies, but also the potential to manipulate, deskill, and illicitly surveil their users and to disrespect their autonomy. Whether the impacts of social robots on human autonomy are positive or negative overall will depend on the design, regulation, and use that we make of social robots in the future. What is clear is that the potential impacts of social robots on human autonomy are profound and multifaceted. While the issues examined here are not exhaustive, we have provided a systematic analysis of the most important and relevant ethical considerations by highlighting both the potential positive and negative implications. This provides a useful theoretical foundation for further work examining the implications of social robots, and AI more broadly, for human autonomy.

Anderson, J., & Christman, J. (Eds.). (2005). Autonomy and the challenges to liberalism . Cambridge University Press.


Asaro, P. (2006). What should we want from a robot ethic? International Review of Information Ethics, 6 , 9–16.

Bankins, S., & Formosa, P. (2020). When AI meets PC: Exploring the implications of workplace social robots and a human-robot psychological contract. European Journal of Work and Organizational Psychology, 29 (2), 215–229.

Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics . Oxford University Press.

Beauchamp, T. L., & DeGrazia, D. (2004). Principles and principlism. In G. Khushf (Ed.), Handbook of bioethics (pp. 55–74). Springer.

Begon, J. (2015). What are adaptive preferences? Journal of Applied Philosophy, 32 (3), 241–257.

Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. Science Robotics . https://doi.org/10.1126/scirobotics.aat5954


Benson, P. (1991). Autonomy and oppressive socialization. Social Theory and Practice, 17 (3), 385–408.

Benson, P. (1994). Free agency and self-worth. Journal of Philosophy, 91 (12), 650–658.

Borenstein, J., & Arkin, R. (2016). Robotic nudges: The ethics of engineering a more socially just human being. Science and Engineering Ethics, 22 (1), 31–46.

Bostrom, N. (2014). Superintelligence . Oxford University Press.

Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42 , 167–175.


Breazeal, C., Gray, J., Hoffman, G., & Berlin, M. (2004). Social robots: Beyond tools to partners. RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759) , pp. 551–556.

Calvo, R. A., Peters, D., & Vold, K. (forthcoming). Supporting human autonomy in AI systems. In C. Burr & L. Floridi (Eds.), Ethics of Digital Well-Being . Springer.

Christman, J. (2004). Relational autonomy, liberal individualism and the social constitution of selves. Philosophical Studies, 117 , 143–164.

Christman, J. (2009). The politics of persons: Individual autonomy and socio-historical selves . Cambridge University Press.

Cohen, S. (2013). Nudging and informed consent. The American Journal of Bioethics, 13 (6), 3–11.

Darling, K. (2016). Extending legal protection to social robots. In R. Calo, A. Froomkin, & I. Kerr (Eds.), Robot law. Edward Elgar.

Darling, K. (2018). ‘Who’s Johnny?’ Anthropomorphic framing in human-robot interaction, integration, and policy. In P. Lin, G. Bekey, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0 (p. 22). Oxford University Press.

Darwall, S. (2006). The value of autonomy and autonomy of the will. Ethics, 116 , 263–284.

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114 (4), 864–886.

Etzioni, A., & Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology, 18 (2), 149–156.

Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgement, and social cognition. Annual Review of Psychology, 59 , 255–278.

Ferreira, M. I. A., Sequeira, J. S., Tokhi, M. O., Kadar, E. E., & Virk, G. S. (Eds.). (2017). A World with Robots: International Conference on Robot Ethics: ICRE 2015 . Springer.

Fink, J. (2012). Anthropomorphism and human likeness in the design of robots and human–robot interaction. In S. S. Ge, O. Khatib, J.-J. Cabibihan, R. Simmons, & M.-A. Williams (Eds.), Social robotics (Vol. 7621, pp. 199–208). Springer.

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review .

Floridi, L., et al. (2018). AI4People—An ethical framework for a good AI society. Minds and Machines, 28 (4), 689–707.

Formosa, P. (2013). Kant’s conception of personal autonomy. Journal of Social Philosophy, 44 (3), 193–212.

Formosa, P. (2017). Kantian ethics . Cambridge University Press.

Formosa, P., & Ryan, M. (2020). Making moral machines. AI & Society . https://doi.org/10.1007/s00146-020-01089-6

Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2020). Gathering expert opinions for social robots’ ethical, legal, and societal concerns. International Journal of Social Robotics, 12 (2), 441–458.

Fossa, F. (2018). Artificial moral agents: Moral mentors or sensible tools? Ethics and Information Technology, 20 (2), 1–12.


Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68 (1), 5–20.

Friedman, M. (1986). Autonomy and the split-level self. Southern Journal of Philosophy, 24 (1), 19–35.

Gambino, A., Fox, J., & Ratan, R. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1 , 71–86.

Gehman, S., et al. (2020). RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics . Association for Computational Linguistics.

Goddard, K., Roudsari, A., & Wyatt, J. (2012). Automation bias. Journal of the American Medical Informatics Association, 19 (1), 121–127.

Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22 (4), 307–320.

Gurkaynak, G., Yilmaz, I., & Haksever, G. (2016). Stifling artificial intelligence. Computer Law & Security Review, 32 (5), 749–758.

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines . https://doi.org/10.1007/s11023-020-09517-8

Hansen, P., & Jespersen, A. (2013). Nudge and the manipulation of choice. European Journal of Risk Regulation, 4 (1), 3–28.

Jeong, S., et al. (2015). A Social Robot to Mitigate Stress, Anxiety, and Pain in Hospital Pediatric Care. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction , pp. 103–104.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399.

Kahneman, D. (2011). Thinking, fast and slow . Macmillan.

Kanero, J., Geçkin, V., Oranç, C., Mamus, E., Küntay, A. C., & Göksun, T. (2018). Social robots for early language learning: Current evidence and future directions. Child Development Perspectives, 12 (3), 146–151.

Kant, I. (1996). Groundwork of the metaphysics of morals. In M. J. Gregor (Ed.), Practical philosophy (pp. 37–108). Cambridge University Press.

Kittay, E. F. (1997). Human dependency and Rawlsian equality. In D. Meyers (Ed.), Feminists rethink the self. Westview Press.

Korsgaard, C. M. (1996). The sources of normativity . Cambridge University Press.

Li, J. (2013). The nature of the bots. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction—ICMI ’13 , pp. 337–340.

Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2012). Robot ethics . MIT Press.

Lucidi, P. B., & Nardi, D. (2018). Companion Robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , pp. 17–22.

Lutz, C., Schöttler, M., & Hoffmann, C. (2019). The privacy implications of social robots. Mobile Media & Communication, 7 (3), 412–434.

Lyell, D., Coiera, E., Chen, J., Shah, P., & Magrabi, F. (2021). How machine learning is embedded to support clinician decision making: An analysis of FDA-approved medical devices. BMJ Health & Care Informatics, 28 (1), e100301. https://doi.org/10.1136/bmjhci-2020-100301

Mackenzie, C. (2008). Relational autonomy, normative authority and perfectionism. Journal of Social Philosophy, 39 (4), 512–533.

Mackenzie, C., & Stoljar, N. (Eds.). (2000). Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self . Oxford University Press.

Mackenzie, R. (2018). Sexbots: sex slaves, vulnerable others or perfect partners? International Journal of Technoethics, 9 (1), 1–17.

Meyers, D. (1987). Personal autonomy and the paradox of feminine socialization. Journal of Philosophy, 84 (11), 619–628.

Molitorisz, S. (2020). Net privacy . NewSouth Publishing.

Moshkina, L., Park, S., Arkin, R. C., Lee, J. K., & Jung, H. (2011). TAME: Time-varying affective response for humanoid robots. International Journal of Social Robotics, 3 (3), 207–221.

Nash, K., Lea, J. M., Davies, T., & Yogeeswaran, K. (2018). The bionic blues: Robot rejection lowers self-esteem. Computers in Human Behavior, 78 , 59–63.

Nissenbaum, H. (2011). A contextual approach to privacy online. Daedalus, 140 (4), 32–48.

O’Neill, O. (2002). Autonomy and Trust in Bioethics . Cambridge University Press.

Pashevich, E. (2021). Can communication with social robots influence how children develop empathy? AI & SOCIETY . https://doi.org/10.1007/s00146-021-01214-z

Petit, N. (2017). Law and regulation of artificial intelligence and robots. SSRN Electronic Journal . https://doi.org/10.2139/ssrn.2931339

Pirhonen, J., Melkas, H., Laitinen, A., & Pekkarinen, S. (2020). Could robots strengthen the sense of autonomy of older people residing in assisted living facilities? Ethics and Information Technology, 22 (2), 151–162.

Pu, L., Moyle, W., Jones, C., & Todorovic, M. (2019). The effectiveness of social robots for older adults. The Gerontologist, 59 (1), e37–e51.

Quigley, M. (2013). Nudging for health. Medical Law Review, 21 (4), 588–621.

Rahwan, I. (2018). Society-in-the-loop. Ethics and Information Technology, 20 (1), 5–14.

Raz, J. (1986). The morality of freedom . Clarendon Press.

Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places . Cambridge University Press.

Robbins, S. (2019). AI and the path to envelopment. AI & SOCIETY . https://doi.org/10.1007/s00146-019-00891-1

Rogers, W. A., Draper, H., & Carter, S. M. (2021). Evaluation of artificial intelligence clinical applications: Detailed case analyses show value of healthcare ethics approach in identifying patient care issues. Bioethics, 35 (7), 623–633. https://doi.org/10.1111/bioe.12885

Ryan, R. M., Rigby, C. S., & Przybylski, A. (2006). The motivational pull of video games: A self-determination theory approach. Motivation and Emotion, 30 (4), 344–360.

Ryan, R. M., & Deci, E. L. (2017). Self-Determination Theory . Guilford Publications.

Schmidt, A. T., & Engelen, B. (2020). The ethics of nudging. Philosophy Compass . https://doi.org/10.1111/phc3.12658

Schmitt, M. N., & Thurnher, J. S. (2013). “Out of the loop”: Autonomous weapon systems and the law of armed conflict. Harvard National Security Journal, 4 , 231–281.

Schneewind, J. B. (1998). The invention of autonomy . Cambridge University Press.

Scoccia, D. (1990). Paternalism and respect for autonomy. Ethics, 100 (2), 318–334.

Seïler, N. R., & Craig, P. (2016). Empathetic technology. In S. Tettegah & S. Sharon (Eds.), Emotions and technology, emotions, technology, and design (pp. 55–81). Academic Press.

Sharkey, A., & Sharkey, N. (2012). Granny and the robots. Ethics and Information Technology, 14 (1), 27–40.

Shea, M. (2020). Forty years of the four principles. The Journal of Medicine and Philosophy, 45 (4–5), 387–395.

Sparrow, R. (2012). Can machines be people? In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics (pp. 301–316). MIT Press.

Sparrow, R. (2016). Robots and respect: Assessing the case against autonomous weapon systems. Ethics and International Affairs, 30 (1), 93–116.

Sparrow, R. (2017). Robots, rape, and representation. International Journal of Social Robotics, 9 (4), 465–477.

Stoljar, N. (2000). Autonomy and the feminist intuition. In C. Mackenzie & N. Stoljar (Eds.), Relational autonomy. Oxford University Press.

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review . https://doi.org/10.14763/2019.2.1410

Thaler, R. H., & Sunstein, C. R. (2008). Nudge . Yale University Press.

Turkle, S. (2012). Alone together . Basic Books.

Turkle, S., Taggart, W., Kidd, C., & Daste, O. (2006). Relational artifacts with children and elders. Connection Science, 18 (4), 347–361.

UNESCO. (2019). I’d blush if I could: Closing gender divides in digital skills through education . UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1

Vallor, S. (2015). Moral deskilling and upskilling in a new machine age. Philosophy & Technology, 28 (1), 107–124.

van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25 , 719–735.

von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34 (1), 1–16.

Wakefield, J. (2018). Fear detector exposes people’s emotions. BBC . https://www.bbc.com/news/technology-43653649

Walker, M. J., & Mackenzie, C. (2020). Neurotechnologies, Relational autonomy, and authenticity. International Journal of Feminist Approaches to Bioethics, 13 (1), 98–119.

Walsh, T., Levy, N., Bell, G., Elliott, A., Maclaurin, J., Mareels, I., & Wood, F. (2019). The effective and ethical development of artificial intelligence (p. 250). ACOLA.

Wang, P. (2019). On defining artificial intelligence. Journal of Artificial General Intelligence, 10 (2), 1–37.

Watson, G. (1975). Free agency. Journal of Philosophy, 72 (8), 205–220.

Woiceshyn, L., Wang, Y., Nejat, G., & Benhabib, B. (2017). Personalized clothing recommendation by a social robot. IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), 2017 , 179–185.

Wolf, S. (1990). Freedom within reason . Oxford University Press.


Open Access funding provided by the Macquarie University Research Centre for Agency, Values and Ethics (CAVE).

Author information

Paul Formosa, Department of Philosophy & Centre for Agency, Values and Ethics, Macquarie University, North Ryde, Australia.

Correspondence to Paul Formosa.


Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ).


Formosa, P. (2021). Robot autonomy vs. human autonomy: Social robots, artificial intelligence (AI), and the nature of autonomy. Minds and Machines, 31 , 595–616. https://doi.org/10.1007/s11023-021-09579-2


Received: 03 June 2020; Accepted: 17 October 2021; Published: 25 October 2021; Issue date: December 2021.


  • Social robots
  • Artificial intelligence (AI)
  • Machine ethics
  • Artificial moral agents


Open Access

Peer-reviewed

Research Article

Human- or object-like? Cognitive anthropomorphism of humanoid robots

Alessandra Sacino and Luca Andrighetto contributed equally to this work; FC, GDV, FB, FR and AS also contributed equally.

Affiliations: Department of Educational Science, University of Genova, Genova, Italy; Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy; Robotics Brain and Cognitive Sciences Unit, Istituto Italiano di Tecnologia, Genova, Italy.

* E-mail: [email protected]

  • Alessandra Sacino, 
  • Francesca Cocchella, 
  • Giulia De Vita, 
  • Fabrizio Bracco, 
  • Francesco Rea, 
  • Alessandra Sciutti, 
  • Luca Andrighetto


  • Published: July 26, 2022
  • https://doi.org/10.1371/journal.pone.0270787


Across three experiments ( N = 302), we explored whether people cognitively elaborate humanoid robots as human- or object-like. In doing so, we relied on the inversion paradigm, an experimental procedure extensively used in cognitive research to investigate the elaboration of social (vs. non-social) stimuli. Overall, mixed-model analyses revealed that full bodies of humanoid robots were subjected to the inversion effect (body-inversion effect) and, thus, followed a configural processing similar to that activated for human beings. This pattern of findings emerged regardless of the similarity of the considered humanoid robots to human beings. That is, it occurred when considering bodies of humanoid robots with medium (Experiment 1), and high and low (Experiment 2), levels of human likeness. Instead, Experiment 3 revealed that only faces of humanoid robots with high (vs. low) levels of human likeness were subjected to the inversion effect and, thus, cognitively anthropomorphized. Theoretical and practical implications of these findings for robotic and psychological research are discussed.

Citation: Sacino A, Cocchella F, De Vita G, Bracco F, Rea F, Sciutti A, et al. (2022) Human- or object-like? Cognitive anthropomorphism of humanoid robots. PLoS ONE 17(7): e0270787. https://doi.org/10.1371/journal.pone.0270787

Editor: Josh Bongard, University of Vermont, UNITED STATES

Received: September 20, 2021; Accepted: June 20, 2022; Published: July 26, 2022

Copyright: © 2022 Sacino et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The data underlying the results presented in the experiments are available on OSF at https://osf.io/fyp4x/ .

Funding: This work has been supported by Curiosity Driven (2017)- D36C18001720005 grant to LA and funded by the University of Genova. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Robots are becoming more and more common in everyday life and accomplishing an ever-increasing variety of human roles. Further, their market is expected to expand soon, with more than 65 million robots sold a year by the end of 2025 [ 1 ]. As their importance for human life grows, the interest of robotics and psychology scholars in fully understanding how people perceive them constantly increases. Addressing this issue is indeed highly relevant, as one of the primary tasks of this technology is establishing meaningful relations with human beings.

The overall goal of the present research was to expand the knowledge about the human perception of robots. In doing so, we adopted an experimental psychological perspective on robotics (see [ 2 ]) and sought to uncover the cognitive roots underlying the anthropomorphism of these nonhuman agents.

Anthropomorphizing robots

Research on human-robot interaction (HRI) has provided convergent evidence that the appearance of robots, together with their behaviors [ 3 , 4 ], deeply shapes people’s perceptions and expectations. Based on the design of robots, people form impressions of them and infer particular qualities, such as likeability [ 5 , 6 ], intelligence [ 7 ] or trustworthiness [ 5 – 9 ]. Although this design can take different forms (e.g., machine- or animal-like), the humanoid shape is commonly considered the most effective means of overcoming psychological barriers in HRI [ 10 ]. Accordingly, humanoids are the robots most used within the social environment and, thus, the focus of the present research.

As with other nonhuman agents, the human likeness of robots is a key situational variable triggering people’s tendency to anthropomorphize them [ 11 ]. That is, the perceived similarity of a humanoid robot to human beings increases the accessibility of homocentric knowledge that is then projected onto the robot. Thus, robots resembling humans are more likely to be attributed distinctively human characteristics, such as the ability to think, being sociable [ 12 ], or feeling conscious emotions [ 13 ]. Further, such anthropomorphic inferences increase people’s sense of familiarity with this nonhuman target and their sense of control over it, with subsequent benefits for the interaction [ 14 ]. A great deal of research has corroborated this latter assumption, for instance by revealing that people tend to trust (e.g., [ 15 ]; see also [ 16 ]) or empathize [ 17 ] more with anthropomorphized robots, as well as to expect that they can behave morally [ 18 ]. At the same time, the relationship between the perceived human likeness of robots and their acceptance in the social environment appears to be complex and non-linear. Drawing on the Uncanny Valley hypothesis ([ 19 ]; for a critical review see e.g., [ 20 ]), some researchers [ 21 ] have demonstrated, for example, that very high levels of anthropomorphic appearance in humanoid robots trigger a sense of threat towards them, as they are seen as undermining the uniqueness of human identity. In the same vein, robots that appear too similar to humans are perceived as less trustworthy and empathic [ 9 ]. A humanoid appearance also implies the expectation that the robot will move and behave according to human-like motion regularities. When this implicit belief is not fulfilled (e.g., by a humanoid robot moving with nonhuman-like kinematics), basic prosocial mechanisms such as automatic synchronization or motor resonance are hindered, reducing the possibility of establishing a smooth interaction [ 22 ]. Likewise, perceiving this technology as too human-like heightens people’s illusory expectations about the functions that it can actually fulfill, and a violation of such expectations lowers the quality of HRI [ 23 ].

Despite the still debated effects of the human likeness of robots, anthropomorphism remains the most influential psychological process regulating how humans approach and subsequently interact with this technology. Thus, a systematic comprehension of the nature of this phenomenon is essential to better identify its antecedents and consequences for HRI, be they positive or negative. So far, this process has mostly been conceived as a higher-order psychological process, consisting of inductive reasoning through which people attribute traits or qualities of human beings to this nonhuman agent. That is, most research in this field has investigated this process in terms of “content”, by assessing the extent to which respondents are inclined to attribute uniquely human attributes (e.g., rationality or the capacity to feel human emotions) to this technology.

Unlike these previous studies, the main purpose of this research is to examine this process through a “process-focused lens” [ 24 ], that is, to investigate whether it could also occur at a more basic level of cognitive processing. More specifically, we were interested in understanding whether people cognitively process humanoid robots as human- or object-like, and whether the levels of human likeness of these robots may affect such cognitive processing. Beyond contributing to theoretical knowledge of this process, comprehending the cognitive roots of anthropomorphic perceptions could have important practical implications. How people cognitively perceive other agents (whether human or not) deeply shapes their first impressions, often at an unaware level, and also affects the course of HRI [ 25 ], above and beyond higher-order cognitive processes.

To achieve this aim, we integrated the existing research on the anthropomorphism of robots with cognitive paradigms commonly employed to study how people elaborate social (vs. non-social) stimuli.

Configural processing of social stimuli and the inversion paradigm

Over recent decades, cognitive psychology and neuroscience have intensively studied whether our brain processes social (e.g., a human face or body) and non-social stimuli (i.e., objects) similarly or differently. Accumulating evidence consistently reveals that people recognize social stimuli through configural processing, which requires considering both the constituent parts of the stimulus and the spatial relations among them. Such a process is activated both when people elaborate on human bodies (see [ 26 ] for a review) and faces (see e.g., [ 27 ] for a review). Instead, people recognize objects (e.g., a house) through analytic processing, which relies only on the appraisal of specific parts (e.g., the door), without requiring information about the spatial relations among them. Although the nature of this dual process is largely debated (see e.g., the expertise hypothesis, [ 28 ]) and it is still not clear whether human faces and bodies are unconditionally processed in a configural way, there is general agreement that such social stimuli are commonly elaborated in this way. In contrast, objects are commonly processed analytically.

This processing difference has mainly been studied through the inversion paradigm, in which participants are presented with a series of trials first showing a picture of a social stimulus or an object, either upright or upside down. Afterward, subjects are asked to recognize the picture they just saw within a pair that includes a distractor (its mirror image). The main assumption is that when people are presented with a stimulus upside down (vs. upright), their ability to process it by relying on the spatial relations of its constituent features should be impaired. Thus, this inversion should undermine the recognition of social stimuli, as they are processed in a configural way, whereas it should not affect (or should affect less) the recognition of objects, as they are processed analytically. Several investigations, including some employing EEG methods [ 29 ], have confirmed this premise, first considering human faces (face-inversion effect, [ 30 , 31 ]) and then bodies (body-inversion effect; [ 32 ]) as social stimuli. More recently, social psychology researchers have adapted the body-inversion paradigm to investigate the cognitive roots of sexual objectification, a specific form of dehumanization that implies the perception (and treatment) of women as mere objects useful for satisfying men’s sexual desires [ 33 , 34 ]. In particular, Bernard and colleagues [ 35 ] demonstrated that the inversion effect (IE) does not emerge when people are exposed to images of sexualized female (but not male) bodies, which were recognized equally well when presented upright or inverted. Hence, these social stimuli do not activate configural processing and are cognitively objectified. This striking initial evidence was then debated and criticized by Schmidt and Kistemaker [ 36 ], who demonstrated that the body asymmetry of the (female) stimuli used by Bernard and colleagues [ 35 ] explained the observed pattern of findings (for a detailed discussion of this issue see [ 37 , 38 ]). However, subsequent studies (e.g., [ 39 ]) employing a different set of stimuli, controlled for asymmetry, confirmed the effect found by Bernard and colleagues [ 35 ], supporting the idea that the IE is a valid indicator for studying the cognitive objectification of sexualized women [ 40 ].

Drawing on these studies, in the present research we adapted inversion paradigms as basic tools to systematically investigate the inverse process to objectification: people’s perception of nonhuman agents (i.e., robots) as human ones. Interestingly, Zlotowski and Bartneck [ 41 ] found preliminary evidence for this process. Although they did not systematically check for stimulus asymmetry, they showed that robot images, like human ones, were subjected to the IE and thus processed in a configural way. The main goal of the present research is to replicate and expand this initial evidence in several ways. First, we aimed to verify whether the IE would emerge for robot stimuli when controlling for the asymmetry of each employed stimulus. Second, we verified whether the human-like appearance of humanoid robots would modulate the hypothesized cognitive anthropomorphism, which should emerge especially for humanoid robots with high, but not low, levels of human-like appearance. Third, we explored whether similar effects would emerge not only when considering the whole silhouettes of robots (body-IE), but also their faces (face-IE). In fact, we reasoned that an exhaustive comprehension of the cognitive anthropomorphism of humanoid robots should also encompass how human beings process their faces, besides their bodies. Faces are indeed the focal point of social cognition [ 42 ] and a prominent cue of humanity. Accordingly, recent research [ 43 ], for example, revealed that (human) faces follow a peculiar configural processing, which in turn activates human-related concepts.

Research overview

We designed three experiments to address the aims outlined above. In all the studies, we relied on inversion paradigms adapted from previous studies, in which participants were exposed to stimuli portraying human beings, humanoid robots or objects. Following the original protocols, on each trial an image was first presented in an upright or inverted position and then followed by two images: the original picture and its mirrored version (i.e., the distractor). Participants’ task was to recognize which of the two pictures was the initial one.

In Experiments 1 and 2, participants were shown entire bodies of human beings or humanoid robots, to investigate whether the body-IE would emerge for both human and robot stimuli. In Experiment 3, we explored the face-IE for the target stimuli by presenting participants with pictures portraying the faces of humans or humanoid robots. Further, in Experiment 1 we kept the human likeness of the robot stimuli constant at a medium level. In Experiments 2 and 3, we instead manipulated it by selecting robots with high vs. low scores of overall human likeness (Experiment 2) or face human likeness (Experiment 3; for more details about the selection of these stimuli see below). To increase the consistency of the investigated effects, across the studies we also varied the object-control stimuli, including human-like objects (i.e., mannequins; Experiment 1), buildings (Experiment 2) or general domestic tools (Experiment 3).

Finally, in all the studies we verified whether the cognitive anthropomorphism detected through the IE would be associated with higher-order anthropomorphism, that is, with respondents’ tendencies to attribute uniquely human qualities to robots.

Experimental material

The prototypes of robots were initially selected from the ABOT database ( http://abotdatabase.info/ ; [ 44 ]), a large pool of real-world humanoid robots that allows researchers to select robots according to their human-like appearance on distinct dimensions, each scored from 0 to 100. In selecting our robot stimuli, we set the filters for the considered dimensions depending on our purposes and on the availability of humanoid robot prototypes within the given range. That is, in Experiment 1, we selected 20 prototypes of robots with a medium overall human likeness score (42–66). In Experiment 2, we filtered 10 robots with a low overall human likeness score (0–40) and 10 robots with a high overall human likeness score (60–100). In Experiment 3, we filtered 12 robots with a low overall human likeness score (0–45) and a low human-like face score (0–42), plus 12 robots with a high overall human likeness score (60–100) and a high human-like face score (60–100). Further, in Experiments 2 and 3 the body-manipulators filter was also used, selecting robots with body-manipulator scores above 50. This allowed us to exclude robots composed of a single body part (e.g., a cube with only one eye, or a single arm without a head or body) and, thus, to obtain a more homogeneous and comparable set of robots across the experiments and conditions.
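As an illustration only, this kind of score-based filtering can be expressed as a short script. The sketch below assumes a hypothetical tabular export of the ABOT database with columns named overall_human_likeness and body_manipulators; these names and the file are assumptions, not part of the authors' materials.

    import pandas as pd

    # Hypothetical export of the ABOT database (one row per robot prototype).
    abot = pd.read_csv("abot_export.csv")
    hl = abot["overall_human_likeness"]
    bm = abot["body_manipulators"]

    exp1 = abot[hl.between(42, 66)]                    # medium overall human likeness (Experiment 1)
    exp2_low = abot[hl.between(0, 40) & (bm > 50)]     # low human likeness, multi-part bodies
    exp2_high = abot[hl.between(60, 100) & (bm > 50)]  # high human likeness, multi-part bodies
    print(len(exp1), len(exp2_low), len(exp2_high))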

For all the experiments, images of the selected robots were then retrieved online and standardized as follows. Using the open-source software Krita, all the images were converted to grayscale and pasted onto a white background. In Experiments 1 and 2, images of full-body robots in a standing position with the head directed towards the camera were edited to depict them from head to knee and fitted into a 397×576 pixel image. In Experiment 3, images of full-front faces of humanoid robots with a neutral expression were trimmed to remove external features and depict them from the hairline to the neck, and then fitted into a 300×400 pixel image. Examples of the standardized robot stimuli used in each experiment are displayed in Fig 1.

Fig 1. Examples of the standardized robot stimuli used in each experiment ( https://doi.org/10.1371/journal.pone.0270787.g001 ).
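The standardization itself was done manually in Krita; purely as an illustration of the same steps (grayscale conversion, white background, fixed canvas size), a comparable batch operation could be sketched with the Pillow imaging library. The file names and the helper function below are hypothetical.

    from PIL import Image

    def standardize_stimulus(src_path, dst_path, size=(397, 576)):
        """Convert to grayscale and centre on a white canvas of the target size."""
        img = Image.open(src_path).convert("L")       # grayscale
        img.thumbnail(size)                           # scale down to fit the target box
        canvas = Image.new("L", size, color=255)      # white background
        offset = ((size[0] - img.width) // 2, (size[1] - img.height) // 2)
        canvas.paste(img, offset)
        canvas.save(dst_path)

    # e.g. standardize_stimulus("robot_raw.png", "robot_std.png")             # bodies (Experiments 1-2)
    #      standardize_stimulus("face_raw.png", "face_std.png", (300, 400))   # faces (Experiment 3)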

Concerning the human stimuli (see Fig 2 for some examples), for Experiment 1 we selected 20 images from the work by Cogoni and colleagues [ 39 ] (personalized conditions), portraying the whole silhouettes of 20 individuals (10 men and 10 women) wearing casual clothes. To increase the generalizability of the hypothesized effects, in Experiment 2 we created an ad hoc set of human stimuli portraying the entire bodies of 10 individuals (5 men and 5 women), each in two different poses. Similarly, in Experiment 3 we used an ad hoc developed set of human stimuli consisting of 12 pictures of full-front human faces (6 men and 6 women) with a neutral expression. Human stimuli were standardized through the same procedure used for the robot ones.

Fig 2. Examples of the human stimuli ( https://doi.org/10.1371/journal.pone.0270787.g002 ).

As the object-control condition (see Fig 3), in Experiment 1 we used images of 20 mannequins (10 male and 10 female), standardized in the same way as the robot and human stimuli. In Experiment 2 (20 images) and 3 (12 images), we instead considered images of buildings as the object category, retrieved from the research by Cogoni and colleagues [ 39 ]. Finally, in Experiment 3, a new set of 12 object stimuli was created ad hoc, including a wide variety of domestic tools (e.g., a cup or a bottle).

Fig 3. Examples of the object-control stimuli ( https://doi.org/10.1371/journal.pone.0270787.g003 ).

Relevantly, for the experiments testing the body-IE (Experiments 1 and 2), an asymmetry index was calculated for each robot, human and mannequin stimulus, following the procedure used in previous works [ 36 – 39 ]. For both experiments, data analyses revealed that the degree of asymmetry of the stimuli did not differ across the considered categories (see S1 File for more details about the procedure and data analyses).

Open science practices and statistical methods

The sample sizes for all the experiments were planned a priori following the recommendation of Brysbaert [ 45 ], who suggested that around 100 participants are required for adequate power when focusing on within-subjects designs with repeated-measures variables and interactions between them. For each experiment, we reported all the stimuli, variables, and manipulations. All data and materials are posted and publicly available on OSF at https://osf.io/fyp4x/ .

Main analyses were conducted using the GAMLj package [ 46 ] in jamovi version 1.8.4 (The jamovi project [ 47 ]), using a generalized mixed model with a logit link function (logit mixed model; [ 48 ]). In all the experiments, we considered participants’ binary accuracy responses as the main outcome variable, coded as correct (1) or incorrect (0). Also, as in each experiment all participants were presented with the same set of stimuli, our models included both a by-subject and a by-item random intercept to account for individual variability and the non-independence of observations. Stimulus orientation (upright = 1 vs. inverted = 2) and category (human vs. robot vs. control) were included as fixed effects. We report significant odds ratios (OR) and the related 95% CIs when interpreting participants’ accuracy. As our logit mixed models predict the odds of giving a correct response (accuracy = 1), a significant OR below 1 indicates that a change in the independent variable (e.g., presenting an image in the inverted rather than the upright orientation) reduces the odds of a correct response, while a significant OR greater than 1 indicates an increase in the odds of a correct response.
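The analyses themselves were run in jamovi's GAMLj module; as an illustration of the model structure only (binary accuracy, crossed by-subject and by-item random intercepts, orientation and category as fixed effects), a roughly analogous logit mixed model can be sketched in Python with statsmodels. The data file and column names are assumptions about a long-format, trial-level layout, not the authors' materials.

    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    trials = pd.read_csv("trials_long_format.csv")   # one row per trial (hypothetical file)

    model = BinomialBayesMixedGLM.from_formula(
        "accuracy ~ C(orientation) * C(category)",   # fixed effects and their interaction
        {"subject": "0 + C(subject)",                # by-subject random intercept
         "item": "0 + C(item)"},                     # by-item random intercept
        trials,
    )
    result = model.fit_vb()                          # approximate (variational Bayes) fit
    print(result.summary())
    # Odds ratios are exp() of the fixed-effect coefficients: an OR below 1 for the
    # inverted orientation means lower odds of a correct response, as in the text.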

Finally, in each experiment before running the main analyses, we performed an outlier analysis on the latency responses, based on the nature of our studies and the statistical mixed-model approach adopted [ 49 , 50 ]. That is, we did not consider participants’ responses on trials with latencies deviating more than ± 3 SD from the mean or with latencies below 50 ms (for a similar procedure, see [ 32 ]).
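A minimal sketch of this exclusion rule follows, assuming a trial-level data frame with a latency column named rt_ms (an assumed name). The text does not specify whether the mean and SD are computed overall or per participant; this version uses the overall distribution.

    import pandas as pd

    def drop_latency_outliers(trials: pd.DataFrame, rt_col: str = "rt_ms") -> pd.DataFrame:
        """Drop trials faster than 50 ms or deviating more than 3 SD from the mean latency."""
        m, sd = trials[rt_col].mean(), trials[rt_col].std()
        keep = (trials[rt_col] >= 50) & ((trials[rt_col] - m).abs() <= 3 * sd)
        return trials[keep]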

Experiment 1

The first experiment was mainly designed to obtain preliminary evidence about the cognitive anthropomorphism of humanoid robots, relying on the body-IE. That is, we verified whether images portraying the full bodies of humanoid robots with medium levels of overall human likeness would be cognitively elaborated similarly to those of human beings and, thus, better recognized when presented upright than inverted.

Procedures performed in all the experiments were approved by the Departmental Ethics Committee (CER-DISFOR) and were in accordance with the APA ethical guidelines, the 1964 Helsinki Declaration and its later amendments. Written informed consent was obtained before participants started the experiments, and they were fully debriefed after each experimental session.

Participants and experimental design.

Ninety-nine undergraduates at a north-western Italian university (39 male; M age = 22.2; SD = 2.26) were recruited on a voluntary basis by research assistants via e-mail or private message on social networks. A snowball sampling strategy was used, with the initial participants recruited through the experimenters’ friendship networks. A 2 (stimulus orientation: upright vs. inverted) × 3 (stimulus category: humans vs. robots vs. mannequins) within-subject design was employed.

Participants came into the laboratory individually for a study “investigating the social perception towards human and nonhuman stimuli”. The recognition task was administered using PsychoPy v3.03. Each participant was presented with 60 experimental stimuli (20 for each category) in a randomized order. Half of them were presented in an upright orientation and the other half rotated 180° on the x-axis (inverted condition). Following previous inversion-effect protocols, each trial began with the original image presented for 250 ms at the center of the screen, in an upright or inverted orientation depending on the experimental condition. Following a transient blank screen (1000 ms), participants were presented with two images, to the right and left of the center of the monitor, respectively. One image was the original one, the other its mirrored version. Participants’ task was to detect which of the two images was the same as the original one, by pressing the “A” key on the keyboard if the target image appeared on the left, or the “L” key if it appeared on the right. Once participants had provided their responses, the next trial followed (see Fig 4 for a trial example). Before the experimental trials, participants familiarized themselves with the task through 9 practice trials.

Fig 4. Example of a trial sequence ( https://doi.org/10.1371/journal.pone.0270787.g004 ).
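To make the trial structure concrete, the following is a minimal PsychoPy sketch of a single recognition trial with the timings and key mapping described above. It is not the authors' experiment script: window settings, image positions and file names are placeholders, and inversion is implemented here as a 180-degree rotation of the stimulus.

    import random
    from psychopy import visual, core, event

    win = visual.Window(size=(1280, 800), color="white", units="pix")

    def run_trial(original_img, mirrored_img, inverted=False):
        # 1. Original image, upright or inverted, for 250 ms.
        stim = visual.ImageStim(win, image=original_img, ori=180 if inverted else 0)
        stim.draw(); win.flip(); core.wait(0.25)
        # 2. Blank screen for 1000 ms.
        win.flip(); core.wait(1.0)
        # 3. Two-alternative recognition: original vs. mirrored version, sides randomized.
        target_left = random.random() < 0.5
        left, right = (original_img, mirrored_img) if target_left else (mirrored_img, original_img)
        visual.ImageStim(win, image=left, pos=(-250, 0)).draw()
        visual.ImageStim(win, image=right, pos=(250, 0)).draw()
        win.flip()
        key = event.waitKeys(keyList=["a", "l"])[0]    # "a" = left image, "l" = right image
        return int((key == "a") == target_left)        # 1 = correct, 0 = incorrect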

After the recognition task, the higher-order anthropomorphism of robots was assessed by adapting the 7-item (α = .82; M = 1.55; SD = 0.57) self-report scale by Waytz and colleagues [ 51 ]. That is, participants were asked to rate the extent to which (1 = not at all; 5 = very much) they believed that the considered prototypes of robots were able to have a series of human mental abilities, such as “a mind of its own” or “consciousness”.
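The internal consistency reported for this scale is Cronbach's alpha, which follows a standard formula; for reference only, it can be computed from a participants-by-items rating matrix as in the generic sketch below (not tied to the authors' data files).

    import numpy as np

    def cronbach_alpha(items):
        """items: a participants x items array of ratings."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                              # number of items (7 here)
        item_var = items.var(axis=0, ddof=1).sum()      # sum of the item variances
        total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
        return (k / (k - 1)) * (1 - item_var / total_var)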

The outlier analysis on the latency responses identified 55 trials (out of a total of 5940) that deviated more than ± 3 SD from the mean or had latencies below 50 ms; these were removed from the main analyses.

The logit mixed-model conducted on participants’ accuracy responses (1 = correct; 0 = incorrect) revealed a main effect of the stimulus orientation (1 = upright; 2 = inverted), χ 2 (1) = 74.72, p < .001, OR = 0.57, 95% CI [0.50, 0.65], suggesting that presenting the stimuli in an inverted orientation reduces the odds of giving a correct response. Put differently, overall, the stimuli were better recognized when presented upright (estimated accuracy, EA = .83 ± .03) than inverted (EA = .74 ± .03). Further, a simple slope analysis (see Fig 5 ) revealed that human stimuli were recognized better when presented in an upright (EA = .82 ± .04) than inverted orientation (EA = .73 ± .05), χ 2 (1) = 23.70, p < .001, OR = 0.58, 95% CI [0.47, 0.72]. Most interestingly, a similar pattern also emerged for robot images, that were better recognized when presented in an upright orientation (EA = .83 ± .04) than an inverted one (EA = .75 ± .05), χ 2 (1) = 18.30, p < .001, OR = 0.62, 95% CI [0.49, 0.77]. A similar pattern was also observed for the mannequins, with a better performance when stimuli were presented upright than inverted (EA for upright vs. inverted = .85 ± .03 vs. .74 ± .05), χ 2 (1) = 34.00, p < .001, OR = 0.51, 95% CI [0.41, 0.64]).

Fig 5. Experiment 1. Error bars represent standard errors of the mean values ( https://doi.org/10.1371/journal.pone.0270787.g005 ).

Instead, neither the main effect of stimulus category ( χ 2 (2) = 0.81, p = .666), nor the interaction Stimulus orientation × Stimulus category emerged as significant, χ 2 (2) = 1.43, p = .490.

Finally, we tested the relationship between the magnitude of the IE for robots and the composite score of the self-report scale assessing the respondents’ higher-order anthropomorphism. The IE index was obtained by subtracting for each respondent the accuracy mean of trials with robots in the inverted orientation from that of trials with robots in the upright orientation, so that the higher the value, the higher the magnitude of the IE. The correlational analyses revealed no significant link between the IE index and the respondents’ higher-order anthropomorphism, r = 0.04, p = 0.685.
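As an illustration of this computation only (per-participant upright minus inverted accuracy on robot trials, then a Pearson correlation with the composite anthropomorphism score), a pandas/SciPy sketch might look as follows; the file names and column names are assumptions.

    import pandas as pd
    from scipy.stats import pearsonr

    robot_trials = pd.read_csv("exp1_robot_trials.csv")          # hypothetical trial-level file
    anthro = pd.read_csv("exp1_anthro.csv", index_col="subject")["anthro_score"]

    acc = (robot_trials
           .groupby(["subject", "orientation"])["accuracy"]
           .mean()
           .unstack("orientation"))
    ie_index = acc["upright"] - acc["inverted"]                  # higher = stronger inversion effect
    r, p = pearsonr(ie_index, anthro.loc[ie_index.index])
    print(f"r = {r:.2f}, p = {p:.3f}")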

Findings from Experiment 1 provided initial evidence about the cognitive anthropomorphism of robots. By replicating the preliminary work by Zlotowski and colleagues [ 9 ] with a more controlled set of stimuli, we found that body images of humanoid robots with medium levels of human-like appearance were better recognized when presented in an upright than an inverted orientation. Thus, full body images of robots activated a configural processing, similarly to social stimuli portraying human beings. However, similar to previous work (see [ 39 ]), such body-IE also emerged for other objects with a human-body shape, i.e. mannequins. Thus, the question arises whether the human-like appearance of a given non-social stimulus triggers a configural processing per se, or whether the activation of the configural processing depends on the specific non-social stimulus considered. To address this issue, in Experiment 2 we manipulated the levels (high vs. low) of human-like appearance of full body images of robots, to verify whether the IE would be moderated by their degree of human likeness. Further, in Experiment 2 we employed a different set of stimuli than mannequins as the object-control condition. In particular, we used a pre-tested set of images portraying buildings, as these are a kind of object extensively used in previous research when exploring the IE of social vs. non-social stimuli.

Finally, unlike the previous study by Zlotowski and colleagues [ 9 ], in Experiment 1 we did not find evidence of an association between the cognitive anthropomorphism of robots (i.e., the magnitude of the IE for robot stimuli) and participants’ higher-order anthropomorphism, measured as attributions of uniquely human features. Experiment 2 was thus also designed to investigate this relation further.

Experiment 2

Ninety-four undergraduates at a north-western Italian university (40 male; Mage = 21.8; SD = 2.82) were recruited following a procedure similar to that of Experiment 1. In this experiment, a 2 (stimulus orientation: upright vs. inverted) × 4 (stimulus category: humans vs. robots with high human likeness vs. robots with low human likeness vs. buildings) within-subject design was employed.

As data collection for this and the subsequent experiments took place during the COVID-19 pandemic, the recognition task was administered online using the Inquisit 6 Web software. To ensure adequate control over participants’ attention during the task, each participant was tested individually under the experimenter’s supervision: she introduced the task and remained connected until its conclusion. Participants were then fully debriefed.

Each participant was presented with 80 experimental stimuli (20 per category). Unlike Experiment 1, all the stimuli were presented in both the upright and the inverted orientation. This resulted in a total of 160 experimental trials per participant, preceded by 12 practice trials that helped participants familiarize themselves with the task. Given the length of the task, the experiment was organized into four blocks, each containing 40 experimental trials from a single stimulus category. Stimuli were presented in a randomized order within each block, and the order of blocks was also randomized. Notably, before each block, participants were informed about the stimulus category that would be presented. This information was especially important for the humanoid robots with high levels of human likeness, which would otherwise be hard to distinguish from the human stimuli. The trial structure was similar to Experiment 1: the original image (250 ms) was followed by a blank screen (1000 ms) and the discrimination task.
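
The block and trial structure described above can be summarised with a short sketch (purely illustrative; the task itself was programmed in Inquisit 6 Web, and the stimulus identifiers below are placeholders):

import random

CATEGORIES = ["human", "robot_high", "robot_low", "building"]
STIMULI_PER_CATEGORY = 20
STIMULUS_MS, BLANK_MS = 250, 1000          # stimulus presentation and blank-screen durations

def build_session(seed=0):
    rng = random.Random(seed)
    session = []
    for category in rng.sample(CATEGORIES, len(CATEGORIES)):    # randomized block order
        block = [(category, stim_id, orientation)
                 for stim_id in range(STIMULI_PER_CATEGORY)
                 for orientation in ("upright", "inverted")]    # each stimulus shown both ways
        rng.shuffle(block)                                      # randomized order within the block
        session.append(block)                                   # 4 blocks x 40 trials = 160 trials
    return session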

After that, respondents’ higher-order anthropomorphism of humanoid robots was assessed with the same 7-item measure used in Experiment 1. In this experiment, participants completed the measure twice, in randomized order: once referring to the robots with high human likeness (α = .87; M = 1.59; SD = 0.69) and once referring to those with low human likeness (α = .82; M = 1.47; SD = 0.55). For each presentation of the scale, the target robots were shown at the top of the screen.

The analysis of response latencies identified 133 outlier trials (out of a total of 15,040), which were removed from the main analyses.
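
The latency-based outlier criterion is defined in the procedure of Experiment 1 and is not repeated here; purely to illustrate the filtering step, the sketch below flags trials whose latency deviates from the participant’s mean by more than an assumed cutoff of 2.5 standard deviations (the cutoff and column names are illustrative, not the authors’ criterion).

def flag_latency_outliers(trials, cutoff_sd=2.5):
    # trials: trial-level data frame with 'participant' and 'latency' columns (assumed names).
    stats = trials.groupby("participant")["latency"].agg(["mean", "std"])
    merged = trials.join(stats, on="participant")
    z = (merged["latency"] - merged["mean"]) / merged["std"]
    return z.abs() > cutoff_sd                 # boolean mask of outlier trials

clean_trials = trials[~flag_latency_outliers(trials)]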

The logit mixed-model conducted on participants’ accuracy responses (1 = correct; 0 = incorrect) revealed a main effect of stimulus orientation (1 = upright; 2 = inverted), χ²(1) = 84.18, p < .001, OR = 0.66, 95% CI [0.60, 0.72]: overall, the stimuli were better recognized when presented upright (EA = .87 ± .02) than inverted (EA = .82 ± .03). Conversely, the main effect of stimulus category was not significant, χ²(3) = 0.66, p = .883. Most importantly, the two-way Stimulus orientation × Stimulus category interaction was significant, χ²(3) = 14.04, p = .003. Simple slope analyses (see Fig 6) revealed that robots with high levels of human likeness were more accurately recognized when presented upright (EA = .89 ± .03) than inverted (EA = .81 ± .05), χ²(1) = 46.95, p < .001, OR = 0.53, 95% CI [0.44, 0.63]. Interestingly, a similar IE pattern also emerged for robots with low levels of human likeness (upright, EA = .88 ± .03; inverted, EA = .83 ± .05), χ²(1) = 20.43, p < .001, OR = 0.66, 95% CI [0.55, 0.79]. Consistent with Experiment 1, human stimuli were better identified when presented upright (EA = .87 ± .04) than inverted (EA = .81 ± .05), χ²(1) = 24.84, p < .001, OR = 0.64, 95% CI [0.53, 0.76]. By contrast, and confirming previous literature, this pattern was not significant for buildings, χ²(1) = 3.49, p = .062: participants recognized building stimuli similarly well regardless of their upright (EA = .85 ± .04) or inverted (EA = .83 ± .04) orientation.


Fig 6. Experiment 2. Error bars represent standard errors of the mean values.

https://doi.org/10.1371/journal.pone.0270787.g006

We then tested the relation between participants’ higher-order anthropomorphism of robots, assessed through the self-report scale, and their IE index, calculated as in the previous experiment. As the IE indexes for robots with high and low levels of human likeness did not differ, t(93) = 1.55, p = .124, 95% CI [-0.006, 0.053], we collapsed them into a single index, which was then correlated with the composite scores of the self-report measures. Again, the magnitude of the IE, our index of cognitive anthropomorphism, did not correlate with higher-order anthropomorphism, r = -0.18, p = .088.
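
The comparison and collapsing of the two IE indexes can be sketched as follows (again with assumed variable names rather than the original analysis script; ie_high and ie_low are per-participant IE indexes computed as in Experiment 1, and anthro_composite is the self-report composite):

from scipy.stats import pearsonr, ttest_rel

t_stat, p_val = ttest_rel(ie_high, ie_low)       # paired t-test comparing the two IE indexes

ie_combined = (ie_high + ie_low) / 2             # collapse into a single index when they do not differ
r, p = pearsonr(ie_combined, anthro_composite)   # relation with higher-order anthropomorphism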

These findings replicated Experiment 1: once again, the body-IE emerged for robots as it did for human beings. Extending the previous results, the simple slope analyses also revealed that the body-IE was significant, and of similar magnitude, for robot bodies with both high and low levels of human likeness. Consistent with previous literature, this effect did not emerge for objects (buildings).

Taken together, these results suggest that when cognitively processing full bodies of robots, perceivers tend to adopt the configural processing commonly activated for social stimuli. This processing seems to govern the cognitive elaboration of humanoid robots regardless of their level of human likeness, at least when their full bodies are considered. Consistent with the previous experiment, Experiment 2 also showed that this cognitive form of anthropomorphism is unrelated to the higher-order one: the IE index for robots did not significantly correlate with the self-report measure assessing participants’ tendency to attribute human mental states to humanoid robots.

Experiment 3 was designed to extend these findings, primarily by verifying whether the IE also emerges for faces (i.e., a face-IE) of humanoid robots, rather than full bodies. As in Experiment 2, we explored whether this effect would be moderated by the level (high vs. low) of human likeness of the robot faces or would instead emerge regardless of the degree of human likeness. Further, we compared the pattern for robot faces with that for human faces and for a set of object stimuli (i.e., domestic tools) created ad hoc. We employed a different set of control stimuli both to increase the generalizability of our findings and to have objects whose size and shape were more comparable with the critical robot and human face stimuli. Finally, we correlated the face-IE index for robots with a different scale of higher-order anthropomorphism than that used in the previous experiments.

Experiment 3

One hundred and nine undergraduates (52 male; Mage = 22.1; SD = 2.92) were recruited following a procedure similar to that of the previous experiments. A 2 (stimulus orientation: upright vs. inverted) × 4 (stimulus category: human faces vs. robot faces with high human likeness vs. robot faces with low human likeness vs. objects) within-subject design was employed.

Data collection was administered online using Inquisit 6 Web, following the same procedure employed in Experiment 2. Each participant was presented with 48 experimental stimuli (12 per category), shown in both the upright and the inverted orientation. This resulted in a total of 96 experimental trials per participant, preceded by 12 practice trials that helped participants familiarize themselves with the task. As in Experiment 2, the experimental trials were organized into 4 blocks, each containing 24 trials from a single stimulus category. Stimuli were presented in a randomized order within each block, the order of blocks was also randomized, and each block was followed by a pause. The trial structure was the same as in Experiments 1 and 2: the original image (250 ms) was followed by a blank screen (1000 ms) and the discrimination task.

After the computer task, respondents’ higher-order anthropomorphism was measured. Unlike the previous experiments, we employed an adapted version of the 4-item scale by Waytz et al. [ 52 ], which assessed the extent to which (0 = not at all; 10 = very much) participants perceived the robots as intelligent, able to feel what was happening around them, able to anticipate what was about to happen, and able to plan actions autonomously. The self-report measure was again presented twice, once referring to the robots with high human likeness (α = .85; M = 5.19; SD = 2.64) and once to those with low human likeness (α = .79; M = 3.94; SD = 2.39). The faces of the corresponding robots were displayed at the top of the screen.
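
The scale reliabilities (Cronbach’s α) reported for these measures can be computed from item-level responses as in the brief sketch below; the item column names are placeholders for the four adapted items.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # items: one row per participant, one column per scale item.
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# e.g. alpha_high = cronbach_alpha(responses[["intelligent", "feels", "anticipates", "plans"]])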

The outlier analysis of response latencies identified 56 outlier trials (out of a total of 10,464), which were removed from the main analyses.

Consistent with the previous experiments, the mixed-model revealed a main effect of stimulus orientation, χ²(1) = 32.80, p < .001, OR = 0.74, 95% CI [0.66, 0.82]: overall, stimuli were better recognized when presented upright (EA = .85 ± .03) than inverted (EA = .81 ± .03). The main effect of stimulus category was also significant, χ²(3) = 29.40, p < .001, as was the two-way Stimulus orientation × Stimulus category interaction, χ²(3) = 19.80, p < .001. Specifically, a simple slope analysis (see Fig 7) revealed that the IE emerged for human faces, χ²(1) = 38.46, p < .001, OR = 0.53, 95% CI [0.43, 0.65] (EA for upright vs. inverted = .86 ± .04 vs. .77 ± .05), and for robot faces with high levels of human likeness, χ²(1) = 15.65, p < .001, OR = 0.67, 95% CI [0.54, 0.81] (EA for upright vs. inverted = .85 ± .04 vs. .79 ± .05). By contrast, this pattern was not significant for robot faces with low levels of human likeness, χ²(1) = 0.68, p = .411 (EA for upright vs. inverted = .76 ± .06 vs. .75 ± .06). Likewise, participants recognized objects similarly well regardless of their orientation, χ²(1) = 0.70, p = .404 (EA for upright vs. inverted = .91 ± .03 vs. .90 ± .03).


Fig 7. Experiment 3. Error bars represent standard errors of the mean values.

https://doi.org/10.1371/journal.pone.0270787.g007

Finally, we calculated the correlation between the higher-order anthropomorphism detected through the self-report measure and the face-IE index, computed as in the previous experiments. Because in this experiment the IE emerged for faces of humanoid robots with high, but not low, levels of human likeness, we computed separate correlations. Again, the relation between the IE index and participants’ higher-order tendency to anthropomorphize robots was not significant, neither for the robot faces with high levels of human likeness (r = -0.06, p = .552) nor for those with low levels (r = .04, p = .639).

Findings from Experiment 3 revealed that the IE for robots also occurs when their faces, rather than their full bodies, are used as stimuli. Thus, configural processing of this technology seems to hold for both the body-IE and the face-IE. However, the simple slope analyses for this experiment revealed that the robots’ degree of human likeness affects the face-IE: it emerged only for facial stimuli of humanoid robots with high levels of human likeness, but not for those with low levels. In line with the previous experiments and the literature, the IE also occurred for human facial stimuli but not for objects, even when employing a set of object stimuli (i.e., domestic tools) different from that of Experiment 2. Finally, consistent with the previous experiments, the cognitive anthropomorphism of humanoid robots detected through the IE index did not correlate with the higher-order anthropomorphism assessed through the self-report measure.

General discussion

Overall, findings from our three experiments provided convergent evidence of the human tendency to cognitively anthropomorphize humanoid robots. Similar to stimuli portraying human beings, robots were consistently better recognized when presented in an upright than in an inverted orientation. Hence, they were subject to the IE and processed in a configural way, like social stimuli. By contrast, confirming previous literature (e.g., [ 53 ]), our results revealed that analytic processing was triggered when participants visually processed a range of objects that do not resemble human beings (i.e., buildings and domestic tools).

However, we found relevant differences between the full bodies of robots (body-IE, Experiments 1 and 2) and their faces (face-IE, Experiment 3). Whereas the body-IE emerged at all levels of robot human likeness (medium, Experiment 1; low and high, Experiment 2), the face-IE emerged only for humanoid robots with high levels of human likeness, not for those with low levels. We argue that this difference may depend on the perceptual cues elicited by humanoid robots’ full bodies versus their faces. More specifically, it is plausible that for full bodies, only a few anthropomorphic visual cues, such as a single arm, a leg, or just the chest, are needed to trigger configural processing. This may explain why humanoid robots with low levels of human likeness are also subject to the body-IE. This assumption is indirectly supported by Experiment 1, which included mannequins as the object-control category: consistent with previous work [ 39 ], our findings revealed that these human-body-like objects are subject to the IE and thus trigger a humanized representation at the cognitive level. Therefore, we may speculate that when the full body is the critical stimulus, a few visual features resembling human beings are sufficient to activate configural processing, presumably above and beyond the semantic category within which each stimulus is classified (human being vs. object).

Conversely, when considering the faces of robots, the results of Experiment 3 suggest that a high level of human likeness is required to trigger configural processing. We believe this is a highly relevant finding, as it highlights the prominent role of the face in defining the perceived humanity (or lack of it) of a given exemplar, also at the cognitive level. That is, unlike for the entire body, when focusing on the key component of the face, people may need meaningful cues resembling human beings before activating a humanized representation of robots and the consequent configural processing. This argument is also in line with the work by DiSalvo and colleagues [ 54 ], which indicated that robot faces require the presence of specific and multiple features to be perceived as human-like (e.g., nose, eyelids, and mouth). These features can be observed in humanoid robots with high levels of human likeness (e.g., the Erica and Sophia robots in our Experiment 3), whereas robots with low levels of human likeness often lack them. For example, most low-human-likeness robots included in the ABOT database and employed in Experiment 3, despite having a head, did not show specific human features, as their head was made up of object-like components (e.g., a monitor or a camera combined with a set of microphones). Only a few of these robots (e.g., the Poppy robot) had eyes and eyebrows.

Taken together, we believe our findings meaningfully extend research on the social perception of robots in several directions. First, we demonstrated that anthropomorphic perceptions of robots also have a cognitive basis, at least for humanoid robots. As mentioned when introducing our research, this overall finding is important because how people cognitively perceive robots deeply affects first impressions of them and the possible course of the HRI. That is, our results revealed that at the cognitive level humanoid robots can be elaborated not as mere objects but as social agents, and they thus presumably trigger anthropomorphic knowledge and expectations, even at an unaware level. Such activation should primarily have positive outcomes for the HRI. In fact, most scholars in the field agree that the stronger the anthropomorphic perceptions of robots, whether implicit or explicit, the more positive the feelings and attitudes that human beings display toward them. However, a possible side effect should be taken into account, especially in light of our results showing that these anthropomorphic perceptions may be rooted in first-order cognitive processes. That is, similar to other technologies [ 55 ], heightened expectations that humanoid robots can be like human beings can increase negative emotions and attitudes toward them when such expectations are not met.

Second, for the first time in the literature, our results indicate that cognitive anthropomorphic perceptions of humanoid robots may differ depending on the component of the robot considered: whereas the body of humanoid robots triggers a humanized representation regardless of their level of human likeness, their faces are cognitively perceived in anthropomorphic terms only when they closely resemble human beings. This finding could provide robotics engineers with relevant insights when planning and designing the external features of robots. Further, our experiments integrate and extend the preliminary evidence by Zlotowski and Bartneck [ 41 ], who also reported an IE for robots, albeit with a broader spectrum of full-body (humanoid and non-humanoid) robots that were not systematically checked and balanced for asymmetry and human likeness. Unlike that single study, our experiments exclusively considered humanoid robots with different levels of human-like appearance (i.e., presence of body manipulators) and may thus provide more specific indications about when (and whether) these robots are cognitively recognized as human- vs. object-like.

In addition, across our experiments we consistently found no linear relationship between the IE index for social robots and people’s explicit tendency to anthropomorphize them. This contrasts with Zlotowski and Bartneck’s [ 41 ] study, which found a positive linear relationship between the magnitude of the IE and respondents’ explicit tendency to attribute uniquely human traits and abilities to robots. These different results may be due to the different stimuli considered by Zlotowski and Bartneck [ 41 ], whose set encompassed a wider range of robots, including non-humanoid ones; such a wider spectrum may have triggered different explicit anthropomorphic tendencies than the humanoid robots we considered across our experiments. Alternatively, our evidence may confirm the idea that, in social cognition, implicit and first-order processes are often qualitatively different from more conscious and elaborated ones (e.g., [ 56 ]) and may play a complementary or opposite role depending on the social or non-social target considered. Accordingly, implicit measures, such as the inversion effect paradigm employed in our research, commonly assess mental constructs (e.g., perceptions, attitudes) that are distinct from those detected through self-report measures. Put differently, implicit methods capture first-order cognitive processes that meaningfully contribute to explaining aspects of social cognition not accounted for by the corresponding explicit measures [ 57 ]. On this issue, we believe that our measure of cognitive anthropomorphism may capture one of the main psychological mechanisms underlying the phenomenon, namely the accessibility of anthropocentric knowledge [ 11 ], more appropriately than an explicit self-report measure.

Despite the relevance of our findings, some limitations should be considered when interpreting them and when setting the direction of future research. First, our experiments investigated the cognitive anthropomorphism of humanoid robots by relying only on the inversion effect paradigm. Although this is the paradigm most extensively used to investigate the cognitive processing of social (vs. non-social) stimuli, future research should replicate our findings using further cognitive paradigms. For example, the whole vs. parts paradigm (see [ 58 ]) and the scrambled bodies and faces task (e.g., [ 53 – 59 ]) are two further tools that could strengthen the generalizability and robustness of our findings and better explain the different cognitive elaboration of the bodies and faces of social robots. Relatedly, it is noteworthy that in our paradigm we explicitly differentiated the stimuli, both in the initial instructions and before each block. That is, the stimulus category (i.e., humans vs. robots) that participants were about to see was made salient to them, and this salience could have affected their cognitive elaboration. Future research should therefore investigate whether our pattern of findings replicates when the stimulus category is not made salient, especially for robots with high levels of human likeness.

Second, our research considered only humanoid robots. We elected to focus on this specific type of robot for two main reasons. First, they are (and presumably will remain) the most widespread type of robot employed in social environments. Second, focusing on this type allowed us to obtain a more homogeneous set of robots, which in turn made the comparison of different levels of human likeness more reliable across experiments and conditions. However, future research should compare the cognitive anthropomorphic perceptions of humanoid robots with those concerning object-like robots (e.g., Roomba), to verify whether only the former are indeed cognitively elaborated as social agents.

Third, similar to previous research on configural (vs. analytic) processing, we only used images as experimental stimuli. Future research should therefore test the cognitive anthropomorphism of robots with more ecologically valid stimuli or situations, for instance videos portraying robots or brief real interactions between participants and robots.

Fourth, in our experiments we did not analyze whether people’s familiarity with humanoid robots modulates their cognitive elaboration of these agents. More broadly, it would be interesting to examine cross-cultural differences in the cognitive anthropomorphism of robots as a function of people’s habituation to living among humanoid robots. For instance, it is plausible that the cognitive anthropomorphism of robots would be especially high in contexts where these technologies are widely used across many domains of everyday life.

Conclusions

Robots are going to become an intrinsic component of our everyday life in a wide range of domains. A full understanding of how people perceive and behave toward them is therefore a primary task for psychology and engineering scholars. To achieve it, we believe it is essential to integrate knowledge about the more explicit and conscious processes shaping people’s attitudes toward this technology with knowledge about the cognitive processes underlying its perception. Both kinds of processes play a pivotal and complementary role in understanding the factors that facilitate or inhibit the acceptance of robots in the social environment. In this sense, we hope that our research provides useful insights for designing robots that are as effective as possible in social interaction with human beings.

Supporting information

https://doi.org/10.1371/journal.pone.0270787.s001

  • 1. World Economic Forum. Top 10 Emerging Technologies 2019. Insight Report, June 2019. http://www3.weforum.org/docs/WEF_Top_10_Emerging_Technologies_2019_Report.pdf .
  • 7. Haring K, Silvera-Tawil D, Takahashi T, Watanabe K, Velonaki M. How people perceive different robot types: A direct comparison of an android, humanoid, and non-biomimetic robot. In Proceedings of the 8th International Conference on Knowledge and Smart Technology (KST). 2016. p. 265–270.
  • 8. Ahmad MI, Bernotat J, Lohan K, Eyssel F. Trust and cognitive load during Human-Robot Interaction. In Proceedings of AAAI Symposium on Artificial Intelligence for Human-Robot Interaction. 2019. https://arxiv.org/abs/1909.05160 .
  • 14. Fink J. Anthropomorphism and Human Likeness in the Design of Robots and Human-Robot Interaction. International Conference of Social Robotics. 2012; p. 199–208.
  • 15. Natarajan M, Gombolay M. Effects of anthropomorphism and accountability on trust in human robot interaction. In Proceeding of ACM/IEEE International Conference on Human-Robot Interaction. 2020. p. 33–42.
  • 17. Riek L, Rabinowitch T, Chakrabarti B, Robinson P. Empathizing with robots: Fellow feeling along the anthropomorphic spectrum. In Proceedings of 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. 2009. p. 1–6.
  • 18. Malle B, Scheutz M, Forlizzi J, Voiklis J. Which robot am I thinking about? The impact of action and appearance on people’s evaluations of a moral robot. In Proceedings of 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2016. p. 125–132.
  • 25. Pitsch K, Lohan K, Rohlfing K, Saunders J, Nehaniv C, Wrede B. Better be reactive at the beginning. Implications of the first seconds of an encounter for the tutoring style in human-robot-interaction. In Proceedings of IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. 2012. p. 974–981.
  • 33. Bartky SL. Femininity and domination: Studies in the phenomenology of oppression: Psychology Press; 1990.
  • 41. Zlotowski J, Bartneck C. The inversion effect in HRI: Are robots perceived more like humans or objects? In Proceedings of 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2013. p. 365–372.
  • 42. Macrae C, Quadflieg S. Perceiving People. Handbook of Social Psychology. 2010.
  • 44. Phillips E, Zhao X, Ullman D, Malle B. What is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT). In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 2018. p. 105–113.
  • 46. Gallucci M. GAMLj suite for jamovi. Version 2.4 [Software]. 2020. [cited 25 Jun 2021]. https://github.com/gamlj/
  • 47. The jamovi project. Jamovi. Version 1.6 [Software]. 2021 [cited 25 Jun 2021]. https://www.jamovi.org
  • 54. DiSalvo CF, Gemperle F, Forlizzi J, Kiesler S. All robots are not created equal: The design and perception of human-oriented robot heads. Proceedings of the 4th Conference on Designing Interactive Systems (DIS’02). 2002. p. 321–326.


Human Robot Interaction


Human Robot Interaction Steven Shark Aerospace Engineering Arizona State University Mentor: Dr. Winslow Burleson

Introduction • Human Robot Interaction Curriculum • Basics of robotics and programming • Terrain Mapping • Pet Replication • Designed for secondary education and above

Technology • For humans by humans… • Technology makes our lives easier • Helps get tasks/goals completed • Everything from getting into space to cleaning the floor.

Robotics • Automation is the key • Repetitive tasks • Incomparable precision • Artificial intelligence is rapidly progressing • The three D’s • Dirty • Dull • Dangerous

The Interaction • With increasing robotic technology, the need to bridge the gap between humans and robots also increases. • Need for basic understanding of robotic principles, control, and application.

Human Robot Interaction Curriculum • Build a core foundation of robotic principles • Engineering Design Process • Creative and Critical Thinking • Robotic Technologies • Programming • Sensor Incorporation

Application of Core… • Develop an understanding of how robotics is used. • Be able to use what was learned and apply it to real-life engineering situations.

Terrain Mapping with robots • Applied in situations where humans lack the precision and/or the environmental sustainability to properly carry out the procedure. • e.g. Mars

Pet Replication with robots • Design of pet behavior replication • Using more advanced programming theories to produce “AI” type results • Random motions and reaction to stimuli.

iRobot Create • Based on the iRobot Roomba • Fully programmable robot • Variety of sensors • “Bump”, Infrared, “Cliff”, Wheel drop • All movements can be controlled • Perfect tool for applying a basic robotics curriculum

Programming Language • Java • Straightforward • Easy to use and learn • Used to set up basic libraries specific to the Create.

Final Product • A curriculum that teaches the basics of robotics and the application of the technology and theories used • Allows for an understanding of how humans and robots interact • Allows students to gain an interest in science and technology

Comments, Questions?


