Author: Jun-Ming Lu, Tzung-Cheng Tsai, Yeh-Liang Hsu (2010-10-05); recommended: Yeh-Liang Hsu (2010-10-05).
Note: This paper is a chapter in the book “Talking about interaction”, to be published.

Interaction between human and robot

Abstract

With the rapid advancement of robotics, robots have become smarter and have developed an ever closer relationship with humans. To accommodate this strong growth, the interaction between humans and robots plays a major role in modern applications of robotics. This multidisciplinary field of research, human-robot interaction (HRI), requires contributions from a variety of fields such as robotics, human-computer interaction, and artificial intelligence. This chapter first defines the two main categories of robots, industrial and service robots, and their applications. Subsequently, general HRI issues are discussed to explain how they affect the use of robots. Finally, key design elements for good HRI are summarized to reveal the enabling technologies and future trends in the development of HRI.

Keywords: Robot; robotics; human-robot interaction (HRI); telepresence

1.    Robots and robotics

Long before the term "robot" came into use, human beings had dreamed of human-like creations that could assist in performing tasks. For example, in 1495, Leonardo da Vinci designed a humanoid automaton intended to make several human-like motions (Figure 1). However, due to technological limitations, most of these designs remained at the conceptual stage. In the 18th century, miniature automatons came out as toys for entertainment: with a programmed musical box embedded in a doll, the melody started to play as if the doll were playing the instrument by itself. In 1920, the term "robot" was first introduced by Čapek in his play "Rossum's Universal Robots." In his story, robots are artificial people created to work as servants. In the beginning, the robots are happy to work for the humans who invented them; as time goes by, however, they begin to revolt against humans and fight for their own rights. The play reflects the human desire to enrich daily life through the use of robots, as well as the consequences this may lead to. From then on, the term "robot" became widespread and was adopted in many domains to describe human-like machines that work to assist humans.


Figure 1. The humanoid automaton designed by Da Vinci (Möller, 2005)

In 1927, Roy J. Wensley invented the humanoid "Televox," which was likely the first actual robot and could be controlled by means of specific voice input (Figure 2). Later, Elektro was exhibited at the 1939 New York World's Fair with his mechanical dog Sparko (Figure 3). Elektro could walk by voice command, speak about 700 words, smoke cigarettes, and blow up balloons. These brilliant inventions immediately caught people's attention and encouraged them to continue bringing their dreams to reality. Afterwards, the term "robotics" appeared in the science-fiction collection "I, Robot" to describe this field of study, and the three laws of robotics were proposed to address the interaction between robots and human beings (Asimov, 1942).


Figure 2. Roy J. Wensley and his humanoid Televox (Hoggett, 2009)


Figure 3. Elektro (middle) and Sparko (left) (McKellar, 2006)

In the late 20th century, a robot was defined as "a reprogrammable and multifunctional manipulator designed to move materials, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks" (Robot Institute of America, 1979). Nevertheless, this definition fails to cover the broad range of modern robotics. At present, robots are more than agents that help to perform repetitive work; they are expected to cooperate with human beings. Generally speaking, according to their application fields, robots can be categorized as industrial robots and service robots. The different purposes and characteristics of these two types are discussed in the following sections.

1.1  Industrial Robots

ISO 8373 (International Organization for Standardization, 1994) defines an industrial robot as "an automatically controlled, reprogrammable, and multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications." On the basis of this concept, industrial robots are intended to assist humans in repetitive tasks, so that efficiency can be improved through the automation of manufacturing processes. For this purpose, industrial robots do not need to resemble real humans; instead, they are designed to imitate human body movements. Thus, an industrial robot generally comes in the form of an articulated robotic arm. Typical applications can be seen in assembly, packaging (Figure 4a), painting (Figure 4b), and so on. In addition, industrial robots can be seen in the food (Figure 5a) and agricultural (Figure 5b) industries as well.



Figure 4. (a) An industrial robot for packaging (Gromyko, 2009); (b) an industrial robot for painting (Schaefer, 2008)



Figure 5. (a) An industrial robot for cookie manufacturing (Garcia et al., 2007); (b) an agricultural robot for apple grading (Billingsley et al., 2009)

1.2  Service Robots

According to the International Federation of Robotics (IFR), a service robot is a robot that operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations. Service robots generally include cleaning robots, assistive robots, robotic wheelchairs, guide robots, entertainment robots, and educational robots. For example, Figure 6 depicts a robot suit that helps to enhance the strength of caregivers (Satoh et al., 2009). In addition, as shown in Figure 7, robotic wheelchairs with navigation and motion-planning functions allow people with limited mobility, such as the elderly and the disabled, to move freely and easily (Prassler et al., 2001; Pineau and Atrash, 2007).


Figure 6. Robot suit HAL for bathing care assistance (Satoh et al., 2009)



Figure 7. The robotic wheelchairs (a) MAid (Prassler et al., 2001) (b) SmartWheeler (Pineau and Atrash, 2007)

In addition to the basic requirements of service robots, the need for robots that facilitate healthcare for the elderly, both physiologically and psychologically, is becoming an urgent issue in the aging society. Interactive autonomous robots behave autonomously using various kinds of sensors and actuators, and can react to stimuli from their environment, including interaction with a human. The seal robot Paro, shown in Figure 8, is an example of robot-assisted therapy for improving well-being at hospitals and care institutions (Wada et al., 2008). In addition, Lytle (2002) reported that Matsushita Electric had developed a robotic care bear whose purpose was to watch over elderly residents in a high-tech retirement center. These modern applications of service robots significantly improve the quality of life of the elderly.


Figure 8. Paro interacting with the elderly in a nursing home (Wada et al., 2008)

In health care applications, telepresence robots, which let a person be in two places at once, are also of great interest. The remote-controlled robot "Rosie" stands 65 inches tall and has a computer-screen head that serves as a physician's eyes and ears, as shown in Figure 9. Its two-way audio and video capabilities enable an individual to be physically located in one place while virtually present in another, allowing medical assessments and diagnoses to take place in real time. Patient-specific medical data, such as ultrasound images, can be transmitted over the wireless Internet, and medical personnel can discuss treatment plans and interact with patients remotely. Serving as an extension of physician-patient contact, the robot leaves patients feeling more satisfied, because physicians seem to spend more time with them (Gerrard et al., 2010).


Figure 9. Physicians operate the robot to visit patients (Gerrard et al., 2010)

2.    Human-robot interaction

As introduced in Section 1, robots are becoming more common and have changed the way we live. In such an environment, humans need to interact with robots to perform tasks or access the services they provide. Thus, the interaction between human and robot is of great concern in the development of robotics. Human-robot interaction (HRI) is a multidisciplinary study that requires contributions from robotics, human-computer interaction, human factors engineering, artificial intelligence, and other research fields. The origin of HRI as a discrete problem can be traced back to Asimov's three laws of robotics (Asimov, 1942):

(1)      A robot may not injure a human being or, through inaction, allow a human being to come to harm.

(2)      A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

(3)      A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws mainly address the HRI issue of safety. Under this concept, robot and human should be regarded as separate individuals that do not conflict with each other. In addition, for a robot to obey the orders given by human beings, a control mechanism enabling the robot to perceive humans and respond is required. Moreover, as humans wish to have human-like robots as assistants or servants, anthropomorphic characteristics help to meet this expectation. Considering these issues with respect to human-robot interaction, the associated research topics are discussed in the following sections.

2.1 Safety

Safety is the primary issue in human-robot interaction. Since robots are designed to assist humans, they should not disturb or, worse, harm humans during operation. In order to expand safety awareness throughout the robotics industry, the Robotic Industries Association (RIA) released ANSI/RIA R15.06 in 1992, an American national standard that provides information and guidance on safeguarding personnel from injury in robotic production applications. Internationally, ISO 10218 (2006) describes the basic hazards associated with robots and provides requirements to reduce the resulting risks. On the basis of these standards, researchers and practitioners strive to provide safety assessment in the use of robots.

In human-robot interaction, especially with industrial robots, hazards may come from impact or collision, crushing or trapping, and other accidents. In order to prevent such accidents and injuries, special attention should be given to the workplace layout, the sensors, and the emergency-off devices of the robots.

Through careful workplace layout, humans and robots can be separated into different areas to avoid direct contact. The work envelope of a robot defines the space it can reach; thus, when designing the layout, it is critical to prevent personnel from entering this dangerous area where a collision could happen. In addition, with the support of computer technologies, it is possible to simulate the physical interaction between humans and robots. For example, Oberer et al. (2006) used CAD models of a human operator and a robot to conduct impact simulations assessing the injury severity for the operator (Figure 10).


Figure 10. Impact simulation (Oberer et al., 2006)

Ideally, careful workplace layout helps to eliminate the risk of impact or collision. In most cases, however, the space is too limited for such practices, so both human and robot need to be aware of possible impacts. For human operators, warning signs and sounds are usually used to alert them whenever a collision is about to happen; but when the operator concentrates too intently on the task, these warning messages fail. In such a condition, equipping the robot with sensors provides a feasible solution. For example, if the robot can "see" the human operator by means of a CCD camera and computer vision techniques, it can stop moving in time to avoid a collision. Moreover, when a robot is approaching a human and neither of them notices, an emergency-off (EMO) device allows a third person to prevent the accident: pushing the EMO button disconnects the robot's power supply immediately to ensure human safety.
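The layered safety logic described above, warning first, stopping when a human is too close, and cutting power when the EMO button is pressed, can be sketched as a simple decision function. The distance thresholds and return values here are illustrative assumptions, not taken from any particular standard or product.

```python
# Minimal sketch of sensor-based safety logic for one control cycle.
# The thresholds below are illustrative assumptions, not values from
# ANSI/RIA R15.06 or ISO 10218.

WARNING_DISTANCE_M = 1.0   # alert the operator inside this range
STOP_DISTANCE_M = 0.3      # halt the robot inside this range

def safety_step(distance_to_human_m, emo_pressed):
    """Decide the robot's action: "power_off", "stop", "warn", or "run"."""
    if emo_pressed:                        # a third person hit the EMO button:
        return "power_off"                 # cut the power supply immediately
    if distance_to_human_m <= STOP_DISTANCE_M:
        return "stop"                      # imminent collision: halt motion
    if distance_to_human_m <= WARNING_DISTANCE_M:
        return "warn"                      # sound the warning, keep moving
    return "run"                           # no human nearby: normal operation

print(safety_step(2.0, False))  # run
print(safety_step(0.5, False))  # warn
print(safety_step(0.2, False))  # stop
print(safety_step(0.2, True))   # power_off
```

Note that the EMO check comes first: the human override must win over every sensor-driven decision, mirroring the priority the text assigns to the EMO device.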

2.2 Control

Effective control methods are essential for guiding robots to follow the orders given by human beings. Technically, the means of control depends on the robot's application field. Autonomous robots are driven by preprogrammed commands: when the power is turned on, the robot starts to execute the commands and performs a series of actions. In such applications, computer programming is of great concern; however, the robot itself does not truly interact with humans unless it is equipped with sensors to detect them and respond in real time. Neves and Oliveira (1997) divide the control system of autonomous robots into three levels: reflexive, reactive, and cognitive. At the reflexive level, robot behaviors are generated in a pure stimulus-response way that involves only the sensors. At the reactive level, actions are quickly selected for pre-defined problems based on a database of robot behaviors. As the complexity increases and the computation becomes heavier, control moves to the cognitive level, which requires decision making. Combining these features, autonomous robots can be intelligent and interact well with both the environment and humans. For example, Takahashi et al. (2010) developed a mobile transport robot called MKR, which can identify obstacles and perform real-time path planning to avoid collisions with humans or objects, as shown in Figure 11.

 


Figure 11. The autonomous robot MKR (Takahashi et al., 2010)
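The three control levels of Neves and Oliveira (1997) can be illustrated as a dispatch loop that tries the cheapest level first and falls through to the most expensive. This is only a conceptual sketch; the sensor keys, behavior database, and planner below are hypothetical stand-ins, not part of any published system.

```python
# Conceptual sketch of a three-level control architecture
# (reflexive -> reactive -> cognitive). All names are illustrative.

REACTIVE_BEHAVIORS = {            # pre-defined problem -> canned action
    "obstacle_ahead": "turn_left",
    "low_battery": "return_to_dock",
}

def control_step(sensors, planner):
    # Reflexive level: pure stimulus-response, sensors only.
    if sensors.get("bumper_hit"):
        return "back_off"
    # Reactive level: fast lookup in a database of known situations.
    for situation, action in REACTIVE_BEHAVIORS.items():
        if sensors.get(situation):
            return action
    # Cognitive level: heavier computation and deliberative decision making.
    return planner(sensors)

def simple_planner(sensors):
    # Placeholder for deliberative path planning / decision making.
    return "follow_planned_path"

print(control_step({"bumper_hit": True}, simple_planner))      # back_off
print(control_step({"obstacle_ahead": True}, simple_planner))  # turn_left
print(control_step({}, simple_planner))                        # follow_planned_path
```

The ordering encodes the trade-off described in the text: reflexes are immediate, reactive rules are fast lookups, and only when neither applies does control pay the cost of cognitive-level planning.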

For manually controlled robots, the control panel is either attached to the robot or located remotely. Whichever approach is adopted, an appropriate user interface for control is necessary. The key elements of a good interface design generally depend on the nature of the task and the user. Concerning the task, the interface needs to be simple and intuitive to avoid possible errors and mistakes. From the user's point of view, an operational process that meets the user's expectations, that is, one that matches the user's mental model, helps to enhance the efficiency of control. Given the diversity of human beings, knowledge from engineering psychology and human factors engineering should be applied to ensure the usability of the interface design. In addition to these mental factors, human physical characteristics are also important to the success of interface design; for example, button sizes should fit the finger sizes of the target users, so that they can operate the controls smoothly and with ease.


Figure 12. The robot controlled by a waistcoat (Suomela and Halme, 2001)

Robots controlled remotely, also referred to as teleoperated robots, involve HRI issues more complicated than interface design alone. Since the user is not beside the robot, cameras and microphones are needed to allow the operator to see and hear exactly what the robot does. In other words, the human should be able to feel present during remote operation; this is known as "telepresence." In the development of telepresence robots, advanced devices that provide sensory stimuli are critical: the more immersed the user is in the remote environment, the better the performance of control and interaction. Adalgeirsson and Breazeal (2010) presented the design and implementation of MeBot, a robotic platform for socially embodied telepresence (Figure 13). This telerobot communicates not simply through audio and video but also through expressive gestures and body pose, and a more engaging and enjoyable interaction was found to result from this practice.


Figure 13. The telepresence robot MeBot (Adalgeirsson and Breazeal, 2010)

2.3 Anthropomorphism

Since the early development of robotics, there has been a significant trend toward anthropomorphizing robots to exhibit human-like appearance and behavior. In this way, people can interact with robots in ways they are familiar with, and as the level of anthropomorphism increases, the interaction performance can be further improved (Li et al., 2010). A simple example is the articulated robotic arm, which is based on the structure of the human arm and serves as a third arm that enhances human productivity. From this point of view, techniques contributing to a higher level of anthropomorphism play an important role in the study of HRI.

One approach to making robots more anthropomorphic concentrates on the appearance, usually the robot head or face, because people generally recognize a person by the face. If the robot head produces human-like expressions, people may find it friendlier to interact with. A typical application is the robotic creature Kismet, an expressive anthropomorphic robot shown in Figure 14; it was the first autonomous robot explicitly designed to explore face-to-face interactions with people (Breazeal, 2002). In addition, Michaud et al. (2005) designed Roball, a ball-shaped robot that can effectively draw the attention of young children and interact with them in simple ways. Figure 15 shows Roball and its interaction with a child.

Figure 14. The sociable robot Kismet (Coradeschi et al., 2006)

 


Figure 15. Roball and its interaction with a child (Michaud et al., 2005)

In addition to a human-like face, a humanoid body makes a robot still more anthropomorphic. Combining the robot head with a torso, arms, and legs brings it closer to the realization of an artificial human. Nevertheless, human-like arms and legs are not sufficient for a humanoid robot: it also needs the ability to mimic human motions, so that it can move as a human does. This is enabled by collaborative efforts among research fields such as anatomy, kinematics, and biomechanics. Furthermore, if the robot is intended to make decisions and react to the environment as a human does, human behavior modeling should be taken into consideration as well; as this further involves psychology and sociology, the complexity increases considerably. Figure 16(a) illustrates the humanoid robot ASIMO developed by Honda, which was intended to act as a human servant (Garcia et al., 2007), and Figure 16(b) shows the humanoid robot Nao, developed by Aldebaran Robotics, which can interact with both humans and other robots.



Figure 16. Humanoid robots (a) ASIMO (Garcia et al., 2007) and (b) Nao (Aldebaran Robotics, 2010)

3.    Design elements for good human-robot interaction

Section 2 reviewed the major domains of HRI issues, namely safety, control, and anthropomorphism, together with some related applications. On the basis of these topics, some design elements contributing to good human-robot interaction need to be highlighted for successful implementation. This section discusses these design elements along with the associated technologies and future trends.

Equipped with the capability to detect humans or objects in the environment and to react accordingly, a robot can perform autonomous behaviors for safe use and stronger interaction. To enhance the performance of control, the interface needs to follow the principles of user-centered design. Further, for a more immersive telepresence, sensory-enhancing elements, including stereoscopic and stereophonic perception as well as supersensory capabilities, can contribute greatly to stronger human-robot interaction. Moreover, through the realization of anthropomorphism, human-robot interaction will become as natural as interpersonal communication; this can be achieved by providing humanoid elements and enabling eye contact between human and robot. Finally, in order to enable seamless data flow, a robust system for data transmission should be adopted. Table 1 summarizes these elements and the associated technologies.

TABLE 1. Design elements and associated technologies for good HRI

Design elements               Associated technologies
Autonomous behaviors          sensors and actuators, path planning
User interface                human-computer interaction, teleoperation, virtual reality
Sensory-enhancing elements    telepresence, multi-sensory stimulation, binocular and panoramic vision, stereo audio, virtual reality
Anthropomorphism              humanoid appearance, expression, and motion
Eye contact                   camera and screen with specific placement
Data transmission             RF and Internet transmission, time-delay compensation algorithms

3.1 Autonomous behaviors

In pure human-robot interaction, the autonomous behaviors of a robot are generally designed for safety of use. Collision prevention requires the active identification of possible obstacles within a reasonable distance, which involves three design elements: the sensors, an intelligent system for path planning, and the actuators. Yasuda et al. (2009) applied fuzzy logic to develop collision-prevention strategies for a powered wheelchair equipped with a laser range sensor and a position-sensitive-diode sensor to observe the front and both sides (Figure 17). Combining these elements, the robotic wheelchair can either slow to a stop or directly modify its path setting to avoid obstacles. In addition, Candido et al. (2008) proposed a hierarchical motion planner for an autonomous humanoid robot; based on this planner, the robot can generate a feasible path and complete its walk without collisions or falls, as shown in Figure 18.


Figure 17. The robotic wheelchair and its structure of operation (Yasuda et al., 2009)


Figure 18. The humanoid robot that is capable of path planning (Candido et al., 2008)
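The spirit of a fuzzy slowdown rule, where the closer the nearest obstacle, the lower the speed command, with a smooth transition instead of a hard on/off threshold, can be sketched in a few lines. The membership breakpoints below are illustrative assumptions, not the parameters of Yasuda et al. (2009).

```python
# Toy fuzzy-style speed rule for obstacle avoidance. The breakpoints
# (0.5 m "near", 2.0 m "far") are illustrative assumptions.

def obstacle_nearness(distance_m):
    """Fuzzy membership of 'obstacle is near' in [0, 1]."""
    if distance_m <= 0.5:
        return 1.0                      # definitely near
    if distance_m >= 2.0:
        return 0.0                      # definitely far
    return (2.0 - distance_m) / 1.5     # linear transition in between

def speed_command(distance_m, max_speed=1.0):
    """Defuzzified output: scale speed down as the obstacle gets nearer."""
    return max_speed * (1.0 - obstacle_nearness(distance_m))

print(speed_command(3.0))   # 1.0 -> full speed, path is clear
print(speed_command(1.25))  # 0.5 -> slow down, obstacle getting near
print(speed_command(0.3))   # 0.0 -> stop or replan
```

A real fuzzy controller would combine several such membership functions over multiple sensors and rules, but the graded (rather than binary) response is the essential idea.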

In human-robot-human interaction, in which the robot serves as an interface for communication between people in two different places, autonomous behaviors become even more important for successful interaction. In such applications of telepresence robotics, the person who operates the robot remotely is called the user, whereas the person interacting directly with the robot is called the participant. From the user's perspective, the robot's autonomous behaviors extend the user's capability of projection, allowing reliable operation of the robot in a dynamic environment. From the participant's view, autonomous behaviors also increase the participant's interactive capability as a dialogist. For example, a telepresence robot that can autonomously identify the direction of the participant who is speaking helps the remote user respond more quickly and appropriately. This is achieved by adopting cameras, microphones, and a dedicated software system for recognition.
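One common way to identify a speaker's direction from a microphone pair is the time difference of arrival (TDOA): the delay between the two microphones implies a path-length difference, and simple geometry yields the bearing. This sketch assumes a two-microphone far-field model; the spacing and delay values are illustrative.

```python
# Speaker-direction estimation from the time difference of arrival (TDOA)
# between two microphones (far-field approximation). Illustrative values.

import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature

def direction_from_tdoa(delay_s, mic_spacing_m):
    """Bearing of the sound source relative to broadside, in degrees."""
    # Path difference implied by the delay, clamped to the physical maximum.
    path_diff = max(-mic_spacing_m,
                    min(mic_spacing_m, delay_s * SPEED_OF_SOUND))
    return math.degrees(math.asin(path_diff / mic_spacing_m))

# Sound from straight ahead reaches both microphones at the same time:
print(round(direction_from_tdoa(0.0, 0.2), 1))        # 0.0
# A positive delay means the source is off to one side (~30 degrees here):
print(round(direction_from_tdoa(0.0002915, 0.2), 1))
```

A deployed system would first estimate the delay itself, typically by cross-correlating the two microphone signals, and often fuses the audio bearing with a camera-based face detector, as the text suggests.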

An interactive museum tour-guide robot, shown in Figure 19, was developed by two research projects, TOURBOT and WebFAIR, funded by the European Union (Burgard et al., 1999; Schulz et al., 2000; Trahanias et al., 2005). Thousands of users around the world have experienced controlling this robot through the web to visit the museum remotely. The developers created a modular and distributed software architecture that integrates localization, mapping, collision avoidance, planning, and various modules concerned with user interaction and web-based telepresence. With these autonomous features, the user can operate the robot to move quickly and safely in a museum crowded with visitors.


Figure 19. An interactive museum tour-guide robot and its GUI (Trahanias et al., 2005)

3.2 User interface

The performance of control mainly depends on the usability of the user interface. The first step toward a usable interface design is to acquire a detailed understanding of the relationship between the user and the task. This is closely related to human factors engineering, which aims to develop user-centered design based on scientific evidence. Since the control interface for robots is usually built on a computer system, most of the problems fall within the domain of human-computer interaction (HCI). Numerous ongoing HCI studies endeavor to formulate universal principles of interface design, focusing on how users can deal with tasks efficiently without committing errors. Putting these design principles into practice also requires efforts from computer science and mechanical design. For instance, Baker et al. (2004) designed the user interface of a search-and-rescue robot for easy and intuitive use. As Figure 20 illustrates, the interface helps the user concentrate on the video window without being distracted by additional information.


Figure 20. The easy and intuitive control interface of the robot (Baker et al., 2004)

In addition to these basic requirements, there are further issues to be noted in the modern development of robotics, especially with respect to teleoperators. A teleoperator is a machine that extends the user's sensing and manipulating capability to a location remote from that user; teleoperation refers to direct and continuous human control of the teleoperator. Many studies emphasize enabling the user to modify the remote environment (Stoker et al., 1995; Engelberger, 2001; Spudis, 2001), that is, projecting the user onto the teleoperator. In order to provide the user with better remote interaction, virtual reality can be applied to create an environment with more realistic immersion. With a head-mounted display, the user can really feel present at the remote location, and wired gloves offer tactile feedback as if the user were actually touching what the robot touches.

A recent advance in teleoperation is the Full-Immersion Telepresence Testbed (FITT) developed by NASA, which combines a wearable interface integrating human perception, cognition, and eye-hand coordination skills with a robot's physical abilities, as shown in Figure 21 (Rehnmark et al., 2005). The teleoperated master-slave system Robonaut allows an intuitive, one-to-one mapping between master and slave motions. The operator uses the FITT wearable interface to remotely control Robonaut, which simultaneously follows the operator's motion, to perform complex tasks on the International Space Station.


Figure 21. FITT and Robonaut (Rehnmark et al., 2005)

3.3 Sensory enhancing elements

In telepresence, stereoscopic and stereophonic elements are often emphasized to create the illusion of the remote environment, which increases the user's feeling of immersion. For example, the user can judge the distance between an object and the telepresence robot through binocular vision (Brooker et al., 1999). In addition, Boutteau et al. (2008) developed an omnidirectional stereoscopic system for mobile robot navigation; as shown in Figure 22, its 360-degree field of view gives the remote operator a more detailed understanding of the environment. Moreover, applying the head-related transfer function (HRTF) for stereophonic effect further enables the user to identify the location and direction of a sound (Hawksford, 2002).


Figure 22. The robot with a stereoscopic system (Boutteau et al., 2008)
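The depth cue that binocular vision provides can be made concrete with the classic pinhole stereo relation Z = f * B / d: depth is focal length times baseline divided by disparity. The focal length, baseline, and disparity below are illustrative values, not parameters of any system cited above.

```python
# Back-of-envelope depth from binocular disparity (pinhole stereo model).
# Focal length (in pixels), baseline, and disparity are illustrative.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classic stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (object in front of rig)")
    return focal_length_px * baseline_m / disparity_px

# A point imaged 40 px apart by cameras 0.1 m apart with f = 800 px
# lies 2 metres away:
print(depth_from_disparity(800, 0.1, 40))  # 2.0
```

The inverse relationship is why stereo depth resolution degrades with distance: a faraway object produces only a tiny disparity, so small pixel errors translate into large depth errors.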

For teleoperated robots, stereoscopic and stereophonic elements also help to enhance the feeling of presence during operation. In the design of teleoperators, these elements can be added to provide stronger interaction by adopting the technologies used in telepresence videoconferencing, which many practices have shown to let users and participants communicate more efficiently. For example, Lei et al. (2004) proposed a representation and reconstruction module for an image-based telepresence system, using a viewpoint-adaptation scheme and an image-based rendering technique. The system provides life-size views and 3D perception of participants and viewers in real time and hence improves the interaction.

Supersensory capability refers to an advanced ability to modify the remote environment, provided by a dexterous robot or a precise telepresence system. From the user's view, projecting onto a telepresence robot with supersensory capability enhances the user's manipulative efficiency in special tasks. Green et al. (1995) developed a telepresence surgery system integrating vision, hearing, and manipulation. It consists of two main modules: a surgeon's console and a remote surgical unit located at the surgical table. The remote unit provides scaled motion, force reflection, and minimized friction for the surgeon to carry out complex tasks with quick, precise motions. Similar applications of supersensory capability in telepresence surgery can also be seen in the studies of Satava (1999), Schurr et al. (2000), and Ballantyne (2002).

Supersensory elements can also give the user a novel feeling of immersion in a remote environment. For example, the user can control the zoom function of the camera on a telepresence robot to observe small details of the remote environment that would not normally be visible to the naked eye. Intuitive Surgical (2010) developed the da Vinci® Surgical System through the use of supersensory capability in telepresence. As Figure 23 shows, the da Vinci Surgical System consists of an ergonomically designed surgeon's console, a patient-side cart with four interactive robotic arms, and a high-performance vision system. Powered by state-of-the-art robotic technology, the system scales, filters, and seamlessly translates the surgeon's hand movements into precise instrument movements.

 


Figure 23. The da Vinci® Surgical System (Intuitive Surgical, 2010)
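The "scaled and filtered" hand motion described above can be sketched as two stages: a motion-scaling factor shrinks gross hand movement, and a smoothing filter (here a simple exponential moving average) suppresses tremor. The scale factor and smoothing constant are illustrative assumptions, not the da Vinci system's actual parameters or algorithm.

```python
# Sketch of motion scaling plus tremor filtering for surgical teleoperation.
# The 5:1 scale and the EMA smoothing are illustrative assumptions only.

def scale_and_filter(hand_positions_mm, scale=0.2, alpha=0.3):
    """Return slave-arm setpoints for a stream of master hand positions.

    scale: motion scaling (0.2 = 5:1) shrinks gross hand motion.
    alpha: EMA coefficient; lower values filter tremor more aggressively.
    """
    setpoints, filtered = [], None
    for x in hand_positions_mm:
        # Exponential moving average smooths out high-frequency tremor.
        filtered = x if filtered is None else alpha * x + (1 - alpha) * filtered
        setpoints.append(filtered * scale)
    return setpoints

# A sudden 10 mm hand step becomes a smoothed ~2 mm instrument motion:
print(scale_and_filter([0.0, 10.0, 10.0, 10.0]))
```

Real systems use properly designed low-pass filters with known cutoff frequencies rather than an ad hoc EMA, but the division of labor, scale for precision and filter for stability, is the same.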

3.4 Anthropomorphism

As a robot more closely resembles a human, human-robot interaction comes closer to interpersonal communication. Thus, the anthropomorphism of robots helps to enhance the performance of human-robot interaction by creating an environment that humans are more familiar with; generally, this is enabled by providing humanoid appearance, expression, and motion. Coradeschi et al. (2006) noted that the appearance and behaviors of a robot are essential in human-robot interaction. A robot's appearance influences subjects' impressions and is an important factor in evaluating the interaction; indeed, a humanlike appearance can be deceiving, convincing users that robots can understand and do much more than they actually can. Observable behaviors include gaze, posture, movement patterns, and linguistic interactions.

Ishiguro created a humanoid robot by copying his own appearance. As Figure 24 presents, he constructed this robot with silicone rubber, pneumatic actuators, powerful electronics, and hair from his own scalp. Although it is not able to move, the robot meets the expectation of mimicking a real person's appearance (Guizzo, 2010). An alternative approach to providing a humanoid appearance is to display the face of the remote user on a telepresence robot. Many telepresence robots incorporate an LCD screen showing the user's face for interacting with participants: Dr. Robot and the telepresence system PEBBLES both use an LCD screen to display the user's face, which allows participants to see whom the telepresence robot represents. The commercial product "Giraffe" (2007), a remote-controlled mobile video-conferencing platform, is also a telepresence robot application. It is composed of two subsystems: the client application and the Giraffe robot itself. The Giraffe robot carries a video screen and a camera mounted on a height-adjustable robotic base, and the user can move it from afar using the client application. Software running on a standard PC with a webcam enables the user to connect to the distant Giraffe robot through the Internet for telepresence interaction.


Figure 24. Ishiguro and the humanoid robot (Guizzo, 2010)

There are many other solutions for presenting anthropomorphic elements, such as humanoid expressions. For example, as depicted in Figure 14, the humanoid robot Kismet is equipped with mechanically actuated facial expressions for face-to-face interaction with humans (Breazeal, 2002). In addition, Berns and Hirth (2006) developed a humanoid robot head named ROMAN. As Figure 25 shows, its mechanical structure allows ROMAN to make facial expressions such as anger, disgust, fear, happiness, sadness, and surprise. Facial expressiveness in humanoid robots has received much attention because, from a psychological point of view, it is a key component in building personal attachment between the robot and the human user.


Figure 25. The expressive robot head ROMAN (Berns and Hirth, 2006)

Moreover, human-like motions extend the anthropomorphism of robots to a higher level. This involves efforts in motion capture, biomechanics, kinematics, and statistical methods. For example, Chen (2010) employed a high-speed video camera to capture human jumping motion and then conducted a kinematic analysis, which helped to develop a human-like jumping robot. In addition, Kim et al. (2006) adapted human motion capture data and formulated an inverse kinematics problem; by solving this optimization problem, the robot is able to imitate human arm motion.
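To make the idea concrete, the following minimal sketch poses motion imitation as an inverse kinematics optimization in the spirit of Kim et al. (2006), though it is not their actual formulation: given one captured wrist position, the joint angles of a two-link planar arm are found by minimizing the end-effector position error with gradient descent. The link lengths, target point, and step sizes are all assumed values for illustration.

```python
import math

# Assumed link lengths (meters) of a hypothetical 2-link planar arm.
L1, L2 = 0.30, 0.25

def forward(theta1, theta2):
    """End-effector position for joint angles (radians)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def solve_ik(target, iters=2000, step=0.5):
    """Minimize squared end-effector error by finite-difference gradient descent."""
    t1, t2 = 0.1, 0.1  # initial joint-angle guess

    def err(a, b):
        x, y = forward(a, b)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2

    h = 1e-6  # finite-difference step
    for _ in range(iters):
        g1 = (err(t1 + h, t2) - err(t1 - h, t2)) / (2 * h)
        g2 = (err(t1, t2 + h) - err(t1, t2 - h)) / (2 * h)
        t1 -= step * g1
        t2 -= step * g2
    return t1, t2

# Imitate one captured wrist sample (a reachable point inside the workspace).
target = (0.35, 0.25)
t1, t2 = solve_ik(target)
```

In practice a whole motion would be imitated by solving this problem for each captured frame, with additional constraints (joint limits, smoothness) added to the objective.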

3.5 Eye contact

Eye contact is an important element in human-to-human communication. It is a well-known cue for gaining attention and attracting interest. In human-robot interaction, a robot that makes eye contact feels more familiar and comfortable to interact with. Yamato et al. (2003) focused on the effect that recommendations made by an agent or robot have on user decisions, and designed a “color name selection task” to determine the key factors in designing interactively communicating robots. They used two robots as the robot and the agent for comparison. Based on the experimental results, eye contact and attention-sharing are considered to be important features of communication that display and recognize the attention of participants.

In social psychology, joint attention refers to the state in which people who are communicating with each other frequently focus on the same object. It is a mental state in which two people not only pay attention to the same information but also notice each other’s attention to it. Imai et al. (2003) investigated situated utterance generation in human-robot interaction. In their study, a person establishes joint attention with a robot in order to identify the object indicated by a situated utterance generated by the robot Robovie. A psychological experiment was conducted to verify the effect of eye contact on achieving joint attention. According to the experimental results, a relationship developed through eye contact produces a more fundamental effect on communication than logical reasoning or knowledge processing.

In telepresence applications, eye contact can increase the user’s feeling of immersion and the interactive capability of the participant as a dialogist. However, it is very difficult to achieve eye contact between the user and the participant through a telepresence robot when the user’s face is displayed on an LCD screen, because the camera is usually placed on top of the screen, which prevents direct eye contact. DVE Telepresence (2005) developed a novel LCD screen that places the camera just behind the monitor. It provides natural face-to-face and eye-contact communication without causing eyestrain. By adopting advanced devices like this, it is possible to ensure high-quality eye contact in robotics, which contributes to stronger interaction and enhanced performance.

3.6 Data transmission

The transmission of control commands and sensory feedback is a basic design element for the connection between human and robot. Without this supporting infrastructure, real-time teleoperation and telepresence cannot be realized. Thus, developments in communication engineering also play an important role in robotics. Generally, wireless radio frequency and the Internet are used in most telepresence applications, while dedicated lines are used in specific applications such as operation in space and in the deep sea. For example, Winfield and Holland (2000) proposed a communication and control infrastructure for distributed mobile robotics based on wireless local area network (WLAN) technology and Internet Protocols (IPs), which results in a powerful platform for collective or cooperative robotics. The infrastructure described is equally applicable to teleoperated mobile robots. In addition, considering cost efficiency and ease of use, Lister and Wunderlich (2002) made use of radio frequency (RF) communication for mobile robot control. They also explored software methods to correct errors that may arise in RF communication.
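As a minimal illustration of IP-based command transmission (an assumed protocol for this sketch, not the actual design of Winfield and Holland), velocity commands can be sent as JSON datagrams over UDP, a common choice for low-latency teleoperation where an occasional lost packet is tolerable. The field names and the demo over the loopback interface are hypothetical.

```python
import json
import socket

def make_command(linear, angular, seq):
    """Encode a velocity command; the sequence number lets the robot detect loss."""
    return json.dumps({"seq": seq, "v": linear, "w": angular}).encode()

def parse_command(datagram):
    """Decode a received command datagram back into a dictionary."""
    return json.loads(datagram.decode())

# Demo over the loopback interface: operator -> robot.
robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
robot.bind(("127.0.0.1", 0))        # robot listens on an ephemeral port
addr = robot.getsockname()

operator = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
operator.sendto(make_command(0.2, 0.0, seq=1), addr)  # 0.2 m/s forward

data, _ = robot.recvfrom(1024)
cmd = parse_command(data)
operator.close()
robot.close()
```

A real deployment would add the error handling discussed above: sequence-number gaps reveal dropped packets, and a watchdog stops the robot when commands cease to arrive.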

In order to realize real-time communication, the speed of transmission must be taken into consideration as well. This relates to effective techniques in data compression and decompression, error control, and so on. Combined with adequate algorithms for reactive functions, robots can respond to the human user in a reasonable time. Nevertheless, the response time should not simply be made as short as possible. Shiwa et al. (2009) conducted experiments showing that people prefer one-second delayed responses from the robot over immediate responses. Thus, a delaying strategy can be adopted by adding conversational fillers, so that the robot appears to pause for thought before communicating with the human. This example shows that the issues in data transmission concern not only the speed but also the modality of stimulus presentation.
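The delaying strategy can be sketched as follows. This is an illustrative reconstruction, not Shiwa et al.’s actual implementation: the robot first utters a conversational filler, pauses roughly one second, and only then delivers the substantive answer. The filler phrases and function names are assumptions.

```python
import random
import time

# Hypothetical filler phrases the robot can utter while "thinking".
FILLERS = ["Well...", "Let me see...", "Hmm..."]

def respond(answer, delay=1.0, rng=random.Random(0)):
    """Return the robot's utterances: a filler, a pause, then the answer."""
    utterances = [rng.choice(FILLERS)]  # speak a filler immediately
    time.sleep(delay)                   # the ~1 s pause users preferred
    utterances.append(answer)           # then deliver the real response
    return utterances

# Short delay here only to keep the demo fast.
out = respond("The red block is on the left.", delay=0.01)
```

The filler masks the deliberate delay, so the pause reads as thinking rather than as a slow system, which matches the preference Shiwa et al. observed.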

4.    Concluding remarks

Human-robot interaction is a growing field of research and application that encompasses many topics and associated challenges. Through multidisciplinary efforts, there is a global trend toward more natural interaction and higher performance. In this chapter, we discussed the highlighted HRI topics and related practices to provide a conceptual picture of how interaction affects the development of robotics. In addition, the corresponding design elements for good human-robot interaction were presented to serve as a further reference.

In the future development of human-robot interaction, people look forward to intelligent robots that can interact with users as human beings do. However, although anthropomorphic characteristics make robots more similar to real humans and thus appealing to many users, a number of barriers and challenges remain. As the theory of the “uncanny valley” describes, when robots look and act almost, but not quite, like real humans, they cause a response of revulsion among human users and participants (Mori, 1970). That is to say, the human likeness of a robot is not always positively correlated with perceived familiarity. If the details of its behaviors do not match the high realism of its appearance, the robot will make a negative impression on humans. As a result, related technologies are required to cross or avoid the uncanny valley.

One possibility is to develop fully human-like appearance and behaviors for the robot simultaneously. Nevertheless, there seems to be a long way to go before the difficulties in human modeling and other related technologies are overcome. An alternative is to make the robot an agent of the distant user by implementing telepresence and teleoperation. Enabled by telepresence, the human users on both sides appear to communicate with each other through one-to-one-scale video in real time, while the robot reproduces the actions that the distant user intends to perform via teleoperation. In this way, the interaction also resembles real human-to-human interaction, even though the anthropomorphism of the robot is not at a high level.

Last but not least, no matter how closely a robot resembles a real human or how powerful it is, safety will always be the most essential issue in human-robot interaction. As Asimov’s Three Laws of Robotics indicate, human and robot must cooperate on the principle of not bringing harm to each other. After all, it may come back to the ethics and morality of human interaction, just as in the relationships among human beings that we are used to.

References

Adalgeirsson, SO and Breazeal, C: 2010, MeBot: A robotic platform for socially embodied telepresence, Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction: 15-22, Osaka, Japan.

Aldebaran Robotics: 2010, retrieved from http://www.aldebaran-robotics.com/

Asimov, I: 1942, Runaround, Street & Smith Press, United States.

Baker, M, Casey, R, Keyes, B and Yanco, HA: 2004, Improved Interfaces for Human-Robot Interaction in Urban Search and Rescue, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Hague, the Netherlands.

Ballantyne, GH: 2002, Robotic surgery, telerobotic surgery, telepresence, and telementoring - Review of early clinical results, Surgical Endoscopy and Other Interventional Techniques 16(10): 1389-1402.

Berns, K and Hirth, J: 2006, Control of facial expressions of the humanoid robot head ROMAN, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems: 3119-3124, Beijing, China.

Billingsley, J, Oetomo, D and Reid, J: 2009, Agricultural robotics, IEEE Robotics & Automation Magazine 16(4): 16-16, 19.

Boutteau, R, Savatier, X, Ertaud, JY and Mazari, B: 2008, An omnidirectional stereoscopic system for mobile robot navigation, Proceedings of the International Workshop on Robotic and Sensors Environments (ROSE): 138-143, Ottawa, Canada.

Breazeal, CL: 2002, Designing Sociable Robots, The MIT Press.

Brooker, JP, Sharkey, PM, Wann, JP, and Plooy, AM: 1999, A helmet mounted display system with active gaze control for visual telepresence, Mechatronics 9(7): 703-716.

Burgard, W, Cremers, AB, Fox, D, Hahnel, D, Lakemeyer, G, Schulz, D, Steiner, W, and Thrun, S: 1999, Experiences with an interactive museum tour-guide robot, Artificial Intelligence 114(1-2): 3-55.

Candido, S, Kim, YT and Hutchinson, S: 2008, An Improved Hierarchical Motion Planner for Humanoid Robots, Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), Daejeon, Korea.

Chen, Y: 2010, Motion mechanism and simulation of the human jumping robot, Proceedings of the International Conference on Computer Design and Applications (ICCDA) 3: 361-364, Qinhuangdao, China.

Coradeschi, S, Ishiguro, H, Asada, M, Shapiro, SC, Thielscher, M, Breazeal, C, Mataric, MJ, and Ishida, H: 2006, Human-Inspired Robots, IEEE Intelligent Systems 21(4): 74-85.

Dr. Robot, http://www.intouch-health.com/

DVE Telepresence, http://www.dvetelepresence.com/

Engelberger, G: 2001, NASA's Robonaut, Industrial Robot 28(1): 35-39.

Garcia, E, Jimenez, MA, De Santos, PG and Armada, M: 2007, The evolution of robotics research, IEEE Robotics & Automation Magazine 14(1): 90-103.  

Gerrard, A, Tusia, J, Pomeroy, B, Dower, A and Gillis, J: 2010, On-call physician-robot dispatched to remote Labrador, News in Health, Media Centre, Dalhousie University, Canada.

Giraffe, http://www.headthere.com/products.html/

Green, PS, Hill, JW, Jensen, JF, and Shah, A: 1995, Telepresence surgery, IEEE Engineering in Medicine and Biology Magazine 14(3): 324-329.

Gromyko, A: 2009, Palletising_robot.jpg, retrieved from: http://commons.wikimedia.org/wiki/ File:Palletising_robot.jpg/

Guizzo, E: 2010, The man who made a copy of himself, IEEE Spectrum 47(4): 44 – 56.

Hawksford, MOJ: 2002, Scalable multichannel coding with HRTF enhancement for DVD and virtual sound systems, Journal of the Audio Engineering Society 50(11): 894-913.

Hoggett, R: 2009, TelevoxWensley21Feb1928.jpg, retrieved from: http://cyberneticzoo.com/?p=656/

Imai, M, Ono, T, and Ishiguro, H: 2003, Physical relation and expression: joint attention for human-robot interaction, IEEE Transactions on Industrial Electronics 50(4): 636-643.

International Standard Organization: 1994, ISO 8373: Manipulating industrial robots – Vocabulary.

International Standard Organization: 2006, ISO 10218-1: Robots for industrial environments - Safety requirements - Part 1: Robot.

Intuitive Surgical: 2010, da Vinci® Surgical System, http://www.intuitivesurgical.com/index.aspx

Kim, CH, Kim, D and Oh, YH: 2006, Adaptation of human motion capture data to humanoid robots for motion imitation using optimization, Journal of Integrated Computer-Aided Engineering 13(4): 377-389.

Lei, BJ, Chang, C, and Hendriks, EA: 2004, An efficient image-based telepresence system for videoconferencing, IEEE Transactions on Circuits and Systems for Video Technology 14(3): 335-347.

Li, D, Rau PL, and Li, Y: 2010, A Cross-cultural Study: Effect of Robot Appearance and Task, International Journal of Social Robotics 2(2): 175-186.

Lister, MB and Wunderlich, JT: 2002, Development of software for mobile robot control over a radio frequency communications link, Proceedings of IEEE Southeast Conference: 414 – 417, Columbia, United States.

Lytle, JM: 2002, Robot care bears for the elderly, BBC News, Thursday, 21 February, 2002.

McKellar, I: 2006, Elektro and Sparko, retrieved from: http://www.flickr.com/photos/ 76722295@N00/263022490/

Möller, E: 2005, Leonardo-Robot3.jpg, retrieved from http://en.wikipedia.org/wiki/File:Leonardo-Robot3.jpg/

Mori, M: 1970, The uncanny valley, Energy 7(4): 33-35.

Neves, M and Oliveira, E: 1997, A control architecture for an autonomous mobile robot, Proceedings of First International Conference on Autonomous Agents, California, USA.

Oberer, S, Malosio, M, and Schraft, RD: 2006, Investigation of Robot-Human Impact, Proceedings of the Joint Conference on Robotics 87-103.

PEBBLES, http://www.ryerson.ca/pebbles/

Pineau, J and Atrash, A: 2007, SmartWheeler: A robotic wheelchair test-bed for investigating new models of human-robot interaction, AAAI Spring Symposium on Multidisciplinary Collaboration for Socially Assistive Robotics.

Prassler, E, Scholz, J, and Fiorini, P: 2001, A robotics wheelchair for crowded public environment, IEEE Robotics & Automation Magazine 8(1): 38-45

Rehnmark, F, Bluethmann, W, Mehling, J, Ambrose, RO, Diftler, M, Chu, M, and Necessary, R: 2005, Robonaut: the ‘short list’ of technology hurdles, Computer 38(1): 28-37.

Robot Institute of America: 1979, RIA Worldwide Robotics Survey and Directory, Robotic Institute of America, P.O. Box 1366, Dearborn, Michigan, U.S.A.

Satava, RM: 1999, Emerging technologies for surgery in the 21st century, Archives of Surgery 134(11): 1197-1202.

Satoh, H, Kawabata, T and Sankai, Y: 2009, Bathing care assistance with robot suit HAL, Proceedings of IEEE International Conference on Robotics and Biomimetics, 498-503, Guilin, China.

Schaefer, MT: 2008, Bios_robotlab_writing_robot.jpg, retrieved from: http://commons.wikimedia.org/ wiki/File:Bios_robotlab_writing_robot.jpg

Schulz, D, Burgard, W, Fox, D, Thrun, S, and Cremers, AB: 2000, Web interfaces for mobile robots in public places, IEEE Robotics & Automation magazine 7(1): 48-56.

Schurr, MO, Buess, G, Neisius, B, and Voges, U: 2000, Robotics and telemanipulation technologies for endoscopic surgery - A review of the ARTEMIS project, Surgical Endoscopy and other Interventional Techniques 14(4): 375-381.

Spudis PD: 2001, The case for renewed human exploration of the Moon, Earth Moon and Planets 87(3): 159-169.

Stoker, CR, Burch, DR, Hine, BP III, and Barry, J: 1995, Antarctic undersea exploration using a robotic submarine with a telepresence user interface, IEEE Expert 10(6): 14-23.

Suomela, J and Halme, A: 2001, Cognitive Human Machine Interface Of Workpartner Robot, Proceedings of  Intelligent Autonomous Vehicles 2001 Conference (IAV2001), Sapporo, Japan.

Takahashi, M, Suzuki, T, Shitamoto, H, Moriguchi, T and Yoshida, K: 2010, Developing a mobile robot for transport applications in the hospital domain, Robotics and Autonomous Systems 58(7): 889-899.

The Robotic Industries Association: 1992, ANSI/RIA R15.06: Industrial Robots and Robot Systems - Safety Requirements, ANSI Standard.

Trahanias, P, Burgard, W, Argyros, A, Hahnel, D, Baltzakis, H, Pfaff, P, and Stachniss, C: 2005, TOURBOT and WebFAIR: Web-operated mobile robots for tele-presence in populated exhibitions, IEEE Robotics & Automation Magazine 12(2): 77-89.

Wada, K, Shibata, T, Musha, T and Kimura, S: 2008, Robot therapy for elders affected by dementia, IEEE Engineering in Medicine and Biology Magazine 27(4): 53 – 60.

Winfield, AFT and Holland, OE: 2000, The application of wireless local area network technology to the control of mobile robots, Microprocessors and Microsystems 23: 597–607.

Yamato, J, Brooks, R, Shinozawa, K, and Naya, F: 2003, Human-Robot Dynamic Social Interaction, NTT Technical Review 1(6): 37-43.

Yasuda, T, Suehiro, N and Tanaka, K: 2009, Strategies for collision prevention of a compact powered wheelchair using SOKUIKI sensor and applying fuzzy theory, IEEE International Conference on Robotics and Biomimetics (ROBIO): 202 – 208, Guilin, China.

Index

Robot

Robotics

Human-robot interaction

Teleoperation

Telepresence

Anthropomorphism

Biography

Jun-Ming Lu received his Ph.D. degree in industrial engineering and engineering management from National Tsing Hua University, Taiwan, in 2009. He is currently a postdoctoral researcher in Gerontechnology Research Center, Yuan Ze University. His research interests are ergonomics, digital human modeling, and gerontechnology.

Tzung-Cheng Tsai received his Ph.D. degree in mechanical engineering from Yuan Ze University, Taiwan, in 2007. He is currently a researcher in Green Energy & Environment Research Laboratories, Industrial Technology Research Institute, Taiwan. His research interests are telepresence, teleoperation, and green energy.

Yeh-Liang Hsu received his Ph.D. degree in mechanical engineering from Stanford University, United States, in 1992. He is currently a professor in the Department of Mechanical Engineering, the director of the Gerontechnology Research Center, and the secretary general of Yuan Ze University, Taiwan. His research interests are mechanical design, design optimization, and gerontechnology.