Authors: Jun-Ming Lu, Yeh-Liang Hsu, Chia-Hung Lu, Po-Er Hsu, Yen-Wei Chen and Ju-An Wang (2011-06-30); recommended: Yeh-Liang Hsu (2011-06-30).
Note: This paper is presented in the “2011 International Design Alliance (IDA) Congress Education Conference,” October 24-26, 2011, Taipei, Taiwan.

Development of a telepresence robot to rebuild the physical face-to-face interaction in remote interpersonal communication

Abstract

Interpersonal communication is essential in social networking. People communicate with one another not merely through verbal and vocal means; nonverbal communication is actually more powerful than words in conveying messages, emotions, and feelings. This cannot be fully supported by traditional communication tools. Therefore, there is considerable demand for a new communication tool that helps to rebuild face-to-face interaction and communication. For this purpose, a telepresence robot called TRiCmini was developed with enhanced interactions to enrich interpersonal communication among people in remote places.

The basic concept of TRiCmini is to use it as the avatar or agent of a person in a remote home environment. Through the wireless LAN router located in the home environment, TRiCmini is connected to the Internet for data transmission. The remote user manipulates TRiCmini through the user interface on a laptop/desktop computer to move it around freely and communicate with the local user, who stays with the robot in the home environment. In addition to the ability to travel around freely, the robot is intended to provide both verbal and nonverbal means of interpersonal communication. Thus, TRiCmini is provided with a human-like appearance, facial expressions, and body movement. In this way, the face-to-face interaction between the two ends can be rebuilt as if the two users were together in the same place.

Based on the evaluation results with 60 subjects, TRiCmini demonstrates an extensive ability to rebuild face-to-face interactions in remote interpersonal communication by means of multimedia interaction, omnidirectional movement, facial expressions, whole-body emotions, obstacle avoidance, and automatic navigation. With these features, the distance barrier can be overcome to help maintain satisfactory social networks between people.

Keywords: face-to-face interaction, interpersonal communication, telepresence robot.

1.    Introduction

People communicate with one another to share ideas and thoughts, as well as to express feelings and emotions. Thus, interpersonal communication is essential for social networking. Traditionally, interpersonal communication is done by talking to or interacting with one another face to face. Indirect means such as leaving handwritten messages are also possible, but they are obviously less efficient; the lack of real-time feedback is a serious limitation. With the advancement of technology, humans have developed useful alternatives for indirect communication. First, the invention of the telephone allowed people to carry on a conversation with ease, regardless of the distance barrier. Further, mobile phones made this even more convenient, so that one can converse no matter where he/she is.

These communication tools did facilitate interpersonal communication in terms of verbal communication. Nevertheless, verbal communication is not the only means of face-to-face communication. In fact, nonverbal communication, such as facial expression and body language, is more powerful and efficient in conveying ideas, thoughts, feelings, and emotions. Mehrabian and Ferris (1967) reported that in face-to-face communication, cues from spoken words, voice tone, and facial expression contribute 7%, 38%, and 55% respectively to total comprehension. Besides, based on analyses of recorded videotapes, Argyle et al. (1970) claimed that nonverbal cues had 4.3 times the effect of verbal cues in communication. These findings support the importance of nonverbal communication.

In order to enable nonverbal communication between people at two remote sites, videoconferencing was developed based on video cameras or webcams. It opened a new page in remote interpersonal communication: people can directly hear the dialogists’ voices, see their faces, and grasp their intentions through their body movements. This is quite similar to face-to-face communication, but the sense of presence is insufficient. For example, a videoconferencing system works only when one stays within the field of view of the camera and sits in front of the display. However, face-to-face communication is not conducted only in this way: if one does not go to the dialogist’s side, the dialogist can also come around to start the conversation. Besides, even though one can see a one-to-one scale image via the videoconferencing system, it is still a three-dimensional scene displayed on a two-dimensional screen. Moreover, the lack of haptic feedback limits the physical interaction that people are accustomed to in face-to-face interaction.

Aiming to enhance the sense of presence in videoconferencing, telepresence, which allows a person to feel as if present at a remote site, has become popular in the development of advanced communication tools. In the beginning, telepresence was actually an adapted version of teleoperation that provides the remote participant with a feeling of “actually being present” (Minsky, 1980). From this point of view, telepresence focuses mainly on the remote controller’s side. This is also common in the development of telepresence robots. A telepresence robot is a teleoperated robot that gives the controller a stronger sense of presence at a remote site; it is like a videoconferencing system with moderate mobility. For communication, the remote controller is able to drive the robot to the target dialogist’s side. Hence, some of the limitations of videoconferencing can be eliminated. However, face-to-face communication is a two-way interaction, and an enhanced sense of presence for only the remote controller is definitely not enough. It is also critical to provide the dialogist with a sense of the physical presence of the one who controls the telepresence robot from the remote side. In other words, towards rebuilding physical face-to-face communication between people, the telepresence robot needs to be designed so that the user who stays with the robot feels as if a real person were standing right in front of him/her.

Therefore, the objective of this study is to develop a telepresence robot that helps to rebuild physical face-to-face interaction in remote interpersonal communication. Following an adapted communication model, the telepresence robot aims to serve as a new communication tool that enables both verbal and nonverbal communication. In the following section, the development of the telepresence robot will be described based on an exploration of users’ needs. Subsequently, the remote interpersonal communication enabled by the telepresence robot will be illustrated. Moreover, the user evaluation procedure and findings will be presented to suggest the pros and cons of the telepresence robot, as well as ideas for further development. Finally, the research highlights will be reviewed to conclude with the potential benefits and applications in daily use.

2.    The Development of TRiCmini

This study aims to develop a telepresence robot for daily use as a communication tool. Thus, the first step is to explore users’ needs by depicting a user scenario of remote interpersonal communication through the use of a telepresence robot. After that, the design requirements can be clearly identified and then followed to determine the appropriate enabling technologies.

2.1.  User Scenario and Design Requirements

As shown in Figure 1, there are two kinds of users in the remote interpersonal communication enabled by the telepresence robot, i.e. the remote user and the local user. The remote user controls the telepresence robot to move around and communicate with the local user, who stays with the robot in the home environment.

Figure 1. User scenario of the telepresence robot

Generally, the remote user initiates the operation using a desktop/laptop computer. When the remote user launches the user interface software, he/she selects which local user to connect to. Here, the friend list may contain his/her family members, relatives, and friends, differentiated by specified usernames, photos, and IP addresses. After logging into the desired robot, the remote user can manipulate it to interact with the local user. In the meantime, the robot gives an auditory or visual signal to inform the local user that someone from far away has just logged in.

After taking control of the robot, the remote user can see through the robot’s eyes to observe the surrounding environment and the local user’s activities. If the local user is not within the visible range, the remote user can move the robot in any desired direction to reach the local user. Since the space may be too crowded for the robot to move freely and safely, the robot should be equipped with sensors to detect obstacles; integrated with an optimization algorithm, collisions can be avoided. In case the local user does not notice the presence of the robot, the remote user can use the microphone to make sounds through the robot’s speaker, such as saying “Hello!” to the local user. Once the local user turns to the robot and responds, the remote user instantly receives this information through the video displayed on the screen and the audio transmitted through the speaker or headphone at his/her side. At this moment, interpersonal communication through the telepresence robot starts at both ends.

By seeing the real-time video of the local user and listening to what he/she is saying, it feels as if the local user were right in front of the remote user. As for the local user, the remote user is projected onto the telepresence robot, which looks like a real human and offers the possibility of physical face-to-face interaction. Further, the remote user can utilize the robot’s facial expressions and body movement to show richer feelings and emotions to the local user. This is enabled by raising the robot’s level of anthropomorphism with a more lifelike appearance, body segments, and robotic mechanisms. The enriched human-robot-human interaction therefore lets the local user feel more engaged and involved, while the remote user’s projection is greatly enhanced as well.

In conversation, the local user cannot be expected to stand or sit still right in front of the robot for a long time. Thus, as the local user walks around in the home environment, the sensors and the associated algorithms allow the telepresence robot to follow him/her to continue the conversation. If there is any barrier to the robot’s movement, the sensors and the associated algorithms collaborate to avoid collisions, or the local user may simply pick the robot up and carry it through. Once the telepresence robot runs out of battery power, all the local user has to do is plug it into the nearest electrical socket for charging, just like charging an ordinary home appliance.

In summary, as Figure 1 illustrates, the activities of the local user are presented on the display of the user interface, while the behaviors of the remote user are projected onto the telepresence robot. This form of interpersonal communication helps to rebuild physical face-to-face interaction between people, even though they are not actually together. Besides, with its compact size and light weight, the telepresence robot is similar to other ordinary home appliances; the local user can place it anywhere as desired. Moreover, the local user is also allowed to create customized clothes for the robot to make it friendlier. Combining all these features, the local user can regard the robot as the agent of the remote user and be willing to share their lives with it.

According to this user scenario, the users’ demands for the telepresence robot include real-time transmission of audio and video, the ability to move freely in all directions, a higher level of anthropomorphism, compact size and light weight, the presentation of expressions and emotions, and a user-friendly, intuitive interface for robot control. Considering these needs, the design requirements and associated enabling technologies are summarized in Table 1.

Table 1. Design requirements and enabling technologies of the telepresence robot

Design requirements

Enabling technologies

2.2.  System Infrastructure

The system infrastructure of TRiCmini, including both local and remote users, is illustrated in Figure 2.

Figure 2. System infrastructure of TRiCmini

On the one hand, the communication interface of the local user is TRiCmini itself, which contains five modules that cooperate with one another. Through the wireless LAN router located in the home environment, TRiCmini is connected to the Internet for data transmission. On the other hand, the remote user relies on the user interface software on a laptop/desktop computer to communicate with the local user. The remote environment can be connected via either a wired or wireless network. In the following sections, the technical details of the five modules of TRiCmini, as well as the user interfaces for local/remote users, will be explained.

2.2.1.     Communication module

This module is the core of TRiCmini. In order to achieve a compact size and light weight, a mobile data server consisting of a PIC server mounted on a peripheral application board was developed. Besides, the PIC server is integrated with a wireless LAN card for data transmission. In general use, commands transmitted from the remote user are received via the wireless LAN adapter in the home environment. After being delivered to and decoded by the PIC server, the information is sent to the emotion module or the movement module for subsequent operations.
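The decode-and-route behavior described above can be sketched as a small dispatcher. The packet layout and the one-byte command codes below are hypothetical illustrations; the paper does not specify the actual protocol used by the PIC server.

```python
# Hypothetical one-byte command codes, grouped by the target module.
MOVEMENT_CODES = {0x01: "forward", 0x02: "backward", 0x03: "left",
                  0x04: "right", 0x05: "rotate_cw", 0x06: "rotate_ccw"}
EMOTION_CODES = {0x10: "happiness", 0x11: "anger", 0x12: "disgust",
                 0x13: "sadness", 0x14: "fear", 0x15: "surprise"}

def dispatch(packet: bytes) -> tuple[str, str]:
    """Decode a received packet and return (target_module, action)."""
    code = packet[0]
    if code in MOVEMENT_CODES:
        return ("movement", MOVEMENT_CODES[code])
    if code in EMOTION_CODES:
        return ("emotion", EMOTION_CODES[code])
    raise ValueError(f"unknown command code: {code:#04x}")

print(dispatch(bytes([0x01])))  # ('movement', 'forward')
print(dispatch(bytes([0x13])))  # ('emotion', 'sadness')
```

In the actual robot, this logic would run as firmware on the PIC server, with the two handlers forwarding decoded actions to the movement and emotion modules.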

2.2.2.     Movement module

This module contains one PIC controller, three sets of motors with omnidirectional wheels, and three ultrasonic sensors. Once a command is received from the communication module, the PIC controller runs an algorithm to determine how the motors drive the omnidirectional wheels. In this way, TRiCmini can freely move forwards/backwards or left/right at a velocity of about 12 cm/s, as well as turn clockwise/counterclockwise. For obstacle avoidance and automatic navigation, the ultrasonic sensors help to detect objects in the surrounding environment. After the algorithms are executed on the PIC controller, the motors work in varied ways for different purposes.
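The mapping from a desired body velocity to individual wheel speeds can be sketched with the standard inverse kinematics of a three-wheel omnidirectional base. The wheel spacing of 120 degrees and the body radius below are assumptions for illustration; the paper gives no geometric parameters beyond the ~12 cm/s travel speed.

```python
import math

R = 0.10  # assumed distance from robot center to each wheel, in meters
WHEEL_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # assumed layout

def wheel_speeds(vx: float, vy: float, omega: float) -> list[float]:
    """Linear speed of each wheel (m/s) for body velocity (vx, vy)
    and rotation rate omega (rad/s), with rolling directions tangential
    to the body circle."""
    return [-math.sin(a) * vx + math.cos(a) * vy + R * omega
            for a in WHEEL_ANGLES]

# Moving straight "forward" (along +y) at the reported 12 cm/s:
speeds = wheel_speeds(0.0, 0.12, 0.0)
print([round(s, 3) for s in speeds])  # [0.12, -0.06, -0.06]
```

The same function covers pure rotation (vx = vy = 0), where all three wheels turn at the same speed R·omega, which is how the clockwise/counterclockwise turns would be produced.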

2.2.3.     Emotion module

For a more engaging user experience, TRiCmini is given the ability to present facial expressions and whole-body emotions. As Figure 3 shows, LED arrays allow the remote user to switch TRiCmini’s facial expression among “happiness,” “anger,” “disgust,” “sadness,” “fear,” and “surprise.” In addition, servo motors create TRiCmini’s arm motions. By combining these two features with special patterns of movement, TRiCmini can produce multiple whole-body emotions. When the remote user chooses one of the whole-body emotions, the command is transmitted to the communication module via the wireless network. Then, the movement module is triggered for the associated pattern of movement, while the emotion module presents the corresponding facial expression and arm gestures.

Figure 3. Facial expressions of TRiCmini
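The fan-out of one whole-body emotion into a facial expression, an arm gesture, and a movement pattern can be sketched as a lookup table. The gesture and movement pattern names below are hypothetical; the paper does not list the actual patterns assigned to each emotion.

```python
# Hypothetical mapping from whole-body emotions to per-module actions.
WHOLE_BODY_EMOTIONS = {
    "happiness": {"face": "happiness", "arms": "raise_both", "move": "spin"},
    "sadness":   {"face": "sadness",   "arms": "lower_both", "move": "back_slow"},
    "surprise":  {"face": "surprise",  "arms": "raise_both", "move": "back_quick"},
}

def perform(emotion: str) -> list[str]:
    """Return the sequence of module commands for one whole-body emotion."""
    plan = WHOLE_BODY_EMOTIONS[emotion]
    return [f"emotion:face={plan['face']}",
            f"emotion:arms={plan['arms']}",
            f"movement:pattern={plan['move']}"]

print(perform("happiness"))
# ['emotion:face=happiness', 'emotion:arms=raise_both', 'movement:pattern=spin']
```

A table-driven design like this keeps the emotion module and movement module independent: each consumes only the commands addressed to it.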

2.2.4.     Audio/Video module

This module enables two-way audio and one-way video communication. For a more stable Internet connection and higher data-processing performance, an IP camera with its own IP address is used. The IP camera can be panned and tilted by the remote user to track the local user. Integrated with an embedded microphone and an additional speaker, the local user thus has audio input/output with TRiCmini. The remote user relies on the video output and audio input/output of the laptop/desktop computer.

2.2.5.     Power module

This module includes a 12V LiFePO4 battery and a power management circuit board. If the battery is about to run out, an LED on the power management circuit board flashes. Then, as the local user plugs the robot into an electrical socket for charging, the LED stays lit. Once the battery is fully charged, the LED goes off.
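The three indicator states described above reduce to a small decision function. The voltage thresholds below are assumptions for illustration; the paper does not give the low-battery or full-charge voltages of the 12V LiFePO4 pack.

```python
LOW_VOLTAGE = 11.0   # assumed low-battery threshold (volts)
FULL_VOLTAGE = 13.8  # assumed fully-charged resting voltage (volts)

def led_state(voltage: float, plugged_in: bool) -> str:
    """Charging indicator: flashes when low, lit while charging,
    off when fully charged or in normal discharge."""
    if plugged_in:
        return "off" if voltage >= FULL_VOLTAGE else "on"
    return "flashing" if voltage <= LOW_VOLTAGE else "off"

print(led_state(10.8, False))  # flashing
print(led_state(12.5, True))   # on
print(led_state(13.9, True))   # off
```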

2.2.6.     Local user interface

The appearance of TRiCmini itself is the user interface for local users. For a higher level of anthropomorphism, its structure consists of a head (integrated with the LED arrays for facial expressions and the IP camera for real-time monitoring), a body (containing the five modules), two arms (integrated with servo motors for arm motions), and feet (equipped with omnidirectional wheels for movement). Besides, as Figure 4 presents, TRiCmini is provided with several pieces of baby clothing to make it more human-like. Further, considering convenience of use and friendliness, TRiCmini is compact, standing 43 cm tall and weighing 3.5 kg.

Figure 4. The conceptual sketch, inner structure, and two sets of TRiCmini (left to right)

2.2.7.     Remote user interface

As illustrated in Figure 5, the user interface for remote users enables user-friendly, intuitive manipulation of the camera position, omnidirectional movements, obstacle avoidance, automatic navigation, facial expressions, and whole-body emotions. Towards a universal design for all users, graphic symbols are used for better recognition. Besides, consistency and usability of the user interface are emphasized, and the content layout follows user experiences and expectations.

Figure 5. The remote user interface of TRiCmini

2.3.  Summary

To enable mass production, a standard assembly process was developed for the five modules of TRiCmini. All the components are off-the-shelf parts that can easily be found and purchased in stores. In this way, the robot is no longer an extremely complicated machine available only in laboratories. Besides, the production cost of a lab prototype of TRiCmini is at an acceptable level of about 1,000 US dollars.

3.    The Remote Interpersonal Communication Enabled by TRiCmini

The purpose of TRiCmini is to rebuild physical face-to-face interaction in remote interpersonal communication. Thus, in this section, the communication model of TRiCmini will be described in detail for a better understanding. In addition, the way TRiCmini facilitates both verbal and nonverbal communication will be explained along with the corresponding elements and features of TRiCmini.

3.1. The Communication Model

A traditional communication model consists of the sender, the encoding process of the sender, the message, the medium or channel carrying the message, the decoding process of the receiver, and the receiver (Shannon and Weaver, 1949). In verbal communication, the speaker and the listener are the sender and the receiver respectively. In the encoding process, the speaker transforms his/her thoughts or ideas into words as the message. The medium can be the speaker’s own voice, telephones, or mobile phones. After receiving the message and passing through the decoding process, the listener mentally grasps the message in terms that are meaningful to him/her. Once the listener decides to respond, the two sides switch their roles and form a closed loop of the transactional model.

In the remote interpersonal communication enabled by TRiCmini, both verbal and nonverbal communication are included for better interaction. Thus, there are two adapted communication models. Figure 6 shows the communication model from the remote user’s side. The remote user is the sender who starts the communication, while the local user serves as the receiver. On the one hand, the sender transforms his/her thoughts into words and speaks through the microphone. As the local user hears it from the speaker, the message is then interpreted for his/her understanding. This verbal communication process is almost the same as in traditional human communication. On the other hand, if the remote user wants to express his/her feelings in ways other than words, he/she can control the robot to do so. In this case, the message is either TRiCmini’s facial expression or body movement, and the medium carrying the message is TRiCmini itself.

Figure 6. The communication model of TRiCmini from the remote user’s side

From the local user’s side, as illustrated in Figure 7, the local user and the remote user become the sender and receiver, respectively. The local user’s thoughts and feelings are transformed into spoken words for verbal communication, and into facial expressions or body movements for nonverbal communication. As TRiCmini’s camera and microphone capture the information and transmit it to the remote side, the remote user can experience exactly what the local user shows to TRiCmini. In this case, the message is carried by TRiCmini, as well as by the display and speaker of the remote user’s computer. As the remote user sees and hears what the local user presents, the decoding process is the same as in traditional face-to-face communication.

Figure 7. The communication model of TRiCmini from the local user’s side

3.2. Verbal Communication

Verbal communication through TRiCmini is similar to using a telephone or mobile phone. The remote user’s words are spoken through TRiCmini’s speaker; for the local user, it is like listening to the dialogist through TRiCmini’s mouth. Naturally, the voice becomes louder or softer as TRiCmini comes closer or moves farther away, similar to real face-to-face interaction between two people. As the local user responds, the voice message is received by the microphone and then transmitted to the remote user’s side. Through the speaker or earphone, the remote user hears exactly what the local user says. Similarly, the volume depends on the distance between the local user and TRiCmini and their relative positions.

3.3. Nonverbal Communication

In addition to spoken words, messages can also be communicated through other symbols, such as the appearance, proxemics (physical space in communication), eye gaze, body movement and position, and haptics. In the remote interpersonal communication enabled by TRiCmini, all of these activities are done by TRiCmini.

The appearance of TRiCmini is exactly the projection of the remote user’s image. If TRiCmini is wearing a girl’s suit, the local user will soon identify it as the agent of a “she.” As for proxemics, there are generally four territories regarding the distance between the sender and the receiver: intimate space, personal space, social space, and public space (Hall, 1966). The remote user is able to select the most appropriate one according to the relationship. This is not possible in videoconferencing, and thus makes a significant difference. Further, the lack of eye contact, a frequent complaint about videoconferencing, can also be addressed in the remote interpersonal communication with TRiCmini. While communicating with TRiCmini, the local user may look at its eyes, just as people usually do in a conversation. Since the camera is set right between TRiCmini’s two eyes, the displayed image makes the remote user naturally feel the local user’s eye gaze. This in turn enables eye contact between the two ends, which facilitates the better involvement of both users. If the local user is not looking at TRiCmini’s eyes, the remote user can adjust the position of the camera horizontally and vertically; TRiCmini then turns its head to the local user, just as a human does in face-to-face communication. Combined with the free movement enabled by the omnidirectional wheels, the flexible and natural control of TRiCmini helps maintain eye contact during the remote interpersonal communication.
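The remote user’s choice of interpersonal distance can be sketched as a lookup over Hall’s (1966) four zones. The metric boundaries below are the commonly cited approximations of Hall’s distances, used here for illustration.

```python
# Commonly cited approximations of Hall's (1966) zone boundaries, in meters.
ZONES = [("intimate", 0.45), ("personal", 1.2), ("social", 3.6)]

def zone(distance_m: float) -> str:
    """Classify a robot-to-person distance into one of Hall's four zones."""
    for name, upper in ZONES:
        if distance_m < upper:
            return name
    return "public"

print(zone(0.3))  # intimate
print(zone(1.0))  # personal
print(zone(2.0))  # social
print(zone(5.0))  # public
```

In practice, the remote user would steer TRiCmini until its measured distance to the local user falls in the zone matching their relationship, e.g. personal space for a family member.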

In addition to eye contact, facial expressions are informative when looking at one’s face. For example, when one laughs without saying a word, people know that he/she is in a good mood. Thus, TRiCmini is given the six universal facial expressions proposed by Ekman (1971), as shown in Figure 3. With this feature, the local user may experience a feeling similar to looking at the remote user himself/herself face to face. On the remote user’s side, of course, it is also possible to see the local user’s facial expression via the displayed real-time video. Moreover, while talking face to face, people use hand gestures or more complex combinations of body movements to emphasize certain words or phrases. With TRiCmini, it is also possible to do so: actuated by servo motors, TRiCmini can present simple gestures with its articulated arms, such as raising and waving the hands. Combined with the whole-body movement enabled by the omnidirectional wheels, richer emotions can be shown to the local user for a more interactive communication. On the remote user’s side, the local user’s intention is likewise grasped through the real-time video displayed on the computer; all the remote user needs to do is move TRiCmini to a moderate position relative to the local user and adjust the camera appropriately to see clearly.

Finally, haptic feedback also helps greatly in conveying emotions in face-to-face communication. In a traditional videoconference, the lack of haptic communication often results in a feeling of “talking to a person behind a wall or from far away.” With TRiCmini, however, the local user is able to experience the physical presence of the remote user by patting TRiCmini’s head or holding it during conversation. By physically interacting with TRiCmini, the local user develops a stronger sense of being together with the remote user. This is exactly what most people desire in remote interpersonal communication.

4.    Evaluation of TRiCmini

In order to understand whether remote interpersonal communication can be realized effectively between the two sides through the use of TRiCmini, an evaluation was conducted with both the remote user and the local user. The findings help to validate the hypotheses, as well as provide feedback for further improvements.

4.1. The Remote User

What the remote user expects is to communicate easily with the local user by using TRiCmini. Hence, the key falls on the usability of the remote user interface. To investigate this, 10 subjects (5 males and 5 females, average age 22.3 years) were recruited to participate in the evaluation, which was conducted in a simulated home environment. Each subject was assigned a standard task: moving TRiCmini to a specified position, reading a text message through TRiCmini’s eyes, following the instructions to a specified room to find the dialogist, communicating with the dialogist, and finally returning to the starting point. Each subject wore the ViewPoint eye-tracking system to monitor his/her eye movement during the task, and the captured data were analyzed to discover behavioral characteristics. In addition, the operation time was recorded as a quantitative indicator of efficiency. After repeating the task three times, the subject was asked to complete a questionnaire for collecting qualitative evidence of user satisfaction.

Based on the results of paired t-tests, a significant difference was found between the operation times of the first and second trials (p-value = 0.00). This reveals that the remote user interface offers good learnability and memorability, so that users can greatly improve their control efficiency after practice. Comparing the operation times between the second and third trials, there was no significant difference (p-value = 0.20), even though the subjects took relatively less operation time in the third trial. This indicates that users may reach near-optimal efficiency after only a single practice trial.
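The paired t-test used above can be sketched as follows. The operation times below are made-up illustrative values, not the study’s per-subject data, which the paper does not list.

```python
import math
from statistics import mean, stdev

# Hypothetical per-subject operation times (seconds) for two trials.
trial1 = [182, 165, 201, 175, 190, 168, 210, 185, 172, 195]
trial2 = [140, 130, 150, 138, 145, 128, 160, 142, 131, 148]

def paired_t(x: list[float], y: list[float]) -> float:
    """t statistic of a paired t-test on the per-subject differences
    (df = n - 1)."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

t = paired_t(trial1, trial2)
print(round(t, 2))  # a large |t| (> 2.26 for df = 9) means significance
```

With n = 10 subjects the test has 9 degrees of freedom, so a |t| above roughly 2.26 corresponds to p < 0.05, matching the reported significant first-vs-second-trial difference.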

The eye-movement statistics show a common trend among all subjects: they spent most of the time (63.38% on average) looking at the display of the image captured by TRiCmini’s camera. In other words, they often kept looking at the image of the local user or the environment through TRiCmini’s eyes. A possible reason is the good design of the user interface, so that the user did not have to spend much time searching around the control panel. This in turn contributes to an intuitive and natural feeling in the remote interpersonal communication. Further, a negative correlation (r = -0.65) was found between the percentage of eye gaze on the display and the operation time. This indicates that as the user gets used to the control panel and spends less time searching around it, the operation becomes more efficient. Thus, further improvements can be made to the control panel to achieve even better efficiency.
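The correlation reported above is a standard Pearson r, sketched here on made-up illustrative values (the per-subject gaze percentages and operation times are not listed in the paper).

```python
import math
from statistics import mean

# Hypothetical per-subject values: % of time gazing at the display,
# and total operation time in seconds.
gaze_pct = [55, 60, 62, 65, 68, 70, 72, 58, 66, 63]
op_time  = [210, 200, 190, 180, 172, 165, 160, 205, 176, 188]

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

print(round(pearson_r(gaze_pct, op_time), 2))  # strongly negative
```

A negative r, as in the study’s r = -0.65, means that subjects who kept their gaze on the display longer tended to finish the task faster.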

The questionnaire used in the evaluation is based on a five-point Likert scale. The scores were averaged for each of the five usability attributes proposed by Nielsen (1994): learnability (3.6), memorability (3.8), efficiency (3.8), satisfaction (3.2), and errors (3.9). Consistent with the quantitative findings, the user interface provides good learnability, memorability, and efficiency. When comparing the feeling of using TRiCmini against real face-to-face communication, subjects generally did not find it very similar to communicating with a real human in front of them (average score 2.7, between “not so similar” and “moderate”). Besides, there was an even larger difference between traveling around using TRiCmini and walking around by oneself (average score 2.1). This may be due to insufficient feedback for the remote user while TRiCmini is moving; some subjects even caused TRiCmini to collide with the environment because they failed to correctly judge the distance between TRiCmini and surrounding objects. The means of control clearly require further improvement for more natural use.

4.2. The Local User

The evaluation with the local user was rather simple; no complicated tasks were assigned. Instead, the focus was on the local user’s feelings and preferences while communicating with the remote user through TRiCmini. Fifty subjects (29 males and 21 females, average age 21.6 years) were recruited for this session. The evaluation was also conducted in a simulated home environment. Each subject was asked to sit on a chair and wait for TRiCmini to come around to start the communication. After a three-minute conversation, the examiner manipulated TRiCmini to present the six facial expressions. For each expression, the subject had to choose one description out of 12 options: six were the universal expressions presented by TRiCmini, while the others were similar emotions that might be confused with the correct answers. Each subject also wore the ViewPoint eye-tracking system to monitor his/her eye movement during the task and provide behavioral criteria. After finishing the task, the subject was asked to complete a questionnaire based on a five-point Likert scale for collecting qualitative evidence of user satisfaction.

While communicating with TRiCmini, the subjects’ gaze focused on TRiCmini’s head, upper body, lower body, and arms for 64.54%, 8.88%, 4.14%, and 1.04% of the time, respectively; for the remaining 21.72% of the time, subjects were distracted and looked at other objects in the surrounding environment. As in face-to-face communication, the local user spent most of the time looking at TRiCmini’s head. This matches the result obtained from the questionnaire: when asked “Do you feel eye contact with TRiCmini?”, the average score was 3.50, between “normal” and “strongly.” From the remote user’s side, eye gaze can thus be ensured for better involvement.

As for identifying TRiCmini’s facial expressions and emotions, the percentages of correct identification were 100%, 92%, 14%, 84%, 22%, and 54% for “happiness,” “anger,” “disgust,” “sadness,” “fear,” and “surprise,” respectively. Obviously, the local user easily recognizes the remote user’s happiness, anger, and sadness. However, 28% of the subjects regarded “disgust” as “fear,” while 24% of the subjects misidentified “fear” as “disgust.” This not only indicates the need for improvement but also reflects that people may confuse these two emotions. In the questionnaire, when asked “Do you think TRiCmini’s facial expressions and emotions are clear?”, the average score was 3.50, between “moderate” and “clear.” Besides, when asked “Do you think you can understand the remote user’s emotion through TRiCmini’s facial expression and emotion?”, the average score was 3.30. Overall, the average score of the subjects’ opinion of the usefulness of TRiCmini’s facial expressions in remote communication was 3.90. Further, 72% of the subjects felt as if they were actually communicating with the remote user, whereas 26% regarded it as communicating with both TRiCmini and the remote user; only one subject considered the experience to be communicating with a robot. Moreover, when asked “Do you think you are communicating with the remote user face to face?”, the average score was 3.40. In summary, TRiCmini’s features do help the local user grasp the remote user’s true feelings and hence add value to remote interpersonal communication.

5.    Conclusion

TRiCmini demonstrates an extensive ability to help rebuild physical face-to-face communication by means of multimedia interaction, omnidirectional movement, facial expressions, whole-body emotions, obstacle avoidance, and automatic navigation. With these features, the distance barrier can be overcome, contributing to better satisfaction in remote interpersonal communication. TRiCmini thus has the potential to serve as a new communication tool for daily use.

References

Argyle, M., Salter, V., Nicholson, H., Williams, M. & Burgess, P. (1970). The communication of inferior and superior attitudes by verbal and non-verbal signals. British Journal of Social and Clinical Psychology, 9, 222-231.

Ekman, P. (1971). Universals and cultural differences in facial expressions of emotion. In Nebraska Symposium on Motivation. Lincoln: University of Nebraska Press.

Hall, E. T. (1966). The Hidden Dimension. New York: Doubleday.

Mehrabian, A. & Ferris, S. R. (1967). Inference of attitudes from nonverbal communication in two channels. Journal of Consulting Psychology, 31(3), 248-252.

Minsky, M. (1980). Telepresence. Omni, 45-51.

Nielsen, J. (1994). Usability Engineering. San Francisco: Morgan Kaufmann Publishers Inc.

Shannon, C. E. & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.

Tsai, T. C., Hsu, Y. L., Ma, A. I., King, T. & Wu, C. H. (2007). Developing a telepresence robot for interpersonal communication with the elderly in a home environment. Telemedicine and e-Health, 13(4), 407-424.