Authors: Yi-Shin Chen, Jun-Ming Lu, Yeh-Liang Hsu (2013-05-03); recommended: Yeh-Liang Hsu (2014-09-09).
Note: This paper was presented at The 11th International Conference on Smart Homes and Health Telematics (ICOST 2013), Singapore, and is included in Inclusive Society: Health and Wellbeing in the Community, and Care at Home, Lecture Notes in Computer Science, Vol. 7910, pp. 298–303.

Design and evaluation of a telepresence robot for interpersonal communication with older adults


Aging is associated with an increased risk of isolation. Interpersonal communication with family members, friends, and caregivers is crucial to healthy aging. This paper presents a telepresence robot, “TRiCmini+”, which can act as an agent of the children or caregivers in an older adult’s home environment to duplicate three-dimensional face-to-face interpersonal communication. TRiCmini+ separates into the “brain” (a tablet) and the “body” (the robotic vehicle). With this structure, the robot control software is an App that can be downloaded, maintained, and updated easily through the Internet. TRiCmini+ is integrated with social network services such as Google Talk and Facebook to provide a wide range of communication and information sharing easily and conveniently. The effectiveness of using TRiCmini+ in communication was evaluated. The results showed that TRiCmini+, which supports both verbal and nonverbal communication, achieves better telepresence performance than a traditional telepresence robot that supports only the verbal side of interpersonal communication.

Keywords: Interpersonal communication, telepresence robot, nonverbal communication.

1.    Introduction

Aging is associated with an increased risk of isolation. Interpersonal communication with family members, friends, and caregivers is crucial to healthy aging. With the help of information and communication technologies, older adults may expect more than the transmission of vital sign monitoring data for healthcare purposes: they may also wish to communicate with their children and caregivers and to share life experiences and feelings.

Communication tools such as mobile phones and video conferencing systems have facilitated remote verbal communication. Nevertheless, nonverbal communication, such as facial expressions, body language, and haptics, is more powerful and efficient in conveying ideas, thoughts, and emotions. Mehrabian and Ferris reported that in face-to-face communication, cues from spoken words, voice tone, and facial expression contribute 7%, 38%, and 55%, respectively, to total comprehension [1].

In 1980, the term “telepresence” was coined by Marvin Minsky: an operator receives sufficient information about the teleoperator and the task environment, displayed in a sufficiently natural way, that the operator feels physically present at the remote site [2]. Robotic telepresence is a newer variant that integrates information and communication technologies into a robotic vehicle that can operate in a remote location. Building on the idea of a mobile robot with an embedded videophone, Michaud et al. [3] presented a wheeled teleoperated robot called Telerobot. Later, commercial mobile telepresence robots such as VGo [4] were launched into the market. These robots provide auditory and visual information to both ends, but the feeling of “staying with the person at the same place” is limited by their machine-like appearance.

“TRIC”, a Telepresence Robot for Interpersonal Communication, was developed for the daily use of older adults in the home environment [5]. “TRiCmini+” (Figure 1), presented in this paper, is a more advanced version that serves as an agent of the remote user in the local user’s home environment, duplicating three-dimensional face-to-face communication. By providing both verbal and nonverbal elements of interpersonal communication, the robot can better serve as the avatar of the children or family members for expressing their care to the older adults at home.


Fig. 1. Prototype of TRiCmini+ and its system structure

This paper presents the design and evaluation of TRiCmini+. Section 2 introduces the system structure and functions of TRiCmini+. Section 3 describes the evaluation procedure and results. Finally, Section 4 draws conclusions and outlines future work.

2. Design of TRiCmini+

TRiCmini+ has an innovative system structure that separates the “brain” (a tablet) from the “body” (the robotic vehicle). The remote user manipulates TRiCmini+ through a user interface on a tablet/PC to move it around freely and communicate with the local user, who stays with the robot in the home environment. The system structure and user scenarios are presented in this section.

2.1 System Structure of TRiCmini+

As shown in Figure 1, the tablet of TRiCmini+ receives commands from the remote site via the Internet, performs audio/video conferencing, and serves as the “face” of TRiCmini+. The tablet can also be easily removed from the robotic vehicle for personal use. The robot control software is an Android App on the tablet. The robotic vehicle is equipped with a power management module and a robotic movement module that provide omnidirectional mobility and body motion. Robot movement commands received by the tablet are relayed to the robotic vehicle via Bluetooth.

For audio/video conferencing, the neck design of TRiCmini+ allows the tablet’s camera to be controlled by the remote user to track the local user. For a more engaging user experience, TRiCmini+ can produce whole-body emotions by combining facial expressions with whole-body motions. Facial expressions are built as animations displayed on the tablet, and the remote user can switch among the six universal facial expressions. In addition, the servo motors in the movement module create TRiCmini+’s arm gestures.

TRiCmini+ is connected to the Internet for data/command transmission via the tablet’s 3G/Wi-Fi wireless communication. Referring to Figure 1, TRiCmini+ is tele-operated by the remote user via the Internet. Two-way audio and video communication is achieved through social network messengers such as Google Talk or Skype. Robotic movement commands are also sent through Google Talk as text messages. With this structure, communication and tele-operation can be achieved without knowing the IP address of the tablet. Commands from the remote user are transmitted to the tablet to trigger facial expressions and the audio/video conferencing function, or are relayed via Bluetooth to the movement module of the robotic vehicle to control robot movement.
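The routing decision described above can be sketched as follows. This is a minimal illustration only; the message format, command names, and function names are assumptions for illustration, not the actual TRiCmini+ protocol.

```python
# Hypothetical sketch of the command-relay scheme: commands arrive as
# messenger text messages and are either handled on the tablet (facial
# expressions, conferencing) or forwarded to the vehicle over Bluetooth.
# All command names below are illustrative assumptions.

TABLET_COMMANDS = {"happy", "sad", "angry", "surprised", "afraid", "disgusted"}
VEHICLE_COMMANDS = {"forward", "backward", "left", "right", "rotate", "stop"}

def route_command(text_message):
    """Decide where an incoming text command should be handled."""
    cmd = text_message.strip().lower()
    if cmd in TABLET_COMMANDS:
        return ("tablet", cmd)     # trigger a facial-expression animation
    if cmd in VEHICLE_COMMANDS:
        return ("bluetooth", cmd)  # relay to the vehicle's movement module
    return ("ignore", cmd)         # not a recognized command

print(route_command("Happy"))    # ('tablet', 'happy')
print(route_command("forward"))  # ('bluetooth', 'forward')
```

A scheme like this keeps the tablet as the single entry point, so the vehicle never needs its own Internet connection or public IP address.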

The robotic vehicle contains a power management module and a movement module. The core of the movement module is an Arduino Mega 2560 microprocessor equipped with a Bluetooth shield for data transmission between the tablet and the robotic vehicle. Three sets of omnidirectional wheels and motors are driven by a motor controller. Once a command from the remote user is received by the tablet and relayed to the movement module, the controller runs an algorithm to determine how the motors should drive the omnidirectional wheels. In this way, TRiCmini+ can move around freely at a speed of about 12 cm/s. Three ultrasonic sensors detect objects in the surrounding environment. The power management module includes a 12V LiFePO4 battery and a power management circuit board; an LED on the circuit board flashes to warn the local user about the battery status.
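The mapping from a desired body velocity to individual wheel speeds in a three-wheel omnidirectional base can be sketched with the standard inverse kinematics for such platforms. The wheel angles, base radius, and function names below are assumed values for illustration, not TRiCmini+’s actual dimensions or firmware.

```python
import math

# Illustrative inverse kinematics for a three-wheel omnidirectional base.
# Wheels are assumed to be spaced 120 degrees apart at an assumed distance
# BASE_RADIUS from the center of the robot.

WHEEL_ANGLES = [math.radians(a) for a in (0.0, 120.0, 240.0)]
BASE_RADIUS = 0.15  # meters (assumed)

def wheel_speeds(vx, vy, omega):
    """Map a desired body velocity (vx, vy in m/s, omega in rad/s)
    to the linear rolling speed required of each wheel."""
    return [-math.sin(a) * vx + math.cos(a) * vy + BASE_RADIUS * omega
            for a in WHEEL_ANGLES]

# Translating straight ahead (along +y) at the reported 12 cm/s:
speeds = wheel_speeds(0.0, 0.12, 0.0)
```

For pure translation the three wheel speeds sum to zero, and for pure rotation all three wheels run at the same speed, which is a quick sanity check on any implementation of this kinematic model.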

2.2 User Scenarios

From the interaction point of view, there are two kinds of user scenarios: the communication mode and the home telehealth mode. The communication mode is used when a remote user logs in to communicate with the local user (the older adult). In addition, personal health management for older adults with chronic diseases can be carried out in their home environment with the home telehealth mode.

At the local site, older adults can use the video and audio communication function for verbal communication with their families/caregivers through TRiCmini+, which is similar to using general video conferencing services. Remote users can tele-operate the robot and add nonverbal communication elements by choosing facial expressions and robotic movements on the user interface. As discussed earlier, TRiCmini+ can combine the six universal facial expressions with whole-body motions.

A home telehealth App, the “Care Delivery Frame (CDF)”, is also implemented on the tablet of TRiCmini+ to provide the home telehealth function. CDF is designed for older adults who are not familiar with computers and the Internet, serving as a channel for transmitting vital sign measurement data [6]. CDF is also integrated with social network services such as Facebook to provide a wide range of information sharing easily and conveniently. In fact, CDF can be a “friend” of the children/family members on Facebook: vital sign data monitoring, remote photo sharing, and caring messages can all be handled from Facebook by the remote user.

3. Performance of Telepresence in the Communication Mode

In this research, the telepresence performance of TRiCmini+ in the communication mode is of particular interest. Twenty subjects (12 males, 8 females) aged 18 to 25 were recruited for the prototype evaluation. Most of the subjects (15) had no prior experience interacting with robots. Each subject was asked to serve as the local user and to have a 5-minute interaction with the remote user (a staff member) through TRiCmini+. Responses were collected with a questionnaire based on the Temple Presence Inventory (TPI) [7]. All questions used a 7-point Likert scale, where “1 = Not at all” and “7 = To a very high degree”. In addition, an eye-tracking system was used to analyze the behavior of the local user.

In Table 1, the first column shows the average score of each TPI index in the communication mode of TRiCmini+. Most of the scores are above the midpoint of 4, except for spatial presence, which suggests that the telepresence performance in the communication mode of TRiCmini+ is acceptable. The low score in spatial presence may be caused by the face size of TRiCmini+: the 7-inch tablet in communication mode displays only the face of the remote user, without further information about the remote environment. The best telepresence performance is found in social richness, with a score of 6.00. This may be due to the multimodal interpersonal communication of TRiCmini+, which provides both the verbal and nonverbal aspects of communication.

The second column of Table 1 shows the average TPI scores of another telepresence robot, Giraff [8]. Giraff is a typical telepresence robot with a screen for video conferencing and a robotic vehicle for mobility. Table 1 thus compares telepresence performance with verbal communication only (Giraff) and with both verbal and nonverbal communication (TRiCmini+). According to Table 1, the score of TRiCmini+ is higher than that of Giraff in every TPI index. TRiCmini+ is expected to outperform Giraff in telepresence because it combines video conferencing with robotic facial expressions and whole-body movements. The nonverbal aspect of communication thus appears to improve the telepresence performance of a telepresence robot.

In addition to quantitative information, qualitative data were also collected in this study. To analyze user behavior while interacting with TRiCmini+, we recorded eye-gaze tracks both in the communication mode of TRiCmini+ and in communication with a real person. As shown in Table 2, in a 5-minute conversation with a real person, 41% of the time was spent looking at the human face and 51% of the time looking at other places. This is because the local user often looked at the places to which the dialoguer (the person communicating with the local user) pointed. However, the eye-gaze behavior of the local user while using TRiCmini+ was significantly different. The local user spent about 83% of the time looking at the tablet, i.e., TRiCmini+’s face, which indicates that the local user took TRiCmini+ seriously as a dialoguer. On the other hand, this also indicates that there is still a gap between communication through TRiCmini+ and real interpersonal communication. The body movements of TRiCmini+ may not provide sufficient stimulation to attract the local user’s attention. Currently, the body movements of TRiCmini+ are mostly for expressing emotions, in which natural humanoid movements are critical. How to capture the motion and tempo of real human movements is being studied. Functional gestures (e.g., waving and pointing) will also be added.

Table 1. The average score of TPI indices in the two communication methods

TPI index                                    TRiCmini+    Giraff
Spatial presence                                 –           –
Social presence – Parasocial interaction         –           –
Social presence – Passive interpersonal          –           –
Social presence – Active interpersonal           –           –
Engagement (mental immersion)                    –           –
Social richness                                6.00          –
Perceptual realism                               –           –

Table 2. The average percentage of eye gaze in the two communication scenarios

Eye gaze target                 Real human    TRiCmini+    P value
Looking at the face (tablet)       41%           83%          –
Looking at other places            51%            –           –
Looking at the feet                 –             –           –
Looking at the hands                –             –           –
Looking at the trunk                –             –           –

*Significant difference is found between the two scenarios.

4. Conclusions and Future Works

Care through communication from family members or caregivers may be what older adults really expect. TRiCmini+ has been developed to provide both the verbal and nonverbal aspects of interpersonal communication. Three-dimensional face-to-face interaction is duplicated, together with two-way audio communication, to create the feeling of “staying with the person at the same place”. By integrating the home telehealth App CDF, TRiCmini+ demonstrates the capability to provide different levels of “care delivery” to older adults through robotic movement, vital sign monitoring, and other forms of communication, even when no one logs in to control TRiCmini+ from the remote site.

TRiCmini+ delivers an innovative system infrastructure for a telepresence robot by using the tablet as the control center. The robot control functions are developed as an independent App for the tablet, which can be downloaded, maintained, and updated easily through the Internet. Moreover, according to the evaluation results, the telepresence performance of TRiCmini+ (with both verbal and nonverbal communication) was rated higher by users than that of a traditional telepresence robot (with verbal communication only). The nonverbal mode of interaction did improve the telepresence performance of the robot for interpersonal communication. However, there is still room for improvement in the nonverbal features, such as the animation of the facial expressions and the humanoid tempo of the robotic movements.

In addition, more advanced functions, such as an automatic charging system and indoor navigation, are being investigated. The goal is to make TRiCmini+ a practical robot that can be easily used in the home environment. Finally, the effectiveness of interaction in the two modes will be evaluated in real application scenarios (senior users in their own home environments) to confirm whether the “care delivery” provided by TRiCmini+ meets the expectations of older adults.


References

1. Mehrabian, A., Ferris, S.R.: Inference of attitudes from nonverbal communication in two channels. Journal of Consulting Psychology 31(3), 248–252 (1967)

2. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. MIT Press, Cambridge, MA (1992)

3. Michaud, F., Boissy, P., Labonté, D., Corriveau, H., Grant, A., Lauria, M., Cloutier, R., Roux, M.A., Iannuzzi, D., Royer, M.P.: A telementoring robot for home care. In: Technology and Aging, Assistive Technology Research Series, vol. 21 (2008)

4. VGo Communications: Introducing VGo. From anywhere. Go anywhere. http://vgocom.com/, accessed Mar. 2011

5. Tsai, T.C., Hsu, Y.L., Ma, A.I., King, T., Wu, C.H.: Developing a telepresence robot for interpersonal communication with the elderly in a home environment. Telemedicine and e-Health 13(4), 407–424 (2007)

6. Chen, Y.S., Hsu, Y.L., Wu, C.C., Chen, Y.W., Wang, J.A.: Development of the Care Delivery Frame for senior users. In: The 9th International Conference on Smart Homes and Health Telematics (ICOST 2011), June 2011

7. Lombard, M., Ditton, T.: A literature-based presence measurement instrument: The Temple Presence Inventory (TPI) (BETA). Technical Report, M.I.N.D. Labs, Temple University, Pennsylvania, USA (2004), http://astro.temple.edu/~lombard/research/P2scales_11-04.doc

8. Kristoffersson, A., Severinson Eklundh, K., Loutfi, A.: Measuring the quality of interaction in mobile robotic telepresence: A pilot’s perspective. International Journal of Social Robotics 5(1), 89–101 (2013)