Author: Tzung-Cheng Tsai, Yeh-Liang Hsu, An-I Ma, Trevor King, Chang-Huei
Wu (2006-12-12); recommended: Yeh-Liang Hsu (2007-12-04).
Note: This paper is published in Telemedicine
and e-Health, Vol. 13, No. 4, pp. 407-424, August, 2007.
Developing a telepresence
robot for interpersonal communication with the elderly in a home environment
“Telepresence” is an interesting field that
includes virtual reality implementations with human-system interfaces,
communication technologies, and robotics. This paper describes the development
of a telepresence robot called TRIC (Telepresence Robot for
Interpersonal Communication) for the purpose of interpersonal communication
with the elderly in a home environment. The main aim behind TRIC’s
development is to allow elderly populations to remain in their home
environments, while loved ones and caregivers are able to maintain a higher
level of communication and monitoring than via traditional methods.
TRIC aims to be a low-cost, lightweight robot, which can be easily implemented
in the home environment. Under this goal, decisions on the design elements
included are discussed. In particular, the implementation of key autonomous
behaviors in TRIC is emphasized, both to increase the user’s capability of
projecting herself/himself and operating the telepresence robot, and to increase the
interactive capability of the participant as a dialogist. The technical
development and integration of the modules in TRIC, as well as human factors considerations are then described. Preliminary functional tests show that new users were able to
effectively navigate TRIC and easily locate visual targets. Finally, the future developments of TRIC, especially the possibility of using TRIC
for home tele-health monitoring and tele-homecare visits, are discussed.
Keywords: telepresence, interpersonal communication, home tele-health.
Telepresence robot as an
interpersonal communication tool
“Telepresence” is an interesting field that includes virtual reality implementations with
human-system interfaces, communication technologies, and robotics. The earliest
research in telepresence dates back to the 1960s.
Goertz in 1965 and Chatten in 1972 showed that when a video display is fixed
relative to the operator’s head and the head’s own pan-and-tilt drive the
camera’s pan-and-tilt functions, the operator feels as if she were physically
present at the location of the camera, however remote it is [Sheridan, 1992a].
Sheridan [1992a, 1992b] defines telepresence as: “…visual,
kinesthetic, tactile or other sensory feedback from the teleoperator to the
human operator that is sufficient and properly displayed such that the human
feels that he is present at the remote site, and that the teleoperator is an
extension of his own body.” In comparison, Akin et al. describe telepresence such
that, “At the worksite, the manipulators have the dexterity to allow the
operator to perform normal human functions. At the control station, the
operator receives sufficient quantity and quality of sensory feedback to provide
a feeling of actual presence at the worksite.” These two definitions emphasize
control and sensory feedback between the human operator and the “teleoperator”
or the “manipulator at the worksite”.
Draper discerns
three definitions of telepresence: the simple, the cybernetic, and the
experiential. In the simple definition, telepresence refers to the ability to
operate in a computer-mediated environment. In the cybernetic definition,
telepresence is an index of the quality of the human-machine interface. In the
experiential definition, telepresence is a mental state in which a user feels
physically present within the computer-mediated environment.
Schloerb defines
telepresence from the point of view of an “observer”: “a person is objectively
present in a real environment that is physically separated from the person in
space.” He uses three types of specifications to make the definition more
precise: (1) a set of tasks, (2) a transformation imposed on the human
operator’s control output and sensory input, and (3) a transformation of the
region of presence. The degree of objective telepresence is equal to the
probability of successfully completing a specified task. Schloerb also proposed
that perfect telepresence occurs when the operator cannot distinguish the
mediated environment from actual physical presence.
In general, telepresence provides a connection between a user (or the “operator” as defined
by Sheridan and
Akin) and a distant participant or an environment (real world or computer
generated world), to perform social interactions (user-participant interactions)
or specific tasks (user-environment interactions). In this research, we are
interested in the application of a telepresence robot for communicating and
interacting between the “user” and the “participant” in a remote site. In such
applications, the remote participant is not only an “observer” as in Schloerb’s
definition, but also a “dialogist” in the interpersonal communication. Under this
framework, there are two views in telepresence applications for interpersonal
communication: the user’s view and the participant’s view, as depicted in
Figure 1. From the user’s view, telepresence enables the user to project
herself/himself to another place by controlling the telepresence robot or
system. In the meantime, the user perceives immersion from the sensory feedback
from the remote environment created by telepresence.
As discussed earlier,
the “participant” may have two roles in telepresence application in
interpersonal communication: as an observer and a dialogist. From the
participant’s view as an observer, telepresence provides necessary elements to
the user and the telepresence robot, so that the participant recognizes the telepresence
robot as a representation of the user. From the participant’s view as a
dialogist, telepresence also enables dialogue between the participant and the
user by transmitting audio, video, gestures, physical movements, and other
environmental information between the participant and the user, which are helpful
for effective communication.
Figure 1. The framework of projection-immersion and observer-dialogist
In interpersonal communication, telepresence differs from traditional video conferencing
by establishing a true sense of shared space among geographically remote
persons. By duplicating the three-dimensional human experience of actual
face-to-face encounters, telepresence offers a strikingly different way to communicate.
Applying robotic technology for
homecare of the elderly
Many countries are facing the problems associated with an increasing elderly
population. The need for healthcare for the elderly, both physiological and psychological,
is an urgent issue. In particular, many elderly people desire to stay in their
own private residences for as long as possible, and thus methods are needed to
allow them to do so safely and at a reasonable cost. Development of service
robots to assist elderly people in activities of daily living (ADL) or to
improve their quality of life (QOL) in a home environment is an important
research topic. Prassler et al.
 developed the robotic wheelchair MAid (Mobility Aid for Elderly and
Disabled People) to support and transport elderly people with limited motion
skills and to provide them with a certain amount of autonomy and independence. Kiguchi
et al.  developed robotic exoskeletons to assist the mobility of
physically weak persons such as the elderly, the disabled, and the injured. The
control of the robotic exoskeleton is based on electromyography (EMG) signals,
as EMG signals are imperative for understanding how the user intends to move.
Robots are not only used for physiological assistance but can also
provide psychological consolation for elderly people. Wada et al.
tested a seal-type animal robot “Paro” at a daycare service center to
investigate its effects on the elderly. After interaction with Paro, the face
scale scores of elderly people were classified as “good”, and their vigor
scores in the questionnaires were considered high, confirming that Paro
improved the mood of elderly people and made them more vigorous. Kanamori et
al.  also examined the usefulness of pet-type robots named AIBO among
elderly patients and handicapped persons in nursing homes or in home
environments via biochemical markers, self-assessment, or health-related QOL questionnaires.
In contrast, Pollack et al. developed “Autominder”, an intelligent
cognitive orthotic system intended to help older adults adapt to cognitive
decline and continue satisfactory performance of routine activities, thereby potentially
enabling them to remain in their own homes longer. Lytle  presented a robotic
care bear whose purpose was to watch over the elderly residents in a hi-tech
retirement center. The Teddy-like bear could record how long the residents
spent performing various tasks and assisted in monitoring the residents’
response times to spoken questions, before relaying conclusions to staff or
alerting them of unexpected changes.
In telepresence research, there have been a few examples in the field of medical
care emphasizing the use of telepresence robots as tools for interpersonal communication.
“Physician-Robot” is a telepresence robot developed by the InTouch Health company
in cooperation with Johns Hopkins Medicine
for physicians to easily and more frequently visit hospitalized patients
[Thacker, 2005]. Physicians operate the robot with a swiveling video camera and
computer screen mounted on a mechanical base by guiding it through patients’
rooms via a remote-control joystick. By communicating through the hospital’s
virtual private network (VPN) and an 802.11b wireless network, doctors at home
or out of town can teleconference with and examine patients. Physician-Robot is
not expected to replace visits from real physicians, but is to act as an
extension of physician-patient contact. In the evaluation at the Johns Hopkins
Hospital, patients were more satisfied because physicians spent more time with them.
A telepresence system PEBBLES, or “Providing Education By Bringing Learning Environments to Students”, was developed to unite medically
fragile children who are hospitalized for prolonged periods with their regular
school site. PEBBLES was developed in Canada
by a private company called Telbotics in cooperation with Ryerson
University and the University of Toronto.
A PEBBLES system consists of two child-sized robots capable of transmitting
video, audio and documents to each other. One unit is placed with the
hospitalized child and the other unit is located in the child’s regular
classroom. The units are connected via a high-speed communications link. The
classroom unit has a swiveling monitor that duplicates human head movement and
a hand that serves as an attention getting device. PEBBLES creates a virtual presence
for the remote child in the classroom. The presence is so real that in the many
evaluation cases, teachers and fellow students come to react to the school unit
as if it were the hospitalized child [Fels and Weiss, 2001].
Purpose of this paper
For elderly people, living alone at home is associated with an increased risk of isolation. However, social interaction
can delay the deterioration and associated health problems of elderly people [Roy, et al., 2000]. This
paper describes the development of a telepresence robot called TRIC (Telepresence Robot for
Interpersonal Communication) for interpersonal communication with the elderly
in a home environment. The
main aim behind TRIC’s development is to allow elderly populations to
remain in their home environments, while loved ones and caregivers are able to
maintain a higher level of communication and monitoring than via traditional methods.
TRIC is positioned as a
low-cost, lightweight robot, which can be easily implemented in the home
environment. In fact, the TRIC robot is intended to be used as often and
as easily as a home appliance. It is expected that TRIC not only provides a convenient interactive communication device
for family members and caregivers to communicate with and express care to
elderly people, but also a tool for tele-homecare visits by doctors or nurses, and
a tool for home tele-health monitoring tasks such as measuring vital signs
(blood pressure, glucose, etc.), and monitoring ADL (activities of daily living).
Section 2 of this
paper reviews the recent and relevant research literature in the field of
telepresence, with an emphasis on the various design elements used throughout. Decisions on the design elements
included are discussed in Section 3. In particular, the implementation of key
autonomous behaviors in TRIC is emphasized, both to increase the user’s
capability of projecting herself/himself and operating the telepresence robot, and to
increase the interactive capability of the participant as a dialogist.
Section 4 describes TRIC’s hardware design. Focus then shifts to human factors
considerations and interface design in Section 5. Section 6 describes preliminary
evaluations of TRIC, followed by the concluding
remarks in Section 7. Future
developments of TRIC, especially the possibility of using TRIC
for home tele-health monitoring and tele-homecare visits are discussed.
Design elements in telepresence
This section surveys the application-oriented telepresence literature that describes the
development of a telepresence system. The design elements emphasized in these
studies are extracted and summarized in Table 1. A discussion of these design
elements as they fit into the framework of projection-immersion and
observer-dialogist illustrated in Figure 1 in
the previous section is given below.
Table 1. Design elements and related technological keywords
Design element | Related technological keywords
Data transmission | RF and Internet transmission, time-delay improvement
Teleoperation | simultaneous operation, robotic design
Anthropomorphism | humanoid mechanism and expression
Stereoscopic vision | binocular and panoramic vision, image processing
Stereophonic audio | head-related transfer function, stereo audio
Eye contact | camera and screen with specific placement
Autonomous behavior | environmental map establishment, obstacle avoidance
Data transmission, the transmission of control commands and sensory feedback, is a
basic design element for the connection between the user and the remote
telepresence robot or system. Wireless radio frequency and Internet are used in
most telepresence applications, and dedicated lines are used in specific
applications (such as operation in space and the deep sea).
From the user’s view, the timing of data transmission is important. Time delays would
degrade the telepresence performance in both projection and immersion of the
user. From the participant’s view, the time delays also affect the
participant’s impression as an observer and interactive capability as a
dialogist. Therefore, past telepresence research in data transmission focused
on the development of a control scheme to deal with time delays for promoting
performance [Tzafestas and Prokopiou, 1997; Daniel and McAree, 1998].
Many studies in telepresence emphasize enabling the user to modify the remote
environment [Stoker et al., 1995; Engelberger, 2001; Spudis, 2001], that is,
projecting the user to the teleoperator. A teleoperator is a machine that
extends the user’s sensing and/or manipulating capability to a location remote
from that user. Teleoperation refers to direct and continuous human control of
the teleoperator. The “Full Immersion
Telepresence Testbed (FITT)” developed by NASA, which combines a wearable
interface integrating human perception, cognition and eye-hand coordination
skills with a robot’s physical abilities, is a recent example of research in
teleoperation [Rehnmark et al., 2005]. The teleoperated master-slave system “Robonaut”
allows an intuitive, one-to-one mapping between master and slave motions. The operator
uses the FITT wearable interface to remotely control the Robonaut to follow the
operator’s motion fully in simultaneous operation to perform complex tasks in
the International Space Station.
Supersensory refers to an advanced capability to modify the remote environment, provided by a
dexterous robot or a precise telepresence system. From the user’s view, the
user’s manipulative efficiency for special tasks is enhanced when projecting
onto a telepresence robot with supersensory. Green et al.  developed a
telepresence surgery system integrating vision, hearing and manipulation. It
consists of two main modules, a surgeon’s console and a remote surgical unit
located at the surgical table. The
remote unit provides scaled motion, force reflection and minimized friction for
the surgeon to carry out complex tasks with quick, precise motions. Satava
, Schurr et al.,  and Ballantyne  have also applied
supersensory techniques in telepresence surgery.
Supersensory elements can also provide the user with a novel feeling of immersion in a remote
environment. For example, the user can control the zoom function of the camera
on a telepresence robot to observe the small details of the remote environment,
which the user does not normally see with the naked eye.
In many telepresence applications, non-anthropomorphic telepresence robots are usually
designed to perform specific tasks. Many researchers have added anthropomorphic
elements to their telepresence robots in order to improve the interaction
between users and participants [Burgard et al., 1999; Schulz et al., 2000; Tachi
et al., 2003; Trahanias et al., 2005]. From the participant’s view,
anthropomorphic elements enhance the impression of the telepresence robot as a
true representation of the remote user. The friendly interface and
characteristics of the anthropomorphic telepresence robot also increase the
interactive capability of the participant as a dialogist.
From the participant’s
view, the basic requirement for interpersonal communication using telepresence
is that the participants must realize whom the telepresence robot represents.
Therefore, the primary anthropomorphic element incorporated is the user’s face
displayed on an LCD screen mounted on the telepresence robot. Both the Physician-Robot and the
PEBBLES system described in the first section of this paper use an LCD screen to
display the user’s face.
In addition to using the LCD screen, creating mechanical facial expressions is another
interesting field in robotics research [Fong et al., 2003]. Mechanical facial
expressions can also be used to increase the humanoid characteristics of the
telepresence robot, to further encourage people to interact and communicate with it.
Stereoscopic and stereophonic
In telepresence research, stereoscopic and stereophonic design elements are often
emphasized to create a telepresence illusion of the remote environment or
people aiming to increase the feeling of immersion for the user. For example,
the user can identify the distance between an object and the telepresence robot
by binocular vision [Brooker et al., 1999]; the head-related transfer function
(HRTF) for stereophonic effect enables the user to identify the location and
direction of a sound [Hawksford, 2002].
Telepresence videoconferencing is an important application of stereoscopic and
stereophonic elements [Izquierdo, 1997; Ohm et al., 1998; Xu et al., 1999].
Telepresence videoconferencing enables the users and the participants to
communicate more efficiently. In other words, the interactive capability of the
participant as a dialogist is enhanced. Lei et al.  proposed a
representation and reconstruction module for an image-based telepresence
system, using a viewpoint-adaptation scheme and an image-based rendering
technique. This system provides life-size views and 3-D perception of
participants and viewers in videoconferencing. The purpose of this research is
to provide the feeling of a virtual-reality presence, in which realistic 3-D
views of the user should be perceived by the participant in real time and with
the correct perspective.
In telepresence applications, eye contact can increase the immersion feeling of
the user and the interactive capability of the participant as a dialogist. The
only means to provide eye contact during interpersonal communication between
the user and the participant through a telepresence robot is when the face of
the user is displayed on an LCD screen. However, the camera on
a telepresence robot is usually placed on top of the LCD screen, which hinders direct
eye contact between the user and the participant through the telepresence robot.
One study proposed an implementation of an auto-stereoscopic desktop display suitable for
computer and communication applications. The goal of that research is to
develop a system combining a collimation optic with an auto-stereoscopic
display unit to provide natural face-to-face and eye contact communication
without causing eyestrain.
In principle, a telepresence robot is operated by a remote user and does not
possess autonomous behaviors. However, the telepresence robot should be able to
deal with possible hazardous situations autonomously when the remote user is
not aware of the hazardous situation, cannot control the telepresence robot
properly, or the data transmission is lost. From the user’s view, autonomous
behavior increases the user’s capability of projection to operate the
telepresence robot safely and reliably in a dynamic environment. From the participant’s
view, autonomous behavior also increases the interactive capability of the
participant as a dialogist. For example, a telepresence robot with the
autonomous behavior of identifying the direction of the participant who is
speaking can assist the remote user to respond more quickly and properly.
An interactive museum tour-guide robot was developed in two research projects,
TOURBOT and WebFAIR, funded by the European Union [Burgard et al., 1999; Schulz
et al., 2000; Trahanias et al., 2005]. Thousands of users around the world
controlled this robot through the web to visit a museum. They developed a
modular and distributed software architecture which integrates localization,
mapping, collision avoidance, planning, and various modules concerned with user
interaction and web-based telepresence. With these autonomous features, the
user can operate the robot to move quickly and safely in a museum crowded with visitors.
Basic data transmission
structure and design elements of TRIC
As mentioned earlier, the telepresence robot TRIC developed in this research aims to
be a low-cost, lightweight robot, which can be easily implemented in the home
environment. Therefore the primary decision was to use ADSL and Wireless Local
Area Network (WLAN), which are commonly found in the home environment, as the
channel of data transmission. Two-way audio and one-way video communication can
be transmitted through a network Internet Protocol (IP) camera, which is also a
common tool for home monitoring.
Instead of using
a PC, a “Mobile Data Server (MDS)” was developed as the core of TRIC. Figure 2 shows a picture of the
laboratory prototype of the MDS, which consists of a PIC server mounted on a peripheral
application board. The PIC server integrates a PIC microcontroller (PIC18F6722, Microchip), EEPROM (24LC1025,
Microchip) and a networking IC (RTL8019AS, Realtek). It provides networking
capability and can be used as a web server. The peripheral application board
(as well as the program in the PIC microcontroller) can be easily customized to
adapt to different sensors and applications. The dimensions of the MDS
prototype are 40 mm × 85 mm × 15 mm. The Internet and a serial interface (RS-232) are the primary
communication interfaces of the MDS with client PCs and other devices. The MDS
also receives external signals (e.g., sensor signals) through specific analogue
or digital I/O ports, and provides inter-integrated circuit (I2C) communications to allow connections with external modules. A
Multi-Media Card (MMC) in the MDS can be used to store data in FAT16 file
format. Compared to a PC, the MDS is low-cost, has smaller dimensions, consumes
less energy (thus can be powered by batteries), is not affected by viruses, and
is safer and more reliable.
Figure 2. A picture of the laboratory prototype of
Figure 3 shows
the basic data transmission structure of TRIC. The user projects
herself/himself to TRIC in the remote
environment by sending control commands to TRIC through the Internet gateway. The user is able to immerse in
the remote environment from the sensory feedback transmitted through the
Internet gateway. TRIC uses a WLAN adaptor to connect to
the WLAN in the home environment. The MDS takes charge of receiving commands from
the user and sending commands to specific modules, which coordinate with each
other to perform specific tasks. Finally the user can have physical interaction
and verbal communication with the participant by controlling TRIC as his/her physical extension in
the remote environment.
Figure 3. The data transmission structure of TRIC
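The command-routing role of the MDS described above can be sketched as follows. This is a minimal illustration, not the actual firmware: the module addresses and command codes are hypothetical assumptions, not values from the paper.

```python
# Hypothetical sketch of the MDS command-dispatch loop.
# Module addresses and command codes are illustrative only.

CAMERA_MODULE = 0x10   # IP camera / facial expression module (assumed address)
DMD_MODULE = 0x20      # DC motor driver module (assumed address)
ENV_MODULE = 0x30      # environmental perception module (assumed address)

# Map incoming user commands to (I2C target, payload) pairs.
COMMAND_TABLE = {
    "pan_left": (CAMERA_MODULE, b"\x01"),
    "tilt_up": (CAMERA_MODULE, b"\x02"),
    "forward": (DMD_MODULE, b"\x10"),
    "stop": (DMD_MODULE, b"\x00"),
}

def dispatch(command, i2c_write):
    """Route one user command to the proper module over the I2C bus."""
    if command not in COMMAND_TABLE:
        return False                      # unknown command: ignore safely
    target, payload = COMMAND_TABLE[command]
    i2c_write(target, payload)            # hand off to the bus driver
    return True

# Usage with a stand-in for the real bus write:
log = []
dispatch("forward", lambda addr, data: log.append((addr, data)))
```

In the real system the `i2c_write` callback would be the PIC's I2C peripheral; here it is stubbed so the routing logic can be exercised on its own.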
Under this basic
structure, Table 2 lists the design elements currently planned for the design
of TRIC. The implementation of “teleoperation” in TRIC is quite
fundamental. Teleoperation allows the user to move TRIC through the
environment while controlling the pan and tilt of the IP camera from a remote
client PC. Supersensory ability is reflected in the zooming capability of the IP
camera and the sensing capability of the various sensors installed for environment
detection. With the limited processing ability of the MDS, the user’s face
cannot be displayed (so eye contact is also not possible on this telepresence robot).
Instead, mechanical facial expressions are incorporated as the anthropomorphic
element. Sophisticated stereoscopic and stereophonic elements have been omitted
to keep TRIC a low-cost, affordable homecare robot.
Table 2. Design elements included in TRIC
Design element | Corresponding technological strategies
Data transmission | use the MDS as the core of the system
Teleoperation | design of the mobility platform
Supersensory | provide zoom of the IP camera, implement various sensors for environment detection
Anthropomorphism | design of mechanical facial expressions
Autonomous behavior | share control authority with the participant and the environment
Autonomous behavior is the design element that received the most attention during the
planning of TRIC. In principle, a telepresence robot is operated by a
remote user who possesses complete control authority. However, a major emphasis
of this research is to implement key autonomous behaviors in TRIC in
order to increase the user’s operating capability and reduce the user’s
workload during operation. By doing so, the aim was to also increase the
interactive capability of elderly people as reciprocal communicators.
Implementing autonomous behaviors implies that the control authority of the telepresence
robot is shared with the participant or the environment it is interacting
with. Several possible features for sharing control authority with the remote
participants are discussed below:
“Look at that!”
People engaged in a face-to-face conversation often share the same view by pointing to
an object under discussion. However, it is difficult for the user to either
point to a certain object or to find the object the remote participant is
pointing at through the telepresence robot. A 2 degree-of-freedom robot arm equipped
with a laser pointer is used as a joint attention device to realize the “look
at that!” function. The remote participant can direct the view of the
telepresence robot by pointing the laser pointer to the object in question.
“Where is the speaker?”
It is not
easy for the user to locate the source of sound in 3D space through the
telepresence robot. When interacting with the remote participant, “Where is the
speaker?” enables the telepresence robot to automatically locate and track
speakers without control from the user. With this feature, the participant
controls the telepresence robot by using her/his own voice.
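One common way to implement such sound-source localization is to estimate the time difference of arrival (TDOA) between two microphones. The paper does not specify TRIC's method, so the following is only a sketch under assumed parameters (microphone spacing, sign convention).

```python
import math

def speaker_bearing(delay_s, mic_spacing_m=0.2, speed_of_sound=343.0):
    """Estimate a speaker's bearing from the inter-microphone time delay.

    delay_s: arrival-time difference between the two microphones (seconds);
    by our convention, positive means the sound reached the right microphone
    first, i.e. the speaker is to the right. Returns degrees, 0 = straight ahead.
    """
    # Extra path length the sound travelled to the farther microphone.
    path_diff = speed_of_sound * delay_s
    # Clamp to the physically possible range before taking the arcsine.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return math.degrees(math.asin(ratio))
```

A zero delay maps to 0 degrees (straight ahead); the maximum possible delay, `mic_spacing / speed_of_sound`, maps to 90 degrees (directly to one side).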
“Come here!” and “Follow me!”
With “Where is the speaker?” the telepresence robot can locate the source of the sound.
Therefore the “Come here!” feature allows the user to command the telepresence
robot to go to the source of the sound. “Follow me!” is another interactive
behavior which is common in interpersonal communication. The passive infrared
motion sensors combined with ultrasonic range-finding sensors are used to
perform the low cost and reliable function of “Follow me!” where TRIC
continuously follows the intended participant.
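A simple "Follow me!" controller can be sketched as a proportional law that holds a target following distance while steering toward the detected person. The gains and target distance below are illustrative assumptions, not values from the paper.

```python
def follow_step(range_m, bearing_deg, target_m=1.0, k_lin=0.8, k_ang=0.02):
    """One control step of a simple 'Follow me!' behavior.

    range_m: measured distance to the person (e.g. from ultrasonic sensors).
    bearing_deg: direction of the person; positive = to the robot's left.
    Returns (left_speed, right_speed) for the differential drive.
    """
    linear = k_lin * (range_m - target_m)   # close the distance gap
    angular = k_ang * bearing_deg           # steer toward the person
    # Differential mixing: a faster right wheel turns the robot left.
    return (linear - angular, linear + angular)
```

At the target distance with the person straight ahead, both wheel commands are zero; as the person moves away or to one side, the robot speeds up or turns accordingly.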
Modes of sharing control authority with the remote environment are discussed below.
It is difficult for the user to identify environmental information from the robot’s
limited viewing angles. Therefore automatic obstacle avoidance is necessary.
When an obstacle is detected within a specific distance from the robot, the
obstacle avoidance algorithm is activated, and the robot deviates from the movement
direction controlled by the user in order to avoid this obstacle.
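The deviate-on-detection behavior described above might be sketched as follows; the trigger distance and turn angle are assumed values chosen for illustration, not from the paper.

```python
def avoid(command_heading_deg, obstacle_dist_m, obstacle_bearing_deg,
          trigger_m=0.5, turn_deg=30.0):
    """Deviate from the user's commanded heading when an obstacle is close.

    obstacle_bearing_deg: direction of the detected obstacle relative to the
    robot (positive = left). Returns the adjusted heading in degrees.
    """
    if obstacle_dist_m >= trigger_m:
        return command_heading_deg           # path is clear: obey the user
    # Obstacle within the trigger distance: turn away from its side.
    if obstacle_bearing_deg >= 0:
        return command_heading_deg - turn_deg    # obstacle left, veer right
    return command_heading_deg + turn_deg        # obstacle right, veer left
```

Once the obstacle falls outside the trigger distance, the function returns the user's heading unchanged, restoring full control authority to the user.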
A fundamental self-maintenance function is the ability of TRIC to automatically
recharge its battery when needed. This includes the ability to detect energy
capacity, self-positioning to locate and move to the charging station, and automatic
parking control to dock the robot in the charging station.
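The recharging sequence (detect low energy, locate and move to the station, dock, charge) lends itself to a small state machine. The state names and thresholds below are illustrative assumptions, not taken from the paper.

```python
def recharge_step(state, battery_pct, at_station, docked, low_pct=20):
    """One transition of a simple self-recharging state machine.

    States: 'roam' -> 'seek_station' -> 'dock' -> 'charge' -> 'roam'.
    battery_pct: estimated remaining capacity (0-100).
    at_station / docked: boolean sensor flags.
    """
    if state == "roam":
        # Interrupt normal operation when energy runs low.
        return "seek_station" if battery_pct <= low_pct else "roam"
    if state == "seek_station":
        # Self-positioning toward the charging station.
        return "dock" if at_station else "seek_station"
    if state == "dock":
        # Automatic parking control until contact is confirmed.
        return "charge" if docked else "dock"
    if state == "charge":
        # Resume roaming once nearly full.
        return "roam" if battery_pct >= 95 else "charge"
    raise ValueError("unknown state: " + state)
```

Each call advances the behavior by one decision; the real robot would run this inside its control loop, with the flags supplied by the docking sensors and power management system.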
Hardware design of TRIC
Figure 4 shows a
picture of TRIC and Figure 5 shows its spatial configuration. TRIC
integrates an MDS, an IP camera with pan/tilt/zoom functions and two-way audio communications,
a wireless LAN adaptor pair, a speaker and microphone, and a 12V LiFePO4
battery (10 Ah) with power management system, all within a mobile platform.
Table 3 summarizes the basic specifications of TRIC.
Table 3. Specifications of TRIC
Battery operation time
Battery recharge time: 90 minutes (6 A)
Figure 4. A picture of TRIC
Figure 5. Spatial configuration of TRIC
In Section 3, the
basic data transmission structure has been defined (Figure 3). Following this
structure, Figure 6 shows the control structure of TRIC. TRIC has its own IP address and is connected to the outside
world via a wireless Internet link. The WLAN adaptor pair provides a standard
wireless network for the MDS and IP camera. The MDS is the core of TRIC for data transmission, which receives
commands from the user via the Internet and sends commands to the various
modules using the I2C bus.
As shown in
Figure 6, three modules have currently been implemented: the IP camera with facial
expression control module, the environmental perception module, and the DC
motor driver (DMD) module. Each module has one slave microprocessor for control
and data processing, while the master MDS has higher priority to take
corrective actions based on the user’s decisions. The design of each module is
described in detail below.
Figure 6. Hardware architecture of control system
(1) IP camera with facial expression control module
The pan/tilt/zoom controls of the IP camera, the facial
expression controls, and a passive 2 degree-of-freedom arm are integrated into one
module. The IP camera provides an RS-232 interface for controlling its pan/tilt/zoom
functions. The slave micro-controller receives pan/tilt/zoom control commands
from the master MDS and sends them to the IP camera via the RS-232 interface. Facial
expressions are created via various positions of the eyebrows and an LED
mouth. Motions of the eyebrows are driven by two servomotors. A matrix of light
emitting diodes (LEDs) is implemented to display happiness, sorrow and a neutral expression.
Motions generated by the servomotors and LEDs symbolize the emotions of the user.
The joint attention “Look at that!” function described in Section 3 is achieved by the
passive 2 D.O.F. arm. The participant manipulates the passive arm equipped with
a laser pointer to point to a specific direction, which is then measured and transformed
into viewing coordinates for the IP camera. Then the user activates the “Look
at that!” function from a remote client PC, and the IP camera rotates to the
viewing angle so that the user shares the same view as the participant.
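The mapping from the passive arm's measured pointing direction to IP-camera pan/tilt commands could be sketched as a simple offset-and-clamp transform. The mounting offsets and angle limits below are hypothetical, since the paper does not give them.

```python
def camera_angles(arm_pan_deg, arm_tilt_deg, mount_pan_offset_deg=0.0,
                  mount_tilt_offset_deg=-15.0):
    """Map the passive arm's measured pointing direction to camera commands.

    The offsets account for the camera being mounted at a different pose
    than the arm's base (values here are illustrative assumptions).
    Returns (pan, tilt) in degrees for the IP camera.
    """
    pan = arm_pan_deg - mount_pan_offset_deg
    tilt = arm_tilt_deg - mount_tilt_offset_deg
    # Clamp to an assumed pan/tilt envelope of the camera.
    pan = max(-170.0, min(170.0, pan))
    tilt = max(-90.0, min(90.0, tilt))
    return pan, tilt
```

When the user triggers "Look at that!", the camera is commanded to these angles so that the user's view lines up with the direction the participant's laser pointer indicates.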
(2) DMD (DC Motor Driver) module
TRIC has two differential driving wheels on its middle axis. Its
mobility is controlled by the DMD module that includes a slave micro-controller
driving two motor controllers with H-bridge
DC motor control circuits. Each
motor is equipped with an incremental encoder counting 128 clocks per rotation.
Velocity and position data are available from two optical encoders for precise
closed loop control. The slave micro-controller in DMD module performs 4 basis
tasks: (1) communication with the master MDS via I2C bus; (2) generation of PWM (Pulse Width Modulation)
duty cycle; (3) reading counts from each encoder; (4) reading 4-bit binary code
from the environmental perception module. The DMD
module receives commands from the user through the Internet, so that the user
can navigate TRIC to move around freely in the remote environment. The
DMD module also cooperates with other modules to achieve the autonomous
behaviors described in the previous section.
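The closed-loop velocity control described above can be sketched as a simple PI loop; the gains, loop period, and duty-cycle convention below are illustrative assumptions, not the actual tuning of the DMD module.

```python
def pi_velocity_step(target_cps, measured_cps, integral,
                     kp=0.4, ki=0.1, dt=0.02):
    """One step of a PI velocity loop: encoder counts per second in,
    signed PWM duty cycle (percent) out. Gains and loop period are
    illustrative values, not the DMD module's actual tuning."""
    error = target_cps - measured_cps
    integral += error * dt                  # accumulate the integral term
    duty = kp * error + ki * integral
    duty = max(-100.0, min(100.0, duty))    # saturate the PWM command
    return duty, integral
```

The slave micro-controller would run such a step each control period for each wheel, feeding the duty cycle to the H-bridge motor controller.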
(3) Environmental perception module
The environmental perception module is designed for the user to acquire information about the
environment around TRIC. This module uses 6 ultrasonic sensors, a
digital compass and a temperature sensor for environment perception, collision
detection, and the “follow me” function. Six ultrasonic sensors, with a frequency
of 40 kHz, an effective range of roughly 3 cm to 4 m, and a measurement cone of
approximately 20 to 35 degrees, are arranged around TRIC as shown in Figure 7. The slave micro-controller in this
environmental perception module manages measurements from the ultrasonic
sensors to generate information such as whether there are objects around TRIC. Urgent stop or backward commands
are then sent to the master MDS when objects are detected in the “close” region
in front of TRIC.
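The urgent-stop logic for the close region can be sketched as follows; the 25 cm threshold and the indices of the forward-facing sensors are assumptions for illustration.

```python
CLOSE_CM = 25  # assumed radius of the "close" region in front of TRIC

def check_front_obstacle(distances_cm, front_sensors=(0, 1)):
    """Return "STOP" when any front-facing ultrasonic sensor reports an
    object inside the close region, else None. Which of the six sensors
    face forward, and the 25 cm threshold, are assumptions."""
    for i in front_sensors:
        d = distances_cm[i]
        if 3 <= d <= CLOSE_CM:   # readings below ~3 cm are out of range
            return "STOP"
    return None
```

In TRIC the resulting stop or backward command would be forwarded to the master MDS over the I2C bus.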
Figure 7. Arrangement of the six ultrasonic sensors
Human factors and interface
design of TRIC
As mentioned earlier, the TRIC robot is intended to be used as often and as easily as
a home appliance. Therefore human factors considerations are very important.
The current height of the robot is 75 cm,
with the camera positioned at approximately 70 cm from the ground. At this height, optimal viewing distances
were determined to fulfill two human factors considerations: limiting neck
flexion to less than 20-30° [Chaffin et al., 1999] during prolonged
interaction, and maintaining an acceptable viewing distance for the elderly
[Council on Aging of Ottawa, 2006]. Anthropometric data on Taiwanese elderly
[Chiu et al., 2000] was used to determine appropriate horizontal positioning
distances for taller and shorter participants between themselves and the robot.
When the participant is standing, the optimal viewing distance lies between 110 cm and 154 cm, depending on the height of the participant.
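The geometry behind these viewing distances can be sketched as below: the horizontal distance that keeps downward neck flexion within the guideline follows from the drop between the participant's eye height and the 70 cm camera height (the eye height in the example is illustrative, not taken from the anthropometric tables).

```python
import math

def min_viewing_distance_cm(eye_height_cm, camera_height_cm=70.0,
                            max_flexion_deg=30.0):
    """Smallest horizontal distance at which looking down at the camera
    keeps neck flexion within the limit. Uses the 30 deg upper bound of
    the 20-30 deg guideline; eye heights would come from the
    anthropometric data cited in the text."""
    drop = eye_height_cm - camera_height_cm   # how far the gaze falls
    return drop / math.tan(math.radians(max_flexion_deg))
```

For an illustrative standing eye height of 150 cm this gives roughly 139 cm, inside the 110–154 cm range reported above.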
The weight of TRIC is approximately 8.7 kg when the battery pack is
installed. This is the lowest achievable weight of TRIC while
maintaining all of its current functions. Two common ergonomic tools were used
to assess the weight of the robot for lifting purposes from a height of 12 in. to a height of 32 in., with a horizontal distance of 12 in., and a frequency of 1/8 hrs, or once a
day. Both the revised NIOSH lifting equation [Waters et al., 1993] and Mital et
al.'s [Mital et al., 1993] tables for lifting/lowering limits resulted in a
lifting limit of 15 kg
for 90% of the female population. However, since TRIC was designed to communicate with elderly participants, a
correction factor for age was needed to account for decreases in strength. From
the graph depicting a decrease in total body strength with age by Voorbig and
Steenbekkers [Voorbig and Steenbekkers, 2001], at 80 years and above, strength
falls to 57% of peak body strength among women. Therefore, the 15 kg limit found via the revised NIOSH
lifting equation and Mital’s Lifting/Lowering tables was multiplied by 0.57 to
result in a corrected lifting limit of 8.55 kg.
This new limit covers a larger portion of the general population than typical
ergonomic standards do, and accounts for the majority of elderly
populations as well. The weight of TRIC
falls within 0.15 kg
of the new corrected lifting limit. Thus we can assume a low risk of injury
among a large majority of the general and elderly population for lifting and
lowering TRIC once a day from the
locations mentioned above.
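The corrected limit follows from a single multiplication, sketched here for clarity:

```python
def corrected_lifting_limit_kg(base_limit_kg=15.0, strength_fraction=0.57):
    """Scale the 15 kg limit from the revised NIOSH equation and the
    Mital tables by the age-related strength fraction (57% of peak
    strength among women aged 80 and above)."""
    return base_limit_kg * strength_fraction

limit = corrected_lifting_limit_kg()   # 8.55 kg corrected limit
margin = 8.7 - limit                   # TRIC's 8.7 kg is within 0.15 kg
```

Running this reproduces the 8.55 kg corrected limit, with TRIC's 8.7 kg weight falling within 0.15 kg of it.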
It is also suggested to keep TRIC's speed below typical walking speed (1.2 m/s
for pedestrian crossings [Purser et al., 2005]) but above the lowest walking
speed found among elderly people undergoing rehabilitation (0–0.17 m/s [Purser
et al., 2005]). This prevents collisions caused by differing speeds between the
robot and the participant, while maintaining a reasonable speed of
transportation for the user. Therefore the current speed of the robot is set to
23 cm/s. In addition, the robot is equipped with
obstacle detection sensors that cease the robot’s motion when a certain minimum
distance is breached. This will also prevent the robot from colliding with its
participants and the participant’s environment.
An important consideration for communication between the user and the
participant via TRIC is sound quality for projection and receiving. It has been
identified that as we age we prefer louder sounds for hearing speech adequately
[Coren, 1994]. For this reason, speakers capable of 85 dB of sound, with volume
control, are mounted on TRIC so
that elderly participants would be able to hear speech sounds produced by
remote users clearly. In addition, the closest point for communication is on TRIC’s “head” (the highest point on the
robot). Therefore, a microphone is placed as close as feasibly possible to the
head of TRIC in an attempt to
optimize voice capture from the participant to the user. It has also been
suggested that a remote control with various functions for TRIC be created in the future that would include a microphone to
further improve voice capture from the participant.
Figure 8 shows
the user control interface for TRIC, which contains five resizable frames.
Each frame provides panels for handling different tasks. The user first inputs
the IP address of the MDS and the IP camera in the upper-right corner. The MDS
and the IP camera can share one physical IP address using different ports. An
authorization process using passwords can also be implemented.
The large frame
on the left hand side of the screen contains a video image from TRIC.
The lower left-hand frame provides buttons for controlling the movement of TRIC, as well as the button for detecting
environmental situations. The frame for environmental situations allows users
to adequately assess the distance of objects around TRIC by processing
the measurements from the environmental perception module. Other environmental information
such as temperature is also displayed in this frame. The lower right-hand frame
provides the control buttons of the IP camera for pan, tilt, and zoom. Users
can express emotions by pushing the corresponding buttons above the IP
camera control frame. Basic autonomous behaviors can be activated by pushing
the control buttons in the middle right-hand frame.
Figure 8. Control interface for TRIC
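Sharing one physical IP between the MDS and the IP camera simply means the two services listen on different ports. A minimal client-side sketch, assuming a hypothetical port assignment and a newline-terminated command protocol (TRIC's actual wire format is not specified here):

```python
import socket

MDS_ADDR = ("192.0.2.10", 5000)   # example address; the MDS port is assumed
CAM_ADDR = ("192.0.2.10", 80)     # same physical IP, a different port

def send_mds_command(command: bytes) -> bytes:
    """Send one command to the MDS over TCP and return the reply.
    The newline-terminated format is an assumption for this sketch."""
    with socket.create_connection(MDS_ADDR, timeout=2.0) as sock:
        sock.sendall(command + b"\n")
        return sock.recv(1024)
```

The camera's video stream and pan/tilt/zoom commands would use `CAM_ADDR` through the camera's own HTTP/RS232 gateway, while motion and behavior commands go to the MDS.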
A preliminary evaluation for
usability for the remote user
Experimental data is necessary to assess TRIC’s
success as a telepresence communication device. TRIC’s evaluations will
be extensive, as there are several realms of concern. Objective measures of
functionality are required from a technical perspective to ensure that TRIC
is capable of performing each intended command. Objective measures of usability
are also required to determine if TRIC proves to be a realistic, useful
communication tool. Lastly, subjective measures from users and participants are
needed to determine how accepted TRIC is in a domestic environment for
its intended purpose.
This section describes a preliminary evaluation
for assessing usability and functionality of TRIC. The purpose of the evaluation was to assess how efficient new
users were at navigating TRIC through a fixed obstacle course (similar
to that of a furnished, domestic environment), to assess the efficiency of new
users using the multi-directional viewing camera to locate predetermined
targets, and to assess whether there was a significant improvement in time
across users with practice for both navigation and visual objectives.
In this test, seven users, unfamiliar with TRIC’s
navigation interface, were asked to navigate TRIC through a
predetermined route to a specified target (Task A). Once the target was
reached, users were then asked to locate 2 separate target objects with TRIC’s
adjustable camera in sequential order (Task B and C). The test was then
repeated 2 more times by each user (Trial 1 to Trial 3). Users were located in
another room such that they had no visual contact with TRIC or the
remote environment other than the first-person visual display transmitted to
the PC operating interface. In addition, users were familiar with the
experimental space and the location of the target objects before testing
commenced. This was done as it was assumed that potential users would be
familiar with the environment of the participants.
Due to the small number of subjects and subsequent
samples, statistical analysis was not conducted on the raw data scores from
these preliminary evaluations. Table 4 shows the average time of the 7 users to
complete each task from Trial 1 to Trial 3. For each task, an “optimal time” achieved
by a skilled user after many practices was used as a reference for calculating
the “efficiency” of each user on this task. From the efficiency scores, the
navigation task (Task A) seems to be the most difficult task for new users.
However, a trend of an increase in percent efficiency from Trial 1 to Trial 3
is clear, which implies that the users’ performance improves quickly after
practice. The efficiency scores of Task B and C for locating objects in the
remote environment through TRIC
remained between 80% and 90%, which shows that the users had no difficulty with these visual tasks.
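The efficiency scores are simply the ratio of the skilled user's optimal time to each user's actual time, which can be sketched as:

```python
def efficiency_percent(optimal_time_s, user_time_s):
    """Efficiency of one task attempt relative to a skilled user's
    "optimal time" (optimal / actual, expressed as a percentage)."""
    return 100.0 * optimal_time_s / user_time_s
```

For example, a 40 s optimal time against a 50 s attempt gives 80% efficiency (the times here are illustrative, not values from Table 4).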
Overall it can be assumed that users are able to
effectively achieve navigational and visual target goals, improving within
as little as three attempts. Extensive tests are planned in an environment similar
to that of the intended use (a domestic environment) with both users and the
intended participants (older adults), to determine objectively and subjectively
TRIC’s practicality as an effective tele-health communication device.
Table 4. Data for the preliminary evaluation for
usability for the remote user
Conclusions and Discussion
This paper presents the development of the telepresence robot TRIC. The main aim behind TRIC’s development is to allow
elderly populations to remain in their home environments, while loved ones and
caregivers are able to maintain a higher level of communication by establishing
a true sense of shared space among geographically remote persons.
TRIC allows users to project a sense of self via telepresence. By
controlling TRIC’s IP camera, navigational actions, real-time voice
communication, and limited non-verbal cues, the user is able to communicate with
and monitor the elderly participant in a manner beyond plain video-audio communication.
Human factors considerations have also been implemented to allow for optimal
communication and use by both sets of users (local and remote), and to lessen
any risk of potential injury.
With its simplified design, safety factors accounted for, and wide accessibility
via the Internet and a personal computer, it is believed that TRIC would
be a cost-feasible opportunity to provide an enriched communication and
monitoring experience for the elderly, their loved ones, and their caregivers.
Preliminary functional tests have shown that new users were able to effectively
navigate TRIC and easily locate visual targets. Their efficiency in
doing so increased markedly in as little as three trials. Further tests in
domestic environments with elderly participants are needed to further assess
objective functions and to capture subjective ratings on TRIC’s
potential as an effective communication device.
Besides interpersonal communication, there is also potential for using TRIC in home telehealth monitoring and
tele-homecare visits. Figure 9 shows the structure of the “Portable
Telehomecare Monitoring System (PTMS)” developed by the authors [Hsu et al.,
2006]. The PTMS is a decentralized system. Instead of using the centralized
database structure that gathers data from many households, a single household
is the fundamental unit for sensing, data transmission, storage and analysis in
the PTMS. The monitoring data is stored in the Distributed Data Server (DDS,
which is exactly the same as the MDS used in TRIC) inside a household.
As shown in
Figure 9, sensing data from sensors embedded in the home environment are
transmitted to the DDS, which is the core component of the PTMS. Sensing signals
are then processed and stored in the DDS. Authorized remote users can request
data from the DDS using an Internet web browser (through an application server)
or a Visual Basic (VB) program (direct access to the DDS). Event-driven
messages (mobile phone short messages or emails) can be sent to specified
caregivers when an urgent situation is detected.
Figure 9. The structure of the PTMS
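The event-driven messaging described above can be sketched as a simple threshold check; the sensor name, bounds, and alert format below are illustrative, and a real deployment would forward the alert as an SMS or email:

```python
THRESHOLDS = {"systolic_bp": (90, 160)}  # mmHg; example bounds only

def check_event(sensor_name, value, thresholds=THRESHOLDS):
    """Return an alert string when a reading falls outside its configured
    range, else None. In the PTMS the DDS would forward such a string to
    caregivers as a mobile phone short message or email."""
    low, high = thresholds[sensor_name]
    if value < low or value > high:
        return f"ALERT: {sensor_name} = {value}, outside [{low}, {high}]"
    return None
```

The same pattern extends to any sensor embedded in the home environment: each incoming reading is checked against its configured range as it is stored in the DDS.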
Using the PTMS
structure described above, an “Activities of Daily Living (ADL)” monitoring
system is under development, using the MDS of TRIC
as its core, to monitor changes in the pattern of daily physical activities,
to recognize the transition of a senior from a healthy, independent state into
a state of incapacity and dependency, and to remind the caregivers to take
early actions. Specially designed sensors embedded in the home environment detect
activities such as eating, bathing, using the toilet, lying in bed and watching
TV. Sensor signals are transmitted to the MDS through a battery-powered RF transmitter.
Caregivers can read real-time sensor data, download historical data, perform
various analyses, or set up event-driven messages (mobile phone short messages
or email messages) once any sensor is activated.
TRIC can also become a
station for vital sign parameter (VSP) measuring. Commercial VSP measuring
devices can easily be integrated with the MDS. Integration of an electronic sphygmomanometer
and a glucose monitor with the MDS has been successfully demonstrated. The data
acquired from these two measuring devices can be transmitted to the MDS through
RS-232 serial interface for data processing, storage, and further analyses.
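On the MDS side, integrating such a device amounts to reading newline-terminated frames from the serial port (e.g., with a serial I/O library) and parsing each one. The frame layout below is hypothetical; the real format would come from the instrument's RS-232 documentation.

```python
def parse_bp_frame(line: bytes):
    """Parse one reading from a hypothetical sphygmomanometer frame of
    the form b"BP,120,80\\r\\n" (systolic, diastolic in mmHg). The actual
    frame layout depends on the instrument's RS-232 specification."""
    text = line.decode("ascii").strip()
    fields = text.split(",")
    if len(fields) != 3 or fields[0] != "BP":
        raise ValueError(f"unrecognized frame: {text!r}")
    return {"systolic": int(fields[1]), "diastolic": int(fields[2])}
```

Parsed readings would then be stored in the MDS for processing and further analyses, alongside data from the glucose monitor.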
The main advantage of using TRIC in home
tele-health monitoring is its mobility and capability of interpersonal
communication. Family members and caregivers can actively approach the elderly
participants through TRIC to express
care if abnormal signals are detected. Doctors and nurses can also use TRIC as a tool for tele-homecare visits.
Therefore, future development of TRIC
will emphasize enhancing its functions in home tele-health monitoring.
References
Akin, D. L.,
Minsky, M. L., Thiel, E. D., and Kurtzman, C. R., 1983, Space Applications of
Automation, Robotics and Machine Intelligence Systems (ARAMIS) - Phase II. Vol.
1-3 NASA Contractor Reports no. 3734-6, for Contract NAS8-34381.
Brooker, J. P.,
Sharkey, P. M., Wann, J. P., and Plooy, A. M., 1999, “A helmet mounted display
system with active gaze control for visual telepresence,” Mechatronics, 9(7),
Burgard, W., Cremers, A. B., Fox, D., Hahnel, D., Lakemeyer, G., Schulz, D., Steiner, W., and
Thrun, S., 1999, “Experiences with an interactive museum tour-guide robot,” Artificial
Intelligence, 114(1-2), 3-55.
Ballantyne, G. H.,
2002, “Robotic surgery, telerobotic surgery, telepresence, and telementoring -
Review of early clinical results,” Surgical Endoscopy and Other Interventional
Techniques, 16(10), 1389-1402.
Coren, S., 1994,
“Most comfortable listening level as a function of age,” Ergonomics, 37(7), 1269-1274.
Chaffin, D. B., Andersson, G. B. J., and Martin, B. J., 1999, Occupational Biomechanics, 3rd
Edition. New York:
John Wiley & Sons.
Chiu, H. C., Chang, H. Y., Mau, L. W., Lee, T. K., and Liu, H. W., 2000,
“Height, Weight, and Body Mass Index of Elderly Persons in Taiwan,” The
Journals of Gerontology, 55A,
Council on Aging
of Ottawa., 2006, Senior Accessible Checklist.
Draper, J. V.,
1995, “Teleoperators for advanced manufacturing: applications and human factors
challenges,” International Journal of Human Factors in Manufacturing, 5(1),
Daniel, R. W. and
McAree, P. R., 1998, “Fundamental limits of performance for force reflecting
teleoperation,” International Journal of Robotics research, 17(8),
2001, “NASA's Robonaut,” Industrial Robot, 28(1), 35-39.
I. and Weiss, P. L.,
2001, “Video-mediated communication in the classroom to support sick children:
a case study,” International Journal of Industrial Ergonomics, 28(5),
Fong, T., Nourbakhsh, I., and Dautenhahn, K., 2003, “A
survey of socially interactive robots,” Robotics and Autonomous Systems, 42(3-4),
Green, P. S.,
Hill, J. W., Jensen, J. F., and Shah, A., 1995, “Telepresence surgery,” Engineering
in Medicine and Biology Magazine, IEEE, 14(3), 324-329.
Hawksford, M. O.
J., 2002, “Scalable multichannel coding with HRTF enhancement for DVD and
virtual sound systems,” Journal of the Audio Engineering Society, 50(11),
Hopf, K., 2000, “An
autostereoscopic display providing comfortable viewing conditions and a high
degree of telepresence,” Circuits and Systems for Video Technology, IEEE
Transactions on, 10(3), 359-365.
Hsu, Y. L.,
Yang, C. C., Tsai, T. C., Cheng, C. M., and Wu, C. H., 2006, “Development of a
Decentralized Home Telehealth Monitoring System,” to appear, Telemedicine and e-Health.
Izquierdo, E., 1997,
“Stereo matching for enhanced telepresence in three-dimensional
videocommunications,” Circuits and Systems for Video Technology, IEEE
Transactions on, 7(4), 629-643.
Suzuki, M., Oshiro, H., Tanaka, M., Inoguchi, T., Takasugi, H., Saito, Y., and
Yokoyama, T., 2003, “Pilot study on improvement of quality of life among
elderly using a pet-type robot,” Computational Intelligence in Robotics and
Automation, 2003. Proceedings. 2003 IEEE International Symposium on, 1,
Tanaka, T., and Fukuda, T., 2004, “Neuro-fuzzy control of a robotic exoskeleton
with EMG signals,” Fuzzy Systems, IEEE Transactions on, 12(4), 481-490.
Lytle, J. M.,
2002, “Robot care bears for the elderly,” BBC
News, Thursday, 21 February, 2002, 08:51.
Lei, B. J.,
Chang, C., and Hendriks, E. A., 2004, “An efficient image-based telepresence
system for videoconferencing,” Circuits and Systems for Video Technology,
IEEE Transactions on, 14(3), 335-347.
Mital, A., Nicholson, A. S., and Ayoub, A. A., 1993, A Guide to Manual Materials
Handling, Taylor and Francis, London.
Roy, N., Baltus,
G., Fox, D., Gemperle, F., Goetz, J., Hirsch, T., Margaritis, D., Montemerlo,
M., Pineau, J., Schulte, J., and Thrun, S., 2000, “Towards Personal Service Robots for
the Elderly,” Workshop on Interactive Robots and Entertainment (WIRE 2000), Pittsburgh, PA.
Ohm, J. R.,
Gruneberg, K., Hendriks, E., Izquierdo, M. E., Kalivas, D., Karl, M.,
Papadimatos, D., and Redert, A., 1998, “A realtime hardware system for
stereoscopic videoconferencing with viewpoint adaptation,” Signal
Processing-Image Communication, 14(1-2), 147-171.
Scholz, J., and Fiorini, P., 2001, “A robotic wheelchair for crowded public
environments,” Robotics & Automation Magazine, IEEE, 8(1), 38-45.
Pollack, M. E.,
Brown, L., Colbry, D., et al., 2003, “Autominder: an intelligent cognitive
orthotic system for people with memory impairment,” Robotics and Autonomous
Systems, 44(3-4), 273-282.
Purser, J. L.,
Weinberger, M., Cohen, H. J., Pieper, C. F., Morey, M. C., Li, T., Williams, G.
R., and Lapuerta, P., 2005, “Walking speed predicts health status and hospital
costs for frail elderly male veterans,” Journal
of Rehabilitation Research & Development, 42(4), 535-546.
Bluethmann, W., Mehling, J., Ambrose, R. O., Diftler, M., Chu,
M., and Necessary, R., 2005, “Robonaut: the ‘short list’ of technology hurdles,”
Computer, 38(1), 28-37.
Sheridan, T. B., 1992a, Telerobotics, Automation and
Human Supervisory Control, MIT Press, Cambridge, MA.
Sheridan, T. B.,
1992b, “Musing on telepresence and virtual presence,” Presence:
Teleoperators and Virtual Environments, 1, 120-126.
Schloerb, D. W.,
1995, “A Quantitative Measure of Telepresence,” Presence: Teleoperators and
Virtual Environments, 4(1), 64-80.
Stoker, C. R.,
Burch, D. R., Hine, B. P., III, and Barry, J., 1995, “Antarctic undersea
exploration using a robotic submarine with a telepresence user interface,” Expert,
IEEE, 10(6), 14-23.
Satava, R. M.,
1999, “Emerging technologies for surgery in the 21st century,” Archives of
Surgery, 134(11), 1197-1202.
Schurr, M. O.,
Buess, G., Neisius, B., and Voges, U., 2000, “Robotics and telemanipulation
technologies for endoscopic surgery - A review of the ARTEMIS project,” Surgical
Endoscopy and other Interventional Techniques, 14(4), 375-381.
Burgard, W., Fox, D., Thrun, S., and Cremers, A. B., 2000, “Web interfaces for
mobile robots in public places,” IEEE Robotics & Automation magazine, 7(1),
Spudis P. D.,
2001, “The case for renewed human exploration of the Moon,” Earth Moon and
Planets, 87(3), 159-169.
Tzafestas S. G.,
and Prokopiou P. A., 1997, “Compensation of teleoperator modeling uncertainties
with a sliding mode controller,” Robotics and Computer-Integrated
Manufacturing, 13(1), 9-20.
Komoriya, K., Sawada, K., Nishiyama, T., Itoko, T., Kobayashi, M., and Inoue, K.,
2003, “Telexistence cockpit for humanoid robot control,” Advanced Robotics,
Thacker, P. D.,
2005, “Physician-Robot Makes the Rounds,” Journal of the American Medical
Association, 293(2), 150.
Burgard, W., Argyros, A., Hahnel, D., Baltzakis, H., Pfaff, P., and Stachniss,
C., 2005, “TOURBOT and WebFAIR: Web-operated mobile robots for tele-presence in
populated exhibitions,” Robotics & Automation Magazine, IEEE, 12(2),
Voorbig, A. I. M.,
and Steenbekkers, L. P. A., 2001, “The composition of a graph on the decline of
total body strength with age based on pushing, pulling, twisting, and gripping
force,” Applied Ergonomics, 32,
Waters, T. R.,
Putz-Anderson, V., Garg, A., and Fine, L. J., 1993, “Revised NIOSH equation for
the design and evaluation of manual lifting tasks,” Ergonomics, 36, 749-776.
Shibata, T., Saito, T., and Tanie, K., 2004, “Effects of robot-assisted
activity for elderly people and nurses at a day service center,” Proceedings
of the IEEE, 92(11), 1780-1788.
Xu, L. Q.,
Loffler, A., Sheppard, P. J., and Machin, D., 1999, “True-view
videoconferencing system through 3-D impression of telepresence,” BT Technology
Journal, 17(1), 59-68.