
Robotics

 

J. SEBASTIAN, Y. COLINDRES, Y.L. HSU. From a tool to a companion - Evaluation on the effect of a robotic user interface for a voice-command companion robot. Gerontechnology 2018;17(suppl):101s; https://doi.org/10.4077/gt.2018.17.s.098.00

Purpose The voice is one of the most natural forms of interaction between humans. As a result, voice command for human-computer interaction has increasingly been tested in devices such as mobile phones, smart speakers and more¹. Cloud-based smart assistants using natural language processing have become one of the biggest trends today, and many companion robots for older adults have also implemented natural language processing engines to provide a more natural way of interaction². The purpose of this study is to evaluate the effectiveness of a robotic user interface (RUI) for a voice-command, cloud-based companion robot, and how the RUI changes user perception towards cloud-based smart assistants.

Method Three voice-enabled devices were used in the lab-based experiment: a mobile phone, a cloud-based smart speaker, and a companion robot (Figure 1). The companion robot created for this study shared the same natural language processing engine with the smart speaker. Nine participants (6 female and 3 male), belonging to the age groups 18-29, 30-49 and 50+ years old, were recruited for the preliminary evaluation. All participants had used voice-enabled devices in the past, and 6 of them spent more than 6 hours per day using their mobile phones to complete information-based tasks. Participants received a set of both function-based tasks (such as "Turn off the coffee machine" or "Turn on the fan") and information-based tasks (such as querying for the time, the weather, or a sports game's result), and were instructed to repeat the given tasks to the three voice-enabled devices. Their gaze-point fixation was monitored via a custom-made eye tracker³. Participants were allowed to skip or repeat a task if any errors occurred in the voice commands. After the experiment, the participants filled out a questionnaire to evaluate the usability of each device.

Results & Discussion As shown in Table 1, the companion robot drew an average of 85% gaze-point fixation, while participants fixated on the smart speaker only 10% of the time during interaction, even though both shared the same natural language processing engine. In particular, the 50+ age group had 100% gaze-point fixation towards the companion robot during interaction. Participants showed a clear tendency to speak to the companion robot as if it were a real person, while they treated the smart speaker like a tool. It should be noted that the mobile phone had 100% gaze-point fixation because participants needed visual confirmation of the voice commands on the phone's screen. The participants also showed the highest error tolerance towards the companion robot: they were more patient with it and were willing to repeat a task when the robot did not understand a command, repeating 85% of failed tasks with the companion robot versus only 68% with the smart speaker. Though the robot's usability score was only slightly higher than the smart speaker's, the perceived-usability scores showed that the companion robot was better accepted. In conclusion, this preliminary test illustrated that adding a robotic user interface seemed to change the participants' perception of and interaction behavior towards a cloud-based smart assistant, from a tool to a "companion".
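The abstract does not include its analysis code; the following Python sketch only illustrates how a gaze-point fixation percentage like those in Table 1 could be computed, as the share of eye-tracker samples that land on a device during an interaction. The sample format, the rectangular region of interest, and all names here are assumptions made for illustration, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # horizontal gaze coordinate in scene pixels (assumed format)
    y: float  # vertical gaze coordinate

def fixation_percentage(samples, roi):
    """Percent of gaze samples inside roi = (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = roi
    if not samples:
        return 0.0
    hits = sum(1 for s in samples
               if x_min <= s.x <= x_max and y_min <= s.y <= y_max)
    return 100.0 * hits / len(samples)

# Hypothetical usage: 3 of 4 samples fall in the robot's bounding box -> 75.0
robot_roi = (200, 100, 400, 300)
samples = [GazeSample(250, 150), GazeSample(300, 200),
           GazeSample(390, 290), GazeSample(50, 50)]
print(fixation_percentage(samples, robot_roi))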

 


References

1. Kaushik D, Jain R. Natural user interfaces: trend in virtual interaction. arXiv preprint arXiv:1405.0101; 2014

2. Khayrallah H, Trott S, Feldman J. Natural Language For Human Robot Interaction. In: International Conference on Human-Robot Interaction (HRI); 2015

3. Mantiuk R, Kowalik M, Nowosielski A, Bazyluk B. Do-it-yourself eye tracker: Low-cost pupil-based eye tracker for computer graphics applications. In: International Conference on Multimedia Modeling. Berlin, Heidelberg: Springer; 2012; pp. 115-125

 

Keywords: robotic user interface, human-computer interaction

Address: Yuan Ze University, No. 135, Yuandong Road, Zhongli District, Taoyuan City, 320, Taiwan;

E: jkings16@yahoo.com


Figure 1. User interacting with the companion robotic interface while wearing an eye tracker
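Reference 3 describes the class of low-cost, pupil-based trackers behind the setup shown in Figure 1. As a rough sketch of the underlying idea only (not the authors' code; the file name and threshold value are placeholders), a pupil centre can be estimated from an infrared eye-camera frame by thresholding for the darkest blob, using OpenCV:

import cv2

# Placeholder input: a single grayscale infrared eye-camera frame.
frame = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise SystemExit("expected a grayscale IR eye image at eye_frame.png")

# The pupil is the darkest blob in an IR eye image: threshold it out,
# then take the centroid of the largest dark contour as the pupil centre.
_, mask = cv2.threshold(frame, 40, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"pupil centre: ({cx:.1f}, {cy:.1f})")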

 

Table 1. Evaluation results; each cell shows the device average followed by the age groups from left to right: 18-29 | 30-49 | 50+

                     RUI                         Mobile                          Smart Speaker
Gaze point           84% (88% | 63% | 100%)      100% (100% | 100% | 100%)       10% (14% | 3% | 13%)
Error Tolerance      85% (95% | 90% | 71%)       58% (71% | 47% | 57%)           68% (81% | 76% | 47%)
Perceived Usability  1.02 (2.06 | 0.93 | 0.07)   -0.05 (0.93 | -0.67 | -0.40)    0.96 (2.67 | 1.00 | -0.77)
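The error-tolerance rows of Table 1 read as the percentage of failed voice commands that participants chose to repeat on each device. A minimal Python sketch of that aggregation follows; the trial records and field names are hypothetical, since the abstract does not describe the logging format.

from collections import defaultdict

# Hypothetical per-trial log: (device, age_group, failed, repeated_after_failure)
trials = [
    ("RUI", "18-29", True, True),
    ("RUI", "18-29", False, False),
    ("Smart Speaker", "30-49", True, False),
    ("Smart Speaker", "30-49", True, True),
    ("Mobile", "50+", True, True),
]

failed = defaultdict(int)
repeated = defaultdict(int)
for device, age_group, did_fail, did_repeat in trials:
    if did_fail:
        failed[(device, age_group)] += 1
        if did_repeat:
            repeated[(device, age_group)] += 1

for key in sorted(failed):
    tolerance = 100.0 * repeated[key] / failed[key]
    print(f"{key[0]} ({key[1]}): {tolerance:.0f}% of failed tasks repeated")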