Authors: Po-Er Hsu (2013-01-28); recommended: Yeh-Liang Hsu (2013-02-10).
Note: This article is Chapter 3 of Po-Er Hsu’s doctoral thesis “Development of an intelligent robotic wheelchair as the center of mobility, health care, and daily living of older adults.”

Chapter 3. Development of a user-configurable indoor navigation system for robotic wheelchairs

This chapter presents the development of an indoor navigation system based on quick response (QR) codes for the iRW, aimed at reducing the operation load on the wheelchair user. Indoor navigation of the iRW uses the automated guided vehicle (AGV) as the design concept for route planning and motion execution. QR code labels are deployed on the ceiling as a “virtual AGV track.” The user interface and indoor navigation algorithm are implemented as an App on the tablet mounted on the armrest of the iRW. The main feature of this indoor navigation system is that it does not need to store a navigation map of the indoor environment. The performance test results demonstrate that this indoor navigation system is practical for the home environment.

3.1  Indoor navigation of robotic wheelchairs

Leonard and Durrant-Whyte [1991] summarized three questions inherent in designing a navigation system: “Where am I?” “Where do I want to go?” and “How can I get there?” The first question is commonly referred to as “localization.” The second question is related to the navigation user interface, and the third question involves route planning and motion execution. For outdoor navigation, localization is often achieved by the global positioning system (GPS), which is a mature technology. Wireless sensor networks (WSNs) are often used for localization in indoor robot navigation. Distance measurements in WSNs can be obtained using techniques such as received signal strength (RSS) [Nasipuri and Li, 2005; Li et al., 2012a; Rahman et al., 2012], time of arrival (TOA) [Sathyan et al., 2012], time difference of arrival (TDOA) [Joseph and Anthony, 2012; Blandin et al., 2012], angle of arrival (AOA) [Singh and Sircar, 2012], etc. Indoor localization methods based on AOA and propagation time measurement can achieve better location accuracy than those based on RSS, at a higher equipment cost [Mao et al., 2007; Anastasi et al., 2009].

Many indoor localization techniques are subject to non-line-of-sight (NLOS) errors. NLOS errors between two sensors arise when the line-of-sight between them is obstructed or when the line-of-sight measurements are contaminated by reflected and/or diffracted signals. Machine learning algorithms, which recognize pattern features of the area, are often used in indoor localization to improve location accuracy. The indoor location accuracy reported in recent research ranges from about one and a half to three meters [Li et al., 2012b; Stella et al., 2012; Sathyan et al., 2012; Rantakokko et al., 2011].

The navigation user interface concerns not only how the user inputs the desired destination, but also how the navigation system is installed in the indoor environment. Speech synthesis/recognition and the touch screen are the usual interfaces for a user to select and enter the destination. For example, Galindo et al. [2006b] developed the robotic wheelchair SENA, which can recognize the user’s speech, to facilitate mobility assistance for disabled people and older adults. The installation of the indoor navigation system is affected by the methods chosen for route planning and motion execution.

For route planning and motion execution, there are two types of indoor navigation systems, depending on how the navigation map is stored. In the first type, the navigation map or spatial database, which contains the pedestrian routes, barriers, etc., has to be constructed in the robot (e.g., a robotic wheelchair or mobile robot) in advance. Cheein et al. [2010] proposed a sequential extended Kalman filter (EKF) feature-based simultaneous localization and mapping (SLAM) algorithm to establish the navigation map for a semi-autonomous robotic wheelchair. The robotic wheelchair was equipped with a laser sensor to obtain the distance from nearby objects. Chow and Xu [2006] presented a navigation/localization learning methodology to abstract the human sequential navigation skill and encode it in the navigation map of the robotic wheelchair, which was equipped with seven wide-angle ultrasonic range sensors. The robotic wheelchair used a lookup table to learn the skills from the user’s operation, and built the navigation map from scene features.

Another approach is landmark-based indoor navigation. Instead of having the robotic wheelchair build the navigation route map itself with on-board sensors, various types of landmarks have been proposed. Courbon et al. [2010] presented an indoor navigation system based on natural landmarks. Images of the environment are first sampled, stored, and organized as a set of key images (a visual path), and the robot can then follow the visual path to move. Zeng et al. [2008] used odometry and barcodes to identify the orientation and the absolute location of the robotic wheelchair. These unique barcodes served as artificial landmarks that were preloaded in the memory of the robotic wheelchair. De la Cruz et al. [2011] deployed metallic paths and radio-frequency identification (RFID) tags on the floor. They developed a non-linear control scheme to increase trajectory tracking accuracy, which enabled the metallic paths to be followed precisely.

The robotic wheelchair is not intended to be fully autonomous. Rather, it is a mobility assistive technology (MAT) in which the user and the robotic wheelchair collaborate. Man-machine collaborative control, which addresses how a human and a robot collaborate to perform tasks and achieve goals [Fong et al., 1999], is another important research area in robotic wheelchairs [Katsura et al., 2004; Galindo et al., 2006a; Holzapfel et al., 2008; Urdiales et al., 2011; Braga et al., 2011]. The purpose of the navigation system of the robotic wheelchair is to assist the wheelchair user in maneuvering in the indoor environment and to reduce the operation load incurred by the wheelchair user. Just as the design of the wheelchair must be convenient and flexible, so must the installation process of the navigation system in the home environment.

3.2  Design concept of the user-configurable indoor navigation system

A semi-autonomous indoor navigation mode on the iRW uses a concept similar to the AGV to reduce the operation load on the wheelchair user or caregivers in the home environment. QR code labels are deployed on the ceiling as the “virtual AGV track” (navigation map). The user interface and indoor navigation algorithm are implemented as an App on the tablet. When the iRW is steered under the track, the camera on the tablet, which is mounted on the iRW, captures and recognizes the QR code. The information conveyed by the QR code is then interpreted to generate motion commands to the iRW, so that the iRW follows the virtual AGV track to the desired destination, such as the bedroom, living room, or kitchen. The user then takes control to steer the iRW to the final precise location.

Figure 3-1 shows the user interface for setting up the virtual AGV track. The indoor layout (Figure 3-1, left) is mapped to the virtual AGV track on the interface (Figure 3-1, right). The user designates the desired destinations as “target stops” (orange circles corresponding to points 1, 2, 3, and 4 on the indoor layout) and the paths connecting the stops (orange lines) based on their relative positions, without considering real distances. Names of the stops (such as “Office” or “Laboratory”) can also be defined by the user. The “intermediate stops” (yellow circles) are generated automatically, and a QR code is generated for each stop. The user can then print the QR code labels and deploy the virtual AGV track by simply sticking the labels on the ceiling at the positions corresponding to the target stops and intermediate stops.

Figure 3-1. The interface for defining the virtual AGV track
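To illustrate the kind of data this interface produces, the following Python sketch encodes a small hypothetical layout as plain data: named target stops, an automatically generated intermediate stop, grid positions, and the user-drawn connecting paths. The coordinates, connections, and the stop “IS1” are illustrative assumptions rather than the actual layout of Figure 3-1; only the stop names follow the examples used later in this chapter.

# Hypothetical layout produced by the track-definition interface
# (grid coordinates: x grows to the right, y grows "forward").
target_stops = ["Laboratory", "Elevator", "Restroom", "Office"]

positions = {                  # grid position of every stop
    "Laboratory": (0, 0),
    "Elevator":   (1, 0),
    "IS1":        (1, 1),      # intermediate stop generated automatically
    "Restroom":   (1, 2),
    "Office":     (2, 1),
}

paths = [                      # user-drawn connecting paths (undirected)
    ("Laboratory", "Elevator"),
    ("Elevator", "IS1"),
    ("IS1", "Restroom"),
    ("IS1", "Office"),
]

# Adjacency list with grid (Manhattan) lengths, the form consumed by the
# path planning sketch later in this section.
graph = {stop: {} for stop in positions}
for a, b in paths:
    (xa, ya), (xb, yb) = positions[a], positions[b]
    length = abs(xa - xb) + abs(ya - yb)
    graph[a][b] = graph[b][a] = length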

Figure 3-2 shows the content of the QR code. The directions of the reachable target stops relative to the stop represented by the QR code are indicated through five fields: “current position,” “right,” “forward,” “left,” and “backward.” For example, the QR code for “Elevator” (shown in Figure 3-2) indicates that the current position is “Elevator,” that moving left heads toward “Laboratory,” and that moving forward heads toward “Restroom” and “Office.”

Figure 3-2. Arrangement of information contained in the QR code
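The chapter does not specify how the five fields are serialized inside the QR code. The sketch below assumes a simple JSON encoding and shows how a decoded payload could be turned into the list of reachable target stops per direction; the field names follow Figure 3-2, while the JSON format and the helper reachable_stops are assumptions.

import json

# A minimal sketch of the five-field QR payload, assuming a JSON encoding.
# Each direction field lists the target stops reachable by heading that way.
elevator_payload = {
    "current position": "Elevator",
    "right": [],
    "forward": ["Restroom", "Office"],
    "left": ["Laboratory"],
    "backward": [],
}

qr_text = json.dumps(elevator_payload)     # text to embed in the QR code

def reachable_stops(decoded_text):
    """Return {direction: [target stops]} for the non-empty direction fields."""
    fields = json.loads(decoded_text)
    return {d: stops for d, stops in fields.items()
            if d != "current position" and stops}

print(reachable_stops(qr_text))
# {'forward': ['Restroom', 'Office'], 'left': ['Laboratory']}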

Figure 3-3 depicts the path planning algorithm for generating the QR codes. To start, the user inputs the target stops, the names of the target stops (by designating circles on the grid), and the connecting paths (by clicking on the paths). If the endpoints of a user-input connecting path do not both correspond to existing stops, the algorithm generates stops where they are absent, called “intermediate stops.” If a user-input stop has not been named, it is also treated as an intermediate stop.

After the user finishes the setup, the algorithm calculates the shortest path (SPk) from each stop (TSi and ISj) to each target stop. The shortest path computation is based on the Dijkstra algorithm [Dijkstra, 1959], a graph search algorithm that solves the single-source shortest path problem for a graph with nonnegative arc lengths. If a shortest path has infinite length, at least one connecting path is missing or there is only one target stop, and the user needs to input additional connecting paths or target stops. If no shortest path has infinite length, the algorithm stores all of the target stops, intermediate stops, and shortest paths. Finally, the QR codes are generated for all stops, containing the five fields described above.

Figure 3-3. Path planning algorithm for generating the QR codes
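The sketch below reconstructs the core of this procedure under the data layout assumed after Figure 3-1: Dijkstra’s algorithm gives, for every stop, the first hop of the shortest path to each target stop, and the first hop’s grid position determines which of the four direction fields that target stop is listed under. The function names and the grid-based direction rule are illustrative assumptions; the thesis App may organize this step differently.

import heapq

def shortest_paths(graph, source):
    """Dijkstra over {stop: {neighbor: length}} with nonnegative lengths.
    Returns (dist, first_hop): the distance to every stop and the neighbor
    of `source` that begins the shortest path to that stop."""
    dist = {stop: float("inf") for stop in graph}
    first_hop = {}
    dist[source] = 0.0
    heap = [(0.0, source, None)]
    while heap:
        d, node, hop = heapq.heappop(heap)
        if d > dist[node]:
            continue                               # stale heap entry
        if hop is not None:
            first_hop.setdefault(node, hop)
        for nbr, length in graph[node].items():
            nd = d + length
            if nd < dist[nbr]:
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr, nbr if node == source else hop))
    return dist, first_hop

def direction_of(current, neighbor, positions):
    """Classify a neighboring stop as right/forward/left/backward from grid
    coordinates (x grows to the right, y grows forward); assumes the two
    stops lie on the same row or column, as in the grid-based interface."""
    (x0, y0), (x1, y1) = positions[current], positions[neighbor]
    if x1 > x0: return "right"
    if x1 < x0: return "left"
    if y1 > y0: return "forward"
    return "backward"

def build_qr_fields(graph, positions, target_stops):
    """Generate the five-field QR content for every stop; raise an error if
    some target stop is unreachable (i.e., a connecting path is missing)."""
    codes = {}
    for stop in graph:
        dist, first_hop = shortest_paths(graph, stop)
        fields = {"current position": stop,
                  "right": [], "forward": [], "left": [], "backward": []}
        for target in target_stops:
            if target == stop:
                continue
            if dist[target] == float("inf"):
                raise ValueError(f"no connecting path from {stop} to {target}")
            fields[direction_of(stop, first_hop[target], positions)].append(target)
        codes[stop] = fields
    return codes

Applied to the hypothetical layout sketched after Figure 3-1, build_qr_fields(graph, positions, target_stops)["Elevator"] reproduces the example of Figure 3-2: “Laboratory” under “left,” and “Restroom” and “Office” under “forward.”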

3.3  Motion execution of the user-configurable indoor navigation system

After the user deploys the virtual AGV track on the ceiling, the user installs the indoor navigation App on the tablet, which is mounted on the left armrest of the iRW. Figure 3-4 shows the indoor navigation user interface. Although the tablet’s camera faces horizontally, a prism attached to the camera enables it to capture the image of a QR code on the ceiling, as shown in Figure 3-5. Once the iRW is steered under a QR code, the indoor navigation App identifies the current position (e.g., “Laboratory” in Figures 3-1 and 3-4), and the reachable target stops stored in the QR code are displayed on the left of the interface. The user can then select the desired target stop (e.g., “Office” in Figures 3-1 and 3-4) and start indoor navigation by pressing the “Start” button.

Figure 3-4. The user interface for indoor navigation

Figure 3-5. A prism fixed on the camera

Figure 3-6 shows the indoor navigation algorithm. When the indoor navigation system is used in an environment for the first time, the user has to identify the forward, backward, right, and left directions using the electronic compass of the tablet. The user then connects the tablet to the iRW through Bluetooth so that the tablet can transmit commands to control the movement of the iRW. When the user steers the iRW under a QR code, the camera captures and recognizes the QR code, and the reachable target stops are displayed in the user interface. The user can then select the desired target stop to start the indoor navigation function.
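The chapter does not detail how the four directions are recorded during this first-time setup. One simple possibility, sketched below, is to store the compass heading read while the iRW faces the user-defined “forward” direction and to derive the other three directions at 90° offsets; the function name and procedure are assumptions.

def calibrate_directions(forward_heading_deg):
    """Map the four track directions to compass headings, given one reading
    taken while the iRW faces the user-defined "forward" direction.
    (A sketch; the App's actual calibration procedure is not detailed here.)"""
    return {
        "forward":  forward_heading_deg % 360.0,
        "right":    (forward_heading_deg + 90.0) % 360.0,
        "backward": (forward_heading_deg + 180.0) % 360.0,
        "left":     (forward_heading_deg + 270.0) % 360.0,
    }

# Example: if the compass reads 32 degrees while facing "forward":
print(calibrate_directions(32.0))
# {'forward': 32.0, 'right': 122.0, 'backward': 212.0, 'left': 302.0}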

The QR code provides the relative position of the target stop. The electronic compass of the tablet detects the movement direction, and the iRW then orients itself so that its forward edge faces the intended direction of movement. If the movement direction diverges from the path toward the desired target stop, the iRW rotates clockwise or counterclockwise until the difference (angle error) is less than six degrees. A tighter tolerance was found to cause excessive adjustments of the moving direction, which degraded the smoothness of the movement. The ultrasonic sensors detect obstacles at all times. Upon encountering an obstacle within a predetermined distance (20 centimeters), the iRW stops and transfers operation priority to the wheelchair user. The iRW keeps moving until the desired target stop is reached or the user disables the indoor navigation function.

Figure 3-6. The indoor navigation algorithm
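A minimal Python sketch of one iteration of this control loop is given below. It assumes compass-style headings that increase clockwise; compass, ultrasonic, and drive are hypothetical callables standing in for the tablet compass reading, the nearest ultrasonic range reading (in meters), and the Bluetooth motion command sent to the iRW.

ANGLE_TOLERANCE_DEG = 6.0      # tolerance on the "angle error" (Figure 3-6)
OBSTACLE_STOP_M = 0.20         # ultrasonic stop distance (20 cm)

def angle_error(target_heading, current_heading):
    """Signed smallest difference between two headings, in degrees (-180..180)."""
    return (target_heading - current_heading + 180.0) % 360.0 - 180.0

def navigation_step(target_heading, compass, ultrasonic, drive):
    """One iteration of the indoor navigation loop (a sketch).
    compass() -> current heading in degrees; ultrasonic() -> nearest obstacle
    distance in meters; drive(cmd) -> send a motion command over Bluetooth."""
    if ultrasonic() < OBSTACLE_STOP_M:
        drive("stop")                  # hand operation priority back to the user
        return "user_control"
    err = angle_error(target_heading, compass())
    if abs(err) >= ANGLE_TOLERANCE_DEG:
        # Headings increase clockwise, so a positive error means the target
        # direction lies clockwise of the current heading.
        drive("rotate_cw" if err > 0 else "rotate_ccw")
    else:
        drive("forward")
    return "navigating"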

3.4  Performance test of the user-configurable indoor navigation system

The reliability of the user-configurable indoor navigation system depends on the success rate of QR code recognition. A performance test was therefore conducted using different distances between neighboring QR codes. In this research, the success rate of QR code recognition was defined as the ratio of the number of test runs in which at least two consecutive QR codes were interpreted successfully to the total number of test runs. The success rate is heavily influenced by the size of the QR code image appearing in the field of view of the camera and by the duration for which a QR code remains in the field of view.

The viable image size of the QR code depends on environmental conditions such as lighting and scanning distance, and on the data density of the QR code, i.e., the number of columns of modules (dots) in the QR code image. However, neither the environmental conditions nor the data density can easily be adjusted in this application. For example, the tablet is mounted on the iRW at a fixed distance from the ceiling (as shown in Figure 3-7), which fixes the scanning distance of the camera. Moreover, the data density is constant in this application. In this test, the QR codes were printed on A4 paper for convenience, and the image size of each QR code was 0.28 × 0.28 m².

Figure 3-7. The tablet is fixed on the armrest of the iRW

The speed of the iRW influences how long a QR code stays in the field of view of the tablet camera. However, to ensure operating safety and user comfort, the moving speed of the iRW in indoor navigation mode was set at a constant 15 m/min, which is below the lowest manual and electric wheelchair operation speed suggested by Karmarkar et al. [2011]. As the iRW proceeds from the current QR code to the next one, the error in its moving path influences the viewing duration of the next QR code. In the extreme case, if the movement direction diverges from the path completely, the next QR code will not appear in the field of view of the tablet camera at all. The error in moving direction depends on the tolerance of the “angle error” described in the indoor navigation algorithm in Figure 3-6, which was fixed at six degrees. Therefore, the only variable that can be tested in this indoor navigation system is the distance between two QR codes.
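As a rough check on the viewing duration, the sketch below combines this 15 m/min speed with the QR label size given above and the camera field of view reported in the test below. It assumes the 0.94 m dimension of the field of view lies along the travel direction and that the iRW passes directly beneath the label; off-center paths would shorten these durations.

SPEED_M_PER_MIN = 15.0       # constant indoor navigation speed
FOV_ALONG_TRAVEL_M = 0.94    # assumed FOV dimension along the travel direction
QR_SIZE_M = 0.28             # printed QR code edge length

speed = SPEED_M_PER_MIN / 60.0                              # 0.25 m/s
partly_visible_s = FOV_ALONG_TRAVEL_M / speed               # ~3.8 s: some part of the label in view
fully_visible_s = (FOV_ALONG_TRAVEL_M - QR_SIZE_M) / speed  # ~2.6 s: the whole label in view
print(f"partly visible ~{partly_visible_s:.1f} s, fully visible ~{fully_visible_s:.1f} s")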

Figure 3-8 shows the test setting. The distance between two QR codes was 11 m initially. The field of view of the tablet camera was 0.94 × 0.80 m² at a ceiling height of 3.1 m (yielding a distance from the camera to the ceiling of 2.3 m). Following the indoor navigation algorithm in Figure 3-6, the user identified the forward, backward, right, and left directions. Starting from the initial position in Figure 3-8, the iRW was set to face a random direction and then started the indoor navigation function to move to the desired target stop. Table 3-1 shows the QR code recognition results from 30 test runs. There were five failures, yielding a success rate of 83.3%, and the average error in moving direction was 4.8° ± 1.7°.

Figure 3-8. Test setting

Table 3-1. Recognition results with QR code spacing of 11 m

Round                   1   2   3   4   5   6   7   8   9  10
Angle error (degrees)   2   6   6   6   2   6   6   5   6   6
QR code scan result     S   S   S   F   S   F   S   S   S   S

Round                  11  12  13  14  15  16  17  18  19  20
Angle error (degrees)   6   6   0   6   6   6   5   6   5   4
QR code scan result     S   S   S   S   S   F   S   F   F   S

Round                  21  22  23  24  25  26  27  28  29  30
Angle error (degrees)   5   5   6   3   5   6   3   4   3   6
QR code scan result     S   S   S   S   S   S   S   S   S   S

(S = successful recognition, F = failed recognition)

To assess the effect of the distance between two QR codes on the success rate, the distance was reduced to 6 m for the next test. Table 3-2 shows the results. There was one failure, yielding a success rate of 96.7%, and the average error in moving direction was 2.9° ± 1.2°.

Table 3-2. Recognition results with QR code spacing of 6 m

Round                   1   2   3   4   5   6   7   8   9  10
Angle error (degrees)   3   3   3   4   2   3   3   4   3   3
QR code scan result     S   S   S   S   S   S   S   S   S   S

Round                  11  12  13  14  15  16  17  18  19  20
Angle error (degrees)   4   3   3   2   0   3   4   3   3   2
QR code scan result     S   S   S   S   S   F   S   S   S   S

Round                  21  22  23  24  25  26  27  28  29  30
Angle error (degrees)   0   3   3   4   0   5   2   5   3   3
QR code scan result     S   S   S   S   S   S   S   S   S   S

In the third performance test, the distance between two QR codes was set at 3.5 m. Table 3-3 shows the results, indicating a success rate of 100.0%; the average error in moving direction was 1.5° ± 0.8°.

Table 3-3. Recognition results with QR code spacing of 3.5 m

Round                   1   2   3   4   5   6   7   8   9  10
Angle error (degrees)   1   2   2   0   2   1   2   2   1   1
QR code scan result     S   S   S   S   S   S   S   S   S   S

Round                  11  12  13  14  15  16  17  18  19  20
Angle error (degrees)   2   2   1   0   2   2   2   0   2   1
QR code scan result     S   S   S   S   S   S   S   S   S   S

Round                  21  22  23  24  25  26  27  28  29  30
Angle error (degrees)   2   2   2   0   2   0   1   0   2   2
QR code scan result     S   S   S   S   S   S   S   S   S   S

The success rates of QR code recognition in the three tests were significantly different (p = 0.02 < α = 0.05). We can conclude that the success rate of QR code recognition increases when the distance between two QR codes is reduced. The success rate reached 100.0% in the third test, when the distance between two QR codes was set at 3.5 m. This distance is adequate for implementing the system in the home environment.
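The chapter does not state which statistical test produced this p-value. A chi-square test of independence on the success/failure counts of the three tests yields a comparable result, as sketched below (the use of scipy here is an assumption); with expected failure counts this small, an exact test could also be considered.

from scipy.stats import chi2_contingency

# Successes and failures at 11 m, 6 m, and 3.5 m QR code spacing (Tables 3-1 to 3-3).
observed = [[25, 5],
            [29, 1],
            [30, 0]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")   # p is approximately 0.02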

References

Anastasi G., Conti M., Di Francesco M., Passarella A., 2009. “Energy conservation in wireless sensor networks: A survey,” Ad Hoc Networks, v. 7, pp. 537-568.

Blandin C., Ozerov A., Vincent E., 2012. “Multi-source TDOA estimation in reverberant audio using angular spectra and clustering,” Signal processing, v. 92, pp. 1950-1960.

Braga R. A., Petry M., Reis L. P., Moreira A. P., 2011. “IntellWheels: Modular development platform for intelligent wheelchairs,” Journal of rehabilitation research and development, v. 48, pp. 1061-1076. [PMID: 22234711] DOI: 10.1682/JRRD.2010.08.0139

Cheein F. A. A., Lopez N., Soria C. M., di Sciascio F. A., Pereira F. L., Carelli R., 2010. “SLAM algorithm applied to robotics assistance for navigation in unknown environments,” Journal of Neuroengineering and Rehabilitation, v. 7, pp. 7-16.

Chow H. N., Xu Y. S., 2006. “Learning human navigational skill for smart wheelchair in a static cluttered route,” IEEE Transactions on Industrial Electronics, v. 53, pp. 1350-1361.

Courbon J., Mezouar Y., Guénard N., Martinet P., 2010. “Vision-based navigation of unmanned aerial vehicles,” Control Engineering Practice, v. 18, pp. 789-799.

De la Cruz C., Celeste W. C., Bastos T. F., 2011. “A robust navigation system for robotic wheelchairs,” Control Engineering Practice, v. 19, pp. 575-590.

Dijkstra, E. W., 1959. “A note on two problems in connexion with graphs,” Numerische Mathematik, v. 1, pp. 269-271.

Fong T., Thorpe C., Baur C., 1999. “Collaborative Control: A robot-centered model for vehicle teleoperation,” Proceedings of the AAAI Spring Symposium on Agents with Adjustable Autonomy. Stanford, CA.

Galindo C., Cruz-Martin A., Blanco J. L., Fernández-Madrigal J. A., Gonzalez J., 2006a. “A multi-agent control architecture for a robotic wheelchair,” Applied Bionics and Biomechanics, v. 3, pp. 179-189.

Galindo C., Gonzalez J., Fernández-Madrigal J. A., 2006b. “Control architecture for human–robot integration: application to a robotic wheelchair,” Systems, Man and Cybernetics, Part B, IEEE Transactions on, v. 36, pp. 1053-1067.

Holzapfel H., 2008. “A dialogue manager for multimodal human-robot interaction and learning of a humanoid robot,” Industrial Robot, v. 35, pp. 528-535. DOI: 10.1108/01439910810909529

Joseph S. P., Anthony J. W., 2012. “Time difference localization in the presence of outliers,” Signal processing, v. 92, pp. 2432-2443.

Karmarkar A. M., Cooper R. A., Wang H., Kelleher A., Cooper R., 2011. “Analyzing wheelchair mobility patterns of community-dwelling older adults,” Journal of rehabilitation research and development, v. 48, pp. 1077-1086. DOI: 10.1682/JRRD.2009.10.0177

Katsura S., Ohnishi K., 2004. “Human cooperative wheelchair for haptic interaction based on dual compliance control,” Industrial Electronics, IEEE Transactions on, v. 51, pp. 221-228.

Li Xu, Mitton N., Simplot-Ryl I., Simplot-Ryl D., 2012a. “Dynamic beacon mobility scheduling for sensor localization,” Parallel and Distributed Systems, IEEE Transactions on, v. 23, pp. 1439-1452.

Li N., Calis G., Becerik-Gerber B., 2012b. “Measuring and monitoring occupancy with an RFID based system for demand-driven HVAC operations,” Automation in construction, v. 24, pp. 89-99.

Leonard J., Durrant-Whyte H. F., 1991. “Mobile robot localization by tracking geometric beacons,” IEEE Transactions on Robotics and Automation, v. 7, pp. 376-382.

Mao G., Fidan B., Anderson B. D. O., 2007. “Wireless sensor network localization techniques,” Computer Networks, v. 51, pp. 2529-2553.

Nasipuri A., Li K., 2005. “A directionality based location discovery scheme for wireless sensor networks,” Proc. First ACM Int'l Workshop Wireless Sensor Networks and Applications, pp. 105-111.

Rahman M. S., Park Y., Kim K. D., 2012. “RSS-based indoor localization algorithm for wireless sensor network using generalized regression neural network,” Arabian journal for science and engineering, v. 37, pp. 1043-1053. DOI: 10.1007/s13369-012-0218-1

Rantakokko J., Rydell J., Stromback P.,  Handel P., Callmer J., Tornqvist D., Gustafsson F., Jobs M., Gruden M., 2011. “Accurate and reliable soldier and first responder indoor positioning: multisensor systems and cooperative localization,” Wireless Communications, IEEE, v. 18, pp. 10-18.

Sathyan T., Hedley M., Humphrey D., 2012. “A multiple candidate time of arrival algorithm for tracking nodes in multipath environments,” Signal processing, v. 92, pp. 1611-1623.

Singh P., Sircar P., 2012. “Time delays and angles of arrival estimation using known signals,” Signal image and video processing, v. 6, pp. 171-178.

Stella M., Russo M., Begusić D., 2012. “RF localization in indoor environment,” Radioengineering, v. 21, pp. 557-567.

Urdiales C., Fernández-Carmona M., Peula J. M., Cortés U., Annichiaricco R., Caltagirone C., Sandoval F., 2011. “Wheelchair collaborative control for disabled users navigating indoors,” Artificial Intelligence in Medicine, v. 52, pp. 177-191.

Zeng Q., Teo C. L., Rebsamen B., Burdet E., 2008. “A Collaborative Wheelchair System,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, v. 16, pp. 161-170.