UCSB Personal Guidance System - Home

Brief Summary

This project is concerned with developing and evaluating a GPS-based navigation system for visually impaired people, the Personal Guidance System (PGS). It started in 1985 with a concept paper by Jack Loomis (Professor of Psychology), who has since directed the project. Reginald Golledge (Professor of Geography) and Roberta Klatzky (Professor of Psychology, now at Carnegie Mellon University) are the two other principals in the project. We first publicly demonstrated the PGS in 1993 using a bulky prototype carried in a backpack. Since then we have created several versions of the PGS, one of which was carried in a small pack worn at the waist. Our work has mostly focused on designing the user interface and the Geographic Information System (GIS) component (e.g., the spatial database and route-finding software).

Several wearable systems are now commercially available (notably BrailleNote GPS from Pulse Data and Trekker by VisuAide). These systems provide verbal guidance and environmental information via speech and Braille displays. Our recent survey research has confirmed our longstanding belief that visually impaired people sometimes want direct perceptual information about the environment, just as drivers and pilots want pictorial information from their navigation systems, so our recent R&D has concentrated on spatial displays for such systems. Dr. James Marston, a Postdoctoral Researcher in Geography, has contributed greatly to this newer work.

Our R&D has dealt with several types of spatial display. The first is a virtual acoustic display, which provides auditory information to the user via earphones (as originally proposed in the 1985 concept paper). With this display, the user hears important environmental locations, such as turn points along the route and points of interest. The labels of these locations are converted to synthetic speech and then displayed using auditory direction and distance cues, such that the spoken labels appear to come from the corresponding locations in the auditory space of the user. A user who wishes to go toward a displayed location simply turns to face its spoken label and begins walking toward it. The intensity of the displayed information increases as the person approaches the location.
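To make the display concrete, here is a minimal, hypothetical sketch (in Python, not the project's actual software) of the computation such a display needs for each labelled location: the azimuth of the location relative to the user's heading, its distance, and an intensity that grows as the user approaches. The coordinate frame, function name, and gain rule are our own assumptions; a real implementation would pass the azimuth and gain, together with the synthesized speech, to a 3-D audio engine.

    import math

    def spatialize_label(user_xy, user_heading_deg, target_xy, label):
        """Return the azimuth (deg, relative to the user's heading) and a
        distance-based gain for one labelled environmental location.
        Positions are in local planar coordinates (metres)."""
        dx = target_xy[0] - user_xy[0]   # east offset
        dy = target_xy[1] - user_xy[1]   # north offset
        distance = math.hypot(dx, dy)
        # Bearing measured clockwise from north, as a compass would report it.
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0
        # Signed azimuth in [-180, 180): 0 means straight ahead of the user.
        azimuth = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
        # Louder as the user approaches; clamped so nearby points stay comfortable.
        gain = min(1.0, 4.0 / max(distance, 1.0))
        return label, azimuth, distance, gain

    # Example: a turn point 30 m to the north-east of a user facing north.
    print(spatialize_label((0.0, 0.0), 0.0, (21.0, 21.0), "Turn right at the bike path"))

Turning to face the spoken label corresponds to driving this azimuth toward zero, after which the user simply walks forward.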

A second type of display, which we call the Haptic Pointer Interface (HPI), emulates the function of the RIAS receiver (project 1). The user holds a block in the hand to which an electronic compass and a small loudspeaker or vibrator are attached. When the hand points toward some location represented in the computer database, the user hears a tone or feels a vibration. Supplementary verbal information can be provided by synthetic speech. The user moves toward the desired location by aligning the body with the hand while maintaining the "on-course" auditory or vibratory signal. Other variants of this display involve putting the compass on the body or head and turning the body or head until the on-course signal is perceived.
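The core of the HPI can be summarized as a single angular comparison, sketched below in Python under our own assumptions (a planar coordinate frame and a 10-degree on-course window, neither taken from the project): the hand-held compass heading is compared with the bearing from the user to a database point, and the tone or vibration is emitted only while the two agree.

    import math

    ON_COURSE_TOLERANCE_DEG = 10.0  # assumed half-width of the on-course window

    def angular_difference(a_deg, b_deg):
        """Smallest signed difference between two compass headings, in degrees."""
        return (a_deg - b_deg + 180.0) % 360.0 - 180.0

    def on_course(hand_heading_deg, user_xy, target_xy):
        """True when the hand points at the target within the tolerance."""
        dx = target_xy[0] - user_xy[0]   # east offset (m)
        dy = target_xy[1] - user_xy[1]   # north offset (m)
        bearing_to_target = math.degrees(math.atan2(dx, dy)) % 360.0
        return abs(angular_difference(hand_heading_deg, bearing_to_target)) <= ON_COURSE_TOLERANCE_DEG

    # Hand pointing at 85 degrees toward a point of interest due east of the user:
    if on_course(85.0, (0.0, 0.0), (50.0, 0.0)):
        print("beep")   # the real device drives a small loudspeaker or vibrator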

Three formal route-guidance studies evaluating these displays indicate that they provide effective route guidance and are well liked by visually impaired users. We conclude that spatial displays ought to be available as optional alternatives to synthetic speech on commercial navigation systems for the visually impaired.

More Detailed Description

Since 1985, Professors Jack Loomis, Reginald Golledge, and Roberta Klatzky, aided by our many research associates, have been doing basic and applied research in support of the development of a navigation system for visually impaired people, which we call the Personal Guidance System. Professor Loomis is the project leader. Since 1987 we have been supported by the National Eye Institute (NEI). More recently our research and development has also been supported by the National Institute on Disability and Rehabilitation Research (NIDRR); we are part of a consortium (The Wayfinding Group) headed by Mike May of SenderoGroup.

The original purpose of the PGS was to guide visually impaired people to destinations of their choice and to provide them with better knowledge of the environments through which they are traveling (Loomis, 1985).

All navigation systems have three functional components: a component determining the traveler's position and orientation in space, a spatial database of the environment in which travel will occur, and an interface that displays information to the user and allows the user to control the system. System software utilizes signals from various input devices to determine the user's position and orientation, implements the Geographic Information System (GIS) for accessing and manipulating information in the spatial database, and implements the user interface. The original design of the PGS proposed using the Global Positioning System (GPS) as the primary means for determining position and orientation and using a virtual acoustic display as the means of displaying information to the traveler (Loomis, 1985). The latter idea was to use spatialized sound to indicate important locations, such as waypoints along the path and points of interest in the environment. As envisioned, the virtual acoustic display would present synthesized speech to the traveler by way of earphones so that the spoken names of environmental points would appear to come from their actual locations, as if emanating from loudspeakers at those locations.
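The three functional components can be pictured as a simple update cycle. The skeleton below is our own illustrative reconstruction in Python, not the PGS software: a positioning source (standing in for GPS plus compass), a spatial database standing in for the GIS, and a display component are wired together, with all class names and canned data invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Fix:
        x: float          # local easting (m)
        y: float          # local northing (m)
        heading: float    # degrees clockwise from north

    class Positioning:
        """Stands in for GPS plus the electronic compass."""
        def current_fix(self) -> Fix:
            return Fix(x=10.0, y=5.0, heading=90.0)   # canned fix for the example

    class SpatialDatabase:
        """Stands in for the GIS: returns labelled points near the traveler."""
        def nearby_points(self, fix: Fix, radius_m: float):
            campus_points = [("Bus stop", 14.0, 5.0), ("Cafe", 200.0, 300.0)]
            return [(name, px, py) for name, px, py in campus_points
                    if (px - fix.x) ** 2 + (py - fix.y) ** 2 <= radius_m ** 2]

    class Display:
        """Stands in for speech, virtual acoustic, or haptic pointer output."""
        def present(self, fix: Fix, points):
            for name, px, py in points:
                print(f"{name} at ({px}, {py}); traveler at ({fix.x}, {fix.y})")

    def update_cycle(positioning, database, display):
        fix = positioning.current_fix()
        display.present(fix, database.nearby_points(fix, radius_m=50.0))

    update_cycle(Positioning(), SpatialDatabase(), Display())

In a running system this cycle would repeat continuously as new GPS and compass readings arrive, with the display component swapped among the interfaces described below.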

In the early 1990s we developed a fully functioning system which was worn in a bulky backpack (Loomis, Golledge, Klatzky, Speigle, & Tietz, 1994; Golledge, Klatzky, Loomis, Speigle, & Tietz, 1998). An important part of our research and development in the 1990s was creation of a spatial database of the UCSB campus and implementation of the GIS functionality (Golledge, Loomis, Klatzky, Flury, & Yang, 1991; Golledge, Klatzky, Loomis, Speigle, & Tietz, 1998). We also conducted an experiment on the relative effectiveness of the virtual acoustic display and other ways of displaying information to guide a traveler along a route (Loomis, Golledge, & Klatzky, 1998; Loomis, Golledge, & Klatzky, 2001). Although this research demonstrated that a virtual acoustic display was generally more effective in guiding a person along a route than other methods using conventional synthesized speech (i.e., not spatialized), such a design suffers from two drawbacks: many visually impaired people express reservations about wearing earphones while traveling, and the implementation of a virtual acoustic display entails additional hardware, complexity, and cost relative to a display that involves only synthesized speech.

The most recent implementation of the system weighs only a few pounds and is worn in a pack slung over the shoulder. Because many laptop computers now come with 3-D sound capability built in and GPS receivers are available as plug-in cards, we can implement the original design mostly with standard off-the-shelf hardware. Currently, the only non-standard hardware in our system is an electronic compass (a fluxgate magnetometer) that we use to monitor the heading of the user's head, body, or hand.

Our research and development now focuses on the user interface, especially the display component; this is our primary contribution to the work of the consortium headed by Sendero. On the control side of the interface, we have done some informal experimentation using speech-to-text software. The idea is to have the visually impaired traveler control the system using speech commands picked up by a microphone. Unfortunately, ambient noise sometimes interferes with speech reception and makes this method of control unreliable. On the display side, we are continuing to compare different methods of presenting information to the user. We are still evaluating conventional speech interfaces and the virtual acoustic display, but we are now also evaluating other interfaces that provide direct perceptual information about the direction to a waypoint or point of interest. The most promising of these we call a "haptic pointer interface". It is inspired by the hand-held receiver used in the Talking Signs system of remote signage. With our interface, the user holds a small wand-shaped object to which an electronic compass is attached; the compass monitors the pointing direction of the hand. When the user points the hand roughly in the direction of a waypoint or point of interest, the computer sends either beeping tones or synthesized speech to a small speaker worn near the user's shoulder. The user can thus localize a waypoint or point of interest by turning the hand until hearing the audible signal, then orienting the body and proceeding in that direction. Our usability studies comparing six different interfaces should be complete in mid to late 2003.
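One concrete step that any of these display interfaces requires is converting the GPS fix and a waypoint, both given as latitude and longitude, into the compass bearing that is compared with the heading of the user's hand, body, or head. The sketch below uses the standard great-circle initial-bearing formula; the function name and the sample coordinates are illustrative assumptions, not values from the project.

    import math

    def initial_bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees
        clockwise from true north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360.0

    # Example with illustrative coordinates near the UCSB campus (not a real route):
    print(round(initial_bearing_deg(34.4128, -119.8489, 34.4140, -119.8450), 1))

Over the short distances typical of pedestrian travel, a local planar approximation such as the one used in the earlier sketches gives essentially the same answer; the geodetic form is shown here because GPS reports positions in latitude and longitude.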