5 research outputs found

    Design and Testing of a Multi-Sensor Pedestrian Location and Navigation Platform

    Navigation and location technologies are continually advancing, allowing ever higher accuracies and operation under ever more challenging conditions. The development of such technologies requires the rapid evaluation of a large number of sensors and related utilization strategies. The integration of Global Navigation Satellite Systems (GNSSs) such as the Global Positioning System (GPS) with accelerometers, gyros, barometers, magnetometers and other sensors is enabling novel applications, but is hindered by the difficulty of testing and comparing integrated solutions that use multiple sensor sets. To achieve compatibility and flexibility across multiple sensors, an advanced, adaptable platform is required. This paper describes the design and testing of the NavCube, a multi-sensor navigation, location and timing platform. The system provides a research tool for pedestrian navigation, location and body motion analysis in an unobtrusive form factor that enables in situ data collection with minimal impact on gait and posture. Testing and examples of applications of the NavCube are provided.
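
    As a minimal sketch of the kind of multi-sensor data collection such a platform performs, the Python snippet below buffers time-stamped samples from heterogeneous sensors on a shared clock. The sensor names, record layout and values are illustrative assumptions for the example and are not the NavCube's actual interfaces.

        from dataclasses import dataclass, field
        from typing import Dict, List
        import time

        @dataclass
        class SensorSample:
            """One reading from one sensor, tagged with a shared platform clock."""
            sensor: str           # e.g. "gnss", "accel", "gyro", "baro", "mag" (illustrative names)
            timestamp: float      # seconds on a common monotonic clock
            values: List[float]   # raw measurement vector

        @dataclass
        class MultiSensorLog:
            """Buffers samples from heterogeneous sensors on one time base for later fusion."""
            samples: Dict[str, List[SensorSample]] = field(default_factory=dict)

            def record(self, sensor: str, values: List[float]) -> None:
                self.samples.setdefault(sensor, []).append(
                    SensorSample(sensor, time.monotonic(), values))

        # Hypothetical usage: a GNSS fix and an accelerometer reading logged on the same clock.
        log = MultiSensorLog()
        log.record("gnss", [51.08, -114.13, 1045.0])   # lat, lon, height (made-up values)
        log.record("accel", [0.02, -0.01, 9.81])       # specific force in m/s^2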

    Feature extraction and feature selection in smartphone-based activity recognition

    Nowadays, smartphones are gradually being integrated into our daily lives, and they can be considered powerful tools for monitoring human activities. However, due to the limited processing capability and energy budget of smartphones compared to standard machines, a trade-off between performance and computational complexity must be considered when developing smartphone-based systems. In this paper, we shed light on the importance of feature selection and its impact on simplifying the activity classification process, which reduces the computational complexity of the system. Through an in-depth survey of the features widely used in state-of-the-art studies, we selected the most common features for sensor-based activity classification, referred to here as the conventional feature set. Then, in an experimental study with 10 participants using 2 different smartphones, we investigated how to reduce system complexity while maintaining classification performance by replacing the conventional feature set with an optimal set. To this end, the users were instructed to perform different static and dynamic activities while freely holding a smartphone in their hands. In our comparison with state-of-the-art approaches, we implemented and evaluated major classification algorithms, including decision trees and Bayesian networks. We demonstrated that replacing the conventional feature set with an optimal set can significantly reduce the complexity of the activity recognition system with only a negligible impact on the overall system performance.
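
    To make the feature-extraction step concrete, the sketch below computes a few commonly used time-domain accelerometer features over sliding windows and keeps only the highest-scoring ones. The window length, step size, feature list and use of mutual-information scoring are assumptions for illustration and are not the authors' exact conventional or optimal feature sets.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif

        def window_features(acc: np.ndarray, win: int = 128, step: int = 64) -> np.ndarray:
            """Simple time-domain features per sliding window of 3-axis accelerometer data.

            acc has shape (n_samples, 3); window length and step are illustrative.
            """
            feats = []
            for start in range(0, len(acc) - win + 1, step):
                w = acc[start:start + win]
                mag = np.linalg.norm(w, axis=1)                 # signal magnitude per sample
                feats.append(np.concatenate([
                    w.mean(axis=0), w.std(axis=0),              # per-axis mean and standard deviation
                    [mag.mean(), mag.std(), (mag ** 2).mean()]  # magnitude mean, std and energy
                ]))
            return np.asarray(feats)

        # Hypothetical usage: X holds one feature row per window, y the activity label per window.
        # Keeping only the k most informative features shrinks the classifier's input and cost.
        # X, y = window_features(accelerometer_data), labels_per_window
        # X_reduced = SelectKBest(score_func=mutual_info_classif, k=5).fit_transform(X, y)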

    Pedestrian dead reckoning employing simultaneous activity recognition cues

    We consider the human localization problem using body-worn inertial/magnetic sensor units. Inertial sensors are characterized by a drift error caused by the integration of their rate output to obtain position information. Because of this drift, the position and orientation data obtained from inertial sensors are reliable over only short periods of time. Therefore, position updates from externally referenced sensors are essential. However, if the map of the environment is known, the activity context of the user can provide information about his position. In particular, the switches in the activity context correspond to discrete locations on the map. By performing localization simultaneously with activity recognition, we detect the activity context switches and use the corresponding position information as position updates in a localization filter. The localization filter also involves a smoother that combines the two estimates obtained by running the zero-velocity update algorithm both forward and backward in time. We performed experiments with eight subjects in indoor and outdoor environments involving walking, turning and standing activities. Using a spatial error criterion, we show that the position errors can be decreased by about 85% on average. We also present the results of two 3D experiments performed in realistic indoor environments and demonstrate that it is possible to achieve over 90% error reduction in position by performing localization simultaneously with activity recognition.
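
    The sketch below illustrates, under simplifying assumptions, two ingredients mentioned in the abstract: detecting zero-velocity (stance) intervals from inertial magnitudes, and blending position estimates obtained by running the algorithm forward and backward in time. The thresholds and the linear blending weights are illustrative choices, not the paper's actual detector or smoother.

        import numpy as np

        def detect_zero_velocity(acc: np.ndarray, gyro: np.ndarray,
                                 acc_tol: float = 0.3, gyro_tol: float = 0.2) -> np.ndarray:
            """Flag samples where the sensor unit is (nearly) stationary.

            acc: (n, 3) specific force in m/s^2; gyro: (n, 3) angular rate in rad/s.
            The tolerances are illustrative and would be tuned per sensor and gait.
            """
            acc_still = np.abs(np.linalg.norm(acc, axis=1) - 9.81) < acc_tol
            gyro_still = np.linalg.norm(gyro, axis=1) < gyro_tol
            return acc_still & gyro_still

        def blend_forward_backward(pos_fwd: np.ndarray, pos_bwd: np.ndarray) -> np.ndarray:
            """Combine forward- and backward-run position estimates of shape (n, dims).

            A simple linear crossfade: trust the forward pass near the start of the
            trajectory and the backward pass near the end, where each has accumulated
            the least integration drift.
            """
            w = np.linspace(1.0, 0.0, len(pos_fwd))[:, None]   # weight on the forward estimate
            return w * pos_fwd + (1.0 - w) * pos_bwd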

    A Wearable Indoor Navigation System for Blind and Visually Impaired Individuals

    Indoor positioning and navigation for blind and visually impaired individuals has become an active field of research. The development of a reliable positioning and navigational system will reduce the difficulties faced by people with visual disabilities, help them live more independently, and promote their employment opportunities. In this work, a coarse-to-fine multi-resolution model is proposed for indoor navigation in hallway environments based on the use of a wearable computer called the eButton. This self-constructed device contains multiple sensors which are used for indoor positioning and localization in three layers of resolution: a global positioning system (GPS) layer for building identification; a Wi-Fi and barometer layer for rough position localization; and a digital camera and motion sensor layer for precise localization. In this multi-resolution model, a new theoretical framework is developed which uses the change of atmospheric pressure to determine the floor number in a multistory building. The digital camera and motion sensors within the eButton acquire both pictorial and motion data as a person with normal vision walks along a hallway to establish a database. Precise indoor positioning and localization information is provided to the visually impaired individual based on a Kalman filter fusion algorithm and an automatic matching algorithm between the acquired images and those in the pre-established database. Motion information computed from the motion sensor data is used to refine the localization result. Experiments were conducted to evaluate the performance of the algorithms. Our results show that the new device and algorithms can precisely determine the floor level and indoor location along hallways in multistory buildings, providing a powerful and unobtrusive navigational tool for blind and visually impaired individuals.
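
    As a concrete illustration of pressure-based floor determination, the sketch below converts a barometric pressure reading into a height above a reference level with the standard barometric formula and maps that height to a floor number. The reference pressure and the per-storey height are assumed values; this is not the authors' exact model.

        def pressure_to_altitude(p_hpa: float, p0_hpa: float = 1013.25) -> float:
            """Height in metres above the reference level, via the international barometric formula.

            p0_hpa is the pressure measured at the reference level (e.g. the ground floor).
            """
            return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

        def estimate_floor(p_hpa: float, p_ground_hpa: float, floor_height_m: float = 3.5) -> int:
            """Estimate the floor number from the pressure difference to the ground floor.

            floor_height_m is an assumed typical storey height; real buildings vary.
            """
            return round(pressure_to_altitude(p_hpa, p_ground_hpa) / floor_height_m)

        # Example: a drop of about 0.4 hPa relative to the lobby is roughly one storey up.
        print(estimate_floor(1012.83, p_ground_hpa=1013.25))   # -> 1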