
    Enhancing WiFi-based localization with visual clues


    Wayfinding and Navigation for People with Disabilities Using Social Navigation Networks

    To achieve safe and independent mobility, people usually depend on published information, prior experience, the knowledge of others, and/or technology to navigate unfamiliar outdoor and indoor environments. Today, thanks to advances in a variety of technologies, wayfinding and navigation systems and services are commonplace and accessible on desktop, laptop, and mobile devices. Despite their popularity and widespread use, however, current wayfinding and navigation solutions often fail to address the needs of people with disabilities (PWDs). We argue that these shortcomings arise primarily because such systems and services follow a purely compute-centric approach and do not benefit from an experience-centric one. We propose that a hybrid approach, combining experience-centric and compute-centric methods, will overcome the shortcomings of current wayfinding and navigation solutions for PWDs.


    Enhancing the museum experience with a sustainable solution based on contextual information obtained from an on-line analysis of users’ behaviour

    Human-computer interaction has evolved in recent years to enhance users' experiences and provide more intuitive and usable systems. A major leap forward in this scenario is obtained by embedding, in the physical environment, sensors capable of detecting and processing users' context (position, pose, gaze, ...). Fed by the information flows collected this way, user-interface paradigms may shift from stereotyped gestures on physical devices to more direct and intuitive ones that reduce the semantic gap between an action and the corresponding system reaction, or even anticipate the user's needs, thus limiting the overall learning effort and increasing user satisfaction. To make this process effective, the user's context (i.e., where s/he is, what s/he is doing, who s/he is, and what her/his preferences, actual perceptions, and needs are) must be properly understood. While collecting data on some aspects can be easy, interpreting them all in a meaningful way to improve the overall user experience is much harder. This is most evident in informal learning environments such as museums, i.e., places designed to elicit visitor response to the artifacts on display and the cultural themes proposed. In such a situation, the system should adapt to the attention paid by the user, choosing content appropriate to the user's purposes and presenting an intuitive interface for navigating it. My research goal is to collect, in a simple, unobtrusive, and sustainable way, contextual information about visitors with the purpose of creating more engaging and personalized experiences.
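    The adaptation the abstract describes — choosing content from the attention the visitor pays to exhibits — could be sketched roughly as follows. This is a hypothetical illustration, not the author's system: the artifact names, the dwell-time threshold, and the idea of using cumulative gaze time as the attention signal are all assumptions.

    ```python
    # Hypothetical sketch: pick museum content from gaze dwell time. The
    # artifact the visitor has attended to longest (above a threshold) is
    # assumed to be of interest; names and threshold are illustrative only.

    def select_content(dwell_seconds, threshold=3.0):
        """dwell_seconds: mapping of artifact id -> cumulative gaze time (s)."""
        artifact, t = max(dwell_seconds.items(), key=lambda kv: kv[1])
        # Below the threshold, no artifact has clearly captured attention.
        return artifact if t >= threshold else None

    choice = select_content({"amphora": 1.2, "fresco": 6.5, "coin": 0.4})
    # The visitor lingered on the fresco, so its content would be offered.
    ```

    A real system would of course smooth the gaze signal over time and combine it with position and pose, as the abstract suggests.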

    Fusion of non-visual and visual sensors for human tracking

    Human tracking is an extensively researched yet still challenging area of computer vision, with a wide range of applications such as surveillance and healthcare. People may not be tracked successfully with visual information alone in challenging cases such as long-term occlusion. We therefore propose to combine information from other sensors with surveillance cameras to persistently localize and track humans, an approach made more promising by the pervasiveness of mobile devices such as cellphones, smart watches, and smart glasses embedded with all kinds of sensors, including accelerometers, gyroscopes, magnetometers, GPS, WiFi modules, and so on. In this thesis, we first investigate the application of the Inertial Measurement Unit (IMU) in mobile devices to human activity recognition and human tracking. We then develop novel persistent human tracking and indoor localization algorithms based on the fusion of non-visual and visual sensors, which not only overcomes the occlusion challenge in visual tracking but also alleviates the calibration and drift problems in IMU tracking. --Abstract, page iii
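    The complementary behavior the abstract claims — IMU dead-reckoning bridging camera occlusions, camera fixes correcting IMU drift — can be illustrated with a one-dimensional Kalman filter. This is a minimal sketch under assumed noise values, not the thesis's actual algorithm.

    ```python
    # Hypothetical sketch: fuse IMU dead-reckoning with intermittent camera
    # fixes via a 1-D Kalman filter. During occlusion (no camera fix) the
    # filter predicts from IMU motion alone; a later fix corrects the drift.

    def fuse_imu_camera(imu_steps, camera_fixes, q=0.05, r=0.1):
        """imu_steps: per-frame displacement from IMU integration (m).
        camera_fixes: per-frame absolute position (m), or None when occluded.
        q, r: assumed process and measurement noise variances."""
        x, p = 0.0, 1.0              # state estimate and its variance
        track = []
        for step, fix in zip(imu_steps, camera_fixes):
            # Predict: advance with the IMU displacement; uncertainty grows
            # every frame, modeling IMU drift.
            x += step
            p += q
            # Update: only when the person is visible to the camera.
            if fix is not None:
                k = p / (p + r)      # Kalman gain
                x += k * (fix - x)
                p *= (1 - k)
            track.append(x)
        return track

    # Person walks 1 m/frame; the IMU over-reads by 10%; the camera sees
    # frames 0-2, loses the target (occlusion) for frames 3-6, and
    # reacquires at frame 7, pulling the estimate back toward truth.
    imu = [1.1] * 8
    cam = [1.0, 2.0, 3.0, None, None, None, None, 8.0]
    est = fuse_imu_camera(imu, cam)
    ```

    The final estimate ends up much closer to the true position (8 m) than pure IMU integration (8.8 m), which is exactly the calibration/drift benefit the abstract attributes to the fusion.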

    A Review of Hybrid Indoor Positioning Systems Employing WLAN Fingerprinting and Image Processing

    Location-based services (LBS) are a significant enabling technology. One of the main components of indoor LBS is the indoor positioning system (IPS). An IPS can draw on many existing technologies, such as radio frequency, images, acoustic signals, and the magnetic, thermal, optical, and other sensors usually installed in a mobile device. The radio-frequency technologies used in IPS are WLAN, Bluetooth, ZigBee, RFID, frequency modulation, and ultra-wideband. This paper surveys studies that have combined WLAN fingerprinting and image processing to build an IPS, grouped by the methods used. The first part covers studies that use WLAN fingerprinting to support image-based positioning. The second part examines works that use image processing to support WLAN fingerprinting positioning. The third part covers studies in which image processing and WLAN fingerprinting are combined to build the IPS. Finally, a new concept is proposed for the future development of indoor positioning models based on WLAN fingerprinting and supported by image processing, to mitigate the effects of the presence of people around the user and the user-orientation problem.
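    The WLAN-fingerprinting component common to the surveyed systems can be sketched as a nearest-neighbor match in signal space. This is an illustrative toy, not any reviewed system: the radio map, RSSI values, and k are all made up, and real deployments use many more access points and reference points.

    ```python
    # Hypothetical sketch of WLAN fingerprinting. Offline, RSSI vectors
    # (dBm per access point) are surveyed at known reference points; online,
    # the user's scan is matched to the k nearest fingerprints in signal
    # space and their coordinates are averaged. Values are illustrative.
    import math

    radio_map = {            # (x, y) in meters -> [RSSI_AP1, RSSI_AP2, RSSI_AP3]
        (0.0, 0.0): [-40, -70, -80],
        (5.0, 0.0): [-55, -60, -75],
        (0.0, 5.0): [-60, -75, -55],
        (5.0, 5.0): [-70, -58, -50],
    }

    def knn_locate(scan, k=2):
        # Rank reference points by Euclidean distance in RSSI space.
        ranked = sorted(
            radio_map.items(),
            key=lambda item: math.dist(scan, item[1]),
        )[:k]
        # Position estimate: centroid of the k nearest reference points.
        xs = [pos[0] for pos, _ in ranked]
        ys = [pos[1] for pos, _ in ranked]
        return (sum(xs) / k, sum(ys) / k)

    pos = knn_locate([-50, -62, -78], k=2)
    ```

    The people-presence and user-orientation problems the paper highlights arise precisely because a body between the phone and an access point attenuates the RSSI, distorting this signal-space distance — which is where the image-processing support comes in.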