Towards a Practical Pedestrian Distraction Detection Framework using Wearables
Pedestrian safety continues to be a significant concern in urban communities
and pedestrian distraction is emerging as one of the main causes of grave and
fatal accidents involving pedestrians. The advent of sophisticated mobile and
wearable devices, equipped with high-precision on-board sensors capable of
measuring fine-grained user movements and context, provides a tremendous
opportunity for designing effective pedestrian safety systems and applications.
Accurate and efficient recognition of pedestrian distractions in real-time
given the memory, computation and communication limitations of these devices,
however, remains the key technical challenge in the design of such systems.
Earlier research efforts in pedestrian distraction detection using data
available from mobile and wearable devices have focused primarily on
achieving high detection accuracy, resulting in designs that are either
resource-intensive and unsuitable for implementation on mainstream mobile
devices, computationally slow and thus unusable for real-time pedestrian safety
applications, or dependent on specialized hardware and therefore less likely to
be adopted by most users. In the quest for a pedestrian safety system that achieves a
favorable balance between computational efficiency, detection accuracy, and
energy consumption, this paper makes the following main contributions: (i)
design of a novel complex activity recognition framework which employs motion
data available from users' mobile and wearable devices and a lightweight
frequency matching approach to accurately and efficiently recognize complex
distraction related activities, and (ii) a comprehensive comparative evaluation
of the proposed framework with well-known complex activity recognition
techniques in the literature with the help of data collected from human subject
pedestrians and prototype implementations on commercially-available mobile and
wearable devices.
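The lightweight frequency-matching idea can be illustrated with a small sketch. The function names, activity templates, sampling rate, and tolerance below are illustrative assumptions, not the paper's actual design: compute the dominant frequency of a window of motion-sensor samples and match it against per-activity frequency templates.

```python
import numpy as np

def dominant_frequency(window, fs):
    """Return the dominant (non-DC) frequency of a 1-D sensor window."""
    spectrum = np.abs(np.fft.rfft(window - np.mean(window)))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def match_activity(window, fs, templates, tol=0.3):
    """Match the window's dominant frequency to the nearest activity template."""
    f = dominant_frequency(window, fs)
    best = min(templates, key=lambda name: abs(templates[name] - f))
    return best if abs(templates[best] - f) <= tol else "unknown"

# Illustrative templates: assumed dominant step frequency (Hz) per activity.
TEMPLATES = {"walking": 2.0, "running": 3.0, "standing": 0.1}

fs = 50  # Hz sampling rate
t = np.arange(0, 4, 1.0 / fs)
walk_like = np.sin(2 * np.pi * 2.0 * t)  # synthetic 2 Hz "walking" signal
print(match_activity(walk_like, fs, TEMPLATES))  # → walking
```

Matching on a handful of spectral peaks rather than training a heavy classifier is what keeps such an approach cheap enough for on-device, real-time use.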
Mobile assistive technologies for the visually impaired
There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to fully realize the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes).
Mobile text reader for people with low vision
People with low vision have visual acuity of less than 6/18 and at least 3/60 in the better eye, with correction. Their limited vision requires them to enhance their reading ability using a magnifying glass or an electronic screen magnifier. However, people with severe low vision have difficulty with, and suffer fatigue from, using such assistive tools. This paper presents the development of a mobile text reader dedicated to people with low vision. The mobile text reader is developed as a mobile application that allows the user to capture an image of text and then translate the text into audio. One main contribution of this work compared to typical optical character recognition (OCR) engines or text-to-speech engines is the addition of an image stitching feature. The image stitching feature can produce one single image from multiple poorly aligned images, and is integrated into the process of image acquisition. Either a single or a composite image is subsequently uploaded to a cloud-based OCR engine for robust character recognition. Eventually, a text-to-speech (TTS) synthesizer reproduces the recognized words in natural-sounding speech. The whole series of computation is implemented as a mobile application to be run on a smartphone, allowing the visually impaired to access text information independently.
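The OCR and TTS stages rely on external engines, but the core of image stitching can be sketched in isolation. This is a simplified, assumed formulation restricted to horizontal offsets, not the paper's algorithm: estimate the overlap between two captures by maximizing normalized correlation over candidate overlap widths, then concatenate without the duplicated region.

```python
import numpy as np

def find_overlap(left, right, min_overlap=2):
    """Find how many columns of `left`'s right edge repeat at `right`'s left
    edge, by maximizing normalized correlation over candidate overlaps."""
    best_ov, best_score = 0, -np.inf
    for ov in range(min_overlap, min(left.shape[1], right.shape[1])):
        a = left[:, -ov:].ravel().astype(float)
        b = right[:, :ov].ravel().astype(float)
        a, b = a - a.mean(), b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        score = (a @ b) / denom if denom else 0.0
        if score > best_score:
            best_ov, best_score = ov, score
    return best_ov

def stitch(left, right):
    """Concatenate two images, dropping the duplicated overlap region."""
    ov = find_overlap(left, right)
    return np.hstack([left, right[:, ov:]])

# Synthetic example: a 4x20 "scene" photographed as two overlapping halves.
rng = np.random.default_rng(0)
scene = rng.integers(0, 255, size=(4, 20))
left, right = scene[:, :12], scene[:, 8:]   # 4-column overlap
print(stitch(left, right).shape)  # → (4, 20)
```

Real stitching must additionally handle vertical misalignment, rotation, and exposure differences, which is why production systems use feature-based homography estimation rather than raw correlation.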
Snippet based trajectory statistics histograms for assistive technologies
Due to increasing hospital costs and traveling time, more and more patients decide to use medical devices at home without traveling to the hospital. However, these devices are not always straightforward to use, and recent reports show that there are many injuries and even deaths caused by the wrong use of these devices. Since human supervision during every usage is impractical, there is a need for computer vision systems that recognize actions and detect whether the patient has done something wrong. In this paper, we propose to use the Snippet Based Trajectory Statistics Histograms descriptor to recognize actions in two medical device usage problems: inhaler device usage and infusion pump usage. Snippet Based Trajectory Statistics Histograms encode the motion and position statistics of trajectories densely extracted from a video. Our experiments show that by using the Snippet Based Trajectory Statistics Histograms technique, we improve the overall performance for both tasks. Additionally, this method does not require heavy computation and is suitable for real-time systems. © Springer International Publishing Switzerland 2015
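A minimal sketch of the general idea of encoding trajectory motion statistics as a histogram follows. This is a simplified stand-in for the paper's descriptor, with an assumed bin count and no snippet subdivision: each trajectory becomes a normalized histogram over the orientations of its frame-to-frame displacements.

```python
import numpy as np

def trajectory_histogram(traj, n_bins=8):
    """Encode one trajectory (an (N, 2) array of x, y points) as a histogram
    of displacement orientations -- a simplified stand-in for the snippet-based
    trajectory statistics descriptor."""
    d = np.diff(traj, axis=0)                  # frame-to-frame displacements
    angles = np.arctan2(d[:, 1], d[:, 0])      # orientation of each step
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)           # normalize to sum to 1

# A purely rightward motion concentrates all mass in the bin containing 0 rad.
rightward = np.column_stack([np.arange(10), np.zeros(10)])
print(trajectory_histogram(rightward))
```

Histograms of many such trajectories can then be pooled and fed to a standard classifier, which is what keeps this style of descriptor computationally light.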
Smartphone-Based Escalator Recognition for the Visually Impaired
It is difficult for visually impaired individuals to recognize escalators in everyday environments. If such individuals ride an escalator in the wrong direction, they may stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes. The experimental results demonstrate that the proposed method is promising for helping visually impaired individuals use escalators.
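The direction-from-flow step can be sketched as follows. This is a simplified, assumed formulation, not the paper's method: a real system would obtain the dense flow field from consecutive camera frames (e.g., via a dense optical-flow estimator), then vote on the dominant vertical motion of the moving pixels.

```python
import numpy as np

def escalator_direction(flow, mag_thresh=0.5):
    """Classify escalator motion from a dense flow field of shape (H, W, 2),
    where flow[..., 1] is the vertical component (image y grows downward).
    The magnitude threshold is an illustrative assumption."""
    mag = np.linalg.norm(flow, axis=-1)
    moving = flow[mag > mag_thresh]          # ignore near-static pixels
    if moving.size == 0:
        return "static"
    mean_vy = moving[:, 1].mean()
    return "down" if mean_vy > 0 else "up"   # positive y flow = moving down

# Synthetic field: a central band moves upward (negative y) at 2 px/frame.
flow = np.zeros((10, 10, 2))
flow[3:7, :, 1] = -2.0
print(escalator_direction(flow))  # → up
```

Keeping the decision to a threshold-and-vote over flow vectors is what makes this kind of analysis feasible at camera frame rates on a phone.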
Airport Accessibility and Navigation Assistance for People with Visual Impairments
People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. To learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study in which users navigated routes relevant to their travel experience. We found that, despite the challenging environment, participants were able to complete their itineraries independently, with few to no navigation errors and reasonable completion times. This study presents the first systematic evaluation to pose BLE technology as a strong approach for increasing the independence of visually impaired people in airports.
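The positioning primitive behind BLE beacon navigation can be sketched with the standard log-distance path-loss model. The beacon names, calibrated transmit power, and path-loss exponent below are illustrative assumptions, not values from the study: estimate a distance from each beacon's received signal strength (RSSI), then localize the user to the nearest beacon.

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Estimate distance (m) from RSSI via the log-distance path-loss model.
    tx_power is the calibrated RSSI at 1 m; n is the environment exponent.
    Both defaults are illustrative, not from the study."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_beacon(readings):
    """Pick the beacon with the smallest estimated distance.
    `readings` maps beacon id -> latest RSSI (dBm)."""
    return min(readings, key=lambda b: rssi_to_distance(readings[b]))

# Hypothetical beacon ids and readings.
readings = {"gate_A3": -62, "restroom": -75, "security": -80}
print(nearest_beacon(readings))  # → gate_A3
```

In practice RSSI is noisy, so deployed systems smooth readings over time and combine several beacons rather than trusting a single instantaneous value.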
A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera
The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling memories related to important locations, called spots, that they have visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale-invariant feature transform (SIFT). The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems other than smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the Global Positioning System. The proposed system has been evaluated in two experiments: image matching tests and a user study. The experimental results suggest the effectiveness of the system in helping visually impaired individuals, including blind individuals, recall information about regularly visited spots.
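SIFT-based matching typically accepts a correspondence via Lowe's ratio test and declares two images the same scene when enough correspondences survive. A minimal sketch with synthetic descriptors follows; the ratio and match-count thresholds are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def ratio_test_matches(desc_query, desc_ref, ratio=0.75):
    """Count descriptor matches that pass Lowe's ratio test: the nearest
    reference descriptor must be much closer than the second nearest.
    Descriptors are (N, D) arrays."""
    good = 0
    for d in desc_query:
        dists = np.linalg.norm(desc_ref - d, axis=1)
        i1, i2 = np.argsort(dists)[:2]
        if dists[i1] < ratio * dists[i2]:
            good += 1
    return good

def is_same_spot(desc_query, desc_ref, min_matches=10):
    """Decide whether two images show the same spot by thresholding
    the number of surviving matches (threshold is illustrative)."""
    return ratio_test_matches(desc_query, desc_ref) >= min_matches

# Synthetic 16-D descriptors: the "revisit" shares 12 near-identical features.
rng = np.random.default_rng(1)
ref = rng.normal(size=(20, 16))
query = ref[:12] + rng.normal(scale=0.01, size=(12, 16))
print(is_same_spot(query, ref))  # → True
```

The ratio test is what makes such matching robust: an ambiguous descriptor that is almost equally close to two reference features is discarded rather than counted.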
Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation
Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use for indoor navigation is hindered by GNSS signal degradation or blockage. The development of alternative positioning approaches and techniques for navigation systems is therefore an ongoing research topic. In this dissertation, I present a new approach and address three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes the inertial and visual sensors available on smartphones and focuses on developing: a framework for monocular visual-inertial odometry (VIO) that positions a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm that detects obstacles using a sequence of visual data; and a supervised, context-aware keyframe detection method.
The underlying technique for monocular VIO is a recurrent convolutional neural network that computes the six-degree-of-freedom (6DoF) pose in an end-to-end fashion, combined with an extended Kalman filter module that fine-tunes the scale parameter based on inertial observations and manages errors. I compare the results of my featureless technique with the results of conventional feature-based VIO techniques and with manually scaled results. The comparison shows that while the framework improves on other featureless methods in accuracy, feature-based methods still outperform the proposed approach.
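The scalar core of such a scale-refining filter can be sketched as a one-dimensional Kalman update. This is a deliberate simplification of the dissertation's EKF module, with assumed noise values: each inertially derived scale observation pulls the monocular scale estimate toward it in proportion to the Kalman gain, while the estimate's uncertainty shrinks.

```python
import numpy as np

def update_scale(s, p, obs, obs_var):
    """One scalar Kalman update of the monocular-scale estimate `s`
    (with variance `p`) from an inertially derived scale observation."""
    k = p / (p + obs_var)       # Kalman gain
    s = s + k * (obs - s)       # corrected scale estimate
    p = (1.0 - k) * p           # reduced uncertainty
    return s, p

# Start from a weak prior and fuse noisy observations of a true scale of 2.0.
s, p = 1.0, 1.0
rng = np.random.default_rng(0)
for obs in 2.0 + rng.normal(0.0, 0.05, size=50):
    s, p = update_scale(s, p, obs, obs_var=0.05 ** 2)
print(round(s, 1))  # → 2.0
```

The full EKF additionally propagates the state between updates and handles correlated pose errors; this sketch only shows why fusing inertial observations recovers the metric scale a monocular pipeline cannot observe on its own.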
The approach for obstacle detection is based on processing two consecutive images to detect obstacles. Experiments comparing the results of my approach with those of two other widely used algorithms show that my algorithm performs better, achieving 82% precision compared with 69%. To determine a suitable frame-extraction rate from the video stream, I analyzed the movement patterns of the camera and inferred the context of the user to generate a model associating movement anomalies with a proper frame-extraction rate. The output of this model was used to determine the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and used this approach for context-aware keyframe detection. The results show that using inertial data to infer suitable frames decreases the number of frames that must be processed.
Engaging older adults with age-related macular degeneration in the design and evaluation of mobile assistive technologies
Ongoing advances in technology are undoubtedly increasing the scope for enhancing and supporting older adults’ daily living. The digital divide between older and younger adults, however, raises concerns about the suitability of technological solutions for older adults, especially for those with impairments. Taking older adults with Age-Related Macular Degeneration (AMD) – a progressive and degenerative disease of the eye – as a case study, the research reported in this dissertation considers how best to engage older adults in the design and evaluation of mobile assistive technologies so as to achieve sympathetic design of such technologies. Recognising the importance of good nutrition and the challenges involved in designing for people with AMD, this research followed a participatory and user-centred design (UCD) approach to develop a proof-of-concept diet diary application for people with AMD. Findings from the initial knowledge elicitation activities contribute to the growing debate surrounding how older adults’ participation is initiated, planned and managed. Reflections on the application of the participatory design method highlighted a number of key strategies that can be applied to maintain empathic participatory design rapport with older adults and, subsequently, led to the formulation of participatory design guidelines for effectively engaging older adults in design activities. Taking a novel approach, the final evaluation study addressed a gap in knowledge on how to bring closure to the participatory process in as positive a way as possible, cognisant of the potential negative effect that withdrawal of the participatory process may have on individuals.
Based on the results of this study, we ascertain that (a) sympathetic design of technology with older adults will maximise technology acceptance and shows strong indicators of effecting behaviour change; and (b) being involved in the design and development of such technologies has the capacity to significantly improve the quality of life of older adults (with AMD).