Using remote vision: The effects of video image frame rate on visual object recognition performance
This is the author's accepted manuscript. The final published article is available from the link below. Copyright © 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
The process of using remote vision was simulated in order to determine the effect of video image frame rate on performance in the visual recognition of stationary environmental hazards in dynamic video footage of the pedestrian travel environment. Recognition performance was assessed at two video image frame rates, 25 fps and 2 fps, against a range of objective and subjective criteria. The results show that the effect of the frame rate variation on performance is statistically insignificant. This paper forms part of the development of a novel navigation system for visually impaired pedestrians. The navigation system includes a remote vision facility, and the visual recognition of environmental hazards by the sighted human guide is a basic activity in aiding the visually impaired user of the system in mobility.
Interface design for a remote guidance system for the blind: Using dual-screen displays
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Mobility for visually impaired people is one of the main challenges that researchers around the world are still facing. Although some projects have been conducted to improve the mobility of visually impaired people, further research is still needed. One of these projects is the Brunel Remote Guidance System (BRGS). BRGS aims to assist visually impaired users in avoiding obstacles and reaching their destinations safely by providing online instructions via a remote sighted guide.
This study continues the development of BRGS; its main achievement is the optimisation of the interface design for the system's guide terminal. This helps the sighted guide to assist visually impaired users (VIUs) in avoiding obstacles safely and comfortably during micro-navigation, as well as keeping them on the right track to their destination during macro-navigation. A content analysis identified the performance factors and their assessment methods for each element of BRGS, and concluded that there is a lack of research on the guide terminal setup and on methods for assessing the sighted guide's performance. Furthermore, no model for assessing sighted guide performance with two-screen displays was found in the literature or in similar projects. A model was therefore designed as a platform for evaluating sighted guide performance. Based on this model, a computer-based simulation was built and tested, making it ready for the next task: the evaluation of sighted guide performance. The study determined the effects of two-screen displays on the recognition performance of 80 participants at the guide terminal. Performance was measured under four different resolution conditions. The study was based on a simulation technique consisting of two key performance elements used to examine the sighted guide's performance: the macro-navigation element and the micro-navigation element. The results show that two-screen displays have an effect on the performance of the sighted guide. The optimum two-screen setup for the guide terminal, one of the four resolution conditions tested, consisted of a large digital map display (4CIF, 704 x 576 pixels) and a small video image display (CIF, 352 x 288 pixels). This interface design has been recommended as the final setup for the guide terminal.
Foregrounding accessibility for user experience design
I am interested in creating generative tools and techniques for designing accessible user experiences for end users. As a user experience designer, I am working on embracing web accessibility standards and guidelines and including them from the beginning of the User Experience (UX) design process. My projects are directed at helping design students and professionals understand two things: that the broad concept of web accessibility is important, and how they can embed web accessibility into the UX design process at a very early stage. To do this, I used different media (websites, posters, videos, etc.) to create awareness and educate designers in an interesting, simple and engaging way. In this report, I will discuss the definition and role of accessible design, identify limitations in existing tools and methods, and demonstrate how future designers might research, prototype, analyze, and implement their designs for all users.
Precise positioning in real-time using GPS-RTK signal for visually impaired people navigation system
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 24/9/2010. This thesis presents research carried out to investigate and achieve a highly reliable and accurate navigation and guidance system for visually impaired pedestrians. The main aim of this PhD project has been to identify the limits and insufficiencies of Network Real-Time Kinematic Global Navigation Satellite Systems (NRTK GNSS) and its augmentation techniques within the frame of pedestrian applications in a variety of environments and circumstances. Moreover, the system can be used in many other applications, including unmanned vehicles, military applications, police, etc. NRTK GNSS positioning is considered a superior solution to the conventional standalone Global Positioning System (GPS) technique, whose accuracy is highly affected by distance-dependent errors such as satellite orbital and atmospheric biases.
Nevertheless, NRTK GNSS positioning is particularly constrained by wireless data link coverage, delays and completeness of correction transmission, GPS and GLONASS signal availability, etc., all of which can degrade the positioning quality of the NRTK results.
This research is based on dual-frequency NRTK GNSS (GPS and GLONASS). Additionally, it incorporates several positioning and communication methods responsible for data correction while providing the position solutions, in which all identified contextual factors and application requirements are accounted for.
The positioning model operates through a client-server architecture consisting of a Navigation Service Centre (NSC) and a Mobile Navigation Unit (MNU). Hybrid functional approaches, consisting of several processing procedures, allow the positioning model to operate in different position determination modes. The NRTK GNSS and augmentation service is used when enough navigation information is available at the MNU from its local positioning device (a GPS/GLONASS receiver). The positioning model at the MNU was experimentally evaluated, and centimetric accuracy was generally attained during both static and kinematic tests in various environments (urban, suburban and rural). This high accuracy was affected only by some level of unavailability, mainly caused by GPS and GLONASS signal blockage. Additionally, the influence of the number of satellites in view, dilution of precision (DOP) and age of corrections (AoC) on the accuracy and stability of the NRTK GNSS solution was also investigated during this research and is presented in the thesis.
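The mode selection described above, preferring the NRTK GNSS service when enough navigation information is available at the MNU and falling back otherwise, can be sketched as a simple decision function. All thresholds, mode names, and the function signature below are illustrative assumptions, not taken from the thesis:

```python
# Hypothetical sketch of the MNU's position-mode selection: prefer the NRTK
# GNSS fixed solution when enough satellites and a fresh correction stream are
# available, otherwise fall back to standalone GPS/GLONASS. The numeric
# thresholds here are assumptions for illustration only.

def select_mode(satellites_in_view: int,
                correction_link_up: bool,
                age_of_corrections_s: float) -> str:
    """Choose a position determination mode from the current navigation context."""
    if satellites_in_view >= 5 and correction_link_up and age_of_corrections_s <= 10.0:
        return "NRTK"          # centimetre-level fixed solution
    if satellites_in_view >= 4:
        return "STANDALONE"    # metre-level fallback without network corrections
    return "UNAVAILABLE"       # signal blockage, e.g. a dense urban canyon

print(select_mode(8, True, 2.0))    # open sky, link up -> NRTK
print(select_mode(6, False, 0.0))   # correction link down -> STANDALONE
print(select_mode(3, True, 1.0))    # too few satellites -> UNAVAILABLE
```

A real MNU would also weigh DOP and receiver fix quality, but the same context-driven fallback structure applies.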
This positioning performance outperformed the existing GPS service. In addition, using a simulation evaluation facility, the performance of the positioning model at the MNU was quantified with reference to a hybrid positioning service that will be offered by the future Galileo Open Service (OS) along with GPS. A significant difference in service availability, in favour of the hybrid system, was observed across all remaining scenarios and environments, especially urban areas, due to surrounding obstacles and conditions.
As an outcome of this research, a new and precise positioning model was proposed. The adaptive framework is understood as integrating the available positioning technology into the context of the surrounding wireless communication for maintainable performance. The positioning model is capable of delivering accurate, precise and consistent position solutions, and thus fulfils the requirements of a navigation application for visually impaired people, as identified in the adaptive framework.
Orientation to Mobility, Socialization, and Communication Android Apps to Help Visually Impaired Students Understand the State University of Surabaya (UNESA) Campus Environment
UNESA is a national leader in education development. This is consistent with UNESA's vision of achieving educational excellence, and with UNESA's primary competency and capacity as a disability-friendly campus, with one visually impaired student. The goal of this research is to describe the outcomes of creating an Android application focused on social mobility and communication to help students with visual impairments understand the UNESA campus environment. The product of this development research takes the form of an Android application with a focus on social mobility and communication. The realized product design comprises: 1) an Android application program packaged for mobile phones; 2) a practical guide, in braille and alert writing, to accessing the outdoor environment on the way to various places in the UNESA Lidah Wetan and Ketintang campus environments; 3) a cooperative think-pair-share format; and 4) an authentic assessment tool for successful use of the Android application in social orientation and communication mobility. The products created will be used by all visually impaired students who wish to visit the UNESA campuses in Lidah Wetan and Ketintang. UNESA's inclusive campus is a place for all disabled people who want to continue their education, including the visually impaired, who require environmental mobility facilities.
Blind guide: anytime, anywhere
Sight dominates our mental life more than any other sense. Even when we are just thinking about something in the world, we end up imagining what it looks like. This rich visual experience is part of our lives. People need vision for two complementary reasons. One is that vision gives us the knowledge to recognize objects in real time. The other is that vision provides the control one needs to move around and interact with objects.
Eyesight helps people avoid dangers and navigate in our world. Blind people usually have enhanced accuracy and sensibility in their other natural senses for perceiving their surroundings. But sometimes this is not enough, because the human senses can be affected by external sources of noise or by disease. Without any external aid or device, the blind cannot navigate the world. Many assistive tools have been developed to help blind people. White canes and guide dogs help the blind in their navigation, but each device has its limitations. White canes cannot detect head-level obstacles, drop-offs, or obstructions more than a meter away. Training a guide dog takes a long time, almost five years in some cases; the blind person also needs training, and a guide dog is not a solution for everybody. Taking care of a guide dog can be expensive and time-consuming.
Humans have developed technology to help us in every aspect of our lives. The primary goal of technology is to help people improve their quality of life. Technology can compensate for our limitations. Wireless sensor networks are one technology that has been used to help people with disabilities.
In this dissertation, the author proposes a system based on this technology called Blind Guide. Blind Guide is an artifact that helps blind people navigate in indoor and outdoor scenarios. The prototype is portable, ensuring that it can be used anytime and anywhere. The system is composed of wireless sensors that can be worn on different parts of the body. The sensors detect an obstacle and inform the user with an audible warning, providing a safe walk for the users.
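The detect-and-warn cycle of a single sensor node can be sketched as follows. The threshold, the message format, and all function names are hypothetical; the abstract does not describe the actual firmware:

```python
# Hypothetical sketch of one Blind Guide sensor node's detect-and-warn cycle:
# read a distance, decide whether the obstacle is close enough to matter, and
# produce an audible message. Threshold and wording are assumptions.
from typing import Optional

WARN_THRESHOLD_CM = 100  # warn about obstacles within roughly one metre

def should_warn(distance_cm: float) -> bool:
    """True when a detected obstacle is close enough to warrant a warning."""
    return 0 < distance_cm <= WARN_THRESHOLD_CM

def process_reading(distance_cm: float, position: str) -> Optional[str]:
    """Map one ranging-sensor reading to a warning message, or None if clear."""
    if should_warn(distance_cm):
        return f"Obstacle ahead at {position} level, {distance_cm:.0f} cm"
    return None

# Example: a forehead-mounted sensor covering head level, which a white cane misses
print(process_reading(80.0, "head"))   # within threshold -> warning message
print(process_reading(250.0, "head"))  # beyond threshold -> None
```

In the actual prototype the message would be rendered as audio rather than text, but the decision logic is the interesting part of the loop.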
A great feature of Blind Guide is its modularity. The system can adapt to the needs of the user and can be used in combination with other solutions. For example, Blind Guide can be used in conjunction with the white cane: the white cane detects obstacles below waist level, and a Blind Guide wireless sensor on the forehead can detect obstacles at head level. This feature is important because some blind people feel uncomfortable without the white cane.
The system is scalable, giving us the opportunity to create a network of interconnected Blind Guide users. This network can store the exact location and description of the obstacles found by the users, and this information is public to all users of the system. This feature reduces the time required for obstacle detection and consequently saves energy, thus increasing the autonomy of the solution.
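The shared network of identified obstacles amounts to a public registry keyed by location: any user's node can report an obstacle, and every other user can query for obstacles near a point. A minimal sketch, with all class and field names as illustrative assumptions:

```python
# Hypothetical sketch of the shared obstacle registry: nodes report obstacles
# with an exact location and a description, and any user can query nearby
# entries. Names, fields, and the square-radius query are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Obstacle:
    lat: float
    lon: float
    description: str

class ObstacleRegistry:
    """Public store of obstacles found by the network of Blind Guide users."""

    def __init__(self) -> None:
        self._obstacles = {}  # (lat, lon) -> Obstacle

    def report(self, obstacle: Obstacle) -> None:
        """Record an obstacle at its exact location, visible to every user."""
        self._obstacles[(obstacle.lat, obstacle.lon)] = obstacle

    def nearby(self, lat: float, lon: float, radius_deg: float) -> list:
        """Return known obstacles within a simple square radius of the point."""
        return [
            ob for (olat, olon), ob in self._obstacles.items()
            if abs(olat - lat) <= radius_deg and abs(olon - lon) <= radius_deg
        ]

registry = ObstacleRegistry()
registry.report(Obstacle(51.53, -0.47, "low-hanging branch at head level"))
print(registry.nearby(51.53, -0.47, 0.001))  # the obstacle is already known
```

A node that finds a match in the registry can warn its user immediately instead of waiting for its own sensors to fire, which is the detection-time and energy saving the abstract refers to.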
One of the main requirements for the development of this prototype was to design a low-cost solution accessible to anyone around the world. All the components of the solution are easily obtainable at low cost. Technology makes our lives easier, and it must be available to anyone.
Modularity, portability, scalability, the ability to work in conjunction with other solutions, detection of objects that other solutions cannot detect, obstacle labeling, a network of identified obstacles, and audible warnings are the main aspects of the Blind Guide system. All these aspects make Blind Guide an anytime, anywhere solution for blind people.
Blind Guide was tested with a group of blind volunteers of different ages. The trials show positive results: the system successfully detected incoming obstacles and informed its users in real time. The volunteers gave positive feedback, saying that they felt comfortable using the prototype and believe the system can help them with their daily routine.
A comparative study of D2L's Performance with a purpose built E-learning user interface for visual- and hearing-Impaired students
An e-learning system in an academic setting is an efficient tool for all students, especially for students with physical impairments. This thesis discusses an e-learning system through the design and development of an e-learning user interface for students with visual and hearing impairments. The tools and features required in the user interface to make the learning process easy and effective for students with such disabilities are presented. Further, an integration framework is proposed to integrate the new tools and features into the existing e-learning system Desire-To-Learn (D2L). The tools and features added to the user interface were tested by selected visually and hearing impaired participants from Laurentian University's population. Two questionnaires were completed to assess the usability of both the D2L e-learning user interface at Laurentian University and the new e-learning user interface designed for students with visual and hearing impairments. After collecting and analyzing the data, the results for usability factors such as effectiveness, ease of use, and accessibility showed that the participants were not completely satisfied with the existing D2L e-learning system, but were satisfied with the proposed new user interface. The results also showed that the tools and features proposed for students with visual and hearing impairments can be integrated into the existing D2L e-learning system. Master of Science (MSc) in Computational Science.
Towards a multidisciplinary user-centric design framework for context-aware applications
The primary aim of this article is to review and merge theories of context within linguistics, computer science, and psychology, to propose a multidisciplinary model of context that would help application developers produce richer descriptions or scenarios of how a context-aware device may be used in various dynamic mobile settings. More specifically, the aims are to:
1. Investigate different viewpoints of context within linguistics, computer science, and psychology, to develop condensed summary models for each discipline.
2. Investigate the impact of contrasting viewpoints on the usability of context-aware applications.
3. Investigate the extent to which single-discipline models can be merged, and the benefits and insightfulness of a merged model for designing mobile computers.
4. Investigate the extent to which the proposed multidisciplinary model can be applied to specific applications of context-aware computing.
Toward a multidisciplinary model of context to support context-aware computing
Capturing, defining, and modeling the essence of context are challenging, compelling, and prominent issues for interdisciplinary research and discussion. The roots of its emergence lie in the inconsistencies and ambivalent definitions across and within different research specializations (e.g., philosophy, psychology, pragmatics, linguistics, computer science, and artificial intelligence). Within the area of computer science, the advent of mobile context-aware computing has stimulated broad and contrasting interpretations due to the shift from traditional static desktop computing to heterogeneous mobile environments. This transition poses many challenging, complex, and largely unanswered research issues relating to contextual interactions and usability. To address those issues, many researchers strongly encourage a multidisciplinary approach. The primary aim of this article is to review and unify theories of context within linguistics, computer science, and psychology. Summary models within each discipline are used to propose an outline and a detailed multidisciplinary model of context involving (a) the differentiation of focal and contextual aspects of the user's and application's world, (b) the separation of meaningful and incidental dimensions, and (c) important user and application processes. The models provide an important foundation in which complex mobile scenarios can be conceptualized and key human and social issues can be identified. The models were then applied to different applications of context-aware computing involving user communities and mobile tourist guides. The authors' future work involves developing a user-centered multidisciplinary design framework (based on their proposed models). This will be used to design a large-scale user study investigating the usability issues of a context-aware mobile computing navigation aid for visually impaired people.
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase independence of
elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly
people and advances in technology which will make new uses possible and provides suggestions for some of these new
applications. The paper also considers the design and other conditions to be met for user acceptance. It also discusses
the complementarity of assistive service robots and personal assistance and considers the types of applications and
users for which service robots are and are not suitable