
    Assistive Systems for the Visually Impaired Based on Image Processing

    In this chapter, we propose three assistive systems for visually impaired individuals based on image processing: a Kinect cane system, a Kinect goggle system, and a light checking system. The Kinect cane system can detect obstacles of various sizes and also recognize objects such as seats; the user is notified of the results of detection and recognition by means of vibration feedback. The Kinect goggle system is a wearable alternative that leaves the user's hands free. The light checking system is implemented as a smartphone application and can tell a visually impaired user the ON/OFF states of room lights and elevator button lights. The experimental results demonstrate that the proposed systems are effective in helping visually impaired individuals in everyday environments.
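
    As a rough illustration of the light-checking idea described above, the sketch below estimates the ON/OFF state of a light from a single camera frame by thresholding the mean brightness of a region of interest. This is a minimal sketch in Python with OpenCV, not the authors' implementation; the function name, region-of-interest parameter, and threshold value are illustrative assumptions.

        # Minimal sketch, not the authors' implementation: decide whether a room
        # light or elevator button light is ON by thresholding the mean brightness
        # of a region of interest in a single camera frame.
        import cv2
        import numpy as np

        BRIGHTNESS_THRESHOLD = 180  # assumed value; real devices would need calibration

        def light_is_on(frame_bgr: np.ndarray, roi: tuple) -> bool:
            """Return True if the region of interest (x, y, w, h) looks lit."""
            x, y, w, h = roi
            gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
            return float(gray.mean()) > BRIGHTNESS_THRESHOLD

        # Example: check a 50x50 patch around an elevator button.
        # frame = cv2.imread("elevator_panel.jpg")
        # print(light_is_on(frame, (100, 100, 50, 50)))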

    A Smart Real-Time Standalone Route Recognition System for Visually Impaired Persons

    Visual impairment is a common disability that results in poor or no eyesight, and those affected face inconveniences in performing their daily tasks. Visually impaired persons require aids to interact with their environment safely. Existing navigation systems such as electronic travel aids (ETAs) are mostly cloud-based and rely heavily on the internet and Google Maps, which makes deployment infeasible in locations with poor internet facilities and poorly structured environments. This paper proposes a smart real-time standalone route recognition system for visually impaired persons. The proposed system uses a pedestrian route network, an interconnection of paths and their associated route tables, to provide directions to known locations in real time. The Gidan Kwanu campus of the Federal University of Technology (FUT), Minna, was used as the case study. Field testing of the device's search strategy showed that the complexity of the algorithm used to search for paths in the pedestrian network is, in the worst case, a function of N, where N is the number of paths available in the network. The accuracy of path recognition is 100%. This implies that the developed system is reliable and can be used by the visually impaired to recognize and navigate routes in real time.
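
    The sketch below illustrates one way a pedestrian route network with per-path route tables could be represented and queried; it is a hypothetical illustration, not the paper's code. The record layout, field names, and example landmarks are assumptions, and the linear scan over the path list is what makes this lookup's cost grow with N, the number of paths.

        # Hypothetical sketch (names and data layout are assumptions, not the
        # paper's implementation): a pedestrian route network as a list of path
        # records, each carrying a route table that maps a destination to the
        # next path to take.
        from dataclasses import dataclass, field

        @dataclass
        class PathRecord:
            path_id: str
            endpoints: tuple                                  # landmarks this path connects
            route_table: dict = field(default_factory=dict)   # destination -> next path_id

        def find_path(network, current_path_id):
            """Linear scan over the N paths to locate the user's current path."""
            for path in network:
                if path.path_id == current_path_id:
                    return path
            return None

        def next_direction(network, current_path_id, destination):
            """Look up the next path to follow toward the destination, if known."""
            current = find_path(network, current_path_id)
            return None if current is None else current.route_table.get(destination)

        # Example: from path "P1", head toward the library via path "P3".
        # network = [PathRecord("P1", ("Gate", "Junction A"), {"Library": "P3"})]
        # print(next_direction(network, "P1", "Library"))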

    Investigating Natural User Interfaces (NUIs): technologies and interaction in an accessibility context

    Advisor: Maria Cecília Calani Baranauskas. Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação. Abstract: Natural User Interfaces (NUIs) represent a new interaction paradigm, with the promise of being more intuitive and easier to use than its predecessor, which relies on mouse and keyboard. In a context where technology is becoming increasingly invisible and pervasive, not only the number but also the diversity of people taking part in this context is growing. It is therefore necessary to study how this new interaction paradigm can, in fact, be accessible to all the people who may use it in their daily routine. Furthermore, it is also necessary to characterize the paradigm itself, to understand what makes it, in fact, natural. In this thesis we present the path we took in search of these two answers: how to characterize NUIs in the current technological context, and how to make NUIs accessible to all. To do so, we first present a systematic literature review covering the state of the art. Then, we show a set of heuristics for the design and evaluation of NUIs, which were applied in practical case studies. Afterwards, we structure the ideas of this research using the artifacts of Organizational Semiotics and obtain insights into how to design NUIs with accessibility, whether through Universal Design or by proposing Assistive Technologies. We then present three case studies with NUI systems that we designed. From these case studies, we expanded our theoretical framework and were finally able to identify three elements that summarize our characterization of NUI: differences, affordances, and enaction. Doctorate in Computer Science. 160911/2015-0, CAPES, CNPq

    Real-time Robot-assisted Ergonomics

    This paper describes a novel approach to human-robot interaction driven by ergonomics. With a clear focus on optimising ergonomics, the approach proposed here continuously observes a human user's posture and, by invoking appropriate cooperative robot movements, brings the user's posture back to an ergonomic optimum whenever required. Effectively, the new protocol optimises the human-robot relative position and orientation as a function of human ergonomics. An RGB-D camera is used to calculate and monitor human joint angles in real time and to determine the current ergonomic state. A total of six main causes of low ergonomic states are identified, leading to six universal robot responses that allow the human to return to an optimal ergonomic state. The algorithmic framework identifies these six causes and controls the cooperating robot so that it always adapts the environment (e.g. changes the pose of the workpiece) in the way that is ergonomically most comfortable for the interacting user. Hence, human-robot interaction is continuously re-evaluated to optimise the ergonomic state. The approach is validated through an experimental study based on established ergonomic methods and their adaptation for real-time application. The study confirms improved ergonomics using the new approach. Comment: 6 pages, accepted and to be presented at IEEE ICRA 201
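
    A minimal sketch of the kind of control loop described above: joint angles from a skeleton tracker are checked against comfort ranges, the first violated range is treated as the cause of the low ergonomic state, and that cause is mapped to a corrective robot action. The specific joints, angle ranges, cause labels, and response names here are illustrative assumptions; the paper's actual six causes and six responses are not reproduced.

        # Minimal sketch of an ergonomics-driven decision step; the joints, comfort
        # ranges, cause labels, and robot responses below are assumptions, not the
        # paper's six causes and responses.
        NEUTRAL_RANGES = {              # assumed comfort ranges, in degrees
            "neck_flexion": (0, 20),
            "trunk_flexion": (0, 20),
            "shoulder_elevation": (0, 45),
        }

        ROBOT_RESPONSES = {             # illustrative cause -> corrective action
            "neck_flexion_out_of_range": "raise_workpiece",
            "trunk_flexion_out_of_range": "move_workpiece_closer",
            "shoulder_elevation_out_of_range": "lower_workpiece",
        }

        def classify_cause(joint_angles):
            """Return the first joint whose angle leaves its comfort range, else None."""
            for joint, (low, high) in NEUTRAL_RANGES.items():
                angle = joint_angles.get(joint)
                if angle is not None and not (low <= angle <= high):
                    return f"{joint}_out_of_range"
            return None

        def choose_robot_action(joint_angles):
            """Map the detected cause of a low ergonomic state to a robot response."""
            return ROBOT_RESPONSES.get(classify_cause(joint_angles), "hold_position")

        # Example: a slouched trunk triggers a request to bring the workpiece closer.
        # print(choose_robot_action({"neck_flexion": 10.0, "trunk_flexion": 35.0}))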

    Virtual Reality: An Evidence-Based Guide for Occupational Therapy

    Problem: Virtual reality (VR) is an emerging technology that serves to position the user in control of a virtual environment to maximize interest and function. VR is a context for intervention included in the Occupational Therapy Practice Framework: Domain and Process, 3rd edition (OTPF-3; AOTA, 2014). It is crucial that occupational therapy (OT) practitioners enhance their knowledge about VR technology used by their clients to provide effective client-centered practice (AOTA, 2010). There is a lack of organization of the literature and research evidence regarding the use of VR as an intervention modality. While there has been a significant increase in recent literature supporting the use of VR in OT, there is a need for manuals and resources to guide clinicians in using VR as a therapeutic modality (Levac & Miller, 2013; Proffitt & Lange, 2015). Methods: An extensive literature review for articles regarding the use of the Xbox Kinect and Nintendo Wii within OT was conducted using five databases. Relevant data were extracted from each article related to the use of the Xbox Kinect or Nintendo Wii in intervention to synthesize the findings into charts. The construction of the charts followed the organization of the OTPF-3 (AOTA, 2014). Product: The authors of this product created an evidence-based resource to guide OT practitioners' use of VR. The intended purpose of this product, Virtual Reality: An Evidence-Based Guide, is to assist OT practitioners' adaptation and/or modification of VR activities to address individual client needs. Using this product as a guide to current knowledge and evidence regarding intervention in the context of VR, OT practitioners will be better equipped to make safe and effective choices. To further support development of literature regarding VR, the authors of this product recommended areas for future research.

    Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions

    The World Health Organization (WHO) reported that there are 285 million visually impaired people worldwide. Among these individuals, 39 million are totally blind. Several systems have been designed to support visually-impaired people and to improve the quality of their lives. Unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group. The contribution of this literature survey is to discuss in detail the most significant devices presented in the literature to assist this population and to highlight their improvements, advantages, disadvantages, and accuracy. Our aim is to address and present most of the issues of these systems to pave the way for other researchers to design devices that ensure safety and independent mobility for visually-impaired people.
    https://doi.org/10.3390/s1703056

    Intelligent strategies for mobile robotics in laboratory automation

    In this thesis a new intelligent framework is presented for mobile robots in laboratory automation, which includes: a new multi-floor indoor navigation method together with an intelligent multi-floor path-planning approach; a new signal-filtering method for the robots to forecast their indoor coordinates; a new human-feature-based strategy for smart robot-human collision avoidance; a new robot power-forecasting method for deciding distributed transportation tasks; and a new blind approach to arm manipulation for the robots.
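
    As a sketch of what multi-floor indoor path planning can look like in general, the snippet below runs Dijkstra's algorithm over a graph whose nodes are (floor, location) pairs and whose elevator edges connect floors. It is not the thesis implementation; the graph layout, node naming, and edge costs are assumptions made for the example.

        # Sketch under stated assumptions, not the thesis implementation: multi-floor
        # path planning as shortest-path search over a graph whose nodes are
        # (floor, location) pairs, with elevator edges joining floors.
        import heapq

        def plan_multi_floor_path(graph, start, goal):
            """Dijkstra over {node: [(neighbour, cost), ...]}; returns (cost, path) or None."""
            frontier = [(0.0, start, [start])]
            visited = set()
            while frontier:
                cost, node, path = heapq.heappop(frontier)
                if node == goal:
                    return cost, path
                if node in visited:
                    continue
                visited.add(node)
                for neighbour, edge_cost in graph.get(node, []):
                    if neighbour not in visited:
                        heapq.heappush(frontier, (cost + edge_cost, neighbour, path + [neighbour]))
            return None

        # Example: lab A on floor 1 to lab C on floor 2 via the elevator.
        # graph = {
        #     (1, "lab_A"): [((1, "elevator"), 10.0)],
        #     (1, "elevator"): [((2, "elevator"), 5.0)],
        #     (2, "elevator"): [((2, "lab_C"), 8.0)],
        # }
        # print(plan_multi_floor_path(graph, (1, "lab_A"), (2, "lab_C")))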

    A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired

    The world has approximately 285 million visually impaired (VI) people, according to a report by the World Health Organization. Thirty-nine million people are estimated to be blind, whereas 246 million people are estimated to have impaired vision. An important factor that motivated this research is the fact that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life of VI people and to support their mobility. Unfortunately, none of these systems provides a complete solution for VI people, and they are very expensive. Therefore, this work presents an intelligent framework that includes several types of sensors embedded in a wearable device to support the visually impaired community. The proposed work is based on an integration of sensor-based and computer-vision-based techniques in order to introduce an efficient and economical visual device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios. A video dataset of 30 videos, with an average of 700 frames per video, was fed to the system for testing purposes. The proposed sequence of techniques used for the real-time detection component achieved a 96.53% accuracy rate, based on a wide detection view that uses two camera modules and a detection range of approximately 9 meters. A 98% accuracy rate was obtained on a larger dataset. However, the main contribution of this work is the proposed novel collision avoidance approach, which is based on image depth and fuzzy control rules. Using an x-y coordinate system, we map the input frames: each frame is divided vertically into three areas and, horizontally, at one third of the frame height, in order to specify the urgency of any existing obstacles within that frame. In addition, we provide precise information that helps the VI user avoid frontal obstacles using fuzzy logic. The strength of this proposed approach is that it aids VI users in avoiding 100% of all detected objects. Once the device is initialized, the VI user can confidently enter unfamiliar surroundings. Therefore, the implemented device can be described as accurate, reliable, friendly, light, and economically accessible; it facilitates the mobility of VI people and does not require any previous knowledge of the surrounding environment. Finally, our proposed approach was compared with the most efficient existing techniques and proved to outperform them.
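
    The sketch below illustrates the described frame partitioning, with the lower portion of a depth frame split into left, centre, and right regions, and a crude rule-based stand-in for the fuzzy controller that turns the nearest depth in each region into a spoken-style cue. The distance thresholds, region proportions, and advice strings are assumptions made for the example, not the thesis' fuzzy rules.

        # Illustrative sketch, not the thesis code: partition the lower part of a
        # depth frame into left / centre / right regions and turn the nearest depth
        # in each region into a simple avoidance cue (a crude stand-in for the
        # fuzzy rules).  Thresholds and strings are assumptions for the example.
        import numpy as np

        CLOSE_M = 2.0         # assumed "urgent" distance in metres
        DETECT_RANGE_M = 9.0  # detection range mentioned in the abstract

        def avoidance_advice(depth_m):
            """depth_m: HxW array of depths in metres; return a spoken-style cue."""
            h, _ = depth_m.shape
            lower = depth_m[2 * h // 3:, :]                    # lower third of the frame
            left, centre, right = np.array_split(lower, 3, axis=1)
            nearest = {name: float(np.nanmin(region))
                       for name, region in (("left", left), ("centre", centre), ("right", right))}
            if nearest["centre"] > DETECT_RANGE_M:
                return "path clear"
            if nearest["centre"] < CLOSE_M:
                # steer toward whichever side currently has more free depth
                return "step right" if nearest["right"] > nearest["left"] else "step left"
            return "obstacle ahead, slow down"

        # Example with synthetic depth data: a close obstacle straight ahead.
        # frame = np.full((480, 640), 10.0); frame[400:, 250:420] = 1.5
        # print(avoidance_advice(frame))   # -> "step left"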