
    A context aware architecture to support people with partial visual impairments

    Nowadays, several systems help people with disabilities in their daily tasks. Visual impairment is a condition that hinders many people in their tasks and movements. In this work we propose an architecture capable of processing information from the environment and suggesting actions to a user with visual impairments in order to avoid a possible obstacle. This architecture aims to improve the support given to users in their daily movements. The idea is to use speculative computation to predict users' intentions and even to justify reactive or proactive user behaviours.
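    The speculative-computation idea described above can be illustrated with a minimal sketch: the system acts on a default assumption about the user's path, then revises its suggestion when an actual observation arrives. All names and the suggested actions below are hypothetical illustrations; the abstract does not specify this API.

    ```python
    # Speculative default: assume the path ahead is clear until told otherwise.
    DEFAULTS = {"path_clear": True}

    def suggest_action(observations):
        """Combine the speculative defaults with real observations; observations win."""
        belief = {**DEFAULTS, **observations}
        if belief["path_clear"]:
            return "continue"
        return "stop and veer left"

    # Speculative phase: no sensor data yet, so the system acts on the default.
    assert suggest_action({}) == "continue"
    # Revision phase: a sensor reports an obstacle, and the suggestion changes.
    assert suggest_action({"path_clear": False}) == "stop and veer left"
    ```

    The point of the speculation is responsiveness: the system can start guiding immediately on defaults instead of waiting for complete sensor input, at the cost of occasionally revising its advice.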

    ANGELAH: A Framework for Assisting Elders At Home

    The ever-growing percentage of elderly people within modern societies places welfare systems under considerable stress. In fact, the partial and progressive loss of motor, sensorial, and/or cognitive skills renders elders unable to live autonomously, eventually leading to their hospitalization. This results in significant emotional and economic costs. Ubiquitous computing technologies can offer interesting opportunities for in-house safety and autonomy. However, existing systems address in-house safety requirements only partially and typically focus on elder monitoring and emergency detection alone. This paper presents ANGELAH, a middleware-level solution integrating both "elder monitoring and emergency detection" solutions and networking solutions. ANGELAH has two main features: i) it enables efficient integration between a variety of sensors and actuators deployed at home for emergency detection, and ii) it provides a solid framework for creating and managing rescue teams composed of individuals willing to promptly assist elders in emergency situations. A prototype of ANGELAH, designed for a case study on helping elders with vision impairments, is developed, and interesting results are obtained from both computer simulations and a real-network testbed.

    Ontology-based personalisation of e-learning resources for disabled students

    Students with disabilities are often expected to use e-learning systems to access learning materials, but most systems do not provide appropriate adaptation or personalisation to meet their needs. The difficulties related to the inadaptability of current learning environments can now be addressed using semantic web technologies such as web ontologies, which have been successfully used to drive e-learning personalisation. Nevertheless, e-learning personalisation for students with disabilities has mainly targeted those with single disabilities such as dyslexia or visual impairment, often neglecting those with multiple disabilities due to the difficulty of designing for a combination of disabilities. This thesis argues that it is possible to personalise learning materials for learners with disabilities, including those with multiple disabilities. This is achieved by developing a model that allows the learning environment to present the student with learning materials in suitable formats while considering their disability and learning needs: an ontology-driven and disability-aware personalised e-learning system model (ONTODAPS). A disability ontology known as the Abilities and Disabilities Ontology for Online LEarning and Services (ADOOLES) is developed and used to drive this model. To test this hypothesis, case studies are employed to show how the model functions for various individuals with and without disabilities; the implemented visual interface is then evaluated experimentally by eighteen students with disabilities and heuristically by ten lecturers, and the results are collected and statistically analysed. The results confirm the hypothesis and suggest that ONTODAPS can be effectively employed to personalise learning and to manage learning resources. The student participants found that ONTODAPS could aid their learning experience, and all agreed that they would like to use this functionality in an existing learning environment.
    The results also suggest that ONTODAPS provides a platform on which students with disabilities can have a learning experience equivalent to that of their peers without disabilities. For the results to be generalised, this study could be extended through further experiments with more diverse groups of students with disabilities and across multiple educational institutions.
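    The core personalisation step can be sketched in miniature: map a student's (possibly multiple) disabilities onto the set of accessible formats a learning resource should be offered in. This is an illustrative simplification, not the actual ONTODAPS/ADOOLES implementation; the profile table and format names are assumptions.

    ```python
    # Hypothetical mapping from a disability to suitable delivery formats.
    PROFILE_TO_FORMATS = {
        "visual impairment": ["audio", "braille-ready text"],
        "hearing impairment": ["captioned video", "text"],
        "dyslexia": ["audio", "structured text"],
    }

    def personalise(disabilities):
        """Return the union of suitable formats for a multi-disability profile."""
        formats = set()
        for d in disabilities:
            formats.update(PROFILE_TO_FORMATS.get(d, ["text"]))
        return sorted(formats)

    # A student with multiple disabilities gets the union of suitable formats,
    # which is how combinations avoid being reduced to a single-disability case.
    print(personalise(["visual impairment", "dyslexia"]))
    # → ['audio', 'braille-ready text', 'structured text']
    ```

    Taking the union rather than picking one profile is what lets a single model cover multiple disabilities, the gap the thesis identifies in earlier systems.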

    Interacting with Smart Environments: Users, Interfaces, and Devices

    A Smart Environment is an environment enriched with disappearing devices, acting together to form an "intelligent entity". In such environments, the computing power pervades the space where the user lives, so it becomes particularly important to investigate the user's perspective on interacting with their surroundings. Interaction, in fact, occurs when a human performs some kind of activity using any computing technology: in this case, the computing technology has an intelligence of its own and can potentially be everywhere. There is no well-defined interaction situation or context, and interaction can happen casually or accidentally. The objective of this dissertation is to improve the interaction between two complex and different entities: the human and the Smart Environment. To reach this goal, this thesis presents four different and innovative approaches to address some of the identified key challenges. These approaches are then validated with four corresponding software solutions, integrated with a Smart Environment, that I have developed and tested with end users. Taken together, the proposed solutions enable better interaction between diverse users and their intelligent environments, provide a solid set of requirements, and can serve as a baseline for further investigation of this emerging topic.

    Sonification of guidance data during road crossing for people with visual impairments or blindness

    In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. This contribution instead addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages. Experimental evaluation shows that no single guiding mode is best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that decoding the sonified instructions requires more effort than decoding the speech instructions, and that test subjects require frequent 'hints' (in the form of speech messages). Despite this, more than two thirds of the test subjects prefer one of the two guiding modes based on sonification. There are two main reasons for this: first, with speech messages it is harder to hear the sounds of the environment; second, sonified messages convey information about the "quantity" of the expected movement.
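    The idea of conveying the "quantity" of the expected movement through sonification can be sketched as follows. This is a hypothetical illustration, not the paper's actual guiding modes: the signed misalignment angle with the crosswalk is mapped to a tone whose pitch rises with the error and whose stereo pan indicates the direction to turn. The mapping ranges (440–880 Hz, ±90°) are assumptions.

    ```python
    def sonify(misalignment_deg, max_deg=90.0):
        """Map a signed misalignment angle to a (frequency_hz, pan) pair.

        Pitch encodes the magnitude of the error (how far to turn);
        pan encodes its sign (-1.0 = left speaker, 1.0 = right speaker).
        """
        error = min(abs(misalignment_deg), max_deg) / max_deg  # normalised 0..1
        freq = 440.0 + 440.0 * error  # 440 Hz when aligned, up to 880 Hz
        pan = -1.0 if misalignment_deg < 0 else (1.0 if misalignment_deg > 0 else 0.0)
        return freq, pan

    assert sonify(0) == (440.0, 0.0)     # aligned: base tone, centred
    assert sonify(45) == (660.0, 1.0)    # half-scale error, turn right
    assert sonify(-90) == (880.0, -1.0)  # maximum error, turn left
    ```

    Unlike a discrete speech message ("turn left"), such a continuous mapping lets the user feel how much correction is still needed, which is the advantage the test subjects reported.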

    Supporting Inclusive Design of Mobile Devices with a Context Model

    The aim of inclusive product design is to integrate a broad range of diverse human factors into the product development process, with the intention of making products accessible to and usable by the largest possible group of users. However, the main barriers to adopting inclusive product design include technical complexity, lack of time, lack of knowledge and techniques, and lack of guidelines. Although manufacturers of consumer products are nowadays more likely to invest in user studies, consumer products in general fulfil the accessibility requirements of only a fraction of the users they potentially could, if at all. The main reason is that user-centered design prototyping or testing aiming to incorporate real user input is often done at a rather late stage of the product development process, and the further a product design has evolved, the more time-consuming and costly it becomes to alter. This is increasingly the case for contemporary mobile devices such as mobile phones and remote controls.