232 research outputs found

    CoUIM: crossover user interface model for inclusive computing

    Persons with disabilities can face considerable challenges accessing many computing systems, such as cloud computing. We created six low-cost user interfaces: keyboard-based, touch-based, speech-based, touchless gesture, and tactile, plus a sixth that combines them all, termed the Crossover User Interface Model (CoUIM). We measured inclusiveness, error occurrence, user performance, and user satisfaction through an IRB-approved study of twenty-nine participants. We chose the Xen cloud platform to evaluate our research. We focused on three groups of users: persons with no disability, persons who are blind or visually impaired (B/VI), and persons with motor impairment. When we combined several interaction modalities in one user interface, results improved for persons with disabilities. Using CoUIM improved inclusiveness, error rate, user performance, and even user satisfaction. Persons with motor impairment needed slightly more time to complete the same tasks in our study. In particular, we show that persons who are blind or visually impaired (B/VI) can compete on equal footing with their sighted peers, based on error rate and time to complete the tasks, when using CoUIM.

    Analysis of Touchless Mouse Technology for Physical Disabilities

    We present a touchless mouse system that surpasses past attempts by utilising deep learning models, namely DenseNet169 and DenseNet201, together with an ensemble model. For feature extraction in the touchless mouse system, we use these two state-of-the-art convolutional neural network architectures. These models, trained on massive datasets, perform remarkably well on computer vision tasks, and their sophisticated feature-extraction capabilities make exact recognition and interpretation of hand motions and movements possible. An ensemble model is developed by integrating the outputs of DenseNet169 and DenseNet201, making the system's performance even more effective. The ensemble technique improves the accuracy, stability, and generalisability of hand-gesture detection by capitalising on the differences between the two models. We compare the DenseNet169 and DenseNet201 models and the ensemble model against several other deep learning and ensemble learning models. The ensemble model reached the highest accuracy, 99.62 per cent.
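    The integration of two classifiers' outputs described above is commonly done by soft voting, i.e. averaging per-class probabilities and taking the argmax. A minimal sketch of that step, assuming each network has already produced a probability vector for one hand-gesture frame (the vectors below are illustrative, not from the paper):

```python
import numpy as np

def ensemble_predict(prob_a, prob_b):
    """Soft-voting ensemble: average two class-probability vectors
    and return the index of the most likely class."""
    avg = (np.asarray(prob_a) + np.asarray(prob_b)) / 2.0
    return int(np.argmax(avg))

# Hypothetical per-class probabilities for one frame, one from each model.
p_densenet169 = [0.10, 0.70, 0.20]
p_densenet201 = [0.05, 0.60, 0.35]
print(ensemble_predict(p_densenet169, p_densenet201))  # → 1
```

    Averaging smooths out cases where the two backbones disagree with low confidence, which is one plausible source of the stability gain the abstract reports.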

    Proceedings of the 4th Workshop on Interacting with Smart Objects 2015

    These are the Proceedings of the 4th IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday lives are expanding beyond their restricted interaction capabilities and provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture information, process and store it, and interact with their environments, turning them into smart objects.

    AeroLamp

    AeroLamp is a smart robotic desk lamp embodying a pioneering development in Natural User Interface (NUI) design. Free of remotes or controllers, AeroLamp recognizes voice inputs and the natural gestures of human hands to maneuver its robotic position and lighting properties. This control of mechanical movement requires no physical contact of any kind and is the first of its kind in a smart lighting product. An onboard camera observes hand gestures and motions, providing data that is classified and interpreted on local processing hardware. With this implementation, control is no longer constrained by user location, distance, or physical contact. The removal of physical contact is an essential step in preventing contact transmission of diseases in populous spaces, which is likely to be a societal concern following the COVID-19 pandemic.

    Defining CARE Properties Through Temporal Input Models

    In this paper we show how the CARE properties (complementarity, assignment, redundancy, equivalence) can be represented by modelling the temporal relationships among inputs provided through different modalities. For this purpose we extended GestIT, which provides a declarative and compositional model for gestures, to support other modalities. The generic models for the CARE properties can be used for input-model design, but also for analysing the relationships between the different modalities included in an existing input model.

    Usability evaluation of input devices for navigation and interaction in 3D visualisation

    We present an assessment study of user experience and usability of different kinds of input devices for view manipulation in a 3D data visualisation application. Three input devices were compared: a computer mouse, a 3D mouse with six degrees of freedom, and the Leap Motion Controller, a device for touchless interaction. Assessment of these devices was conducted using the System Usability Scale (SUS) methodology, with the addition of application-specific questions. To gain further insight into users' behaviour, their performance and feedback on the given tasks were recorded and analysed. The best results were achieved with the 3D mouse (SUS score 88.7), followed by the regular mouse (SUS score 72.4). The Leap Motion Controller (SUS score 56.5) was the least preferred mode of interaction; nevertheless, it was described as natural and intuitive, showing great potential.
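    For context, the SUS scores quoted above come from the standard scoring rule for the ten-item questionnaire: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch of that calculation (the response lists are illustrative, not the study's data):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (index 0, 2, ... here) contribute (response - 1);
    even-numbered items contribute (5 - response); the total is scaled
    by 2.5 to the 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0 (best possible)
print(sus_score([3] * 10))                        # → 50.0 (neutral answers)
```

    A score of 88.7, as achieved by the 3D mouse, is well above the commonly cited average of 68, while the Leap Motion Controller's 56.5 falls below it.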

    Optical Accessory to Add Touch Capability to a Non-touchscreen Device

    Certain laptops and other devices do not include built-in touchscreen capability. This disclosure describes an optical accessory that enables such devices to recognize touch inputs. The accessory includes a fisheye lens attached to a holder and a tilted plane mirror placed within the accessory that reflects incident light towards the camera of the device to which the accessory is attached. A one-time calibration is performed at setup time; at runtime, camera images that capture the user's finger position relative to the screen are received. The finger pose is estimated by detecting fingers and utilizing a skeletal finger-pose detector. A touch event is determined based on the Euclidean distance between the tip-joint coordinates of the real finger and its virtual mirror image. The touch coordinate in the optical view is determined and mapped to a display coordinate using a conformal mapping.
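    The touch-detection step described above reduces to a distance test: as the fingertip approaches the screen, the real tip joint and its mirror image converge, and a touch is declared when their Euclidean distance falls below a threshold. A minimal sketch, where the coordinates and the threshold value are illustrative assumptions rather than values from the disclosure:

```python
import math

def is_touch(real_tip, virtual_tip, threshold=5.0):
    """Declare a touch when the real fingertip joint and its mirror-image
    (virtual) counterpart come within a small Euclidean distance.

    threshold is a hypothetical pixel tolerance, not a value from the
    disclosure; it would be tuned during the one-time calibration.
    """
    return math.dist(real_tip, virtual_tip) <= threshold

print(is_touch((120.0, 80.0), (123.0, 84.0)))   # distance 5.0 → True
print(is_touch((120.0, 80.0), (140.0, 80.0)))   # distance 20.0 → False
```

    Once a touch is declared, the detected optical-view coordinate would then be passed through the calibrated conformal mapping to obtain the display coordinate.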

    Challenges Faced by Persons with Disabilities Using Self-Service Technologies

    Foreseeable game-changing solutions for SSTs will allow for better universal access by implementing features that are easy and intuitive to use from their inception. Additional robotic advancements will allow for better and easier delivery of goods to consumers. Improvements to artificial intelligence will allow for better communication through natural language and alternative forms of communication. Furthermore, artificial intelligence will aid consumers at SSTs by remembering each consumer's preferences and needs. With all of these foreseeable game-changing solutions, people with disabilities will be consulted when new and improved SSTs are being developed, allowing each SST to maximize its potential.