9,557 research outputs found

    EYECOM: an innovative approach for computer interaction

    Get PDF
    The world is innovating rapidly, and there is a growing need for continuous interaction with technology. Unfortunately, there are few promising options for paralyzed people to interact with machines such as laptops, smartphones, and tablets. The few commercial solutions, such as Google Glass, are costly and cannot be afforded by every paralyzed person. Towards this end, the thesis proposes a retina-controlled device called EYECOM. The proposed device is constructed from off-the-shelf, cost-effective yet robust IoT components (i.e., Arduino microcontrollers, XBee wireless radios, IR diodes, and an accelerometer). The device can easily be mounted onto eyeglasses; a paralyzed person using it can interact with the machine through simple head movements and eye blinks. An IR diode located in front of the eye illuminates the eye region, and the reflected IR light is converted into an electrical signal; as the eyelid closes, the reflection from the eye surface is disrupted, and this change in the measured value is recorded. To enable cursor movement on the computer screen, an accelerometer is used. The accelerometer is a small device, roughly the size of a thumb phalanx, that operates on the principle of axis-based motion sensing and can be worn as a ring by the paralyzed person. A microcontroller processes the inputs from the IR sensor and the accelerometer and transmits them wirelessly via an XBee radio to another microcontroller attached to the computer. Using the proposed algorithm, the microcontroller attached to the computer moves the cursor on the screen as signals are received and facilitates actions ranging from opening a document to operating word-to-speech software. EYECOM has features that can help paralyzed persons continue contributing to the technological world and remain an active part of society. As a result, they will be able to perform a range of tasks without depending on others, from reading a newspaper on the computer to activating word-to-speech software.
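    The abstract describes the control flow only at a high level. Purely as an illustration of the threshold-and-move idea (the thesis places this logic on the receiving microcontroller, not on a host computer), a minimal Python sketch with an assumed "ir,ax,ay" packet format and invented threshold and gain values might look like this:

```python
# Illustrative host-side sketch only; the packet format, threshold and gain are assumptions.
import serial          # pyserial: reads the radio link bridged to a serial port
import pyautogui       # moves the cursor and issues clicks

IR_BLINK_THRESHOLD = 300   # assumed sensor value below which the eyelid is treated as closed
GAIN = 0.5                 # assumed tilt-to-pixels scaling

def run(port="/dev/ttyUSB0", baud=9600):
    with serial.Serial(port, baud, timeout=1) as link:
        blink_frames = 0
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if not line:
                continue
            try:
                ir_value, ax, ay = (float(v) for v in line.split(","))
            except ValueError:
                continue                      # skip malformed packets
            # Tilt from the ring-worn accelerometer moves the cursor proportionally.
            pyautogui.moveRel(int(GAIN * ax), int(GAIN * ay))
            # A sustained drop in reflected IR is treated as a deliberate blink.
            blink_frames = blink_frames + 1 if ir_value < IR_BLINK_THRESHOLD else 0
            if blink_frames == 5:             # long enough to rule out natural blinks
                pyautogui.click()

if __name__ == "__main__":
    run()
```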

    The Internet of Things Will Thrive by 2025

    Get PDF
    This report is the latest in a sustained effort throughout 2014 by the Pew Research Center Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. It analyzes opinions about the likely expansion of the Internet of Things (sometimes called the Cloud of Things), a catchall phrase for the array of devices, appliances, vehicles, wearable material, and sensor-laden parts of the environment that connect to each other and feed data back and forth. It covers the more than 1,600 responses offered specifically to the question of where the Internet of Things would stand by the year 2025. The report is the next in a series of eight Pew Research and Elon University analyses to be issued this year in which experts share their expectations about the future of such things as privacy, cybersecurity, and net neutrality. It includes some of the best and most provocative predictions survey respondents made when specifically asked to share their views about the evolution of embedded and wearable computing and the Internet of Things.

    An Intelligent Robot and Augmented Reality Instruction System

    Get PDF
    Human-Centered Robotics (HCR) is a research area that focuses on how robots can empower people to live safer, simpler, and more independent lives. In this dissertation, I present a combination of two technologies to deliver human-centric solutions to an important population. The first nascent area that I investigate is the creation of an Intelligent Robot Instructor (IRI) as a learning and instruction tool for human pupils. The second technology is the use of augmented reality (AR) to create an Augmented Reality Instruction (ARI) system to provide instruction via a wearable interface. To function in an intelligent and context-aware manner, both systems require the ability to reason about their perception of the environment and make appropriate decisions. In this work, I construct a novel formulation of several education methodologies, particularly those known as response prompting, as part of a cognitive framework to create a system for intelligent instruction, and compare these methodologies in the context of intelligent decision making using both technologies. The IRI system is demonstrated through experiments with a humanoid robot that uses object recognition and localization for perception and interacts with students through speech, gestures, and object interaction. The ARI system uses augmented reality, computer vision, and machine learning methods to create an intelligent, contextually aware instructional system. By using AR to teach prerequisite skills that lend themselves well to visual, augmented reality instruction prior to a robot instructor teaching skills that lend themselves to embodied interaction, I am able to demonstrate the potential of each system independently as well as in combination to facilitate students' learning. I identify people with intellectual and developmental disabilities (I/DD) as a particularly significant use case and show that IRI and ARI systems can help fulfill the compelling need to develop tools and strategies for people with I/DD. I present results that demonstrate both systems can be used independently by students with I/DD to quickly and easily acquire the skills required for performance of relevant vocational tasks. This is the first successful real-world application of response prompting for decision making in a robotic and augmented reality intelligent instruction system.
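    The abstract does not spell out how the response-prompting methodologies are implemented. The following sketch shows one common form of response prompting, a least-to-most prompt hierarchy, with invented prompt levels and a placeholder check_response() callback standing in for the robot's (or instructor's) perception of task completion:

```python
# Hypothetical least-to-most response-prompting loop; prompt levels and the
# check_response() callback are illustrative, not the dissertation's framework.
import time

PROMPT_HIERARCHY = [
    "independent (no prompt)",
    "verbal instruction",
    "gesture toward the target object",
    "full demonstration of the step",
]

def instruct_step(step_name, check_response, wait_seconds=5):
    """Escalate prompts until the learner completes the step; return the support level used."""
    for level, prompt in enumerate(PROMPT_HIERARCHY):
        print(f"[{step_name}] prompt level {level}: {prompt}")
        time.sleep(wait_seconds)          # give the learner time to respond
        if check_response():              # perception system or instructor confirms completion
            return level
    return len(PROMPT_HIERARCHY)          # step had to be completed for the learner

# Console stand-in for the perception check, purely for demonstration.
if __name__ == "__main__":
    used = instruct_step("place the part in the bin",
                         check_response=lambda: input("done? [y/n] ") == "y",
                         wait_seconds=0)
    print("support level used:", used)
```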

    Evaluating Context-Aware Applications Accessed Through Wearable Devices as Assistive Technology for Students with Disabilities

    Get PDF
    The purpose of these two single-subject design studies was to evaluate the use of wearable and context-aware technologies by college students with intellectual disability and autism as tools to increase independence and vocational skills. There is a compelling need for the development of tools and strategies that will facilitate independence and self-sufficiency and address poor outcomes in adulthood for students with disabilities. Technology is considered to be a great equalizer for people with disabilities. The proliferation of new technologies allows access to real-time, contextually based information as a means to compensate for limitations in cognitive functioning and decreases the complexity of prerequisite skills required to use earlier technologies successfully. Six students participated in two single-subject design studies; three students participated in Study I and three different students participated in Study II. The results of these studies are discussed in the context of applying new technology applications to help individuals with intellectual disability and autism self-manage technological supports to learn new skills, set reminders, and enhance independence. During Study I, students were successfully taught to use a wearable smartglasses device, which delivered digital auditory and visual information, to complete three novel vocational tasks. The results indicated that all students learned all vocational tasks using the wearable device. Students also continued to use the device beyond the initial training phase to self-direct their learning and self-manage prompts for task completion as needed. During Study II, students were successfully taught to use a wearable smartwatch device to enter novel appointments for the coming week, as well as complete the tasks associated with each appointment. The results indicated that all students were able to self-operate the wearable device to enter appointments, attend all appointments on time, and complete all associated tasks.

    Exploring the Use of Wearables to develop Assistive Technology for Visually Impaired People

    Get PDF
    This thesis explores the use of two prominent wearable devices to develop assistive technology for users who are visually impaired. Specifically, the work in this thesis aims at improving the quality of life of users who are visually impaired by improving their mobility and their ability to interact socially with others. We explore the use of a smartwatch for creating low-cost spatial haptic applications. This application uses haptic feedback delivered through a smartwatch and a smartphone to provide navigation instructions that let visually impaired people safely traverse a large open space. This spatial feedback guides them to walk on a straight path from source to destination by avoiding veering. Exploiting the paired interaction between a smartphone and a smartwatch helps overcome the limitation that each smart device has only a single haptic actuator. We also explore the use of a head-mounted display to enhance social interaction by helping people with visual impairments align their head towards a conversation partner as well as maintain personal space during a conversation. Audio feedback is provided to the users, guiding them to achieve effective face-to-face communication. A qualitative study of this method shows the effectiveness of the application and explains how it helps visually impaired people perceive non-verbal cues and feel more engaged and assertive in social interactions.
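    As a rough illustration of the veering-correction idea, the sketch below computes the bearing error between the user's current heading and the straight source-to-destination path and decides which of the two paired devices should vibrate. The device roles, dead zone, and heading convention are assumptions, and the platform-specific vibration calls are omitted:

```python
# Illustrative veering-correction logic only; thresholds and device roles are invented.
import math

def bearing_deg(src, dst):
    """Approximate planar bearing (0° = north) from src to dst, given (lat, lon) in degrees."""
    dy = dst[0] - src[0]
    dx = (dst[1] - src[1]) * math.cos(math.radians(src[0]))
    return math.degrees(math.atan2(dx, dy)) % 360

def haptic_cue(user_heading_deg, src, dst, dead_zone_deg=10):
    """Return which paired device should vibrate to correct veering, or None if on course."""
    error = (bearing_deg(src, dst) - user_heading_deg + 540) % 360 - 180
    if abs(error) <= dead_zone_deg:
        return None                    # within the dead zone: no feedback
    return "smartwatch (turn left)" if error < 0 else "smartphone (turn right)"

# Example: user facing 100° while the path heads due north (0°), so the cue is to turn left.
print(haptic_cue(100, src=(40.0000, -75.0000), dst=(40.0010, -75.0000)))
```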

    v. 80, issue 6, October 26, 2012

    Get PDF

    An innovative system to assist the mobility of people with motor disabilities

    Get PDF
    People with motor disabilities require assistance to navigate from one location to another. In order to improve the integration of wheelchair users into their daily life and work, we propose a real-time adaptive planning algorithm for routing the user along an obstacle-free optimal path. Our application is based on an augmented reality system for the assistance of wheelchair people (ARSAWP) and uses augmented reality (AR) smart glasses. The main goal is to support the development of indoor and outdoor navigation systems devoted to wheelchair users. In this paper we detail the design, implementation, and evaluation of the proposed application, which was implemented in Java for the Android operating system. Two types of databases are used (a local database and a remote database). The navigation information is displayed on the AR glasses, which gives the user the ability to interact with the system according to the external environment. The prototype is designed for use within the University of Lille campus.
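    The abstract does not detail the adaptive planning algorithm, and the original system is implemented in Java for Android; the Python sketch below only illustrates one plausible core, grid-based A* routing that is simply re-run when a new obstacle is reported:

```python
# Minimal grid-based A* sketch of obstacle-free routing with re-planning.
# The grid/obstacle model and costs are assumptions for illustration only.
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle; returns a list of cells or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from, cost = {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[cur] + 1
                if new_cost < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = cur
                    heur = abs(goal[0] - nr) + abs(goal[1] - nc)   # Manhattan heuristic
                    heapq.heappush(frontier, (new_cost + heur, (nr, nc)))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))        # initial plan
grid[2][1] = 1                            # a new obstacle is reported
print(astar(grid, (0, 0), (2, 3)))        # adaptive re-planning around it
```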

    Integration of Assistive Technologies into 3D Simulations: Exploratory Studies

    Get PDF
    Virtual worlds and environments have many purposes, ranging from games to scientific research. However, universal accessibility features in such virtual environments are limited. As the prevalence of impairments increases yearly, so does research interest in the field of assistive technologies. This work introduces research in assistive technologies and presents three software developments that explore the integration of assistive technologies within virtual environments, with a strong focus on Brain-Computer Interfaces. An accessible gaming system, a hands-free navigation software system, and a Brain-Computer Interaction plugin have been developed to study the capabilities of accessibility features within virtual 3D environments. Details of the specification, design, and implementation of these software applications are presented in the thesis. Observations and preliminary results as well as directions for future work are also included.

    Augmented Reality for Real-time Navigation Assistance to Wheelchair Users with Obstacles' Management

    Get PDF
    Despite rapid technological evolution in the field of technical assistance for people with motor disabilities, their ability to move independently in a wheelchair is still limited. New information and communication technologies (NICT) such as augmented reality (AR) are a real opportunity to integrate people with disabilities into their everyday life and work. AR can provide real-time information about the accessibility of buildings and locations through mobile applications that give the user a clear view of the building details. By interacting with augmented environments that appear in the real world through a smart device, users with disabilities gain more control over their environment. In this paper, we propose a decision support system using AR for navigation assistance for people with motor disabilities. We describe a real-time wheelchair navigation system equipped with geolocated mapping that indicates the access path to a desired location and the shortest route towards it, and identifies obstacles to avoid. The prototype wheelchair navigation system was developed for use within the University of Lille campus.
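    To make the obstacle-management idea concrete, here is a hedged sketch (with invented campus nodes, distances, and accessibility flags) in which route segments carry a wheelchair-accessibility flag, a reported obstacle closes a segment, and the shortest accessible route is recomputed:

```python
# Illustrative obstacle management: a tiny campus graph whose edges carry an
# accessibility flag; an obstacle report blocks an edge and routing is redone.
import heapq

edges = {   # node -> [(neighbour, metres, wheelchair_accessible)]
    "Entrance":  [("Hall A", 40, True), ("Stairs", 15, False)],
    "Hall A":    [("Entrance", 40, True), ("Lecture 1", 30, True)],
    "Stairs":    [("Entrance", 15, False), ("Lecture 1", 10, False)],
    "Lecture 1": [("Hall A", 30, True), ("Stairs", 10, False)],
}
blocked = set()          # edges closed by real-time obstacle reports

def shortest_accessible(start, goal):
    """Dijkstra over accessible, unblocked edges; returns (distance, path) or None."""
    dist, queue = {start: 0}, [(0, start, [start])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == goal:
            return d, path
        for nxt, w, accessible in edges[node]:
            if not accessible or (node, nxt) in blocked:
                continue                       # skip stairs and blocked segments
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(queue, (d + w, nxt, path + [nxt]))
    return None

print(shortest_accessible("Entrance", "Lecture 1"))   # route via Hall A
blocked.add(("Entrance", "Hall A"))                   # obstacle reported on that segment
print(shortest_accessible("Entrance", "Lecture 1"))   # no accessible route remains -> None
```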

    Social Media in the Dental School Environment, Part A: Benefits, Challenges, and Recommendations for Use

    Get PDF
    Social media are powerful tools that affect not only communication but also relationships among people, thus posing an inherent challenge to the traditional standards of who we are as dental educators and what we can expect of each other. This article examines how the world of social media has changed dental education. Its goal is to outline the complex issues that social media use presents for academic dental institutions and to examine these issues from personal, professional, and legal perspectives. After providing an update on social media, the article considers the advantages and risks associated with the use of social media at the interpersonal, professional, and institutional levels. Policies and legal issues of which academic dental institutions need to be aware from a compliance perspective are examined, along with the considerations and resources needed to develop effective social media policies. The challenge facing dental educators is how to capitalize on the benefits that social media offer while minimizing risks and complying with the various forms of legal constraint.