48 research outputs found

    Alternate Computer Input Device for Individuals with Quadriplegia

    This project details the design and development of an alternative computer input system that allows a person with quadriplegia to move a computer's cursor and activate left- and right-click inputs. After researching and analyzing possible solutions, a final design was chosen that best satisfied all user requirements and engineering specifications. This design employs a head-mounted Inertial Measurement Unit (IMU) with 9 degrees of freedom (DoF) to track head movements and map these motions to cursor movements. A sip-puff transducer converts a user's application of negative and positive air-pressure differentials to a vinyl tube into analog voltages, which are then interpreted over time to trigger left- and right-click events. An Arduino Due microcontroller processes these inputs and sends mouse commands to the user's computer over a USB connection. In addition to the sensing hardware, two indicator LEDs display the state of the left and right mouse buttons, and two adjustment potentiometers can be turned to adjust the sensitivity of the mouse tracking and the sip-puff click-sensing window. This system improves upon other alternative computer interfaces by allowing the user to more easily perform complex, non-linear tasks such as file organization and digital painting or drawing. Two accelerometers were initially incorporated into the design, to be strapped to the user's upper arms; upward and downward accelerations caused by raising and lowering each shoulder would have corresponded to activation of the Control and Shift keys. However, due to issues with program timing and computational complexity, this part of the design was abandoned.
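
    The two mappings the abstract describes can be sketched in simplified form. The following is an illustrative model, not the project's actual Arduino firmware; all thresholds, gains, and the neutral voltage are hypothetical placeholders.

```python
# Sketch of the two input mappings: head orientation -> cursor deltas,
# and sip-puff transducer voltage -> click events. Values are illustrative.

def head_to_cursor(pitch_deg, yaw_deg, sensitivity=0.5, dead_zone_deg=2.0):
    """Map head pitch/yaw (degrees from a calibrated neutral pose) to
    per-tick cursor deltas, with a dead zone to ignore small tremors.
    `sensitivity` stands in for the adjustment potentiometer."""
    def axis(angle):
        if abs(angle) < dead_zone_deg:
            return 0
        return int(angle * sensitivity)
    # Yaw moves the cursor horizontally, pitch vertically.
    return axis(yaw_deg), axis(pitch_deg)

def classify_pressure(voltage, neutral=2.5, window=0.8):
    """Interpret the transducer's analog voltage: a puff (positive pressure)
    above the window triggers a left click, a sip (negative pressure) below
    it triggers a right click. `window` plays the role of the
    potentiometer-adjustable click-sensing window."""
    if voltage > neutral + window:
        return "left_click"
    if voltage < neutral - window:
        return "right_click"
    return None
```

    On the real hardware, the resulting deltas and click events would be emitted as USB HID mouse reports by the Arduino Due.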

    Development of a general purpose computer-based platform to provide functional assistance to people with severe motor disabilities

    Research and development of a generic assistive platform that can accommodate a variety of patients with a wide range of motor disabilities is described. Methodologies were established whereby the design could be made sufficiently flexible that it could be programmed to suit these users in terms of their needs and level of motor disability, without redesigning the system for each person. Suitable sensors were chosen to sense the residual motor function of the disabled individual while being non-invasive and safe for use. These sensors included a dual-axis accelerometer (tilt switch), a 6-key touch sensor and a SCATIR switch (blink/wink sensor). The placement of the sensors, for the purpose of this study, was restricted to sensing arm movements (dual-axis accelerometer), finger movements (touch sensors), head and neck movements (accelerometer) and blink/wink and/or eyebrow movements (SCATIR switch). These input devices were used to control a variety of different output functions, as required by the user. After ethics approval was obtained, volunteers with various motor disabilities were invited to test the system and thereafter asked to answer a series of questions regarding its performance and potential usefulness. The input sensors were found to be comfortable and easy to use, performing predictably and with little to no fatigue experienced. The system performed as expected and accepted all of the input sensors attached to it while repeating specific tasks multiple times. It was also established that the system was customisable in terms of providing a specific output for a specific, voluntary input. The system could be improved by further compacting and simplifying the design and operation, and by using wireless sensors where necessary. It was concluded that the system, in general, was capable of satisfying the various users' diverse requirements, thereby achieving the required objectives.
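
    The platform's central idea, a programmable mapping from residual-motion sensor inputs to output functions rather than a per-user hardware redesign, can be sketched as a configuration table. This is an assumed illustration: the sensor, event, and output names below are hypothetical, not taken from the thesis.

```python
# Hypothetical sketch of a programmable input->output mapping: each user
# gets a profile binding their usable sensors to output functions, so
# accommodating a new user means editing data, not redesigning hardware.

class AssistivePlatform:
    def __init__(self):
        self.bindings = {}  # (sensor, event) -> output function name

    def bind(self, sensor, event, output):
        """Customise the platform: route a voluntary input to an output."""
        self.bindings[(sensor, event)] = output

    def handle(self, sensor, event):
        """Dispatch a sensed voluntary movement to its configured output."""
        return self.bindings.get((sensor, event), "ignored")

# Example profile for a user with head control and a reliable blink.
platform = AssistivePlatform()
platform.bind("tilt_switch", "tilt_left", "wheelchair_left")
platform.bind("tilt_switch", "tilt_right", "wheelchair_right")
platform.bind("scatir_switch", "blink", "call_nurse")
```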

    The Machinery of Democracy: Voting System Accessibility

    Traditionally, many voters with disabilities have been unable to cast their ballots without assistance from personal aides or poll workers. These voters do not possess the range of visual, motor, and cognitive facilities typically required to operate common voting systems. For example, some are not able to hold a pen or stylus to mark a ballot that they must see and read. Thus, the voting experience for citizens who cannot perform certain tasks (reading a ballot, holding a pointer or pencil) has not been equal to that of their peers without disabilities. The Help America Vote Act of 2002 took a step forward in addressing this longstanding inequity. According to HAVA, new voting systems must allow voters with disabilities to complete and cast their ballots in a manner that provides the same opportunity for access and participation (including privacy and independence) as for other voters. In other words, as jurisdictions purchase new technologies designed to facilitate voting in a range of areas, they must ensure that new systems provide people with disabilities with an experience that mirrors that of other voters. This report is designed to help state and local jurisdictions improve the accessibility of their voting systems. We have not conducted any direct accessibility testing of existing technologies. Rather, we set forth a set of critical questions for election officials and voters to use when assessing available voting systems, indicate whether vendors have provided any standard or custom features designed to address these accessibility concerns, and offer an evaluation of each architecture's limitations in providing an accessible voting experience to all voters. The report thus provides a foundation of knowledge from which election officials can begin to assess a voting system's accessibility. The conclusions of this report are not presented as a substitute for the evaluation and testing of a specific manufacturer's voting system to determine how accessible that system is in conjunction with a particular jurisdiction's election procedures and system configuration. We urge election officials to include usability and accessibility testing in their product evaluation process.

    Impact of universal design ballot interfaces on voting performance and satisfaction of people with and without vision loss

    Since the Help America Vote Act (HAVA) of 2002 addressed improvements to voting systems and voter access through the use of electronic technologies, electronic voting systems have improved in U.S. elections. However, voters with disabilities have been disappointed and frustrated because they have not been able to vote privately and independently (Runyan, 2007). Voting accessibility for individuals with disabilities has generally been accomplished through specialized designs that add alternative inputs (e.g., headphones with a tactile keypad, sip-and-puff devices) and outputs (e.g., audio output) to existing hardware and/or software architectures. However, while these add-on features may technically be accessible, they are often complex and difficult for poll workers to set up, and they require more time for the voters with disabilities they target than the direct touch that enables voters without disabilities to select any candidate in a particular contest at any time. To address the complexities and inequities of the accessible alternatives, a universal design (UD) approach was used to design two experimental ballot interfaces, EZ Ballot and QUICK Ballot, that seamlessly integrate accessible features (e.g., audio output), with the goal of designing one voting system for all. EZ Ballot presents information linearly (i.e., one candidate's name at a time), and voters respond with Yes or No inputs, which does not require searching for a particular name. QUICK Ballot presents multiple names and allows users to choose a name using direct-touch or gesture-touch interactions (e.g., a drag-and-lift gesture). Despite the shared goal of providing one type of voting system for all voters, each ballot has a unique selection and navigation process designed to facilitate access and participation in voting. Thus, my proposed research plan was to examine the effectiveness of the two UD ballots, primarily with respect to their different ballot structures, in facilitating voting performance and satisfaction for people with a range of visual abilities, including those with blindness or vision loss. The findings from this work show that voters with a range of visual abilities were able to use both ballots independently. However, as expected, voter performance and preferences for each ballot interface differed across the range of visual abilities. While non-sighted voters made fewer errors on the linear ballot (EZ Ballot), partially-sighted and sighted voters completed the random-access ballot (QUICK Ballot) in less time. In addition, a higher percentage of non-sighted participants preferred the linear ballot, and a higher percentage of sighted participants preferred the random-access ballot. The main contributions of this work are: 1) utilizing UD principles to design ballot interfaces that can be differentially usable by voters with a range of abilities; 2) demonstrating the feasibility of two UD ballot interfaces with voters with a range of visual abilities; and 3) providing insight into applying this approach to other applications used by people with a range of visual abilities. The study suggests that the two ballots, both designed according to UD principles but with different weightings of those principles, can be differentially usable by individuals with a range of visual abilities. This approach clearly distinguishes this work from previous efforts, which have focused on developing one UD solution for everyone: UD does not dictate a single solution (a one-size-fits-all approach), but rather supports flexibility in use, which provides a new perspective on human-computer interaction (Stephanidis, 2001).
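
    EZ Ballot's linear, search-free interaction can be sketched as a simple walk through the candidate list. This is an illustrative model of the interaction described above, not the study's actual software.

```python
# Illustrative sketch of EZ Ballot's linear presentation: one candidate's
# name is presented at a time (visually and via audio), the voter answers
# Yes to select or No to advance, so no visual search is required.

def ez_ballot(candidates, answers):
    """Walk the candidate list; `answers` holds the voter's "yes"/"no"
    response to each name in turn. Returns the selected candidate,
    or None if every name was declined."""
    for name, answer in zip(candidates, answers):
        if answer == "yes":
            return name
        # "no" moves linearly to the next name
    return None
```

    QUICK Ballot, by contrast, would present all names at once and resolve a direct touch or drag-and-lift gesture to a selection, trading the guaranteed sequential access above for faster random access.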

    Semi-Autonomous Control of an Exoskeleton using Computer Vision


    Electric Powered Wheelchair Control with a Variable Compliance Joystick: Improving Control of Mobility Devices for Individuals with Multiple Sclerosis

    While technological developments over the past several decades have greatly enhanced the lives of people with mobility impairments, between 10 and 40 percent of clients who desired powered mobility found it very difficult to operate electric powered wheelchairs (EPWs) safely because of sensory impairments, poor motor function, or cognitive deficits [1]. The aim of this research is to improve control of personalized mobility for those with multiple sclerosis (MS) by examining isometric and movement joystick interfaces with customizable algorithms. A variable compliance joystick (VCJ) with tuning software was designed and built to provide a single platform for isometric and movement (compliant) interfaces with enhanced programming capabilities. The VCJ with three different algorithms (basic, personalized, and personalized with fatigue adaptation) was evaluated with four subjects with MS (mean age 58.7±5.0 yrs; years since diagnosis 28.2±16.1 yrs) in a virtual environment. A randomized, two-group, repeated-measures experimental design was used, in which two subjects used the VCJ in isometric mode and two in compliant mode. While it is still too early to draw conclusions about the performance of the joystick interfaces and algorithms, the VCJ was a functional platform for collecting information. Inspection of the data shows that the learning curve for this system may be long. Also, while subjects may have low trial times, low times could be related to more deviation from the target path.
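
    The kind of customizable mapping the personalized algorithms might apply can be sketched as follows. This is an assumed illustration of the general approach, not the actual VCJ algorithms; the dead band, gain, and fatigue-compensation rule are hypothetical.

```python
# Hypothetical sketch of a personalized joystick mapping with fatigue
# adaptation: normalized joystick force (isometric mode) is scaled to a
# velocity command, and the gain is boosted as the user's sustained peak
# output declines below their calibrated baseline.

def velocity_command(force, gain=1.0, dead_band=0.1, fatigue_factor=1.0):
    """Map a normalized joystick force in [-1, 1] to a velocity command,
    ignoring forces inside the dead band and clamping the result."""
    if abs(force) < dead_band:
        return 0.0
    return max(-1.0, min(1.0, force * gain * fatigue_factor))

def update_fatigue(baseline_peak, recent_peak):
    """Raise the gain proportionally as the user's recent peak force
    drops below the baseline measured during calibration."""
    if recent_peak <= 0:
        return 1.0
    return max(1.0, baseline_peak / recent_peak)
```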

    Design of a Multiple-User Intelligent Feeding Robot for Elderly and Disabled

    The number of elderly people around the world is growing rapidly. This has led to an increase in the number of people seeking assistance and adequate service, either at home or in long-term-care institutions, to successfully accomplish their daily activities. Responding to these needs has been a burden on the health care system in terms of labour and associated costs and has motivated research into developing alternative services using new technologies. Various intelligent and non-intelligent machines and robots have been developed to help elderly people and people with upper-limb disabilities or dysfunctions gain independence in eating, which is one of the most frequent and time-consuming everyday tasks. However, in almost all cases, the proposed systems are designed only for the personal use of one individual, and little effort has previously been made to design a multiple-user feeding robot. The feeding requirements of the elderly in environments such as senior homes, where many residents dine together at least three times per day, have not been extensively researched before. The aim of this research was to develop a machine to feed multiple elderly people based on their characteristics and feeding needs, as determined through observations at a nursing home. Observations of the elderly during meal times revealed that almost 40% of the population was totally dependent on nurses or caregivers to be fed. Most of those remaining suffered from hand tremors, joint pain or a lack of hand muscle strength, which made utensil manipulation and coordination very difficult and the eating process both messy and lengthy. In addition, more than 43% of the elderly were very slow in eating because of chewing and swallowing problems, and most of the rest were slow in scooping and directing utensils toward their mouths. Consequently, one nurse could respond to a maximum of only two diners simultaneously. In order to meet the needs of all elderly diners, additional staff members were required. The limited time allocated for each meal and the daily progression of the seniors' disabilities also made mealtime very challenging. In the caregivers' opinion, many of the elderly in such environments could benefit from a machine capable of feeding multiple users simultaneously. Since eating is a slow procedure, the robot's idle time during one user's chewing and swallowing can be allotted to feeding another person sitting at the same table. The observations and studies resulted in the design of a food tray and the selection of an appropriate robot and applicable user interface. The proposed system uses a 6-DOF serial articulated robot in the center of a four-seat table, along with a specifically designed food tray, to feed one to four people. It employs a vision interface for food detection and recognition. Deriving the dynamic equations of the robotic system and simulating the system were used to verify its dynamic behaviour before any prototyping and real-time testing.
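
    The scheduling idea, using the arm's idle time during one diner's chewing to serve another, can be sketched as a round-robin scheduler. This is an assumed illustration of the concept, not the thesis's actual control logic; the tick-based timing model is hypothetical.

```python
# Sketch of interleaved feeding: while one diner chews for `chew_ticks`
# scheduler ticks, the single robot arm serves the other diners, so up to
# four diners can share one feeder.

def feeding_schedule(diners, bites_each, chew_ticks):
    """Round-robin feeding: serve one bite to the first diner who is both
    hungry and done chewing, then advance time. Returns the sequence of
    serve actions as (tick, diner) pairs."""
    remaining = {d: bites_each for d in diners}
    busy_until = {d: 0 for d in diners}   # tick when diner can accept food
    schedule, tick = [], 0
    while any(remaining.values()):
        for d in diners:
            if remaining[d] and busy_until[d] <= tick:
                schedule.append((tick, d))       # arm delivers one bite
                remaining[d] -= 1
                busy_until[d] = tick + chew_ticks
                tick += 1                        # serving takes one tick
                break
        else:
            tick += 1                            # everyone is chewing; idle
    return schedule
```

    With two diners and a long chew time, the arm alternates between them with no idle waiting; with a single diner it must sit idle between bites, which is exactly the capacity the multi-user design recovers.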

    Heterogeneous recognition of bioacoustic signals for human-machine interfaces

    Human-machine interfaces (HMIs) provide a communication pathway between man and machine. Not only do they augment existing pathways, they can substitute for or even bypass these pathways where functional motor loss prevents the use of standard interfaces. This is especially important for individuals who rely on assistive technology in their everyday life. Utilising bioacoustic activity can lead to an assistive HMI concept which is unobtrusive, minimally disruptive and cosmetically appealing to the user. However, due to the complexity of the signals, bioacoustic activity remains relatively underexplored in the HMI field. This thesis investigates extracting and decoding volition from bioacoustic activity with the aim of generating real-time commands. The developed framework is a systemisation of various processing blocks enabling the mapping of continuous signals into M discrete classes. Class-independent extraction efficiently detects and segments the continuous signals, while class-specific extraction exemplifies each pattern set using a novel template-creation process that is stable to permutations of the data set. These templates are utilised by a generalised single-channel discrimination model, whereby each signal is template-aligned prior to classification. The real-time decoding subsystem uses a multichannel heterogeneous ensemble architecture which fuses the outputs from a diverse set of these individual discrimination models. This enhances classification performance by elevating both sensitivity and specificity, with the increased specificity due to a natural rejection capacity based on a non-parametric majority vote. Such a strategy is useful when analysing signals which have diverse characteristics, when false positives are prevalent and have strong consequences, and when there is limited training data available. The framework has been developed with generality in mind and has wide applicability to a broad spectrum of biosignals. The processing system has been demonstrated on real-time decoding of tongue-movement ear-pressure signals using both single- and dual-channel setups. This has included in-depth evaluation of these methods in both offline and online scenarios. During online evaluation, a stimulus-based test methodology was devised, while representative interference was used to contaminate the decoding process in a relevant and realistic fashion. The results of this research provide a strong case for the utility of such techniques in real-world applications of human-machine communication using impulsive bioacoustic signals, and biosignals in general.
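
    The ensemble fusion step described above can be sketched minimally. This is an illustrative reduction, not the thesis's implementation: the individual model outputs are stand-ins for real per-channel discrimination decisions, and the quorum rule is one simple form of non-parametric majority vote.

```python
# Minimal sketch of heterogeneous ensemble fusion with a rejection class:
# per-channel discrimination models each emit a class label for a segment,
# and a majority vote with a quorum rejects segments with no clear winner,
# which is what raises specificity against false positives.

from collections import Counter

def ensemble_decode(votes, quorum):
    """Fuse individual model outputs (class labels) into one decision.
    If no class reaches `quorum` votes, the segment is rejected rather
    than forced into a command class."""
    if not votes:
        return "reject"
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= quorum else "reject"
```

    Raising the quorum trades sensitivity for specificity, which matches the setting the abstract identifies: false positives are prevalent and carry strong consequences.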