
    Tongue Control of Upper-Limb Exoskeletons For Individuals With Tetraplegia


    Techniques of EMG signal analysis: detection, processing, classification and applications

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human–computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis and so provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human–computer interaction. A comparative study is also given to show the performance of various EMG signal analysis methods. This paper gives researchers a good understanding of the EMG signal and its analysis procedures; this knowledge will help them develop more powerful, flexible, and efficient applications.
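    The survey covers many detection and classification pipelines; as a point of reference (a minimal sketch, not code from the paper), the classic time-domain features commonly extracted from an EMG analysis window before classification can be computed as follows:

```python
import numpy as np

def emg_time_domain_features(window: np.ndarray, zc_threshold: float = 0.01) -> dict:
    """Compute four classic time-domain EMG features for one analysis window."""
    mav = np.mean(np.abs(window))          # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))    # root mean square
    wl = np.sum(np.abs(np.diff(window)))   # waveform length
    # zero crossings, counting only sign changes larger than a noise threshold
    signs = window[:-1] * window[1:]
    jumps = np.abs(window[:-1] - window[1:])
    zc = int(np.sum((signs < 0) & (jumps > zc_threshold)))
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}

# Example: a 200 ms window of synthetic EMG sampled at 1 kHz (assumed values)
rng = np.random.default_rng(0)
window = rng.normal(scale=0.1, size=200)
print(emg_time_domain_features(window))
```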

    Heterogeneous recognition of bioacoustic signals for human-machine interfaces

    Human-machine interfaces (HMI) provide a communication pathway between man and machine. Not only do they augment existing pathways, they can substitute or even bypass these pathways where functional motor loss prevents the use of standard interfaces. This is especially important for individuals who rely on assistive technology in their everyday lives. Utilising bioacoustic activity can lead to an assistive HMI concept that is unobtrusive, minimally disruptive and cosmetically appealing to the user. However, due to the complexity of these signals, bioacoustic activity remains relatively underexplored in the HMI field. This thesis investigates extracting and decoding volition from bioacoustic activity with the aim of generating real-time commands. The developed framework is a systemisation of various processing blocks enabling the mapping of continuous signals into M discrete classes. Class-independent extraction efficiently detects and segments the continuous signals, while class-specific extraction exemplifies each pattern set using a novel template creation process that is stable to permutations of the data set. These templates are utilised by a generalised single-channel discrimination model, whereby each signal is template-aligned prior to classification. The real-time decoding subsystem uses a multichannel heterogeneous ensemble architecture which fuses the output from a diverse set of these individual discrimination models. This enhances classification performance by elevating both sensitivity and specificity, with the increased specificity due to a natural rejection capacity based on a non-parametric majority vote. Such a strategy is useful when the signals have diverse characteristics, when false positives are prevalent and carry strong consequences, and when limited training data is available. The framework has been developed with generality in mind and has wide applicability to a broad spectrum of biosignals. The processing system has been demonstrated on real-time decoding of tongue-movement ear pressure signals using both single- and dual-channel setups. This has included in-depth evaluation of these methods in both offline and online scenarios. During online evaluation, a stimulus-based test methodology was devised, while representative interference was used to contaminate the decoding process in a relevant and realistic fashion. The results of this research provide a strong case for the utility of such techniques in real-world applications of human-machine communication using impulsive bioacoustic signals, and biosignals in general.
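    As an illustration of the fusion stage described above (a sketch, not the thesis implementation), a non-parametric majority vote over per-channel classifier outputs gains a natural rejection capacity when no class wins a clear majority:

```python
from collections import Counter

def majority_vote_with_rejection(votes, min_agreement=0.5, reject_label=None):
    """Fuse per-model class decisions; reject unless one class clearly dominates.

    votes: list of class labels, one per ensemble member (None = model abstained).
    min_agreement: fraction of cast votes the winning class must exceed.
    """
    cast = [v for v in votes if v is not None]
    if not cast:
        return reject_label
    label, count = Counter(cast).most_common(1)[0]
    # Rejection guards against false positives: no clear winner means no command.
    if count / len(cast) <= min_agreement:
        return reject_label
    return label

# Three single-channel discriminators vote on one signal segment (labels assumed)
print(majority_vote_with_rejection(["left", "left", "right"]))  # -> "left"
print(majority_vote_with_rejection(["left", "right", None]))    # -> None (rejected)
```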

    Application-driven data processing in wireless sensor networks

    Wireless sensor networks (WSNs) are composed of spatially distributed, low-cost, low-power, resource-constrained devices that use sensors and actuators to cooperatively monitor and act on the environment. These systems are used in a wide range of applications. The design and implementation of an effective WSN requires dealing with several challenges involving multiple disciplines, such as wireless communications and networking, software engineering, embedded systems and signal processing. Moreover, the technical solutions found to these issues are closely interconnected and determine the capability of the system to successfully fulfill the requirements posed by each application domain. The large and heterogeneous amount of data collected in a WSN needs to be efficiently processed in order to improve the end-user's comprehension and control of the observed phenomena. The thesis focuses on a) the development of centralized and distributed data processing methods optimized for the requirements and characteristics of the considered application domains, and b) the design and implementation of suitable system architectures and protocols with respect to critical application-specific parameters. The thesis comprises a summary and nine publications, equally divided over three application domains: wireless automation, structural health monitoring (SHM) and indoor situation awareness (InSitA). In the first domain, a wireless joystick control system for human-adaptive mechatronics is developed, and the effect of packet losses on the performance of a wireless control system is analyzed and validated with an unstable process. In SHM, a remotely reconfigurable, time-synchronized wireless system enables a precise estimation of the modal properties of the monitored structure; furthermore, structural damages are detected and localized through a distributed data processing method based on the Goertzel algorithm. In the context of InSitA, the short-time, low-quality acoustic signals collected by the nodes composing the network are processed in order to estimate the number of people in the monitored indoor environment; in a second phase, text- and language-independent speaker identification is performed. Finally, device-free localization and tracking of people's movements inside the monitored indoor environment is achieved by means of distributed processing of received signal strength indicator (RSSI) signals. The results presented in the thesis demonstrate the adaptability of WSNs to different application domains and the importance of an optimal co-design of system architecture and data processing methods.
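    One concrete element of the SHM work is the Goertzel algorithm, which evaluates signal energy at a single frequency bin far more cheaply than a full FFT, a good fit for damage detection on resource-constrained nodes. A minimal, self-contained sketch of the algorithm itself (the thesis's distributed implementation is not reproduced here):

```python
import math

def goertzel_power(samples, target_freq, sample_rate):
    """Estimate signal power at one frequency bin via the Goertzel recursion.

    Cheaper than an FFT when only a few modal frequencies are of interest.
    """
    n = len(samples)
    k = round(n * target_freq / sample_rate)  # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A node checks the energy near an expected 12 Hz structural mode (assumed value)
fs = 100.0
sig = [math.sin(2 * math.pi * 12.0 * t / fs) for t in range(200)]
print(goertzel_power(sig, 12.0, fs))
```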

    Control of a Wheelchair-Mounted 6DOF Assistive Robot With Chin and Finger Joysticks

    Throughout the last decade, many assistive robots for people with disabilities have been developed; however, researchers have not fully utilized these robotic technologies to create entirely independent living conditions for people with disabilities, particularly in relation to activities of daily living (ADLs). An assistive system can help satisfy the demands of regular ADLs for people with disabilities. With an increasing shortage of caregivers and a growing population of individuals with impairments and of the elderly, assistive robots can help meet future healthcare demands. One of the critical aspects of designing these assistive devices is to improve functional independence while providing an excellent human–machine interface. People with limited upper-limb function due to stroke, spinal cord injury, cerebral palsy, amyotrophic lateral sclerosis, and other conditions find the controls of assistive devices such as power wheelchairs difficult to use. Thus, the objective of this research was to design a multimodal control method for robotic self-assistance that could assist individuals with disabilities in performing self-care tasks on a daily basis. In this research, a control framework for two interchangeable operating modes, a finger joystick and a chin joystick, is developed, in which the joysticks seamlessly control a wheelchair and a wheelchair-mounted robotic arm. Custom circuitry was developed to complete the control architecture. A user study was conducted to test the robotic system: ten healthy individuals performed three tasks with each joystick (chin and finger), for a total of six tasks with 10 repetitions each. The control method was tested rigorously, maneuvering the robot at different velocities and under varying payload (1–3.5 lb) conditions. The absolute position accuracy was experimentally found to be approximately 5 mm, and the round-trip delay observed between commands while controlling the xArm was 4 ms. The tests showed that the proposed control system allowed individuals to perform ADLs such as picking up and placing items with a completion time of less than 1 min per task and a 100% success rate.
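    The paper's control architecture and circuitry are custom; as a generic sketch of one building block such a system needs (all names and parameter values below are assumptions, not taken from the paper), the mapping from a normalized joystick deflection to velocity commands with a deadzone might look like this:

```python
from dataclasses import dataclass

@dataclass
class JoystickSample:
    x: float  # lateral deflection, normalized to [-1, 1]
    y: float  # forward/backward deflection, normalized to [-1, 1]

def to_velocity(sample: JoystickSample, max_speed: float = 0.2, deadzone: float = 0.1):
    """Map a joystick deflection to (linear, angular) velocity with a deadzone.

    The deadzone suppresses drift around neutral, which matters for
    chin-operated joysticks where small unintended deflections are common.
    """
    def shape(v: float) -> float:
        if abs(v) < deadzone:
            return 0.0
        # rescale so the output ramps smoothly from 0 at the deadzone edge
        return (abs(v) - deadzone) / (1.0 - deadzone) * (1.0 if v > 0 else -1.0)

    linear = shape(sample.y) * max_speed        # m/s
    angular = shape(sample.x) * max_speed * 5   # rad/s
    return linear, angular

print(to_velocity(JoystickSample(x=0.05, y=0.8)))  # small x falls in the deadzone
```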

    Bare nothingness: Situated subjects in embodied artists' systems

    This chapter examines the current state of digital artworks, arguing that they have not yet made a groundbreaking impact on the cultural landscape of the 21st century, and suggesting that one reason for this limited prominence is the obsolete model of agency deployed by many digital artists. As an alternative to what is framed as out-of-date forms of interactivity, the chapter highlights evolving research into interactive systems, artists' tools, applications, and techniques that provides readers with an insightful and up-to-date examination of emerging multimedia technology trends. In particular, the chapter looks at situated computing and embodied systems, in which context-aware models of human subjects can be combined with sensor technology to expand the agencies at play in interactive works. The chapter connects these technologies to big data, crowdsourcing and other techniques from artificial intelligence that expand our understanding of interaction and participation.

    Towards Natural Human Control and Navigation of Autonomous Wheelchairs

    Approximately 2.2 million people in the United States depend on a wheelchair to assist with their mobility. Often, the wheelchair user can maneuver using a conventional joystick. However, conditions such as stroke, arthritis, limb injury, Parkinson’s disease, cerebral palsy and multiple sclerosis, as well as visual impairment, can prevent users from operating traditional joystick controls. The resulting mobility limitations force these patients to rely on caretakers to perform everyday tasks, minimizing the wheelchair user’s independence. Modern speech recognition systems can be used to enhance user experiences with electronic devices. By expanding the motorized wheelchair control interface to include the detection of spoken user commands, independence is given back to the mobility-impaired. A speech recognition interface was developed for a smart wheelchair. By integrating navigation commands with a map of the wheelchair’s surroundings, the wheelchair interface is more natural and intuitive to use. Complex speech patterns are interpreted, allowing users to command the smart wheelchair to navigate to specified locations within the map. Pocketsphinx, a speech recognition toolkit, is used to interpret the vocal commands. A language model and dictionary were generated from a set of possible commands and locations supplied to the speech recognition interface. The commands fall into three categories: speed, directional, and destination commands. Speed commands modify the relative speed of the wheelchair. Directional commands modify its relative direction. Destination commands require a known location on the map to navigate to. The completion of the speech input processor and the connection between wheelchair components via the Robot Operating System make map navigation possible.
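    Pocketsphinx handles the acoustic decoding; downstream, the recognized text still has to be routed into the three command categories described above. A minimal sketch with hypothetical vocabularies (the thesis's actual language model and map locations are not reproduced):

```python
# Hypothetical vocabularies; illustrative only.
SPEED_WORDS = {"faster", "slower", "stop"}
DIRECTION_WORDS = {"forward", "backward", "left", "right"}
KNOWN_LOCATIONS = {"kitchen", "bedroom", "office"}

def categorize_command(utterance: str):
    """Route a recognized phrase into speed, directional, or destination commands."""
    for w in utterance.lower().split():
        if w in SPEED_WORDS:
            return ("speed", w)
        if w in DIRECTION_WORDS:
            return ("directional", w)
        if w in KNOWN_LOCATIONS:
            return ("destination", w)  # handed to the map-based planner
    return ("unknown", utterance)

print(categorize_command("go to the kitchen"))  # ('destination', 'kitchen')
print(categorize_command("turn left"))          # ('directional', 'left')
```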

    Affective speech modulates a cortico-limbic network in real time

    Affect signaling in human communication involves cortico-limbic brain systems for decoding affect information, such as that expressed in vocal intonations during affective speech. Both the affecto-acoustic speech profile of speakers and the cortico-limbic affect recognition network of listeners were previously identified using non-social and non-adaptive research protocols. However, these protocols neglected the inherently socio-dyadic nature of affective communication, thus underestimating the real-time adaptive dynamics of affective speech that maximize listeners’ neural effects and affect recognition. To approximate this socio-adaptive and neural context of affective communication, we used an innovative real-time neuroimaging setup that linked speakers’ live affective speech production with listeners’ limbic brain signals, which served as a proxy for affect recognition. We show that affective speech communication is acoustically more distinctive, adaptive, and individualized in a live adaptive setting, and capitalizes more efficiently on neural affect decoding mechanisms in limbic and associated networks, than non-adaptive affective speech communication. Only live affective speech produced in adaptation to listeners’ limbic signals was closely linked to their emotion recognition, as quantified by correlations between speakers’ acoustics and listeners’ emotional ratings. Furthermore, while live and adaptive aggressive speaking directly modulated limbic activity in listeners, joyful speaking modulated limbic activity in connection with the ventral striatum, which is involved in, among other functions, the processing of pleasure. Thus, evolved neural mechanisms for affect decoding seem largely optimized for interactive and individually adaptive communicative contexts.