
    Assessment of a Wearable Force- and Electromyography Device and Comparison of the Related Signals for Myocontrol

    In the frame of assistive robotics, multi-finger prosthetic hands/wrists have recently appeared, offering an increasing level of dexterity; however, in practice their control is limited to a few hand grips and is still unreliable, with the effect that pattern recognition has not yet appeared in the clinical environment. According to the scientific community, one of the keys to improving the situation is multi-modal sensing, i.e., using diverse sensor modalities to interpret the subject’s intent and improve the reliability and safety of the control system in daily-life activities. In this work, we first describe and test a novel wireless, wearable force- and electromyography device; through an experiment conducted on ten intact subjects, we then compare the obtained signals both qualitatively and quantitatively, highlighting their advantages and disadvantages. Our results indicate that force-myography yields signals that are more stable over time, whenever a pattern is held, than those obtained by electromyography. We speculate that fusion of the two modalities might improve the reliability of myocontrol in the near future.
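    The stability comparison described in this abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the synthetic signals, the noise levels, and the coefficient-of-variation metric are not taken from the paper, which may use a different quantitative measure.

```python
import numpy as np

def stability(signal_window):
    """Coefficient of variation of a signal while a hand pattern is held.
    Lower = more stable. Illustrative metric only, not the paper's method."""
    mean = np.mean(signal_window)
    return np.std(signal_window) / abs(mean) if mean != 0 else np.inf

rng = np.random.default_rng(0)
# Hypothetical signals for one held grip: force-myography (FMG) drifts
# little around its plateau, while EMG fluctuates considerably more.
fmg = 1.0 + 0.02 * rng.standard_normal(500)
emg = 1.0 + 0.30 * rng.standard_normal(500)

print(f"FMG stability (CV): {stability(fmg):.3f}")
print(f"EMG stability (CV): {stability(emg):.3f}")
```

    Under these assumed noise levels, the FMG signal shows a much lower coefficient of variation, mirroring the qualitative finding that FMG is more stable than EMG while a pattern is held.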

    Signal Processing and Machine Learning Techniques Towards Various Real-World Applications

    Machine learning (ML) has played an important role in several modern technological innovations and has become an important tool for researchers in many fields. Besides engineering, ML techniques have started to spread across other areas of study, such as health care, medicine, diagnostics, social science, finance, and economics. These techniques require data to train the algorithms, model a complex system, and make predictions based on that model. Thanks to the development of sophisticated sensors, it has become easier to collect the large volumes of data needed to form hypotheses with ML. The promising results obtained using ML have opened up new research opportunities across various disciplines, and this dissertation is a manifestation of that. Here, several unique studies are presented, from which valuable inferences are drawn about real-world complex systems. Each study has its own motivation and relevance to the real world, and each explores an ensemble of signal processing (SP) and ML techniques. This dissertation provides a detailed, systematic approach and discusses the results achieved in each study; the inferences drawn play a vital role in areas of science and technology and are worth further investigation. It also provides a set of useful SP and ML tools for researchers in various fields of interest.
    Doctoral Dissertation, Electrical Engineering, 201

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey

    The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and be able to interact with and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. Human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system, operating at the speed of thought. Through this neural pathway, BCI technologies can enable various innovative applications for the Metaverse, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communications. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss the current challenges of the Metaverse that can potentially be addressed by BCI, such as motion sickness when users experience virtual environments or the negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called Human Digital Twin, in which digital twins can create an intelligent and interactable avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.

    Research on Application of Cognitive-Driven Human-Computer Interaction

    Human-computer interaction is an important research topic in intelligent-manufacturing human-factors engineering. Natural human-computer interaction conforms to users' cognitive habits and can efficiently handle the exchange of imprecise information, thus improving user experience and reducing cognitive load. Through analysis of the information interaction process, users' interaction-experience cognition, and human-computer interaction principles, a cognitive-driven information transmission model for human-computer interaction is established. We survey the main interaction modes in current human-computer interaction systems and discuss their application status, technical requirements, and open problems. This paper then discusses methods for analyzing and evaluating interaction modes at three levels, subjective evaluation, physiological measurement, and mathematical evaluation, so as to promote the understanding of imprecise information, achieve interaction self-adaptation, and guide the design and optimization of human-computer interaction systems. Based on the development status of human-computer interaction in intelligent environments, research hotspots, problems, and development trends of human-computer interaction are put forward.

    Intent sensing for assistive technology

    This thesis aims to develop systems for intent sensing – the measurement and prediction of what it is that a user wants to happen. Being able to sense intent could be hugely beneficial for control of assistive devices, and could make a great impact on the wider medical device industry. Initially, a literature review is performed to determine the current state-of-the-art for intent sensing, and identifies that a holistic intent sensing system that properly captures all aspects of intent has not yet been developed. This is therefore followed by the development of such a novel intent sensing system. To achieve this, algorithms are developed to combine multiple sensors together into a modular Probabilistic Sensor Network. The performance of such a network is modelled mathematically, with these models tested and verified on real data. The intent sensing system then developed from these models is tested for sensing modalities such as Electromyography (EMG), motion data from Inertial Measurement Units (IMUs), and audio. The benefits of constructing a modular system in this way are demonstrated, showcasing improvement in accuracy with a fixed amount of training data, and in robustness to sensor unavailability – a common problem in prosthetics, where sensor lift-off from the skin is a frequent issue. Initially, the algorithm is developed to classify intent after activity completion, and this is then developed to allow it to run in real-time. Different classification methods are proposed and tested including K-nearest-neighbours (KNN), before deep learning is selected as an effective classifier for this task. In order to apply deep learning without requiring a prohibitively large training data set, a time-segmentation method is developed to limit the complexity of the model and make better use of the available data. 
Finally, the techniques developed in the thesis are combined into a single continuous, multi-modal intent sensing system that is modular in both sensor composition and in time. At every stage of this process, the algorithms are tested against real data, initially from non-disabled volunteer participants and, in the later chapters, on data from patients with Parkinson’s disease (a group who may benefit greatly from an intent sensing system). The final system is found to achieve an accuracy of 97.4% almost immediately after activity inception, increasing to 99.9918% over the course of the activity. This high accuracy is seen in both the patient group and the control group, demonstrating that intent sensing is viable with currently available technology and should be developed into future control systems for assistive devices, to improve quality of life for disabled and non-disabled users alike.
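    The modular sensor fusion described in this abstract can be sketched in a few lines. This is a hedged illustration only: the intent classes, the likelihood values, and the naive-Bayes-style product rule below are assumptions for demonstration, not the thesis's actual Probabilistic Sensor Network algorithm.

```python
import numpy as np

INTENTS = ["grasp", "release", "rest"]  # hypothetical intent classes

def fuse(sensor_likelihoods, prior=None):
    """Combine per-sensor likelihoods P(observation | intent) by an
    independence (naive-Bayes-style) product rule, then normalize.
    Sensors reported as None (e.g. EMG electrode lift-off) are skipped,
    illustrating the robustness-to-unavailability property above."""
    n = len(INTENTS)
    post = np.array(prior if prior is not None else [1.0 / n] * n)
    for lik in sensor_likelihoods:
        if lik is None:      # sensor unavailable: contributes nothing
            continue
        post = post * np.asarray(lik)
    return post / post.sum()

emg = [0.7, 0.2, 0.1]   # hypothetical P(EMG reading | intent)
imu = [0.5, 0.3, 0.2]   # hypothetical P(IMU reading | intent)
print(fuse([emg, imu]))   # both sensors available
print(fuse([None, imu]))  # EMG dropped out; system degrades gracefully
```

    The key design point being illustrated is modularity: because each sensor enters only as a multiplicative likelihood term, a sensor can be added or dropped without retraining the rest of the network.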