224 research outputs found

    Biosleeve Human-Machine Interface

    Systems and methods for sensing human muscle action and gestures in order to control machines or robotic devices are disclosed. One exemplary system employs a tight-fitting sleeve worn on the user's arm that includes a plurality of electromyography (EMG) sensors and at least one inertial measurement unit (IMU). Power, signal-processing, and communications electronics may be built into the sleeve, and control data may be transmitted wirelessly to the controlled machine or robotic device.
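
    The patent abstract above does not specify a processing algorithm, so the following is only a minimal Python sketch of the kind of pipeline such a sleeve implies: per-channel features are computed from a window of EMG samples, a gesture label is obtained from some classifier, and the label is packaged with the IMU orientation into a control message. All function names, the feature choice, and the message format are assumptions for illustration.

        import numpy as np

        def emg_features(window: np.ndarray) -> np.ndarray:
            """Simple per-channel features from a window of raw EMG (samples x channels)."""
            mav = np.mean(np.abs(window), axis=0)        # mean absolute value
            rms = np.sqrt(np.mean(window ** 2, axis=0))  # root mean square
            return np.concatenate([mav, rms])

        def control_message(emg_window, imu_quat, classifier) -> dict:
            """Hypothetical: pair a gesture label (from EMG) with arm orientation (from the IMU)."""
            gesture = classifier.predict([emg_features(emg_window)])[0]
            return {"gesture": int(gesture), "orientation": list(map(float, imu_quat))}

        # A message like this could then be sent over whatever wireless link the sleeve provides.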

    Integrating Neuromuscular and Touchscreen Input for Machine Control

    Current touchscreen interfaces are unable to distinguish between individual fingers or to determine poses associated with the user's hand. This limits the use of touchscreens in recognizing user input. As discussed herein, a statistical model can be trained using training data that includes sensor readings known to be associated with various hand poses and gestures. The trained statistical model can be configured to determine arm, hand, and/or finger configurations and forces (e.g., handstates) based on sensor readings, e.g., obtained via a wearable device such as a wristband with wearable sensors. The statistical model can identify the input from the handstate detected by the wearable device. For example, the handstates can include identification of a portion of the hand that is interacting with the touchscreen, the user's finger position relative to the touchscreen, an identification of which finger or fingers of the user's hand are interacting with the touchscreen, etc. The handstates can be used to control any aspect(s) of the touchscreen, or of a connected device, indirectly through the touchscreen.
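
    The abstract above leaves the choice of statistical model open. As a minimal sketch of the training step it describes, the code below fits an off-the-shelf classifier that maps wristband sensor features to a "which finger is touching" label; the feature dimensionality, the label set, and the synthetic data are placeholders, not details from the source.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Placeholder training data: each row stands in for a feature vector derived from
        # neuromuscular wristband readings; labels say which finger touches the screen.
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(500, 16))      # 16 assumed sensor-derived features
        y_train = rng.integers(0, 5, size=500)    # 0..4 = thumb..little finger (assumed)

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)

        def handstate_from_sensors(features: np.ndarray) -> int:
            """Return the index of the finger most likely interacting with the touchscreen."""
            return int(model.predict(features.reshape(1, -1))[0])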

    Design and recognition of microgestures for always-available input

    Gestural user interfaces for computing devices most commonly require the user to have at least one hand free to interact with the device, for example, moving a mouse, touching a screen, or performing mid-air gestures. Consequently, users find it difficult to operate computing devices while holding or manipulating everyday objects. This keeps users from interacting with the digital world during a significant portion of their everyday activities, such as using tools in the kitchen or workshop, carrying items, or working out with sports equipment. This thesis pushes the boundaries towards the bigger goal of enabling always-available input. Microgestures have been recognized for their potential to facilitate direct and subtle interactions. However, it remains an open question how to interact with computing devices using gestures when both of the user's hands are occupied holding everyday objects. We take a holistic approach and focus on three core contributions: i) To understand end-users' preferences, we present an empirical analysis of users' choice of microgestures when holding objects of diverse geometries. Instead of designing a gesture set for a specific object or geometry, and in order to identify gestures that generalize, this thesis leverages the taxonomy of grasp types established in prior research. ii) We tackle the critical problem of avoiding false activation by introducing a novel gestural input concept that leverages a single-finger movement which stands out from everyday finger motions during holding and manipulating objects. Through a data-driven approach, we also systematically validate the concept's robustness across different everyday actions. iii) While full sensor coverage on the user's hand would allow detailed hand-object interaction, minimal instrumentation is desirable for real-world use. This thesis addresses the problem of identifying sparse sensor layouts. We present the first rapid computational method, along with a GUI-based design tool that enables iterative design based on the designer's high-level requirements. Furthermore, we demonstrate that minimal form-factor devices, like smart rings, can be used to effectively detect microgestures in hands-free and busy scenarios. Overall, the presented findings will serve as both conceptual and technical foundations for enabling interaction with computing devices wherever and whenever users need them.
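
    The abstract does not disclose the thesis's rapid computational method for sparse sensor layouts, so the sketch below only illustrates the general idea with a generic greedy baseline: channels are added one at a time, keeping whichever most improves cross-validated gesture classification accuracy. The classifier, the budget, and the data layout are assumptions, not the thesis's method.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def greedy_sensor_layout(X: np.ndarray, y: np.ndarray, budget: int) -> list:
            """Generic greedy forward selection of sensor channels (columns of X)."""
            chosen, remaining = [], list(range(X.shape[1]))
            while remaining and len(chosen) < budget:
                # Score each candidate channel by the accuracy of the layout that includes it.
                scores = {c: cross_val_score(SVC(), X[:, chosen + [c]], y, cv=3).mean()
                          for c in remaining}
                best = max(scores, key=scores.get)
                chosen.append(best)
                remaining.remove(best)
            return chosen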

    On the role of gestures in human-robot interaction

    This thesis investigates the gestural interaction problem and in particular the use of gestures for human-robot interaction. The lack of a clear definition of the problem statement and of a common terminology has resulted in a fragmented field of research where building upon prior work is rare. The scope of the research presented in this thesis therefore consists in laying the foundations to help the community build a more homogeneous research field. The main contributions of this thesis are twofold: (i) a taxonomy to define gestures; and (ii) an engineering definition of the gestural interaction problem. These contributions resulted in a schema that represents the existing literature in a more organic way, helping future researchers to identify existing technologies and applications, also thanks to an extensive literature review. Furthermore, the defined problem has been studied in two of its specializations: (i) direct control and (ii) teaching of a robotic manipulator, which led to the development of technological solutions for gesture sensing, detection, and classification that can also be applied to other contexts.

    Formulation of a new gradient descent MARG orientation algorithm: case study on robot teleoperation

    We introduce a novel magnetic, angular rate, and gravity (MARG) sensor fusion algorithm for inertial measurement. The new algorithm improves on the popular gradient descent ('Madgwick') algorithm, increasing accuracy and robustness while preserving computational efficiency. Analytic and experimental results demonstrate faster convergence for multiple variations of the algorithm through changing magnetic inclination. Furthermore, decoupling of magnetic field variance from roll and pitch estimation is proven for enhanced robustness. The algorithm is validated in a human-machine interface (HMI) case study. The case study involves hardware implementation for wearable robot teleoperation both in Virtual Reality (VR) and in real time on a 14 degree-of-freedom (DoF) humanoid robot. The experiment fuses inertial (movement) and mechanomyography (MMG) muscle sensing to control robot arm movement and grasp simultaneously, demonstrating algorithm efficacy and capacity to interface with other physiological sensors. To our knowledge, this is the first such formulation and the first fusion of inertial measurement and MMG in HMI. We believe the new algorithm holds the potential to impact a very wide range of inertial measurement applications where full orientation estimation is necessary. Physiological sensor synthesis and the hardware interface further provide a foundation for robotic teleoperation systems with the robustness necessary for use in the field.
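
    For readers unfamiliar with the baseline being improved upon, the following is a minimal sketch of the standard gradient descent ('Madgwick') orientation update using only gyroscope and accelerometer terms; the magnetometer correction and the paper's own improvements are omitted, and the gain beta is an arbitrary placeholder.

        import numpy as np

        def quat_mult(p, q):
            """Hamilton product of two quaternions given as (w, x, y, z)."""
            w1, x1, y1, z1 = p
            w2, x2, y2, z2 = q
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def madgwick_imu_step(q, gyro, accel, dt, beta=0.1):
            """One gradient-descent orientation update from gyro (rad/s) and accelerometer data."""
            q = np.asarray(q, dtype=float)
            q0, q1, q2, q3 = q
            a = accel / np.linalg.norm(accel)
            # Objective function: mismatch between gravity predicted by q and measured gravity.
            f = np.array([
                2.0*(q1*q3 - q0*q2) - a[0],
                2.0*(q0*q1 + q2*q3) - a[1],
                2.0*(0.5 - q1*q1 - q2*q2) - a[2],
            ])
            J = np.array([
                [-2*q2,  2*q3, -2*q0, 2*q1],
                [ 2*q1,  2*q0,  2*q3, 2*q2],
                [  0.0, -4*q1, -4*q2,  0.0],
            ])
            grad = J.T @ f
            grad /= np.linalg.norm(grad)
            # Gyro integration, corrected along the negative gradient of the objective.
            q_dot = 0.5 * quat_mult(q, np.array([0.0, gyro[0], gyro[1], gyro[2]])) - beta * grad
            q = q + q_dot * dt
            return q / np.linalg.norm(q)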

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm that are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis also established that arm pose changes the measured signal. This thesis introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and as the desire for a simple, universal interface increases. Such systems have the potential to significantly impact the quality of life of prosthetic users and others.
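
    The abstract names Linear Discriminant Analysis and Support Vector Machines as the classifiers but not the exact features, so the sketch below stands in generic time-domain features computed over six-channel MMG windows and synthetic data; it shows the shape of the classification step rather than the thesis's actual pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def mmg_window_features(window: np.ndarray) -> np.ndarray:
            """Generic time-domain features per MMG channel (window: samples x 6 channels)."""
            return np.concatenate([
                np.mean(np.abs(window), axis=0),   # mean absolute value
                np.std(window, axis=0),            # signal variability
                np.ptp(window, axis=0),            # peak-to-peak amplitude
            ])

        # Placeholder data: 12 gesture classes, six-channel MMG windows of 200 samples each.
        rng = np.random.default_rng(1)
        windows = rng.normal(size=(600, 200, 6))
        labels = rng.integers(0, 12, size=600)
        X = np.array([mmg_window_features(w) for w in windows])

        for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
            print(name, cross_val_score(clf, X, labels, cv=5).mean())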

    DeePoint: Pointing Recognition and Direction Estimation From A Fixed View

    In this paper, we realize automatic visual recognition and direction estimation of pointing. We introduce the first neural pointing understanding method based on two key contributions. The first is the introduction of a first-of-its-kind large-scale dataset for pointing recognition and direction estimation, which we refer to as the DP Dataset. The DP Dataset consists of more than 2 million frames of over 33 people pointing in various styles, annotated for each frame with pointing timings and 3D directions. The second is DeePoint, a novel deep network model for joint recognition and 3D direction estimation of pointing. DeePoint is a Transformer-based network which fully leverages the spatio-temporal coordination of the body parts, not just the hands. Through extensive experiments, we demonstrate the accuracy and efficiency of DeePoint. We believe the DP Dataset and DeePoint will serve as a sound foundation for visual human intention understanding.
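
    The abstract gives no architectural details beyond "Transformer-based", so the sketch below is not DeePoint itself: it is a toy PyTorch model that attends over per-frame body-keypoint tokens (temporal modelling omitted) and jointly outputs a pointing probability and a unit 3D direction. All dimensions and the pooling choice are arbitrary assumptions.

        import torch
        import torch.nn as nn

        class PointingNet(nn.Module):
            """Toy Transformer over body-keypoint tokens: is the person pointing, and where?"""
            def __init__(self, num_joints: int = 17, d_model: int = 64):
                super().__init__()
                self.embed = nn.Linear(3, d_model)                   # (x, y, confidence) per joint
                layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.head_cls = nn.Linear(d_model, 1)                # pointing / not pointing
                self.head_dir = nn.Linear(d_model, 3)                # 3D pointing direction

            def forward(self, joints: torch.Tensor):
                # joints: (batch, num_joints, 3)
                tokens = self.encoder(self.embed(joints))
                pooled = tokens.mean(dim=1)                          # simple pooling over joints
                prob = torch.sigmoid(self.head_cls(pooled)).squeeze(-1)
                direction = nn.functional.normalize(self.head_dir(pooled), dim=-1)
                return prob, direction

        # Usage with dummy keypoints for a batch of 2 frames:
        model = PointingNet()
        prob, direction = model(torch.randn(2, 17, 3))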

    Wearable Sensors Applied in Movement Analysis

    Recent advances in electronics have led to sensors whose sizes and weights are such that they can be placed on living systems without impairing their natural motion and habits. They may be worn on the body as accessories or as part of the clothing and enable personalized mobile information processing. Wearable sensors open the way for nonintrusive and continuous monitoring of body orientation, movements, and various physiological parameters during motor activities in real-life settings. Thus, they may become crucial tools not only for researchers, but also for clinicians, as they have the potential to improve diagnosis, better monitor disease development, and thereby individualize treatment. Wearable sensors should go unnoticed by the people wearing them and be intuitive to install. They should offer wireless connectivity and low power consumption. Moreover, the electronics system should be self-calibrating and deliver correct information that is easy to interpret. Cross-platform interfaces that provide secure data storage and easy data analysis and visualization are needed. This book contains a selection of research papers presenting new results that address the above challenges.