546 research outputs found
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry.
This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an appertaining training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce training burden.
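The abstract does not specify the architecture of Paper I's CNN; the following is a minimal PyTorch sketch of a multi-label movement decoder for windowed HD-sEMG, where the channel count, window length, and layer sizes are illustrative assumptions, not the dissertation's model.

```python
# Minimal sketch of a multi-label CNN decoder for windowed HD-sEMG,
# in the spirit of Paper I. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    def __init__(self, n_channels=64, n_movements=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time
        )
        self.head = nn.Linear(64, n_movements)

    def forward(self, x):
        # x: (batch, n_channels, window_samples)
        z = self.features(x).squeeze(-1)
        return self.head(z)  # one logit per movement label

model = EMGConvNet()
windows = torch.randn(8, 64, 200)        # 8 hypothetical 200-sample windows
probs = torch.sigmoid(model(windows))    # independent per-label probabilities
```

A multi-label decoder of this kind would typically be trained with a per-label binary cross-entropy loss (e.g. `nn.BCEWithLogitsLoss`), so that simultaneous movements can be active at once.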
Tradition and Innovation in Construction Project Management
This book is a reprint of the Special Issue 'Tradition and Innovation in Construction Project Management' that was published in the journal Buildings.
Evaluating EEG–EMG Fusion-Based Classification as a Method for Improving Control of Wearable Robotic Devices for Upper-Limb Rehabilitation
Musculoskeletal disorders are the leading cause of disability worldwide, and wearable mechatronic rehabilitation devices have been proposed for treatment. However, before widespread adoption, improvements in user control and system adaptability are required. User intention should be detected intuitively, and user-induced changes in system dynamics should be unobtrusively identified and corrected. Developments often focus on model-dependent nonlinear control theory, which is challenging to implement for wearable devices.
One alternative is to incorporate bioelectrical signal-based machine learning into the system, allowing for simpler controller designs to be augmented by supplemental brain (electroencephalography/EEG) and muscle (electromyography/EMG) information. To better extract user intention, sensor fusion techniques have been proposed to combine EEG and EMG; however, further development is required to enhance the capabilities of EEG–EMG fusion beyond basic motion classification. To this end, the goals of this thesis were to investigate expanded methods of EEG–EMG fusion and to develop a novel control system based on the incorporation of EEG–EMG fusion classifiers.
A dataset of EEG and EMG signals was collected during dynamic elbow flexion–extension motions and used to develop EEG–EMG fusion models to classify task weight, as well as motion intention. A variety of fusion methods were investigated, such as a Weighted Average decision-level fusion (83.01 ± 6.04% accuracy) and Convolutional Neural Network-based input-level fusion (81.57 ± 7.11% accuracy), demonstrating that EEG–EMG fusion can classify more indirect tasks.
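For concreteness, here is a minimal sketch of weighted-average decision-level fusion, assuming each modality's classifier outputs class probabilities; the weight value and the two-class task-weight setup are illustrative assumptions, not the thesis's tuned parameters.

```python
# Sketch of Weighted Average decision-level fusion: blend per-class
# probabilities from the EEG and EMG classifiers with a scalar weight.
# w_eeg and the two-class setup are illustrative assumptions.
import numpy as np

def fuse_decisions(p_eeg: np.ndarray, p_emg: np.ndarray, w_eeg: float = 0.4):
    """Return (predicted class index, fused probability vector)."""
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_emg
    return int(np.argmax(fused)), fused

# e.g. probabilities over hypothetical task-weight classes {light, heavy}
label, fused = fuse_decisions(np.array([0.6, 0.4]), np.array([0.3, 0.7]))
print(label, fused)  # 1 [0.42 0.58]
```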
A novel control system, referred to as a Task Weight Selective Controller (TWSC), was implemented using a Gain Scheduling-based approach, dictated by external load estimations from an EEG–EMG fusion classifier. To improve system stability, classifier prediction debouncing was also proposed to reduce misclassifications through filtering. Performance of the TWSC was evaluated using a developed upper-limb brace simulator. Due to simulator limitations, no significant difference in error was observed between the TWSC and PID control. However, results did demonstrate the feasibility of prediction debouncing, showing it provided smoother device motion. Continued development of the TWSC and EEG–EMG fusion techniques will ultimately result in wearable devices that are able to adapt to changing loads more effectively, serving to improve the user experience during operation.
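The prediction debouncing idea can be sketched as a small filter that commits to a new class label only after it has been predicted several times in a row, suppressing transient misclassifications before they reach the controller; the threshold N below is an assumed parameter, not the thesis's value.

```python
# Sketch of classifier prediction debouncing: accept a new label only
# after N consecutive identical predictions. N is an assumption.
class Debouncer:
    def __init__(self, n_consistent: int = 5):
        self.n = n_consistent
        self.candidate = None   # label currently being counted
        self.count = 0
        self.stable = None      # last committed label

    def update(self, label):
        if label == self.candidate:
            self.count += 1
        else:
            self.candidate, self.count = label, 1
        if self.count >= self.n:
            self.stable = label  # commit only after N agreements
        return self.stable       # the controller acts on this output
```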
Path and Motion Planning for Autonomous Mobile 3D Printing
Autonomous robotic construction was envisioned as early as the '90s, and yet construction sites today look much like those of half a century ago. Meanwhile, highly automated and efficient fabrication methods like Additive Manufacturing, or 3D Printing, have seen great success in conventional production. However, existing efforts to transfer printing technology to construction applications mainly rely on manufacturing-like machines and fail to utilise the capabilities of modern robotics.
This thesis considers using Mobile Manipulator robots to perform large-scale Additive Manufacturing tasks. Comprising an articulated arm and a mobile base, Mobile Manipulators are unique in their simultaneous mobility and agility, which enables printing-in-motion, or Mobile 3D Printing. This is a 3D printing modality in which a robot deposits material along larger-than-self trajectories while in motion. Despite profound potential advantages over existing static, manufacturing-like large-scale printers, Mobile 3D Printing is underexplored. This thesis therefore tackles Mobile 3D Printing-specific challenges and proposes path and motion planning methodologies that allow this printing modality to be realised. The work details the development of Task-Consistent Path Planning, which solves the problem of finding a valid robot-base path needed to print larger-than-self trajectories. A motion planning and control strategy is then proposed, utilising the robot-base paths found to inform an optimisation-based whole-body motion controller. Several Mobile 3D Printing robot prototypes are built throughout this work, and the overall path and motion planning strategy proposed is holistically evaluated in a series of large-scale 3D printing experiments.
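As a rough illustration of the task-consistency constraint behind such base-path planning: every point of the larger-than-self print trajectory must lie within the arm's workspace of the base pose scheduled for that moment. The planar geometry, synchronised waypoint sampling, and fixed reach radius below are simplifying assumptions, not the thesis's formulation.

```python
# Planar sketch of a task-consistency check for a candidate base path:
# each scheduled nozzle target must be within the arm's reach of the
# base pose at the same time step. Reach radius is an assumption.
import math

def base_path_is_task_consistent(base_path, print_path, reach=0.85):
    """base_path, print_path: equal-length lists of (x, y) waypoints."""
    for (bx, by), (px, py) in zip(base_path, print_path):
        if math.hypot(px - bx, py - by) > reach:
            return False  # nozzle target outside the arm's workspace
    return True
```

A planner built around this predicate would search over base paths and keep only those for which the check holds along the entire print trajectory, before handing the result to the whole-body motion controller.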
Real-time generation and adaptation of social companion robot behaviors
Social robots will be part of our future homes.
They will assist us in everyday tasks, entertain us, and provide helpful advice.
However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate.
An essential skill of every social robot is verbal and non-verbal communication.
In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine.
Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors.
In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot.
However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems.
This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences.
Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence.
The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning.
Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning.
It provides a higher-level view from the system designer's perspective and guidance from the start to the end.
It illustrates the process of modeling, simulating, and evaluating such adaptation processes.
Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness.
The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes.
They are evaluated in the lab and in in-situ studies.
These approaches address typical activities in domestic environments, focussing on the robot's expression of personality, persona, politeness, and humor.
Within this scope, the robot adapts its spoken utterances, prosody, and animations based on human explicit or implicit feedback.
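As a rough sketch of such feedback-driven adaptation, one could cast the candidate behavior styles as arms of an epsilon-greedy bandit, a simple reinforcement learning formulation, updated from scalar user feedback. The behavior names, epsilon, and reward signal below are illustrative assumptions, not the thesis's actual method.

```python
# Bandit-style sketch of non-functional behavior adaptation: estimate
# each style's user preference from feedback and mostly pick the best.
# Behavior names, epsilon, and the reward signal are assumptions.
import random

class BehaviorAdapter:
    def __init__(self, behaviors, epsilon=0.1):
        self.q = {b: 0.0 for b in behaviors}  # estimated user preference
        self.n = {b: 0 for b in behaviors}    # times each style was tried
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))  # explore a random style
        return max(self.q, key=self.q.get)      # exploit best-rated style

    def feedback(self, behavior, reward):
        # incremental mean update from explicit or implicit feedback
        self.n[behavior] += 1
        self.q[behavior] += (reward - self.q[behavior]) / self.n[behavior]

adapter = BehaviorAdapter(["formal", "casual", "humorous"])
style = adapter.select()
adapter.feedback(style, reward=1.0)  # e.g. user smiled or said "thanks"
```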
How to Communicate Robot Motion Intent: A Scoping Review
Robots are becoming increasingly omnipresent in our daily lives, supporting
us and carrying out autonomous tasks. In Human-Robot Interaction, human actors
benefit from understanding the robot's motion intent to avoid task failures and
foster collaboration. Finding effective ways to communicate this intent to
users has recently received increased research interest. However, no common
language has been established to systematize robot motion intent. This work
presents a scoping review aimed at unifying existing knowledge. Based on our
analysis, we present an intent communication model that depicts the
relationship between robot and human through different intent dimensions
(intent type, intent information, intent location). We discuss these different
intent dimensions and their interrelationships with different kinds of robots
and human roles. Throughout our analysis, we classify the existing research
literature along our intent communication model, allowing us to identify key
patterns and possible directions for future research.
Comment: Interactive Data Visualization of the Paper Corpus: https://rmi.robot-research.d
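The review's three intent dimensions can be read as fields of a simple record; a minimal sketch follows, with example values drawn loosely from the abstract's wording rather than the review's full taxonomy.

```python
# Sketch encoding the intent communication model's three dimensions as
# a record type. The example values are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class MotionIntent:
    intent_type: str         # e.g. "goal", "trajectory", "warning"
    intent_information: str  # what is conveyed, e.g. "next waypoint"
    intent_location: str     # where it is shown, e.g. "floor projection"

cue = MotionIntent("trajectory", "planned path for next few seconds",
                   "floor projection")
```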