
    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is thus a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, current EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce the training burden.
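    As an illustration of the kind of model Paper I describes, the sketch below shows a minimal multi-label CNN for HD-sEMG movement decoding in PyTorch. It is not the architecture from the paper: the input layout (a spatial map computed over an assumed 8x16 electrode grid), the number of movement labels, and all layer sizes are assumptions chosen only to make the example self-contained.

# Minimal sketch of a multi-label CNN for HD-sEMG movement decoding (not the
# architecture of Paper I). Input: one spatial map per window from an assumed
# 8x16 electrode grid; output: one logit per assumed movement label.
import torch
import torch.nn as nn

class MultiLabelEMGNet(nn.Module):
    def __init__(self, n_labels: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # spatial filtering over the electrode grid
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, n_labels)

    def forward(self, x):
        # x: (batch, 1, rows, cols), e.g. an RMS map computed over one EMG window
        z = self.features(x).flatten(1)
        return self.head(z)                                # raw logits, one per movement label

# Multi-label decoding uses an independent sigmoid per label rather than a softmax.
model = MultiLabelEMGNet()
loss_fn = nn.BCEWithLogitsLoss()
x = torch.randn(8, 1, 8, 16)                               # batch of 8 windows (assumed grid size)
y = torch.randint(0, 2, (8, 12)).float()                   # binary target per label
loss = loss_fn(model(x), y)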

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    The book offers a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic, so its analysis requires familiarity with other areas of Artificial Intelligence (AI) such as machine learning, digital image processing, and psychology. This makes it a great opportunity to write a book that covers all of these topics for readers ranging from beginners to professionals in the field of AI, and even those without an AI background. Our goal is to provide a standalone introduction to FMER analysis in the form of theoretical descriptions for readers with no background in image processing, together with reproducible MATLAB practical examples. We also provide the basic definitions needed for FMER analysis and describe the MATLAB libraries used in the text, which helps the reader apply the experiments to real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills along with a basic understanding of the field. We expect that, after reading this book, the reader will feel comfortable with different key stages such as color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expressions recognition, feature extraction, and dimensionality reduction. Comment: This is the second edition of the book.

    Mathematical Problems in Rock Mechanics and Rock Engineering

    With increasing requirements for energy, resources, and space, rock engineering projects are constructed more often and operated in large-scale environments with complex geology. Meanwhile, rock failures and rock instabilities occur more frequently and severely threaten the safety and stability of rock engineering projects. It is well recognized that rock has multi-scale structures and involves multi-scale fracture processes. Rocks are also commonly subjected simultaneously to complex static stress and strong dynamic disturbance, creating conditions conducive to rock failure. In addition, there are many multi-physics coupling processes in a rock mass. It remains difficult to understand these mechanisms and to characterize rock behavior under complex stress conditions, multi-physics processes, and multi-scale changes. Therefore, our understanding of rock mechanics and the prevention and control of failure and instability in rock engineering need to be furthered. The primary aim of this Special Issue, "Mathematical Problems in Rock Mechanics and Rock Engineering", is to bring together original research discussing innovative efforts regarding in situ observations, laboratory experiments, and theoretical, numerical, and big-data-based methods to overcome the mathematical problems related to rock mechanics and rock engineering. It includes 12 manuscripts that illustrate these valuable efforts to address mathematical problems in rock mechanics and rock engineering.

    Studies of Molecular Precursors Used in FEBID Fabrication of Nanostructures

    The adoption of nanotechnology is increasingly important in many aspects of our daily life, influencing the clothes we wear and most of the electronic devices we use, while also underpinning the development of drugs and medical techniques that we will need at some point in our lives. The methods by which nanoscale devices are fabricated are changing from a 'top down' etching-based procedure to a 'bottom up' molecule-by-molecule deposition and assembly. The focus of the present research is the development, design, and analysis of new precursors for focused electron beam induced deposition (FEBID) and extreme ultraviolet nanolithography (EUVL), drawing on a large pool of experimental and computational resources. The research is divided into two areas: gas-phase analysis of precursors (largely used for fragment and radical analysis, and molecular design) and surface and deposition science (physical deposition of precursors, simulation analysis of surface-molecule interactions, and characterization of deposition processes to obtain optimal process parameters for molecular structures). It is necessary to collect data such as cross sections of electron-molecule interactions, e.g., dissociative ionization (DI) and dissociative electron attachment (DEA), to provide accurate simulations that can be used to improve FEBID and EUVL, while understanding surface processes such as molecular adsorption and diffusion to determine the structure and purity of the nanostructures formed by these methods. The objective of this thesis is to provide a gas-phase and deposition analysis of potential and widely used precursors for FEBID and EUVL at the nanoscale. To achieve this, the experimental technique of velocity sliced map imaging (VsMI) was used in conjunction with theoretical tools such as density functional theory (DFT) simulations using the Gaussian 16 software and evaluation of cross-section data for molecular dissociation at low electron energies of 0-20 eV using Quantemol-N. Results of the gas-phase analysis of negative ionic fragments formed by DEA and DI are presented, including their appearance, dissociation, and ionization energies, angular distributions and kinetic energies, cross sections for DEA fragmentation at low energy, and excited-state calculations at values up to 10 eV. These results are used as inputs to models of the FEBID processes. The electronic, structural, and kinetic properties of several FEBID precursors are explored, and the FEBID method was used to create nanostructures using a Zeiss MeRiT SEM with a GEMINI column operated at 20 kV. Analysis of the deposits was performed using EDX and atomic force microscopy (AFM), as well as electron stimulated desorption (ESD) and temperature programmed desorption (TPD). Complementary simulations of the dynamics of processes at the surface and of surface-molecule interactions were performed using MBN Explorer, with good results in simulating the deposition of islands and structures (results presented in Chapter 8).

    Synergies between Numerical Methods for Kinetic Equations and Neural Networks

    The overarching theme of this work is the efficient computation of large-scale systems. Here we deal with two types of mathematical challenges which are quite different at first glance but offer similar opportunities and challenges upon closer examination. Physical descriptions of phenomena and their mathematical modeling are performed on diverse scales, ranging from nano-scale interactions of single atoms to the macroscopic dynamics of the earth's atmosphere. We consider such systems of interacting particles and explore methods to simulate them efficiently and accurately, with a focus on the kinetic and macroscopic description of interacting particle systems. Macroscopic governing equations describe the evolution of a system in time and space, whereas the more fine-grained kinetic description additionally takes the particle velocity into account. Discretizing kinetic equations that depend on space, time, and velocity variables is a challenge due to the need to preserve physical solution bounds, e.g. positivity, to avoid spurious artifacts, and to maintain computational efficiency. In the pursuit of overcoming the challenge of computability in both kinetic and multi-scale modeling, a wide variety of approximative methods have been established in the realm of reduced-order modeling, surrogate modeling, and model compression. For kinetic models, this may manifest in hybrid numerical solvers that switch between macroscopic and mesoscopic simulation, asymptotic-preserving schemes that bridge the gap between both physical resolution levels, or surrogate models that operate on a kinetic level but replace computationally heavy operations of the simulation with fast approximations. Thus, for the simulation of kinetic and multi-scale systems with a high spatial resolution and a long temporal horizon, the quote by Paul Dirac is as relevant as it was almost a century ago. The first goal of the dissertation is therefore the development of acceleration strategies for kinetic discretization methods that preserve the structure of their governing equations. In particular, we investigate the use of convex neural networks to accelerate the minimal entropy closure method. Further, we develop a neural-network-based hybrid solver for multi-scale systems, where kinetic and macroscopic methods are chosen based on local flow conditions. Furthermore, we deal with the compression and efficient computation of neural networks. Today, neural networks are successfully used in different forms in countless scientific works and technical systems, with well-known applications in image recognition and computer-aided language translation, but also as surrogate models for numerical mathematics. Although the first neural networks were already presented in the 1950s, the scientific discipline has enjoyed increasing popularity mainly during the last 15 years, since only now is sufficient computing capacity available. Remarkably, the increasing availability of computing resources is accompanied by a hunger for larger models, fueled by the common conception of machine learning practitioners and researchers that more trainable parameters equal higher performance and better generalization capabilities. The increase in model size exceeds the growth of available computing resources by orders of magnitude. Since 2012, the computational resources used in the largest neural network models have doubled every 3.4 months (https://openai.com/blog/ai-and-compute/), as opposed to Moore's law, which proposes a two-year doubling period in available computing power. To some extent, Dirac's statement also applies to the recent computational challenges in the machine-learning community. The desire to evaluate and train on resource-limited devices has sparked interest in model compression, where neural networks are sparsified or factorized, typically after training. The second goal of this dissertation is thus a low-rank method, originating from numerical methods for kinetic equations, to compress neural networks already during training by low-rank factorization. This dissertation thus considers synergies between kinetic models, neural networks, and numerical methods in both disciplines to develop time-, memory- and energy-efficient computational methods for both research areas.
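    To make the second goal concrete, the sketch below shows the basic idea of low-rank factorization applied during training rather than after it: the full weight matrix of a dense layer is never stored, only two thin factors are trained. This is a generic illustration of the idea, not the specific low-rank training scheme developed in the dissertation; the layer sizes and the rank are assumptions for the example.

# Minimal sketch of training-time low-rank compression of a dense layer:
# instead of a weight matrix W (out x in), two thin factors U (out x r) and
# V (r x in) are trained directly. Sizes and rank are illustrative assumptions.
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # (x @ V^T) @ U^T costs O(r(in + out)) per sample instead of O(in * out)
        return x @ self.V.t() @ self.U.t() + self.bias

# A rank-32 factorization of a 1024x1024 layer stores ~65k instead of ~1M weights.
layer = LowRankLinear(1024, 1024, rank=32)
out = layer(torch.randn(16, 1024))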

    Machine learning algorithms for efficient process optimisation of variable geometries at the example of fabric forming

    For optimal operation, modern production systems require careful tuning of the manufacturing processes involved. Physics-based simulations can effectively support process optimisation, but their computation times are often a considerable obstacle. One way to save computation time is surrogate-based optimisation (SBO). Surrogates are computationally efficient, data-driven substitute models that guide the optimiser through the search space. They generally improve convergence, but they prove unwieldy for changing optimisation tasks, such as frequent component adaptations to customer requirements. To solve such variable optimisation tasks efficiently as well, this work investigates how recent advances in machine learning (ML), in particular neural networks, can complement existing SBO techniques. Three main aspects are considered: first, their potential as classical surrogates for SBO; second, their suitability for efficiently assessing the manufacturability of new component designs; and third, their possibilities for efficient process optimisation for variable component geometries. These questions are in principle applicable across technologies and are investigated here using fabric forming as an example. The first part of this work (Chapter 3) discusses the suitability of deep neural networks as surrogates for SBO. Various network architectures are examined and several ways of integrating them into an SBO framework are compared. The results demonstrate their suitability for SBO: for a fixed example geometry, all variants successfully minimise the objective function and do so faster than a reference algorithm (a genetic algorithm). To assess the manufacturability of variable component geometries, Chapter 4 then investigates how geometry information can be incorporated into a process surrogate. Two ML approaches are compared, a feature-based and a grid-based approach. The feature-based approach scans a component for individual, process-relevant geometric features, whereas the grid-based approach interprets the geometry as a whole. Both approaches can in principle learn the process behaviour, but the grid-based approach proves easier to transfer to new geometry variants. The results also show that it is mainly the diversity rather than the quantity of the training data that determines this transferability. Finally, Chapter 5 combines the surrogate techniques for flexible geometries with variable process parameters to achieve efficient process optimisation for variable components. To this end, an ML algorithm interacts with generic geometry examples in a simulation environment and learns which geometry requires which forming parameters. After training, the algorithm is able to provide useful recommendations even for non-generic component geometries. It is further shown that the recommendations converge to the actual process optimum at a speed similar to classical SBO, but without requiring component-specific a-priori sampling. Once trained, the developed approach is therefore more efficient. Overall, this work shows how ML techniques can extend current SBO methods and thus efficiently support process and product optimisation at early stages of development. The findings lead to follow-up questions for further development of the methods, such as the integration of physical balance equations to make the model predictions more physically consistent.
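    For readers unfamiliar with SBO, the sketch below shows the basic loop discussed in Chapter 3: a cheap data-driven surrogate is trained on a few expensive simulation runs, proposes the next promising parameter set, and is retrained after each new evaluation. The objective function, parameter ranges, and surrogate size are stand-in assumptions, not the thesis setup.

# Minimal sketch of a surrogate-based optimisation (SBO) loop with a small
# neural-network surrogate. expensive_simulation() is a placeholder for the
# physics-based forming simulation (e.g. a wrinkling score to be minimised).
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(params):
    # Stand-in objective: quadratic bowl with a little noise.
    a, b = params
    return (a - 0.3) ** 2 + (b - 0.7) ** 2 + 0.01 * np.random.randn()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20, 2))                 # initial design of experiments
y = np.array([expensive_simulation(p) for p in X])

for _ in range(10):                                  # SBO iterations
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
    candidates = rng.uniform(0, 1, size=(500, 2))    # cheap to score with the surrogate
    best = candidates[np.argmin(surrogate.predict(candidates))]
    X = np.vstack([X, best])                         # run the simulation only on the best candidate
    y = np.append(y, expensive_simulation(best))

print("best parameters found:", X[np.argmin(y)])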

    Interrogating autism from a multidimensional perspective: an integrative framework.

    Autism Spectrum Disorder (ASD) is a condition characterized by social and behavioral impairments, affecting approximately 1 in every 44 children in the United States. Common symptoms include difficulties in communication, interpersonal interactions, and behavior. While symptoms can manifest as early as infancy, obtaining an accurate diagnosis may require multiple visits to a pediatric specialist due to the subjective nature of the assessment, which may yield varying scores from different specialists. Despite growing evidence of the role of differences in brain development and/or environmental and/or genetic factors in autism development, the exact pathology of this disorder has yet to be fully elucidated. At present, the diagnosis of ASD typically involves a set of gold-standard diagnostic evaluations, such as the Autism Diagnostic Observation Schedule (ADOS), the Autism Diagnostic Interview-Revised (ADI-R), and the more cost-effective Social Responsiveness Scale (SRS). Administering these diagnostic tests, which involve assessing communication and behavioral patterns along with obtaining a clinical history, requires the expertise of a team of qualified clinicians. This process is time-consuming, effortful, and involves a degree of subjectivity due to the reliance on clinical judgment. Aside from conventional observational assessments, recent developments in neuroimaging and machine learning offer a fast and objective alternative for diagnosing ASD using brain imaging. This work explores the use of different imaging modalities, namely structural MRI (sMRI) and resting-state functional MRI (rs-fMRI), and investigates their potential for autism diagnosis. The study aims to offer a new approach and perspective for comprehending ASD as a multidimensional problem, within a behavioral space defined by one of the available ASD diagnostic tools. This dissertation presents a thorough investigation of feature engineering tools used to extract distinctive insights from various brain imaging modalities, including the application of novel feature representations, and explores in detail the use of a machine learning framework to aid in the precise classification of individuals with autism. This research, which draws upon large publicly available datasets, sheds light on the influence of decisions made throughout the pipeline on diagnostic accuracy, and identifies brain regions that may be affected and contribute to an autism diagnosis. The attainment of high, state-of-the-art cross-validated and hold-out-set accuracy validates the advantages of feature representation and engineering in extracting valuable information, as well as the potential benefits of employing neuroimaging for autism diagnosis. Furthermore, a diagnostic report is proposed to assist physicians in mapping diagnoses to the underlying neuroimaging markers. This approach could enable an earlier, automated, and more objective personalized diagnosis.
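    As a point of reference for the kind of pipeline described above, the sketch below shows one common rs-fMRI classification recipe: region-wise time series are turned into a functional-connectivity (correlation) matrix, its upper triangle is vectorised as features, and a classifier is cross-validated. This is a generic illustration, not the dissertation's specific feature-engineering or model; the data here are synthetic and the labels random.

# Generic rs-fMRI classification sketch with synthetic data (illustration only).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 40, 50, 120

def connectivity_features(ts):
    # ts: (timepoints, regions) -> upper triangle of the region-by-region correlation matrix
    corr = np.corrcoef(ts.T)
    iu = np.triu_indices(n_regions, k=1)
    return corr[iu]

X = np.array([connectivity_features(rng.standard_normal((n_timepoints, n_regions)))
              for _ in range(n_subjects)])
y = rng.integers(0, 2, n_subjects)                   # 0 = control, 1 = ASD (synthetic labels)

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("cross-validated accuracy:", scores.mean())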

    Breaking the Curse of Dimensionality in Deep Neural Networks by Learning Invariant Representations

    Artificial intelligence, particularly the subfield of machine learning, has seen a paradigm shift towards data-driven models that learn from and adapt to data. This has resulted in unprecedented advancements in various domains such as natural language processing and computer vision, largely attributed to deep learning, a special class of machine learning models. Deep learning arguably surpasses traditional approaches by learning the relevant features from raw data through a series of computational layers. This thesis explores the theoretical foundations of deep learning by studying the relationship between the architecture of these models and the inherent structures found within the data they process. In particular, we ask: What drives the efficacy of deep learning algorithms and allows them to beat the so-called curse of dimensionality, i.e., the difficulty of learning generic functions in high dimensions due to the exponentially increasing need for data points as dimensionality grows? Is it their ability to learn relevant representations of the data by exploiting their structure? How do different architectures exploit different data structures? In order to address these questions, we push forward the idea that the structure of the data can be effectively characterized by its invariances, i.e., aspects that are irrelevant to the task at hand. Our methodology takes an empirical approach to deep learning, combining experimental studies with physics-inspired toy models. These simplified models allow us to investigate and interpret the complex behaviors we observe in deep learning systems, offering insights into their inner workings, with the far-reaching goal of bridging the gap between theory and practice. Comment: PhD Thesis @ EPF
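    The following toy sketch illustrates the invariance argument in miniature: when the label does not depend on a nuisance transformation (here a random circular shift of a 1-D signal), a representation that discards that transformation lets a simple classifier generalise from few samples, whereas raw coordinates do not. The task, the hand-built invariant representation (a magnitude spectrum, rather than the learned representations studied in the thesis), and all sizes are assumptions made up for illustration.

# Toy demo: a shift-invariant representation beats raw coordinates on a
# shift-invariant task (class = dominant frequency of a randomly shifted cosine).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64

def make_data(n):
    X, y = [], []
    for _ in range(n):
        label = rng.integers(0, 2)
        freq = 3 if label == 0 else 7                  # class = dominant frequency
        signal = np.cos(2 * np.pi * freq * np.arange(d) / d)
        signal = np.roll(signal, rng.integers(d))      # nuisance: random circular shift
        X.append(signal + 0.1 * rng.standard_normal(d))
        y.append(label)
    return np.array(X), np.array(y)

def invariant(X):
    return np.abs(np.fft.rfft(X, axis=1))              # shift-invariant magnitude spectrum

Xtr, ytr = make_data(50)                               # small training set
Xte, yte = make_data(500)

raw_acc = LogisticRegression(max_iter=2000).fit(Xtr, ytr).score(Xte, yte)
inv_acc = LogisticRegression(max_iter=2000).fit(invariant(Xtr), ytr).score(invariant(Xte), yte)
print(f"raw: {raw_acc:.2f}  invariant: {inv_acc:.2f}")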