
    Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics

    Three recent breakthroughs due to AI in the arts and sciences serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) relied on the Long Short-Term Memory (LSTM) architecture, while method (3) relied on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern optimizers and classic optimizers generalized to include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth to support more advanced work such as shallow networks with infinite width. The review addresses not only experts: readers are assumed to be familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, with the aim of bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements and misconceptions about the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example. Comment: 275 pages, 158 figures. Appeared online on 2023.03.01 at CMES-Computer Modeling in Engineering & Science.
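As a minimal illustration of the physics-informed loss behind PINN methods, the following sketch (not from the review: it replaces the neural network with a quadratic ansatz so that all derivatives are analytic, and the names are illustrative) fits u'(x) = -u(x), u(0) = 1 on [0, 1] by minimising the squared residual of the differential equation at collocation points:

```python
# PINN-style training: minimise the mean squared PDE residual r(x) = u'(x) + u(x)
# over collocation points, with the trial solution u(x) = 1 + a*x + b*x^2.
# The boundary condition u(0) = 1 is hard-wired into the ansatz, so no
# separate boundary-loss term is needed here.

def loss(a, b, xs):
    total = 0.0
    for x in xs:
        u = 1.0 + a * x + b * x * x      # trial solution
        du = a + 2.0 * b * x             # its analytic derivative
        total += (du + u) ** 2           # squared residual of u' = -u
    return total / len(xs)

def grad(a, b, xs):
    ga = gb = 0.0
    for x in xs:
        u = 1.0 + a * x + b * x * x
        du = a + 2.0 * b * x
        r = du + u
        ga += 2.0 * r * (1.0 + x)          # dr/da = 1 + x
        gb += 2.0 * r * (2.0 * x + x * x)  # dr/db = 2x + x^2
    return ga / len(xs), gb / len(xs)

xs = [i / 20.0 for i in range(21)]  # collocation points on [0, 1]
a, b = 0.0, 0.0
for _ in range(5000):                # plain gradient descent
    ga, gb = grad(a, b, xs)
    a -= 0.1 * ga
    b -= 0.1 * gb

u1 = 1.0 + a + b  # approximation of u(1); the exact value is e^{-1} ~ 0.368
```

A real PINN parameterizes u with a network, obtains u' by automatic differentiation, and adds an explicit boundary-loss term when the boundary condition is not built into the ansatz, but the composition of the loss is the same.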

    On fixed figure problems in fuzzy metric spaces

    Fixed-circle problems belong to a realm of problems in metric fixed point theory: specifically, the problem of finding self-mappings which remain invariant at each point of a circle in the space. Recently, this problem has been studied extensively in various metric spaces. Our present work extends this line of research to the context of fuzzy metric spaces. For this purpose, we first define the notions of a fixed circle and of a fixed Cassini curve, and then determine suitable conditions which ensure the existence and uniqueness of a fixed circle (resp. a fixed Cassini curve) for self-operators. Moreover, we present a result showing that the fixed point set of a fuzzy quasi-nonexpansive mapping is always closed. Our results are supported by examples.
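For orientation, one common formulation of the circle notion in a fuzzy metric space $(X, M, *)$ is sketched below; this is a standard form from the fixed-circle literature, and the paper's exact definitions may differ:

```latex
% Circle of centre x_0 and radius r in a fuzzy metric space (X, M, *):
\[
  \mathcal{C}_{x_0, r} \;=\; \{\, x \in X \;:\; M(x_0, x, t) = r
  \ \text{for all } t > 0 \,\}, \qquad r \in (0, 1).
\]
% A self-mapping T : X -> X is said to fix the circle when
\[
  T x = x \quad \text{for every } x \in \mathcal{C}_{x_0, r},
\]
% and a fixed-circle theorem gives conditions on T under which such an
% invariant circle exists (and is unique). The fixed Cassini curve notion
% replaces the circle by a Cassini-type level set.
```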

    The temporal pattern of impulses in primary afferents analogously encodes touch and hearing information

    An open question in neuroscience is the contribution of temporal relations between individual impulses in primary afferents to conveying sensory information. We investigated this question in touch and hearing, while looking for any shared coding scheme. In both systems, we artificially induced temporally diverse afferent impulse trains and probed the evoked perceptions in human subjects using psychophysical techniques. First, we investigated whether the temporal structure of a fixed number of impulses conveys information about the magnitude of tactile intensity. We found that clustering the impulses into periodic bursts elicited graded increases of intensity as a function of burst impulse count, even though fewer afferents were recruited throughout the longer bursts. The interval between successive bursts of peripheral neural activity (the burst-gap) has been demonstrated in our lab to be the most prominent temporal feature for coding skin vibration frequency, as opposed to either spike rate or periodicity. Second, given the similarities between the tactile and auditory systems, we explored the auditory system for an equivalent neural coding strategy. By using brief acoustic pulses, we showed that the burst-gap is a shared temporal code for pitch perception between the modalities. Following this evidence of parallels in temporal frequency processing, we next assessed the perceptual frequency equivalence between the two modalities using auditory and tactile pulse stimuli of simple and complex temporal features in cross-sensory frequency discrimination experiments. Identical temporal stimulation patterns in tactile and auditory afferents produced equivalent perceived frequencies, suggesting an analogous temporal frequency computation mechanism.
The new insights into encoding tactile intensity through clustering of fixed-charge electric pulses into bursts suggest a novel approach to conveying varying contact forces to neural interface users, requiring no modulation of either stimulation current or base pulse frequency. Increasing control of the temporal patterning of pulses in cochlear implant users might improve pitch perception and speech comprehension. The perceptual correspondence between touch and hearing not only suggests the possibility of establishing cross-modal comparison standards for robust psychophysical investigations, but also supports the plausibility of cross-sensory substitution devices.
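To make the burst-gap code concrete, here is a small sketch (illustrative only: the function names, the 5 ms grouping threshold, and the pulse train are assumptions, not details from the study) that groups a pulse train into bursts and predicts the perceived frequency as the reciprocal of the mean burst-gap:

```python
# Burst-gap sketch: pulses closer together than a threshold belong to the
# same burst; the burst-gap is the interval from the last pulse of one
# burst to the first pulse of the next.

def bursts(pulse_times, max_within_burst_gap=0.005):
    """Split a sorted list of pulse times (in seconds) into bursts."""
    groups = [[pulse_times[0]]]
    for t_prev, t in zip(pulse_times, pulse_times[1:]):
        if t - t_prev <= max_within_burst_gap:
            groups[-1].append(t)
        else:
            groups.append([t])
    return groups

def burst_gap_frequency(pulse_times):
    """Predicted perceived frequency ~ 1 / mean(burst-gap)."""
    gs = bursts(sorted(pulse_times))
    gaps = [nxt[0] - cur[-1] for cur, nxt in zip(gs, gs[1:])]
    return 1.0 / (sum(gaps) / len(gaps))

# Two pulses per burst, bursts starting every 25 ms (40 Hz burst rate):
train = []
for k in range(8):
    start = 0.025 * k
    train += [start, start + 0.002]
# burst-gap = 25 ms - 2 ms = 23 ms, so the predicted pitch is ~43.5 Hz
```

Note that the predicted pitch is set by the gap from the end of one burst to the start of the next, not by the 40 Hz burst rate itself; this is exactly the distinction the burst-gap hypothesis draws against spike rate and periodicity.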

    International Conference on Mathematical Analysis and Applications in Science and Engineering – Book of Extended Abstracts

    The present volume on Mathematical Analysis and Applications in Science and Engineering - Book of Extended Abstracts of the ICMASC’2022 collects the extended abstracts of the talks presented at the International Conference on Mathematical Analysis and Applications in Science and Engineering – ICMA2SC'22, which took place in the beautiful city of Porto, Portugal, on June 27-29, 2022. Its aim was to bring together researchers in every discipline of applied mathematics, science, engineering, industry, and technology to discuss the development of new mathematical models, theories, and applications that contribute to the advancement of scientific knowledge and practice. Authors proposed research in topics including partial and ordinary differential equations, integer and fractional order equations, linear algebra, numerical analysis, operations research, discrete mathematics, optimization, control, probability, and computational mathematics, amongst others. The conference was designed to maximize the involvement of all participants and presented state-of-the-art research and the latest achievements.

    Proceedings of the 19th Sound and Music Computing Conference

    Proceedings of the 19th Sound and Music Computing Conference - June 5-12, 2022 - Saint-Étienne (France). https://smc22.grame.f

    Brain Dynamics From Mathematical Perspectives: A Study of Neural Patterning

    The brain is the central hub regulating thought, memory, vision, and many other processes occurring within the body. Neural information transmission occurs through the firing of billions of connected neurons, giving rise to a rich variety of complex patterning. Mathematical models are used alongside direct experimental approaches in understanding the underlying mechanisms at play which drive neural activity, and ultimately, in understanding how the brain works. This thesis focuses on network and continuum models of neural activity, and computational methods used in understanding the rich patterning that arises due to the interplay between non-local coupling and local dynamics. It advances the understanding of patterning in both cortical and sub-cortical domains by utilising the neural field framework in the modelling and analysis of thalamic tissue – where cellular currents are important in shaping the tissue firing response through the post-inhibitory rebound phenomenon – and of cortical tissue. The rich variety of patterning exhibited by different neural field models is demonstrated through a mixture of direct numerical simulation, as well as via a numerical continuation approach and an analytical study of patterned states such as synchrony, spatially extended periodic orbits, bumps, and travelling waves. Linear instability theory about these patterns is developed and used to predict the points at which solutions destabilise and alternative emergent patterns arise. Models of thalamic tissue often exhibit lurching waves, where activity travels across the domain in a saltatory manner. Here, a direct mechanism, showing the birth of lurching waves at a Neimark-Sacker-type instability of the spatially synchronous periodic orbit, is presented. The construction and stability analyses carried out in this thesis employ techniques from non-smooth dynamical systems (such as saltation methods) to treat the Heaviside nature of models. 
This is often coupled with an Evans function approach to determine the linear stability of patterned states. With the ever-increasing complexity of the neural models being studied, there is a need to develop ways of systematically studying the non-trivial patterns they exhibit. Computational continuation methods are developed, allowing for such a study of periodic solutions and their stability across different parameter regimes, through the use of Newton-Krylov solvers. These techniques are complementary to those outlined above. Using these methods, the relationship between the speed of synaptic transmission and the emergent properties of periodic and travelling periodic patterns, such as standing waves and travelling breathers, is studied. Many different dynamical systems models of physical phenomena are amenable to analysis using these general computational methods (provided they are sufficiently smooth), and as such, their domain of applicability extends beyond the realm of mathematical neuroscience.
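The core step behind such periodic-orbit computations can be sketched compactly. In the toy example below (not the thesis code: the forced oscillator, step counts, and names are illustrative), a periodic solution of a time-periodically forced ODE is found as a fixed point of the stroboscopic (time-T) map via Newton's method; in two dimensions the Jacobian can be built by finite differences, whereas in the high-dimensional settings described above one would solve each Newton system matrix-free with a Krylov method such as GMRES (hence "Newton-Krylov"):

```python
# Periodic orbit of u'' + 0.2 u' + u = cos(2t) as a fixed point of the
# stroboscopic map over one forcing period T = pi.
import math

def f(t, z):
    u, v = z
    return (v, -0.2 * v - u + math.cos(2.0 * t))

def flow(z, T=math.pi, steps=400):
    """RK4 integration of the time-T map."""
    t, (u, v), h = 0.0, z, T / steps
    for _ in range(steps):
        k1 = f(t, (u, v))
        k2 = f(t + h / 2, (u + h / 2 * k1[0], v + h / 2 * k1[1]))
        k3 = f(t + h / 2, (u + h / 2 * k2[0], v + h / 2 * k2[1]))
        k4 = f(t + h, (u + h * k3[0], v + h * k3[1]))
        u += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return (u, v)

def newton_step(z, eps=1e-6):
    """One Newton step on F(z) = flow(z) - z, with finite-difference Jacobian."""
    Fu, Fv = flow(z)
    r = (Fu - z[0], Fv - z[1])               # fixed-point residual
    J = []                                   # columns of the 2x2 Jacobian of F
    for i in range(2):
        dz = list(z); dz[i] += eps
        Pu, Pv = flow(tuple(dz))
        J.append(((Pu - Fu) / eps - (1 if i == 0 else 0),
                  (Pv - Fv) / eps - (0 if i == 0 else 1)))
    a, c = J[0]; b, d = J[1]
    det = a * d - b * c                      # solve J * delta = -r directly
    du = (-r[0] * d + r[1] * b) / det
    dv = (r[0] * c - r[1] * a) / det
    return (z[0] + du, z[1] + dv), math.hypot(*r)

z = (0.0, 0.0)
for _ in range(10):
    z, res = newton_step(z)   # z converges to a point on the periodic orbit
```

The same structure, residual of a return map plus a linear solve, underlies continuation of the spatially extended patterns discussed above; Krylov methods avoid ever forming the Jacobian explicitly.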

    Simulation and Theory of Large-Scale Cortical Networks

    Cerebral cortex is composed of intricate networks of neurons. These neuronal networks are strongly interconnected: every neuron receives, on average, input from thousands or more presynaptic neurons. In fact, to support such a number of connections, a majority of the volume in the cortical gray matter is filled by axons and dendrites. Besides the networks, neurons themselves are also highly complex. They possess an elaborate spatial structure and support various types of active processes and nonlinearities. In the face of such complexity, it seems necessary to abstract away some of the details and to investigate simplified models. In this thesis, such simplified models of neuronal networks are examined on varying levels of abstraction. Neurons are modeled as point neurons, both rate-based and spike-based, and networks are modeled as block-structured random networks. Crucially, on this level of abstraction, the models are still amenable to analytical treatment using the framework of dynamical mean-field theory. The main focus of this thesis is to leverage the analytical tractability of random networks of point neurons in order to relate the network structure, and the neuron parameters, to the dynamics of the neurons—in physics parlance, to bridge across the scales from neurons to networks. More concretely, four different models are investigated: 1) fully connected feedforward networks and vanilla recurrent networks of rate neurons; 2) block-structured networks of rate neurons in continuous time; 3) block-structured networks of spiking neurons; and 4) a multi-scale, data-based network of spiking neurons. We consider the first class of models in the light of Bayesian supervised learning and compute their kernel in the infinite-size limit. In the second class of models, we connect dynamical mean-field theory with large-deviation theory, calculate beyond mean-field fluctuations, and perform parameter inference. 
For the third class of models, we develop a theory for the autocorrelation time of the neurons. Lastly, we consolidate data across multiple modalities into a layer- and population-resolved model of human cortex and compare its activity with cortical recordings. In two detours from the investigation of these four network models, we examine the distribution of neuron densities in cerebral cortex and present a software toolbox for mean-field analyses of spiking networks.
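The notion of a block-structured random network is concrete enough to sketch in a few lines. The following toy example (entirely illustrative: the population sizes, gains, and tanh rate dynamics are assumptions, not the thesis's parameters) builds a two-population excitatory-inhibitory network in which the synaptic statistics depend only on the (post, pre) block pair, which is the property that makes dynamical mean-field theory applicable:

```python
# Block-structured random rate network: weight mean depends only on the
# (post, pre) population pair; individual weights are Gaussian around it.
import math, random

random.seed(0)
N_E, N_I = 80, 20
N = N_E + N_I

mu = {("E", "E"): 0.6, ("E", "I"): -1.2,   # block-dependent mean weights
      ("I", "E"): 0.5, ("I", "I"): -1.0}
block = ["E"] * N_E + ["I"] * N_I

W = [[random.gauss(mu[(block[i], block[j])], 0.3) / N for j in range(N)]
     for i in range(N)]

def step(x, dt=0.1):
    """Euler step of dx/dt = -x + W phi(x), with phi = tanh."""
    phi = [math.tanh(v) for v in x]
    return [xi + dt * (-xi + sum(w * p for w, p in zip(row, phi)))
            for xi, row in zip(x, W)]

x = [random.gauss(0.0, 1.0) for _ in range(N)]
for _ in range(500):
    x = step(x)
# With these sub-critical gains the network relaxes to the quiescent state;
# increasing the weight variance past the critical value would produce the
# fluctuating regime that mean-field theory characterises.

mean_E = sum(x[:N_E]) / N_E   # population-averaged activities, the natural
mean_I = sum(x[N_E:]) / N_I   # order parameters of the mean-field description
```

Because only the block statistics matter, the mean-field description closes on a handful of population-level quantities rather than on all N units.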

    Piecewise Linear Dynamical Systems: From Nodes to Networks

    Piecewise linear (PWL) modelling has many useful applications in the applied sciences. Although the number of techniques for analysing nonsmooth systems has grown in recent years, these have typically focused on low-dimensional systems, and relatively little attention has been paid to networks. We aim to redress this balance with a focus on synchronous oscillatory network states. For networks with smooth nodal components, weak coupling theory, phase-amplitude reductions, and the master stability function are standard methodologies for assessing the stability of the synchronous state. However, when network elements have some degree of nonsmoothness, these tools cannot be directly used and a more careful treatment is required. The work in this thesis addresses this challenge and shows how the use of saltation operators allows for an appropriate treatment of networks of PWL oscillators. This is used to augment all the aforementioned methods. The power of this formalism is illustrated by application to network problems ranging from mechanics to neuroscience.
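For orientation, the standard saltation matrix that corrects linearised (variational) dynamics across a switching event is sketched below; the thesis's precise operators for networks may differ. For a flow that switches from vector field $f^-$ to $f^+$ on the manifold $h(x) = 0$:

```latex
% Saltation matrix at a transversal crossing of the switching manifold
% h(x) = 0, where the vector field changes from f^- (before) to f^+ (after):
\[
  S \;=\; I \;+\; \frac{\bigl( f^{+} - f^{-} \bigr)\, \nabla h^{\top}}
                       {\nabla h^{\top} f^{-}}
  \Bigg|_{x = x^{*}},
\]
% evaluated at the crossing point x^* (transversality requires the
% denominator to be nonzero). Perturbations are mapped through the event as
% \delta x^{+} = S\, \delta x^{-}, and products of such matrices with the
% fundamental matrices of the smooth pieces of the flow determine the linear
% stability of a PWL orbit.
```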

    Discrete Time Systems

    Discrete-time systems comprise an important and broad research field. The present-day consolidation of digital computational means pushes this technological tool into areas such as control, signal processing, communications, and system modelling, with tremendous impact. This book attempts to survey the wide area of discrete-time systems. Its contents are grouped into sections according to significant areas, namely filtering, fixed and adaptive control systems, stability problems, and miscellaneous applications. We think that the book's contribution enlarges the field of discrete-time systems in a way that is significant for the present state of the art. Despite the rapid advances in the field, we also believe that the topics described here allow us to discern some of the main tendencies of the coming years in this research area.
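The filtering and control topics listed above share one workhorse representation, the linear discrete-time state-space model x[k+1] = A x[k] + B u[k], y[k] = C x[k]. A minimal, self-contained sketch (the matrices and numbers are illustrative, not drawn from the book):

```python
# Step response of a stable discrete-time state-space model.
# For a stable A, the response converges to the DC gain C (I - A)^{-1} B.

A = [[0.5, 0.1],
     [0.0, 0.8]]   # eigenvalues 0.5 and 0.8, both inside the unit circle
B = [1.0, 0.5]
C = [1.0, 1.0]

def simulate(n_steps, u=1.0):
    """Simulate the response to a constant input u from the zero state."""
    x = [0.0, 0.0]
    ys = []
    for _ in range(n_steps):
        ys.append(C[0] * x[0] + C[1] * x[1])
        x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
             A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    return ys

# Steady state: x2 = 0.5 / (1 - 0.8) = 2.5, then
# x1 = (1.0 + 0.1 * 2.5) / (1 - 0.5) = 2.5, so y settles at 5.0.
```

Stability here is just the eigenvalue condition |lambda(A)| < 1, the discrete-time counterpart of the continuous-time left-half-plane condition.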