
    Solving Partial Differential Equations Using Artificial Neural Networks

    This thesis presents a method for solving partial differential equations (PDEs) using artificial neural networks. The method uses a constrained backpropagation (CPROP) approach for preserving prior knowledge during incremental training, so that nonlinear elliptic and parabolic PDEs can be solved adaptively in non-stationary environments. Compared to previous methods that use penalty functions or Lagrange multipliers, CPROP reduces the dimensionality of the optimization problem by direct elimination, while satisfying the equality constraints associated with the boundary and initial conditions exactly at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and non-homogeneous terms. A computational complexity analysis shows that CPROP compares favorably to existing solution methods and leads to considerable computational savings in non-stationary environments.

    The CPROP-based approach is extended to a constrained integration (CINT) method for solving initial boundary value problems (IBVPs) for PDEs. The CINT method combines classical Galerkin methods with CPROP in order to constrain the ANN to approximately satisfy the boundary condition at each stage of integration. The advantage of the CINT method is that it is readily applicable to PDEs on irregular domains and requires no special modification for domains with complex geometries. Furthermore, the CINT method provides a semi-analytical solution that is infinitely differentiable. The CINT method is demonstrated on two hyperbolic and one parabolic IBVPs that are widely used and have known analytical solutions. Compared with Matlab's finite element (FE) method, the CINT method achieves significant improvements in both computational time and accuracy.

    The CINT method is then applied to a distributed optimal control (DOC) problem of computing optimal state and control trajectories for a multiscale dynamical system comprised of many interacting dynamical systems, or agents. A generalized reduced gradient (GRG) approach is presented in which the agent dynamics are described by a small system of stochastic differential equations (SDEs). A set of optimality conditions is derived using the calculus of variations and used to compute the optimal macroscopic state and microscopic control laws. An indirect GRG approach is used to solve the optimality conditions numerically for large systems of agents. By assuming a parametric control law obtained from the superposition of linear basis functions, the agent control laws can be determined via set-point regulation, such that the macroscopic behavior of the agents is optimized over time, based on multiple, interactive navigation objectives.

    Lastly, the CINT method is used to identify optimal root profiles in water-limited ecosystems. Knowledge of root depths and distributions is vital for accurately modeling and predicting hydrological ecosystem dynamics, so there is interest in accurately predicting root distributions for various vegetation types, soils, and climates. Numerical experiments were performed to identify root profiles that maximize transpiration over a 10-year period across a transect of the Kalahari. Storm types were varied to show the dependence of the optimal profile on storm frequency and intensity. It is shown that more deeply distributed roots are optimal in regions where storms are more intense and less frequent, while shallower roots are advantageous in regions where storms are less intense and more frequent.
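    The central idea above is enforcing boundary conditions exactly rather than through penalty terms. As a rough illustration of that general principle only (not the thesis's CPROP direct-elimination algorithm), the sketch below trains a small PyTorch network so that a trial solution satisfies homogeneous Dirichlet boundary conditions by construction; the specific 1D Poisson problem, network size, and optimizer settings are assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions, not CPROP itself): solve
#   u''(x) = -pi^2 * sin(pi*x) on [0, 1], u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi*x), with a trial solution that
# satisfies the boundary conditions exactly for any network output.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def trial(x):
    # The factor x*(1-x) forces u(0) = u(1) = 0 by construction.
    return x * (1.0 - x) * net(x)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)      # collocation points
    u = trial(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + torch.pi**2 * torch.sin(torch.pi * x)
    loss = (residual**2).mean()                      # PDE residual only; BCs need no penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.linspace(0, 1, 5).reshape(-1, 1)
print(trial(x_test).detach().squeeze())              # should approximate sin(pi * x_test)
```

    Because the constraint is built into the trial form, the optimizer only has to drive the interior residual to zero, which is the same motivation the abstract gives for eliminating the boundary constraints rather than penalizing them.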

    A machine learning approach to parameter inference in gravitational-wave signal analysis

    Gravitational Wave (GW) physics is now in its golden age thanks to modern interferometers. The fourth observing run is now ongoing with two of the four second-generation detectors, collecting GW signals coming from Compact Binary Coalescences (CBCs). These systems are formed by black holes and/or neutron stars which lose energy and angular momentum to GW emission, spiraling toward each other until they merge. The characteristic waveform has a chirping behaviour, with a frequency that increases with time. These GW signals are gold mines of physical information on the emitting system. The data analysis of these signals has two main aspects: detection and parameter estimation. For detection, two approaches are currently used: matched filtering, which compares numerical waveforms with the raw interferometer output to highlight the signal, and the study of bursts, which highlights the coherence of arbitrary signals in different detectors. Both techniques need to be fast enough to allow for electromagnetic follow-up with a relatively short delay. The offline parameter inference process is based on Bayesian techniques and is rather lengthy (individual Markov Chain Monte Carlo runs can take a month or more).

    My thesis has the goal of introducing fast parameter estimation for unmodeled (burst) methods, which produce only phenomenological, de-noised waveforms with, at best, a rough estimate of only a few parameters. An approach for fast parameter inference in this unmodeled analysis, taking the reconstructed waveform as input, could be extremely useful for multimessenger observations. In this context, Keith et al. (2021a) proposed to use Physics Informed Neural Networks (PINNs) in GW data analysis. PINNs are a machine learning approach that includes physical prior information in the algorithm itself. Taking a clean chirping waveform as input, the algorithm of Keith et al. (2021a) demonstrated a successful application of this concept and was able to reconstruct the compact objects' orbits before coalescence in great detail, starting only from a parameterized post-Newtonian model. The PINN environment could become a key tool to infer parameters from GW signals with a simple physical ansatz.

    As part of my thesis work, I reviewed GW physics and the PINN environment in detail and updated the algorithm described in Keith et al. (2021a). Their ground-breaking work introduces PINNs for the first time in the analysis of GW signals, but it does so without considering some important details; in particular, the algorithm of Keith et al. (2021a) spans a very constrained parameter space. In this thesis I introduce some of the missing details and recode the algorithm from scratch. My implementation includes learning the phenomenological differential equation that describes the frequency evolution over time of the chirping GW, within a different, but more physical, parameter space. As a test, starting from a waveform as training data and from the Newtonian approximation of the GW chirp, I infer the chirp mass, the GW phase and the frequency exponent in the differential equation. The resulting algorithm is robust and uses realistic physical conditions. This is a necessary first step toward parameter inference with PINNs on real gravitational-wave data.
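    As a loose illustration of the PINN idea described above (not the thesis's implementation, nor the code of Keith et al. 2021a), the sketch below fits a network to a synthetic frequency track while penalizing the residual of the Newtonian chirp law df/dt ∝ f^(11/3), treating the rate coefficient (from which the chirp mass follows, up to physical constants) as a trainable parameter; the dimensionless units, the synthetic data, and all hyperparameters are assumptions for the example.

```python
# Minimal PINN-flavoured sketch (illustrative, not the thesis algorithm):
# learn f_theta(t) and a rate coefficient k so that df/dt = k * f**(11/3)
# holds while f_theta matches a synthetic "reconstructed" frequency track.
# In physical units, k = (96/5) * pi**(8/3) * (G*Mc/c**3)**(5/3) would give Mc.
import torch

torch.manual_seed(0)

# Synthetic frequency track in dimensionless units (assumption for the example):
# the ODE with f(0) = 1 has the closed form f(t) = (1 - (8/3)*k*t)**(-3/8).
k_true = 0.8
t_obs = torch.linspace(0.0, 0.4, 200).reshape(-1, 1)
f_obs = (1.0 - (8.0 / 3.0) * k_true * t_obs) ** (-3.0 / 8.0)

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
log_k = torch.nn.Parameter(torch.tensor(0.0))        # learn k > 0 through its log
opt = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-3)

for step in range(5000):
    t = t_obs.clone().requires_grad_(True)
    f = net(t)
    dfdt = torch.autograd.grad(f, t, torch.ones_like(f), create_graph=True)[0]
    data_loss = ((f - f_obs) ** 2).mean()             # match the observed track
    phys_loss = ((dfdt - torch.exp(log_k) * f.clamp(min=1e-3) ** (11.0 / 3.0)) ** 2).mean()
    loss = data_loss + phys_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("recovered k:", torch.exp(log_k).item(), "true k:", k_true)
```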

    A Series-Elastic Robot for Back-Pain Rehabilitation

    Robotics research has expanded broadly into various fields over the past decades. It is widely applied and best known for meeting technical needs in many domains. With the rise of industrial automation, factories adopted industrial robots to shield human operators from dangerous and hazardous tasks. The rapid development of application fields and their growing complexity have inspired researchers in the robotics community to find innovative solutions that meet the new requirements of the field. Currently, new needs outside traditional industrial settings are demanding robots that serve new markets and assist humans in their daily activities (e.g., agriculture, construction, cleaning). The future integration of robots into other types of production processes adds requirements for greater safety, flexibility, and intelligence. Robotics has thus evolved into many subfields. This dissertation addresses robotics research in four different areas: rehabilitation robots, biologically inspired robots, optimization techniques, and neural network implementation. Although these four areas may seem different from each other, they share some research topics and applications.

    Random Neural Networks and Optimisation

    In this thesis we introduce new models and learning algorithms for the Random Neural Network (RNN), and we develop RNN-based and other approaches for the solution of emergency management optimisation problems. With respect to RNN developments, two novel supervised learning algorithms are proposed. The first is a gradient descent algorithm for an RNN extension model that we have introduced, the RNN with synchronised interactions (RNNSI), which was inspired by the synchronised firing activity observed in brain neural circuits. The second algorithm is based on modelling the signal-flow equations in the RNN as a nonnegative least squares (NNLS) problem, which is solved using a limited-memory quasi-Newton algorithm specifically designed for the RNN case. Regarding the investigation of emergency management optimisation problems, we examine combinatorial assignment problems that require fast, distributed and close-to-optimal solutions under information uncertainty. We consider three different problems with the above characteristics, associated with the assignment of emergency units to incidents with injured civilians (AEUI), the assignment of assets to tasks under execution uncertainty (ATAU), and the deployment of a robotic network to establish communication with trapped civilians (DRNCTC). AEUI is solved by training an RNN tool with instances of the optimisation problem and then using the trained RNN for decision making; training is achieved using the developed learning algorithms. For the solution of the ATAU problem, we introduce two different approaches: the first is based on mapping parameters of the optimisation problem to RNN parameters, and the second on solving a sequence of minimum cost flow problems on appropriately constructed networks with estimated arc costs. For the exact solution of the DRNCTC problem, we develop a mixed-integer linear programming formulation based on network flows. Finally, we design and implement distributed heuristic algorithms for the deployment of robots when the civilian locations are known or uncertain.
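    As a generic analogue of the NNLS formulation mentioned above (not the thesis's RNN-specific solver), the sketch below solves a nonnegative least squares problem with a limited-memory quasi-Newton method, using SciPy's L-BFGS-B with nonnegativity bounds; the random matrix standing in for the linearised signal-flow system is an assumption for the example.

```python
# Minimal sketch (generic analogue, assumptions noted): solve
#   min_{x >= 0} 0.5 * ||A x - b||^2
# with a limited-memory quasi-Newton method, mirroring the idea of casting
# the RNN signal-flow equations as an NNLS problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))          # stand-in for the linearised signal-flow system
x_true = np.abs(rng.normal(size=10))   # nonnegative "rates" to recover
b = A @ x_true

def objective(x):
    r = A @ x - b
    return 0.5 * r @ r, A.T @ r        # objective value and its gradient

res = minimize(objective, x0=np.zeros(10), jac=True,
               method="L-BFGS-B", bounds=[(0.0, None)] * 10)
print("max abs error:", np.max(np.abs(res.x - x_true)))
```

    The nonnegativity constraint enters only through the box bounds, which is what makes a bound-constrained quasi-Newton solver a natural fit for NNLS-type formulations.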

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science