1,605 research outputs found

    SiSeRHMap v1.0: A simulator for mapped seismic response using a hybrid model

    SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature on the methodology. In this process, one-dimensional linear equivalent analysis produces acceleration response spectra for shear-wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg–Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map set of stratigraphic seismic response at different periods by grid-solving the calibrated Spectra model. In addition, the spectral topographic amplification is computed by means of a numerical prediction model, built to match the results of numerical simulations of isolated reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, from which maps of seismic design response spectra are also derived by means of an enveloping technique.
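    The calibration stage described above (a global evolutionary search refined by the Levenberg–Marquardt Algorithm) can be sketched in miniature. The spectral shape, its two parameters, and the coarse random search standing in for the EA below are illustrative assumptions, not the actual Spectra model:

```python
import numpy as np

def model(T, p):
    """Toy parameterized response-spectrum shape (a stand-in for Spectra)."""
    a, b = p
    return a * T * np.exp(-b * T)

def jacobian(T, p):
    a, b = p
    e = np.exp(-b * T)
    return np.stack([T * e, -a * T**2 * e], axis=1)

def lm_fit(T, y, p0, lam=1e-2, iters=60):
    """Levenberg-Marquardt: damped Gauss-Newton steps on the residual."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = model(T, p) - y
        J = jacobian(T, p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        p_new = p - step
        if np.sum((model(T, p_new) - y) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5    # accept step, relax damping
        else:
            lam *= 2.0                   # reject step, increase damping
    return p

rng = np.random.default_rng(0)
T = np.linspace(0.05, 2.0, 60)           # periods (s)
trainer = model(T, (2.5, 1.8))           # synthetic "trainer" spectrum

# coarse random search stands in for the evolutionary (EA) stage
cands = rng.uniform([0.1, 0.1], [5.0, 5.0], size=(200, 2))
p0 = min(cands, key=lambda p: np.sum((model(T, p) - trainer) ** 2))
p_hat = lm_fit(T, trainer, p0)           # LMA as the final optimizer
```

    The damping factor lam interpolates between gradient descent and Gauss-Newton steps, which is what makes LMA a robust final optimizer after the global search has found a reasonable starting point.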

    Task 1 - Shaking scenarios - Deliverable D0: Simulation techniques

    INGV-DPC S3 Project "Scenari di scuotimento in aree di interesse prioritario e/o strategico" (Shaking scenarios in areas of priority and/or strategic interest). Published in Section 4.1, "Seismological methodologies for earthquake engineering".

    Machine learning for fast and accurate assessment of earthquake source parameters

    Earthquakes are among the largest and most destructive natural hazards known to humankind. While records of earthquakes date back millennia, many questions about their nature remain open. One such question is rupture predictability: to what extent is it possible to foresee the final size of an earthquake while it is still ongoing? This question is integral to earthquake early warning systems, yet research on it has so far reached contradictory conclusions. The amount of data available for earthquake research has grown exponentially during the last decades, now reaching tera- to petabyte scale. This wealth of data, while making manual inspection infeasible, allows for data-driven analysis and complex models with high numbers of parameters, including machine and deep learning techniques. In seismology, deep learning has already led to considerable improvements over previous methods for many analysis tasks, but its application is still in its infancy. In this thesis, we develop machine learning methods for the study of rupture predictability and earthquake early warning. We first study the calibration of a high-confidence magnitude scale in a post hoc scenario. Subsequently, we focus on real-time estimation models based on deep learning and build the TEAM model for early warning. Based on TEAM, we develop TEAM-LM, a model for real-time location and magnitude estimation. In the last step, we use TEAM-LM to study rupture predictability on a dataset of teleseismic P-wave arrivals. We complement this analysis with results obtained from a deep learning model based on moment rate functions. Our analysis shows that earthquake ruptures are not predictable early on, but only after their peak moment release, at approximately half of their duration. Even then, potential further asperities cannot be foreseen. While this thesis finds no early rupture predictability, the methods developed within it demonstrate how deep learning makes high-quality real-time assessment of earthquakes practically feasible.
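    The half-duration finding can be illustrated with a deliberately simple toy: if the moment-rate function were a symmetric triangle, a magnitude estimate based only on the moment released so far would lag the final magnitude substantially before the peak. The triangular source time function and the chosen moment are illustrative assumptions; only the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1) is taken as given:

```python
import numpy as np

def moment_rate(t, duration=1.0):
    """Toy symmetric triangular source time function, peaking at half duration."""
    t = np.asarray(t, float)
    return np.clip(np.minimum(t, duration - t), 0.0, None)

def released_fraction(t_now, duration=1.0, n=4001):
    """Fraction of the total seismic moment released by time t_now."""
    t = np.linspace(0.0, duration, n)
    rate = moment_rate(t, duration)
    return rate[t <= t_now].sum() / rate.sum()

def mw(m0):
    """Moment magnitude from seismic moment M0 in N m."""
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

M0_final = 1e20                          # N m, roughly a Mw 7.3 event (illustrative)

# magnitude deficit of an estimate that only sees the moment released so far
deficit_quarter = mw(M0_final) - mw(M0_final * released_fraction(0.25))
deficit_half = mw(M0_final) - mw(M0_final * released_fraction(0.5))
```

    At a quarter of the duration the toy estimate is still more than half a magnitude unit short, while at the peak (half the duration) the deficit has shrunk to about 0.2 units, loosely mirroring the result that final size is only constrained after peak moment release.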

    Doctor of Philosophy in Computing

    An important area of medical imaging research is studying anatomical diffeomorphic shape changes and detecting their relationship to disease processes. For example, neurodegenerative disorders change the shape of the brain, so identifying differences between healthy control subjects and patients affected by these diseases can help with understanding the disease processes. Previous research proposed a variety of mathematical approaches for statistical analysis of geometrical brain structure in three-dimensional (3D) medical imaging, including atlas building, brain variability quantification, and regression. The critical component of these statistical models is that the geometrical structure is represented by transformations rather than the actual image data. Although such statistical models effectively provide a way of analyzing shape variation, none of them has a truly probabilistic interpretation. This dissertation contributes a novel Bayesian framework of statistical shape analysis for generic manifold data and its application to shape variability and brain magnetic resonance imaging (MRI). After carefully defining distributions on manifolds, we build Bayesian models for analyzing the intrinsic variability of manifold data, involving the mean point, principal modes, and parameter estimation. Because there is no closed-form solution for Bayesian inference of these models on manifolds, we develop a Markov chain Monte Carlo method to sample the hidden variables from the distribution. The main advantages of these Bayesian approaches are that they provide parameter estimation and automatic dimensionality reduction for analyzing generic manifold-valued data, such as diffeomorphisms. Modeling the mean point of a group of images in a Bayesian manner allows the regularity parameter to be learned directly from the data rather than set manually, which eliminates the effort of cross-validation for parameter selection.
    In population studies, our Bayesian model of principal modes analysis (1) automatically extracts low-dimensional, second-order statistics of manifold data variability and (2) gives a better geometric data fit than nonprobabilistic models. To make this Bayesian framework computationally more efficient for high-dimensional diffeomorphisms, this dissertation presents an algorithm, FLASH (finite-dimensional Lie algebras for shooting), that greatly speeds up diffeomorphic image registration. Instead of formulating diffeomorphisms as a continuous variational problem, FLASH defines a completely new discrete reparameterization of diffeomorphisms in a low-dimensional bandlimited velocity space, which makes Bayesian inference via sampling on the space of diffeomorphisms feasible in practice. The entire Bayesian framework in this dissertation is used for statistical analysis of shape data and brain MRIs. It has the potential to improve hypothesis testing, classification, and mixture models.
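    The key idea behind FLASH, representing a velocity field by a handful of low-frequency Fourier coefficients instead of dense voxel values, can be illustrated in one dimension. The field and bandwidth below are arbitrary illustrative choices, not the actual 3D implementation:

```python
import numpy as np

def bandlimit(v, keep):
    """Project a periodic 1-D field onto its lowest `keep` Fourier frequencies."""
    V = np.fft.rfft(v)
    V[keep:] = 0.0                         # discard all high-frequency content
    return np.fft.irfft(V, n=len(v))

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
v = np.sin(x) + 0.5 * np.sin(3.0 * x)      # a smooth "velocity field"

# 8 complex coefficients stand in for 256 samples: the smooth field survives
v_low = bandlimit(v, keep=8)
err = np.sqrt(np.mean((v - v_low) ** 2))

# rough components (here, added noise) are projected away almost entirely
noisy = v + 0.05 * np.random.default_rng(0).normal(size=n)
err_noisy = np.sqrt(np.mean((v - bandlimit(noisy, keep=8)) ** 2))
```

    Because smooth fields are already close to bandlimited, the truncation loses almost nothing, while the dimensionality of the sampling space drops from n samples to `keep` coefficients; this reduction is what makes MCMC sampling over diffeomorphisms computationally feasible.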

    Functional tissue engineering of human heart valve leaflets


    Application of Stochastic Simulation Methods to System Identification

    Reliable predictive models for the response of structures are a necessity for many branches of earthquake engineering, such as design, structural control, and structural health monitoring. However, the process of choosing an appropriate class of models to describe a system, known as model-class selection, and of identifying the specific predictive model based on available data, known as system identification, is difficult. Variability in material properties, complex constitutive behavior, uncertainty in the excitations caused by earthquakes, and limited constraining information (relatively few channels of data compared to the number of parameters needed for a useful predictive model) make system identification an ill-conditioned problem. In addition, model-class selection is not trivial, as it involves balancing predictive power with simplicity. These problems of system identification and model-class selection may be addressed using a Bayesian probabilistic framework that provides a rational, transparent method for combining prior knowledge of a system with measured data and for choosing between competing model classes. The probabilistic framework also allows for explicit quantification of the uncertainties associated with modeling a system. The essential idea is to use probability logic and Bayes' Theorem to give a measure of plausibility for a model or class of models that is updated with available data. Similar approaches have been used in the field of system identification, but many currently used methods for Bayesian updating focus on the model defined by the set of most plausible parameter values. The challenge for these approaches (referred to as asymptotic-approximation-based methods) arises when one must deal with ill-conditioned problems, where there may be many models with high plausibility rather than a single dominant model.
    It is demonstrated here that ill-conditioned problems in system identification and model-class selection can be effectively addressed using stochastic simulation methods. This work focuses on the application of stochastic simulation to updating and comparing model classes in problems of: (1) development of empirical ground motion attenuation relations, (2) structural model updating using incomplete modal data for the purposes of structural health monitoring, and (3) identification of hysteretic structural models, including degrading models, from seismic structural response. The results for system identification and model-class selection in this work fall into three categories. First, in cases where the existing asymptotic-approximation-based methods are appropriate (i.e., well-conditioned problems with one highest-plausibility model), the results obtained using stochastic simulation show good agreement with results from asymptotic-approximation-based methods. Second, for cases involving ill-conditioned problems based on simulated data, stochastic simulation methods are successfully applied to obtain results in a situation where the use of asymptotics is not feasible (specifically, the identification of hysteretic models). Third, preliminary studies using stochastic simulation to identify a deteriorating hysteretic model with relatively sparse real data from a structure damaged in the 1994 Northridge earthquake show that the high-plausibility models demonstrate behavior consistent with the observed damage, indicating that there is promise in applying these methods to ill-conditioned problems in the real world.
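    The contrast drawn above, sampling many plausible models rather than keeping only the most plausible parameter values, can be sketched with random-walk Metropolis on a toy identification problem. The single-degree-of-freedom setup, flat prior, and noise level are illustrative assumptions, not the structural models of this work:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "measured" natural frequencies of a single-DOF structure
m, k_true, sigma = 1.0, 4.0, 0.05
data = np.sqrt(k_true / m) + sigma * rng.normal(size=20)

def log_posterior(k):
    """Gaussian measurement errors, flat prior on stiffness k > 0."""
    if k <= 0:
        return -np.inf
    r = data - np.sqrt(k / m)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# random-walk Metropolis: draws an ensemble from the full posterior,
# rather than locating a single most-plausible stiffness value
k, lp = 1.0, log_posterior(1.0)
samples = []
for _ in range(20000):
    k_prop = k + 0.1 * rng.normal()
    lp_prop = log_posterior(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        k, lp = k_prop, lp_prop
    samples.append(k)
post = np.array(samples[5000:])                # discard burn-in
```

    The retained samples approximate the full posterior, so parameter uncertainty is quantified by the sample spread instead of a single point estimate; in an ill-conditioned problem the same chain would simply reveal several regions of high plausibility rather than one peak.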

    Hazelnut (Corylus avellana L.) response to water management


    Integrated Circuits and Systems for Smart Sensory Applications

    Connected intelligent sensing reshapes our society by empowering people with increasing new ways of mutual interaction. As integration technologies follow their scaling roadmap, the horizon of sensory applications is rapidly widening, thanks to myriad lightweight, low-power or, in some cases, even self-powered smart devices with high-connectivity capabilities. CMOS integrated circuit technology is the best candidate to supply the required smartness and to pioneer these emerging sensory systems. As a result, new challenges are arising around the design of these integrated circuits and systems for sensory applications in terms of low-power edge computing, power management strategies, low-range wireless communications, and integration with sensing devices. This Special Issue presents recent advances in application-specific integrated circuits (ASICs) and systems for smart sensory applications across five emerging topics: (I) dedicated short-range communications transceivers; (II) digital smart sensors; (III) implantable neural interfaces; (IV) power management strategies in wireless sensor nodes; and (V) neuromorphic hardware.