
    Modeling and Intelligent Control for Spatial Processes and Spatially Distributed Systems

    Dynamical systems are often characterized by their time-dependent evolution, named temporal dynamics. The space-dependent evolution of dynamical systems, named spatial dynamics, is another important domain of interest for many engineering applications. By studying both the spatial and temporal evolution, novel modeling and control applications may be developed for many industrial processes. One process of special interest is additive manufacturing, where a three-dimensional object is manufactured in a layer-wise fashion via a numerically controlled process. The material is printed over a spatial domain in each layer and subsequent layers are printed on top of each other. The spatial dynamics of the printing process over the layers is named the layer-to-layer spatial dynamics. Additive manufacturing provides great flexibility in terms of material selection and design geometry for modern manufacturing applications, and has been hailed as a cornerstone technology for smart manufacturing, or Industry 4.0, applications in industry. However, due to the issues in reliability and repeatability, the applicability of additive manufacturing in industry has been limited. Layer-to-layer spatial dynamics represent the dynamics of the printed part. Through the layer-to-layer spatial dynamics, it is possible to represent the physical properties of the part such as dimensional properties of each layer in the form of a heightmap over a spatial domain. Thus, by considering the spatial dynamics, it is possible to develop models and controllers for the physical properties of a printed part. This dissertation develops control-oriented models to characterize the spatial dynamics and layer-to-layer closed-loop controllers to improve the performance of the printed parts in the layer-to-layer spatial domain. In practice, additive manufacturing resources are often utilized as a fleet to improve the throughput and yield of a manufacturing system. An additive manufacturing fleet poses additional challenges in modeling, analysis, and control at a system-level. An additive manufacturing fleet is an instance of the more general class of spatially distributed systems, where the resources in the system (e.g., additive manufacturing machines, robots) are spatially distributed within the system. The goal is to efficiently model, analyze, and control spatially distributed systems by considering the system-level interactions of the resources. This dissertation develops a centralized system-level modeling and control framework for additive manufacturing fleets. Many monitoring and control applications rely on the availability of run-time, up-to-date representations of the physical resources (e.g., the spatial state of a process, connectivity and availability of resources in a fleet). Purpose-driven digital representations of the physical resources, known as digital twins, provide up-to-date digital representations of resources in run-time for analysis and control. This dissertation develops an extensible digital twin framework for cyber-physical manufacturing systems. The proposed digital twin framework is demonstrated through experimental case studies on abnormality detection, cyber-security, and spatial monitoring for additive manufacturing processes. 
The results and the contributions presented in this dissertation improve the performance and reliability of additive manufacturing processes and fleets for industrial applications, which in turn enables next-generation manufacturing systems with enhanced control and analysis capabilities through intelligent controllers and digital twins.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169635/1/baltaefe_1.pd
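To make the layer-to-layer idea concrete, here is a minimal sketch (hypothetical, not the dissertation's actual model or controller): a linear layer-to-layer heightmap model on a 1-D spatial grid, driven by a simple proportional layer-to-layer controller. All names, kernels, and parameter values are illustrative.

```python
import numpy as np

# Hypothetical layer-to-layer spatial model on a 1-D cross-section of a part:
#   h[k+1] = h[k] + G @ u[k] + w[k],
# where h[k] is the heightmap of layer k, u[k] the commanded deposition,
# G a spatial influence matrix (material spreads to neighbouring points),
# and w[k] a layer-wise disturbance.

n = 50                                  # points in the spatial domain
x = np.linspace(0.0, 1.0, n)

# Spatial influence: a narrow Gaussian kernel around each command location.
G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.03) ** 2)
G /= G.sum(axis=1, keepdims=True)

h_ref = 0.1 * np.ones(n)                # desired height increment per layer
h = np.zeros(n)                         # initial (flat) build surface
target = np.zeros(n)                    # cumulative reference surface

rng = np.random.default_rng(0)
for k in range(20):                     # print 20 layers
    target += h_ref
    # Layer-to-layer proportional controller: command more material where the
    # surface lags the cumulative reference, less where it overshoots.
    u = h_ref + 0.8 * (target - h - h_ref)
    w = 0.002 * rng.standard_normal(n)  # layer-wise disturbance
    h = h + G @ u + w

print("max |height error| after 20 layers:", np.max(np.abs(h - target)))
```

The point of the sketch is only that closing the loop in the layer-to-layer direction corrects accumulated geometric error that an open-loop deposition schedule would let grow.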

    Smart Sensing: Selection, Prediction and Monitoring

    A sensor is a device used to detect physical parameters of interest such as temperature, pressure, or strain, performing the so-called sensing process. Such devices have been widely adopted in fields including aeronautics, automotive, security, logistics, and health care. The essential difference between a smart sensor and a standard sensor is its intelligence: smart sensors capture and elaborate data from the environment while communicating and interacting with other systems in order to make predictions and find intelligent solutions based on the application needs. The first part of this thesis focuses on sensor selection in the context of virtual sensing of temperature in indoor environments, a topic of paramount importance because it increases the accuracy of the predictive models employed in the following phases by providing more informative data. In particular, virtual sensing refers to the process of estimating or predicting physical parameters without relying on physical sensors, using computational algorithms and predictive models to gather and analyze data for accurate predictions. We analyze the literature and propose and evaluate methodologies and solutions for sensor selection and placement based on machine learning techniques, including evolutionary algorithms. Thereafter, once the physical sensors to use have been determined, the focus shifts to virtual sensing strategies for temperature prediction, which allow uniform monitoring of uncovered or unreachable locations, reduce sensor deployment costs, and provide, at the same time, a fallback solution in case of sensor failures. For this purpose, we conduct a comprehensive assessment of different virtual sensing strategies, including novel solutions based on recurrent neural networks and graph neural networks that effectively exploit spatio-temporal features. The methodologies considered so far can accurately complete the information coming from real physical sensors, allowing us to effectively carry out monitoring tasks such as anomaly or event detection. Therefore, the final part of this work looks at sensors from another, more formal, point of view. Specifically, it is devoted to the study and design of a framework that pairs monitoring and machine learning techniques in order to detect, in a preemptive manner, critical behaviours of a system that could lead to a failure. This is done by extracting interpretable properties, expressed in a given temporal logic formalism, from sensor data. The proposed framework is evaluated through an experimental assessment on benchmark datasets and compared to previous approaches from the literature.
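As a toy illustration of the selection-plus-virtual-sensing idea (not the thesis's actual methodology, which uses evolutionary algorithms and recurrent/graph neural networks), the sketch below greedily selects a few physical sensors and fits a closed-form ridge-regression virtual sensor for an unmonitored location; all data and names are synthetic.

```python
import numpy as np

# Greedy forward sensor selection + ridge-regression virtual sensor (illustrative).

rng = np.random.default_rng(1)
T, n_sensors = 500, 12
readings = rng.normal(22.0, 1.0, (T, n_sensors))                      # simulated sensor data
target = readings[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.1, T)  # unmonitored spot

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression with an intercept column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def ridge_predict(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Forward selection: repeatedly add the sensor that most reduces training error.
selected, k = [], 4
for _ in range(k):
    best, best_err = None, np.inf
    for j in range(n_sensors):
        if j in selected:
            continue
        cols = selected + [j]
        w = ridge_fit(readings[:, cols], target)
        err = np.mean((ridge_predict(readings[:, cols], w) - target) ** 2)
        if err < best_err:
            best, best_err = j, err
    selected.append(best)

w = ridge_fit(readings[:, selected], target)
rmse = np.sqrt(np.mean((ridge_predict(readings[:, selected], w) - target) ** 2))
print("selected sensors:", selected)
print("training RMSE:", rmse)
```

The same structure carries over when the regressor is replaced by a recurrent or graph neural network: the selection step decides which physical sensors feed the model, and the model then supplies readings for locations without hardware.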

    Modelling seismicity as a spatio-temporal point process using inlabru

    Reliable deterministic prediction of earthquake occurrence is not possible at present, and may never be. In the absence of a reliable deterministic model, we need alternative strategies to manage seismic hazard and risk. This involves making statements of the likelihood of earthquake occurrence in space and time, including a fair and accurate description of the uncertainty around the statements used in operational decision-making. Probabilistic Seismic Hazard Analysis (PSHA) and Operational Earthquake Forecasting (OEF) have the role of providing probabilistic statements on the hazard associated with earthquakes on long-term (decades to centuries) and short-term (days to decades) time frames respectively. Both PSHA and OEF rely on a source model able to describe the occurrence of earthquakes. Earthquake occurrence in PSHA is commonly modelled using a spatially variable Poisson process; such models are therefore calibrated on declustered catalogues, which retain only the largest earthquakes in a sequence. OEF models, on the other hand, are commonly time-dependent models which describe the occurrence of all events above a certain magnitude threshold, including dependent events such as aftershocks or swarms. They are calibrated on the full earthquake catalogue and provide accurate descriptions of the clustering process and the time-evolution of earthquake sequences. The Epidemic-Type Aftershock Sequence (ETAS) model is the most commonly used time-dependent seismicity model and belongs to the general class of Hawkes (or self-exciting) processes. Under the ETAS model, any earthquake in the sequence can induce (or trigger) its own subsequence of earthquakes in a cascade of events, as commonly observed in nature. The earthquake catalogue is then the union of a set of events occurring independently from each other (background events) and a set of events which have been induced or triggered by another event (aftershocks). The reliability of PSHA or OEF strategies depends upon the reliability of the source model used to describe earthquake occurrence. In order to improve the source model, we need the ability to (a) incorporate hypotheses on earthquake occurrence in a model, and (b) validate the model against observed data. Both tasks are problematic. Indeed, the complex mathematical form of the ETAS model requires ad-hoc methodologies to perform inference on the model parameters. These methodologies then need further modification if the classical ETAS model is adjusted to introduce new hypotheses. Comparing forecasts produced by models that incorporate different hypotheses and are calibrated with different methods is problematic, because it is difficult (if not impossible) to determine where the differences in the forecasts come from. Therefore, a unique framework capable of supporting ETAS models incorporating different hypotheses would be beneficial. Similarly, the validation step has to be done on models calibrated on the same data and producing forecasts for the same spatio-temporal region. Moreover, the validation must ultimately be done against future data, unknown at the moment the forecasts are produced, to ensure that no information about the data used to validate the models is incorporated in the models themselves. Hence, the Collaboratory for the Study of Earthquake Predictability (CSEP) was founded with the role of gathering forecasting models and running fully prospective forecasting experiments in an open environment.
CSEP ensures that the models are validated fairly, using a set of community-agreed metrics which measure the agreement between forecasts and observed outcomes. In this thesis, I present and apply a new Bayesian approximation technique for Hawkes process models (including ETAS). I also demonstrate the importance of one of the statistical properties that scores used to rank competing forecasts need to have in order to provide trustworthy results. The Bayesian framework allows an accurate description of the uncertainty around model parameters, which can then be propagated to any quantity of interest. In the context of Bayesian statistics, the most commonly used techniques to perform inference are Markov Chain Monte Carlo (MCMC) techniques, which are sampling-based methods. Instead, I use the Integrated Nested Laplace Approximation (INLA) to provide a deterministic approximation of the parameter posterior distribution in place of random sampling. INLA is faster than MCMC for problems involving a large number of correlated parameters and offers an alternative way to implement complex statistical models which are computationally infeasible with MCMC. This provides researchers and practitioners with a statistical framework to formulate ETAS models incorporating different hypotheses, produce forecasts that account for uncertainty, and test them using CSEP procedures. I build on previous work implementing time-independent models for seismicity with INLA, which provided a framework to study the effect of covariates such as depth, GPS displacement, heat flow, strain rate, and distance to the nearest fault, but lacked the ability to describe the clustering process of earthquakes. I show that this work can be extended to include time-dependent Hawkes process models and run in a reasonable computational time using INLA. In this framework, the information from covariates can be incorporated both in modelling the rate of background events and in modelling the number of aftershocks. This resembles how information on covariates is incorporated in Generalized Linear Models (GLMs), which are widely used to study the effect of covariates on a range of phenomena. Indeed, this work offers a way to borrow ideas and techniques used with GLMs and apply them to seismicity analyses. To make the proposed technique widely accessible, I have developed a new R package called ETAS.inlabru which offers user-friendly access to the proposed methodology. The ETAS.inlabru package is based on the inlabru R package, which offers access to the INLA methodology. In this thesis, I compare our approach with the MCMC technique implemented in the bayesianETAS package and show that ETAS.inlabru provides similar results to bayesianETAS, but is faster, scales more efficiently as the amount of data increases, and can support a wider range of ETAS models, specifically those involving multiple covariates. I believe that this work provides users with a reliable Bayesian framework for the ETAS model, alleviating the burden of modifying or coding their own optimization routines and allowing more flexibility in the range of hypotheses that can be incorporated and validated. In this thesis, I have analysed the 2009 L'Aquila and 2016 Amatrice seismic sequences, which occurred in central Italy, and found that the depth of the events has a negative effect on aftershock productivity, and that models involving covariates show a better fit to the data than the classical ETAS model.
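For orientation, a standard form of the space-time ETAS conditional intensity is shown below; the log-linear covariate terms in the background rate and in the productivity are an illustrative parametrisation of the idea described above, not necessarily the exact one implemented in ETAS.inlabru.

```latex
% Standard space-time ETAS conditional intensity; the log-linear covariate
% terms in the background rate and productivity are illustrative only.
\begin{align*}
\lambda(t, x, y \mid \mathcal{H}_t)
  &= \mu(x, y)
   + \sum_{i:\, t_i < t} K_i\, e^{\alpha (m_i - M_0)}\,
     (t - t_i + c)^{-p}\, f(x - x_i,\, y - y_i \mid m_i), \\
\log \mu(x, y) &= \beta_0 + \boldsymbol{\beta}^{\top} \mathbf{z}(x, y),
\qquad
\log K_i = \gamma_0 + \boldsymbol{\gamma}^{\top} \mathbf{z}(x_i, y_i).
\end{align*}
```

Here \(\mu\) is the background intensity, the sum runs over all past events in the history \(\mathcal{H}_t\), \(f\) is a spatial triggering kernel, and \(\mathbf{z}\) collects covariates such as depth, heat flow, or strain rate, entering the background rate and the aftershock productivity in a GLM-like fashion.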
Regarding the statistical properties that scores need to possess to provide trustworthy rankings of competing forecasts, I focus on the notion of proper scores. I show that the Parimutuel Gambling (PG) score, used to rank forecasts in previous CSEP experiments, has been used in situations in which it is not proper. Indeed, I demonstrate that the PG score is proper only in a specific situation and improper in general. I compare its performance with two proper alternatives: the Brier and the Logarithmic (Log) scores. The simulation procedure employed for this part of the thesis can be easily adapted to study the properties of other validation procedures, such as the ones used in CSEP, or to determine important quantities for the experimental design, such as the amount of data with which the comparison should be performed. This contributes to the wider discussion on the statistical properties of CSEP tests, and is an additional step in determining sanity checks that scoring rules have to pass before being used to validate earthquake forecasts in CSEP experiments.
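As a small self-contained illustration of proper scoring (not CSEP's or the thesis's implementation), the sketch below computes the Brier and logarithmic scores for binary-outcome forecasts and checks numerically that the honest forecast scores best in expectation; all data are synthetic.

```python
import numpy as np

# Brier and logarithmic scores for probabilistic forecasts of binary outcomes
# (e.g. "at least one event in a space-time-magnitude bin"). Both are proper:
# in expectation they are optimised by forecasting the true event probability.

def brier_score(p, outcome):
    """Negatively oriented Brier score: lower is better."""
    p, outcome = np.asarray(p, float), np.asarray(outcome, float)
    return np.mean((p - outcome) ** 2)

def log_score(p, outcome, eps=1e-12):
    """Negatively oriented logarithmic score (mean negative log-likelihood)."""
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    outcome = np.asarray(outcome, float)
    return -np.mean(outcome * np.log(p) + (1 - outcome) * np.log(1 - p))

# Toy check of propriety: against outcomes drawn with true probability 0.2,
# the honest forecast scores better (lower) in expectation than a biased one.
rng = np.random.default_rng(42)
outcomes = rng.random(100_000) < 0.2
for q in (0.2, 0.4):
    forecast = np.full(outcomes.size, q)
    print(f"forecast {q:.1f}: Brier={brier_score(forecast, outcomes):.4f}, "
          f"Log={log_score(forecast, outcomes):.4f}")
```

An improper score, by contrast, can reward a forecaster for reporting something other than their true beliefs, which is exactly the failure mode the propriety check is meant to rule out before a score is used in a forecasting experiment.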

    New Perspectives in the Definition/Evaluation of Seismic Hazard through Analysis of the Environmental Effects Induced by Earthquakes

    The devastating effects caused by recent catastrophic earthquakes all over the world, from Japan and New Zealand to Chile, as well as those occurring in the Mediterranean basin, have once again shown that ground motion, although a serious source of direct damage, is not the only parameter to be considered, with most of the damage being the result of coseismic geological effects that are directly connected to the earthquake source or caused by ground shaking. The primary environmental effects induced by earthquakes as well as the secondary effects (sensu the Environmental Seismic Intensity - ESI 2007 scale) must be considered for a more correct and complete evaluation of seismic hazards, at both regional and local scales. This Special Issue aims to collect contributions that, using different methodologies, integrate new data produced with multi-disciplinary and innovative methods. These methodologies are essential for the identification and characterization of seismically active areas and for the development of new hazard models obtained using different survey techniques. The topic attracted considerable interest: 19 peer-reviewed articles were collected, and different areas of the world have been analyzed through these methodologies, including Italy, the USA, Spain, Australia, Ecuador, Guatemala, South Korea, Kyrgyzstan, Mongolia, Russia, China, Japan, and Nepal.

    Handbook of Mathematical Geosciences

    This Open Access handbook, published on the occasion of the IAMG's 50th anniversary, presents a compilation of invited path-breaking research contributions by award-winning geoscientists who have been instrumental in shaping the IAMG. It contains 45 chapters categorized broadly into five parts: (i) theory, (ii) general applications, (iii) exploration and resource estimation, (iv) reviews, and (v) reminiscences, covering related topics such as mathematical geosciences, mathematical morphology, geostatistics, fractals and multifractals, spatial statistics, multipoint geostatistics, compositional data analysis, informatics, geocomputation, numerical methods, and chaos theory in the geosciences.

    How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review

    Context: Machine Learning (ML) has been at the heart of many innovations over the past years. However, including it in so-called 'safety-critical' systems such as automotive or aeronautic systems has proven to be very challenging, since the shift in paradigm that ML brings completely changes traditional certification approaches. Objective: This paper aims to elucidate challenges related to the certification of ML-based safety-critical systems, as well as the solutions that are proposed in the literature to tackle them, answering the question 'How to Certify Machine Learning Based Safety-critical Systems?'. Method: We conduct a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct Certification. We analyzed the main trends and problems of each sub-field and provided summaries of the papers extracted. Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and types of models. They also emphasized the need to further develop connections between academia and industry to deepen the study of the domain. Finally, they illustrated the necessity of building connections between the above-mentioned main pillars, which are for now mainly studied separately. Conclusion: We highlight current efforts deployed to enable the certification of ML-based software systems, and discuss some future research directions.
Comment: 60 pages (92 pages with references and complements), submitted to a journal (Automated Software Engineering). Changes: emphasizing the difference between traditional software engineering and ML approaches; adding Related Works, Threats to Validity, and Complementary Materials; adding a table listing paper references for each section/subsection.

    Monitoring and analysis system for performance troubleshooting in data centers

    It was not long ago. On Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. One example was Netflix, which was using hundreds of Amazon ELB services and experienced an extensive streaming service outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As this Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time-consuming. To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations which data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope. By running these anomaly detection algorithms, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze the interactions to find out which components are relevant to the performance issue. VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a 'thin layer' that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation, it operates with levels of perturbation over 400% lower than those seen for brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces. The experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
Ph.D.
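For illustration only (this is not VScope's actual detection algorithm), the sketch below shows the kind of lightweight online anomaly detector a monitoring overlay might run per metric stream, flagging latency samples that deviate strongly from exponentially weighted running statistics; all names and numbers are hypothetical.

```python
import numpy as np

# A per-metric online detector based on an exponentially weighted moving
# mean and variance: each sample is scored against the running statistics,
# and large deviations are flagged as performance anomalies.

class EwmaAnomalyDetector:
    def __init__(self, alpha=0.05, threshold=4.0, warmup=30):
        self.alpha = alpha          # smoothing factor for mean/variance
        self.threshold = threshold  # flag if the z-score exceeds this
        self.warmup = warmup        # samples used only to initialise statistics
        self.count = 0
        self.mean = 0.0
        self.var = 0.0

    def update(self, x):
        """Feed one metric sample; return True if it looks anomalous."""
        self.count += 1
        if self.count == 1:
            self.mean = x
            return False
        z = abs(x - self.mean) / np.sqrt(self.var) if self.var > 0 else 0.0
        # Update running statistics after scoring the sample.
        diff = x - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff ** 2)
        return self.count > self.warmup and z > self.threshold

# Example: mostly normal request latencies with an injected spike.
rng = np.random.default_rng(3)
latencies = rng.normal(20.0, 2.0, 300)   # milliseconds
latencies[250] = 80.0                    # simulated performance anomaly

detector = EwmaAnomalyDetector()
alerts = [i for i, x in enumerate(latencies) if detector.update(x)]
print("anomalous samples at indices:", alerts)
```

Running such a detector on each node keeps per-sample cost constant, which is the property a monitoring layer needs if it is to perturb the monitored applications as little as possible.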

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).

    Turku Centre for Computer Science – Annual Report 2013

    Due to a major reform of the organization and responsibilities of TUCS, its role, activities, and even structures have been under reconsideration in 2013. The traditional pillar of collaboration at TUCS, doctoral training, was reorganized due to changes at both universities according to the renewed national system for doctoral education. Computer Science and Engineering and Information Systems Science are now accompanied by Mathematics and Statistics in newly established doctoral programmes at both the University of Turku and Åbo Akademi University. Moreover, both universities granted sufficient resources to their respective programmes for doctoral training in these fields, so that joint activities at TUCS can continue. The outcome of this reorganization has the potential to prove a success in terms of scientific profile as well as the quality and quantity of scientific and educational results. International activities that have been characteristic of TUCS since its inception continue strong. TUCS' participation in European collaboration through the EIT ICT Labs Master's and Doctoral School is now more active than ever. The new double degree programmes at MSc and PhD level between the University of Turku and Fudan University in Shanghai, P.R. China, were successfully set up and are now running for their first year. The joint students will add to the already international atmosphere of the ICT House. The four new thematic research programmes set up according to the decision by the TUCS Board have now established themselves, and a number of events and other activities saw the light in 2013. The TUCS Distinguished Lecture Series managed to gather a large audience with its several prominent speakers. The development of these and other research centre activities continues, and new practices and structures will be initiated to support the tradition of close academic collaboration. The TUCS slogan, Where Academic Tradition Meets the Exciting Future, has proven true throughout these changes. Despite the dark clouds in the national and European economic sky, science and higher education in the field have managed to retain all the key ingredients for success. Indeed, the future of ICT and Mathematics in Turku seems exciting.