8,233 research outputs found

    Post-Disaster Supply Chain Interdependent Critical Infrastructure System Restoration: A Review of Data Necessary and Available for Modeling

    The majority of restoration strategies in the wake of large-scale disasters have focused on short-term emergency response solutions. Few consider medium- to long-term restoration strategies to reconnect urban areas to national supply chain interdependent critical infrastructure systems (SCICI). These SCICI promote the effective flow of goods, services, and information vital to the economic vitality of an urban environment. To re-establish the connectivity between the different SCICI that is broken during a disaster, relationships between these systems must be identified, formulated, and added to a common framework to form a system-level restoration plan. Accomplishing this goal requires a considerable collection of SCICI data. The aim of this paper is to review what data are required for model construction, how accessible these data are, and how they can be integrated with each other. A review of publicly available data reveals a dearth of real-time data to assist in modeling long-term recovery following an extreme event, but a significant amount of static data does exist, and these data can be used to model the complex interdependencies needed. For the sake of illustration, a particular SCICI (transportation) is used to highlight the challenges of determining the interdependencies and creating models capable of describing the complexity of an urban environment with the data publicly available. Integration of data derived from public domain sources is most readily achieved in a geospatial environment, since geospatial infrastructure data are the most abundant data source. However, while significant quantities of data can be acquired through public sources, a significant effort is still required to gather, develop, and integrate these data from multiple sources to build a complete model. Therefore, while continued availability of high-quality public information is essential for modeling efforts in academic as well as government communities, a more streamlined approach to real-time acquisition and integration of these data is also essential
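
    The abstract's point that public infrastructure layers are most readily integrated in a geospatial environment can be illustrated with a minimal geopandas sketch: joining a road layer to critical facilities by proximity. The layers, coordinates, column names, and the 500 m radius below are illustrative assumptions, not data from the paper.

    # Minimal illustration of integrating public infrastructure layers in a
    # geospatial environment: find road segments within 500 m of each critical
    # facility. Layers, coordinates, and the 500 m radius are illustrative only.
    import geopandas as gpd
    from shapely.geometry import LineString, Point

    roads = gpd.GeoDataFrame(
        {"road_id": ["I-10 ramp", "Main St"],
         "geometry": [LineString([(0, 0), (1000, 0)]),
                      LineString([(0, 400), (1000, 400)])]},
        crs="EPSG:32615")  # projected CRS so distances are in metres
    facilities = gpd.GeoDataFrame(
        {"facility": ["hospital", "fuel terminal"],
         "geometry": [Point(500, 300), Point(900, 2000)]},
        crs="EPSG:32615")

    # Buffer the facilities and spatially join: which roads serve which facility?
    service_area = facilities.copy()
    service_area["geometry"] = facilities.buffer(500)
    links = gpd.sjoin(roads, service_area, how="inner", predicate="intersects")
    print(links[["road_id", "facility"]])  # note: older geopandas uses op= instead of predicate=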

    Systematic data analysis-based validation of simulation models with heterogeneous data sources

    Complex networked computer systems are subjected to continuous upgrades. Modeling and simulation (M&S) of such systems helps guide their engineering processes when testing design options on the real system is not an option. Too often, many of the system’s operational conditions need to be assumed in order to focus on the questions at hand, a typical case being the exogenous workload. Meanwhile, soaring amounts of monitoring information are logged to analyze the system’s performance in search of improvement opportunities, and the research questions themselves mutate as operational conditions vary throughout the system’s lifetime. This context poses many challenges for assessing the validity of simulation models. As the empirical knowledge base of the system grows, the question arises whether a simulation model that was once deemed valid could be invalidated under unprecedented operating conditions. This work presents a conceptual framework and a practical prototype that help answer this question in a systematic, automated way. MASADA parses recorded operation intervals and automatically parameterizes, launches, and validates simulation experiments. MASADA has been tested on the data acquisition network of the ATLAS particle physics experiment at CERN. The result is an efficient framework for validating our models on a continuous basis as new particle collisions impose unpredictable network workloads
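
    A minimal sketch of the replay-and-validate loop described above, under stated assumptions: `run_simulation` is a hypothetical callable standing in for the parameterized simulation launch, and the 15% relative-error acceptance criterion is an arbitrary example, not MASADA's actual validation logic.

    # Sketch of a continuous validation loop over recorded operation intervals.
    # `run_simulation` is a hypothetical placeholder for the parameterized
    # simulation launch; the 15% relative-error criterion is an assumed example.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Interval:
        name: str
        workload: Dict[str, float]   # recorded exogenous conditions (model inputs)
        observed: Dict[str, float]   # measured performance metrics (ground truth)

    def validate(intervals: List[Interval],
                 run_simulation: Callable[[Dict[str, float]], Dict[str, float]],
                 rel_tol: float = 0.15) -> Dict[str, bool]:
        """Re-parameterize and run the model for each recorded interval, then
        flag intervals where any simulated metric deviates beyond rel_tol."""
        verdicts = {}
        for iv in intervals:
            simulated = run_simulation(iv.workload)
            verdicts[iv.name] = all(
                abs(simulated[m] - iv.observed[m]) <= rel_tol * max(abs(iv.observed[m]), 1e-9)
                for m in iv.observed
            )
        return verdicts

    # Any interval marked False is a candidate for recalibration, or grounds for
    # declaring the model invalid under those operating conditions.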

    Topics in perturbation analysis for stochastic hybrid systems

    Control and optimization of Stochastic Hybrid Systems (SHS) constitute increasingly active fields of research. However, the size and complexity of SHS frequently render the use of exhaustive verification techniques prohibitive. In this context, Perturbation Analysis techniques, and in particular Infinitesimal Perturbation Analysis (IPA), have proven to be particularly useful for this class of systems. This work focuses on applying IPA to two different problems: Traffic Light Control (TLC) and control of cancer progression, both of which are viewed as dynamic optimization problems in an SHS environment. The first part of this thesis addresses the TLC problem for a single intersection modeled as an SHS. A quasi-dynamic control policy is proposed based on partial state information defined by detecting whether vehicle backlogs are above or below certain controllable threshold values. At first, the threshold parameters are controlled while assuming fixed cycle lengths, and online gradient estimates of a cost metric with respect to these controllable parameters are derived using IPA techniques. These estimators are subsequently used to iteratively adjust the threshold values so as to improve overall system performance. This quasi-dynamic analysis of the TLC problem is subsequently extended to parameterize the control policy by green and red cycle lengths as well as queue content thresholds. The IPA estimators necessary to simultaneously control the light cycles and thresholds are rederived and thereafter incorporated into a standard gradient-based scheme in order to further improve system performance. In the second part of this thesis, the problem of controlling cancer progression is formulated within a Stochastic Hybrid Automaton (SHA) framework. Leveraging the fact that the cell-biologic changes necessary for cancer development may be schematized as a series of discrete steps, an integrative closed-loop framework is proposed for describing the progressive development of cancer and determining optimal personalized therapies. First, the problem of cancer heterogeneity is addressed through a novel Mixed Integer Linear Programming (MILP) formulation that integrates somatic mutation and gene expression data to infer the temporal sequence of events from cross-sectional data. This formulation is tested using both simulated data and real breast cancer data with matched somatic mutation and gene expression measurements from The Cancer Genome Atlas (TCGA). Second, the use of basic IPA techniques for optimal personalized cancer therapy design is introduced and a methodology applicable to stochastic models of cancer progression is developed. A case study of optimal therapy design for advanced prostate cancer is performed. Given the importance of accurate modeling in conjunction with optimal therapy design, an ensuing analysis is performed in which sensitivity estimates with respect to several model parameters are evaluated and critical parameters are identified. Finally, the tradeoff between system optimality and robustness (or, equivalently, fragility) is explored so as to generate valuable insights on modeling and control of cancer progression
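
    The threshold-adjustment step can be pictured as a stochastic gradient loop in which the IPA estimator is treated as a black box. The sketch below is generic, not the thesis's derivation: `simulate_and_estimate` is a hypothetical function returning one sample-path cost and its IPA gradient, and the step-size rule and bounds are illustrative.

    # Generic gradient-descent loop over controllable thresholds, with the IPA
    # estimator treated as a black box. `simulate_and_estimate` is hypothetical:
    # it runs one sample path and returns (cost, IPA gradient of cost w.r.t. theta).
    import numpy as np

    def optimize_thresholds(theta0, simulate_and_estimate, step=0.5, iters=100,
                            lower=0.0, upper=50.0):
        """theta: vector of queue-content thresholds (and/or cycle lengths)."""
        theta = np.asarray(theta0, dtype=float)
        for k in range(iters):
            _cost, grad = simulate_and_estimate(theta)
            theta = np.clip(theta - step / (1 + k) * grad, lower, upper)  # diminishing step size
        return theta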

    Predictive Maintenance on the Machining Process and Machine Tool

    This paper presents the process required to implement data-driven Predictive Maintenance (PdM), covering not only the machine's decision making but also data acquisition and processing. A short review of the different approaches and techniques in maintenance is given. The main contribution of this paper is a solution to the predictive maintenance problem in a real machining process. The several steps needed to reach the solution are carefully explained. The obtained results show that the Preventive Maintenance (PM) carried out in a real machining process could be changed into a PdM approach. A decision-making application was developed to provide a visual analysis of the Remaining Useful Life (RUL) of the machining tool. This work is a proof of concept of the presented methodology in one process, but it is replicable for most processes in the serial production of parts
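
    For illustration only, a minimal way to turn a monotone wear indicator into an RUL estimate is to fit a linear trend and extrapolate to an assumed failure threshold; the indicator values, threshold, and sampling period below are invented for the example and are not taken from the paper.

    # Minimal RUL illustration: fit a linear trend to a wear/degradation indicator
    # and extrapolate to an assumed failure threshold. The indicator, threshold,
    # and sampling interval are illustrative assumptions, not values from the paper.
    import numpy as np

    def estimate_rul(indicator: np.ndarray, failure_threshold: float,
                     sample_period_s: float) -> float:
        """Return estimated remaining useful life in seconds (inf if no upward trend)."""
        t = np.arange(len(indicator)) * sample_period_s
        slope, intercept = np.polyfit(t, indicator, 1)
        if slope <= 0:
            return float("inf")
        t_fail = (failure_threshold - intercept) / slope
        return max(t_fail - t[-1], 0.0)

    # Example: vibration-based wear indicator sampled every 10 s
    wear = np.array([0.11, 0.12, 0.14, 0.15, 0.18, 0.20, 0.23])
    print(estimate_rul(wear, failure_threshold=0.8, sample_period_s=10.0))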

    Benchmarking Data Acquisition event building network performance for the ATLAS HL-LHC upgrade

    The ATLAS experiment’s data acquisition (DAQ) system will be extensively updated to take full advantage of the High-Luminosity LHC (HL-LHC) upgrade, allowing it to record data at unprecedented rates. The detector will be read out at 1 MHz, generating over 5 TB/s of data. This design poses significant challenges for the Ethernet-based network, which will have to transport 20 times more data than during Run 3. The increased data rate, data sizes, and number of servers will exacerbate the TCP Incast effect observed in the past, making it impossible to fully exploit the capabilities of the network and limiting the performance of the processing farm. We present exhaustive and systematic experiments to define the buffering requirements in network equipment that minimise the effects of TCP Incast and reduce the impact on processing applications. Both deep- and shallow-buffer switches were stress-tested using DAQ traffic patterns in a test environment at approximately 10% of the expected HL-LHC DAQ system size. As the desired HL-LHC system hardware is not currently available and the laboratory is significantly smaller, the tests aim to extrapolate buffer requirements to the expected operating point. A novel analytical formula and new simulation models have been developed to cross-validate the results. The results of these evaluations will contribute to the decision-making process for the acquisition of network hardware for the HL-LHC DAQ
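
    The buffering problem can be made concrete with a back-of-the-envelope bound on egress-queue occupancy during an N-to-1 incast burst. The sketch below is not the paper's analytical formula; the sender count, fragment size, and link speeds are illustrative values, not the ATLAS DAQ operating point.

    # Back-of-the-envelope incast estimate: N readout servers answer one request
    # at the same time, each over its own link, converging on one egress port.
    # All values are illustrative, not the ATLAS DAQ operating point.

    def worst_case_buffer_bytes(n_senders: int, fragment_bytes: int,
                                sender_link_gbps: float, egress_gbps: float) -> float:
        """Upper bound on egress-queue occupancy if all fragments arrive in parallel."""
        burst = n_senders * fragment_bytes                         # bytes offered at once
        arrival_s = fragment_bytes * 8 / (sender_link_gbps * 1e9)  # time to receive one fragment
        drained = egress_gbps * 1e9 / 8 * arrival_s                # bytes sent on during arrival
        return max(burst - drained, 0.0)

    # e.g. 200 servers x 64 kB fragments over 10G links into a single 100G egress
    print(worst_case_buffer_bytes(200, 64_000, sender_link_gbps=10, egress_gbps=100))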

    Proceedings, MSVSCC 2014

    Proceedings of the 8th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 17, 2014 at VMASC in Suffolk, Virginia

    Deep Model for Improved Operator Function State Assessment

    A deep learning framework is presented for engagement assessment using EEG signals. Deep learning is a recently developed machine learning technique that has been applied in many domains. In this paper, we propose a deep learning strategy for operator function state (OFS) assessment. Fifteen pilots participated in a flight simulation from Seattle to Chicago. During the four-hour simulation, EEG signals were recorded for each pilot. We labeled 20 minutes of data as engaged or disengaged to fine-tune the deep network and utilized the remaining vast amount of unlabeled data to initialize the network. The trained deep network was then used to assess whether a pilot was engaged during the four-hour simulation
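
    The pretrain-then-fine-tune pattern described above can be sketched as follows, with a plain autoencoder standing in for whatever deep architecture the authors actually used; the feature dimension, layer sizes, and training settings are illustrative assumptions.

    # Sketch of "pretrain on unlabeled EEG features, then fine-tune on the small
    # labeled subset". A plain autoencoder stands in for the paper's deep network;
    # the feature dimension (64), layer sizes, and epochs are assumptions.
    import torch
    import torch.nn as nn

    FEAT = 64  # assumed EEG feature dimension per sample
    encoder = nn.Sequential(nn.Linear(FEAT, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU())
    decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, FEAT))

    def pretrain(unlabeled: torch.Tensor, epochs: int = 20) -> None:
        """Unsupervised phase: initialize the encoder by reconstructing unlabeled data."""
        opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(decoder(encoder(unlabeled)), unlabeled)
            loss.backward()
            opt.step()

    def finetune(x_lab: torch.Tensor, y_lab: torch.Tensor, epochs: int = 50) -> nn.Module:
        """Supervised phase: attach a 2-class head (engaged / disengaged) and fine-tune.
        y_lab is a LongTensor of 0/1 labels for the labeled 20 minutes."""
        model = nn.Sequential(encoder, nn.Linear(16, 2))
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(x_lab), y_lab)
            loss.backward()
            opt.step()
        return model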

    Cooling tower modeling and waste heat recovery

    The European Organization for Nuclear Research (CERN) has committed to minimizing the environmental impact of the wide range of activities the Laboratory carries out. A key measure is waste heat recovery (WHR), as approximately 75% of the power consumed in the electricity-intensive particle accelerator complex is dissipated to the sky as waste heat by cooling towers. A project to install waste heat recovery in the cooling system of the Large Hadron Collider (LHC) at LHC point 8 has started and is expected to be operational in the beginning of 2020, after the second LHC long shutdown in 2018-2019. The operation of WHR creates a risk of temperature transients in case of unexpected WHR shutdowns, so a dynamic simulation model of the cooling towers is needed to verify robustness against these transients. In this thesis, a thorough literature review of existing evaporative cooling tower modeling methods is performed, and the hybrid modeling method presented by Jin et al. (2007) is implemented to simulate the cooling towers at LHC point 8. The developed model is validated against real operational data. To the author's knowledge, this study is the first published use case of this evaporative cooling tower modeling method. A selection of anticipated sudden WHR shutdown scenarios is simulated in a virtual commissioning environment with a real programmable logic controller (PLC) to verify robustness of the cooling system against sudden temperature transients. The conclusion is that the cooling towers and their current control scheme are sufficient to dampen the anticipated temperature transients, which allows the WHR installation project to proceed
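
    The kind of transient the thesis is concerned with can be illustrated with a generic first-order energy balance on the cooled-water inventory when the WHR heat sink trips. This is not the Jin et al. (2007) hybrid tower model used in the thesis: the tower is reduced to a constant-conductance term and every parameter value is an assumption.

    # Generic first-order energy balance for the cooled-water temperature when the
    # WHR heat sink trips: dT/dt = (Q_load - Q_whr(t) - Q_tower(T)) / (m * cp).
    # NOT the Jin et al. (2007) hybrid tower model; all values are illustrative.
    import numpy as np

    def basin_transient(hours=2.0, dt=1.0):
        m, cp = 4.0e5, 4186.0      # water inventory [kg], specific heat [J/(kg K)]
        q_load = 8.0e6             # heat load from the accelerator cooling [W]
        ua_tower = 4.0e5           # lumped tower conductance [W/K]
        t_wetbulb = 12.0           # ambient wet-bulb temperature [degC]
        temps, T = [], 25.0
        for step in range(int(hours * 3600 / dt)):
            q_whr = 5.0e6 if step * dt < 600 else 0.0   # WHR trips after 10 min
            q_tower = ua_tower * (T - t_wetbulb)
            T += (q_load - q_whr - q_tower) * dt / (m * cp)
            temps.append(T)
        return np.array(temps)

    print(basin_transient().max())  # peak water temperature after the WHR trip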

    Computer-based methods of knowledge generation in science - What can the computer tell us about the world?

    The computer has significantly changed scientific practice in almost all disciplines. Alongside traditional sources of new knowledge such as observations, deductive arguments, or experiments, computer-based methods such as computer simulations and machine learning are now regularly cited as such sources. This shift in science raises philosophy-of-science questions about these new methods. One of the most obvious is whether they are suited to serve as sources of new knowledge. This thesis pursues that question, with a particular focus on one of the central problems of computer-based methods: opacity. Computer-based methods are called opaque when the causal connection between input and result cannot be traced. The central questions of this work are whether computer simulations and machine learning algorithms are opaque, whether the opacity of the two methods is of the same nature, and whether opacity prevents gaining new knowledge with computer-based methods. These questions are examined close to scientific practice; in particular, particle physics and the ATLAS experiment at CERN serve as important case studies. The thesis is based on five articles. In the first two, computer simulations are compared with two other methods, experiments and arguments, in order to situate them methodologically and to work out which challenges to knowledge acquisition distinguish computer simulations from those methods. The first article compares computer simulations and experiments. Given the diversity of computer simulations, however, a blanket comparison with experiments is not meaningful; instead, several epistemic aspects are identified on whose basis the comparison should be carried out depending on the context of application. The second article discusses a position formulated by Claus Beisbart that understands computer simulations as arguments. This 'argument view' describes how computer simulations work very well and thereby makes it possible to answer questions about their opacity and inductive character; on its own, however, it cannot sufficiently explain how new knowledge is gained with computer simulations. The third article deals with the role of models in theoretical ecology. Models are a central component of computer simulations and machine learning algorithms, so the questions about the relationship between phenomena and models, considered here through examples from ecology, are of central importance to the epistemic questions of this thesis. The fourth article links the topics of computer simulation and machine learning. It defines different kinds of opacity and examines, using examples from particle physics, which kinds are present in computer simulations and in machine learning algorithms. It is argued that opacity poses no principled problem for gaining knowledge with computer simulations, but that model opacity could be a source of fundamental opacity for machine learning algorithms. The fifth article applies the same terminology to chess engines; the comparison between a traditional chess engine and one based on a neural network illustrates the consequences of the different kinds of opacity. Overall, the thesis provides a methodological classification of computer simulations and shows that neither an appeal to experiments nor to arguments alone can explain how computer simulations lead to new knowledge. A clear definition of the kinds of opacity present in each case makes it possible to distinguish them from the closely related machine learning algorithms

    Causal influence of brainstem response to transcutaneous vagus nerve stimulation on cardiovagal outflow

    Background: The autonomic response to transcutaneous auricular vagus nerve stimulation (taVNS) has been linked to the engagement of brainstem circuitry modulating autonomic outflow. However, the physiological mechanisms supporting such efferent vagal responses are not well understood, particularly in humans. Hypothesis: We present a paradigm for estimating directional brain-heart interactions in response to taVNS. We propose that our approach is able to identify causal links between the activity of brainstem nuclei involved in autonomic control and cardiovagal outflow. Methods: We adopt an approach based on a recent reformulation of Granger causality that includes permutation-based, nonparametric statistics. The method is applied to ultrahigh field (7T) functional magnetic resonance imaging (fMRI) data collected from healthy subjects during taVNS. Results: Our framework identified taVNS-evoked functional brainstem responses with superior sensitivity compared to prior conventional approaches, confirming causal links between taVNS stimulation and the fMRI response in the nucleus tractus solitarii (NTS). Furthermore, our causal approach elucidated potential mechanisms by which information is relayed between brainstem nuclei and cardiovagal outflow, i.e., high-frequency heart rate variability, in response to taVNS. Our findings revealed that key brainstem nuclei, known from animal models to be involved in cardiovascular control, exert a causal influence on taVNS-induced cardiovagal outflow in humans. Conclusion: Our causal approach allowed us to noninvasively evaluate directional interactions between fMRI BOLD signals from brainstem nuclei and cardiovagal outflow
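
    A minimal numerical illustration of a permutation-tested Granger-style analysis: compare how well the cardiovagal series is predicted from its own past alone versus its own past plus lagged brainstem BOLD, and build the null distribution by permuting the candidate driver. The lag order and the simple shuffling surrogates are simplifying assumptions, not the paper's exact method.

    # Minimal permutation-tested Granger-style test: does adding lagged NTS BOLD
    # improve prediction of high-frequency HRV beyond HRV's own past?
    # Lag order (p=2) and shuffling surrogates are simplifying assumptions.
    import numpy as np

    def lagmat(x, p):
        """Columns are x[t-1], ..., x[t-p], aligned with x[p:]."""
        return np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])

    def granger_stat(driver, target, p=2):
        y = target[p:]
        X_r = np.column_stack([np.ones_like(y), lagmat(target, p)])   # restricted model
        X_f = np.column_stack([X_r, lagmat(driver, p)])               # full model
        rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
        return np.log(rss(X_r) / rss(X_f))   # > 0 means the driver adds predictive power

    def permutation_pvalue(driver, target, p=2, n_perm=1000, seed=0):
        rng = np.random.default_rng(seed)
        observed = granger_stat(driver, target, p)
        null = [granger_stat(rng.permutation(driver), target, p) for _ in range(n_perm)]
        return (1 + np.sum(np.array(null) >= observed)) / (n_perm + 1)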