
    An Ensemble Kalman-Particle Predictor-Corrector Filter for Non-Gaussian Data Assimilation

    An Ensemble Kalman Filter (EnKF, the predictor) is used to make a large change in the state, followed by a Particle Filter (PF, the corrector) which assigns importance weights to describe the non-Gaussian distribution. The weights are obtained by nonparametric density estimation. It is demonstrated on several numerical examples that the new predictor-corrector filter combines the advantages of the EnKF and the PF and that it is suitable for high-dimensional states which are discretizations of solutions of partial differential equations.
    Comment: ICCS 2009, to appear; 9 pages; minor edit
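    The abstract describes the predictor-corrector step only at a high level, so the following is a minimal single-step sketch in Python: a stochastic EnKF analysis followed by importance weighting of the analysis ensemble. The linear observation operator, noise levels, and the use of plain Gaussian likelihood weights (rather than the paper's nonparametric density estimation) are illustrative assumptions.

```python
import numpy as np

def enkf_pf_step(ensemble, obs, H, obs_cov, rng):
    """One predictor-corrector step: EnKF update, then PF-style reweighting.

    ensemble : (N, d) forecast ensemble of state samples
    obs      : (m,) observation vector
    H        : (m, d) linear observation operator (an assumption for this sketch)
    obs_cov  : (m, m) observation error covariance
    """
    N, _ = ensemble.shape
    # Predictor: stochastic EnKF analysis makes the large move toward the data.
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (N - 1)                # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_cov)   # Kalman gain
    perturbed_obs = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=N)
    analysis = ensemble + (perturbed_obs - ensemble @ H.T) @ K.T

    # Corrector: importance weights for the analysis members. A Gaussian
    # likelihood is used here; the paper obtains weights by nonparametric
    # density estimation instead.
    innov = obs - analysis @ H.T
    log_w = -0.5 * np.einsum('ij,jk,ik->i', innov, np.linalg.inv(obs_cov), innov)
    w = np.exp(log_w - log_w.max())
    return analysis, w / w.sum()

rng = np.random.default_rng(0)
ens = rng.normal(size=(100, 4))      # toy 4-dimensional state, 100 members
H = np.eye(2, 4)                     # observe the first two state components
analysis, weights = enkf_pf_step(ens, np.array([1.0, -0.5]), H, 0.1 * np.eye(2), rng)
```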

    Passive Radio Frequency-based 3D Indoor Positioning System via Ensemble Learning

    Passive radio frequency (PRF)-based indoor positioning systems (IPS) have attracted researchers' attention due to their low price, easy and customizable configuration, and non-invasive design. This paper proposes a PRF-based three-dimensional (3D) indoor positioning system (PIPS), which is able to use signals of opportunity (SoOP) for positioning and also capture a scenario signature. PIPS passively monitors SoOPs containing scenario signatures through a single receiver. Moreover, PIPS leverages the Dynamic Data Driven Applications System (DDDAS) framework to devise and customize the sampling frequency, enabling the system to use the most impacted frequency band as the rated frequency band. Various regression methods within three ensemble learning strategies are used to train and predict the receiver position. The PRF spectrum of 60 positions is collected in the experimental scenario, and three criteria are applied to evaluate the performance of PIPS. Experimental results show that the proposed PIPS possesses the advantages of high accuracy, configurability, and robustness.
    Comment: DDDAS 202
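    The abstract does not name the ensemble strategies or regression models, so the sketch below assumes three common strategies (bagging, boosting, and stacking) applied to synthetic spectrum features; the feature dimensions, coordinate ranges, and model choices are illustrative, not those of PIPS.

```python
import numpy as np
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 128))         # 60 positions, 128 spectrum bins (synthetic)
y = rng.uniform(0, 10, size=(60, 3))   # 3D receiver coordinates in metres (synthetic)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Three common ensemble strategies standing in for the ones used by PIPS.
models = {
    "bagging": MultiOutputRegressor(BaggingRegressor(random_state=0)),
    "boosting": MultiOutputRegressor(GradientBoostingRegressor(random_state=0)),
    "stacking": MultiOutputRegressor(StackingRegressor(
        estimators=[("rf", RandomForestRegressor(random_state=0)), ("ridge", Ridge())],
        final_estimator=Ridge())),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = np.linalg.norm(model.predict(X_te) - y_te, axis=1).mean()
    print(f"{name}: mean 3D positioning error = {err:.2f} m")
```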

    Security and Privacy Dimensions in Next Generation DDDAS/Infosymbiotic Systems: A Position Paper

    The omnipresent pervasiveness of personal devices will expand the applicability of the Dynamic Data Driven Application Systems (DDDAS) paradigm in innumerable ways. While every single smartphone or wearable device is potentially a sensor with powerful computing and data capabilities, privacy and security in the context of human participants must be addressed to leverage the infinite possibilities of dynamic data driven application systems. We propose a security and privacy preserving framework for next generation systems that harness the full power of the DDDAS paradigm while (1) ensuring provable privacy guarantees for sensitive data; (2) enabling field-level, intermediate, and central hierarchical feedback-driven analysis for both data volume mitigation and security; and (3) intrinsically addressing uncertainty caused either by measurement error or security-driven data perturbation. These thrusts will form the foundation for secure and private deployments of large scale hybrid participant-sensor DDDAS systems of the future.

    Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins

    Digital Twins (DT) are essentially Dynamic Data-driven models that serve as real-time symbiotic "virtual replicas" of real-world systems. DT can leverage the fundamentals of Dynamic Data-Driven Applications Systems (DDDAS) bidirectional symbiotic sensing feedback loops for their continuous updates. Sensing loops can consequently steer measurement, analysis and reconfiguration aimed at more accurate modelling and analysis in DT. The reconfiguration decisions can be autonomous or interactive, keeping the human in the loop. The trustworthiness of these decisions can be hindered by inadequate explainability of their rationale and of the utility gained by implementing the decision for the given situation among alternatives. Additionally, different decision-making algorithms and models have varying complexity and quality, and can result in different utility gained for the model. Inadequate explainability can limit the extent to which humans can evaluate the decisions, often leading to updates which are unfit for the given situation or erroneous, compromising the overall accuracy of the model. The novel contribution of this paper is an approach to harnessing explainability in human-in-the-loop DDDAS and DT systems, leveraging bidirectional symbiotic sensing feedback. The approach utilises interpretable machine learning and goal modelling to provide explainability, and considers trade-off analysis of the utility gained. We use examples from smart warehousing to demonstrate the approach.
    Comment: 10 pages, 1 figure, submitted to the 4th International Conference on InfoSymbiotics/Dynamic Data Driven Applications Systems (DDDAS2022)
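    The abstract gives no implementation detail, so the following is only a minimal sketch of the idea that an interpretable model can expose the rationale for a reconfiguration decision while a utility score ranks the alternatives; the warehouse features, thresholds, and utility values are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["queue_length", "avg_pick_time_s", "congestion_level"]  # hypothetical
X = rng.uniform(0, 1, size=(200, 3))
# Synthetic ground truth: reconfigure when queue and congestion are both high.
y = ((X[:, 0] > 0.6) & (X[:, 2] > 0.5)).astype(int)

# A shallow tree keeps the decision rule human-readable for the operator.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))

# Trade-off analysis: pick the alternative with the highest estimated utility
# (the alternatives and their utilities are placeholders).
utilities = {"keep_current_layout": 0.42, "reconfigure_zone_A": 0.61}
print("chosen decision:", max(utilities, key=utilities.get))
```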

    An introduction to a porous shape memory alloy dynamic data driven application system

    Shape Memory Alloys are capable of changing their crystallographic structure due to changes of temperature and/or stress. Our research focuses on three points: (1) Iterative Homogenization of Porous SMAs: Development of a Multiscale Model of porous SMAs utilizing iterative homogenization and based on existing knowledge of constitutive modeling of polycrystalline SMAs. (2) DDDAS: Develop tools to turn on and off the sensors and heating unit(s), to monitor on-line data streams, to change scales based on incoming data, and to control what type of data is generated. The application must have the capability to be run and steered remotely. (3) Modeling and applications of porous SMA: Vibration isolation devices with SMA and porous SMA components for aerospace applications will be analyzed and tested. Numerical tools for modeling porous SMAs with a second viscous phase will be developed. The outcome will be a robust, three-dimensional, multiscale model of porous SMA that can be used in complicated, real-life structural analysis of SMA components using a DDDAS framework.
    © 2012 Published by Elsevier Ltd

    Trajectory Optimization of Meteorological Sampling

    Swarming involves controlling multiple unmanned aerial systems (UAS) in formation through the use of controllers and algorithms. Swarm systems may be distributed and need not rely on a central controller, which gives them the potential to be robust and scalable and allows engineers flexibility in how they approach problems. Among candidate models and algorithms, such as artificial potential fields (APFs), agent-based modeling, dynamic data driven application systems (DDDAS), and virtual structures, a variation of one of these may prove the best course of action for formation flight of a UAS swarm. Choosing the right controller depends on what works best for acquiring atmospheric data in a coordinated formation. Atmospheric data is commonly taken using a weather tower or mesonet; a mesonet is typically a 10 m tower with pressure, temperature, and humidity sensors placed at the top. The goal is to decide which controller can not only take useful atmospheric data but, thanks to its mobility and customization, in many cases replace a mesonet. A wind profile is transient, so a swarm avoids the limitations in time and space that a single drone or a mesonet runs into. A swarm can record multiple points at one time, with each agent representing a data point, whereas a single drone can only account for a single location at a time. A swarm using a virtual structure (VS) can cover varying amounts of space in a coordinated shape. A mesonet is stationary and oriented only vertically, and an uncoordinated group of UAS does not have the capability to operate together. This leaves the VS swarm able to fill in the gaps or even replace the traditional approaches. An array of sensor packages with mobility, coordinated movement, and numerous data points could give the VS swarm the advantage in atmospheric data sampling.
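    As a concrete illustration of the virtual-structure idea discussed above, the sketch below moves each UAS toward a fixed offset from a drifting virtual frame, so the formation behaves like a mobile, vertically distributed mesonet. The gains, offsets, drift velocity, and sampling step are illustrative assumptions, not a flight-ready controller.

```python
import numpy as np

def vs_step(positions, vs_center, offsets, gain=1.0, dt=0.1):
    """Proportional control: move every agent toward its slot in the virtual structure."""
    targets = vs_center + offsets              # desired slot for each agent
    velocities = gain * (targets - positions)
    return positions + velocities * dt

n_agents = 5
# A vertical line of sensing slots, 10 m apart, mimicking a tower of sensors.
offsets = np.column_stack([np.zeros(n_agents),
                           np.zeros(n_agents),
                           10.0 * np.arange(n_agents)])
positions = np.random.default_rng(0).uniform(-5.0, 5.0, size=(n_agents, 3))
vs_center = np.array([0.0, 0.0, 2.0])

for _ in range(200):
    vs_center = vs_center + np.array([0.5, 0.0, 0.0]) * 0.1   # frame drifts downwind
    positions = vs_step(positions, vs_center, offsets)
    # Each agent would log pressure, temperature, and humidity at its position here.
```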

    Dynamic Data Driven Application System for Wildfire Spread Simulation

    Wildfires have significant impact on both ecosystems and human society. To effectively manage wildfires, simulation models are used to study and predict wildfire spread. The accuracy of wildfire spread simulations depends on many factors, including GIS data, fuel data, weather data, and high-fidelity wildfire behavior models. Unfortunately, due to the dynamic and complex nature of wildfire, it is impractical to obtain all these data without error. Therefore, predictions from the simulation model will differ from what happens in a real wildfire. Without assimilating data from the real wildfire and dynamically adjusting the simulation, the difference between the simulation and the real wildfire is very likely to continuously grow. With the development of sensor technologies and the advance of computer infrastructure, dynamic data driven application systems (DDDAS) have become an active research area in recent years. In a DDDAS, data obtained from wireless sensors is fed into the simulation model to make predictions of the real system. This dynamic input is treated as the measurement to evaluate the output and adjust the states of the model, thus improving simulation results. To improve the accuracy of wildfire spread simulations, we apply the concept of DDDAS to wildfire spread simulation by dynamically assimilating sensor data from real wildfires into the simulation model. The assimilation system relates the system model and the observation data of the true state, and uses analysis approaches to obtain state estimates. We employ Sequential Monte Carlo (SMC) methods (also called particle filters) to carry out data assimilation in this work. Based on the structure of DDDAS, this dissertation presents the data assimilation system and data assimilation results in wildfire spread simulations. We carry out sensitivity analysis for different densities, frequencies, and qualities of sensor data, and quantify the effectiveness of SMC methods based on different measurement metrics. Furthermore, to improve simulation results, the image-morphing technique is introduced into the DDDAS for wildfire spread simulation.
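    Because the abstract describes the SMC assimilation loop only in outline, here is a minimal one-dimensional sketch of a single propagate-weight-resample cycle; the toy dynamics, noise levels, and synthetic sensor stream stand in for the wildfire spread simulator and real sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles):
    """Stand-in for advancing the wildfire spread simulation one step per particle."""
    return particles + rng.normal(0.0, 0.2, size=particles.shape)

def likelihood(particles, obs, obs_std=0.5):
    """Weight each particle by how well it explains the sensor observation."""
    return np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)

particles = rng.normal(0.0, 1.0, size=1000)   # 1D toy state, 1000 particles
for obs in [0.3, 0.7, 1.1]:                   # stream of (synthetic) sensor readings
    particles = propagate(particles)          # predict
    w = likelihood(particles, obs)            # update
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx]                # resample
    print("state estimate:", particles.mean())
```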

    Dynamic data driven applications systems (DDDAS) for multidisciplinary optimisation (MDO)

    Nowadays, the majority of optimisation processes that are followed to obtain new optimum designs involve expensive simulations that are costly and time consuming. Besides, designs involving aerodynamics are usually highly constrained in terms of infeasible geometries to be avoided, so it is really important to provide the optimisers with effective datum or starting points that enable them to reach feasible solutions. This MSc Thesis aims to continue the development of an alternative design methodology applied to a 2D airfoil at a cruise flight condition by combining concepts of the Dynamic Data Driven Application Systems (DDDAS) paradigm with Multiobjective Optimisation. For this purpose, a surrogate model based on experimental data has been used to run a multiobjective optimisation, and the resulting optimum designs have been used as starting points for a direct optimisation, saving evaluations in the process. Throughout this work, a technique for retrieving experimental airfoil lift and drag coefficients was developed. Later, a new parametrisation technique using Class-Shape Transformation (CST) was implemented in order to map the considered airfoils into the design space. Then, a response surface model considering Radial Basis Functions (RBF) and Kriging approaches was constructed, and the multiobjective optimisation to maximise lift and minimise drag was undertaken using stochastic algorithms, MOTSII and NSGA. Alternatively, a full direct optimisation from the datum airfoil and a direct optimisation from the optimum surrogate-based optimisation designs were performed with Xfoil and the results were compared. As an outcome, the developed design methodology based on the combination of surrogate-based and direct optimisation was proved to be more effective than a single full direct optimisation, making the whole process faster by saving evaluations. In addition, further work guidelines are presented to show potential directions in which to expand and improve this methodology.
    Patón Pozo, PJ. (2016). Dynamic data driven applications systems (DDDAS) for multidisciplinary optimisation (MDO). Universitat Politècnica de València. http://hdl.handle.net/10251/142210
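    The surrogate step is described only at a high level, so the sketch below fits an RBF response surface to a handful of sampled designs and uses it to pick a promising starting design for the expensive direct optimiser. The two design variables and the toy objective stand in for CST coefficients and Xfoil-computed drag; the sample sizes and kernel choice are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
designs = rng.uniform(-1, 1, size=(30, 2))   # sampled design points (toy CST space)
# Toy objective playing the role of drag at a fixed lift target (lower is better).
drag = np.sum(designs**2, axis=1) + 0.05 * rng.normal(size=30)

surrogate = RBFInterpolator(designs, drag, kernel="thin_plate_spline")

# Cheap exhaustive search on the surrogate to find a good datum design.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 101),
                            np.linspace(-1, 1, 101)), axis=-1).reshape(-1, 2)
best = grid[np.argmin(surrogate(grid))]
print("surrogate-optimal starting design:", best)
# 'best' would then seed the expensive direct (e.g. Xfoil-driven) optimisation.
```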

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Assimilating real-time sensor data into a running simulation model can improve simulation results for large-scale spatial temporal systems such as wildfires, road traffic and floods. Particle filters are important methods to support data assimilation. While particle filters can work effectively with sophisticated simulation models, they have high computation cost due to the large number of particles needed in order to converge to the true system state. This is especially true for large-scale spatial temporal simulation systems that have high-dimensional state spaces and high computation cost by themselves. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatial temporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters based on an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the Processing Units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy based on the application of data assimilation for large-scale wildfire spread simulations. Moreover, as cloud computing is gaining more and more popularity, we developed a parallel and distributed particle filter based on Hadoop and MapReduce to support large-scale data assimilation.
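    The routing policies themselves are not spelled out in the abstract; the snippet below is a single-process illustration of the intuition behind a minimal-transfer style policy: after resampling, every PU should again hold an equal share of particles, and only the surplus copies are moved. The PU counts and target are invented, and the real distributed implementation would exchange actual particle state rather than counts.

```python
def minimal_transfer_routing(counts_per_pu, target):
    """Return (src_pu, dst_pu, n_particles) transfers that rebalance particle counts."""
    surplus = {i: c - target for i, c in enumerate(counts_per_pu) if c > target}
    deficit = {i: target - c for i, c in enumerate(counts_per_pu) if c < target}
    transfers = []
    for src, extra in surplus.items():
        for dst in list(deficit):
            if extra == 0:
                break
            moved = min(extra, deficit[dst])
            transfers.append((src, dst, moved))
            extra -= moved
            deficit[dst] -= moved
            if deficit[dst] == 0:
                del deficit[dst]
    return transfers

# After resampling, PUs hold uneven numbers of surviving particle copies.
counts = [340, 180, 260, 220]          # 1000 particles over 4 PUs (toy numbers)
print(minimal_transfer_routing(counts, target=250))
# -> [(0, 1, 70), (0, 3, 20), (2, 3, 10)]
```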