11 research outputs found

    A Dynamic Data Driven Application System for Vehicle Tracking

    Get PDF
    Tracking the movement of vehicles in urban environments using fixed position sensors, mobile sensors, and crowd-sourced data is a challenging but important problem in applications such as law enforcement and defense. A dynamic data driven application system (DDDAS) is described to track a vehicle's movements by repeatedly identifying the vehicle under investigation from live image and video data, predicting probable future locations, and repositioning sensors or retargeting requests for information in order to reacquire the vehicle. An overview of the envisioned system is given that includes image processing algorithms to detect and recapture the vehicle from live image data, a computational framework to predict probable vehicle locations at future points in time, and a power-aware data distribution management system to disseminate data and requests for information over ad hoc wireless communication networks. A testbed under development in the midtown area of Atlanta, Georgia, in the United States is briefly described.
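
    As a rough illustration of the detect-predict-retask feedback loop this abstract describes, the sketch below runs a toy version of one DDDAS cycle. All function names, the constant-velocity motion model, and the grid-matching "detector" are illustrative assumptions, not the paper's algorithms.

```python
# Minimal sketch of a DDDAS track-predict-retask loop (illustrative only).
import numpy as np

def detect_vehicle(frame, template):
    """Toy 'detection': return the grid cell whose value best matches the template."""
    scores = -np.abs(frame - template)
    return np.unravel_index(np.argmax(scores), frame.shape)

def predict_next_location(track, dt=1.0):
    """Constant-velocity prediction from the last two confirmed positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 + (x1 - x0) * dt, y1 + (y1 - y0) * dt)

def retask_sensors(sensors, predicted):
    """Point each simulated sensor at the predicted location (the 'reacquire' step)."""
    return [{"id": s["id"], "aim": predicted} for s in sensors]

rng = np.random.default_rng(0)
track = [(10.0, 10.0), (12.0, 11.0)]                # previously confirmed positions
sensors = [{"id": k, "aim": None} for k in range(3)]
for step in range(5):                               # the dynamic data driven loop
    frame = rng.random((64, 64))                    # stand-in for live image data
    row, col = detect_vehicle(frame, template=0.5)  # (re)identify the vehicle
    track.append((float(row), float(col)))          # confirmed position
    predicted = predict_next_location(track)        # probable future location
    sensors = retask_sensors(sensors, predicted)    # reposition/retask sensors
```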

    Digital twins that learn and correct themselves

    Get PDF
    Digital twins can be defined as digital representations of physical entities that employ real-time data to enable understanding of the operating conditions of these entities. Here we present a particular type of digital twin that combines computer vision, scientific machine learning, and augmented reality. This novel digital twin is therefore able to see, to interpret what it sees and, if necessary, to correct the model it is equipped with, and to present the resulting information in the form of augmented reality. The computer vision capabilities allow the twin to receive data continuously. Like any other digital twin, it is equipped with one or more models with which to assimilate data. However, if persistent deviations from the predicted values are found, the proposed methodology corrects the existing models on the fly, so as to accommodate them to the measured reality. Finally, the suggested methodology is completed with augmented reality capabilities so as to render a completely new type of digital twin. These concepts are tested on a proof-of-concept model consisting of a nonlinear, hyperelastic beam subjected to moving loads whose exact position is to be determined.
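
    A minimal sketch of the "learn and correct" idea: a twin compares streaming measurements against its model and, once deviations persist beyond noise, fits an additive correction to the residuals. The class name, the linear model, and the persistence rule are all assumptions for illustration; the paper's scientific machine learning correction is more sophisticated.

```python
# Sketch of a self-correcting twin: correct the model only on persistent deviation.
import numpy as np

class SelfCorrectingTwin:
    def __init__(self, stiffness):
        self.k = stiffness            # initial model parameter
        self.correction = 0.0         # learned additive correction
        self.residuals = []

    def predict(self, load):
        return load / self.k + self.correction

    def assimilate(self, load, measured, window=10, tol=0.05):
        r = measured - self.predict(load)
        self.residuals.append(r)
        recent = self.residuals[-window:]
        # Correct on the fly only if the deviation is persistent, not noise.
        if len(recent) == window and abs(np.mean(recent)) > tol:
            self.correction += np.mean(recent)
            self.residuals.clear()

twin = SelfCorrectingTwin(stiffness=100.0)
rng = np.random.default_rng(1)
for t in range(50):
    load = 50.0
    truth = load / 80.0               # reality differs from the k=100 model
    twin.assimilate(load, truth + 0.01 * rng.standard_normal())
print(f"learned correction: {twin.correction:.3f}")
```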

    Distributed Particle Filters for Data Assimilation in Simulation of Large Scale Spatial Temporal Systems

    Get PDF
    Assimilating real-time sensor data into a running simulation model can improve results when simulating large-scale spatiotemporal systems such as wildfires, road traffic, and floods. Particle filters are important methods for supporting data assimilation. While particle filters work effectively with sophisticated simulation models, they have a high computation cost due to the large number of particles needed to converge to the true system state. This is especially true for large-scale spatiotemporal simulation systems, which have high-dimensional state spaces and high computation costs of their own. To address the performance issue of particle filter-based data assimilation, this dissertation developed distributed particle filters and applied them to large-scale spatiotemporal systems. We first implemented a particle filter-based data assimilation framework and carried out data assimilation to estimate system state and model parameters in an application of wildfire spread simulation. We then developed advanced particle routing methods in distributed particle filters to route particles among the processing units (PUs) after resampling in an effective and efficient manner. In particular, for distributed particle filters with centralized resampling, we developed two routing policies, named the minimal transfer particle routing policy and the maximal balance particle routing policy. For distributed particle filters with decentralized resampling, we developed a hybrid particle routing approach that combines global routing with local routing to take advantage of both. The developed routing policies are evaluated in terms of communication cost and data assimilation accuracy for data assimilation in large-scale wildfire spread simulations. Moreover, as cloud computing gains popularity, we developed a parallel and distributed particle filter based on Hadoop and MapReduce to support large-scale data assimilation.
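
    To make the routing step concrete, here is a minimal sketch of a "minimal transfer" style rebalancing: after centralized resampling each PU holds a different number of particle copies, and surplus PUs ship particles to deficit PUs using as few transfers as a simple greedy pass allows. This is an illustrative reconstruction of the idea, not the dissertation's exact policy.

```python
# Sketch of greedy minimal-transfer particle routing after centralized resampling.
def minimal_transfer_routing(counts, target):
    """Return (src, dst, n) transfers that balance PU particle loads."""
    surplus = {i: c - target for i, c in enumerate(counts) if c > target}
    deficit = {i: target - c for i, c in enumerate(counts) if c < target}
    transfers = []
    for d, need in deficit.items():
        for s in list(surplus):
            move = min(need, surplus[s])
            if move:
                transfers.append((s, d, move))
                surplus[s] -= move
                need -= move
                if surplus[s] == 0:
                    del surplus[s]
            if need == 0:
                break
    return transfers

counts = [14, 2, 9, 7]          # particle copies per PU after resampling
print(minimal_transfer_routing(counts, target=8))
# -> [(0, 1, 6), (2, 3, 1)]
```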

    Damage identification scheme based on compressive sensing

    Get PDF
    Civil infrastructure is critical to every nation, due to the substantial investment it represents, its long service period, and the enormous negative impacts of failure. However, structures inevitably deteriorate during their service lives. Therefore, methods capable of assessing conditions and identifying damage in a structure in a timely and accurate manner have drawn increasing attention. Recently, compressive sensing (CS), a significant breakthrough in signal processing, has been proposed to capture and represent compressible signals at a rate significantly below the traditional Nyquist rate. Due to its sound theoretical background and notable influence, this methodology has been successfully applied in many research areas. In order to explore its application to structural damage identification, a new CS-based damage identification scheme is proposed in this paper, which treats damage identification as a pattern classification problem. The time-domain structural responses are transferred to the frequency domain as a sparse representation, and numerically simulated data under various damage scenarios are then used to train a feature matrix as input information. This matrix can be used for damage identification through an optimization process. This is one of the first applications of this advanced technique in structural engineering. To demonstrate its effectiveness, numerical simulation results on a complex pipe-soil interaction model are used to train the parameters and then to identify simulated pipe degradation damage and free-spanning damage. To further demonstrate the method, vibration tests of a steel pipe laid on the ground were carried out, and the measured acceleration time histories are used for damage identification. Both numerical and experimental verification results confirm that the proposed damage identification scheme is a promising tool for structural health monitoring.
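
    The sketch below illustrates the pattern-classification framing: training responses become frequency-domain columns of a feature matrix, and a new measurement is matched by a greedy sparse solver. The greedy matching pursuit is a simple stand-in for the paper's unspecified optimization, and all signals and damage "scenarios" here are synthetic assumptions.

```python
# Sketch of CS-style damage identification as sparse pattern classification.
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.linspace(0, 1, n)
freqs = [12, 19, 27, 35, 44]        # one characteristic frequency per scenario
D = np.column_stack([np.abs(np.fft.rfft(np.sin(2 * np.pi * f * t))) for f in freqs])
D /= np.linalg.norm(D, axis=0)      # feature matrix trained from simulated data

def classify(signal, D, n_iter=2):
    """Greedy matching pursuit: pick atoms most correlated with the residual."""
    y = np.abs(np.fft.rfft(signal))
    y /= np.linalg.norm(y)
    residual, picked = y.copy(), []
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(D.T @ residual)))
        picked.append(k)
        coef, *_ = np.linalg.lstsq(D[:, picked], y, rcond=None)
        residual = y - D[:, picked] @ coef
    return picked[0]

measured = np.sin(2 * np.pi * 27 * t) + 0.1 * rng.standard_normal(n)
print("identified scenario:", classify(measured, D))   # expect index 2
```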

    On the cloud deployment of a session abstraction for service/data aggregation

    Get PDF
    Dissertation submitted for the degree of Master in Informatics Engineering.
    The global cyber-infrastructure comprises a growing number of resources spanning several abstraction layers. These resources, which can include wireless sensor devices or mobile networks, share common requirements such as richer interconnection capabilities and increasing data consumption demands. Additionally, the service model is now widespread, supporting the development and execution of distributed applications. In this context, new challenges are emerging around the "big data" topic. These challenges include service access optimizations, such as data-access context sharing, more efficient data filtering/aggregation mechanisms, and adaptable service access models that can respond to context changes. The service access characteristics can be aggregated to capture specific interaction models. Moreover, ubiquitous service access is a growing requirement, particularly for mobile clients such as tablets and smartphones. The Session concept aggregates the service access characteristics, creating specific interaction models which can then be reused in similar contexts. Existing Session abstraction implementations also allow dynamic reconfiguration of these interaction models, so that a model can adapt to context changes based on service, client, or underlying communication medium variables. Cloud computing, on the other hand, provides ubiquitous access, along with large-scale data persistence and processing services. This thesis proposes a Session abstraction implementation, deployed on a Cloud platform in the form of a middleware. This middleware captures rich, dynamic interaction models between users with similar interests and provides a generic mechanism for interacting with datasources based on multiple protocols. Such an abstraction contextualizes service/user interactions and can be reused by other users in similar contexts. This Session implementation also permits data persistence by saving all data in transit in a Cloud-based repository. The middleware thus delivers richer datasource-access interaction models and dynamic reconfigurations, and allows the integration of heterogeneous datasources. The solution also provides ubiquitous access, allowing client connections from standard Web browsers or Android-based mobile devices.
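
    A toy sketch of the Session abstraction's shape: a session groups datasource connections behind one interaction model, persists data in transit, and can be reconfigured at runtime. All names here are illustrative assumptions; the thesis middleware is Cloud-deployed and protocol-generic, unlike this in-process toy.

```python
# Minimal sketch of a Session abstraction (illustrative, not the thesis API).
class Session:
    def __init__(self, interaction_model):
        self.interaction_model = interaction_model   # e.g. polling interval, filters
        self.datasources = {}                        # name -> fetch callable
        self.repository = []                         # stand-in for Cloud persistence

    def attach(self, name, fetch):
        """Register a datasource; 'fetch' hides the underlying protocol."""
        self.datasources[name] = fetch

    def reconfigure(self, **changes):
        """Dynamic reconfiguration of the interaction model at runtime."""
        self.interaction_model.update(changes)

    def pull(self):
        """Fetch from every datasource, persisting all data in transit."""
        batch = {name: fetch() for name, fetch in self.datasources.items()}
        self.repository.append(batch)
        return batch

session = Session({"poll_seconds": 60, "filter": None})
session.attach("sensor-feed", lambda: {"temp": 21.5})
session.attach("mobile-net", lambda: {"rssi": -67})
print(session.pull())
session.reconfigure(poll_seconds=10)                 # adapt to a context change
```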

    Wireless Monitoring Systems for Long-Term Reliability Assessment of Bridge Structures based on Compressed Sensing and Data-Driven Interrogation Methods.

    Full text link
    The state of the nation's highway bridges has garnered significant public attention due to large inventories of aging assets and insufficient funds for repair. Current management methods are based on visual inspections, which have many known limitations, including reliance on surface evidence of deterioration and the subjectivity introduced by trained inspectors. To address the limitations of current inspection practice, structural health monitoring (SHM) systems can be used to provide quantitative measures of structural behavior and an objective basis for condition assessment. SHM systems are intended to be a cost-effective monitoring technology that also automates the processing of data to characterize damage and provide decision information to asset managers. Unfortunately, this realization of SHM systems does not currently exist. For SHM to be realized as a decision support tool for bridge owners engaged in performance- and risk-based asset management, technological hurdles must still be overcome. This thesis focuses on advancing wireless SHM systems. An innovative wireless monitoring system was designed for permanent deployment on bridges in cold northern climates, which pose an added challenge because the potential for solar harvesting is reduced and battery charging is slowed. First, energy-efficient usage strategies for wireless sensor networks (WSNs) were advanced. With WSN energy consumption proportional to the amount of data transmitted, data reduction strategies are prioritized. A novel data compression paradigm termed compressed sensing is advanced for embedment in a wireless sensor microcontroller. In addition, fatigue monitoring algorithms are embedded for local data processing, leading to dramatic data reductions. In the second part of the thesis, a radical top-down design strategy (in contrast to global vibration strategies) for a monitoring system is explored to target specific damage concerns of bridge owners. Data-driven algorithmic approaches are created for statistical performance characterization of long-term bridge response. Statistical process control and reliability index monitoring are advanced as a scalable and autonomous means of transforming data into information relevant to bridge risk management. Validation of the wireless monitoring system architecture is made using the Telegraph Road Bridge (Monroe, Michigan), a multi-girder short-span highway bridge that represents a major fraction of the U.S. national inventory.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/116749/1/ocosean_1.pd
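
    As a pocket illustration of the statistical process control idea, the sketch below builds a standard 3-sigma Shewhart control chart: a baseline period sets control limits on a monitored response feature, and later samples outside the limits flag a possible change in bridge behavior. The data are synthetic, and this is a generic stand-in for the thesis's SPC and reliability-index machinery.

```python
# Sketch of statistical process control on a monitored bridge response feature.
import numpy as np

rng = np.random.default_rng(3)
baseline = rng.normal(1.0, 0.05, 200)           # e.g. daily peak strain ratio
mu, sigma = baseline.mean(), baseline.std()
ucl, lcl = mu + 3 * sigma, mu - 3 * sigma       # upper/lower control limits

monitoring = np.concatenate([rng.normal(1.0, 0.05, 50),
                             rng.normal(1.2, 0.05, 10)])   # simulated drift
alarms = np.flatnonzero((monitoring > ucl) | (monitoring < lcl))
print("out-of-control samples at indices:", alarms)
```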

    Hybrid Twin in Complex System Settings

    Get PDF
    The benefits of a deep understanding of the technological and industrial processes of our world are unquestionable. Optimization, inverse analysis, and simulation-based control are some of the procedures that can be carried out once that knowledge is transformed into value for companies. The result is better technologies that ultimately benefit society enormously. Consider an activity that is routine for many people today, such as taking a flight. All of the above procedures are involved in the aircraft's design, its on-board control, and its maintenance, culminating in a technologically resource-efficient product. This high added value is what is driving Simulation Based Engineering Science (SBES) to introduce major improvements in these procedures, which has led to significant advances in a wide variety of sectors such as healthcare, telecommunications, and engineering.
    However, SBES currently faces several difficulties in delivering accurate results in complex industrial scenarios. One is the high computational cost associated with many industrial problems, which severely limits or even disables the key processes described above. Another is that, in other applications, the most accurate models (which in turn are the most computationally expensive) are unable to account for all the details governing the physical system under study, with observed deviations that seem to escape our knowledge. In this context, this manuscript proposes novel numerical strategies and techniques to address the challenges facing SBES, analyzing several industrial applications.
    This landscape, together with the extensive development of Data Science, also provides a perfect opportunity for the so-called Dynamic Data Driven Application Systems (DDDAS), whose main objective is to fuse classical simulation algorithms with data from experimental measurements. In this scenario, data and simulations would no longer be decoupled, but would form a symbiotic relationship reaching milestones inconceivable until now. In more detail, data will no longer be understood as a static calibration of a given constitutive model; instead, the model will be corrected dynamically as soon as experimental data and simulations tend to diverge.
    For this reason, this thesis places special emphasis on model order reduction techniques, since they are not only a tool for reducing computational complexity but also a key element in meeting the real-time constraints arising from the DDDAS framework. In addition, this thesis presents new data-driven methodologies to enrich the so-called Hybrid Twin paradigm, whose motivation lies in its ability to enable DDDAS. How? By combining parametric solutions and model reduction techniques with dynamic corrections generated "on the fly" from the experimental data collected at each instant.
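
    A minimal sketch of the Hybrid Twin recipe this abstract outlines: a fast parametric (reduced-order) solution is enriched on the fly with a data-driven correction fitted to the observed deviation between measurements and the model. The parametric model, the quadratic unmodeled term, and the polynomial correction are all toy assumptions.

```python
# Sketch of a Hybrid Twin update: parametric model + fitted deviation correction.
import numpy as np

def parametric_rom(mu, x):
    """Pretend reduced-order solution of a beam deflection for parameter mu."""
    return mu * x * (1 - x)

x = np.linspace(0, 1, 50)
mu = 2.0
rng = np.random.default_rng(4)
# 'Measured' reality deviates from the model by an unmodeled quadratic term.
measured = parametric_rom(mu, x) + 0.3 * x**2 + 0.01 * rng.standard_normal(50)

deviation = measured - parametric_rom(mu, x)
# Fit a low-order polynomial correction to the deviation (the dynamic update).
coeffs = np.polyfit(x, deviation, deg=2)
hybrid = parametric_rom(mu, x) + np.polyval(coeffs, x)
print("max error before/after correction:",
      np.abs(deviation).max(), np.abs(measured - hybrid).max())
```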

    Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change

    Get PDF
    This Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX) was jointly coordinated by Working Groups I (WGI) and II (WGII) of the Intergovernmental Panel on Climate Change (IPCC). The report focuses on the relationship between climate change and extreme weather and climate events, the impacts of such events, and strategies to manage the associated risks. The IPCC was jointly established in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP), in particular to assess, in a comprehensive, objective, and transparent manner, all the relevant scientific, technical, and socioeconomic information that contributes to understanding the scientific basis of the risk of human-induced climate change, its potential impacts, and the options for adaptation and mitigation. Since 1990, the IPCC has produced a series of Assessment Reports, Special Reports, Technical Papers, methodologies, and other key documents, which have since become standard references for policymakers and scientists. This Special Report, in particular, helps frame the challenge of dealing with extreme weather and climate events as a problem of decision making under uncertainty, analyzing responses in the context of risk management. The report consists of nine chapters, covering risk management; observed and projected changes in extreme weather and climate events; exposure and vulnerability to, as well as losses resulting from, such events; adaptation options from the local to the international scale; the role of sustainable development in modulating risks; and insights from specific case studies.

    Compressed Sensing and Time-Parallel Reduced-Order Modeling for Structural Health Monitoring using a DDDAS

    No full text
    This paper discusses recent progress achieved in two areas related to the development of a Dynamic Data Driven Applications System (DDDAS) for structural and material health monitoring and critical event prediction. The first area concerns the development and demonstration of a sensor data compression algorithm and its application to the detection of structural damage. The second area concerns the near-real-time prediction of the transient dynamics of a structural system using a nonlinear reduced-order model and a time-parallel ODE (Ordinary Differential Equation) solver.
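
    The time-parallel solver is the less familiar ingredient here. Below is a minimal sketch of a parareal-style iteration, the classic time-parallel ODE scheme: a cheap coarse propagator sweeps sequentially while an accurate fine propagator runs independently (hence parallelizably) on each time slice. The scalar equation y' = -y and both propagators are toy stand-ins; the paper's actual solver and structural model are not specified in this abstract.

```python
# Sketch of a parareal-style time-parallel ODE solve on y' = -y.
import numpy as np

def propagate(y, t0, t1, steps):
    """Forward-Euler propagator for y' = -y over [t0, t1]."""
    h = (t1 - t0) / steps
    for _ in range(steps):
        y = y + h * (-y)
    return y

T, N, y0 = 2.0, 8, 1.0
ts = np.linspace(0.0, T, N + 1)
coarse = lambda y, a, b: propagate(y, a, b, steps=1)    # cheap G
fine = lambda y, a, b: propagate(y, a, b, steps=100)    # accurate F (parallel slices)

U = np.empty(N + 1); U[0] = y0
for n in range(N):                                      # initial coarse sweep
    U[n + 1] = coarse(U[n], ts[n], ts[n + 1])
for k in range(3):                                      # parareal corrections
    F = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]      # independent slices
    G_old = [coarse(U[n], ts[n], ts[n + 1]) for n in range(N)]
    for n in range(N):                                  # sequential coarse update
        U[n + 1] = coarse(U[n], ts[n], ts[n + 1]) + F[n] - G_old[n]
print("parareal vs exact:", U[-1], np.exp(-T))
```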

    Fast modelling of gas reservoirs using non-intrusive reduced order modelling and machine learning

    Get PDF
    This work focussed on developing approximate methods for rapidly estimating gas field production performance. Proper orthogonal decomposition (POD)-radial basis function (RBF) and POD-autoencoder (AE) non-intrusive reduced order models (NIROMs) were considered. The accuracy and speed of both NIROMs were evaluated for modelling different aspects of gas fields, including reservoirs with time-varying and mixed production controls, reservoirs with and without aquifer pressure support, and wells that were (or were not) shut in during the production lifecycle. These NIROMs were applied to predicting the performance of four gas reservoir models: a homogeneous synthetic model; a heterogeneous gas field with 3 wells and structures similar to the Norne Field; a water coning model on a radial grid; and a sector model of a real gas field provided by Woodside Petroleum. The POD-RBF and POD-AE NIROMs were trained using simulation solutions from a commercial reservoir simulator (ECLIPSE): grid distributions of pressure and saturations, as well as time-series production data such as production rates, cumulative productions, and pressures. Different cases were run based on typical input parameters usually used in field performance studies. The simulation solutions were standardised to zero mean and reduced into hyperspace using POD. In most cases, the optimum number of POD basis functions (99.9% energy criterion) was used to reduce the training data into a lower-dimensional hyperspace. The reduced training data and their corresponding parameter values were combined to form sample and response arrays based on a cause-and-effect pattern. RBF or AE interpolation was then used to obtain the weighting coefficients that represented the dynamics of the gas reservoir as captured within the reduced training data. These weighting coefficients were used to propagate predictions of new, unseen simulation cases for the duration of the predictions. The results from either or both NIROMs were then compared against the ECLIPSE solutions for the same cases. It was found that the POD-RBF is the better predictive tool for gas field modelling: it is faster, more accurate, and more consistent than the POD-AE, giving satisfactory predictions with up to 99% accuracy and two orders of magnitude speed-up. No single POD-AE is sufficient for predicting different production scenarios; moreover, the process of arriving at a suitable POD-AE involves fine-tuning several hyperparameters by trial and error, which may be burdensome for practising petroleum engineers. The accuracy of a NIROM's predictions of production variables is generally improved by using more than the optimal number of POD basis functions, while grid-distributed properties are satisfactorily predicted with the optimal number. A NIROM's accuracy depends on whether the parameter ranges, duration, and specific production scenarios of the prediction (such as mixed production controls or aquifer pressure support) reflect those of the training cases. Overall, the number of training runs, the size of the reservoir model, and the number of time intervals at which simulation output data are required all affect the speed of training both NIROMs.
    Other contributions of this work include showing that the linear RBF is the most suitable RBF for gas field modelling; developing a novel normalisation approach for time-varying parameters; and applying NIROMs to seasonally varying production scenarios with mixed production controls. This work is the first time that the POD-AE has been developed and evaluated for petroleum field development planning.
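
    The sketch below walks through the POD-RBF NIROM workflow on a toy problem: snapshots from a "full-order" solver are compressed with POD (via SVD, truncated at the 99.9% energy criterion the abstract mentions), and a linear RBF interpolant maps an input parameter to the reduced coefficients of an unseen case. The one-parameter analytic "simulator" is a stand-in assumption, not a reservoir model.

```python
# Sketch of a POD-RBF non-intrusive reduced order model (NIROM).
import numpy as np

def full_order_solution(mu, x):
    return np.exp(-mu * x) * np.sin(4 * x)      # stand-in for simulator output

x = np.linspace(0, 1, 200)
train_mu = np.linspace(0.5, 3.0, 12)
S = np.column_stack([full_order_solution(m, x) for m in train_mu])   # snapshots

U, s, _ = np.linalg.svd(S, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1  # 99.9% energy
basis = U[:, :r]
coeffs = basis.T @ S                             # reduced training coefficients

# Linear RBF interpolation of each reduced coefficient over the parameter.
def rbf_fit(params, values):
    A = np.abs(params[:, None] - params[None, :])        # linear kernel |r|
    return np.linalg.solve(A + 1e-10 * np.eye(len(params)), values.T)

def rbf_eval(params, weights, mu_new):
    return np.abs(mu_new - params) @ weights

W = rbf_fit(train_mu, coeffs)
mu_new = 1.7                                     # unseen case
prediction = basis @ rbf_eval(train_mu, W, mu_new)
print("max error:", np.abs(prediction - full_order_solution(mu_new, x)).max())
```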