2,946 research outputs found

    An Integrated Decision-Support Information System on the Impact of Extreme Natural Hazards on Critical Infrastructure

    In this paper, we introduce the Integrated Decision-Support Tool (IDST v2.0), developed as part of the INFRARISK project (https://www.infrarisk-fp7.eu/). The IDST is an online tool that demonstrates the implementation of a risk-based stress testing methodology for analyzing the potential impact of natural hazards on transport infrastructure networks. The IDST provides a set of software workflow processes for defining multiple cascading natural hazards, their geospatial coverage, and their impact on major infrastructure, including elements that are critical to transport networks in Europe. Stress tests on these infrastructure elements are then performed, together with the automated generation of case study reports for practitioners. An exemplar stress test study using the IDST is provided in this paper, describing the risks and consequences of an earthquake-triggered landslide scenario in Northern Italy. It further provides a step-by-step account of the overarching stress testing methodology, applied to the impact on the road network of the region of interest.
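    The abstract describes the IDST workflow only at a high level (define cascading hazards, stress-test the network, generate reports). As a rough illustration of that flow, the Python sketch below models an earthquake-triggered landslide cascade; every name in it (Hazard, CascadingScenario, run_stress_test, the fragility rule) is a hypothetical stand-in, not the IDST's actual interface.

```python
# Hypothetical sketch of a cascading-hazard stress test, loosely mirroring
# the workflow described above; none of these names come from the IDST itself.
from dataclasses import dataclass, field

@dataclass
class Hazard:
    kind: str                 # e.g. "earthquake", "landslide"
    intensity: float          # scenario intensity measure (assumed scale)
    triggers: list = field(default_factory=list)  # hazards cascaded from this one

@dataclass
class CascadingScenario:
    name: str
    root: Hazard

def run_stress_test(scenario: CascadingScenario, network_edges: dict) -> dict:
    """Toy stress test: degrade each edge's capacity for every hazard in the cascade."""
    damage = {}
    stack = [scenario.root]
    while stack:
        hazard = stack.pop()
        for edge in network_edges:
            # Placeholder fragility rule: capacity loss grows with intensity.
            damage[edge] = min(1.0, damage.get(edge, 0.0) + 0.1 * hazard.intensity)
        stack.extend(hazard.triggers)
    return damage

quake = Hazard("earthquake", intensity=6.5,
               triggers=[Hazard("landslide", intensity=4.0)])
scenario = CascadingScenario("Northern Italy case study", root=quake)
print(run_stress_test(scenario, {"A1-bridge": 1.0, "SS45-road": 1.0}))
```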

    Models of everywhere revisited: a technological perspective

    The concept of ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the environmental science of a place, changing the nature of the underlying modelling process from one in which general model structures are used to one in which modelling becomes a learning process about specific places, in particular capturing the idiosyncrasies of each place. At one level, this is a straightforward concept, but at another it is a rich, multi-dimensional conceptual framework involving the following key dimensions: models of everywhere, models of everything, and models at all times, constantly re-evaluated against the most current evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities. However, the approach has not yet been fully utilised or explored. This paper examines the concept of models of everywhere in the light of recent advances in technology. The paper argues that, when the concept was first proposed, technology was a limiting factor, but that advances in areas such as the Internet of Things, cloud computing, and data analytics have since alleviated many of the barriers. Consequently, it is timely to look again at the concept of models of everywhere under practical conditions, as part of a trans-disciplinary effort to tackle the remaining research questions. The paper concludes by identifying the key elements of a research agenda that should underpin such experimentation and deployment.

    Monitoring Of Remote Hydrocarbon Wells Using Azure Internet Of Things

    Remote monitoring of hydrocarbon wells is a demanding, carefully planned task performed to create a cyber-physical bridge between the asset and its owner. Many systems and techniques on the market offer this capability, but their lack of interoperability and/or decentralized architecture causes them to break down as remote assets move farther from the client, resulting in extreme latency and thus poor decision making. Microsoft's Azure IoT Edge was the focus of this work. Coupled with off-the-shelf hardware, Azure's IoT Edge services were integrated with an existing unit simulating a remote hydrocarbon well. This combination successfully established a semi-autonomous IIoT edge device that can monitor, process, store, and transfer data locally on the remote device itself. These capabilities rely on an edge computing architecture that drastically reduced infrastructure and pushed intelligence and responsibility to the source of the data. This application of Azure IoT Edge laid a foundation from which a plethora of solutions can be built, enhancing the intelligence capability of the asset. The study demonstrates edge computing's ability to mitigate latency loops, reduce network stress, and handle intermittent connectivity. Further experimentation and analysis at a larger scale will be needed to determine whether the resources implemented will suffice for production-level operations.
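    As a rough sketch of the edge pattern described above, the following Python snippet uses the azure-iot-device SDK to filter telemetry on the device and uplink only anomalous readings; the connection string, sensor stand-in, and pressure threshold are illustrative assumptions, not values from the study.

```python
# Sketch of edge-side filtering before cloud upload, using the azure-iot-device
# SDK (pip install azure-iot-device). The connection string, telemetry source,
# and pressure threshold below are illustrative assumptions.
import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder
PRESSURE_LIMIT = 900.0  # assumed alert threshold, in psi

def read_wellhead_pressure() -> float:
    """Stand-in for the simulated well's sensor interface."""
    return random.uniform(850.0, 950.0)

def main() -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
    client.connect()
    try:
        while True:
            pressure = read_wellhead_pressure()
            # Edge logic: process locally, only uplink anomalies. This is what
            # reduces network stress and tolerates intermittent connectivity.
            if pressure > PRESSURE_LIMIT:
                client.send_message(Message(json.dumps({"pressure_psi": pressure})))
            time.sleep(5)
    finally:
        client.shutdown()

if __name__ == "__main__":
    main()
```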

    A service-oriented middleware for integrated management of crowdsourced and sensor data streams in disaster management

    The increasing number of sensors used in diverse applications has produced a massive volume of continuous, unbounded, rapid data and requires the management of distinct protocols, interfaces, and intermittent connections. As traditional sensor networks are error-prone and difficult to maintain, the study highlights the emerging role of “citizens as sensors” as a complementary data source to increase public awareness. To this end, an interoperable, reusable middleware for managing spatial, temporal, and thematic data using Sensor Web Enablement initiative services and a processing engine was designed, implemented, and deployed. The study found that this approach provides effective sensor data-stream access, publication, and filtering in dynamic scenarios such as disaster management, and that it enables the integration of batch and stream management. An interoperability analytics test of a flood citizen observatory further highlighted that even variable data, such as those provided by the crowd, can be integrated with sensor data streams. Our approach thus offers a means to improve near-real-time applications.
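    The filtering the middleware performs spans spatial, temporal, and thematic dimensions. A minimal Python sketch of that idea is given below; the observation schema and function signature are assumptions for illustration, and the sketch does not reproduce the OGC Sensor Web Enablement services the middleware actually builds on.

```python
# Minimal stand-in for the middleware's stream filtering: observations, whether
# from fixed sensors or from citizens, are filtered on spatial, temporal, and
# thematic criteria before downstream processing.
from datetime import datetime
from typing import Iterable, Iterator

def filter_stream(observations: Iterable[dict],
                  bbox: tuple[float, float, float, float],
                  start: datetime, end: datetime,
                  phenomenon: str) -> Iterator[dict]:
    min_lon, min_lat, max_lon, max_lat = bbox
    for obs in observations:
        if obs["phenomenon"] != phenomenon:          # thematic filter
            continue
        if not (start <= obs["time"] <= end):        # temporal filter
            continue
        lon, lat = obs["location"]
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:  # spatial
            yield obs

stream = [
    {"phenomenon": "water_level", "time": datetime(2024, 3, 1, 12, 0),
     "location": (-47.06, -22.90), "value": 3.2, "source": "gauge"},
    {"phenomenon": "water_level", "time": datetime(2024, 3, 1, 12, 5),
     "location": (-47.07, -22.91), "value": 3.5, "source": "citizen"},
]
for obs in filter_stream(stream, (-48, -23, -46, -22),
                         datetime(2024, 3, 1), datetime(2024, 3, 2),
                         "water_level"):
    print(obs["source"], obs["value"])
```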

    Visual Viper: a portable visualization library for streamlined scientific communications

    As the healthcare sector undergoes digital transformation, the influx of data for health professionals and researchers has surged.
The increased need for data visualizations to comprehend this information led to the development of Visual Viper, a Python library that automates data visualization to streamline the often labor-intensive process of generating figures. Visual Viper uses Vega-Lite, a high-level grammar of interactive graphics, to create visualizations from various research data sources via a convenient application programming interface (API). This automation saves time and promotes consistency in science communication. The library's functionality comprises interconnected components: it begins by retrieving data from a selected source, then transforms the data to suit visualization requirements; subsequently, Visual Viper renders the charts using Vega-Lite and deploys them for use in scientific communication. Implemented within a modular and extensible plugin architecture, it accommodates different data sources and visualization types. Each stage can be modified independently, enabling extensive customization for specific use cases without affecting the library's overall functionality. Important paradigms used in Visual Viper's development include object-oriented programming (OOP) and test-driven development (TDD). By using OOP, the library adopts a structured codebase that is easier to manage and maintain. The principles of encapsulation, inheritance, and polymorphism ensure efficiency and flexibility, while the use of classes facilitates code reuse. The 'DatasetBuilder' class fetches and preprocesses data from various sources, the 'ChartNotationBuilder' class creates the chart layout and visual aesthetics based on the preprocessed data, and the 'ChartDeployer' class handles the deployment of the finished visualizations. These classes encapsulate related functions and data, reducing complexity and aiding code maintenance and extension. The TDD approach, which involves writing tests before the actual code, ensures all functions operate as intended, leading to improved code quality, simplified debugging, and a faster development cycle. The implementation is environment-agnostic, so the library can run in various environments without significant changes, and it can operate independently on local machines, on AWS Lambda, or as a Web API (serverless deployment). Future steps for Visual Viper include the development of plugins such as a Google Sheets dataset builder and a Figma chart deployer, and the creation of additional Vega-Lite chart notation builders such as bar charts, survival charts, and forest plots. Once these steps are complete, Visual Viper will be packaged as an importable Python package, with an efficiency evaluation to follow. In conclusion, Visual Viper provides a robust and flexible tool for data visualization, bolstering the efficiency of scientific communication in the healthcare sector.
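    The three class names below (DatasetBuilder, ChartNotationBuilder, ChartDeployer) come from the abstract itself; their method names and signatures in this Python sketch are assumptions, not Visual Viper's published API. The sketch shows how such a plugin pipeline can swap any stage independently, which is the design property the abstract emphasizes.

```python
# Sketch of the plugin pipeline the abstract describes: fetch data, build a
# Vega-Lite notation, deploy the chart. Interfaces here are assumed.
from abc import ABC, abstractmethod
import json

class DatasetBuilder(ABC):
    @abstractmethod
    def build(self) -> list[dict]:
        """Fetch and preprocess records from some source."""

class ChartNotationBuilder(ABC):
    @abstractmethod
    def build(self, data: list[dict]) -> dict:
        """Return a Vega-Lite specification for the data."""

class ChartDeployer(ABC):
    @abstractmethod
    def deploy(self, spec: dict) -> None:
        """Publish the finished visualization somewhere useful."""

class InMemoryDatasetBuilder(DatasetBuilder):
    def __init__(self, records: list[dict]):
        self.records = records
    def build(self) -> list[dict]:
        return self.records

class BarChartNotationBuilder(ChartNotationBuilder):
    def build(self, data: list[dict]) -> dict:
        return {
            "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
            "data": {"values": data},
            "mark": "bar",
            "encoding": {
                "x": {"field": "label", "type": "nominal"},
                "y": {"field": "value", "type": "quantitative"},
            },
        }

class PrintDeployer(ChartDeployer):
    def deploy(self, spec: dict) -> None:
        print(json.dumps(spec, indent=2))

def run_pipeline(builder: DatasetBuilder,
                 notation: ChartNotationBuilder,
                 deployer: ChartDeployer) -> None:
    # Each stage is replaceable without touching the others.
    deployer.deploy(notation.build(builder.build()))

run_pipeline(InMemoryDatasetBuilder([{"label": "A", "value": 3}]),
             BarChartNotationBuilder(), PrintDeployer())
```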

    Fog Computing Architecture for Indoor Disaster Management

    Most people spend their time indoors, and indoor environments are more complex than outdoor ones. Moreover, today's building structures are increasingly sophisticated and complex, which can create problems when a disaster occurs inside. Fire is one of the disasters that most often occurs in a building, so disaster management that can minimize the risk of casualties is needed. Disaster management with cloud computing has been extensively investigated in other studies. However, the traditional approach of centralizing data in the cloud scales poorly, as it cannot cater to the many latency-critical IoT applications and results in excessive network traffic as the number of objects and services increases. This is especially problematic in a disaster that requires a quick response. Fog infrastructure is the beginning of an answer to such problems. This research started with an analysis of the literature and current topics related to fog computing and indoor disasters, which then became the basis for creating a fog computing-based architecture for indoor disasters. In this research, fog computing is used as the backbone of a disaster management architecture for buildings, with MQTT as the messaging protocol, chosen for its simplicity and speed. The research proposes a disaster architecture for indoor disasters, mainly fires.
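    As a rough sketch of the proposed pattern, the following Python snippet (using the paho-mqtt client, 1.x API) shows a fog node that subscribes to in-building fire sensors and raises an alarm locally, so the latency-critical decision never leaves the building; the broker address, topic layout, and temperature threshold are illustrative assumptions, not the paper's design.

```python
# Sketch of a fog node subscribing to in-building fire sensors over MQTT
# (pip install paho-mqtt; this uses the 1.x callback API). Broker address,
# topic layout, and threshold below are assumed for illustration.
import json
import paho.mqtt.client as mqtt

BROKER = "fog-node.local"         # assumed local fog broker, not a cloud host
TOPIC = "building/+/fire-sensor"  # one topic per room
TEMP_ALARM_C = 57.0               # assumed fixed-temperature alarm point

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # The decision is taken on the fog node itself, so an alarm does not
    # depend on cloud connectivity or a round trip over the WAN.
    if reading["temperature_c"] >= TEMP_ALARM_C:
        room = msg.topic.split("/")[1]
        client.publish(f"building/{room}/alarm", json.dumps({"fire": True}))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```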

    Reasoning cartographic knowledge in deep learning-based map generalization with explainable AI

    Cartographic map generalization involves complex rules, and full automation has still not been achieved despite many efforts over the past few decades. Pioneering studies show that some map generalization tasks can be partially automated by deep neural networks (DNNs). However, DNNs have so far been used as black-box models in these studies. We argue that integrating explainable AI (XAI) into a deep learning-based map generalization process can give more insight for developing and refining the DNNs by revealing exactly what cartographic knowledge is learned. Following an XAI framework in an empirical case study, visual analytics and quantitative experiments were applied to explain the importance of the input features for the predictions of a pre-trained ResU-Net model. The case study finds that the XAI-based visualization results can easily be interpreted by human experts. With the proposed XAI workflow, we further find that the DNN pays more attention to the building boundaries than to the interior parts of the buildings. We therefore suggest that boundary intersection over union is a better evaluation metric than the commonly used intersection over union for qualifying raster-based map generalization results. Overall, this study shows the necessity and feasibility of integrating XAI into future deep learning-based map generalization development frameworks.
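    The metric distinction the study draws can be made concrete with a small worked example: for a toy building mask, standard IoU and a boundary IoU computed on one-pixel-wide outlines diverge sharply when only the outline is wrong. The implementation details below (NumPy masks, 4-connected erosion via SciPy) are assumptions for illustration, not the study's exact definition.

```python
# Standard IoU vs a boundary IoU computed only on one-pixel-wide outlines.
import numpy as np
from scipy.ndimage import binary_erosion

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def boundary(mask: np.ndarray) -> np.ndarray:
    """Pixels of the mask that survive no 4-connected erosion: the outline."""
    return mask & ~binary_erosion(mask)

def boundary_iou(a: np.ndarray, b: np.ndarray) -> float:
    return iou(boundary(a), boundary(b))

# A 6x6 toy "building": the prediction matches the interior but misses one
# boundary column, an error that mostly affects the outline.
truth = np.zeros((6, 6), dtype=bool); truth[1:5, 1:5] = True
pred = np.zeros((6, 6), dtype=bool); pred[1:5, 1:4] = True
print(f"IoU          = {iou(truth, pred):.2f}")
print(f"boundary IoU = {boundary_iou(truth, pred):.2f}")
```

    With these masks the sketch prints IoU = 0.75 but boundary IoU ≈ 0.57, illustrating why a boundary-focused metric is more sensitive to the outline errors that matter in building generalization.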