
    VULNERABILITY ASSESSMENT OF CRITICAL OIL AND GAS INFRASTRUCTURES TO CLIMATE CHANGE IMPACTS IN THE NIGER DELTA

    Oil and gas infrastructures in the Niger Delta are being severely impacted by extreme climate change-induced disasters such as floods, storms, tidal surges, and rising temperatures. There is a high potential for disruption of upstream and downstream activities as the world's climate continues to change. A lack of knowledge of the criticality and vulnerability of infrastructures could further exacerbate these impacts and undermine the asset management value chain. This thesis therefore applied a criteria-based systematic evaluation of the criticality and vulnerability of selected critical oil and gas infrastructure to climate change impacts in the Niger Delta. It applied a multi-criteria decision-making analysis (MCDA) tool, the analytic hierarchy process (AHP), to prioritise systems according to their vulnerability and criticality and to recommend sustainable adaptation mechanisms. Through a critical review of relevant literature, seven (7) criteria each for criticality and vulnerability assessment were synthesised and implemented in the assessment process. A further exploratory investigation, physical examination of infrastructures, focus groups and elite interviews were conducted to identify possibly vulnerable infrastructures and to scope qualitative and quantitative data for analysis using the Mi-AHP spreadsheet. Results prioritised the criticality of infrastructures in the following order: terminals (27.1%), flow stations (18.5%), roads/bridges (15.5%), and transformers/high-voltage cables (11.1%), while the least critical are loading bays (8.6%) and oil wellheads (5.1%). Further analysis indicated that the most vulnerable critical infrastructures are pipelines (25%), terminals (17%) and roads/bridges (14%), while transformers/high-voltage cables and oil wellheads were ranked as least vulnerable with 11% and 9% respectively. In addition to the vulnerability assessment, an extended documentary analysis of groundwater, geospatial stream-flow and water discharge rate monitoring models suggests that an in-situ rise in groundwater level and an increase in water discharge rate (WDR) in the upper Niger River could indicate a high probability of a flood event in the lower Delta, further exacerbating the vulnerability of critical infrastructures. Similarly, physical examination of infrastructures suggests that an increase in regional and ambient temperature disrupts the functionality of compressors and the optimal operation of flow stations, and exacerbates corrosion of cathodic systems when combined with saltwater flooding from the Atlantic. The thesis produced a flexible conceptual framework for the vulnerability assessment of critical oil/gas infrastructures, and contextualised and recommended sustainable climate adaptation strategies for the Niger Delta oil/gas industry. Some of these strategies include the installation of industrial groundwater and water discharge rate monitoring systems, the construction of elevated platforms for critical infrastructure installations, and the substitution of cathodic pipes with duplex stainless steel and glass-reinforced epoxy pipes. Others include proper channelisation of drainages and river systems around critical platforms, the use of unmanned aerial vehicles (UAVs) for flood monitoring, and the establishment of inter-organisational climate impact assessment groups in the oil/gas industry. Climate impact assessment (CIA) is suggested for oil and gas projects as part of best practice in the environmental management and impact assessment framework.
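    As a rough illustration of the AHP step described above, the sketch below (Python with NumPy) derives priority weights for three hypothetical infrastructure classes from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The comparison values are illustrative placeholders, not data from the thesis or the Mi-AHP spreadsheet.

        # Minimal AHP sketch: derive priority weights for three hypothetical
        # infrastructure classes from a pairwise comparison matrix.
        # The comparison values below are illustrative only.
        import numpy as np

        # Saaty-scale pairwise comparisons (rows/cols: terminals, flow stations, pipelines).
        A = np.array([
            [1.0, 2.0, 3.0],   # terminals vs. others
            [1/2, 1.0, 2.0],   # flow stations vs. others
            [1/3, 1/2, 1.0],   # pipelines vs. others
        ])

        # Priority vector = principal eigenvector of A, normalised to sum to 1.
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()

        # Consistency check (Saaty): CI = (lambda_max - n) / (n - 1), RI = 0.58 for n = 3.
        n = A.shape[0]
        lambda_max = eigvals.real[k]
        ci = (lambda_max - n) / (n - 1)
        cr = ci / 0.58
        print("priority weights:", weights.round(3), "consistency ratio:", round(cr, 3))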

    Adaptive Latency Insensitive Protocols

    Latency-insensitive design copes with the excessive delays typical of global wires in current and future IC technologies. It achieves its goal via encapsulation of synchronous logic blocks in wrappers that communicate through a latency-insensitive protocol (LIP) and pipelined interconnects. Previously proposed solutions suffer from an excessive performance penalty in terms of throughput or from a lack of generality. This article presents an adaptive LIP that outperforms previous static implementations, as demonstrated by two relevant cases, a microprocessor and an MPEG encoder, whose components we made insensitive to the latencies of their interconnections through a newly developed wrapper. We also present an informal exposition of the theoretical basis of adaptive LIPs, as well as implementation details.
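    To make the protocol idea concrete, here is a minimal, cycle-level toy model (Python) of a latency-insensitive relay station that forwards valid data items and asserts a stop (backpressure) signal upstream when its buffer is full. It sketches only the generic LIP handshake, not the adaptive wrapper developed in the article.

        # Toy cycle-level model of a latency-insensitive relay station: a two-entry
        # buffer that forwards valid items downstream and asserts a stop
        # (backpressure) signal upstream when full. Illustrative only.
        from collections import deque

        class RelayStation:
            def __init__(self, capacity=2):
                self.buf = deque()
                self.capacity = capacity

            def stop_upstream(self):
                # Ask the producer to stall when the buffer is full.
                return len(self.buf) >= self.capacity

            def cycle(self, data_in, valid_in, stop_downstream):
                # Accept an item if the producer offers one and there is room.
                if valid_in and not self.stop_upstream():
                    self.buf.append(data_in)
                # Emit an item unless the consumer is stalling us.
                if self.buf and not stop_downstream:
                    return self.buf.popleft(), True   # (data_out, valid_out)
                return None, False

        # Example: a stalled consumer fills the station, which then backpressures
        # the producer until the consumer resumes at cycle 3.
        rs = RelayStation()
        for t in range(5):
            out, valid = rs.cycle(data_in=t, valid_in=True, stop_downstream=(t < 3))
            print(t, "stop_upstream:", rs.stop_upstream(), "out:", out if valid else "-")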

    Monitoring On-line Timing Information to Support Mixed-Critical Workloads

    Many-core and multi-core architectures provide a tremendous increase in computation power, increasing the possibility of executing additional tasks on the system. In critical embedded systems, e.g. aeronautical systems, the uncertainty of the non-uniform and concurrent memory access scheme prohibits full utilization of the system's potential. Classical Worst Case Execution Time (WCET) estimation techniques upper-bound the memory accesses, assuming a fully congested memory bus, resulting in safe but pessimistic bounds. The proposed approach explores increasing the system utilization with less critical tasks, while guaranteeing the safety of the critical task.
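    A hedged sketch of the general idea, in Python: an online monitor compares the critical task's observed execution time at a checkpoint against a budget derived from its static WCET bound and suspends best-effort co-runners when the remaining slack becomes too small. The checkpoint scheme, thresholds and task names are hypothetical, not the paper's mechanism.

        # Hedged sketch of online timing monitoring: compare the critical task's
        # observed execution time against a checkpointed budget derived from its
        # static WCET bound, and suspend best-effort tasks when slack shrinks.
        # Thresholds and task names are hypothetical, not the paper's mechanism.

        def monitor(observed_us, checkpoint_budget_us, wcet_us, best_effort_tasks,
                    slack_margin=0.10):
            """Return the set of best-effort tasks allowed to keep running."""
            # Slack remaining relative to the full WCET budget at this checkpoint.
            slack = (wcet_us - observed_us) / wcet_us
            if observed_us > checkpoint_budget_us or slack < slack_margin:
                # Critical task is behind its expected progress: stop co-runners
                # so it regains exclusive use of the memory bus.
                return set()
            return set(best_effort_tasks)

        # Example: at a mid-point checkpoint the critical task has used 60 of its
        # 100 us WCET while only 45 us were budgeted, so co-runners are stopped.
        print(monitor(observed_us=60, checkpoint_budget_us=45, wcet_us=100,
                      best_effort_tasks={"logging", "video"}))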

    Qduino: a cyber-physical programming platform for multicore Systems-on-Chip

    Emerging multicore Systems-on-Chip are enabling new cyber-physical applications such as autonomous drones, driverless cars and smart manufacturing using web-connected 3D printers. Common to these applications is a communicating task pipeline that acquires and processes sensor data and produces outputs that control actuators. As a result, these applications usually have timing requirements for both individual tasks and the task pipelines formed for sensor data processing and actuation. Current cyber-physical programming platforms, such as Arduino and embedded Linux with the POSIX interface, do not allow application developers to specify these timing requirements. Moreover, none of them provides a programming interface to schedule tasks and map them to processor cores, while managing I/O in a predictable manner, on multicore hardware platforms. Hence, this thesis presents the Qduino programming platform. Qduino adopts the simplicity of the Arduino API, with additional support for real-time multithreaded sketches on multicore architectures. Qduino allows application developers to specify timing properties of individual tasks as well as task pipelines at the design stage. To this end, we propose a mathematical framework to derive each task's budget and period from the specified end-to-end timing requirements. The second part of the thesis is motivated by the observation that at the center of these pipelines are tasks that typically require complex software support, such as sensor data fusion or image processing algorithms. These features usually take many person-years of engineering effort to develop and are thus commonly found on General-Purpose Operating Systems (GPOS). Therefore, in order to support modern, intelligent cyber-physical applications, we enhance the Qduino platform's extensibility by taking advantage of the Quest-V virtualized partitioning kernel. The platform's usability is demonstrated by building a novel web-connected 3D printer and a prototypical autonomous drone framework in Qduino.
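    The sketch below (Python) only illustrates the shape of that derivation: an end-to-end latency requirement for a sensor-to-actuator pipeline is split evenly across stages to obtain per-task periods and budgets. It is an assumption-laden toy, not Qduino's actual mathematical framework or API.

        # Illustrative only: split an end-to-end latency requirement for a
        # sensor-to-actuator pipeline evenly across its stages to obtain per-task
        # periods and budgets. Qduino's actual framework derives these differently;
        # this merely shows the shape of the problem.

        def derive_budgets(end_to_end_ms, utilisations):
            """utilisations: CPU share each stage's task needs (0 < u <= 1)."""
            n = len(utilisations)
            period = end_to_end_ms / n                           # equal slice per stage
            return [(period, period * u) for u in utilisations]  # (period, budget)

        # A three-stage pipeline (sense -> fuse -> actuate) with a 30 ms end-to-end
        # deadline and differing per-stage compute demands.
        for stage, (period, budget) in zip(["sense", "fuse", "actuate"],
                                           derive_budgets(30.0, [0.2, 0.5, 0.1])):
            print(f"{stage}: period={period:.1f} ms, budget={budget:.1f} ms")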

    How to Build a Mixed-Criticality System in Industry?

    In the last decade, the rapid evolution of diverse functionalities and execution platforms has led safety-critical systems towards integrating components/functions/applications with different criticality levels on a shared hardware platform, i.e., Mixed-Criticality Systems (MCSs). In academia, hundreds of publications have been built upon a commonly used model, Vestal's model. Even so, because of mismatched concepts between academia and industry, current academic models cannot be directly applied to real industrial systems. This paper discusses these mismatched concepts from the system architecture perspective and proposes a potential solution.
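    For readers unfamiliar with the model, the Python sketch below captures the usual academic reading of Vestal's model: each task carries a criticality level and one WCET estimate per assurance level, and a mode switch sheds LO-criticality tasks once a HI-criticality task overruns its LO-level estimate. This reflects the common textbook formulation, not the architecture proposed in this paper.

        # Sketch of Vestal's task model: each task has a criticality level and a
        # WCET estimate per assurance level, with a simple mode switch that drops
        # LO-criticality tasks once any HI task overruns its LO-level estimate.
        from dataclasses import dataclass, field

        @dataclass
        class Task:
            name: str
            criticality: str                            # "LO" or "HI"
            wcet: dict = field(default_factory=dict)    # e.g. {"LO": 2.0, "HI": 5.0}

        def runnable_tasks(tasks, mode):
            # In HI mode, only HI-criticality tasks are kept.
            return [t for t in tasks if mode == "LO" or t.criticality == "HI"]

        tasks = [Task("flight_ctrl", "HI", {"LO": 2.0, "HI": 5.0}),
                 Task("telemetry",   "LO", {"LO": 1.0})]

        mode = "LO"
        observed = {"flight_ctrl": 2.4}   # flight_ctrl exceeded its LO estimate
        if any(observed.get(t.name, 0.0) > t.wcet["LO"] for t in tasks
               if t.criticality == "HI"):
            mode = "HI"                   # switch: shed LO work to protect HI tasks
        print(mode, [t.name for t in runnable_tasks(tasks, mode)])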

    Quantifying fault recovery in multiprocessor systems

    Various aspects of reliable computing are formalized and quantified with emphasis on efficient fault recovery. The mathematical model which proves most appropriate is provided by the theory of graphs. New measures for fault recovery are developed, and the values of the elements of the fault recovery vector are observed to depend not only on the computation graph H and the architecture graph G, but also on the specific location of a fault. In the examples, a hypercube is chosen as a representative parallel computer architecture, and a pipeline as a typical configuration for program execution. The dependability qualities of such a system are defined, with or without a fault. These qualities are determined by the resiliency triple defined by three parameters: multiplicity, robustness, and configurability. Parameters for measuring recovery effectiveness are also introduced in terms of distance, time, and the number of new, used, and moved nodes and edges.
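    As a toy illustration of the graph formulation (not the thesis's actual measures), the Python sketch below embeds a pipeline (computation graph H) in a 3-dimensional hypercube (architecture graph G, built with networkx) and computes a simple distance-based recovery cost: the hop count from a failed node to the nearest spare.

        # Toy illustration of the graph setting: a pipeline (computation graph H)
        # embedded in a 3-dimensional hypercube (architecture graph G), and a
        # distance-based recovery cost: hops from a failed node to the nearest spare.
        # The thesis defines richer measures (multiplicity, robustness,
        # configurability); this only shows the style of reasoning.
        import networkx as nx

        G = nx.hypercube_graph(3)                          # 8 nodes labelled by 3-bit tuples
        pipeline = [(0,0,0), (0,0,1), (0,1,1), (1,1,1)]    # embedded pipeline H
        spares = [n for n in G.nodes if n not in pipeline]

        failed = (0,1,1)
        # Recovery distance: shortest path from the failed node to the nearest spare,
        # i.e. how far the affected pipeline stage must migrate.
        dist, nearest = min((nx.shortest_path_length(G, failed, s), s) for s in spares)
        print("failed:", failed, "migrate to:", nearest, "recovery distance:", dist)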

    A Perspective on Safety and Real-Time Issues for GPU Accelerated ADAS

    The current trend in designing Advanced Driving Assistance Systems (ADAS) is to enhance their computing power by using modern multi-/many-core accelerators. For many critical applications such as pedestrian detection, line following, and path planning, the Graphics Processing Unit (GPU) is the most popular choice for obtaining orders-of-magnitude increases in performance at modest power consumption. This is made possible by exploiting the general-purpose nature of today's GPUs, as such devices are known to deliver unprecedented performance per watt on generic embarrassingly parallel workloads (as opposed to just graphical rendering, which was all that previous generations of GPUs were designed to sustain). In this work, we explore novel challenges that system engineers have to face in terms of real-time constraints and functional safety when the GPU is the chosen accelerator. More specifically, we investigate how much of the safety standards currently adopted for traditional platforms can be translated to a GPU-accelerated platform used in critical scenarios.

    Combined Radiation Detection Methods for Assay of Higher Actinides in Separation Processes (AFCI)

    The ultimate objective of this project is to develop technology to detect and accurately measure quantities of higher actinides in processing systems without taking frequent samples. These systems include used fuel receipt, separations batches, and pipelines. A variety of measurements may be combined to calculate flow rates of actinide elements with a to-be-determined precision. Nuclear and decay characteristics of materials during processing will be acquired, conceptual designs of monitoring systems will be developed, radiation transport studies will be conducted to develop an understanding of operational regimes, and experiments will be performed to confirm performance. Radiation transport and scoping studies will be conducted to investigate combined gamma-ray, neutron, and active and passive detection techniques to measure the quantities and isotopic constituents of material during separations and intermediate storage. Scoping and design studies will first be performed using validated data sets (decay properties and reaction cross sections) and the radiation transport code MCNPX. Basic measurements will then be performed and compared to predictions.