Framework for integrated oil pipeline monitoring and incident mitigation systems
Wireless Sensor Nodes (motes) have witnessed rapid development in the last two decades. Though the design considerations for Wireless Sensor Networks (WSNs) have been widely discussed in the literature, limited investigation has been done into their application in pipeline surveillance. Given the increasing number of pipeline incidents across the globe, there is an urgent need for innovative and effective solutions for deterring incessant pipeline incidents and attacks. WSNs are suitable candidates for such solutions, since they can be used to measure, detect and provide actionable information on pipeline physical characteristics such as temperature, pressure, video, oil and gas motion and environmental parameters. This paper presents specifications of motes for pipeline surveillance based on an integrated systems architecture. The proposed architecture utilizes a Multi-Agent System (MAS) for the realization of an Integrated Oil Pipeline Monitoring and Incident Mitigation System (IOPMIMS) that can effectively monitor pipelines and provide actionable information. The requirements and components of motes, the different threats to pipelines and the ways of detecting such threats presented in this paper will enable better deployment of pipeline surveillance systems for incident mitigation. Shortcomings of existing wireless sensor nodes that make them ineffective for pipeline surveillance were also identified. The resulting specifications provide a framework for designing a cost-effective system, cognizant of the design considerations for wireless sensor motes used in pipeline surveillance.
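The monitoring-and-mitigation loop described in this abstract can be sketched minimally as follows; the parameter names, safe bands, and alerting rule are hypothetical illustrations, not the paper's actual MAS design.

```python
# Illustrative sketch only: a mote samples pipeline parameters and a
# monitoring agent flags readings that leave a safe operating band.
# SAFE_BANDS values are hypothetical, not taken from the paper.
SAFE_BANDS = {"pressure_bar": (30.0, 70.0), "temperature_c": (-5.0, 60.0)}

def check_reading(parameter, value):
    """Return True if the reading lies inside its safe band."""
    low, high = SAFE_BANDS[parameter]
    return low <= value <= high

def monitor(readings):
    """Return the subset of readings that should trigger an incident alert."""
    return [(p, v) for p, v in readings if not check_reading(p, v)]

readings = [("pressure_bar", 45.0), ("pressure_bar", 82.5), ("temperature_c", 21.0)]
print(monitor(readings))  # only the out-of-band pressure reading is flagged
```

In a real MAS deployment, each mote would run such a check locally and forward alerts to a coordinating agent rather than printing them.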
Recent Progress in Wide-Area Surveillance: Protecting Our Pipeline Infrastructure
The pipeline industry has millions of miles of pipes buried along the length and breadth of the country. Since the areas through which pipelines run are not to be used for other activities, the right-of-way (RoW) of the pipeline must be monitored to know whether it is encroached upon at any point in time.
Rapid advances made in the area of sensor technology have enabled the use of high-end video acquisition systems to monitor the RoW of pipelines. The images captured by aerial data acquisition systems are affected by a host of factors that include light sources, camera characteristics, geometric positions and environmental conditions.
We present a multistage framework for the analysis of aerial imagery for automatic detection and identification of machinery threats along the pipeline RoW, one capable of accounting for the constraints that come with aerial imagery, such as low resolution, low frame rate, large variations in illumination, and motion blur. The proposed framework comprises three parts.
In the first part of the framework, a method is developed to eliminate regions from imagery that are not considered to be a threat to the pipeline. This method feeds monogenic phase features into a cascade of pre-trained classifiers to eliminate unwanted regions.
The second part of the framework is a part-based object detection model for searching for specific targets that are considered threat objects.
The third part of the framework assesses the severity of the threats to the pipeline by computing the geolocation and temperature information of the threat objects. The proposed scheme is tested on real-world datasets captured along the pipeline RoW.
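The first stage above, eliminating non-threat regions with a cascade of classifiers, can be sketched as follows; the feature extractor and stage thresholds here are toy stand-ins, not the monogenic phase features or trained classifiers of the paper.

```python
# Minimal cascade sketch: a region survives only if every stage accepts it,
# so cheap early stages discard most non-threat regions quickly.
# extract_features and the stage rules are hypothetical placeholders.

def extract_features(region):
    """Placeholder feature extraction (stands in for monogenic phase features)."""
    return [sum(region) / len(region)]  # a single mean-intensity feature

def run_cascade(regions, stages):
    """Pass each candidate region through the stages in order."""
    survivors = []
    for region in regions:
        features = extract_features(region)
        if all(stage(features) for stage in stages):
            survivors.append(region)
    return survivors

# Toy stages: accept regions whose mean response exceeds a threshold.
stages = [lambda f: f[0] > 0.2, lambda f: f[0] > 0.5]
regions = [[0.1, 0.1], [0.6, 0.8], [0.3, 0.4]]
print(run_cascade(regions, stages))  # only the strongest region survives
```

The design point of a cascade is asymmetric cost: most regions are rejected by the first, cheapest test, and only the few survivors reach the later, more expensive stages.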
Near real-time monitoring of buried oil pipeline right-of-way for third-party incursion
Many security systems employing different methods have been proposed to protect buried oil pipelines transporting petroleum products from the well head via the refinery to depots and other receiving stations. Currently there is a security gap in monitoring these buried pipelines in real time and keeping them protected from third-party interference. This thesis addresses the problem of monitoring these systems by developing an automated image analysis system with the aid of a low-cost multisensory Unmanned Aerial Vehicle (UAV) for monitoring the buried pipeline right-of-way (ROW). The method used in this research is based on the identification of threat objects of interest from the video frame sequences of the pipeline right-of-way acquired by the UAV. This is achieved by training the system to recognise objects of interest using trained correlation filters. To determine the geographical location of detected objects, the video frame sequences captured by the UAV platform were ortho-rectified to form ortho-images, which were then mosaicked to form a seamless Digital Surface Model (DSM) covering the test area using a photogrammetry model. The DSM formed from the mosaicking of ortho-images is then merged with a digital globe for geo-referencing of detected objects. Experiments were carried out on test fields located in the United Kingdom and Nigeria, where video and telemetry data were collected, then processed using the techniques created in this research. The results demonstrated that the developed correlation filter was able to detect objects of interest despite the distortions in the object image, because the expected distortion was compensated for using the training images. When compared with the 6 control points in the digital globe, the two-dimensional DSM gave a misalignment error of between 2 and 3 metres.
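The correlation-filter detection step used in this thesis can be sketched roughly as follows: correlate a template with a frame in the frequency domain and take the response peak as the object location. The 8x8 frame, template, and object placement below are toy stand-ins, not the thesis's trained filters or UAV imagery.

```python
import numpy as np

# Sketch of correlation matching: the FFT of the frame multiplied by the
# conjugate FFT of the template gives the cross-correlation response map;
# the peak of that map marks the best-matching location.

def correlate(frame, template):
    """Circular cross-correlation via FFT; returns the response map."""
    F = np.fft.fft2(frame)
    T = np.fft.fft2(template, s=frame.shape)  # zero-pad template to frame size
    return np.real(np.fft.ifft2(F * np.conj(T)))

frame = np.zeros((8, 8))
frame[3:5, 4:6] = 1.0            # toy "object" at rows 3-4, cols 4-5
template = np.ones((2, 2))       # toy filter matching the object shape
response = correlate(frame, template)
peak = np.unravel_index(np.argmax(response), response.shape)
print(peak)  # top-left corner of the best match
```

A trained correlation filter replaces the raw template with one synthesised from many distorted training images, which is what lets detection tolerate the distortions the abstract mentions.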
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine
Advances in Object and Activity Detection in Remote Sensing Imagery
The recent revolution in deep learning has enabled considerable development in the fields of object and activity detection. Visual object detection tries to find objects of target classes with precise localisation in an image and assign each object instance a corresponding class label. At the same time, activity recognition aims to determine the actions or activities of an agent or group of agents based on sensor or video observation data. It is a very important and challenging problem to detect, identify, track, and understand the behaviour of objects through images and videos taken by various cameras. Together, object and activity recognition in imagery captured by remote sensing platforms is a highly dynamic and challenging research topic. During the last decade, there has been significant growth in the number of publications in the field of object and activity recognition. In particular, many researchers have proposed methods to identify objects and their specific behaviours from air- and spaceborne imagery. This Special Issue includes papers that explore novel and challenging topics for object and activity detection in remote sensing images and videos acquired by diverse platforms.
A Wide Area Multiview Static Crowd Estimation System Using UAV and 3D Training Simulator
Crowd size estimation is a challenging problem, especially when the crowd is spread over a significant geographical area. It has applications in monitoring of rallies and demonstrations and in calculating the assistance requirements in humanitarian disasters. Building a crowd surveillance system for large crowds is therefore a significant challenge. UAV-based techniques are an appealing choice for crowd estimation over a large region, but they present a variety of interesting challenges, such as integrating per-frame estimates through a video without counting individuals twice. Large quantities of annotated training data are required to design, train, and test such a system. In this paper, we first review several crowd estimation techniques, existing crowd simulators, and data sets available for crowd analysis. We then describe a simulation system that provides such data, avoiding the need for tedious and error-prone manual annotation, and evaluate synthetic video from the simulator using various existing single-frame crowd estimation techniques. Our findings show that the simulated data can be used to train and test crowd estimation, thereby providing a suitable platform to develop such techniques. We also propose an automated UAV-based 3D crowd estimation system that can be used for approximately static or slow-moving crowds, such as public events, political rallies, and natural or man-made disasters. We evaluate the results by applying our new framework to a variety of scenarios with varying crowd sizes. The proposed system gives promising results on widely accepted metrics, including MAE, RMSE, precision, recall, and F1 score.
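The evaluation metrics named in this abstract (MAE, RMSE, precision, recall, F1) are standard and can be computed as below; the per-frame counts and confusion values are toy numbers for illustration, not results from the paper.

```python
import math

# Standard count-error and detection metrics used to evaluate crowd
# estimation systems. Input values below are illustrative only.

def mae(true, pred):
    """Mean absolute error between true and predicted per-frame counts."""
    return sum(abs(t - p) for t, p in zip(true, pred)) / len(true)

def rmse(true, pred):
    """Root mean squared error, which penalises large miscounts more."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true, pred)) / len(true))

def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

true_counts = [120, 95, 210]   # toy ground-truth crowd counts per frame
pred_counts = [110, 100, 200]  # toy predicted counts
print(mae(true_counts, pred_counts))
print(rmse(true_counts, pred_counts))
print(precision_recall_f1(tp=90, fp=10, fn=30))
```

MAE and RMSE measure counting accuracy, while precision/recall/F1 measure how well individual detections match ground-truth annotations; reporting both families, as the paper does, covers both failure modes.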
Key technologies for safe and autonomous drones
Drones/UAVs are able to perform air operations that are very difficult for manned aircraft to perform. In addition, drone usage brings significant economic savings and environmental benefits, while reducing risks to human life. In this paper, we present key technologies that enable the development of drone systems. The technologies are identified based on the usages of drones (driven by COMP4DRONES project use cases) and are grouped into four categories: U-space capabilities, system functions, payloads, and tools. We also present the contributions of the COMP4DRONES project to improve existing technologies. These contributions aim to ease drones' customization and enable their safe operation. This project has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 826610. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Spain, Austria, Belgium, Czech Republic, France, Italy, Latvia, Netherlands. The total project budget is 28,590,748.75 EUR (excluding ESIF partners), while the requested grant is 7,983,731.61 EUR to ECSEL JU, and 8,874,523.84 EUR of National and ESIF Funding. The project started on 1 October 2019.