
    Project RISE: Recognizing Industrial Smoke Emissions

    Industrial smoke emissions pose a significant concern to human health. Prior works have shown that using Computer Vision (CV) techniques to identify smoke as visual evidence can influence the attitude of regulators and empower citizens to pursue environmental justice. However, existing datasets are not of sufficient quality or quantity to train the robust CV models needed to support air quality advocacy. We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. We adopted a citizen science approach, collaborating with local community members to annotate whether a video clip contains smoke emissions. Our dataset contains 12,567 clips from 19 distinct views from cameras that monitored three industrial facilities. These daytime clips span 30 days over two years, covering all four seasons. We ran experiments using deep neural networks to establish a strong performance baseline and reveal smoke recognition challenges. Our survey study discussed community feedback, and our data analysis highlighted opportunities for integrating citizen scientists and crowd workers into the application of Artificial Intelligence for social good. Comment: Technical report
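    As an illustration of the kind of baseline the abstract describes, the sketch below is a minimal binary video-clip classifier in PyTorch. It is not the paper's model; the architecture, input resolution and class count are assumptions chosen only to show the shape of the task (clip in, smoke/no-smoke score out).

```python
# Hypothetical sketch of a binary smoke-recognition baseline for short video
# clips, assuming clips are pre-decoded into tensors of shape
# (batch, channels, frames, height, width). The RISE paper's actual baseline
# architectures are not reproduced here; this is only an illustrative 3D CNN.
import torch
import torch.nn as nn

class SmokeClipClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two 3D-convolutional blocks capture joint spatio-temporal features.
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        x = self.features(clips)
        return self.classifier(x.flatten(1))

# Example usage on a random batch of 4 clips, 16 frames of 112x112 RGB each.
model = SmokeClipClassifier()
dummy_clips = torch.randn(4, 3, 16, 112, 112)
logits = model(dummy_clips)  # shape: (4, 2), smoke vs. no-smoke scores
```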

    Image processing for hydraulic jump free-surface detection: coupled gradient/machine learning model

    High-frequency oscillations and high surface aeration, induced by the strong turbulence, make water depth measurement for hydraulic jumps a persistently challenging task. The investigation of hydraulic jump behaviour remains an important research theme, especially with regard to stilling basin design. Reliable knowledge of time-averaged and extreme values along a depth profile can help develop an adequate stilling basin design, improve safety, and aid the understanding of the jump phenomenon. This paper presents an attempt to mitigate certain limitations of existing depth measurement methods by adopting a non-intrusive computer vision-based approach to measuring the water depth profile of a hydraulic jump. The proposed method analyses video data to detect the boundary between the air-water mixture and the laboratory flume wall. This is achieved by coupling two computer vision methods: (1) analysis of the vertical image gradients, and (2) general-purpose edge detection using a deep neural network model. While the gradient analysis technique alone can provide adequate results, its performance is significantly improved in combination with a neural network model, which incorporates "human-like" vision in the algorithm. The model coupling reduces the likelihood of false detections and improves the overall detection accuracy. The proposed method is tested in two experiments with different degrees of jump aeration. Results show that the coupled model can reliably and accurately capture the instantaneous depth profile along the jump, with low sensitivity to image noise and flow aeration. The coupled model produced fewer false detections than the gradient-based model and offered consistent performance in regions of both high and low aeration. The proposed approach allows for automated detection of the free-surface interface and expands the potential of computer vision-based measurement methods in hydraulics.
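    To make the coupling idea concrete, the following sketch fuses a Sobel vertical-gradient response with a generic per-pixel edge map and reads off one boundary row per image column. The fusion weight, the normalisation and the `neural_edge_map` input are assumptions; the paper's actual edge-detection network and combination rule are not reproduced here.

```python
# Hedged sketch of coupling a vertical-gradient response with a learned edge
# map to locate a free-surface profile in a video frame. `neural_edge_map` is
# a stand-in for any general-purpose edge detector's per-pixel output.
import cv2
import numpy as np

def detect_free_surface(frame_gray: np.ndarray,
                        neural_edge_map: np.ndarray,
                        weight: float = 0.5) -> np.ndarray:
    """Return, for each image column, the row index of the strongest
    combined edge response, interpreted as the air-water boundary."""
    # Vertical intensity gradient (flume wall vs. bright aerated flow).
    grad_y = np.abs(cv2.Sobel(frame_gray, cv2.CV_32F, dx=0, dy=1, ksize=3))
    grad_y /= grad_y.max() + 1e-8  # normalise to [0, 1]

    edges = neural_edge_map.astype(np.float32)
    edges /= edges.max() + 1e-8

    # Simple weighted fusion; coupling the two cues reduces false detections
    # from either source alone (e.g. spray droplets or wall reflections).
    combined = weight * grad_y + (1.0 - weight) * edges
    return np.argmax(combined, axis=0)  # one depth sample per column

# Usage with random stand-in data (a real pipeline would read video frames).
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
edge_map = np.random.rand(480, 640).astype(np.float32)
surface_rows = detect_free_surface(frame, edge_map)  # shape: (640,)
```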

    Tree Memory Networks for Modelling Long-term Temporal Dependencies

    In the domain of sequence modelling, Recurrent Neural Networks (RNNs) have achieved impressive results in a variety of application areas, including visual question answering, part-of-speech tagging and machine translation. However, this success in modelling short-term dependencies has not transitioned to application areas such as trajectory prediction, which require capturing both short-term and long-term relationships. In this paper, we propose a Tree Memory Network (TMN) for modelling long-term and short-term relationships in sequence-to-sequence mapping problems. The proposed network architecture is composed of an input module, a controller and a memory module. In contrast to related literature, which models the memory as a sequence of historical states, we model the memory as a recursive tree structure. This structure more effectively captures temporal dependencies across both short-term and long-term sequences using its hierarchical structure. We demonstrate the effectiveness and flexibility of the proposed TMN in two practical problems, aircraft trajectory modelling and pedestrian trajectory modelling in a surveillance setting, and in both cases we outperform the current state of the art. Furthermore, we perform an in-depth analysis of the evolution of the memory module content over time and provide visual evidence of how the proposed TMN is able to map both long-term and short-term relationships efficiently via a hierarchical structure.
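    The sketch below illustrates, under assumptions, how a memory organised as a binary tree can summarise a sequence of historical states hierarchically. It is not the authors' TMN: the combination function and dimensions are placeholders meant only to show the recursive parent-from-children structure.

```python
# Illustrative sketch (not the authors' implementation) of a memory organised
# as a binary tree: leaf cells hold recent hidden states and each parent
# summarises its two children, so the root blends short- and long-term
# history. The combination function here is a simple learned layer.
import torch
import torch.nn as nn

class TreeMemory(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Combines two child memories into one parent memory of the same size.
        self.combine = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, leaf_states: torch.Tensor) -> torch.Tensor:
        """leaf_states: (batch, num_leaves, dim), num_leaves a power of two.
        Returns the root memory of shape (batch, dim)."""
        level = leaf_states
        while level.size(1) > 1:
            left, right = level[:, 0::2], level[:, 1::2]
            level = self.combine(torch.cat([left, right], dim=-1))
        return level.squeeze(1)

# Usage: summarise 8 historical hidden states of size 64 per sequence.
memory = TreeMemory(dim=64)
history = torch.randn(2, 8, 64)
root = memory(history)  # shape: (2, 64), hierarchical summary of the history
```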

    A fully automatic CAD-CTC system based on curvature analysis for standard and low-dose CT data

    Computed tomography colonography (CTC) is a rapidly evolving noninvasive medical investigation that is viewed by radiologists as a potential screening technique for the detection of colorectal polyps. Due to technical advances in CT system design, the volume of data to be processed by radiologists has increased significantly, and as a consequence the manual analysis of this information has become an increasingly time-consuming process whose results can be affected by inter- and intra-user variability. The aim of this paper is to detail the implementation of a fully integrated CAD-CTC system that is able to robustly identify clinically significant polyps in CT data. The CAD-CTC system described in this paper is a multistage implementation whose main components are: 1) automatic colon segmentation; 2) candidate surface extraction; 3) feature extraction; and 4) classification. Our CAD-CTC system achieves 100% sensitivity for polyps larger than 10 mm, 92% sensitivity for polyps in the 5-10 mm range, and 57.14% sensitivity for polyps smaller than 5 mm, with an average of 3.38 false positives per dataset. The developed system has been evaluated on synthetic and real patient CT data acquired at standard and low-dose radiation levels.
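    The skeleton below mirrors the four named stages as a runnable Python sketch. Every stage body (threshold segmentation, toy candidates, extent-based features, a size rule) is a placeholder assumption, not the paper's curvature-analysis method; it only shows how the stages chain together.

```python
# Hedged skeleton of a multistage CAD-CTC pipeline mirroring the four stages
# named in the abstract. The stage bodies are simple placeholders, not the
# paper's actual curvature-based methods.
import numpy as np

def segment_colon(ct_volume: np.ndarray) -> np.ndarray:
    # Placeholder: the air-filled colon lumen appears very dark in CT (low HU).
    return ct_volume < -800

def extract_candidate_surfaces(colon_mask: np.ndarray) -> list[np.ndarray]:
    # Placeholder: return voxel coordinates of part of the segmented region.
    coords = np.argwhere(colon_mask)
    return [coords[:50]] if len(coords) else []

def extract_features(candidate: np.ndarray) -> np.ndarray:
    # Placeholder features: candidate extent along each axis (a stand-in for
    # the curvature-based descriptors used in the actual system).
    return candidate.max(axis=0) - candidate.min(axis=0)

def classify(features: np.ndarray) -> bool:
    # Placeholder rule: flag candidates whose extent exceeds a size threshold.
    return bool(features.max() >= 5)

# Run the pipeline on a random toy volume (a real system would load CT data).
volume = np.random.randint(-1000, 400, size=(32, 64, 64))
mask = segment_colon(volume)
detections = [c for c in extract_candidate_surfaces(mask)
              if classify(extract_features(c))]
```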