848 research outputs found

    Project RISE: Recognizing Industrial Smoke Emissions

    Industrial smoke emissions pose a significant concern to human health. Prior works have shown that using Computer Vision (CV) techniques to identify smoke as visual evidence can influence the attitude of regulators and empower citizens to pursue environmental justice. However, existing datasets are of neither sufficient quality nor sufficient quantity to train the robust CV models needed to support air quality advocacy. We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. We adopted a citizen science approach, collaborating with local community members to annotate whether each video clip contains smoke emissions. Our dataset contains 12,567 clips from 19 distinct views from cameras that monitored three industrial facilities. These daytime clips span 30 days over two years, covering all four seasons. We ran experiments using deep neural networks to establish a strong performance baseline and reveal smoke recognition challenges. Our survey study discussed community feedback, and our data analysis revealed opportunities for integrating citizen scientists and crowd workers into the application of Artificial Intelligence for social good. Comment: Technical report.
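    Crowdsourced clip labels like RISE's have to be aggregated into a single label per clip. The dataset's actual aggregation rule is not given in the abstract; a minimal majority-vote sketch, with a hypothetical tie policy, could look like this:

```python
from collections import Counter

def majority_label(votes):
    """Aggregate several citizen-scientist votes on one clip into a single
    label. Ties resolve to 'unsure' (an assumed policy, not RISE's)."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "unsure"
    return counts[0][0]
```

    Clips aggregated as "unsure" could then be routed back for additional annotation rather than used for training.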

    Auroral Image Processing Techniques - Machine Learning Classification and Multi-Viewpoint Analysis

    Every year, millions of scientific images are acquired to study auroral phenomena. The accumulated data contain a vast amount of untapped information that can be used in auroral science. Yet auroral research has traditionally focused on case studies, where one or a few auroral events are investigated and explained in detail. Consequently, theories have often been developed on the basis of limited data sets, which may be biased in location, spatial resolution or temporal resolution. Advances in technology and data processing now allow for the acquisition and analysis of large image data sets. These tools have made it feasible to perform statistical studies based on auroral data from numerous events, varying geophysical conditions and multiple locations in the Arctic and Antarctic. Such studies require reliable auroral image processing techniques to organize, extract and represent the auroral information in a scientifically rigorous manner, preferably with minimal user interaction. This dissertation focuses on two such branches of image processing techniques: machine learning classification and multi-viewpoint analysis. Machine learning classification: this thesis provides an in-depth description of the implementation of machine learning methods for auroral image classification, from raw images to labeled data. The main conclusion of this work is that convolutional neural networks stand out as particularly suitable classifiers for auroral image data, achieving up to 91% average class-wise accuracy. A major challenge is that most auroral images have an ambiguous auroral form; these images cannot be readily labeled without establishing an auroral morphology in which each class is clearly defined. Multi-viewpoint analysis: three multi-viewpoint analysis techniques are evaluated and described in this work: triangulation, shell-projection and 3-D reconstruction. These techniques are used for estimating the volume distribution of artificially induced aurora and the height and horizontal distribution of a newly reported auroral feature: Lumikot aurora. The multi-viewpoint analysis techniques are compared, and methods for obtaining uncertainty estimates are suggested. Overall, this dissertation evaluates and describes auroral image processing techniques that require little or no user input. The presented methods may therefore facilitate statistical studies such as probability studies of auroral classes, investigations of the evolution and formation of auroral structures, and studies of the height and distribution of auroral displays. Furthermore, automatic classification and cataloging of large image data sets will support auroral scientists in finding the data of interest, reducing the time needed for manual inspection of auroral images.
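    The "average class-wise accuracy" quoted above is the mean of per-class recalls, which keeps rare auroral classes from being swamped by common ones (unlike plain accuracy). A small self-contained sketch of that metric, with hypothetical class names:

```python
def classwise_average_accuracy(y_true, y_pred):
    """Mean of per-class recall ('average class-wise accuracy').
    Each class contributes equally regardless of its frequency."""
    classes = sorted(set(y_true))
    per_class = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        per_class.append(correct / len(idx))
    return sum(per_class) / len(classes)
```

    With four "arc" images (three classified correctly) and one "diffuse" image (correct), plain accuracy is 0.8 but the class-wise average is (0.75 + 1.0) / 2 = 0.875.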

    Automated greenhouse gas plume detection from satellite data using an unsupervised clustering algorithm

    A crucial part of tackling climate change is the monitoring of human-caused greenhouse gas emissions. To reach a global scale, greenhouse gas measuring satellites appear to be the best solution. The massive amounts of data produced by these satellites have increased the need for automated, efficient tools to extract knowledge from the data. Emissions from point sources, such as power plants, can produce distinct plumes that are visible in satellite data. Automated plume detection is key to identifying and monitoring the largest sources of human-caused greenhouse gas emissions. This thesis presents a comprehensive literature review of existing plume detection methods. Moreover, a new unsupervised plume detection method, called SCEA (Spatial Clustering of Elevated-valued Areas), is introduced. Inspired by the DBSCAN algorithm, SCEA is a clustering algorithm that finds distinct high-valued areas in non-gridded data points. The performance of the SCEA algorithm is evaluated on the simulated SMARTCARB satellite data set, testing its ability to find point sources with co-located plumes under different noise scenarios. The SCEA algorithm reached a precision of 0.974, 0.884, and 0.661 in noise-free, low-noise, and high-noise scenarios, respectively. For point sources with annual emissions of 1000 tonnes, SCEA reached a recall of 0.758, 0.660, and 0.548 for noise-free, low-noise, and high-noise scenarios, respectively.
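    SCEA itself is not reproduced here, but its DBSCAN-inspired idea — keep only elevated measurements, then grow spatial clusters among the surviving points — can be sketched as follows. The parameter names and the minimum-size rule are assumptions for illustration, not the thesis's definitions:

```python
import math

def scea_like_clusters(points, value_threshold, eps, min_pts=3):
    """Toy sketch of an SCEA-style pass over non-gridded (x, y, value)
    points: threshold by value, then DBSCAN-like region growing with
    radius eps; clusters smaller than min_pts are discarded as noise."""
    elevated = [(x, y) for x, y, v in points if v >= value_threshold]
    unvisited = set(range(len(elevated)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            xi, yi = elevated[i]
            near = [j for j in unvisited
                    if math.hypot(elevated[j][0] - xi,
                                  elevated[j][1] - yi) <= eps]
            for j in near:
                unvisited.discard(j)
                cluster.append(j)
                frontier.append(j)
        if len(cluster) >= min_pts:
            clusters.append([elevated[i] for i in cluster])
    return clusters
```

    On a scene with two separated groups of elevated values plus scattered low-value background, this returns one cluster per plume-like area; unlike standard DBSCAN, the size check here is applied to whole clusters rather than to core points.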

    AI for climate science


    BoatNet: automated small boat composition detection using deep learning on satellite imagery

    Tracking and measuring national carbon footprints is key to achieving the ambitious goals set by the Paris Agreement on carbon emissions. According to statistics, more than 10% of global transportation carbon emissions result from shipping. However, accurate tracking of the emissions of the small boat segment is not well established. Past research looked into the role played by small boat fleets in terms of greenhouse gases, but has relied either on high-level technological and operational assumptions or on the installation of global navigation satellite system sensors to understand how this vessel class behaves; such research has mainly concerned fishing and recreational boats. With the advent of open-access satellite imagery and its ever-increasing resolution, such imagery can support innovative methodologies that could eventually lead to the quantification of greenhouse gas emissions. Our work used deep learning algorithms to detect small boats in three cities in the Gulf of California in Mexico. The work produced a methodology named BoatNet that can detect, measure and classify small boats as leisure or fishing boats, even in low-resolution and blurry satellite images, achieving an accuracy of 93.9% with a precision of 74.0%. Future work should focus on attributing boat activity to fuel consumption and an operational profile to estimate small boat greenhouse gas emissions in any given region.
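    Accuracy and precision answer different questions: the share of all image tiles classified correctly versus the share of flagged detections that are real boats. With hypothetical confusion counts chosen only to reproduce the figures quoted above:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard detector metrics from confusion counts:
    accuracy  = correct decisions over all decisions,
    precision = true detections over all detections."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    return accuracy, precision
```

    For example, 74 true positives, 26 false positives, 865 true negatives and 35 false negatives (made-up counts) give accuracy 0.939 and precision 0.74, matching the reported values; a high accuracy alongside a lower precision is typical when true boats are rare in the scene.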

    Beyond here and now: Evaluating pollution estimation across space and time from street view images with deep learning

    Advances in computer vision, driven by deep learning, allow for the inference of environmental pollution and its potential sources from images. The spatial and temporal generalisability of image-based pollution models is crucial to their real-world application but is currently understudied, particularly in low-income countries, where infrastructure for measuring the complex patterns of pollution is limited and modelling may therefore provide the most utility. We employed convolutional neural networks (CNNs) in two complementary classification models, both in an end-to-end approach and as an interpretable feature extractor (object detection), to estimate spatially and temporally resolved fine particulate matter (PM2.5) and noise levels in Accra, Ghana. The models were trained on a unique dataset of over 1.6 million images collected over 15 months at 145 representative locations across the city, paired with air and noise measurements. Both the end-to-end CNN and the object-based approach surpassed null-model benchmarks for predicting PM2.5 and noise at single locations, but performance deteriorated when applied to other locations. Model accuracy diminished when tested on images from locations unseen during training, but improved when a greater number of locations was sampled during model training, even if the total quantity of data was reduced. The end-to-end models used characteristics of images associated with atmospheric visibility for predicting PM2.5, and specific objects such as vehicles and people for noise. The results demonstrate the potential and challenges of image-based, spatiotemporal air pollution and noise estimation, and show that robust environmental modelling with images requires integration with traditional sensor networks.
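    The interpretable feature-extractor route turns each image into countable objects (e.g. detected vehicles) that a simple, inspectable model can map to a noise level. A toy one-variable least-squares fit over made-up counts — an illustration of the idea, not the paper's pipeline:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, e.g. mapping a
    per-image vehicle count x to a measured noise level y (dB)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b
```

    Because the features are object counts rather than opaque CNN activations, the fitted slope is directly interpretable (here, extra decibels per additional detected vehicle), which is the appeal of the object-based approach over the end-to-end one.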