
    Spectral absorption of biomass burning aerosol determined from retrieved single scattering albedo during ARCTAS

    Actinic flux, as well as aerosol chemical and optical properties, were measured aboard the NASA DC-8 aircraft during the ARCTAS (Arctic Research of the Composition of the Troposphere from Aircraft and Satellites) mission in spring and summer 2008. These measurements were used in a radiative transfer code to retrieve spectral (350-550 nm) aerosol single scattering albedo (SSA) for biomass burning plumes encountered on 17 April and 29 June. Retrieved SSA values were subsequently used to calculate the absorption Angstrom exponent (AAE) over the 350-500 nm range. Both plumes exhibited enhanced spectral absorption, with AAE values that exceeded 1 (6.78 ± 0.38 for 17 April and 3.34 ± 0.11 for 29 June). This enhanced absorption was primarily due to organic aerosol (OA), which contributed significantly to total absorption at all wavelengths for both 17 April (57.7%) and 29 June (56.2%). OA contributions to absorption were greater at UV wavelengths than at visible wavelengths in both cases. Differences in AAE values between the two cases were attributed to differences in plume age and thus to differences in the ratio of OA to black carbon (BC) concentrations. However, notable differences between the AAE values calculated for the OA alone (AAEOA) for 17 April (11.15 ± 0.59) and 29 June (4.94 ± 0.19) suggested that the differences in plume AAE values might also be due to differences in organic aerosol composition. The 17 April OA was much more oxidized than the 29 June OA, as indicated by its higher oxidation state (+0.16 vs. -0.32). Differences in the AAEOA, as well as in the overall AAE, were thus also possibly due to oxidation of biomass burning primary organic aerosol in the 17 April plume, which resulted in the formation of OA with a greater spectral dependence of absorption. © Author(s) 2012. CC Attribution 3.0 License
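    As a brief illustration of the AAE calculation described above (a sketch, not code from the study), the exponent can be estimated as the negative slope of a log-log fit of absorption against wavelength; the wavelength grid and absorption values below are placeholder numbers, not ARCTAS data.

```python
import numpy as np

# Placeholder spectral absorption coefficients (Mm^-1) on a 350-500 nm grid;
# the values are illustrative, not data from the ARCTAS retrievals.
wavelengths_nm = np.array([350.0, 400.0, 450.0, 500.0])
absorption = np.array([12.0, 7.5, 5.0, 3.6])

# Absorption is assumed to follow a power law, absorption ~ wavelength^(-AAE),
# so AAE is the negative slope of ln(absorption) vs. ln(wavelength).
slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(absorption), 1)
aae = -slope
print(f"AAE over 350-500 nm: {aae:.2f}")
```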

    To be or not to Be? - First Evidence for Neutrinoless Double Beta Decay

    Double beta decay is indispensable for solving the question of the neutrino mass matrix together with neutrino oscillation experiments. A recent analysis of the most sensitive experiment of the last nine years - the HEIDELBERG-MOSCOW experiment in Gran Sasso - yields a first indication of the neutrinoless decay mode. This result is the first evidence for lepton number violation and proves the neutrino to be a Majorana particle. We give the present status of the analysis in this report. It excludes several of the neutrino mass scenarios allowed by present neutrino oscillation experiments - only degenerate scenarios and those with inverse mass hierarchy survive. This result allows neutrinos to still play an important role as dark matter in the Universe. To improve the accuracy of the present result, considerably enlarged experiments are required, such as GENIUS. A GENIUS Test Facility has been funded and will come into operation by early 2003. Comment: 16 pages, latex, 10 figures. Talk presented at the International Conference "Neutrinos and Implications for Physics Beyond the Standard Model", Oct. 11-13, 2002, Stony Brook, USA, Proc. (2003) ed. by R. Shrock; see also the home page of the Heidelberg Non-Accelerator Particle Physics Group: http://www.mpi-hd.mpg.de/non_acc
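    As background (a standard textbook relation, not a formula given in this report), the neutrinoless double beta decay half-life is commonly related to the effective Majorana neutrino mass, which is how a measured half-life constrains the mass scenarios discussed above:

```latex
\left(T_{1/2}^{0\nu}\right)^{-1}
  = G^{0\nu}\,\left|M^{0\nu}\right|^{2}\,
    \frac{\langle m_{\beta\beta}\rangle^{2}}{m_e^{2}},
\qquad
\langle m_{\beta\beta}\rangle = \left|\sum_{i} U_{ei}^{2}\, m_{i}\right|
```

    Here G^{0\nu} is the phase-space factor, M^{0\nu} the nuclear matrix element, U the lepton mixing matrix, and m_i the neutrino mass eigenvalues.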

    SpaceNet MVOI: a Multi-View Overhead Imagery Dataset

    Detection and segmentation of objects in overhead imagery is a challenging task. The variable density, random orientation, small size, and instance-to-instance heterogeneity of objects in overhead imagery call for approaches distinct from existing models designed for natural scene datasets. Though new overhead imagery datasets are being developed, they almost universally comprise a single view taken from directly overhead ("at nadir"), failing to address a critical variable: look angle. By contrast, views vary in real-world overhead imagery, particularly in dynamic scenarios such as natural disasters where first looks are often more than 40 degrees off-nadir. This represents an important challenge to computer vision methods, as changing view angle adds distortions, alters resolution, and changes lighting. At present, the impact of these perturbations on algorithmic detection and segmentation of objects is untested. To address this problem, we present an open source Multi-View Overhead Imagery dataset, termed SpaceNet MVOI, with 27 unique looks from a broad range of viewing angles (-32.5 degrees to 54.0 degrees). Each of these images covers the same 665 square km geographic extent and is annotated with 126,747 building footprint labels, enabling direct assessment of the impact of viewpoint perturbation on model performance. We benchmark multiple leading segmentation and object detection models on: (1) building detection, (2) generalization to unseen viewing angles and resolutions, and (3) sensitivity of building footprint extraction to changes in resolution. We find that state-of-the-art segmentation and object detection models struggle to identify buildings in off-nadir imagery and generalize poorly to unseen views, presenting an important benchmark for exploring the broadly relevant challenge of detecting small, heterogeneous target objects in visually dynamic contexts. Comment: Accepted into IEEE International Conference on Computer Vision (ICCV) 201
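    To make the benchmarking task concrete, here is a minimal sketch of IoU-based matching and F1 scoring for building detection; it uses axis-aligned boxes as a simplified stand-in for footprint polygons (the official SpaceNet scoring operates on polygon IoU), and all thresholds and box values are illustrative.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def f1_score(predictions, ground_truth, threshold=0.5):
    """Greedy one-to-one matching of predicted to ground-truth boxes at an IoU threshold."""
    matched, tp = set(), 0
    for pred in predictions:
        best_j, best_iou = -1, threshold
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            score = iou(pred, gt)
            if score >= best_iou:
                best_j, best_iou = j, score
        if best_j >= 0:
            matched.add(best_j)
            tp += 1
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Toy example with one correct and one spurious detection.
preds = [(0, 0, 10, 10), (50, 50, 60, 60)]
gts = [(1, 1, 11, 11)]
print(f"F1 @ IoU 0.5: {f1_score(preds, gts):.2f}")
```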

    Digital image correlation (DIC) analysis of the 3 December 2013 Montescaglioso landslide (Basilicata, Southern Italy). Results from a multi-dataset investigation

    Image correlation remote sensing monitoring techniques are becoming key tools for providing effective qualitative and quantitative information suitable for natural hazard assessments, specifically for landslide investigation and monitoring. In recent years, these techniques have been successfully integrated and shown to be complementary and competitive with more standard remote sensing techniques, such as satellite or terrestrial Synthetic Aperture Radar interferometry. The objective of this article is to present an in-depth calibration and validation analysis of the Digital Image Correlation technique, applied here to measure landslide displacement. The availability of a multi-dataset for the 3 December 2013 Montescaglioso landslide, comprising different types of imagery such as LANDSAT 8 OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor), high-resolution airborne optical orthophotos, Digital Terrain Models and COSMO-SkyMed Synthetic Aperture Radar, allows the actual landslide displacement field to be retrieved, with values ranging from a few meters (2–3 m in the north-eastern sector of the landslide) to 20–21 m (local peaks on the central body of the landslide). Furthermore, comprehensive sensitivity analyses and statistics-based processing approaches are used to identify the role of the background noise that affects the whole dataset. This noise is directly proportional to the different geometric and temporal resolutions of the processed imagery. Moreover, the accurate evaluation of the environmental-instrumental background noise allowed the actual displacement measurements to be correctly calibrated and validated, leading to a better definition of the threshold values of the maximum Digital Image Correlation sub-pixel accuracy and reliability (ranging from 1/10 to 8/10 of a pixel) for each processed dataset.
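    As an illustration of the core operation behind Digital Image Correlation (a simplified sketch, not the processing chain used in this study), the shift between two co-registered image patches can be estimated from the peak of their FFT-based cross-correlation; real DIC workflows add windowing, sub-pixel peak interpolation, and the calibration and validation steps discussed above.

```python
import numpy as np

def pixel_offset(reference, target):
    """Estimate the integer-pixel shift that maps `target` onto `reference`
    from the peak of their FFT-based cross-correlation."""
    ref = reference - reference.mean()
    tgt = target - target.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    shape = np.array(corr.shape)
    # Shifts larger than half the patch size wrap around to negative offsets.
    peak[peak > shape / 2] -= shape[peak > shape / 2]
    return peak  # (row, col) displacement in pixels

# Toy example: a random patch displaced by (3, -2) pixels; expect roughly [3., -2.].
rng = np.random.default_rng(0)
patch = rng.random((64, 64))
displaced = np.roll(patch, shift=(3, -2), axis=(0, 1))
print(pixel_offset(displaced, patch))
```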

    Scan matching by cross-correlation and differential evolution

    Scan matching is an important task, solved in the context of many high-level problems including pose estimation, indoor localization, simultaneous localization and mapping, and others. Methods that are accurate and adaptive, and at the same time computationally efficient, are required to enable location-based services on autonomous mobile devices. Such devices usually have a wide range of high-resolution sensors but only limited processing power and a constrained energy supply. This work introduces a novel high-level scan matching strategy that uses a combination of two advanced algorithms recently used in this field: cross-correlation and differential evolution. The cross-correlation between two laser range scans is used as an efficient measure of scan alignment, and the differential evolution algorithm is used to search for the parameters of a transformation that aligns the scans. The proposed method was experimentally validated and showed a good ability to match laser range scans taken shortly after each other and an excellent ability to match laser range scans taken with longer time intervals between them.
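    A minimal sketch of this strategy under simplifying assumptions (2-D point scans rasterized onto a coarse occupancy grid, grid correlation as the alignment measure, and SciPy's differential_evolution as the optimizer); the actual method works directly on laser range scans and is tuned for resource-constrained hardware.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import differential_evolution

def rasterize(points, resolution=0.1, size=200):
    """Drop 2-D points into a coarse occupancy grid centred on the origin."""
    grid = np.zeros((size, size))
    idx = np.round(points / resolution).astype(int) + size // 2
    valid = (idx >= 0).all(axis=1) & (idx < size).all(axis=1)
    grid[idx[valid, 0], idx[valid, 1]] = 1.0
    return grid

def transform(points, dx, dy, theta):
    """Rotate the point set by theta and translate it by (dx, dy)."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T + np.array([dx, dy])

def negative_correlation(params, reference_grid, scan):
    """Fitness for the optimizer: negative cross-correlation between the
    (blurred) reference grid and the transformed, rasterized scan."""
    return -np.sum(reference_grid * rasterize(transform(scan, *params)))

# Toy example: the "scan" is the reference point set displaced by a known pose;
# differential evolution should recover (roughly) the correcting transformation.
reference = np.array([[x, y] for x in np.linspace(-1.0, 1.0, 40) for y in (-1.0, 1.0)])
scan = transform(reference, 0.4, -0.3, 0.2)

reference_grid = gaussian_filter(rasterize(reference), sigma=2.0)
result = differential_evolution(
    negative_correlation,
    bounds=[(-1.0, 1.0), (-1.0, 1.0), (-np.pi / 4, np.pi / 4)],
    args=(reference_grid, scan),
    seed=0,
)
print("Correcting (dx, dy, theta):", result.x)
```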

    A Deep Pyramid Deformable Part Model for Face Detection

    We present a face detection algorithm based on Deformable Part Models and deep pyramidal features. The proposed method, called DP2MFD, is able to detect faces of various sizes and poses in unconstrained conditions. It reduces the gap between training and testing of DPM on deep features by adding a normalization layer to the deep convolutional neural network (CNN). Extensive experiments on four publicly available unconstrained face detection datasets show that our method is able to capture the meaningful structure of faces and performs significantly better than many competitive face detection algorithms.
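    As an illustrative aside (the abstract does not specify the exact form of the normalization, so this is only one plausible choice, not the DP2MFD layer itself), a normalization layer over pyramidal CNN features could standardize each channel of each pyramid level before the DPM is applied:

```python
import numpy as np

def normalize_pyramid_features(feature_maps, eps=1e-6):
    """Illustrative per-level, per-channel z-score normalization of CNN feature maps.
    `feature_maps` is a list of arrays shaped (channels, height, width), one per
    pyramid level. (The normalization actually used by DP2MFD may differ.)"""
    normalized = []
    for fmap in feature_maps:
        mean = fmap.mean(axis=(1, 2), keepdims=True)
        std = fmap.std(axis=(1, 2), keepdims=True)
        normalized.append((fmap - mean) / (std + eps))
    return normalized

# Toy pyramid with two levels of random "deep features".
rng = np.random.default_rng(0)
pyramid = [rng.random((256, 32, 32)), rng.random((256, 16, 16))]
print([level.shape for level in normalize_pyramid_features(pyramid)])
```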

    Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness

    In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to misclassify it. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and (δ-)complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and experimentally evaluated it on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools, namely AI^2, Reluplex, and ReluVal.
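    The gradient-based counterexample search half of such a procedure can be sketched as follows (a toy illustration, not the Charon implementation): it ascends the misclassification margin inside an L-infinity ball around the input, while the abstraction-based proof search that would certify robustness when no counterexample is found is omitted.

```python
import numpy as np

def forward(x, w1, b1, w2, b2):
    """Tiny two-layer ReLU network returning class logits."""
    return w2 @ np.maximum(0.0, w1 @ x + b1) + b2

def counterexample_search(x0, label, params, epsilon=0.1, steps=200, lr=0.01):
    """Search for a point within an L-infinity ball of radius `epsilon` around
    `x0` that the network misclassifies (a counterexample to local robustness).
    Gradients are estimated by finite differences to keep the sketch
    dependency-free; a real tool would use automatic differentiation."""
    def margin(z):
        # Margin of the best wrong class over the true class; > 0 means misclassified.
        logits = forward(z, *params)
        return np.max(np.delete(logits, label)) - logits[label]

    x = x0.copy()
    for _ in range(steps):
        if margin(x) > 0.0:
            return x  # robustness violated at this point
        grad = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = 1e-4
            grad[i] = (margin(x + e) - margin(x - e)) / 2e-4
        # Signed-gradient ascent step, projected back onto the epsilon-ball.
        x = np.clip(x + lr * np.sign(grad), x0 - epsilon, x0 + epsilon)
    return None  # no counterexample found (this alone does not prove robustness)

# Toy example with random weights.
rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 4)), rng.normal(size=8),
          rng.normal(size=(3, 8)), rng.normal(size=3))
x0 = rng.normal(size=4)
label = int(np.argmax(forward(x0, *params)))
print(counterexample_search(x0, label, params))
```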