
    SAM-2 ground-truth plan: Correlative measurements for the Stratospheric Aerosol Measurement-2 (SAM 2) sensor on the Nimbus G satellite

    The SAM-2 will fly aboard the Nimbus-G satellite, scheduled for launch in the fall of 1978, and will measure stratospheric vertical profiles of aerosol extinction in high-latitude bands. The plan details the locations and times of the simultaneous satellite/correlative measurements for the nominal launch time, the rationale for the choice of correlative sensors, their characteristics and expected accuracies, and the conversion of their data to extinction profiles. The expected SAM-2 instrument performance and data-inversion results are presented. Various atmospheric models representative of polar stratospheric aerosols are used in the SAM-2 and correlative sensor analyses.

    A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry

    The demand for autonomous vehicles is increasing gradually owing to their enormous potential benefits. However, several challenges, such as vehicle localization, are involved in the development of autonomous vehicles. A simple and secure algorithm for vehicle positioning is proposed herein without massively modifying the existing transportation infrastructure. For vehicle localization, vehicles on the road are classified into two categories: host vehicles (HVs), which estimate the positions of other vehicles, and forwarding vehicles (FVs), which move in front of the HVs. The FV transmits modulated data from its tail (or back) light, and the camera of the HV receives that signal using optical camera communication (OCC). In addition, streetlight (SL) data are used to ensure the position accuracy of the HV. The HV position is determined so as to minimize the relative position variation between the HV and FV. Using photogrammetry, the distance between the FV or SL and the camera of the HV is calculated by measuring the area the target occupies on the image sensor. By comparing the change in distance between the HV and SLs with the change in distance between the HV and FV, the positions of the FVs are determined. The performance of the proposed technique is analyzed, and the results indicate a significant improvement. The experimental distance measurements validate the feasibility of the proposed scheme.
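The photogrammetric ranging step described above can be sketched with a simple pinhole-camera model: a planar target of known physical area A at distance d projects onto roughly A·f²/d² pixels, so d = f·√(A/a). The function name and the numbers below (focal length, tail-light area) are illustrative assumptions, not values from the paper.

```python
import math

def distance_from_image_area(focal_px: float, real_area_m2: float,
                             image_area_px2: float) -> float:
    """Estimate camera-to-target distance with the pinhole model.

    A fronto-parallel planar target of physical area A at distance d
    occupies about A * f^2 / d^2 pixels on the sensor, so
    d = f * sqrt(A / a).
    """
    return focal_px * math.sqrt(real_area_m2 / image_area_px2)

# Hypothetical example: a 0.02 m^2 tail light imaged over 800 px^2 by a
# camera with a 1000 px focal length is about 5 m away.
d = distance_from_image_area(1000.0, 0.02, 800.0)
```

Tracking how this distance changes over successive frames, for both the FV and the SLs, is what the comparison step in the abstract relies on.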

    Design, development and fabrication of a Precision Autocollimating Solar Sensor /PASS/

    Precision Autocollimating Solar Sensor /PASS/ for the Solar Pointing Aerobee Rocket Control System /SPARCS/ program.

    Lidar measurements of stratospheric aerosols over Menlo Park, California, October 1972 - March 1974

    During an 18-month period, 30 nighttime observations of stratospheric aerosols were made using a ground-based ruby lidar located near the Pacific coast of central California (37.5 deg. N, 122.2 deg. W). Vertical profiles of the lidar scattering ratio and the particulate backscattering coefficient were obtained by reference to a layer of assumed negligible particulate content. An aerosol layer centered near 21 km was clearly evident in all observations, but its magnitude and vertical distribution varied considerably throughout the observation period. A reduction of particulate backscattering in the 23- to 30-km layer during late January 1973 appears to have been associated with the sudden stratospheric warming which occurred at that time.
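The normalization to an assumed aerosol-free reference layer can be sketched as follows. This is a minimal illustration of the standard approach, neglecting differential attenuation; the function names and array shapes are assumptions, not the authors' code.

```python
import numpy as np

def scattering_ratio(signal, z, beta_mol, ref_idx):
    """Convert a lidar return into a scattering-ratio profile R(z).

    signal   : raw backscatter signal at altitudes z
    z        : altitude array (same units throughout)
    beta_mol : molecular backscatter coefficient profile
    ref_idx  : index of the reference altitude assumed free of
               particulates, where R is forced to 1
    """
    x = signal * z ** 2          # range-corrected signal
    ratio = x / beta_mol         # proportional to R(z)
    return ratio / ratio[ref_idx]

def particulate_backscatter(R, beta_mol):
    """beta_p(z) = (R(z) - 1) * beta_mol(z)."""
    return (R - 1.0) * beta_mol
```

With this convention, R = 1 wherever the atmosphere is purely molecular, and the excess R − 1 isolates the aerosol contribution, as in the 21-km layer reported above.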

    Robust sound event detection in bioacoustic sensor networks

    Bioacoustic sensors, sometimes known as autonomous recording units (ARUs), can record sounds of wildlife over long periods of time in scalable and minimally invasive ways. Deriving per-species abundance estimates from these sensors requires detection, classification, and quantification of animal vocalizations as individual acoustic events. Yet, variability in ambient noise, both over time and across sensors, hinders the reliability of current automated systems for sound event detection (SED), such as convolutional neural networks (CNN) in the time-frequency domain. In this article, we develop, benchmark, and combine several machine listening techniques to improve the generalizability of SED models across heterogeneous acoustic environments. As a case study, we consider the problem of detecting avian flight calls from a ten-hour recording of nocturnal bird migration, recorded by a network of six ARUs in the presence of heterogeneous background noise. Starting from a CNN yielding state-of-the-art accuracy on this task, we introduce two noise adaptation techniques, respectively integrating short-term (60 milliseconds) and long-term (30 minutes) context. First, we apply per-channel energy normalization (PCEN) in the time-frequency domain, which applies short-term automatic gain control to every subband in the mel-frequency spectrogram. Second, we replace the last dense layer in the network with a context-adaptive neural network (CA-NN) layer. Combining them yields state-of-the-art results that are unmatched by artificial data augmentation alone. We release a pre-trained version of our best performing system under the name of BirdVoxDetect, a ready-to-use detector of avian flight calls in field recordings. (Comment: 32 pages, in English. Submitted to the PLOS ONE journal in February 2019; revised August 2019; published October 2019.)
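The PCEN stage mentioned above admits a compact sketch: a first-order IIR filter tracks the slowly varying loudness of each mel subband, and dividing by that estimate acts as per-channel automatic gain control. The constants below are common defaults from the PCEN literature, not necessarily the settings used in this work.

```python
import numpy as np

def pcen(E, s=0.04, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a mel spectrogram E (freq x time).

    M tracks the smoothed energy of each subband via a first-order IIR
    filter; dividing by (eps + M)^alpha performs short-term automatic
    gain control, and the (. + delta)^r - delta^r step compresses the
    dynamic range.
    """
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r
```

Because the gain is computed per channel and per sensor, a loud but stationary noise floor is divided out, which is what makes PCEN attractive for heterogeneous ARU networks.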

    Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras

    Color-depth cameras (RGB-D cameras) have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras are provided with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, or high-precision object recognition and localization). In this paper, we propose a human-friendly, reliable and accurate calibration framework that enables easy estimation of both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model. This model unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method provides several important advantages over other state-of-the-art systems: it is general (i.e., well suited to different types of sensors), based on an easy and stable calibration protocol, provides greater calibration accuracy, and has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support our statements.

    Integration of LIDAR and IFSAR for mapping

    LiDAR and IfSAR data are now widely used for a number of applications, particularly those needing a digital elevation model. The data are often complementary to other data, such as aerial imagery and high-resolution satellite data. This paper reviews the current data sources and products, and then looks at the ways in which the data can be integrated for particular applications. The main platforms for LiDAR are either helicopters or fixed-wing aircraft, often operating at low altitudes; a digital camera is frequently included on the platform, and there is interest in using other sensors such as three-line cameras or hyperspectral scanners. IfSAR is used from satellite platforms or from aircraft; the latter are more compatible with LiDAR for integration. The paper examines the advantages and disadvantages of LiDAR and IfSAR for DEM generation and discusses the issues which still need to be dealt with. Examples of applications are given, particularly those involving the integration of different types of data, drawn from various sources, and future trends are examined.