
    Mapping Migratory Bird Prevalence Using Remote Sensing Data Fusion

    This is the publisher’s final PDF. The published article is copyrighted by the Public Library of Science and can be found at: http://www.plosone.org/home.action.
    Background: Improved maps of species distributions are important for effective management of wildlife under increasing anthropogenic pressures. Recent advances in lidar and radar remote sensing have shown considerable potential for mapping forest structure and habitat characteristics across landscapes. However, their relative efficacies and integrated use in habitat mapping remain largely unexplored. We evaluated the use of lidar, radar, and multispectral remote sensing data in predicting multi-year bird detections, or prevalence, for 8 migratory songbird species in the unfragmented temperate deciduous forests of New Hampshire, USA.
    Methodology and Principal Findings: A set of 104 predictor variables describing vegetation vertical structure and variability from lidar, phenology from multispectral data, and backscatter properties from radar data were derived. We tested the accuracies of these variables in predicting prevalence using Random Forests regression models. All data sets showed more than 30% predictive power, with radar models having the lowest and multi-sensor synergy ("fusion") models having the highest accuracies. Fusion explained between 54% and 75% of the variance in prevalence for all the birds considered. Stem density from discrete-return lidar and phenology from multispectral data were among the best predictors. Further analysis revealed different relationships between the remote sensing metrics and bird prevalence. Spatial maps of prevalence were consistent with known habitat preferences for the bird species.
    Conclusion and Significance: Our results highlight the potential of integrating multiple remote sensing data sets using machine-learning methods to improve habitat mapping. Multi-dimensional habitat structure maps such as those generated in this study can significantly advance forest management and ecological research by facilitating fine-scale studies at both stand and landscape levels.
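
    The abstract above describes Random Forests regression over roughly 104 fused lidar, radar, and multispectral predictors. The minimal Python sketch below outlines that modeling step only; the predictor matrix, sample size, out-of-bag scoring choice, and all variable names are assumptions for illustration, not the authors' code or data.

        # Minimal sketch (not the authors' implementation): Random Forests regression
        # of bird prevalence on fused remote sensing predictors, as described above.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Placeholder predictor matrix: rows = survey plots, columns = the ~104
        # lidar, radar, and multispectral metrics mentioned in the abstract.
        X = rng.normal(size=(500, 104))
        # Placeholder response: prevalence (fraction of survey years with a detection).
        y = rng.uniform(0.0, 1.0, size=500)

        # Random Forests regression; out-of-bag R^2 is one way to approximate the
        # "variance explained" reported for the fusion models.
        rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
        rf.fit(X, y)
        print(f"Out-of-bag R^2: {rf.oob_score_:.2f}")

        # Variable importances indicate which metrics (e.g., lidar stem density,
        # multispectral phenology) contribute most to the prediction.
        top = np.argsort(rf.feature_importances_)[::-1][:10]
        print("Top predictor indices:", top)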

    Friction, Vibration and Dynamic Properties of Transmission System under Wear Progression

    This reprint focuses on wear and fatigue analysis, the dynamic properties of coating surfaces in transmission systems, and non-destructive condition monitoring for the health management of transmission systems. Transmission systems play a vital role in various types of industrial structures, including wind turbines, vehicles, mining and material-handling equipment, offshore vessels, and aircraft. Surface wear is an inevitable phenomenon during the service life of transmission systems (such as gearboxes, bearings, and shafts), and wear propagation can reduce the durability of the contact coating surface. As a result, the performance of the transmission system can degrade significantly, which can cause a sudden shutdown of the whole system and lead to unexpected economic losses and accidents. Therefore, to ensure adequate health management of the transmission system, it is necessary to investigate the friction, vibration, and dynamic properties of its contact coating surface and to monitor its operating conditions.

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across these research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations of this challenging topic by supplying sufficient detail and references.
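
    As a small illustration of the 2D/3D-to-4D shift described above, the sketch below stacks co-registered multitemporal acquisitions into a (time, band, row, col) array and derives simple per-pixel temporal features from it. The array shapes, the random placeholder data, and the choice of temporal statistics are assumptions for illustration only, not from the paper.

        # Minimal sketch (assumptions only): a 4D (time, band, row, col) data
        # structure for a multitemporal image stack, as discussed in the abstract.
        import numpy as np

        rng = np.random.default_rng(0)
        n_times, n_bands, rows, cols = 12, 4, 256, 256

        # One co-registered multispectral acquisition per revisit date (placeholder values).
        cube_4d = rng.random((n_times, n_bands, rows, cols), dtype=np.float32)

        # Example of the extra information the time axis provides: per-pixel,
        # per-band temporal mean and variance, usable as features for fusion.
        temporal_mean = cube_4d.mean(axis=0)   # shape: (band, row, col)
        temporal_var = cube_4d.var(axis=0)     # shape: (band, row, col)
        print(temporal_mean.shape, temporal_var.shape)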

    Continuous time-varying biasing approach for spectrally tunable infrared detectors

    In a recently demonstrated algorithmic spectral-tuning technique by Jang et al. [Opt. Express 19, 19454-19472 (2011)], the reconstruction of an object’s emissivity at an arbitrarily specified spectral window of interest in the long-wave infrared region was achieved. The technique relied upon forming a weighted superposition of a series of photocurrents from a quantum dots-in-a-well (DWELL) photodetector operated at discrete static biases that were applied serially. Here, the technique is generalized such that a continuously varying biasing voltage is employed over an extended acquisition time, in place of a series of fixed biases applied over each sub-acquisition time, which eliminates the need for the post-processing step comprising the weighted superposition of the discrete photocurrents. To enable this capability, an algorithm is developed for designing the time-varying bias for an arbitrary spectral-sensing window of interest. Since continuous-time biasing can be implemented within the readout circuit of a focal-plane array, this generalization would pave the way for implementing algorithmic spectral tuning in focal-plane arrays within each frame time without the need for on-sensor multiplications and additions. The technique is validated by means of simulations in the context of spectrometry and object classification, using experimental data for the DWELL under realistic signal-to-noise ratios.
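
    A minimal sketch of the idea, under stated assumptions: the bias points, superposition weights, and photocurrent model below are invented for illustration, and the time-allocation scheme is a crude stand-in for the bias-design algorithm mentioned above. It only shows how a single integration over a time-varying bias can reproduce the weighted superposition otherwise computed in post-processing.

        # Illustrative sketch (not the authors' algorithm): discrete-bias weighted
        # superposition vs. an equivalent time-weighted acquisition.
        import numpy as np

        biases = np.linspace(0.5, 3.0, 8)                                   # assumed DWELL bias points (V)
        weights = np.array([0.1, -0.2, 0.4, 0.3, -0.1, 0.2, 0.15, 0.05])    # assumed superposition weights

        def photocurrent(v):
            """Placeholder bias-dependent DWELL response (arbitrary model)."""
            return np.exp(-0.5 * (v - 1.8) ** 2)

        # Discrete-bias baseline: measure at each static bias, then superpose in post-processing.
        discrete_output = float(np.sum(weights * photocurrent(biases)))

        # Continuous-bias idea (crude realization): allocate dwell time within one
        # acquisition in proportion to |weight|, with polarity handling the sign,
        # so the time integral of the photocurrent reproduces the weighted sum.
        total = np.sum(np.abs(weights))
        dwell_fractions = np.abs(weights) / total
        continuous_output = float(np.sum(np.sign(weights) * dwell_fractions * photocurrent(biases)) * total)

        print(discrete_output, continuous_output)  # agree up to the assumed normalization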

    Fusion of Video and Multi-Waveform FMCW Radar for Traffic Surveillance

    Modern frequency-modulated continuous-wave (FMCW) radar technology provides the ability to modify the system transmission frequency as a function of time, which in turn provides the ability to generate multiple output waveforms from a single radar unit. Current low-power multi-waveform FMCW radar techniques lack the ability to reliably associate measurements from the various waveform sections in the presence of multiple targets and multiple false detections within the field of view. Two approaches are developed here to address this problem. The first approach takes advantage of the relationships between the waveform segments to generate a weighting function for candidate combinations of measurements from the waveform sections. This weighting function is then used to choose the best candidate combinations to form polar-coordinate measurements. Simulations show that this approach provides a ten to twenty percent increase in the probability of correct association over the current approach while reducing the number of false alarms generated in the process, but it still fails to form a measurement if a detection from a waveform section is missing. The second approach models the multi-waveform FMCW radar as a set of independent sensors and uses distributed data fusion to fuse estimates from those individual sensors within a tracking structure. Tracking in this approach is performed directly on the raw frequency and angle measurements from the waveform segments, which removes the need for data association between the measurements from the individual waveform segments. The distributed data fusion model is then extended to incorporate a video sensor, which provides additional angular and identification information to the system. The combination of the radar and vision sensors, as an end result, provides an enhanced roadside tracking system.
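
    As a rough illustration of the first approach, the sketch below scores candidate pairings of beat-frequency detections from two waveform sections using standard triangular-FMCW relations and keeps the most plausible pairs as range/velocity measurements. All radar parameters, detection values, and the weighting function are assumptions for illustration, not the authors' implementation.

        # Illustrative sketch (assumed parameters): pairing up-/down-chirp detections
        # from a multi-waveform FMCW radar and converting a pair to range and velocity.
        import itertools
        import numpy as np

        C = 3e8          # speed of light (m/s)
        FC = 77e9        # assumed carrier frequency (Hz)
        B = 150e6        # assumed sweep bandwidth (Hz)
        T = 1e-3         # assumed sweep duration (s)

        def pair_to_range_velocity(f_up, f_down):
            """Standard triangular-FMCW relations: sum gives range, difference gives Doppler."""
            f_range = 0.5 * (f_up + f_down)
            f_doppler = 0.5 * (f_down - f_up)
            rng_m = f_range * C * T / (2.0 * B)
            vel_ms = f_doppler * C / (2.0 * FC)
            return rng_m, vel_ms

        def best_pairs(up_detections, down_detections, max_speed=60.0):
            """Score every candidate (up, down) combination and keep the most plausible,
            penalizing pairs that imply an implausible radial speed (assumed weighting)."""
            scored = []
            for f_up, f_down in itertools.product(up_detections, down_detections):
                rng_m, vel_ms = pair_to_range_velocity(f_up, f_down)
                if rng_m <= 0:
                    continue
                weight = np.exp(-(vel_ms / max_speed) ** 2)   # assumed weighting function
                scored.append((weight, rng_m, vel_ms))
            return sorted(scored, reverse=True)

        # Example: two detections per waveform section, including one false alarm.
        print(best_pairs([20e3, 35e3], [22e3, 80e3]))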