
    Imaging through obscurants using time-correlated single-photon counting in the short-wave infrared

    Single-photon time-of-flight (ToF) light detection and ranging (LiDAR) systems have emerged in recent years as a candidate technology for high-resolution depth imaging in challenging environments, such as long-range imaging and imaging in scattering media. This Thesis investigates the potential of two ToF single-photon depth imaging systems based on the time-correlated single-photon counting (TCSPC) technique for imaging targets in highly scattering environments. The high sensitivity and picosecond timing resolution afforded by the TCSPC technique offer high-resolution depth profiling of remote targets while maintaining low optical power levels. Both systems comprised a pulsed picosecond laser source with an operating wavelength of 1550 nm, and employed InGaAs/InP SPAD detectors. The main benefits of operating in the short-wave infrared (SWIR) band include improved atmospheric transmission, reduced solar background, and increased laser eye-safety thresholds relative to visible-band sensors. Firstly, a monostatic scanning transceiver unit was used in conjunction with a single-element Peltier-cooled InGaAs/InP SPAD detector to attain sub-centimetre resolution three-dimensional images of long-range targets obscured by camouflage netting or immersed in high levels of scattering media. Secondly, a bistatic system, which employed a 32 × 32 pixel format InGaAs/InP SPAD array, was used to obtain rapid depth profiles of targets flood-illuminated by a higher-power pulsed laser source. The performance of this system was assessed in indoor and outdoor scenarios in the presence of obscurants and high ambient background levels. Bespoke image processing algorithms were developed to reconstruct both depth and intensity images from data with very low signal returns and short data acquisition times, illustrating the practicality of TCSPC-based LiDAR systems for real-time image acquisition in the SWIR wavelength region, even in the photon-starved regime. The Defence Science and Technology Laboratory (Dstl) National PhD Scheme
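
    As a rough illustration of the TCSPC principle described above, the following sketch (not taken from the thesis; the 2 ps bin width, the toy instrument response, and the cross-correlation peak search are assumptions) shows how a photon-timing histogram can be turned into a depth estimate:

```python
import numpy as np

# Minimal sketch of TCSPC depth estimation (illustrative values, not from the thesis).
C = 2.998e8            # speed of light, m/s
BIN_WIDTH = 2e-12      # assumed 2 ps histogram bin width

def depth_from_histogram(counts, irf, bin_width=BIN_WIDTH):
    """Estimate target depth from a photon-timing histogram.

    counts : per-bin photon counts accumulated over many laser pulses
    irf    : measured instrument response function (same bin width)
    The return peak is located by cross-correlating the histogram with the IRF,
    which is more robust than taking the raw maximum when the signal return is
    weak and sits on a background level.
    """
    counts = counts - np.median(counts)          # crude background subtraction
    xcorr = np.correlate(counts, irf, mode="same")
    peak_bin = int(np.argmax(xcorr))
    tof = peak_bin * bin_width                   # round-trip time of flight to the peak bin
    return 0.5 * C * tof                         # halve the round trip to get distance

# Toy usage: a weak return near bin 5000 (~1.5 m) buried in background counts.
rng = np.random.default_rng(0)
hist = rng.poisson(0.2, 8192).astype(float)
hist[4995:5005] += 5.0
irf = np.exp(-0.5 * (np.arange(-50, 51) / 10.0) ** 2)
print(f"estimated depth: {depth_from_histogram(hist, irf):.3f} m")
```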

    Time-Gated Topographic LIDAR Scene Simulation

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model has been developed at the Rochester Institute of Technology (RIT) for over a decade. The model is an established, first-principles based scene simulation tool that has been focused on passive multi- and hyper-spectral sensing from the visible to long wave infrared (0.4 to 14 µm). Leveraging photon mapping techniques utilized by the computer graphics community, a first-principles based elastic Light Detection and Ranging (LIDAR) model was incorporated into the passive radiometry framework so that the model calculates arbitrary, time-gated radiances reaching the sensor for both the atmospheric and topographic returns. The active LIDAR module handles a wide variety of complicated scene geometries, a diverse set of surface and participating media optical characteristics, multiple bounce and multiple scattering effects, and a flexible suite of sensor models. This paper will present the numerical approaches employed to predict sensor-reaching radiances and comparisons with analytically predicted results. Representative data sets generated by the DIRSIG model for a topographical LIDAR will be shown. Additionally, the results from phenomenological case studies including standard terrain topography, forest canopy penetration, and camouflaged hard targets will be presented.
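
    A minimal sketch of the kind of bookkeeping a time-gated LIDAR simulation must perform, accumulating returned pulse energy into time gates according to scatterer range; the gate parameters and energies below are invented, and this is not the DIRSIG implementation:

```python
import numpy as np

# Illustrative sketch (not the DIRSIG code): binning returned pulse energy into
# time gates from scatterer range, as a time-gated LIDAR simulator must do when
# building the range-resolved return.
C = 2.998e8  # speed of light, m/s

def gate_returns(ranges_m, energies, gate_start_s, gate_width_s, n_gates):
    """Accumulate per-scatterer returned energy into time gates.

    ranges_m : one-way ranges to the contributing scatterers
    energies : returned energy attributed to each scatterer
    The round-trip delay 2R/c selects the gate; returns falling outside the
    gated window are discarded.
    """
    delays = 2.0 * np.asarray(ranges_m) / C
    bins = np.floor((delays - gate_start_s) / gate_width_s).astype(int)
    gated = np.zeros(n_gates)
    for b, e in zip(bins, energies):
        if 0 <= b < n_gates:
            gated[b] += e
    return gated

# Toy usage: canopy returns near 20 m and a ground return at 22 m.
ranges = np.array([19.8, 20.1, 22.0])
energy = np.array([0.3, 0.4, 1.0])
print(gate_returns(ranges, energy, gate_start_s=1.2e-7, gate_width_s=1e-9, n_gates=50))
```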

    Short-Wave Infrared Diffuse Reflectance of Textile Materials

    This thesis analyzes the reflectance behavior of textiles in the short-wave infrared (SWIR) band (1–2 microns) in order to identify or design potential diagnostic tools that allow the remote detection of human presence in a scene. Analyzing the spectral response of fabrics in the SWIR band has gained significant interest in the remote sensing community since it provides a potential path to discriminate camouflaged clothing from backgrounds that appear similar to the object of interest in the visible band. Existing research, originating primarily from the textiles community, has thoroughly documented the behavior of clothing fabrics in the visible band. Other work has shown that differences in spectral response in the SWIR band allow for discrimination of materials that otherwise have the same visible spectral response. This work expands on those efforts in order to quantify the reflectance behavior and to better understand the physical basis for that behavior.
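
    One common way such SWIR reflectance differences are exploited for discrimination is a spectral-shape comparison; the sketch below uses a spectral angle metric with invented fabric and foliage spectra, and is illustrative rather than the method used in the thesis:

```python
import numpy as np

# Minimal sketch of one common way SWIR reflectance differences are exploited:
# the spectral angle between two reflectance spectra. The band grid and the
# example spectra below are invented for illustration only.

def spectral_angle(r1, r2):
    """Angle (radians) between two reflectance spectra; a small angle means similar shape."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    cos = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy spectra on a 1.0-2.0 micron grid: a fabric-like spectrum and a foliage-like
# spectrum that may look alike in the visible but separate in the SWIR bands.
wl = np.linspace(1.0, 2.0, 11)
fabric = 0.55 - 0.15 * np.exp(-((wl - 1.45) / 0.08) ** 2)                                  # assumed absorption dip
foliage = 0.45 * np.exp(-((wl - 1.0) / 0.9) ** 2) * (1 - 0.6 * np.exp(-((wl - 1.45) / 0.1) ** 2))
print(f"fabric vs foliage spectral angle: {spectral_angle(fabric, foliage):.3f} rad")
```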

    High-resolution, slant-angle scene generation and validation of concealed targets in DIRSIG

    Traditionally, synthetic imagery has been constructed to simulate images captured with low resolution, nadir-viewing sensors. Advances in sensor design have driven a need to simulate scenes not only at higher resolutions but also from oblique view angles. The primary efforts of this research include: real image capture, scene construction and modeling, and validation of the synthetic imagery in the reflective portion of the spectrum. High resolution imagery was collected of an area named MicroScene at the Rochester Institute of Technology using the Chester F. Carlson Center for Imaging Science's MISI and WASP sensors from an oblique view angle. Three Humvees, the primary targets, were placed in the scene under three different levels of concealment. Following the collection, a synthetic replica of the scene was constructed and then rendered with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model configured to recreate the scene both spatially and spectrally based on actual sensor characteristics. Finally, a validation of the synthetic imagery against the real images of MicroScene was accomplished using a combination of qualitative analysis, Gaussian maximum likelihood classification, grey-level co-occurrence matrix derived texture metrics, and the RX algorithm. The model was updated following each validation using a cyclical development approach. The purpose of this research is to provide a level of confidence in the synthetic imagery produced by DIRSIG so that it can be used to train and develop algorithms for real world concealed target detection.
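
    Of the validation tools listed, the RX anomaly detector has a compact standard formulation; the sketch below implements that usual global-background version (the array shapes and the toy scene are assumptions, and it is not claimed to match the exact configuration used in this research):

```python
import numpy as np

# Minimal sketch of the standard (global) RX anomaly detector: the Mahalanobis
# distance of each pixel spectrum from the scene background statistics.

def rx_scores(cube):
    """cube: (rows, cols, bands) image; returns a per-pixel RX score map."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)   # regularize for numerical stability
    cov_inv = np.linalg.inv(cov)
    D = X - mu
    scores = np.einsum("ij,jk,ik->i", D, cov_inv, D)        # squared Mahalanobis distance per pixel
    return scores.reshape(rows, cols)

# Toy usage: a 3-band background with one anomalous (target-like) pixel.
rng = np.random.default_rng(1)
img = rng.normal(0.3, 0.02, size=(32, 32, 3))
img[10, 20] = [0.8, 0.1, 0.6]
print("max RX score at:", np.unravel_index(np.argmax(rx_scores(img)), (32, 32)))
```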

    Combining omnidirectional vision with polarization vision for robot navigation

    Polarization is the phenomenon that describes the orientations of the oscillations of light waves, which are restricted in direction. Polarized light has multiple uses in the animal kingdom, ranging from foraging, defense, and communication to orientation and navigation. Chapter (1) briefly covers some important aspects of polarization and explains our research problem. We aim to use a polarimetric-catadioptric sensor, since many applications in computer vision and robotics can benefit from such a combination, especially robot orientation (attitude estimation) and navigation. Chapter (2) mainly covers the state of the art of vision-based attitude estimation. As unpolarized sunlight enters the Earth's atmosphere, it is Rayleigh-scattered by air and becomes partially linearly polarized. This skylight polarization provides a significant clue to understanding the environment: its state conveys the information needed to obtain the sun orientation. Robot navigation, sensor planning, and many other applications may benefit from this navigation clue. Chapter (3) covers the state of the art in capturing skylight polarization patterns using omnidirectional sensors (e.g. fisheye and catadioptric sensors). It also explains the skylight polarization characteristics and gives a new theoretical derivation of the skylight angle of polarization pattern. Our aim is to obtain an omnidirectional 360° view combined with polarization characteristics. Hence, this work is based on catadioptric sensors, which are composed of reflective surfaces and lenses. Usually the reflective surface is metallic, and hence the incident skylight polarization state, which is mostly partially linearly polarized, becomes elliptically polarized after reflection. Given the measured reflected polarization state, we want to recover the incident polarization state. Chapter (4) proposes a method to measure the light polarization parameters using a catadioptric sensor; the possibility of measuring the incident Stokes vector is demonstrated given three of the four components of the reflected Stokes vector. Once the incident polarization patterns are available, the solar angles can be directly estimated from these patterns. Chapter (5) discusses polarization-based robot orientation and navigation and proposes new algorithms to estimate these solar angles; to the best of our knowledge, this work is the first to estimate the sun zenith angle from the incident polarization patterns. We also propose to estimate the orientation of a vehicle from these polarization patterns. Finally, the work is concluded and possible future research directions are discussed in chapter (6). More examples of skylight polarization patterns, their calibration, and the proposed applications are given in appendix (B). Our work may pave the way to move from the conventional polarization vision world to the omnidirectional one. It enables bio-inspired robot orientation and navigation applications and possible outdoor localization based on skylight polarization patterns, where the solar angles at a known date and time of day can be used to infer the vehicle's geographical location.
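
    A minimal sketch of the basic quantities involved: recovering the linear Stokes parameters, degree of linear polarization, and angle of polarization from intensities measured behind a linear polarizer at four orientations. The measurement values are invented, and this is not the thesis' catadioptric calibration:

```python
import numpy as np

# Minimal sketch: linear Stokes parameters, degree of linear polarization (DoLP)
# and angle of polarization (AoP) from polarizer-filtered intensities at
# 0, 45, 90 and 135 degrees. Values below are illustrative only.

def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 vs -45 degree preference
    return s0, s1, s2

def dolp_aop(s0, s1, s2):
    dolp = np.hypot(s1, s2) / s0
    aop = 0.5 * np.arctan2(s2, s1)       # radians, measured from the 0-degree polarizer axis
    return dolp, aop

# Toy usage: a partially linearly polarized skylight sample.
s0, s1, s2 = linear_stokes(i0=1.30, i45=1.35, i90=0.70, i135=0.65)
dolp, aop = dolp_aop(s0, s1, s2)
print(f"DoLP = {dolp:.2f}, AoP = {np.degrees(aop):.1f} deg")
```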

    Finding a signal hidden among noise: how can predators overcome camouflage strategies?

    This is the final version, available on open access from the Royal Society via the DOI in this record. Substantial progress has been made in the last 15 years regarding how prey use a variety of visual camouflage types to exploit both predator visual processing and cognition, including background matching, disruptive coloration, countershading, and masquerade. In contrast, much less attention has been paid to how predators might overcome these defences. Such strategies might include the evolution of more acute senses, the co-opting of other senses not targeted by camouflage, changes in cognition such as forming search images, and using behaviours that change the relationship between the cryptic individual and the environment or disturb prey and cause movement. Here we evaluate the methods through which visual camouflage prevents detection and recognition, and discuss if and how predators might evolve, develop, or learn counter-adaptations to overcome these. Biotechnology & Biological Sciences Research Council (BBSRC); Royal Society Dorothy Hodgkin Fellowship

    Aperture synthesis algorithm based on a subspace detector

    This article addresses the imaging of targets located under forest cover using a wideband Synthetic Aperture Radar (SAR) operating in the UHF-VHF band (centre frequency between 100 MHz and 1 GHz). Starting from the idea that a target is composed of a set of plates, we develop an original imaging algorithm based on a subspace detector. We show that, whatever the orientation of a plate, the set of its responses belongs to a restricted subspace, and we project the real received signal onto this subspace. The algorithm is then tested on simple simulated cases where the target is a single plate, and its detection performance is compared with a traditional SAR imaging algorithm. The results show a clear improvement in localization and detection performance for plates embedded in white Gaussian noise.
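
    A minimal sketch of a matched-subspace detection statistic of the kind the article builds on; the random basis below stands in for the plate-response subspace, and the statistic shown (fraction of signal energy captured by the projection) is an assumption for illustration, not the article's exact detector:

```python
import numpy as np

# Minimal sketch of a matched-subspace detection statistic: project the received
# signal onto the subspace spanned by the plate responses and measure how much
# of its energy the subspace captures, compared against white noise.

def subspace_statistic(y, H):
    """Fraction of signal energy captured by span(H).

    y : received signal vector (length N)
    H : (N, r) matrix whose columns span the plate-response subspace
    """
    Q, _ = np.linalg.qr(H)               # orthonormal basis of the subspace
    y = np.asarray(y, float)
    proj = Q @ (Q.T @ y)                 # orthogonal projection of y onto span(H)
    return float(proj @ proj) / float(y @ y)

# Toy usage: a signal lying in the subspace plus white Gaussian noise scores much
# higher than noise alone, which is the behaviour the detector exploits.
rng = np.random.default_rng(2)
H = rng.normal(size=(256, 4))            # stand-in for plate responses over orientation
target = H @ rng.normal(size=4)
noise = rng.normal(size=256)
print("target + noise:", round(subspace_statistic(target + 0.5 * noise, H), 3))
print("noise only:    ", round(subspace_statistic(noise, H), 3))
```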

    Analysis of Polarimetric Synthetic Aperture Radar and Passive Visible Light Polarimetric Imaging Data Fusion for Remote Sensing Applications

    The recent launch of spaceborne (TerraSAR-X, RADARSAT-2, ALOS-PALSAR, RISAT) and airborne (SIR-C, AIRSAR, UAVSAR, Pi-SAR) polarimetric radar sensors, with the capability of imaging through day and night in almost all weather conditions, has made polarimetric synthetic aperture radar (PolSAR) image interpretation and analysis an active area of research. PolSAR image classification is sensitive to object orientation and scattering properties. In recent years, significant work has been done in many areas including agriculture, forestry, oceanography, geology, and terrain analysis. Visible light passive polarimetric imaging has also emerged as a powerful tool in remote sensing for enhanced information extraction. The intensity image provides information on materials in the scene while polarization measurements capture surface features, roughness, and shading, often uncorrelated with the intensity image. Advantages of visible light polarimetric imaging include a high dynamic range of polarimetric signatures and being comparatively straightforward to build and calibrate. This research is about characterization and analysis of the basic scattering mechanisms for information fusion between PolSAR and passive visible light polarimetric imaging. Relationships between these two modes of imaging are established using laboratory measurements and image simulations using the Digital Image and Remote Sensing Image Generation (DIRSIG) tool. A novel low cost laboratory-based S-band (2.4 GHz) PolSAR instrument is developed that is capable of capturing 4-channel fully polarimetric SAR image data. Simple radar targets are formed and system calibration is performed in terms of radar cross-section. Experimental measurements are made using a combination of the PolSAR instrument and a visible light polarimetric imager for scenes capturing basic scattering mechanisms for phenomenology studies. The three major scattering mechanisms studied in this research are single, double, and multiple bounce. Single bounce occurs from flat surfaces like lakes, rivers, bare soil, and oceans. Double bounce can be observed from two adjacent surfaces where one horizontal flat surface is near a vertical surface, such as buildings and other vertical structures. Randomly oriented scatterers in homogeneous media produce a multiple bounce scattering effect, which occurs in forest canopies and vegetated areas. Relationships between Pauli color components from PolSAR and the Degree of Linear Polarization (DOLP) from passive visible light polarimetric imaging are established using real measurements. Results show that higher values of the red channel in the Pauli color image (|HH-VV|) correspond to high DOLP from the double bounce effect. A novel information fusion technique is applied to combine information from the two modes. In this research, it is demonstrated that the DOLP from passive visible light polarimetric imaging can be used to separate classes in terms of scattering mechanisms in the PolSAR data. The separation of these three classes in terms of scattering mechanisms has applications in land cover classification and anomaly detection. The fusion of information from these particular two modes of imaging, i.e. PolSAR and passive visible light polarimetric imaging, is a largely unexplored area in remote sensing, and the main challenge in this research is to identify areas and scenarios where information fusion between the two modes is advantageous for separation of the classes in terms of scattering mechanisms relative to the separation achieved with PolSAR alone.
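
    A minimal sketch of the two quantities being related: the Pauli colour components from the PolSAR scattering-matrix channels and the DOLP from passive Stokes imagery. The pixel values below are invented for illustration:

```python
import numpy as np

# Minimal sketch: Pauli decomposition magnitudes from complex HH, VV, HV channels,
# and degree of linear polarization from the first three Stokes images.

def pauli_components(hh, vv, hv):
    """Pauli colour components; red (|HH-VV|) is associated with double bounce."""
    return {
        "blue":  np.abs(hh + vv) / np.sqrt(2),   # single (odd) bounce
        "red":   np.abs(hh - vv) / np.sqrt(2),   # double (even) bounce
        "green": np.sqrt(2) * np.abs(hv),        # volume / multiple scattering
    }

def dolp(s0, s1, s2):
    """Degree of linear polarization from Stokes images S0, S1, S2."""
    return np.hypot(s1, s2) / np.clip(s0, 1e-12, None)

# Toy usage: a dihedral-like pixel (HH and VV roughly out of phase) alongside a
# high-DOLP passive measurement, the pairing reported above for double bounce.
pauli = pauli_components(hh=np.array([1.0 + 0j]), vv=np.array([-0.9 + 0j]), hv=np.array([0.05 + 0j]))
print({k: round(float(v[0]), 3) for k, v in pauli.items()})
print("DoLP:", round(float(dolp(np.array([1.0]), np.array([0.55]), np.array([0.35]))[0]), 3))
```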