135 research outputs found

    Synthetic Aperture LADAR Automatic Target Recognizer Design and Performance Prediction via Geometric Properties of Targets

    Synthetic Aperture LADAR (SAL) differs from Synthetic Aperture RADAR (SAR) in several aspects of its phenomenology, making it a promising candidate for automatic target recognition (ATR). The diffuse nature of SAL scattering results in more pixels on target, and optical wavelengths offer centimeter-class resolution with an aperture baseline 10,000 times smaller than a SAR baseline. While diffuse scattering and optical wavelengths have several advantages, they also bring challenges: the diffuse nature of SAL leads to a more pronounced speckle effect than in the SAR case, and optical wavelengths are more susceptible to atmospheric noise, leading to distortions in formed imagery. While these advantages and disadvantages are studied and understood in theory, they have yet to be put into practice. This dissertation aims to quantify the impact that switching from specular SAR to diffuse SAL has on algorithm design. In addition, a methodology for performance prediction and template generation is proposed, given the geometric and physical properties of CAD models. This methodology does not rely on forming images, and it alleviates the computational burden of generating multiple speckle fields and performing redundant ray tracing. This dissertation intends to show that the performance of template-matching ATRs on SAL imagery can be accurately and rapidly estimated by analyzing the physical and geometric properties of CAD models.
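As a rough illustration of the template-matching setting described above (not the dissertation's actual methodology), the sketch below scores a speckled image chip against a noise-free template with normalized cross-correlation; the unit-mean exponential multiplier is a standard model for fully developed speckle intensity. All array sizes and the random seed are arbitrary.

```python
import numpy as np

def ncc_score(image, template):
    """Normalized cross-correlation between an image chip and a template."""
    a = image - image.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
template = rng.random((8, 8))                     # stand-in for a target template
# Fully developed speckle: unit-mean exponential intensity fluctuations
speckled = template * rng.exponential(1.0, size=template.shape)
score = ncc_score(speckled, template)             # typically well below 1.0
```

Repeating the speckle draw many times and histogramming `score` is the kind of experiment the performance-prediction methodology aims to replace with closed-form analysis of the CAD geometry.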

    Pose independent target recognition system using pulsed Ladar imagery

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 95-97). Although a number of object recognition techniques have been developed to process LADAR-scanned terrain scenes, these techniques have had limited success in target discrimination, in part due to low-resolution data and limits in available computation power. We present a pose-independent Automatic Target Detection and Recognition system that uses data from an airborne 3D imaging LADAR sensor. The system uses geometric shape and size signatures from target models to detect and recognize targets under heavy canopy and camouflage cover in extended terrain scenes. A method for data integration was developed to register multiple scene views and obtain a more complete 3D surface signature of a target. Automatic target detection was performed using the general approach of "3D cueing," which determines and ranks regions of interest within a large-scale scene based on the likelihood that they contain the respective target. Each region of interest is then passed to an ATR algorithm to accurately identify the target from among a library of target models. Automatic target recognition was performed using spin-image surface matching, a pose-independent algorithm that determines correspondences between a scene and a target of interest. Given a region of interest within a large-scale scene, the ATR algorithm either identifies the target from among a library of 10 target models or reports a "none of the above" outcome. The system performance was demonstrated on five measured scenes with targets both out in the open and under heavy canopy cover, where the target occupied between 1% and 10% of the scene by volume. The ATR section of the system was successfully demonstrated on twelve measured data scenes with targets both out in the open and under heavy canopy and camouflage cover. Correct target identification was also demonstrated for targets with multiple movable parts in arbitrary orientations. The system achieved a high recognition rate (over 99%) along with a low false alarm rate (less than 0.01%). The contributions of this thesis research are: 1) I implemented a novel technique for reconstructing multiple-view 3D LADAR scenes; 2) I demonstrated that spin-image-based detection and recognition is feasible for terrain data collected in the field with a sensor that may be used in a tactical situation; and 3) I demonstrated recognition of articulated objects with multiple movable parts. Immediate benefits of the presented work will be to the area of Automatic Target Recognition of military ground vehicles, where the vehicles of interest may include articulated components with variable position relative to the body and come in many possible configurations. Other application areas include human detection and recognition for Homeland Security, and registration of large or extended terrain scenes. by Alexandru N. Vasile. M.Eng.
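The core of the spin-image descriptor mentioned above can be sketched in a few lines: each 3D point is mapped to cylindrical coordinates (radial distance alpha, signed height beta) about an oriented basis point, and the pairs are binned into a 2-D histogram. This is a minimal sketch; the thesis' actual pipeline adds support-angle filtering, mesh-resolution normalization, and correspondence verification, and the bin counts and extent below are arbitrary.

```python
import numpy as np

def spin_image(points, p, n, bins=8, extent=1.0):
    """2-D histogram of (alpha, beta) coordinates of `points` about the
    oriented point (p, n) -- the core of the spin-image descriptor."""
    n = n / np.linalg.norm(n)
    d = points - p
    beta = d @ n                                          # height along the normal
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta**2, 0.0))
    hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0.0, extent], [-extent, extent]])
    return hist

rng = np.random.default_rng(1)
cloud = rng.uniform(-1, 1, size=(500, 3))                 # toy stand-in for a scan
img = spin_image(cloud, p=np.zeros(3), n=np.array([0.0, 0.0, 1.0]))
```

Because alpha and beta depend only on distances relative to the point and its surface normal, the histogram is invariant to rigid pose, which is what makes scene-to-model matching pose-independent.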

    Display and Analysis of Tomographic Reconstructions of Multiple Synthetic Aperture LADAR (SAL) images

    Synthetic aperture ladar (SAL) is similar to synthetic aperture radar (SAR) in that it can create range/cross-range slant-plane images of the illuminated scatterers; however, SAL wavelengths are 10,000x smaller than SAR wavelengths, enabling diffraction-limited beam widths from a relatively narrow real aperture. These relatively narrow real-aperture resolutions allow multiple slant planes to be created for a single target with reasonable range/aperture combinations. The multiple slant planes can be projected into a single slant-plane projection (as in SAR), or they can be displayed as a 3-D image with asymmetric resolutions, diffraction limited in the dimension orthogonal to the SAL baseline. Multiple images with diversity in angle orthogonal to the SAL baseline can be used to synthesize resolution with tomographic techniques and enhance the diffraction-limited resolution. The goal of this research is to explore methods to enhance the diffraction-limited resolutions with multiple observations and/or multiple slant-plane imaging with SAL systems. Specifically, metrics associated with the information content of the tomography-based three-dimensional reconstructions of SAL intensity imagery will be investigated to see how they change as the number of slant planes in the SAL images and the number of elevation observations are varied.
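As an illustrative sketch of the tomographic idea (not the specific reconstructions or metrics studied in this work), unfiltered backprojection over even a handful of viewing angles already concentrates energy at a point scatterer's location. The grid size, angles, and point-target projections below are made up for illustration.

```python
import numpy as np

def backproject(projections, angles_deg, size):
    """Unfiltered backprojection: smear each 1-D projection back across
    the grid along its viewing angle and average over the views."""
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    recon = np.zeros((size, size))
    for proj, ang in zip(projections, angles_deg):
        t = np.deg2rad(ang)
        # Detector-axis coordinate of each pixel for this view
        s = xs * np.cos(t) + ys * np.sin(t) + (size - 1) / 2.0
        idx = np.clip(np.round(s).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon / len(angles_deg)

size = 9
angles = [0.0, 45.0, 90.0, 135.0]
proj = np.zeros(size); proj[4] = 1.0       # point scatterer at scene centre
recon = backproject([proj] * len(angles), angles, size)
```

Only the centre pixel is consistent with all four projections, so adding views sharpens the peak relative to the streaks, which is the mechanism by which angular diversity synthesizes resolution beyond the single-image diffraction limit.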

    Spectral LADAR: Active Range-Resolved Imaging Spectroscopy

    Imaging spectroscopy using ambient or thermally generated optical sources is a well-developed technique for capturing two-dimensional images with high per-pixel spectral resolution. The per-pixel spectral data is often a sufficient sampling of a material's backscatter spectrum to infer chemical properties of the constituent material and aid in substance identification. Separately, conventional LADAR sensors use quasi-monochromatic laser radiation to create three-dimensional images of objects at high angular resolution compared to RADAR. Advances in dispersion-engineered photonic crystal fibers in recent years have made high-spectral-radiance optical supercontinuum sources practical, enabling this study of Spectral LADAR, a continuous polychromatic spectrum augmentation of conventional LADAR. This imaging concept, which combines multi-spectral and 3D sensing at a physical level, is demonstrated with 25 independent and parallel LADAR channels and generates point cloud images with three spatial dimensions and one spectral dimension. The independence of spectral bands is a key characteristic of Spectral LADAR: each spectral band maintains a separate time waveform record, from which target parameters are estimated. Accordingly, the spectrum computed for each backscatter reflection is independently and unambiguously range unmixed from the multiple target reflections that may arise from transmission of a single panchromatic pulse. This dissertation presents the theoretical background of Spectral LADAR, a shortwave infrared laboratory demonstrator system constructed as a proof-of-concept prototype, and the experimental results obtained by the prototype when imaging scenes at standoff ranges of 45 meters. The resultant point cloud voxels are spectrally classified into a number of material categories, which enhances object and feature recognition. Experimental results demonstrate the physical-level combination of active backscatter spectroscopy and range-resolved sensing to produce images with a level of complexity, detail, and accuracy that is not obtainable with data-level registration and fusion of conventional imaging spectroscopy and LADAR. The capabilities of Spectral LADAR are expected to be useful in a range of applications, such as biomedical imaging and agriculture, but particularly when applied as a sensor in unmanned ground vehicle navigation. Applications to autonomous mobile robotics are the principal motivators of this study, and are specifically addressed.
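The "independently range unmixed" property can be illustrated with a toy single-band waveform: each spectral channel runs its own peak detection, so two returns at different ranges each keep their own reflectance sample in that band, and assembling the per-band amplitudes at one range yields that return's spectrum. The sampling interval, ranges, and amplitudes below are invented for illustration.

```python
import numpy as np

def detect_returns(waveform, dt, c=3e8, threshold=0.5):
    """Find local maxima above `threshold` in one band's time waveform and
    convert each time of flight to a range (two-way path)."""
    w = waveform
    peaks = [i for i in range(1, len(w) - 1)
             if w[i] > threshold and w[i] >= w[i - 1] and w[i] > w[i + 1]]
    return [(i * dt * c / 2.0, float(w[i])) for i in peaks]

# Hypothetical band: one pulse, two returns (foliage at 30 m, target at 45 m)
dt = 1e-9                          # 1 ns sampling -> 0.15 m range bins
i1, i2 = 200, 300                  # bins corresponding to 30 m and 45 m
w = np.zeros(400)
w[i1] = 0.8                        # foliage reflectance in this band
w[i2] = 0.6                        # target reflectance in this band
returns = detect_returns(w, dt)    # two (range, amplitude) pairs
```

Running the same detection on all 25 channels and grouping amplitudes by range is what separates the foliage spectrum from the target spectrum even though both were illuminated by a single panchromatic pulse.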

    Workshop on Advanced Technologies for Planetary Instruments, part 1

    This volume contains papers presented at the Workshop on Advanced Technologies for Planetary Instruments, held 28-30 Apr. 1993. The meeting was conceived in response to new challenges facing NASA's robotic solar system exploration program. Over the past several years, SDIO has sponsored a significant technology development program aimed, in part, at the production of instruments with these characteristics. The workshop provided an opportunity for specialists from the planetary science and DoD communities to establish contacts, to explore common technical ground in an open forum, and, more specifically, to discuss the applicability of SDIO's technology base to planetary science instruments.

    Cognitively-Engineered Multisensor Data Fusion Systems for Military Applications

    The fusion of imagery from multiple sensors is a field of research that has been gaining prominence in the scientific community in recent years. The technical aspects of combining multisensory information have been and are being studied extensively. However, the cognitive aspects of multisensor data fusion have not received as much attention. Prior research in the field of cognitive engineering has shown that the cognitive aspects of any human-machine system should be taken into consideration in order to achieve systems that are both safe and useful. The goal of this research was to model how humans interpret multisensory data, and to evaluate the value of a cognitively-engineered multisensor data fusion system as an effective, time-saving means of presenting information in high-stress situations. Specifically, this research used principles from cognitive engineering to design, implement, and evaluate a multisensor data fusion system for pilots in high-stress situations. Two preliminary studies were performed, and concurrent protocol analysis was conducted to determine how humans interpret and mentally fuse information from multiple sensors in both low- and high-stress environments. This information was used to develop a model of human processing of information from multiple data sources. This model was then implemented in the development of algorithms for fusing imagery from several disparate sensors (visible and infrared). The model and the system as a whole were empirically evaluated in an experiment with fighter pilots in a simulated combat environment. The results show that the model is an accurate depiction of how humans interpret information from multiple disparate sensors, and that the algorithms show promise for assisting fighter pilots in quicker and more accurate target identification.

    Super Resolution Image Enhancement for a Flash Lidar: Back Projection Method

    In this paper a new image processing technique for flash LIDAR data is presented as a potential tool to enable safe and precise spacecraft landings in future robotic or crewed lunar and planetary missions. Flash LIDARs can generate, in real time, range data that can be interpreted as a three-dimensional (3-D) image and transformed into a corresponding digital elevation map (DEM). The NASA Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is capitalizing on this new technology by developing, testing and analyzing flash LIDARs to detect hazardous terrain features such as craters, rocks, and slopes during the descent phase of spacecraft landings. Using a flash LIDAR for this application looks very promising; however, through theoretical and simulation analysis the ALHAT team has determined that a single frame, or mosaic, of flash LIDAR data may not be sufficient to build a landing site DEM with acceptable spatial resolution, precision, and size, or, for a mosaic, in acceptable time, to meet current system requirements. One way to overcome this potential limitation is to enhance the flash LIDAR output images. We propose a new super-resolution algorithm applicable to flash LIDAR range data that creates a DEM with sufficient accuracy, precision and size to meet current ALHAT requirements. The performance of our super-resolution algorithm is analyzed by processing data generated during a series of simulation runs by a high-fidelity model of a flash LIDAR imaging a high-resolution synthetic lunar elevation map. The flash LIDAR model is attached to a simulated spacecraft by a gimbal that points the LIDAR at a target landing site. For each simulation run, a sequence of flash LIDAR frames is recorded and processed as the spacecraft descends toward the landing site. Each run has a different trajectory profile with varying LIDAR look angles of the terrain. We process the output LIDAR frames using our super-resolution algorithm, and the results show that the achieved level of accuracy and precision of the super-resolution-generated landing site DEM is more than adequate for detecting hazardous terrain features and identifying safe areas.

    Multi-wavelength, multi-beam, photonic based sensor for object discrimination and positioning

    Over the last decade, substantial research efforts have been dedicated to the development of advanced laser scanning systems for discrimination in perimeter security, defence, agriculture, transportation, surveying and geosciences. Military forces, in particular, have already started employing laser scanning technologies for projectile guidance; surveillance; satellite and missile tracking; and target discrimination and recognition. However, laser scanning is a relatively new security technology, even though it has previously been utilized for a wide variety of civil and military applications. Terrestrial laser scanning has found new use as an active optical sensor for indoor and outdoor perimeter security. A laser scanning technique with moving parts was tested by the British Home Office - Police Scientific Development Branch (PSDB) in 2004; it was found that laser scanning can detect humans at 30 m range and vehicles at 80 m range with low false alarm rates. However, laser scanning with moving parts is much more sensitive to vibration than a multi-beam stationary-optic approach: mirror-device scanners are slow, bulky and expensive, and, being inherently mechanical, they wear out as a result of acceleration, cause deflection errors and require regular calibration. Multi-wavelength laser scanning represents a potential evolution from object detection to object identification and classification, where detailed features of objects and materials are discriminated by measuring their reflectance characteristics at specific wavelengths and matching them with their spectral reflectance curves. With the recent advances in the development of high-speed sensors and high-speed data processors, the implementation of multi-wavelength laser scanners for object identification has now become feasible. A two-wavelength photonic-based sensor for object discrimination has recently been reported, based on the use of an optical cavity for generating a laser spot array and maintaining adequate overlap between tapped collimated laser beams of different wavelengths over a long optical path. While this approach is capable of discriminating between objects of different colours, its main drawback is the limited number of security-related objects that can be discriminated. This thesis proposes and demonstrates the concept of a novel photonic-based multi-wavelength sensor for object identification and position finding. The sensor employs a laser combination module for input wavelength signal multiplexing and beam overlapping, a custom-made curved optical cavity for multi-beam spot generation through internal beam reflection and transmission, and a high-speed imager for scattered-reflectance spectral measurements. Experimental results show that five different laser wavelengths, namely 473 nm, 532 nm, 635 nm, 670 nm and 785 nm, are necessary for discriminating various intruding objects of interest through spectral reflectance and slope measurements. Various objects were selected to demonstrate the proof of concept. We also demonstrate that the object position (coordinates) can be determined using the triangulation method, which is based on the projection of laser spots along known angles onto intruding objects and the measurement of their reflectance spectra using an image sensor. Experimental results demonstrate the ability of the multi-wavelength spectral reflectance sensor to simultaneously discriminate between different objects and predict their positions over a 6 m range with an accuracy exceeding 92%. A novel optical design is used to provide additional transverse laser-beam scanning for the identification of camouflage materials. A camouflage material with complex patterns within a single sample was chosen to illustrate the discrimination capability of the sensor; it was successfully detected and discriminated from other objects over a 6 m range by scanning the laser beam spots along the transverse direction. By using more wavelengths at optimised points in the spectrum, where different objects show different optical characteristics, better discrimination can be accomplished.
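The triangulation step mentioned above can be sketched as a plane-geometry calculation; the emitter layout and angle convention below are assumptions for illustration, not the thesis' exact optical geometry.

```python
import math

def triangulate(baseline, angle_a_deg, angle_b_deg):
    """Intersect two bearing rays measured from emitters at (0, 0) and
    (baseline, 0); angles are taken from the baseline toward the object."""
    ta = math.tan(math.radians(angle_a_deg))
    tb = math.tan(math.radians(angle_b_deg))
    # Ray A: y = x * ta;  ray B: y = (baseline - x) * tb
    x = baseline * tb / (ta + tb)
    y = x * ta
    return x, y

# Object placed 0.5 m along and 0.5 m out from a 1 m baseline:
x, y = triangulate(1.0, 45.0, 45.0)    # -> (0.5, 0.5) up to rounding
```

In the sensor described above, the known projection angles of the laser spots play the role of the bearing angles, and the image sensor's detection of each spot's reflectance fixes which ray pair to intersect.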