
    UWB Pulse Radar for Human Imaging and Doppler Detection Applications

    We were motivated to develop new technologies capable of identifying human life through walls. Our goal is to pinpoint multiple people at a time, which could pay dividends during military operations, disaster rescue efforts, or assisted living. Such a system requires the combination of two features in one platform: see-through-wall localization and vital-signs Doppler detection. Ultra-wideband (UWB) radar technology has been used due to its distinct advantages, such as ultra-low power, fine imaging resolution, good through-wall penetration characteristics, and high performance in noisy environments. Beyond its wide use in imaging systems and ground-penetrating detection, UWB radar also targets Doppler sensing, precise positioning and tracking, and communications and measurement. A robust UWB pulse radar prototype has been developed and is presented here. The prototype integrates see-through imaging and Doppler detection features in one platform. Many challenges in implementing such a radar have been addressed extensively in this dissertation. Two Vivaldi antenna arrays have been designed and fabricated to cover 1.5-4.5 GHz and 1.5-10 GHz, respectively. A carrier-based pulse radar transceiver has been implemented to achieve a high dynamic range of 65 dB. A 100 GSPS data acquisition module is prototyped using an off-the-shelf field-programmable gate array (FPGA) and analog-to-digital converter (ADC), based on a low-cost equivalent-time sampling scheme. Ptolemy and transient simulation tools are used to accurately emulate the linear and nonlinear components in the comprehensive simulation platform, incorporating electromagnetic theory to account for through-wall effects and radar scattering. Imaging and Doppler detection examples are given to demonstrate that such a "biometrics-at-a-glance" capability would have a great impact on security, rescue, and biomedical applications in the future.
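
    The "100 GSPS" figure is an effective rate obtained by equivalent-time sampling of a repetitive pulse rather than a real-time conversion rate. The sketch below illustrates the general idea of interleaving repeated low-rate acquisitions taken at incrementally delayed trigger offsets; the real-time ADC rate, delay step, and function names are illustrative assumptions, not values from the dissertation.

        import numpy as np

        def equivalent_time_sample(acquire_trace, n_offsets, adc_rate=2.5e9, eff_rate=100e9):
            """Interleave repeated low-rate acquisitions of a repetitive pulse.

            acquire_trace(delay_s) returns a 1-D array sampled at adc_rate with the
            trigger shifted by delay_s. Acquisitions delayed by 0, dt, 2*dt, ... are
            interleaved so the composite waveform has sample spacing 1/eff_rate.
            """
            dt = 1.0 / eff_rate                          # effective-time step (10 ps here)
            if not np.isclose(n_offsets * dt, 1.0 / adc_rate):
                raise ValueError("delay offsets must tile one real-time sample period")
            traces = [acquire_trace(k * dt) for k in range(n_offsets)]
            n = min(len(t) for t in traces)
            composite = np.empty(n * n_offsets)
            for k, tr in enumerate(traces):              # slot each acquisition into its phase
                composite[k::n_offsets] = tr[:n]
            return composite

    With a 2.5 GSPS real-time ADC, for example, 40 delay offsets of 10 ps each would yield the 100 GSPS effective rate quoted above.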

    Real-time Three-dimensional Photoacoustic Imaging

    Photoacoustic (PA) imaging is a modality that combines the benefits of two prominent imaging techniques: the strong contrast inherent to optical imaging and the enhanced penetration depth and resolution of ultrasound imaging. PA waves are generated by illuminating a light-absorbing object with a short laser pulse. The deposited energy causes a pressure change in the object and, consequently, an outwardly propagating acoustic wave. Images are produced by using characteristic optical information contained within the waves. We have developed a 3D PA imaging system using a staring, sparse-array approach to produce real-time PA images. The technique employs a limited number of transducers, and 3D PA images are rendered by solving a linear system model. In this thesis, the development of an omni-directional PA source is introduced as a method to characterize the shift-variant system response. From this foundation, a technique is presented to generate an experimental estimate of the imaging operator for a PA system. This allows further characterization of the object space by two techniques: the crosstalk matrix and singular value decomposition. Finally, by coupling the results of the singular value decomposition analysis with the linear-system-model approach to image reconstruction, 3D PA images are produced at a frame rate of 0.7 Hz. This approach to 3D PA imaging has provided the foundation for 3D PA images to be produced at frame rates limited only by the laser repetition rate, as straightforward system improvements could see the imaging process reduced to tens of milliseconds.
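
    The reconstruction step amounts to inverting an experimentally estimated imaging operator. A minimal sketch of one plausible realization, using a truncated-SVD pseudoinverse precomputed once and applied to each measurement frame, is given below; the operator layout, truncation rule, and names are assumptions for illustration rather than the thesis implementation.

        import numpy as np

        def build_pseudoinverse(H, rel_cutoff=1e-2):
            """Truncated-SVD pseudoinverse of an experimentally estimated imaging
            operator H (rows = transducer time samples, columns = voxels)."""
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            keep = s > rel_cutoff * s[0]          # discard poorly conditioned modes
            return Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T

        def reconstruct(H_pinv, measurements, grid_shape):
            """Map one frame of sparse-array PA measurements to a 3-D image."""
            voxels = H_pinv @ measurements.ravel()
            return voxels.reshape(grid_shape)

    Precomputing the pseudoinverse moves the expensive SVD offline, so per-frame reconstruction reduces to a single matrix-vector product, consistent with frame rates limited mainly by the laser repetition rate.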

    Development and Characterization of a Chromotomosynthetic Hyperspectral Imaging System

    A chromotomosynthetic imaging (CTI) methodology has been proposed, based upon mathematical reconstruction of a set of 2-D spectral projections to collect high-speed (100 Hz) 3-D hyperspectral data cubes. The CTI system can simultaneously provide usable 3-D spatial and spectral information, provide high-frame-rate slitless 1-D spectra, and generate 2-D imagery equivalent to that collected with no prism in the optical system. The wavelength region where prism dispersion is highest (below 500 nm) is most sensitive to loss of spectral resolution in the presence of systematic error, while wavelengths above 600 nm suffer mostly from a shift of the spectral peaks. The quality of the spectral resolution in the reconstructed hyperspectral imagery was degraded by as much as a factor of two in the blue spectral region with less than 1° total angular error in mount alignment in the two axes of freedom. Even with no systematic error, spatial artifacts from the reconstruction limit the ability to provide adequate spectral imagery without specialized image reconstruction techniques as targets become more spatially and spectrally uniform.
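
    To make the reconstruction step concrete, the sketch below shows a simple shift-and-add (backprojection-style) estimate of the hyperspectral cube from prism-rotation projections, where each wavelength band is shifted back by its dispersion displacement at each prism angle; the array layout, linear-shift model, and function names are illustrative assumptions and not the CTI system's actual reconstruction algorithm.

        import numpy as np
        from scipy.ndimage import shift as nd_shift

        def shift_and_add_cube(projections, angles_rad, dispersion_px):
            """Back-project rotating-prism spectral projections into a
            hyperspectral cube of shape (rows, cols, n_bands).

            projections   : array (n_angles, rows, cols) of 2-D images
            angles_rad    : prism dispersion-axis angle for each projection
            dispersion_px : per-band displacement magnitude in pixels
            """
            n_angles, rows, cols = projections.shape
            cube = np.zeros((rows, cols, len(dispersion_px)))
            for b, d in enumerate(dispersion_px):
                for proj, theta in zip(projections, angles_rad):
                    # undo the wavelength-dependent shift for this prism angle
                    dy, dx = -d * np.sin(theta), -d * np.cos(theta)
                    cube[:, :, b] += nd_shift(proj, (dy, dx), order=1, mode="nearest")
                cube[:, :, b] /= n_angles
            return cube

    Because each band is an average of shifted projections, out-of-band energy is blurred rather than removed, which is one way to picture the reconstruction artifacts noted above for spatially and spectrally uniform targets.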

    Phase History Decomposition for Efficient Scatterer Classification in SAR Imagery

    A new theory and algorithm for scatterer classification in SAR imagery are presented. The automated classification process is operationally efficient compared to existing image segmentation methods requiring human supervision. The algorithm reconstructs coarse-resolution subimages from subdomains of the SAR phase history. It analyzes local peaks in the subimages to determine locations and geometric shapes of scatterers in the scene. Scatterer locations are indicated by the presence of a stable peak in all subimages for a given subaperture, while scatterer shapes are indicated by changes in pixel intensity. A new multi-peak model is developed from physical models of electromagnetic scattering to predict how pixel intensities behave for different scatterer shapes. The algorithm uses a least-squares classifier to match observed pixel behavior to the model. Classification accuracy improves with increasing fractional bandwidth and is subject to the high-frequency and wide-aperture approximations of the multi-peak model. For superior computational efficiency, an integrated fast SAR imaging technique is developed to combine the coarse-resolution subimages into a final SAR image having fine resolution. Finally, classification results are overlaid on the SAR image so that analysts can deduce the significance of the scatterer shape information within the image context.
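
    To make the subimage decomposition concrete, the sketch below partitions a rectangular phase history into frequency/aperture subdomains, forms a coarse-resolution image from each, and matches an observed per-subimage peak-intensity profile to shape templates with a least-squares rule. The block partitioning, template format, and names are illustrative assumptions rather than the dissertation's exact multi-peak model.

        import numpy as np

        def subimages_from_phase_history(phase_history, n_sub=4):
            """Split a rectangular SAR phase history (frequency samples x pulses)
            into n_sub x n_sub subdomains and form a coarse image from each."""
            K, N = phase_history.shape
            ks, ns = K // n_sub, N // n_sub
            imgs = np.empty((n_sub, n_sub, ks, ns), dtype=complex)
            for i in range(n_sub):
                for j in range(n_sub):
                    block = phase_history[i * ks:(i + 1) * ks, j * ns:(j + 1) * ns]
                    imgs[i, j] = np.fft.fftshift(np.fft.ifft2(block))
            return imgs

        def classify_peak(observed_profile, shape_templates):
            """Least-squares match of an observed peak-intensity profile (one value
            per subimage) against predicted profiles for candidate scatterer shapes."""
            residuals = {name: np.sum((observed_profile - model) ** 2)
                         for name, model in shape_templates.items()}
            return min(residuals, key=residuals.get)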

    Real-time programmable acoustooptic synthetic aperture radar processor

    The acoustooptic time-and-space integrating approach to real-time synthetic aperture radar (SAR) processing is reviewed, and novel hybrid optical/electronic techniques, which generalize the basic architecture, are described. The generalized architecture is programmable and has the ability to compensate continuously for range migration, changes in the parameters of the radar/target geometry, and anomalous platform motion. The new architecture is applicable to the spotlight mode of SAR, particularly for applications in which real-time onboard processing is required.

    Joint Image Reconstruction and Segmentation Using the Potts Model

    We propose a new algorithmic approach to the non-smooth and non-convex Potts problem (also called the piecewise-constant Mumford-Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method does not require a priori knowledge of the gray levels or of the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation. We focus on Radon data, where we in particular consider limited-data situations. For instance, our method is able to recover all segments of the Shepp-Logan phantom from only 77 angular views. We illustrate the practical applicability on a real PET dataset. As further applications, we consider spherical Radon data as well as blurred data.
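
    A standard building block in splitting approaches to the Potts functional is an exact solver for the univariate Potts problem, applied along rows, columns, or diagonals of the image. The dynamic program below is a generic version of that subproblem; the interface and penalty scaling are illustrative and not copied from the paper.

        import numpy as np

        def potts_1d(f, gamma):
            """Exact solver for  min_u  gamma * (#jumps of u) + sum_i (u_i - f_i)^2."""
            f = np.asarray(f, dtype=float)
            n = len(f)
            csum = np.concatenate(([0.0], np.cumsum(f)))
            csq = np.concatenate(([0.0], np.cumsum(f * f)))

            def dev(l, r):   # squared deviation of f[l..r] from its mean
                s, q, m = csum[r + 1] - csum[l], csq[r + 1] - csq[l], r - l + 1
                return q - s * s / m

            B = np.full(n + 1, np.inf)          # B[r] = optimal value for f[0..r-1]
            B[0] = -gamma                       # first segment pays no jump penalty
            jump = np.zeros(n + 1, dtype=int)
            for r in range(1, n + 1):
                for l in range(1, r + 1):       # last segment covers f[l-1 .. r-1]
                    val = B[l - 1] + gamma + dev(l - 1, r - 1)
                    if val < B[r]:
                        B[r], jump[r] = val, l - 1
            u, r = np.empty(n), n               # backtrack and fill segment means
            while r > 0:
                l = jump[r]
                u[l:r] = f[l:r].mean()
                r = l
            return u

    Since each 1-D subproblem is solved exactly and without a prescribed number of segments, neither the gray levels nor the segment count needs to be fixed in advance, mirroring the property stated in the abstract.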

    Characterization of multiphase flows integrating X-ray imaging and virtual reality

    Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and the difficulty of measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous types of noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolutions, X-ray imaging generally has poor temporal resolution. This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera to push the temporal limits of what is possible with current tube-source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis of this sensitivity shows that, while raw CT values are strongly affected by changes to acquisition parameters, if proper calibration techniques are used, acquisition parameters do not significantly influence the results for multiphase flow imaging. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error in the measurement, as compared to traditional CT; however, they allow for very high temporal resolutions for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier-camera setup, which was shown to be capable of imaging at a rate of at least 1000 FPS. While advances in measurement techniques for multiphase flows are one part of improving multiphase flow characterization, the challenge extends beyond measurement techniques. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes the comprehension of the phenomena. To this end, this work also presents a system for using the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. Furthermore, this system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications. Therefore, multiple visualization applications can be built that are optimized to specific types of data, but all leverage the same natural interaction. Finally, the research is concluded by proposing a system that integrates the improved X-ray measurements with the Kinect interaction system and a CAVE automatic virtual environment (CAVE) to present scientists with the multiphase flow measurements in an intuitive and inherently three-dimensional manner.
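
    One way to see why calibration removes much of the sensitivity to acquisition parameters is to normalize each flow scan against reference scans acquired with the same settings, so that systematic offsets cancel. The sketch below shows a common two-reference gas-holdup calibration for reconstructed CT slices; the array names and clipping choice are illustrative assumptions, not the dissertation's procedure.

        import numpy as np

        def gas_holdup_from_ct(ct_flow, ct_full_liquid, ct_empty):
            """Estimate local gas holdup from reconstructed CT slices using
            full-liquid and empty-column reference scans taken with the same
            acquisition parameters, so raw-value drift largely cancels."""
            num = ct_full_liquid - ct_flow
            den = ct_full_liquid - ct_empty
            eps = np.divide(num, den, out=np.zeros_like(num, dtype=float),
                            where=np.abs(den) > 1e-9)
            return np.clip(eps, 0.0, 1.0)       # holdup is a volume fraction in [0, 1]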