
    Improving Multiple Surface Range Estimation of a 3-Dimensional FLASH LADAR in the Presence of Atmospheric Turbulence

    Laser Radar sensors can be designed to provide two-dimensional and three-dimensional (3-D) images of a scene from a single laser pulse. Currently, various data recording and presentation techniques are being developed for 3-D sensors. While the technology is still being proven, many applications are being explored and suggested. As technological advancements are coupled with enhanced signal processing algorithms, this technology may present exciting new military capabilities for sensor users. The goal of this work is to develop an algorithm that enhances the utility of 3-D Laser Radar sensors through accurate ranging to multiple surfaces per image pixel while minimizing the effects of diffraction. A new 3-D blind deconvolution algorithm makes it possible to realize numerous enhancements over both traditional Gaussian mixture modeling and single-surface range estimation. While traditional Gaussian mixture modeling can effectively model the received pulse, the pulse shape is likely altered by optical aberrations from the imaging system and from the medium through which it images. Simulation examples show that the multi-surface ranging algorithm derived in this work improves range estimation over standard Gaussian mixture modeling and frame-by-frame deconvolution by up to 89% and 85%, respectively.
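    As a rough illustration of the multi-surface ranging idea, the sketch below fits a two-component Gaussian mixture to a single pixel's range-resolved return and converts the recovered pulse arrival times to ranges. It is a minimal example assuming a shared pulse width and a simple time-of-flight conversion; the function names and initialization heuristics are illustrative and not taken from the dissertation.

```python
# Hypothetical sketch: fit a two-component Gaussian mixture to one pixel's
# range-resolved return and recover ranges to two surfaces within the pixel.
import numpy as np
from scipy.optimize import curve_fit

C = 3e8  # speed of light, m/s


def two_gaussian(t, a1, t1, a2, t2, sigma):
    """Two returned pulses modeled as Gaussians with a shared width."""
    return (a1 * np.exp(-0.5 * ((t - t1) / sigma) ** 2)
            + a2 * np.exp(-0.5 * ((t - t2) / sigma) ** 2))


def estimate_two_ranges(t, waveform, pulse_sigma):
    """Return estimated ranges (m) to two surfaces from one pixel's waveform."""
    # Crude initialization: strongest sample, plus a point well after it.
    t1_guess = t[np.argmax(waveform)]
    t2_guess = t1_guess + 4 * pulse_sigma
    p0 = [waveform.max(), t1_guess, 0.5 * waveform.max(), t2_guess, pulse_sigma]
    params, _ = curve_fit(two_gaussian, t, waveform, p0=p0, maxfev=5000)
    _, t1, _, t2, _ = params
    return sorted((C * t1 / 2, C * t2 / 2))  # time of flight -> range
```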

    Real-Time Quantum Noise Suppression In Very Low-Dose Fluoroscopy

    Fluoroscopy provides real-time X-ray screening of a patient's organs and of various radiopaque objects, which makes it an invaluable tool for many interventional procedures. For this reason, the number of fluoroscopy screenings has grown consistently over recent decades. However, this trend has raised concerns about the increase in X-ray exposure, as even low-dose procedures turned out to be less safe than previously assumed, demanding rigorous monitoring of the X-ray dose delivered to patients and to the exposed medical staff. In this context, the use of very low-dose protocols would be extremely beneficial. Nonetheless, such protocols produce very noisy images, which must be denoised in real time to support interventional procedures. Simple smoothing filters tend to produce blurring effects that undermine the visibility of object boundaries, which is essential for the human eye to understand the imaged scene. Therefore, some denoising strategies embed noise-statistics-based criteria to improve their denoising performance. This dissertation focuses on the Noise Variance Conditioned Average (NVCA) algorithm, which takes advantage of a priori knowledge of quantum noise statistics to reduce noise while preserving edges; it has already outperformed many state-of-the-art methods in denoising images corrupted by quantum noise, while also being suitable for real-time hardware implementation. Several issues that currently limit the use of very low-dose protocols in clinical practice are addressed, e.g. the evaluation of denoising algorithms in very low-dose conditions, the optimization of tuning parameters to obtain the best denoising performance, the design of an index that properly measures the quality of X-ray images, and the assessment of an a priori noise characterization approach that accounts for time-varying noise statistics due to changes in X-ray tube settings. An improved NVCA algorithm is also presented, along with its real-time hardware implementation on a Field Programmable Gate Array (FPGA). The novel algorithm provides more effective noise reduction, including for low-contrast moving objects, thus relaxing the trade-off between noise reduction and edge preservation, while further reducing hardware complexity so that logic resource usage remains low even on small FPGA platforms. The results presented in this dissertation provide the means for future studies aimed at embedding the NVCA algorithm in commercial fluoroscopic devices to accomplish real-time denoising of very low-dose X-ray images, which would foster their use in clinical practice.
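    The following is a simplified sketch of the conditioned-average idea behind NVCA, assuming quantum (Poisson-like) noise whose variance scales with the signal level and a 3x3 neighborhood; the exact decision rule, parameters, and hardware-oriented structure of the published NVCA algorithm may differ.

```python
# Simplified noise-variance-conditioned average (illustrative assumptions:
# quantum noise variance proportional to signal level, 3x3 neighborhood).
import numpy as np


def nvca_like_filter(img, gain=1.0, k=2.0):
    """Average only those neighbors that lie within k standard deviations
    of the center pixel; otherwise keep the center value (edges preserved)."""
    out = img.astype(float).copy()
    padded = np.pad(img.astype(float), 1, mode="edge")
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            center = float(img[r, c])
            # Quantum (Poisson-like) noise: variance scales with intensity.
            sigma = np.sqrt(gain * max(center, 1.0))
            mask = np.abs(window - center) <= k * sigma
            # Require at least one qualifying neighbor besides the center.
            out[r, c] = window[mask].mean() if mask.sum() > 1 else center
    return out
```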

    Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    Coherent imaging systems offer unique benefits to system operators in terms of resolving power, range gating, selective illumination, and utility for applications where passively illuminated targets have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity of the partially coherent light at the imaging detector. The estimator is initially constructed using a shift-invariant system model and is later extended to the case of a shift-variant optical system by adding a transfer function term that quantifies optical blur for wide fields of view and atmospheric conditions. The estimators are evaluated using both synthetically generated imagery and experimentally collected image data from an outdoor optical range. The research is extended to consider the effects of weighted frame averaging for the individual short-exposure frames collected by the imaging system. It was found that binary weighting of ensemble frames significantly increases spatial resolution.
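    The reported gain from binary frame weighting can be illustrated with the sketch below, which ranks short-exposure frames by a simple gradient-based sharpness proxy and averages only the best ones. The sharpness metric and the keep fraction are assumptions chosen for illustration, not the criteria used in the research.

```python
# Hypothetical sketch of binary-weighted frame averaging: keep only the
# sharpest short-exposure frames (weight 1) and discard the rest (weight 0).
import numpy as np


def sharpness(frame):
    """Simple sharpness proxy: mean squared gradient magnitude."""
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(gx ** 2 + gy ** 2)


def binary_weighted_average(frames, keep_fraction=0.2):
    """Average the top `keep_fraction` of frames ranked by sharpness."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(scores)[-n_keep:]
    return np.mean([frames[i] for i in best], axis=0)
```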

    Medical image enhancement

    Each image acquired from a medical imaging system is often part of a two-dimensional (2-D) image set that together presents a three-dimensional (3-D) object for diagnosis. Unfortunately, these images are sometimes of poor quality. Such degradations lead to an inadequate presentation of the object of interest, which can result in inaccurate image analysis. Blurring is considered a particularly serious problem; therefore, “deblurring” an image to obtain better quality is an important issue in medical image processing. In our research, the image is first decomposed, and contrast improvement is achieved by modifying the coefficients obtained from the decomposition. Small coefficient values represent subtle details and are amplified to improve the visibility of the corresponding details. The stronger image density variations make a major contribution to the overall dynamic range and have large coefficient values; these values can be reduced without much information loss.
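    The coefficient-modification step could look roughly like the sketch below, which assumes a wavelet decomposition (the abstract does not name the transform) and applies a power-law remapping that boosts small detail coefficients relative to large ones.

```python
# Sketch of contrast enhancement by coefficient modification, assuming a
# wavelet decomposition; the transform and exponent are illustrative choices.
import numpy as np
import pywt  # PyWavelets


def enhance(img, wavelet="db2", levels=3, p=0.6):
    """Boost small (subtle-detail) coefficients relative to large ones."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    new_details = []
    for level_bands in details:
        remapped = []
        for band in level_bands:
            scale = np.max(np.abs(band)) + 1e-9
            norm = band / scale
            # Power-law remapping (p < 1): amplifies small coefficients,
            # compresses large ones, reducing the overall dynamic range.
            remapped.append(np.sign(norm) * np.abs(norm) ** p * scale)
        new_details.append(tuple(remapped))
    return pywt.waverec2([approx] + new_details, wavelet)
```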

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al, 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Super-Resolution of Unmanned Airborne Vehicle Images with Maximum Fidelity Stochastic Restoration

    Super-resolution (SR) refers to reconstructing a single high-resolution (HR) image from a set of subsampled, blurred, and noisy low-resolution (LR) images. One may then envision a scenario where a set of LR images is acquired with sensors on a moving platform such as an unmanned airborne vehicle (UAV). Due to wind, the UAV may undergo altitude changes or rotational effects that distort the acquired as well as the processed images. The visual quality of the SR image is also affected by image acquisition degradations, the available number of LR images, and their relative positions. This dissertation seeks to develop a novel fast stochastic algorithm to reconstruct a single SR image from UAV-captured images in two steps. First, the UAV LR images are aligned to subpixel accuracy using a new hybrid registration algorithm. Second, the proposed approach develops a new fast stochastic minimum-square constrained Wiener restoration filter for SR reconstruction and restoration using a fully detailed continuous-discrete-continuous (CDC) model. A new parameter that accounts for LR image registration and fusion errors is added to the SR CDC model, in addition to multi-response restoration and reconstruction. Finally, to assess the visual quality of the resulting images, two figures of merit are introduced: information rate and maximum realizable fidelity. Experimental results show that quantitative assessment using the proposed figures coincided with the visual qualitative assessment. We evaluated our filter against other SR techniques and found its results to be competitive in terms of speed and visual quality.
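    A generic version of the two-step pipeline is sketched below: registered LR frames are fused onto an HR grid by shift-and-add, and the fused estimate is then restored with a frequency-domain Wiener filter. This is not the dissertation's CDC-model filter; the PSF, noise-to-signal ratio, and fusion scheme are illustrative assumptions.

```python
# Generic two-step SR sketch: shift-and-add fusion of registered LR frames,
# followed by a standard frequency-domain Wiener restoration.
import numpy as np


def shift_and_add(lr_frames, shifts, scale=2):
    """Place registered LR samples onto an HR grid (simple nearest placement)."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)


def wiener_restore(fused, psf, noise_to_signal=0.01):
    """Wiener deconvolution of the fused HR estimate.
    `psf` is assumed to be the same size as `fused` and centered."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(fused)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))
```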

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One targets the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly, and this process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments aimed at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
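    As a reference point for the optimizations discussed, the sketch below shows a minimal single-scattering ray marcher for a participating medium with Beer-Lambert attenuation and an early-exit optimization once the ray becomes nearly opaque. The `density` and `light` callables and all coefficients are illustrative placeholders, not the thesis's implementation.

```python
# Minimal single-scattering ray marcher for a participating medium
# (illustrative only; density, lighting and phase function are simplified).
import numpy as np


def ray_march(density, light, ray_origin, ray_dir, step=0.1, n_steps=128,
              sigma_a=0.5, sigma_s=1.0):
    """March along a ray, accumulating in-scattered light attenuated by
    Beer-Lambert transmittance; stops early once the ray is nearly opaque."""
    radiance = 0.0
    transmittance = 1.0
    pos = np.asarray(ray_origin, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    for _ in range(n_steps):
        rho = density(pos)                    # medium density at this sample
        sigma_t = rho * (sigma_a + sigma_s)   # extinction coefficient
        radiance += transmittance * rho * sigma_s * light(pos) * step
        transmittance *= np.exp(-sigma_t * step)
        if transmittance < 1e-3:              # optimization: early ray exit
            break
        pos += d * step
    return radiance
```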