
    Small-kernel image restoration

    The goal of image restoration is to remove degradations that are introduced during image acquisition and display. Although image restoration is a difficult task that requires considerable computation, in many applications the processing must be performed significantly faster than is possible with traditional algorithms implemented on conventional serial architectures. As demonstrated in this dissertation, digital image restoration can be efficiently implemented by convolving an image with a small kernel. Small-kernel convolution is a local operation that requires relatively little processing and can easily be implemented in parallel. A small-kernel technique must trade some effectiveness for efficiency, but if the kernel values are well chosen, small-kernel restoration can be very effective.

    This dissertation develops a small-kernel image restoration algorithm that minimizes expected mean-square restoration error. The derivation of the mean-square-optimal small kernel parallels that of the Wiener filter, but accounts for explicit spatial constraints on the kernel. The development is thorough and rigorous, yet conceptually straightforward: the mean-square-optimal kernel is conditioned only on a comprehensive end-to-end model of the imaging process and on the spatial constraints imposed on the kernel. The end-to-end digital imaging system model accounts for the scene, acquisition blur, sampling, noise, and display reconstruction. The determination of kernel values is directly conditioned on the specific size and shape of the kernel. Experiments presented in this dissertation demonstrate that small-kernel image restoration requires significantly less computation than a state-of-the-art implementation of the Wiener filter, yet the optimal small kernel yields comparable restored images.

    The mean-square-optimal small-kernel algorithm, like most other image restoration algorithms, requires a characterization of the image acquisition device (i.e., an estimate of the device's point spread function or optical transfer function). This dissertation describes an original method for accurately determining this characterization. The method extends the traditional knife-edge technique to explicitly deal with the fundamental sampled-system considerations of aliasing and sample/scene phase. Results for both simulated and real imaging systems demonstrate the accuracy of the method.
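    The restoration step the abstract describes reduces to a single two-dimensional convolution with a small, spatially constrained kernel. The Python sketch below illustrates only that step under stated assumptions: the 3x3 kernel values are hypothetical placeholders for illustration, not the dissertation's mean-square-optimal values, which would be derived offline from the end-to-end imaging-system model.

    import numpy as np
    from scipy.signal import convolve2d

    def restore(image, kernel):
        # Restoration is one local convolution with the small kernel;
        # 'same' keeps the output size, 'symm' boundary limits edge artifacts.
        return convolve2d(image, kernel, mode="same", boundary="symm")

    # Hypothetical 3x3 sharpening-style kernel standing in for the
    # mean-square-optimal values derived in the dissertation.
    kernel = np.array([[ 0.0, -0.1,  0.0],
                       [-0.1,  1.4, -0.1],
                       [ 0.0, -0.1,  0.0]])

    degraded = np.random.rand(256, 256)   # stand-in for an acquired, degraded image
    restored = restore(degraded, kernel)

    Because each output pixel depends only on a small neighborhood, the same operation maps naturally onto parallel hardware, which is the efficiency argument made above.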

    Studying Turbulence Using Numerical Simulation Databases. 5: Proceedings of the 1994 Summer Program

    Direct numerical simulation databases were used to study turbulence physics and modeling issues at the fifth Summer Program of the Center for Turbulence Research. The largest group, comprising more than half of the participants, was the Turbulent Reacting Flows and Combustion group. The remaining participants were divided among three groups: Fundamentals, Modeling & LES, and Rotating Turbulence. For the first time in the CTR Summer Programs, participants included engineers from the U.S. aerospace industry. They were exposed to a variety of turbulence problems and were able to incorporate the models developed at CTR into their company codes. They also encountered new ideas on turbulence prediction, methods that already appear to have had an impact on the capabilities of their laboratories. Such interactions among practitioners in government, academia, and industry are the most meaningful way of transferring technology.