Acceleration of k-Nearest Neighbor and SRAD Algorithms Using Intel FPGA SDK for OpenCL
Field Programmable Gate Arrays (FPGAs) have been widely used for accelerating machine learning algorithms. However, the high design cost and time of implementing FPGA-based accelerators with traditional HDL-based design methodologies have discouraged users from designing them. In recent years, a new CAD tool, the Intel FPGA SDK for OpenCL (IFSO), has allowed fast and efficient design of FPGA-based hardware accelerators from a high-level specification such as OpenCL, so that even software engineers with basic hardware design knowledge can design FPGA-based accelerators. In this thesis, IFSO has been used to explore FPGA acceleration of the k-Nearest-Neighbour (kNN) algorithm and of Speckle Reducing Anisotropic Diffusion (SRAD) simulation. kNN is a popular algorithm in machine learning. Bitonic sorting and radix sorting algorithms were used in the kNN algorithm to check whether they provide any performance improvement. Acceleration of SRAD simulation was also explored. The experimental results obtained for these algorithms from FPGA-based acceleration were compared with a state-of-the-art CPU implementation. The optimized algorithms were implemented on two different FPGAs (Intel Stratix A7 and Intel Arria 10 GX). Experimental results show that the FPGA-based accelerators provided similar or better execution times (up to 80X faster) and better power efficiency (a 75% reduction in power consumption) than a traditional platform such as a workstation based on two Intel Xeon E5-2620 series processors (each with 6 cores and running at 2.4 GHz).
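As a rough, CPU-side illustration of the bitonic sorting network such a kNN kernel can use to rank neighbour distances (the actual IFSO kernels are written in OpenCL; the function names here are mine, not the thesis's):

```python
def bitonic_sort(values, ascending=True):
    """Sort a power-of-two-length list with the bitonic network.

    On an FPGA the compare-exchange stages of each merge run in
    parallel; this Python sketch only illustrates the data movement.
    """
    n = len(values)
    if n <= 1:
        return list(values)
    half = n // 2
    # Build a bitonic sequence: first half ascending, second descending.
    first = bitonic_sort(values[:half], True)
    second = bitonic_sort(values[half:], False)
    return _bitonic_merge(first + second, ascending)


def _bitonic_merge(values, ascending):
    """Merge a bitonic sequence into a monotone one."""
    n = len(values)
    if n <= 1:
        return list(values)
    half = n // 2
    values = list(values)
    for i in range(half):
        a, b = values[i], values[i + half]
        if (a > b) == ascending:           # compare-exchange stage
            values[i], values[i + half] = b, a
    return (_bitonic_merge(values[:half], ascending)
            + _bitonic_merge(values[half:], ascending))
```

For kNN, the distances to all training points would be sorted this way and the first k entries kept; the fixed, data-independent comparison pattern is what makes the network attractive for hardware pipelining, at the cost of requiring a power-of-two input length.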
A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes
QR bar codes are prototypical images for which part of the image is a priori
known (required patterns). Open source bar code readers, such as ZBar, are
readily available. We exploit both these facts to provide and assess purely
regularization-based methods for blind deblurring of QR bar codes in the
presence of noise.

Comment: 14 pages, 19 figures (with a total of 57 subfigures), 1 table; v3: previously missing reference [35] added.
Parametric Level-sets Enhanced To Improve Reconstruction (PaLEnTIR)
In this paper, we consider the restoration and reconstruction of piecewise
constant objects in two and three dimensions using PaLEnTIR, a significantly
enhanced Parametric level set (PaLS) model relative to the current
state-of-the-art. The primary contribution of this paper is a new PaLS
formulation which requires only a single level set function to recover a scene
with piecewise constant objects possessing multiple unknown contrasts. Our
model offers distinct advantages over current approaches to the multi-contrast,
multi-object problem, all of which require multiple level sets and explicit
estimation of the contrast magnitudes. Given upper and lower bounds on the
contrast, our approach is able to recover objects with any distribution of
contrasts and eliminates the need to know either the number of contrasts in a
given scene or their values. We provide an iterative process for finding these
space-varying contrast limits. Relative to most PaLS methods which employ
radial basis functions (RBFs), our model makes use of non-isotropic basis
functions, thereby expanding the class of shapes that a PaLS model of a given
complexity can approximate. Finally, PaLEnTIR improves the conditioning of the
Jacobian matrix required as part of the parameter identification process, and
consequently accelerates the optimization methods, by controlling the magnitude
of the PaLS expansion coefficients, by fixing the centers of the basis
functions, and through the uniqueness of the parametric-to-image mapping
provided by the new parameterization. We demonstrate the performance of the
new approach using both 2D and 3D variants of X-ray computed tomography,
diffuse optical tomography (DOT), denoising, and deconvolution problems.
Applications to experimental sparse-CT data and to simulated data with
different types of noise further validate the proposed method.

Comment: 31 pages, 56 figures
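As a toy illustration of the single-level-set idea, here is a classic PaLS-style level set built from isotropic Gaussian bumps; the region where the weighted bump sum exceeds the level value is the recovered object (PaLEnTIR itself uses non-isotropic basis functions and a new parameterization; the names and the level value here are mine):

```python
import math


def pals_value(x, y, centers, weights, width=1.0, level=0.5):
    """Parametric level-set value at (x, y): a weighted sum of isotropic
    Gaussian bumps minus the level c; the object is where this is > 0."""
    bump = sum(w * math.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                            / (2.0 * width ** 2))
               for w, (cx, cy) in zip(weights, centers))
    return bump - level


def shape_mask(nx, ny, centers, weights, width=1.0):
    """Rasterize the super-level set on an nx-by-ny integer grid."""
    return [[pals_value(x, y, centers, weights, width) > 0
             for x in range(nx)]
            for y in range(ny)]
```

In a reconstruction loop, the centers, weights, and widths would be the unknowns updated to fit the measured data; the sketch only evaluates a fixed parameterization.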
Multiscale bilateral filtering for improving image quality in digital breast tomosynthesis
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/135115/1/mp3283.pd
Enhancement of Historical Printed Document Images by Combining Total Variation Regularization and Non-Local Means Filtering
This paper proposes a novel method for document enhancement which combines two recent powerful noise-reduction steps. The first step is based on the total variation framework. It flattens background grey-levels and produces an intermediate image where background noise is considerably reduced. This image is used as a mask to produce an image with a cleaner background while keeping character details. The second step is applied to the cleaner image and consists of a filter based on non-local means: character edges are smoothed by searching for similar patch images in pixel neighborhoods. The document images to be enhanced are real historical printed documents from several periods which include several defects in their background and on character edges. These defects result from scanning, paper aging and bleed-through. The proposed method enhances document images by combining the total variation and the non-local means techniques in order to improve OCR recognition. The method is shown to be more powerful than either technique used alone and than other enhancement methods.
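The TV step above operates on 2D document images; as a simplified 1D illustration of the same principle, here is gradient descent on a smoothed total-variation objective (the parameters, smoothing constant, and function names are mine, not the paper's):

```python
import math


def tv_denoise_1d(signal, lam=0.2, step=0.1, iters=300, eps=1e-2):
    """Minimize 0.5*sum (u - f)^2 + lam * sum |u[i+1] - u[i]| by gradient
    descent, with |d| smoothed to sqrt(d^2 + eps) so it is differentiable.
    Flattens small background wiggles while keeping large jumps (edges)."""
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - signal[i] for i in range(n)]     # fidelity term
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)              # d/dd of smoothed |d|
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u
```

The same objective in 2D (with neighbor differences in both directions) gives the background-flattening behavior the paper exploits before the non-local means pass.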
Sparse and Redundant Representations for Inverse Problems and Recognition
Sparse and redundant representation of data enables the
description of signals as linear combinations of a few atoms from
a dictionary. In this dissertation, we study applications of
sparse and redundant representations in inverse problems and
object recognition. Furthermore, we propose two novel imaging
modalities based on the recently introduced theory of Compressed
Sensing (CS).
This dissertation consists of four major parts. In the first part
of the dissertation, we study a new type of deconvolution
algorithm that is based on estimating the image from a shearlet
decomposition. Shearlets provide a multi-directional and
multi-scale decomposition that has been mathematically shown to
represent distributed discontinuities such as edges better than
traditional wavelets. We develop a deconvolution algorithm that
allows the approximate inversion operator to be controlled
on a multi-scale and multi-directional basis. Furthermore, we
develop a method for the automatic determination of the threshold
values for the noise shrinkage for each scale and direction
without explicit knowledge of the noise variance using a
generalized cross validation method.
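A minimal sketch of the threshold-selection idea described above, assuming soft thresholding of the coefficients in one scale/direction band and a Jansen-style generalized cross-validation score (the function names and the candidate-grid search are mine, not the dissertation's):

```python
def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (soft thresholding)."""
    return [c - t if c > t else (c + t if c < -t else 0.0) for c in coeffs]


def gcv_score(coeffs, t):
    """GCV(t) = (1/n)*||c - c_t||^2 / (n0/n)^2, where n0 counts the
    coefficients zeroed at threshold t; no noise variance is needed."""
    n = len(coeffs)
    thr = soft_threshold(coeffs, t)
    resid = sum((c - s) ** 2 for c, s in zip(coeffs, thr)) / n
    n_zero = sum(1 for s in thr if s == 0.0)
    if n_zero == 0:
        return float("inf")
    return resid / (n_zero / n) ** 2


def pick_threshold(coeffs, candidates):
    """Choose the candidate threshold minimizing the GCV score."""
    return min(candidates, key=lambda t: gcv_score(coeffs, t))
```

Running this per scale and per direction yields a separate data-driven threshold for each band, mirroring the multi-scale, multi-directional control described above.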
In the second part of the dissertation, we study a reconstruction
method that recovers highly undersampled images assumed to have a
sparse representation in a gradient domain by using partial
measurement samples that are collected in the Fourier domain. Our
method makes use of a robust generalized Poisson solver that
greatly aids in achieving a significantly improved performance
over similar proposed methods. We demonstrate by experiments
that this new technique works with either random or restricted
sampling scenarios more flexibly than its competitors.
In the third part of the dissertation, we introduce a novel
Synthetic Aperture Radar (SAR) imaging modality which can provide
a high resolution map of the spatial distribution of targets and
terrain using a significantly reduced number of needed transmitted
and/or received electromagnetic waveforms. We demonstrate that
this new imaging scheme, requires no new hardware components and
allows the aperture to be compressed. Also, it
presents many new applications and advantages which include strong
resistance to countermesasures and interception, imaging much
wider swaths and reduced on-board storage requirements.
The last part of the dissertation deals with object recognition
based on learning dictionaries for simultaneous sparse signal
approximations and feature extraction. A dictionary is learned
for each object class from the given training examples by
minimizing the representation error under a sparseness constraint. A
novel test image is then projected onto the span of the atoms in
each learned dictionary. The residual vectors along with the
coefficients are then used for recognition. Applications to
illumination-robust face recognition and automatic target
recognition are presented.
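The classification rule described above (project the test image onto each class dictionary's span, then compare residuals) can be sketched as follows, assuming for simplicity that each class dictionary's atoms are orthonormal so the projection reduces to inner products (in the dissertation the codes instead come from a sparse approximation over a learned dictionary):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def residual_norm(y, atoms):
    """Distance from y to the span of one class's (orthonormal) atoms."""
    proj = [0.0] * len(y)
    for a in atoms:
        c = dot(a, y)                                   # projection coefficient
        proj = [p + c * ai for p, ai in zip(proj, a)]
    r = [yi - pi for yi, pi in zip(y, proj)]
    return dot(r, r) ** 0.5


def classify(y, class_dicts):
    """Assign y to the class whose dictionary leaves the smallest residual."""
    return min(class_dicts, key=lambda k: residual_norm(y, class_dicts[k]))
```

The residual comparison is what makes the scheme robust: a test image is well represented only by the dictionary trained on its own class.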
A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images
Speckle is a granular disturbance, usually modeled as a multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. The advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space varying pdf models, as opposed to scale varying models, are promoted. Promising methods following non-Bayesian approaches, like nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance tradeoff of the different methods and, on the other, the effectiveness of solutions purposely designed for SAR heterogeneity and not fully developed speckle. Eventually, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
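A minimal sketch of the fully developed multiplicative speckle model and of the homomorphic pipeline the tutorial discusses, assuming an L-look gamma-distributed intensity fade with unit mean (the function names and the choice of filter are mine, not the paper's):

```python
import math
import random


def add_speckle(image, looks=4, seed=0):
    """Multiplicative speckle: scale each intensity pixel by a unit-mean
    gamma variate (shape L, scale 1/L), the fully developed L-look model."""
    rng = random.Random(seed)
    return [pix * rng.gammavariate(looks, 1.0 / looks) for pix in image]


def homomorphic(image, filt):
    """Homomorphic despeckling skeleton: the log turns multiplicative
    noise additive, `filt` denoises in the log domain, and exp maps back
    (a step known to bias the mean, one drawback the tutorial points out)."""
    logs = [math.log(p) for p in image]
    return [math.exp(v) for v in filt(logs)]
```

Any additive-noise filter (a moving average, a wavelet shrinkage, or a TV step) can be plugged in as `filt`; the later, non-homomorphic methods reviewed in the paper avoid the log/exp round trip altogether.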