OCReP: An Optimally Conditioned Regularization for Pseudoinversion Based Neural Training
In this paper we consider the training of single hidden layer neural networks
by pseudoinversion, which, in spite of its popularity, is sometimes affected by
numerical instability issues. Regularization is known to be effective in such
cases, so, within the framework of Tikhonov regularization, we introduce a
matrix reformulation of the problem that allows us to use the condition
number as a diagnostic tool for identifying instability. By imposing
well-conditioning requirements on the relevant matrices, our theoretical
analysis identifies an optimal value of the regularization parameter from the
standpoint of stability. We compare this value with the one derived by
cross-validation for overfitting control and optimization of generalization
performance. We test our method on both regression and classification tasks.
The proposed method is quite effective in terms of predictive accuracy, often
improving on the reference cases considered. Because the regularization
parameter is determined analytically, this approach dramatically reduces the
computational load required by many other techniques.

Comment: Published on Neural Network
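The core computation described above, a Tikhonov-regularized pseudoinverse solution for the output weights together with the condition number as a stability diagnostic, can be sketched in NumPy. This is a minimal illustration under assumptions of our own: the network sizes, the toy data, and the value of the regularization parameter `lam` are illustrative, not the paper's analytically optimal choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: n samples, d inputs, one target.
n, d, hidden = 200, 5, 30
X = rng.standard_normal((n, d))
t = np.sin(X.sum(axis=1, keepdims=True))

# Fixed random input-to-hidden weights, as in pseudoinversion-based training.
W_in = rng.standard_normal((d, hidden))
H = np.tanh(X @ W_in)            # hidden-layer output matrix (n x hidden)

lam = 1e-3                       # illustrative regularization parameter

# Tikhonov-regularized pseudoinverse solution for the output weights:
#   W_out = (H^T H + lam*I)^(-1) H^T t
A = H.T @ H + lam * np.eye(hidden)
W_out = np.linalg.solve(A, H.T @ t)

# Condition number as the stability diagnostic: regularization bounds
# cond(A) by (s_max**2 + lam) / lam, s_max = largest singular value of H.
print("cond(H^T H)         =", np.linalg.cond(H.T @ H))
print("cond(H^T H + lam*I) =", np.linalg.cond(A))
```

The second condition number is always strictly smaller than the first for `lam > 0`, which is why well-conditioning requirements translate into a lower bound on the regularization parameter.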
X-ray CT Image Reconstruction on Highly-Parallel Architectures.
Model-based image reconstruction (MBIR) methods for X-ray CT use accurate
models of the CT acquisition process, the statistics of the noisy measurements,
and noise-reducing regularization to produce potentially higher quality images
than conventional methods even at reduced X-ray doses. They do this by
minimizing a statistically motivated high-dimensional cost function; the high
computational cost of numerically minimizing this function has prevented MBIR
methods from reaching ubiquity in the clinic. Modern highly-parallel hardware
like graphics processing units (GPUs) may offer the computational resources to
solve these reconstruction problems quickly, but simply "translating" existing
algorithms designed for conventional processors to the GPU may not fully
exploit the hardware's capabilities.
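The MBIR cost described above, a statistically weighted data-fit term plus noise-reducing regularization, can be sketched in miniature. This is a toy sketch under stated assumptions: a small dense matrix stands in for the real (huge, sparse) CT forward projector, the regularizer is a simple quadratic roughness penalty, and plain gradient descent replaces the thesis's specialized algorithms; it only shows the structure of the cost being minimized.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny dense stand-in for the CT system matrix A (real MBIR uses a huge
# sparse forward projector, but the cost has the same structure).
m, n = 120, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
w = rng.uniform(0.5, 1.5, size=m)     # per-measurement statistical weights
y = A @ x_true + 0.01 * rng.standard_normal(m)

beta = 0.1                            # regularization strength (illustrative)
D = np.diff(np.eye(n), axis=0)        # first-difference (roughness) operator

def cost(x):
    # Weighted data fit + quadratic roughness penalty.
    r = A @ x - y
    return 0.5 * np.sum(w * r**2) + 0.5 * beta * np.sum((D @ x)**2)

def grad(x):
    return A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))

# Gradient descent with a Lipschitz step size (eigenvalues of D^T D are
# at most 4, hence the bound below), guaranteeing monotone descent.
L = np.max(w) * np.linalg.norm(A, 2)**2 + 4.0 * beta
x = np.zeros(n)
c0 = cost(x)
for _ in range(300):
    x -= grad(x) / L
print("cost: %.4f -> %.4f" % (c0, cost(x)))
```

In a clinical-scale problem `x` has hundreds of millions of voxels, which is why the per-iteration cost and the hardware mapping of the minimization dominate the engineering effort.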
This thesis proposes GPU-specialized image denoising and image reconstruction
algorithms. The proposed image denoising algorithm uses group coordinate
descent with carefully structured groups. The algorithm converges very
rapidly: in one experiment, it denoises a 65 megapixel image in about 1.5
seconds, while the popular Chambolle-Pock primal-dual algorithm running on the
same hardware takes over a minute to reach the same level of accuracy.
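The idea behind carefully structured groups can be illustrated on a 1-D quadratic denoising cost; this is a simplification of our own (the thesis targets 2-D/3-D images and a different regularizer). With even- and odd-indexed pixels as the two groups, every pixel's neighbors lie in the other group, so all exact coordinate minimizers within one group can be computed simultaneously, which is what makes the structure GPU-friendly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy 1-D signal and a quadratic denoising cost:
#   0.5*||x - y||^2 + 0.5*beta * sum_i (x[i+1] - x[i])^2
n, beta = 256, 2.0
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, n))
y = clean + 0.3 * rng.standard_normal(n)

def cost(x):
    return 0.5 * np.sum((x - y) ** 2) + 0.5 * beta * np.sum(np.diff(x) ** 2)

x = y.copy()
for _ in range(50):
    # Two groups: even-indexed and odd-indexed pixels. Within a group no
    # pixel neighbors another, so the whole group updates at once.
    for start in (0, 1):
        idx = np.arange(start, n, 2)
        has_l = idx > 0
        has_r = idx < n - 1
        left = np.where(has_l, x[np.maximum(idx - 1, 0)], 0.0)
        right = np.where(has_r, x[np.minimum(idx + 1, n - 1)], 0.0)
        k = has_l.astype(float) + has_r.astype(float)   # neighbor count
        # Exact minimizer over coordinate i with its neighbors held fixed.
        x[idx] = (y[idx] + beta * (left + right)) / (1.0 + beta * k)

print("cost(y) = %.2f, cost(x) = %.2f" % (cost(y), cost(x)))
```

Because each group update is an exact block minimization, the cost is monotonically nonincreasing, and the vectorized group update is exactly the kind of wide, independent work a GPU executes well.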
For X-ray CT reconstruction, this thesis uses duality and group coordinate
ascent to propose an alternative to the popular ordered subsets (OS) method.
Similar to OS, the proposed method can use a subset of the data to update the
image. Unlike OS, the proposed method is convergent. In one helical CT
reconstruction experiment, an implementation of the proposed algorithm using
one GPU converges more quickly than a state-of-the-art algorithm converges
using four GPUs. Using four GPUs, the proposed algorithm reaches near
convergence of a wide-cone axial reconstruction problem with over 220 million
voxels in only 11 minutes.

PhD. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies.
http://deepblue.lib.umich.edu/bitstream/2027.42/113551/1/mcgaffin_1.pd
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS: PART-I - Special Section on Blind Signal Processing and Its Applications
Blind signal processing (BSP) is currently one of the most exciting areas of research in statistical signal processing, unsupervised machine learning, neural networks, information theory, and exploratory data analysis. It has applications at the intersection of many science and engineering disciplines concerned with understanding and extracting useful information from data as diverse as neuronal activity and brain images, bioinformatics, communications, the World Wide Web, audio, video, and sensor signals. Because BSP is an interdisciplinary research area, the combination of ideas from the above disciplines is a developing avenue of research.
The aim of this Special Section is to offer an opportunity to link these techniques across different areas and to find effective ways of solving this problem. The Special Section constitutes a vehicle whereby researchers can present new studies of BSP, thus paving the way for future developments in the field.

We received 20 submissions for consideration. After the review process, we selected the following eight papers for publication, which span the approaches identified above: complex blind source extraction from noisy mixtures using second order statistics by Javidi et al.; complex independent component analysis by entropy bound minimization by Li et al.; real-time independent vector analysis for convolutive blind source separation by Kim; a nonnegative blind source separation model for binary test data by Schachtner et al.; a matrix pseudoinversion lemma and its application to block-based adaptive blind deconvolution for MIMO systems by Kohno et al.; colored subspace analysis: dimension reduction based on a signal’s autocorrelation structure by Theis; blind adaptive equalization of MIMO systems: new recursive algorithms and convergence analysis by Radenkovic et al.; and noise estimation using mean square cross prediction error for speech enhancement by Wang et al.

We hope that this Special Section will stimulate interest in the challenging area of BSP, and we look forward to seeing a growing body of high-quality research in this direction. We would like to express our gratitude to the authors of the papers in this Special Section, and also to the more than 60 reviewers who helped us evaluate the submissions.