Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. In spite of achieving a
certain level of development, image deblurring, especially the blind case, is
limited in its success by complex application conditions that make the blur
kernel hard to obtain and often spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.

Comment: 53 pages, 17 figures
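As a concrete illustration of the shared objective above (inferring a latent sharp image x from a blurry observation y = k * x + n), the following is a minimal non-blind, spatially invariant sketch using Wiener deconvolution, one classical instance of the families the survey covers. The kernel, image sizes, and noise-to-signal ratio are illustrative assumptions, not the survey's method.

```python
import numpy as np

def wiener_deblur(blurry, kernel, nsr=1e-6):
    """Non-blind, spatially invariant deblurring with the Wiener filter.

    Assumes the standard blur model y = k (*) x + n with circular
    convolution and a known kernel k; nsr is the assumed
    noise-to-signal power ratio that regularizes the inversion."""
    K = np.fft.fft2(kernel, s=blurry.shape)        # kernel transfer function
    Y = np.fft.fft2(blurry)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + nsr)    # regularized inverse filter
    return np.real(np.fft.ifft2(X))

# Toy example: blur a random image with a 3x3 box kernel, then invert.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0
blurry = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                              np.fft.fft2(kernel, s=sharp.shape)))
restored = wiener_deblur(blurry, kernel)
```

Blind deblurring, the harder case discussed in the review, would additionally have to estimate `kernel` from `blurry` itself.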
Fast Fiber Orientation Estimation in Diffusion MRI from kq-Space Sampling and Anatomical Priors
High spatio-angular resolution diffusion MRI (dMRI) has been shown to provide
accurate identification of complex fiber configurations, albeit at the cost of
long acquisition times. We propose a method to recover intra-voxel fiber
configurations at high spatio-angular resolution relying on a kq-space
under-sampling scheme to enable accelerated acquisitions. The inverse problem
for reconstruction of the fiber orientation distribution (FOD) is regularized
by a structured sparsity prior promoting simultaneously voxelwise sparsity and
spatial smoothness of fiber orientation. Prior knowledge of the spatial
distribution of white matter, gray matter and cerebrospinal fluid is also
assumed. A minimization problem is formulated and solved via a forward-backward
convex optimization algorithmic structure. Simulations and real data analysis
suggest that accurate FOD mapping can be achieved from severe kq-space
under-sampling regimes, potentially enabling high spatio-angular dMRI in the
clinical setting.

Comment: 10 pages, 5 figures, Supplementary Material
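The forward-backward structure mentioned above alternates a gradient step on the data-fidelity term with a proximal step on the prior. A minimal sketch with a plain l1 prior (whose proximal operator is soft-thresholding) as a simplified stand-in for the paper's structured sparsity prior; the sensing matrix and signal here are synthetic:

```python
import numpy as np

def forward_backward(A, y, lam=0.5, n_iter=1000):
    """Forward-backward splitting for min 0.5*||Ax - y||^2 + lam*||x||_1.

    The forward step is a gradient step on the data-fidelity term; the
    backward step is the proximal operator of the l1 prior, i.e.
    soft-thresholding. A structured-sparsity prior would plug a
    different proximal operator into the same loop."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const of grad
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))      # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox)
    return x

# Toy under-sampling example: recover a 3-sparse vector from 60 of 100 dims.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [2.0, -1.5, 1.0]
x_hat = forward_backward(A, A @ x_true)
```

The FOD reconstruction in the paper operates on kq-space data and adds spatial-smoothness coupling, but the optimization skeleton is the same.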
Mixture Modeling and Outlier Detection in Microarray Data Analysis
Microarray technology has become a dynamic tool in gene expression analysis
because it allows for the simultaneous measurement of thousands of gene expressions.
Uniqueness in experimental units and microarray data platforms, coupled with how
gene expressions are obtained, makes the field open for interesting research questions.
In this dissertation, we present our investigations of two independent studies related
to microarray data analysis.
First, we study a recent platform in biology and bioinformatics that compares
the quality of genetic information from exfoliated colonocytes in fecal matter with
genetic material from mucosa cells within the colon. Using the intraclass correlation
coefficient (ICC) as a measure of reproducibility, we assess the reliability of density
estimation obtained from preliminary analysis of fecal and mucosa data sets. Numerical findings clearly show that the distribution comprises two components.
For measurements between 0 and 1, it is natural to assume that the data points are
from a beta-mixture distribution. We explore whether ICC values should be modeled
with a beta mixture or transformed first and fit with a normal mixture. We find that
the use of a mixture of normals on the inverse-probit transformed scale is less sensitive to model mis-specification, whereas the alternative can lead to biased conclusions. By
using the normal mixture approach to compare the ICC distributions of fecal and
mucosa samples, we observe the quality of reproducible genes in fecal array data to
be comparable with that in mucosa arrays.
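The transform-then-fit approach described above can be sketched as follows: map ICC values from (0,1) to the real line with the normal quantile (probit) transform, then fit a two-component normal mixture by EM. The data below are synthetic beta draws standing in for real ICC values, and the plain EM updates are an illustrative sketch rather than the dissertation's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def em_normal_mixture(x, n_iter=200):
    """EM for a two-component 1-D normal mixture (plain sketch)."""
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = w * norm.pdf(x[:, None], mu, sd)    # E-step: weighted densities
        r = dens / dens.sum(axis=1, keepdims=True) # responsibilities
        nk = r.sum(axis=0)                         # M-step: weighted moments
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

# Hypothetical ICC values: two groups of genes with low and high
# reproducibility, standing in for real fecal/mucosa data.
rng = np.random.default_rng(2)
icc = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 300)])
z = norm.ppf(icc)               # map (0,1) to the real line (probit scale)
w, mu, sd = em_normal_mixture(z)
```

The alternative considered in the text, fitting a beta mixture directly on the (0,1) scale, replaces the normal densities above with beta densities and drops the transform.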
For microarray data, within-gene variance estimation is often challenging due
to the high frequency of low-replication studies. Several methodologies have been
developed to strengthen variance terms by borrowing information across genes. However, even with such accommodations, variance estimates may be inflated by the presence of
outliers. For our second study, we propose a robust modification of optimal shrinkage variance estimation to improve outlier detection. In order to increase power, we
suggest grouping standardized data so that information shared across genes is similar
in distribution. Simulation studies and analysis of real colon cancer microarray data
reveal that our methodology provides a technique that is insensitive to outliers, free of distributional assumptions, effective for small sample sizes, and data adaptive.
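A hedged sketch of the second study's idea: borrow strength across genes by shrinking a robust gene-wise scale estimate toward a pooled value, then flag observations with large robust z-scores. The fixed shrinkage weight, MAD-based scale, and cutoff below are simplifying assumptions; the dissertation derives an optimal shrinkage variance estimator instead.

```python
import numpy as np

def robust_shrunk_sd(data, alpha=0.5):
    """Gene-wise robust scale (MAD-based), shrunk toward the pooled
    median across genes -- a simplified stand-in for the optimal
    shrinkage variance estimator modified in the dissertation."""
    med = np.median(data, axis=1, keepdims=True)
    mad = np.median(np.abs(data - med), axis=1)
    s2 = (1.4826 * mad) ** 2            # MAD -> variance (normal consistency)
    s2_shrunk = (1 - alpha) * s2 + alpha * np.median(s2)
    return np.sqrt(s2_shrunk)

def flag_outliers(data, cutoff=5.0, alpha=0.5):
    """Flag entries whose robust z-score (centered at the gene median,
    scaled by the shrunk robust sd) exceeds cutoff."""
    center = np.median(data, axis=1, keepdims=True)
    scale = robust_shrunk_sd(data, alpha)[:, None]
    return np.abs(data - center) / scale > cutoff

rng = np.random.default_rng(3)
data = rng.normal(size=(100, 4))       # 100 genes x 4 replicates
data[10, 2] = 12.0                     # planted gross outlier in gene 10
mask = flag_outliers(data)
```

Using the median and MAD keeps the outlier itself from inflating the very scale estimate used to detect it, which is the failure mode of a plain sample variance.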
Active Wavelength Selection for Chemical Identification Using Tunable Spectroscopy
Spectrometers are the cornerstone of analytical chemistry. Recent advances in micro-optics manufacturing provide lightweight and portable alternatives to traditional spectrometers. In this dissertation, we developed a spectrometer based on Fabry-Perot interferometers (FPIs). An FPI is a tunable optical filter that can only scan one wavelength at a time. However, compared to traditional counterparts such as FTIR (Fourier-transform infrared) spectrometers, FPIs provide lower resolution and a lower signal-to-noise ratio (SNR). Wavelength selection can help alleviate these drawbacks. Eliminating uninformative wavelengths not only speeds up the sensing process but also helps improve accuracy by avoiding nonlinearity and noise. Traditional wavelength selection algorithms follow a training-validation process, and thus they are only optimal for the target analyte. However, for chemical identification, the identities are unknown.
To address the above issue, this dissertation proposes active sensing algorithms that select wavelengths online while sensing. These algorithms are able to generate analyte-dependent wavelengths. We envision this algorithm deployed on a portable chemical gas platform that has low-cost sensors and limited computation resources. We develop three algorithms focusing on three different aspects of the chemical identification problem.
First, we consider the problem of single chemical identification. We formulate the problem as a typical classification problem where each chemical is considered as a distinct class. We use Bayesian risk as the utility function for wavelength selection, which calculates the misclassification cost between classes (chemicals), and we select the wavelength with the maximum reduction in the risk. We evaluate this approach on both synthesized and experimental data. The results suggest that active sensing outperforms the passive method, especially in a noisy environment.
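The selection rule for the single-chemical case can be sketched as follows, assuming known reference spectra, Gaussian measurement noise, and a 0/1 misclassification cost. The toy spectra and all names are hypothetical, and the risk is estimated by Monte Carlo rather than by the dissertation's exact computation.

```python
import numpy as np
from scipy.stats import norm

def expected_bayes_risk(prior, spectra, wl, sigma, rng, n_samples=200):
    """Monte-Carlo estimate of the Bayes risk (0/1 misclassification cost)
    remaining after measuring wavelength wl, averaged over the predictive
    distribution of the measurement."""
    risks = []
    for _ in range(n_samples):
        c = rng.choice(len(prior), p=prior)            # simulate the true chemical
        y = spectra[c, wl] + sigma * rng.standard_normal()
        post = prior * norm.pdf(y, spectra[:, wl], sigma)
        post /= post.sum()
        risks.append(1.0 - post.max())                 # risk of the Bayes decision
    return float(np.mean(risks))

def select_wavelength(prior, spectra, sigma, rng):
    """Active step: measure where the expected remaining risk is lowest,
    i.e. where the risk reduction is greatest."""
    risks = [expected_bayes_risk(prior, spectra, wl, sigma, rng)
             for wl in range(spectra.shape[1])]
    return int(np.argmin(risks))

# Three hypothetical chemicals over five wavelengths; only wavelength 3
# distinguishes them, so an active selector should pick it first.
spectra = np.array([[1.0, 1.0, 1.0, 0.2, 1.0],
                    [1.0, 1.0, 1.0, 0.8, 1.0],
                    [1.0, 1.0, 1.0, 1.4, 1.0]])
prior = np.ones(3) / 3
best = select_wavelength(prior, spectra, sigma=0.1, rng=np.random.default_rng(4))
```

A passive method with a fixed scan order would spend measurements on the four uninformative wavelengths; the risk criterion skips them.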
Second, we consider the problem of chemical mixture identification. Since the number of potential chemical mixtures grows exponentially as the number of components increases, it is intractable to formulate all potential mixtures as classes. To circumvent combinatorial explosion, we developed a multi-modal non-negative least squares (MM-NNLS) method that searches multiple near-optimal solutions as an approximation of all the solutions. We project the solutions onto spectral space, calculate the variance of the projected spectra at each wavelength, and select the next wavelength using the variance as guidance. We validate this approach on synthesized and experimental data. The results suggest that active approaches are superior to their passive counterparts, especially when the condition number of the mixture grows larger (the analytes consist of more components, or the constituent spectra are very similar to each other).
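The variance-guided step above, given a set of near-optimal candidate solutions (however obtained), can be sketched as below. The two-chemical library and the candidate set are hypothetical, and the multi-modal search itself is not reproduced here.

```python
import numpy as np

def next_wavelength(candidates, library, measured_wls):
    """Variance-guided selection: project candidate mixing solutions into
    spectral space and pick the not-yet-measured wavelength where the
    predicted spectra disagree most.

    candidates : (n_solutions, n_chemicals) near-optimal NNLS solutions
    library    : (n_chemicals, n_wavelengths) reference spectra
    """
    predicted = candidates @ library            # project to spectral space
    var = predicted.var(axis=0)                 # disagreement per wavelength
    var[list(measured_wls)] = -np.inf           # never re-measure
    return int(np.argmax(var))

# Hypothetical library of two chemicals and two candidate mixtures that
# fit the measured wavelengths (0 and 1) equally well but differ elsewhere.
library = np.array([[1.0, 0.5, 0.0, 0.9],
                    [1.0, 0.5, 0.8, 0.2]])
candidates = np.array([[1.0, 0.0],    # pure chemical A
                       [0.0, 1.0]])   # pure chemical B
wl = next_wavelength(candidates, library, measured_wls={0, 1})
```

Measuring where the surviving solutions disagree is what prunes the solution set fastest; wavelengths where all candidates predict the same response carry no discriminating information.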
Third, we consider improving the computational speed of chemical mixture identification. MM-NNLS scales poorly as the chemical mixture becomes more complex. Therefore, we develop a wavelength selection method based on Gaussian process regression (GPR). GPR aims to reconstruct the spectrum rather than solve the mixture problem; thus, its computational cost is a function of the number of wavelengths. We evaluate the approach on both synthesized and experimental data. The results again demonstrate more accurate and robust performance in contrast to passive algorithms.
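A minimal sketch of the GPR-based rule: model the spectrum as a Gaussian process over wavelengths and pick the unmeasured wavelength with the largest posterior variance. The RBF kernel and its hyperparameters are illustrative assumptions; note that the GP posterior variance depends only on which wavelengths were measured, not on the measured values.

```python
import numpy as np

def gp_posterior_var(x_train, x_all, length=2.0, noise=1e-4):
    """Posterior variance of a unit-variance RBF Gaussian process at
    x_all, conditioned on measurements at x_train."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_all, x_train)
    sol = np.linalg.solve(K, Ks.T)                 # K^{-1} k(train, *)
    return 1.0 - np.einsum('ij,ji->i', Ks, sol)    # prior var minus reduction

def select_next(measured, n_wavelengths):
    """Active step: pick the unmeasured wavelength the GP is least
    certain about."""
    x_all = np.arange(n_wavelengths, dtype=float)
    var = gp_posterior_var(np.asarray(measured, dtype=float), x_all)
    var[list(measured)] = -np.inf                  # never re-measure
    return int(np.argmax(var))

nxt = select_next([0, 9], 10)   # endpoints measured -> pick near the middle
```

Each selection costs one GP solve in the number of measured wavelengths, which is what makes this cheaper than re-running a mixture search per candidate wavelength.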
Small-Sample Analysis and Inference of Networked Dependency Structures from Complex Genomic Data
This thesis deals with the statistical modeling and inference of genetic networks. Association structures and mutual influences are an important topic in systems biology. Gene expression data are high-dimensional, while sample sizes are small ("small n, large p"). The analysis of interaction structures by means of graphical models is therefore an ill-posed (inverse) problem whose solution requires regularization methods. I propose novel estimators for covariance structures and (partial) correlations, based either on resampling procedures or on shrinkage for variance reduction. In the latter method, the optimal shrinkage intensity is computed analytically. Compared with the classical sample covariance matrix, this estimator in particular exhibits desirable properties in terms of increased efficiency and smaller mean squared error; moreover, the resulting parameter estimates are always positive definite and well conditioned. To determine the network topology, the concept of graphical Gaussian models is used, which can represent both marginal and conditional independencies. A model selection method is presented that is based on a multiple testing procedure with control of the false discovery rate, in which the underlying null distribution is estimated adaptively. The proposed framework is computationally efficient and performs very well in comparison with competing methods, both in simulations and in applications to molecular data.
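The shrinkage idea described above (a convex combination of the sample covariance and a structured target, with an analytically computed optimal intensity) can be sketched in Python. The sketch below uses the Ledoit-Wolf scaled-identity target as a stand-in; the thesis's estimator shrinks (partial) correlations toward a different target with its own analytic formula.

```python
import numpy as np

def lw_shrink_cov(X):
    """Ledoit-Wolf-style shrinkage covariance: a convex combination of
    the sample covariance S and a scaled identity target mu*I, with the
    shrinkage intensity lam computed analytically rather than by
    cross-validation. The result is always positive definite and well
    conditioned, even when n << p."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    mu = np.trace(S) / p                          # scale of the identity target
    d2 = ((S - mu * np.eye(p)) ** 2).sum() / p    # dispersion of S around target
    b2_bar = sum(((np.outer(x, x) - S) ** 2).sum() for x in Xc) / (n ** 2 * p)
    b2 = min(b2_bar, d2)                          # estimation-error component
    lam = b2 / d2                                 # analytic optimal intensity
    return lam * mu * np.eye(p) + (1 - lam) * S, lam

# "small n, large p": 10 samples of a 50-dimensional vector, where the
# sample covariance alone would be singular.
rng = np.random.default_rng(5)
X = rng.standard_normal((10, 50))
sigma, lam = lw_shrink_cov(X)
```

With n = 10 and p = 50 the sample covariance has rank at most 9, yet the shrunk estimate is invertible, which is what makes downstream partial-correlation (graphical Gaussian model) estimation possible.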
Sparse density estimation on the multinomial manifold
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators.
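The minimum integrated square error objective above is a quadratic in the mixing coefficients, constrained to the probability simplex (the multinomial manifold). The sketch below solves it by projected gradient descent as a simpler stand-in for the paper's Riemannian trust-region algorithm; the kernel bandwidth and the plug-in data term are illustrative choices.

```python
import numpy as np

def gauss(d, s):
    """Gaussian kernel evaluated at distances d with bandwidth s."""
    return np.exp(-0.5 * (d / s) ** 2) / (s * np.sqrt(2 * np.pi))

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def sparse_kde_weights(x, h=0.3, n_iter=1000):
    """Minimum integrated-square-error weights for a kernel mixture on
    the simplex: ISE(beta) = beta'Q beta - 2 q'beta up to a constant,
    with Q the kernel-overlap matrix and q a plug-in data term."""
    D = x[:, None] - x[None, :]
    Q = gauss(D, np.sqrt(2.0) * h)      # integrals of products of kernel pairs
    q = gauss(D, h).mean(axis=1)        # plug-in estimate of the cross term
    beta = np.ones(len(x)) / len(x)
    step = 1.0 / np.linalg.norm(Q, 2)
    for _ in range(n_iter):
        beta = project_simplex(beta - step * (Q @ beta - q))
    return beta

rng = np.random.default_rng(6)
beta = sparse_kde_weights(rng.standard_normal(100))
```

The simplex projection drives many coefficients exactly to zero, which is the source of the sparsity; the RTR algorithm in the paper exploits the manifold's Riemannian geometry to reach such solutions more efficiently.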