4 research outputs found

    Efficient and Robust Image Restoration Using Multiple-Feature L2-Relaxed Sparse Analysis Priors

    No full text
    We propose a novel formulation of relaxed analysis-based sparsity in multiple dictionaries as a general type of prior for images, and apply it to Bayesian estimation in image restoration problems. Our formulation of an ℓ2-relaxed ℓ0 pseudo-norm prior allows for an especially simple maximum a posteriori estimation algorithm based on iterative marginal optimization, whose convergence we prove. We achieve a significant speedup over the direct (static) solution by using dynamically evolving parameters through the estimation loop. As an added heuristic twist, we fix the number of iterations in advance and then empirically optimize the involved parameters according to two performance benchmarks. The resulting constrained dynamic method is not only fast and effective but also highly robust and flexible. First, it provides an outstanding tradeoff between computational load and performance, both visually and in objective terms (mean square error and structural similarity), for a large variety of degradation tests, using the same set of parameter values for all tests. Second, the performance benchmark can easily be adapted to specific types of degradation, image classes, and even performance criteria. Third, it allows several dictionaries with complementary features to be used simultaneously. This unique combination makes ours a highly practical deconvolution method.
    Peer Reviewed
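The estimation loop the abstract describes, a MAP objective with a relaxed ℓ0-type analysis prior, a fixed number of iterations, and a parameter that evolves across iterations, can be sketched as follows. This is a minimal illustration only, not the paper's actual algorithm: the penalty rho(t) = t^2 / (t^2 + eps), the plain gradient-descent update, and every parameter schedule here are assumptions made for the example.

```python
import numpy as np

def restore(y, H, D, n_iters=50, lam=0.05, step=0.1, eps0=1.0, eps_decay=0.95):
    """Illustrative sketch: minimize ||H x - y||^2 + lam * sum(rho(D x))
    by gradient descent, where rho(t) = t^2 / (t^2 + eps) is a smooth
    relaxation of the l0 pseudo-norm and eps shrinks over a fixed number
    of iterations, echoing the dynamically evolving parameters in the loop."""
    x = y.astype(float).copy()
    eps = eps0
    for _ in range(n_iters):
        z = D @ x
        # d/dz of rho(z) = z^2 / (z^2 + eps)  is  2 * z * eps / (z^2 + eps)^2
        grad_prior = D.T @ (2.0 * z * eps / (z ** 2 + eps) ** 2)
        grad = 2.0 * H.T @ (H @ x - y) + lam * grad_prior
        x = x - step * grad
        eps *= eps_decay  # dynamically evolving relaxation parameter
    return x
```

With `H` the (here generic) degradation operator and `D` an analysis operator such as a finite-difference matrix, shrinking `eps` tightens the relaxation toward the ℓ0 pseudo-norm as the iterations proceed.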

    Outlier Detection for Mixed Model with Application to RNA-Seq Data

    Full text link
    Extracting messenger RNA (mRNA) molecules using oligo-dT probes targeting the Poly(A) tail is common in RNA-sequencing (RNA-seq) experiments. This approach, however, is limited when the specimen is profoundly degraded or formalin-fixed, such that either the majority of mRNAs have lost their Poly(A) tails or the oligo-dT probes do not anneal with the formalin-altered adenines. For this problem, a new protocol called capture RNA sequencing was developed using probes for target sequences, which gives unbiased estimates of RNA abundance even when the specimens are degraded. However, despite the effectiveness of capture sequencing, mRNA purification by the traditional Poly(A) protocol still underlies most reference libraries. A bridging mechanism that makes the two types of measurements comparable is needed for data integration and efficient use of information. In the first project, we developed an optimization algorithm that was later applied to outlier detection in a linear mixed model for data integration. In particular, we minimized the sum of truncated convex functions, which is often encountered in models with L0 penalties. The solution is exact in one-dimensional and two-dimensional spaces. For higher-dimensional problems, we applied the algorithm in a coordinate descent fashion. Although global optimality is compromised, this approach generates local solutions with much higher efficiency. In the second project, we investigated the differences between Poly(A) libraries and capture sequencing libraries. We showed that without conversion, directly merging the two types of measurements leads to biases in subsequent analyses. A practical solution was to use a linear mixed model to predict one type of measurement based on the other. The predicted values based on this approach have high correlations, low errors, and high efficiency compared with those based on the fixed-effects model.
Moreover, the procedure eliminates false positive findings and biases introduced by the technology differences between the two measurements. In the third project, we noted outlying observations and outlying random effects when fitting the mixed model. As they lead to the discovery of dysfunctional probes and batch effects, we developed an algorithm that screened for the outliers and provided a robust estimation. Specifically, we modified the mean-shift model with variable selection using L0 penalties, which was first introduced by Gannaz (2007), McCann and Welsch (2007), and She and Owen (2012). By incorporating the optimization method proposed in the first project, the algorithm became scalable and yielded exact solutions for low-dimensional problems. In particular, under the assumption of normality, there existed analytic expressions for the penalty parameters. In simulation studies, we showed that the proposed algorithm attained reliable outlier detection, delivered robust estimation, and achieved efficient computation.
    PhD, Biostatistics
    University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/147613/1/ltzuying_1.pd
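The first project's core primitive, minimizing a sum of truncated convex functions exactly in one dimension, can be sketched with truncated quadratics. This is an illustrative reconstruction, not the dissertation's code: on each interval between the breakpoints a_i ± sqrt(tau) the set of non-truncated terms is fixed, so the local minimizer is the mean of the active points clamped to that interval, and scanning all intervals gives the exact global minimum.

```python
import numpy as np

def min_sum_truncated_quadratics(a, tau):
    """Exact 1-D minimizer of f(x) = sum_i min((x - a_i)^2, tau).

    Illustrative sketch of the 'sum of truncated convex functions' idea;
    the dissertation's algorithm also covers 2-D and more general terms.
    """
    a = np.asarray(a, dtype=float)
    r = np.sqrt(tau)
    # breakpoints where a term switches between quadratic and truncated
    bps = np.sort(np.concatenate([a - r, a + r]))

    def f(x):
        return np.minimum((x - a) ** 2, tau).sum()

    best_x, best_f = a[0], f(a[0])
    # check one candidate inside every interval of the breakpoint arrangement
    for lo, hi in zip(bps[:-1], bps[1:]):
        mid = 0.5 * (lo + hi)
        active = np.abs(mid - a) <= r  # terms still quadratic on this interval
        if active.any():
            x = np.clip(a[active].mean(), lo, hi)
        else:
            x = mid  # all terms truncated: f is constant (= n * tau) here
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

For higher dimensions the abstract applies this 1-D solve coordinate-wise, trading global optimality for speed.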
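The mean-shift outlier model of the third project can be illustrated in its simplest fixed-effects form, in the spirit of She and Owen (2012): y = X beta + gamma + eps with an L0 penalty on gamma, solved by alternating least squares with hard thresholding of residuals. This sketch omits the random effects of the dissertation's mixed-model extension, and the function name and parameters are assumptions for the example.

```python
import numpy as np

def ipod_mean_shift(X, y, lam, n_iters=100):
    """Hard-thresholding sketch of the mean-shift outlier model
    y = X beta + gamma + eps with an L0 penalty on gamma.
    Observations whose residual exceeds lam are absorbed into gamma,
    so beta is estimated robustly from the remaining points."""
    gamma = np.zeros_like(y, dtype=float)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        # regression on the outlier-corrected response
        beta = np.linalg.lstsq(X, y - gamma, rcond=None)[0]
        r = y - X @ beta
        # hard threshold: large residuals are flagged as outliers
        gamma = np.where(np.abs(r) > lam, r, 0.0)
    return beta, gamma
```

Nonzero entries of `gamma` mark the screened outliers, which is how the abstract's dysfunctional probes and batch effects would surface.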