Projection Onto A Simplex
This mini-paper presents a fast and simple algorithm to compute the
projection onto the canonical simplex. Utilizing Moreau's identity, we
show that the problem reduces to a univariate minimization whose objective
function is strictly convex and continuously differentiable. Moreover, we
show that there are at most n candidate minimizers, each computable
explicitly, and that the true minimizer is the unique candidate falling
into the correct interval.
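The reduction above can be illustrated with the classic sort-and-threshold method for simplex projection; this is a minimal sketch of the standard approach, not necessarily the mini-paper's exact algorithm:

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto the probability simplex
    {x : x >= 0, sum(x) = 1} via the standard sort-and-threshold
    scheme: find the threshold tau (the univariate unknown) so that
    max(y - tau, 0) sums to one."""
    u = np.sort(y)[::-1]                 # sort entries descending
    css = np.cumsum(u)                   # running sums of sorted entries
    ks = np.arange(1, len(y) + 1)
    k = ks[u * ks > css - 1.0][-1]       # largest k with a positive candidate
    tau = (css[k - 1] - 1.0) / k         # the unique valid threshold
    return np.maximum(y - tau, 0.0)
```

Each candidate threshold corresponds to one support size k, matching the "at most n candidates" observation; only one candidate lands in its valid interval.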
A computational model for functional mapping of genes that regulate intra-cellular circadian rhythms
BACKGROUND: Genes that control circadian rhythms in organisms have been recognized, but have been difficult to detect because circadian behavior comprises periodically dynamic traits and is sensitive to environmental changes. METHOD: We present a statistical model for mapping and characterizing specific genes or quantitative trait loci (QTL) that affect variations in rhythmic responses. This model integrates a system of differential equations into the framework for functional mapping, allowing hypotheses about the interplay between genetic actions and periodic rhythms to be tested. A simulation approach based on sustained circadian oscillations of the clock proteins and their mRNAs has been designed to test the statistical properties of the model. CONCLUSION: The model has significant implications for probing the molecular genetic mechanisms of rhythmic oscillations through the detection of clock QTL throughout the genome.
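Sustained mRNA/protein oscillations of the kind the simulation study relies on can be sketched with a Goodwin-type feedback loop; this is an illustrative toy model only, as the abstract does not specify the paper's actual ODE system:

```python
import numpy as np

def goodwin_step(state, dt, a=3.0, k=1.0, n=9):
    """One explicit-Euler step of a Goodwin-type oscillator, a standard
    toy model of a circadian feedback loop: mRNA m is transcribed under
    repression by r, translated into protein p, which activates r.
    Hill exponent n > 8 is needed for sustained oscillations."""
    m, p, r = state
    dm = a / (k + r**n) - m   # repressed transcription minus degradation
    dp = m - p                # translation minus degradation
    dr = p - r                # repressor activation minus degradation
    return (m + dt * dm, p + dt * dp, r + dt * dr)

# Simulate a trajectory of the clock components.
state = (0.1, 0.2, 0.3)
trajectory = []
for _ in range(2000):
    state = goodwin_step(state, 0.05)
    trajectory.append(state)
```

In a functional-mapping setting, QTL genotypes would parameterize such a system (e.g. different rate constants per genotype), and the likelihood would compare observed periodic trait curves against the genotype-specific solutions.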
Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction
We propose a general learning-based framework for solving nonsmooth and
nonconvex image reconstruction problems. We model the regularization function
as the composition of a norm and a smooth but nonconvex feature
mapping parametrized as a deep convolutional neural network. We develop a
provably convergent descent-type algorithm to solve the nonsmooth nonconvex
minimization problem by leveraging Nesterov's smoothing technique and the
idea of residual learning, and learn the network parameters such that the
outputs of the algorithm match the references in the training data. Our method
is versatile, since various modern network architectures can be employed in the
regularization, and the resulting network inherits the guaranteed convergence
of the algorithm. We also show that the proposed network is parameter-efficient
and that its performance compares favorably to state-of-the-art methods on a
variety of image reconstruction problems in practice.
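The smoothing-based descent idea can be sketched on a small convex instance: replace the nonsmooth |x| with its Nesterov (Huber-type) smoothing and run plain gradient descent. This is a minimal illustration under simplified assumptions; the paper's method additionally learns a nonconvex CNN-based regularizer, which is not modeled here:

```python
import numpy as np

def huber_grad(x, mu):
    """Gradient of the Nesterov smoothing of |x| with parameter mu:
    x/mu near the origin, sign(x) outside [-mu, mu]."""
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_l1_descent(A, b, lam=0.1, mu=1e-2, steps=500):
    """Gradient descent on 0.5*||Ax - b||^2 + lam * sum_i huber_mu(x_i),
    a smoothed surrogate of an l1-regularized least-squares problem.
    Step size 1/L uses a Lipschitz bound for the smoothed gradient."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2 + lam / mu   # Lipschitz constant bound
    for _ in range(steps):
        grad = A.T @ (A @ x - b) + lam * huber_grad(x, mu)
        x -= grad / L
    return x
```

Smoothing trades a controlled approximation error (of order mu) for differentiability, which is what makes a provably convergent descent scheme possible on the otherwise nonsmooth objective.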