Multiple passages of light through an absorption inhomogeneity in optical imaging of turbid media
The multiple passages of light through an absorption inhomogeneity of finite
size deep within a turbid medium are analyzed for optical imaging using the
``self-energy'' diagram. The nonlinear correction becomes more important for an
inhomogeneity of a larger size and with greater contrast in absorption with
respect to the host background. The nonlinear correction factor agrees well
with that from Monte Carlo simulations for CW light. The correction is about
in the near infrared for an absorption inhomogeneity with the typical
optical properties found in tissues and of size five times the transport
mean free path.
Guarantees of Total Variation Minimization for Signal Recovery
In this paper, we consider using total variation minimization to recover
signals whose gradients have a sparse support, from a small number of
measurements. We establish the proof for the performance guarantee of total
variation (TV) minimization in recovering \emph{one-dimensional} signals with
sparse gradient support. This partially answers the open problem of proving the
fidelity of total variation minimization in such a setting \cite{TVMulti}. In
particular, we have shown that the recoverable gradient sparsity can grow
linearly with the signal dimension when TV minimization is used. Recoverable
sparsity thresholds of TV minimization are explicitly computed for
1-dimensional signals using the Grassmann angle framework. We also extend our
results to TV minimization for multidimensional signals. Stability of
recovering the signal itself using 1-D TV minimization has also been
established through a property called the "almost Euclidean property for the
1-dimensional TV norm". We further give a lower bound on the number of random Gaussian
measurements for recovering 1-dimensional signal vectors with $n$ elements and
$k$-sparse gradients. Interestingly, the number of needed measurements is lower
bounded by $\Omega(\sqrt{nk})$, rather than the $O(k\log(n/k))$ bound
frequently appearing in recovering $k$-sparse signal vectors.
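To make the recovery problem concrete, here is a minimal sketch of 1-D TV minimization from random Gaussian measurements. It assumes the cvxpy package is available, and the sizes `N`, `K`, `M` are illustrative choices, not the thresholds computed in the paper.

```python
# Minimal sketch: recover a piecewise-constant signal (sparse gradient)
# from few Gaussian measurements by minimizing the 1-D TV norm.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
N, K, M = 200, 5, 80          # signal length, gradient sparsity, measurements

# Piecewise-constant ground truth: K jumps => K-sparse gradient.
x_true = np.zeros(N)
for j in rng.choice(N - 1, size=K, replace=False):
    x_true[j + 1:] += rng.normal()

A = rng.normal(size=(M, N)) / np.sqrt(M)   # random Gaussian measurement matrix
y = A @ x_true

x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.diff(x))),   # ||grad x||_1, i.e. TV
                  [A @ x == y])
prob.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```

With `M` comfortably above the sparsity threshold, the solver returns the ground truth up to numerical tolerance; shrinking `M` toward the lower bound makes recovery fail, which is the regime the paper's thresholds characterize.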
FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks
The rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits of negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective,
avoiding exponential functions to keep the computational cost low. Because it
adapts its parameter during training, FReLU does not rely on strict
assumptions and can be used easily in various network architectures. We evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performance on both plain and residual networks.
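Based only on the description in this abstract, a minimal PyTorch sketch of FReLU as a ReLU whose rectified point is shifted by a learnable parameter might look as follows; the per-channel parameterization and zero initialization are assumptions, not necessarily the authors' exact implementation.

```python
# Minimal sketch of FReLU: frelu(x) = relu(x) + b, where b is learnable.
# A negative b lets the activation output take negative values while the
# computation stays exponential-free and cheap.
import torch
import torch.nn as nn

class FReLU(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # One learnable shift per channel; zero init recovers plain ReLU.
        self.b = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumes NCHW input; per the abstract, b tends to converge to a
        # negative value when training succeeds.
        return torch.relu(x) + self.b.view(1, -1, 1, 1)

# Usage in a plain convolutional block:
layer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), FReLU(16))
out = layer(torch.randn(2, 3, 32, 32))
```

Since the shift is applied after rectification, the gradient path through `torch.relu` is unchanged and the extra cost is a single per-channel addition.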