Laboratory Transferability of Optimally Shaped Laser Pulses for Quantum Control
Optimal control experiments can readily identify effective shaped laser
pulses, or "photonic reagents", that achieve a wide variety of objectives. For
many practical applications, an important criterion is that a particular
photonic reagent prescription still produce a good, if not optimal, target
objective yield when transferred to a different system or laboratory, even if
the same shaped pulse profile cannot be reproduced exactly. As a specific
example, we assess the potential for transferring optimal photonic reagents for
the objective of optimizing a ratio of photoproduct ions from a family of
halomethanes through three related experiments. First, applying the same set
of photonic reagents with systematically varying second- and third-order chirp
on both laser systems generated similar shapes of the associated control
landscape (i.e., relation between the objective yield and the variables
describing the photonic reagents). Second, optimal photonic reagents obtained
from the first laser system were found to still produce near optimal yields on
the second laser system. Third, transferring a collection of photonic reagents
optimized on the first laser system to the second laser system reproduced
systematic trends in photoproduct yields upon interaction with the homologous
chemical family. Despite inherent differences between the two systems,
successful and robust transfer of photonic reagents is demonstrated in the
above three circumstances. The ability to transfer photonic reagents from one
laser system to another is analogous to well-established utilitarian operating
procedures with traditional chemical reagents. The practical implications of
the present results for experimental quantum control are discussed.
A note on the asymptotics for the randomly stopped weighted sums
Let {X_i, i ≥ 1} be a sequence of identically distributed real-valued random variables with common distribution F_X; let {θ_i, i ≥ 1} be a sequence of identically distributed, nonnegative random variables that are nondegenerate at zero; and let τ be a positive integer-valued counting random variable. Assume that {X_i, i ≥ 1}, {θ_i, i ≥ 1} and τ are mutually independent. In the presence of heavy-tailed X_i's, this paper investigates the asymptotic tail behavior of the maximum of randomly weighted sums, M_τ = max_{1 ≤ k ≤ τ} Σ_{i=1}^{k} θ_i X_i, under the condition that {θ_i, i ≥ 1} satisfy a general dependence structure.
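The quantity M_τ studied above can be illustrated by Monte Carlo simulation. The sketch below is illustrative only and makes concrete distributional choices not specified in the abstract: Pareto-tailed X_i with random signs as the heavy-tailed steps, uniform θ_i as the nonnegative nondegenerate weights, and a geometric τ as the counting variable; the function name is hypothetical.

```python
import numpy as np

def simulate_M_tau(n_paths=10000, seed=0):
    """Draw samples of M_tau = max_{1<=k<=tau} sum_{i=1}^k theta_i * X_i.

    Assumed (illustrative) distributions:
      X_i   : symmetric heavy-tailed steps, |X_i| = 1 + Pareto(1.5)
      theta_i: nonnegative weights, Uniform(0.5, 1.5)
      tau   : Geometric(0.2), supported on {1, 2, ...}
    All three are drawn mutually independently, as in the stated setup.
    """
    rng = np.random.default_rng(seed)
    out = np.empty(n_paths)
    for j in range(n_paths):
        tau = rng.geometric(0.2)                  # counting variable tau >= 1
        X = rng.pareto(1.5, size=tau) + 1.0       # heavy-tailed magnitudes
        X *= rng.choice([-1.0, 1.0], size=tau)    # make the steps real-valued
        theta = rng.uniform(0.5, 1.5, size=tau)   # nonnegative weights
        S = np.cumsum(theta * X)                  # partial sums S_1, ..., S_tau
        out[j] = S.max()                          # M_tau over 1 <= k <= tau
    return out
```

For heavy-tailed X_i, the empirical tail of the simulated M_τ decays polynomially rather than exponentially, which is the regime the asymptotic analysis in the paper addresses.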
Smoothing Proximal Gradient Method for General Structured Sparse Learning
We study the problem of learning high dimensional regression models
regularized by a structured-sparsity-inducing penalty that encodes prior
structural information on either input or output sides. We consider two widely
adopted types of such penalties as our motivating examples: 1) overlapping
group lasso penalty, based on the l1/l2 mixed-norm penalty, and 2) graph-guided
fusion penalty. For both types of penalties, due to their non-separability,
developing an efficient optimization method has remained a challenging problem.
In this paper, we propose a general optimization approach, called smoothing
proximal gradient method, which can solve the structured sparse regression
problems with a smooth convex loss and a wide spectrum of
structured-sparsity-inducing penalties. Our approach is based on a general
smoothing technique of Nesterov. It achieves a faster convergence rate than the
standard first-order method for nonsmooth problems, the subgradient method, and is much more scalable than
the most widely used interior-point method. Numerical results are reported to
demonstrate the efficiency and scalability of the proposed method.
Comment: arXiv admin note: substantial text overlap with arXiv:1005.471
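The core idea can be sketched for a graph-guided fusion penalty λ‖Cβ‖₁, where each row of C encodes a difference β_i − β_j along a graph edge. Nesterov smoothing replaces the nonsmooth penalty by Ω_μ(β) = max_{‖α‖_∞ ≤ 1} αᵀCβ − (μ/2)‖α‖², whose gradient is λCᵀα* with α* = clip(Cβ/μ, −1, 1). The sketch below is a minimal plain gradient-descent illustration of this smoothing step, not the authors' accelerated algorithm; the function names, the squared-loss choice, and all parameter values are assumptions for illustration.

```python
import numpy as np

def smoothed_fusion_grad(beta, C, lam, mu):
    # Gradient of the Nesterov-smoothed penalty lam * ||C beta||_1:
    # the inner maximizer is alpha* = clip(C beta / mu, -1, 1),
    # giving gradient lam * C^T alpha*.
    alpha = np.clip(C @ beta / mu, -1.0, 1.0)
    return lam * (C.T @ alpha)

def smoothing_gradient_descent(X, y, C, lam=0.1, mu=0.1, n_iter=2000):
    """Minimize 0.5*||y - X beta||^2 + lam*Omega_mu(beta) by gradient descent.

    Illustrative sketch only: the method in the paper pairs this smoothed
    gradient with an accelerated proximal scheme; here we use a plain
    fixed-step gradient iteration for clarity.
    """
    p = X.shape[1]
    beta = np.zeros(p)
    # Lipschitz constant of the smoothed objective's gradient:
    # sigma_max(X)^2 from the loss plus lam*sigma_max(C)^2/mu from the penalty.
    L = np.linalg.norm(X, 2) ** 2 + lam * np.linalg.norm(C, 2) ** 2 / mu
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) + smoothed_fusion_grad(beta, C, lam, mu)
        beta -= grad / L
    return beta
```

Because the smoothed objective is differentiable with a known Lipschitz constant, standard first-order guarantees apply; shrinking μ trades approximation accuracy against step size, which is the tradeoff the smoothing proximal gradient analysis quantifies.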