Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
We consider the problem of optimizing the sum of a smooth convex function and
a non-smooth convex function using proximal-gradient methods, where an error is
present in the calculation of the gradient of the smooth term or in the
proximity operator with respect to the non-smooth term. We show that both the
basic proximal-gradient method and the accelerated proximal-gradient method
achieve the same convergence rate as in the error-free case, provided that the
errors decrease at appropriate rates. Using these rates, we perform as well as
or better than a carefully chosen fixed error level on a set of structured
sparsity problems.
Comment: Neural Information Processing Systems (2011)
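To make the setting concrete, here is a minimal Python sketch of an inexact proximal-gradient iteration of the kind the abstract analyzes, applied to a lasso-type problem. The function names, the synthetic data, the step size, and the summably decaying error schedule are all illustrative assumptions, not the authors' code.

import numpy as np

def inexact_proximal_gradient(grad_f, prox_g, x0, step, n_iters,
                              err_level=lambda k: 0.0):
    # One inexact step: x <- prox_{step*g}(x - step*(grad f(x) + e_k)),
    # where e_k is a gradient error of magnitude err_level(k). The abstract's
    # claim: the error-free rate is preserved if err_level(k) decays fast enough.
    x = x0.copy()
    for k in range(n_iters):
        e = err_level(k) * np.random.randn(*x.shape)  # simulated gradient error
        x = prox_g(x - step * (grad_f(x) + e), step)
    return x

# Illustrative lasso instance: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1,
# whose proximity operator is soft-thresholding (all data here is synthetic).
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 20)), rng.standard_normal(40), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = ||A||_2^2
x_hat = inexact_proximal_gradient(grad_f, prox_g, np.zeros(20), step, 500,
                                  err_level=lambda k: 1.0 / (k + 1) ** 2)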
Minimizing Finite Sums with the Stochastic Average Gradient
We propose the stochastic average gradient (SAG) method for optimizing the
sum of a finite number of smooth convex functions. Like stochastic gradient
(SG) methods, the SAG method's iteration cost is independent of the number of
terms in the sum. However, by incorporating a memory of previous gradient
values, the SAG method achieves a faster convergence rate than black-box SG
methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in
general, and when the sum is strongly-convex the convergence rate is improved
from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for
p < 1. Further, in many cases the convergence rate of the new method
is also faster than black-box deterministic gradient methods, in terms of the
number of gradient evaluations. Numerical experiments indicate that the new
algorithm often dramatically outperforms existing SG and deterministic gradient
methods, and that the performance may be further improved through the use of
non-uniform sampling strategies.
Comment: Revision from the January 2015 submission. Major changes: updated
literature review and discussion of subsequent work, an additional lemma showing
the validity of one of the formulas, a somewhat simplified presentation of the
Lyapunov bound, inclusion of the code needed for checking proofs rather than the
polynomials generated by the code, and added error regions in the numerical
experiments.
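A minimal Python sketch of the SAG update as the abstract describes it: a running memory of the most recent gradient seen for each term, with each iteration touching one term but stepping along the average over all of them. The function names, the least-squares test problem, and the step-size heuristic below are assumptions for illustration, not the authors' implementation.

import numpy as np

def sag(grad_i, n, x0, step, n_iters, rng=None):
    # Stochastic average gradient: store the last gradient evaluated for each
    # of the n terms and step along their running average.
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    memory = np.zeros((n,) + x0.shape)   # last gradient seen per term
    g_sum = np.zeros_like(x0)            # running sum of the stored gradients
    for _ in range(n_iters):
        i = rng.integers(n)              # sample one term uniformly
        g = grad_i(i, x)
        g_sum += g - memory[i]           # swap in term i's fresh gradient
        memory[i] = g
        x -= step * g_sum / n            # step along the average gradient
    return x

# Example: least squares split into n per-row terms f_i(x) = 0.5*(a_i.x - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 10)), rng.standard_normal(100)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
step = 1.0 / (16 * np.max(np.sum(A * A, axis=1)))  # ~1/(16 L_max), a common SAG choice
x_hat = sag(grad_i, 100, np.zeros(10), step, 5000)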
Implementation strategies for hyperspectral unmixing using Bayesian source separation
Bayesian Positive Source Separation (BPSS) is a useful unsupervised approach
for hyperspectral data unmixing, where numerical non-negativity of spectra and
abundances has to be ensured, as in remote sensing. Moreover, it is sensible
to impose a sum-to-one (full additivity) constraint on the estimated source
abundances in each pixel. Even though non-negativity and full additivity are
two necessary properties to get physically interpretable results, the use of
BPSS algorithms has been so far limited by high computation time and large
memory requirements due to the Markov chain Monte Carlo calculations. An
implementation strategy which allows one to apply these algorithms to a full
hyperspectral image, as is typical in Earth and Planetary Science, is introduced.
The effects of pixel selection on the relevance of the estimated component
spectra and abundance maps, as well as on the computation times, are discussed.
For that purpose, two different datasets have been used: a synthetic one and a
real hyperspectral image from Mars.
Comment: 10 pages, 6 figures, submitted to IEEE Transactions on Geoscience and
Remote Sensing in the special issue on Hyperspectral Image and Signal
Processing (WHISPERS).
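As background, the linear mixing model underlying this kind of unmixing, together with the non-negativity and full-additivity constraints the abstract refers to, can be written as follows; the notation is generic and assumed here, not quoted from the paper.

% Each observed pixel spectrum x_p is modeled as a non-negative, sum-to-one
% combination of R source spectra s_r plus noise n_p:
x_p = \sum_{r=1}^{R} a_{p,r}\, s_r + n_p,
\qquad a_{p,r} \ge 0, \quad s_r \ge 0 \ \text{(non-negativity)},
\quad \sum_{r=1}^{R} a_{p,r} = 1 \ \text{(full additivity)}.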
VoodooFlash: authoring across physical and digital form
Design tools that integrate hardware and software components facilitate product design work across aspects of physical form and user interaction, but at the cost of requiring designers to work with tools other than the programming tools they are accustomed to. In this paper we introduce VoodooFlash, a tool designed to build on the widespread use of Flash while facilitating design work across physical and digital components. VoodooFlash extends the existing practice of authoring interactive applications in terms of arranging components on a virtual stage, and provides a physical stage on which controls can be arranged, linked to software components, and appropriated with other physical design materials.
Endeavors 2004-05
Research, discovery for Nebraskans; Tallow key to new cholesterol fighter; Exploring genetics of potential biological threat; Creating wearable corn - husks, that is; Tracking helps predict landslide sites; Assessing foot-and-mouth test kits; New mapping system tracks livestock diseases; Remote images could predict crop health; Exploring genetics behind obesity; Team seeks lower cost ways to reduce arsenic; Larger, softer kernels boost feed value; Alternative crops could aid Panhandle; Analysis examines impact of GM crops; Better understanding servant-leadership; Simulation tool aids corn growers; Devising ways to predict livestock odors; Local produce could help growers, chefs; Using corn for ethanol makes energy sense; Sensors should reveal soil differences; IDing safe levels of manure for growing crops
Direct photon production and flow at low transverse momenta in pp, p-Pb and Pb-Pb collisions
Low transverse momentum direct photon measurements have been carried out by
the ALICE experiment at the CERN LHC in small collision systems (pp at
$\sqrt{s} = 2.76$ and 8 TeV and p--Pb at $\sqrt{s_{\rm NN}} = 5.02$ TeV) as well
as in heavy-ion collisions (Pb--Pb at $\sqrt{s_{\rm NN}} = 2.76$ TeV). For the
first time, the multiplicity dependence of direct photon production was also
investigated in p--Pb collisions. Whereas no significant thermal photon signal
was observed in the small systems, an excess has been measured in central
Pb--Pb collisions in the low transverse momentum region of a few GeV/$c$. A
signal of prompt photon production at high transverse momentum consistent with
binary scaling has been observed in all collision systems, following NLO pQCD
predictions. Direct photon flow has been measured in central and semi-central
Pb--Pb collisions and found to be of similar size as the charged hadron and
decay photon flow.
Comment: Proceedings of Hard Probes 2018, 30 September - 5 October,
Aix-Les-Bains, France.
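For readers outside the field, the direct photon signal discussed above is conventionally extracted from the inclusive photon yield via the double ratio below; this is the standard definition, assumed here rather than quoted from the proceedings.

% Double ratio: inclusive photons over decay photons, each normalized to pi0,
% so that R_gamma > 1 signals a direct photon excess.
R_\gamma = \frac{(\gamma_{\rm inc}/\pi^0)_{\rm measured}}
                {(\gamma_{\rm decay}/\pi^0)_{\rm calculated}},
\qquad
\gamma_{\rm direct} = \left(1 - \frac{1}{R_\gamma}\right)\gamma_{\rm inc}.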
Light meson nuclear modification factor in p-Pb collisions over an unprecedented range with ALICE
Light neutral meson differential invariant cross section and nuclear
modification factor measurements have been carried out with the ALICE detector
at the CERN LHC in pp collisions at $\sqrt{s} = 8$ TeV and p--Pb collisions at
$\sqrt{s_{\rm NN}} = 8.16$ TeV. The analysis combines results from several
partially independent reconstruction techniques, where the $\pi^0$ and $\eta$
meson decay photons were detected with the electromagnetic calorimeter, EMCal,
the photon spectrometer, PHOS, or via reconstruction of $e^+e^-$ pairs from
photon conversions in the ALICE detector material using the central tracking
system. The neutral pion measurement, reaching a $p_{\rm T}$ of 200 GeV/$c$,
constitutes the highest-$p_{\rm T}$ identified particle spectrum measured to
date, while the $\eta$ meson is measured up to an unprecedented $p_{\rm T}$ of
50 GeV/$c$. The spectra are found to be generally overestimated by NLO pQCD
calculations. The nuclear modification factors $R_{\rm pPb}$ of both mesons
exhibit a suppression at low $p_{\rm T}$ which is stronger compared to previous
measurements at $\sqrt{s_{\rm NN}} = 5.02$ TeV and consistent with CGC and cold
nuclear matter energy loss calculations. At higher $p_{\rm T}$, $R_{\rm pPb}$
is consistent with unity and with theory predictions.
Comment: Proceedings of the 10th International Conference on Hard and
Electromagnetic Probes of High-Energy Nuclear Collisions, Hard Probes 2020.
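For reference, the nuclear modification factor quoted above compares the p--Pb spectrum to a binary-scaled pp reference; the definition below is the standard one and is assumed here, not quoted from the proceedings.

% R_pPb = 1 means the p-Pb spectrum is a superposition of independent
% nucleon-nucleon collisions (A_Pb = 208 is the lead mass number):
R_{\rm pPb}(p_{\rm T}) =
\frac{{\rm d}^2\sigma^{\rm pPb}/{\rm d}p_{\rm T}\,{\rm d}y}
     {A_{\rm Pb}\;{\rm d}^2\sigma^{\rm pp}/{\rm d}p_{\rm T}\,{\rm d}y}.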