Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-$\ell_2$ penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
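In the simplest $\ell_1$ case, several of the techniques listed above reduce to alternating a gradient step on the smooth empirical risk with the proximal operator of the penalty. A minimal sketch of that idea (not the paper's code; the function names and fixed step size are illustrative assumptions):

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1: shrinks each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, b, lam, step, n_iter=200):
    # Iterative shrinkage-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # 'step' should be at most 1 / ||A||_2^2 for convergence.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

The block-coordinate and reweighted-$\ell_2$ schemes surveyed in the paper replace the full proximal step with coordinate-wise or majorize-minimize updates of the same regularized objective.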
Ur-Priors, Conditionalization, and Ur-Prior Conditionalization
Conditionalization is a widely endorsed rule for updating one's beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule, one that appeals to what I'll call "ur-priors". But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I'll call "loaded evidential standards", are especially promising.
Weak observability estimates for 1-D wave equations with rough coefficients
In this paper we prove observability estimates for 1-dimensional wave
equations with non-Lipschitz coefficients. For coefficients in the Zygmund
class we prove a "classical" observability estimate, which extends the
well-known observability results in the energy space for $BV$ regularity. When
the coefficients are instead log-Lipschitz or log-Zygmund, we prove
observability estimates "with loss of derivatives": in order to estimate the
total energy of the solutions, we need measurements on some higher order
Sobolev norms at the boundary. This last result represents the intermediate
step between the Lipschitz (or Zygmund) case, when observability estimates hold
in the energy space, and the H\"older one, when they fail at any finite order
(as proved in \cite{Castro-Z}) due to an infinite loss of derivatives. We also
establish a sharp relation between the modulus of continuity of the
coefficients and the loss of derivatives in the observability estimates. In
particular, we will show that under any condition which is weaker than the
log-Lipschitz one (not only H\"older, for instance), observability estimates
fail in general, while in the intermediate instance between the Lipschitz and
the log-Lipschitz ones they can hold only admitting a loss of a finite number
of derivatives. This classification has an exact counterpart when considering
also the second variation of the coefficients.
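For orientation, an observability estimate "in the energy space" for such equations takes, in its standard form, the shape below (a sketch only; the admissible observation time $T$, the constant $C$, and the precise boundary conditions depend on the coefficient and are the subject of the paper):
\[
E(u) \;=\; \frac{1}{2}\int_0^1 \Big( |\partial_t u(t,x)|^2 + \omega(x)\,|\partial_x u(t,x)|^2 \Big)\,dx
\;\le\; C \int_0^T |\partial_x u(t,1)|^2\,dt ,
\]
for solutions of $\partial_t^2 u - \partial_x\big(\omega(x)\,\partial_x u\big) = 0$ on $(0,1)$. The "loss of derivatives" results described above replace the right-hand side by a higher-order Sobolev norm of the boundary trace.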
On some normability conditions
Various normability conditions of locally convex spaces (including the Vogt interpolation classes DN and Ω, as well as quasi- and asymptotic normability) are investigated. In particular, it is shown that on the class of Schwartz spaces the property of asymptotic normability coincides with the property GS, which is a natural generalization of Gelfand-Shilov countable normability (cf. [9, 25], where the metrizable case was treated). It is also observed that there are certain natural duality relationships among some of the normability conditions.
Frames of translates with prescribed fine structure in shift invariant spaces
For a given finitely generated shift invariant (FSI) subspace \cW\subset
L^2(\R^k) we obtain a simple criterion for the existence of shift generated
(SG) Bessel sequences E(\cF) induced by finite sequences of vectors \cF\in
\cW^n that have a prescribed fine structure, i.e., such that the norms of the
vectors in \cF and the spectra of S_{E(\cF)} are prescribed in each fiber of
\text{Spec}(\cW)\subset \T^k. We complement this result by developing an
analogue of the so-called sequences of eigensteps from finite frame theory in
the context of SG Bessel sequences, that allows for a detailed description of
all sequences with prescribed fine structure. Then, given we characterize the finite sequences \cF\in\cW^n such
that , for , and such that the fine spectral
structure of the shift generated Bessel sequences E(\cF) have minimal spread
(i.e. we show the existence of optimal SG Bessel sequences with prescribed
norms); in this context the spread of the spectra is measured in terms of the
convex potential P^\cW_\varphi induced by \cW and an arbitrary convex
function. Comment: 31 pages. Accepted in the JFA. This revised version has several
changes in the notation and the organization of the text. There exists text
overlap with arXiv:1508.01739 in the preliminary section
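For reference, the objects named in this abstract can be written out as follows (standard definitions, stated as a sketch; notation follows the abstract, with $T_\ell$ denoting translation by $\ell$):
\[
E(\cF) \;=\; \{\, T_\ell f_i \;:\; \ell \in \mathbb{Z}^k,\ 1 \le i \le n \,\},
\qquad
\sum_{i=1}^{n}\sum_{\ell\in\mathbb{Z}^k} \big|\langle f,\, T_\ell f_i\rangle\big|^2 \;\le\; B\,\|f\|^2
\quad \text{for all } f \in L^2(\mathbb{R}^k),
\]
the inequality being the Bessel condition, and $S_{E(\cF)} = \sum_{i,\ell} \langle \,\cdot\,, T_\ell f_i\rangle\, T_\ell f_i$ the associated frame operator, whose fiberwise spectra the "fine structure" above prescribes.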