Rerepresenting and Restructuring Domain Theories: A Constructive Induction Approach
Theory revision integrates inductive learning and background knowledge by
combining training examples with a coarse domain theory to produce a more
accurate theory. There are two challenges that theory revision and other
theory-guided systems face. First, a representation language appropriate for
the initial theory may be inappropriate for an improved theory. While the
original representation may concisely express the initial theory, a more
accurate theory forced to use that same representation may be bulky,
cumbersome, and difficult to reach. Second, a theory structure suitable for a
coarse domain theory may be insufficient for a fine-tuned theory. Systems that
produce only small, local changes to a theory have limited value for
accomplishing complex structural alterations that may be required.
Consequently, advanced theory-guided learning systems require flexible
representation and flexible structure. An analysis of various theory revision
systems and theory-guided learning systems reveals specific strengths and
weaknesses in terms of these two desired properties. Designed to capture the
underlying qualities of each system, a new system uses theory-guided
constructive induction. Experiments in three domains show improvement over
previous theory-guided systems. This leads to a study of the behavior,
limitations, and potential of theory-guided constructive induction.
Comment: See http://www.jair.org/ for an online appendix and other files
accompanying this article.
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
Regularization of ill-posed linear inverse problems via ℓ1 penalization
has been proposed for cases where the solution is known to be (almost) sparse.
One way to obtain the minimizer of such an ℓ1-penalized functional is via
an iterative soft-thresholding algorithm. We propose an alternative
implementation of ℓ1-constraints, using a gradient method with
projection on ℓ1-balls. The corresponding algorithm again uses iterative
soft-thresholding, now with a variable thresholding parameter. We also propose
accelerated versions of this iterative method, using ingredients of the
(linear) steepest descent method. We prove convergence in norm for one of these
projected gradient methods, without and with acceleration.
Comment: 24 pages, 5 figures. v2: added reference, some amendments, 27 pages.
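The projected gradient iteration described above can be sketched in a few lines. This is a minimal illustration assuming NumPy: the sort-based ℓ1-ball projection, the fixed step size, and the function names are placeholder choices for this sketch, and the accelerated variants from the paper are omitted.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius.
    The projection is a soft-thresholding of v with a data-dependent
    threshold, found by the standard sort-and-scan procedure."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                  # magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)     # variable threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, y, radius, step, n_iter=1000):
    """Minimize ||A x - y||^2 subject to ||x||_1 <= radius by gradient
    steps followed by projection on the l1-ball; each projection is
    itself a soft-thresholding with a variable thresholding parameter,
    matching the description in the abstract."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1_ball(x - step * (A.T @ (A @ x - y)), radius)
    return x
```

With the constraint radius set to the true ℓ1 norm of a sparse signal, the iteration drives the residual toward zero on a small synthetic compressed-sensing problem.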
Yet another breakdown point notion: EFSBP - illustrated at scale-shape models
The breakdown point in its different variants is one of the central notions
to quantify the global robustness of a procedure. We propose a simple
supplementary variant which is useful in situations where we have no obvious or
only partial equivariance: Extending the Donoho and Huber (1983) Finite Sample
Breakdown Point, we propose the Expected Finite Sample Breakdown Point to
produce less configuration-dependent values while still preserving the finite
sample aspect of the former definition. We apply this notion for joint
estimation of scale and shape (with only scale-equivariance available),
exemplified for generalized Pareto, generalized extreme value, Weibull, and
Gamma distributions. In these settings, we are interested in highly-robust,
easy-to-compute initial estimators; to this end we study Pickands-type and
Location-Dispersion-type estimators and compute their respective breakdown
points.
Comment: 21 pages, 4 figures.
Position and momentum observables on R and on R^3
We characterize all position and momentum observables on R and on R^3. We
study some of their operational properties and discuss their covariant joint
observables.
Comment: 18 pages.
Efficient Resolution of Anisotropic Structures
We highlight some recent developments concerning the sparse
representation of possibly high-dimensional functions exhibiting strong
anisotropic features and low regularity in isotropic Sobolev or Besov scales.
Specifically, we focus on the solution of transport equations which exhibit
propagation of singularities where, additionally, high-dimensionality enters
when the convection field, and hence the solutions, depend on parameters
varying over some compact set. Important constituents of our approach are
directionally adaptive discretization concepts motivated by compactly supported
shearlet systems, and well-conditioned stable variational formulations that
support trial spaces with anisotropic refinements with arbitrary
directionalities. We prove that they provide tight error-residual relations
which are used to contrive rigorously founded adaptive refinement schemes which
converge. Moreover, in the context of parameter-dependent problems we
discuss two approaches serving different purposes and working under different
regularity assumptions. For frequent query problems, making essential use of
the novel well-conditioned variational formulations, a new Reduced Basis Method
is outlined which exhibits a certain rate-optimal performance for indefinite,
unsymmetric or singularly perturbed problems. For the radiative transfer
problem with scattering a sparse tensor method is presented which mitigates or
even overcomes the curse of dimensionality under suitable (so far still
isotropic) regularity assumptions. Numerical examples for both methods
illustrate the theoretical findings.
Time-frequency detection algorithm for gravitational wave bursts
An efficient algorithm is presented for the identification of short bursts of
gravitational radiation in the data from broad-band interferometric detectors.
The algorithm consists of three steps: pixels of the time-frequency
representation of the data that have power above a fixed threshold are first
identified. Clusters of such pixels that conform to a set of rules on their
size and their proximity to other clusters are formed, and a final threshold is
applied on the power integrated over all pixels in such clusters. Formal
arguments are given to support the conjecture that this algorithm is very
efficient for a wide class of signals. A precise model for the false alarm rate
of this algorithm is presented, and it is shown using a number of
representative numerical simulations to be accurate at the 1% level for most
values of the parameters, with maximal error around 10%.
Comment: 26 pages, 15 figures, to appear in PR
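The three steps of the algorithm (pixel thresholding, cluster formation with size rules, and a final cut on integrated cluster power) can be sketched as follows. This is an illustrative reimplementation assuming NumPy/SciPy; the spectrogram parameters, 4-connectivity rule, and threshold choices are placeholder assumptions, not the tuned values or exact clustering rules of the paper.

```python
import numpy as np
from scipy import ndimage, signal

def tf_cluster_detect(x, fs, power_thresh, min_cluster_size,
                      cluster_power_thresh):
    """Three-step time-frequency burst search:
    1) keep spectrogram pixels whose power exceeds power_thresh,
    2) group neighboring kept pixels into clusters and apply a size rule,
    3) keep clusters whose power, integrated over all their pixels,
       exceeds a final threshold."""
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=64, noverlap=32)
    mask = Sxx > power_thresh                 # step 1: pixel threshold
    labels, n_clusters = ndimage.label(mask)  # step 2: connected clusters
    events = []
    for k in range(1, n_clusters + 1):
        cluster = labels == k
        if cluster.sum() < min_cluster_size:  # size rule
            continue
        p = Sxx[cluster].sum()                # step 3: integrated power
        if p > cluster_power_thresh:
            events.append((k, p))
    return events, (f, t, labels)
```

Run on noise with an injected narrowband burst, a pixel threshold set well above the noise median flags the burst while isolated noise exceedances are removed by the cluster-size rule.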
Probabilistic Reconstruction in Compressed Sensing: Algorithms, Phase Diagrams, and Threshold Achieving Matrices
Compressed sensing is a signal processing method that acquires data directly
in a compressed form. This allows one to make fewer measurements than was
considered necessary to record a signal, enabling faster or more precise
measurement protocols in a wide range of applications. Using an
interdisciplinary approach, we have recently proposed in [arXiv:1109.4424] a
strategy that allows compressed sensing to be performed at acquisition rates
approaching the theoretical optimal limits. In this paper, we give a more
thorough presentation of our approach and introduce many new results. We
present the probabilistic approach to reconstruction and discuss its optimality
and robustness. We detail the derivation of the message passing algorithm for
reconstruction and expectation maximization learning of signal-model
parameters. We further develop the asymptotic analysis of the corresponding
phase diagrams with and without measurement noise, for different distributions
of signals, and discuss the best possible reconstruction performance
regardless of the algorithm. We also present new efficient seeding matrices,
test them on synthetic data, and analyze their performance asymptotically.
Comment: 42 pages, 37 figures, 3 appendices.
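A minimal version of such message-passing reconstruction can be sketched with a soft-threshold denoiser. This generic AMP-style iteration (NumPy assumed; the fixed threshold rule `lam * sigma` and the function names are assumptions of this sketch) illustrates the structure, including the Onsager reaction term that distinguishes message passing from plain iterative thresholding, but omits the EM parameter learning and seeding matrices developed in the paper.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_reconstruct(A, y, lam=2.0, n_iter=50):
    """Approximate-message-passing-style reconstruction of a sparse
    signal from y = A x. The term z * (||x||_0 / m) is the Onsager
    correction that keeps the effective noise on the estimate
    approximately Gaussian across iterations."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        sigma = np.sqrt(np.mean(z ** 2))               # effective noise level
        x = soft(x + A.T @ z, lam * sigma)             # denoising step
        z = y - A @ x + z * (np.count_nonzero(x) / m)  # residual + Onsager
    return x
```

On a noiseless synthetic instance with i.i.d. Gaussian measurements well inside the recovery region (here m/n = 0.5 and 10 nonzeros out of 200), the iteration reconstructs the signal to small relative error.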
Compressed sensing imaging techniques for radio interferometry
Radio interferometry probes astrophysical signals through incomplete and
noisy Fourier measurements. The theory of compressed sensing demonstrates that
such measurements may actually suffice for accurate reconstruction of sparse or
compressible signals. We propose new generic imaging techniques based on convex
optimization for global minimization problems defined in this context. The
versatility of the framework notably allows introduction of specific prior
information on the signals, which offers the possibility of significant
improvements of reconstruction relative to the standard local matching pursuit
algorithm CLEAN used in radio astronomy. We illustrate the potential of the
approach by studying reconstruction performances on simulations of two
different kinds of signals observed with very generic interferometric
configurations. The first kind is an intensity field of compact astrophysical
objects. The second kind is the imprint of cosmic strings in the temperature
field of the cosmic microwave background radiation, of particular interest for
cosmology.
Comment: 10 pages, 1 figure. Version 2 matches the version accepted for
publication in MNRAS. Changes include: writing corrections, clarifications
of arguments, a figure update, and a new subsection 4.1 commenting on the exact
compliance of radio interferometric measurements with compressed sensing.
Detection and imaging in strongly backscattering randomly layered media
Echoes from small reflectors buried in heavy clutter are weak and difficult to distinguish from the medium backscatter. Detection and imaging with sensor arrays in such media require filtering out the unwanted backscatter and enhancing the echoes from the reflectors that we wish to locate. We consider a filtering and detection approach based on the singular value decomposition of the local cosine transform of the array response matrix. The algorithm is general and can be used for detection and imaging in heavy clutter, but its analysis depends on the model of the cluttered medium. This paper is concerned with the analysis of the algorithm in finely layered random media. We obtain a detailed characterization of the singular values of the transformed array response matrix and justify the systematic approach of the filtering algorithm for detecting and refining the time windows that contain the echoes that are useful in imaging.
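The window-screening idea can be illustrated with a simplified sketch. Assuming NumPy, and substituting plain rectangular time windows for the local cosine transform used in the paper, each window of the array response matrix is scored by its top singular value, and windows that stand out from the clutter baseline are flagged; the function name, window length, and threshold ratio are hypothetical choices for this sketch.

```python
import numpy as np

def flag_echo_windows(response, win_len, ratio_thresh=2.0):
    """Score each time window of the (sensors x time) array response
    matrix by its largest singular value; a coherent reflector echo
    produces an anomalously large top singular value relative to the
    clutter baseline, estimated here by the median over all windows."""
    n_sensors, n_t = response.shape
    starts = list(range(0, n_t - win_len + 1, win_len))
    tops = np.array([
        np.linalg.svd(response[:, s:s + win_len], compute_uv=False)[0]
        for s in starts
    ])
    baseline = np.median(tops)                # clutter-only level
    flagged = [s for s, t in zip(starts, tops)
               if t > ratio_thresh * baseline]
    return flagged, tops
```

A rank-one (coherent across sensors) echo injected into one window of a random clutter matrix lifts that window's top singular value well above the median, so only that window is flagged.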