Block-diagonal covariance selection for high-dimensional Gaussian graphical models
Gaussian graphical models are widely utilized to infer and visualize networks
of dependencies between continuous variables. However, inferring the graph is
difficult when the sample size is small compared to the number of variables. To
reduce the number of parameters to estimate in the model, we propose a
non-asymptotic model selection procedure supported by strong theoretical
guarantees based on an oracle inequality and a minimax lower bound. The
covariance matrix of the model is approximated by a block-diagonal matrix. The
structure of this matrix is detected by thresholding the sample covariance
matrix, where the threshold is selected using the slope heuristic. Based on the
block-diagonal structure of the covariance matrix, the estimation problem is
divided into several independent sub-problems; the network of
dependencies between variables is then inferred using the graphical lasso algorithm
in each block. The performance of the procedure is illustrated on simulated
data. An application to a real gene expression dataset with a limited sample
size is also presented: the dimension reduction allows attention to be
objectively focused on interactions among smaller subsets of genes, leading to
a more parsimonious and interpretable modular network.
Comment: Accepted in JAS
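The block-detection step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the threshold is fixed by hand rather than chosen by the slope heuristic, the toy covariance matrix is made up, and the per-block graphical lasso fits are omitted.

```python
import numpy as np

def detect_blocks(S, threshold):
    """Label variables by the connected components of the thresholded
    covariance graph (edge i-j iff |S[i, j]| > threshold), via union-find."""
    p = S.shape[0]
    parent = list(range(p))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(p):
        for j in range(i + 1, p):
            if abs(S[i, j]) > threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(p)]
    first_seen = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return np.array([first_seen[r] for r in roots])

# Toy sample covariance: strong within-block entries, weak noise elsewhere.
S = np.array([[1.00, 0.60, 0.50, 0.05, 0.02],
              [0.60, 1.00, 0.40, 0.01, 0.03],
              [0.50, 0.40, 1.00, 0.04, 0.02],
              [0.05, 0.01, 0.04, 1.00, 0.70],
              [0.02, 0.03, 0.02, 0.70, 1.00]])
# Fixed threshold stands in for the slope-heuristic choice in the paper.
blocks = detect_blocks(S, threshold=0.2)
```

Each detected block can then be treated as an independent estimation problem, e.g. by running the graphical lasso within each block separately.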
Calibrating Array Detectors
The development of sensitive large format imaging arrays for the infrared
promises to provide revolutionary capabilities for space astronomy. For
example, the Infrared Array Camera (IRAC) on SIRTF will use four 256 x 256
arrays to provide background limited high spatial resolution images of the sky
in the 3 to 8 micron spectral region. In order to reach the performance limits
possible with this generation of sensitive detectors, calibration procedures
must be developed so that detector-calibration uncertainties remain dominated
by photon statistics from the dark sky, the major system noise
source. In the near infrared, where the faint extragalactic sky is observed
through the scattered and reemitted zodiacal light from our solar system,
calibration is particularly important. Faint sources must be detected on this
brighter local foreground.
We present a procedure for calibrating imaging systems and analyzing such
data. In our approach, by proper choice of observing strategy, information
about detector parameters is encoded in the sky measurements. Proper analysis
allows us to simultaneously solve for sky brightness and detector parameters,
and provides accurate formal error estimates.
This approach allows us to extract the calibration from the observations
themselves; little or no additional information is necessary to allow full
interpretation of the data. Further, this approach allows refinement and
verification of detector parameters during the mission, and thus does not
depend on a priori knowledge of the system or ground calibration for
interpretation of images.
Comment: Scheduled for ApJS, June 2000 (16 pages, 3 JPEG figures)
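The joint-solve idea can be sketched with a toy model; this is an assumption-laden illustration, not the actual IRAC pipeline. We assume a 1-D detector whose only unknown parameters are per-pixel offsets, observed with known dither shifts; stacking all frames gives a linear system solved jointly for sky brightness and detector parameters by least squares.

```python
import numpy as np

# Hypothetical geometry: each frame observes the sky shifted by a known
# dither, through a detector with unknown per-pixel offsets:
#   d[f, i] = sky[i + shift[f]] + offset[i]
n_pix, shifts = 6, [0, 1, 2]
rng = np.random.default_rng(1)
true_sky = rng.uniform(1.0, 2.0, size=n_pix + max(shifts))
true_off = rng.normal(0.0, 0.1, size=n_pix)
true_off -= true_off.mean()            # gauge: offsets sum to zero

n_sky = len(true_sky)
rows, data = [], []
for s in shifts:
    for i in range(n_pix):
        row = np.zeros(n_sky + n_pix)
        row[i + s] = 1.0               # sky-brightness term
        row[n_sky + i] = 1.0           # detector-offset term
        rows.append(row)
        data.append(true_sky[i + s] + true_off[i])
# Gauge-fixing row: force sum(offsets) = 0 to break the degeneracy
# between a constant sky level and a constant detector offset.
g = np.zeros(n_sky + n_pix); g[n_sky:] = 1.0
A = np.vstack(rows + [g]); d = np.array(data + [0.0])

sol, *_ = np.linalg.lstsq(A, d, rcond=None)
sky_hat, off_hat = sol[:n_sky], sol[n_sky:]
```

The normal-equation covariance of this fit is what supplies the "accurate formal error estimates" mentioned above; with dithered observations, the same data constrain both the sky and the detector.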
Image formation in synthetic aperture radio telescopes
Next generation radio telescopes will be much larger and more sensitive, will
have much larger observation bandwidths, and will be capable of pointing
multiple beams simultaneously. Obtaining the sensitivity, resolution and dynamic range
supported by the receivers requires the development of new signal processing
techniques for array and atmospheric calibration as well as new imaging
techniques that are both more accurate and computationally efficient since data
volumes will be much larger. This paper provides a tutorial overview of
existing image formation techniques and outlines some of the future directions
needed for information extraction from future radio telescopes. We describe the
imaging process from the measurement equation to deconvolution, both as a
Fourier inversion problem and as an array processing estimation problem. The
latter formulation enables the development of more advanced techniques based on
state of the art array processing. We demonstrate the techniques on simulated
and measured radio telescope data.
Comment: 12 pages
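The Fourier-inversion view of image formation can be sketched in a few lines. This is a minimal illustration only: the (u, v) coverage and the single point source are invented, and calibration, weighting, and deconvolution are all omitted.

```python
import numpy as np

# Assumed toy interferometer: 200 random (u, v) baseline samples observing
# a single unit point source at direction cosines (l0, m0).
rng = np.random.default_rng(2)
uv = rng.uniform(-50.0, 50.0, size=(200, 2))
l0, m0 = 0.02, -0.01
vis = np.exp(-2j * np.pi * (uv[:, 0] * l0 + uv[:, 1] * m0))

# Dirty image: direct (inverse) Fourier sum of the sampled visibilities
# evaluated on a small grid of sky directions.
grid = np.linspace(-0.05, 0.05, 41)
L, M = np.meshgrid(grid, grid, indexing="ij")
phase = uv[:, 0, None, None] * L + uv[:, 1, None, None] * M
dirty = (np.exp(2j * np.pi * phase) * vis[:, None, None]).real.mean(axis=0)
```

The dirty image peaks at the true source position but is corrupted by sidelobes of the point spread function set by the (u, v) coverage, which is why deconvolution (or the array-processing estimation formulation) is needed afterwards.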
4D Seismic History Matching Incorporating Unsupervised Learning
This paper focuses on the history
matching of reservoirs by integrating 4D seismic data into the inversion
process using machine learning techniques. A new integrated scheme for the
reconstruction of petrophysical properties with a modified Ensemble Smoother
with Multiple Data Assimilation (ES-MDA) in a synthetic reservoir is proposed.
The permeability field inside the reservoir is parametrised with an
unsupervised learning approach, namely the K-SVD dictionary-learning
algorithm, a generalisation of K-means based on the Singular Value
Decomposition. This is combined with the Orthogonal Matching Pursuit
(OMP) technique, which is widely used in sparsity-promoting regularisation
schemes. Moreover, seismic attributes, in particular, acoustic impedance, are
parametrised with the Discrete Cosine Transform (DCT). This novel combination
of techniques from machine learning, sparsity regularisation, seismic imaging
and history matching aims to address the ill-posedness of the inversion of
historical production data efficiently using ES-MDA. In the numerical
experiments provided, I demonstrate that these sparse representations of the
petrophysical properties and the seismic attributes yield better matches to
the true production data and quantify the propagating waterfront more
accurately than traditional methods that do not use comparable
parametrisation techniques.
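The OMP step can be sketched as follows. This is a plain OMP illustration, not the paper's pipeline: the dictionary here is random Gaussian rather than learned with K-SVD, and the sparse signal is synthetic rather than a permeability field.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Plain Orthogonal Matching Pursuit: greedily pick the dictionary
    atom most correlated with the residual, then refit the selected
    atoms to y by least squares."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0                     # never re-pick an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.5, -2.0, 0.8]          # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, n_nonzero=3)
```

In the history-matching scheme, the same sparse-coding idea represents the permeability field in a learned dictionary, so the ES-MDA update acts on a few coefficients instead of the full grid, which is what mitigates the ill-posedness.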