Gaussian Process Morphable Models
Statistical shape models (SSMs) represent a class of shapes as a normal
distribution of point variations, whose parameters are estimated from example
shapes. Principal component analysis (PCA) is applied to obtain a
low-dimensional representation of the shape variation in terms of the leading
principal components. In this paper, we propose a generalization of SSMs,
called Gaussian Process Morphable Models (GPMMs). We model the shape variations
with a Gaussian process, which we represent using the leading components of its
Karhunen-Loeve expansion. To compute the expansion, we make use of an
approximation scheme based on the Nystrom method. The resulting model can be
seen as a continuous analogue of an SSM. However, while for SSMs the shape
variation is restricted to the span of the example data, with GPMMs we can
define the shape variation using any Gaussian process. For example, we can
build shape models that correspond to classical spline models, and thus do not
require any example data. Furthermore, Gaussian processes make it possible to
combine different models. For example, an SSM can be extended with a spline
model, to obtain a model that incorporates learned shape characteristics, but
is flexible enough to explain shapes that cannot be represented by the SSM. We
introduce a simple algorithm for fitting a GPMM to a surface or image. This
results in a non-rigid registration approach, whose regularization properties
are defined by a GPMM. We show how we can obtain different registration
schemes, including methods for multi-scale, spatially-varying or hybrid
registration, by constructing an appropriate GPMM. As our approach strictly
separates modelling from the fitting process, this is all achieved without
changes to the fitting algorithm. We show the applicability and versatility of
GPMMs on a clinical use case, where the goal is the model-based segmentation of
3D forearm images.
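The construction above lends itself to a compact numerical sketch. The
following Python snippet is our illustration, not the authors' code: the
squared-exponential kernel, length scale, and toy point cloud are assumptions.
It builds a low-rank Gaussian process shape model, approximates the leading
Karhunen-Loeve components with the Nystrom method, and draws a random shape
from the model.

    import numpy as np

    def rbf_kernel(X, Y, s2=1.0, ell=10.0):
        # Squared-exponential covariance between point sets X (n,d) and Y (m,d).
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return s2 * np.exp(-0.5 * d2 / ell ** 2)

    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 100, size=(500, 3))  # toy reference surface: 500 points in 3D

    # Nystrom approximation of the Karhunen-Loeve expansion: eigendecompose the
    # kernel on m << n inducing points, then extend the eigenfunctions to all
    # points through the cross-covariance.
    n, m, r = len(ref), 50, 20                # points, inducing points, components kept
    idx = rng.choice(n, m, replace=False)
    evals, evecs = np.linalg.eigh(rbf_kernel(ref[idx], ref[idx]))
    evals, evecs = np.clip(evals[::-1], 1e-12, None), evecs[:, ::-1]  # descending
    lam = evals[:r] * n / m                                           # Nystrom eigenvalues
    phi = rbf_kernel(ref, ref[idx]) @ evecs[:, :r] / evals[:r] * np.sqrt(m / n)

    # Sample a shape: deform each coordinate by sum_i alpha_i sqrt(lam_i) phi_i,
    # with standard normal coefficients alpha, as in a truncated KL expansion.
    alpha = rng.standard_normal((r, 3))
    sample_shape = ref + phi @ (np.sqrt(lam)[:, None] * alpha)

Replacing the kernel here with a sample covariance of example deformations
recovers an ordinary SSM; summing kernels gives the hybrid models described
above.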
Deriving High-Precision Radial Velocities
This chapter briefly describes the key aspects behind the derivation of
precise radial velocities. I start by defining radial velocity precision in the
context of astrophysics in general and exoplanet searches in particular. Next I
discuss the different basic elements that constitute a spectrograph, and how
these elements and overall technical choices impact the derived radial
velocity precision. Then I go on to discuss the different wavelength
calibration and radial velocity calculation techniques, and how these are
intimately related to the spectrograph's properties. I conclude by presenting
some interesting examples of planets detected through radial velocity, and some
of the new-generation instruments that will push the precision limit further.
Comment: Lecture presented at the IVth Azores International Advanced School in
Space Sciences on "Asteroseismology and Exoplanets: Listening to the Stars
and Searching for New Worlds" (arXiv:1709.00645), which took place in Horta,
Azores Islands, Portugal in July 201
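As a minimal numerical aside (ours, not the chapter's), the non-relativistic
Doppler relation shows the scale of the measurement problem; the line and
shift below are invented for illustration.

    # Radial velocity from a measured Doppler shift (non-relativistic):
    #     v = c * (lambda_obs - lambda_rest) / lambda_rest
    C = 299_792_458.0        # speed of light, m/s

    lambda_rest = 5500.0     # rest wavelength of a stellar line, Angstrom (made up)
    lambda_obs = 5500.001    # observed wavelength, Angstrom (made up)

    v = C * (lambda_obs - lambda_rest) / lambda_rest
    print(f"radial velocity: {v:.1f} m/s")   # ~54.5 m/s

A shift of a thousandth of an Angstrom already corresponds to tens of m/s,
which is why the wavelength calibration discussed above is so critical for
m/s-level exoplanet searches.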
Observing the evaporation transition in vibro-fluidized granular matter
When a sand box is shaken, the grains on top start to jump, giving the picture
of an evaporating sand bulk: a gaseous transition sets in at the surface of
the granular matter (GM) bed. Moreover, the grains in the whole bed start to
move in a cooperative way that is far from a Brownian description. In a
previous work we showed that the key element in describing the statistics of
this behavior is the excluded-volume principle, whereby the system obeys
Fermi-like configurational statistics. Even though the experiment involves an
archetypal non-equilibrium system, we succeeded in defining a global
temperature as the quantity associated with the Lagrange parameter in a
maximum-entropy statistical description. In fact, to close our approach we had
to generalize the equipartition theorem to dissipative systems. We therefore
postulated, found, and measured a fundamental dissipative parameter, written
in terms of the pumping and gravitational energies, that links the
configurational entropy to the collective expansion of the centre of mass
(c.m.) of the granular bed. Here we present a kinetic approach to describe the
experimental velocity distribution function (VDF) of this non-Maxwellian gas
of macroscopic Fermi-like particles (mFp). The evaporation transition occurs
mainly through jumping balls governed by the excluded-volume principle.
Surprisingly, over the whole range of low temperatures that we measured, this
description reveals a lattice gas, leading to a packing factor that is
independent of the external parameters. In addition, we measure the mean free
path as a function of the driving frequency and corroborate the prediction of
the present kinetic theory.
Comment: 6 pages, 4 figures, submitted for publication September 1st, 200
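For orientation (our gloss, not an equation quoted from the paper), Fermi-like
configurational statistics means that excluded volume forbids double occupancy
of a cell, so the mean occupation of a single-grain state of energy \epsilon
takes the Fermi-Dirac form, with the global temperature T entering as the
Lagrange parameter:

    \langle n(\epsilon) \rangle = \frac{1}{e^{(\epsilon - \mu)/T} + 1},
    \qquad 0 \le \langle n(\epsilon) \rangle \le 1,

where the chemical potential \mu fixes the total number of grains.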
Understanding Health and Disease with Multidimensional Single-Cell Methods
Current efforts in the biomedical sciences and related interdisciplinary
fields are focused on gaining a molecular understanding of health and disease,
which is a problem of daunting complexity that spans many orders of magnitude
in characteristic length scales, from small molecules that regulate cell
function to cell ensembles that form tissues and organs working together as an
organism. In order to uncover the molecular nature of the emergent properties
of a cell, it is essential to measure multiple cell components simultaneously
in the same cell. In turn, cell heterogeneity requires multiple cells to be
measured in order to understand health and disease in the organism. This review
summarizes current efforts towards a data-driven framework that leverages
single-cell technologies to build robust signatures of healthy and diseased
phenotypes. While some approaches focus on multicolor flow cytometry data and
other methods are designed to analyze high-content image-based screens, we
emphasize the so-called Supercell/SVM paradigm (recently developed by the
authors of this review and collaborators) as a unified framework that captures
mesoscopic-scale emergence to build reliable phenotypes. Beyond their specific
contributions to basic and translational biomedical research, these efforts
illustrate, from a larger perspective, the powerful synergy that might be
achieved from bringing together methods and ideas from statistical physics,
data mining, and mathematics to solve the most pressing problems currently
facing the life sciences.
Comment: 25 pages, 7 figures; revised version with minor changes. To appear in
J. Phys.: Cond. Mat
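As a schematic of the Supercell/SVM idea in Python (a minimal sketch under our
reading of the paradigm, not the authors' implementation; the marker counts
and synthetic data are invented): random groups of single cells are averaged
into "supercells", damping cell-to-cell heterogeneity, before a linear SVM is
trained to separate the two phenotypes.

    import numpy as np
    from sklearn.svm import LinearSVC

    def make_supercells(cells, group_size, n_groups, rng):
        # Average random groups of single-cell measurements into "supercells".
        idx = rng.integers(0, len(cells), size=(n_groups, group_size))
        return cells[idx].mean(axis=1)

    rng = np.random.default_rng(0)
    # Synthetic single-cell data: 5 markers per cell; the two phenotypes differ
    # slightly in their population means relative to the per-cell noise.
    healthy = rng.normal(0.0, 1.0, size=(2000, 5))
    diseased = rng.normal(0.3, 1.0, size=(2000, 5))

    X = np.vstack([make_supercells(healthy, 20, 500, rng),
                   make_supercells(diseased, 20, 500, rng)])
    y = np.array([0] * 500 + [1] * 500)

    clf = LinearSVC().fit(X, y)      # linear SVM on the supercell features
    print("training accuracy:", clf.score(X, y))

Averaging g cells shrinks the per-marker noise by a factor of sqrt(g), which
is one way to read the mesoscopic-scale emergence emphasized above: a
signature invisible at the single-cell level becomes separable at the
supercell level.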
Variational Downscaling, Fusion and Assimilation of Hydrometeorological States via Regularized Estimation
Improved estimation of hydrometeorological states from down-sampled
observations and background model forecasts in a noisy environment has been a
subject of growing research in recent decades. Here, we introduce a unified
framework that ties together the problems of downscaling, data fusion and data
assimilation as ill-posed inverse problems. This framework seeks solutions
beyond the classic least-squares estimation paradigms by imposing proper
regularization, that is, constraints consistent with the degree of smoothness
and the probabilistic structure of the underlying state. We review relevant
regularization methods in derivative space and extend classic formulations of
the aforementioned problems with particular emphasis on hydrologic and
atmospheric applications. Informed by the statistical characteristics of the
state variable of interest, the central results of the paper suggest that
proper regularization can lead to a more accurate and stable recovery of the
true state and hence more skillful forecasts. In particular, using the Tikhonov
and Huber regularization in the derivative space, the promise of the proposed
framework is demonstrated in static downscaling and fusion of synthetic
multi-sensor precipitation data, while a data assimilation numerical experiment
is presented using the heat equation in a variational setting.
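The following Python sketch makes the regularized-estimation idea concrete for
the downscaling case; the averaging operator, difference penalty, and
synthetic signal are our stand-ins, not the paper's operators or data.

    import numpy as np

    n, m = 100, 25        # fine-grid state size, number of coarse observations

    # Downsampling operator H: each coarse observation averages 4 fine cells.
    H = np.kron(np.eye(m), np.full((1, n // m), m / n))

    # First-difference operator D: penalizing ||Dx|| promotes smooth states.
    D = np.diff(np.eye(n), axis=0)

    rng = np.random.default_rng(1)
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))     # synthetic true state
    y = H @ x_true + rng.normal(0, 0.05, m)           # noisy coarse observations

    # Tikhonov regularization in derivative space:
    #     x_hat = argmin ||H x - y||^2 + lam ||D x||^2,
    # solved through the normal equations (H'H + lam D'D) x = H'y.
    lam = 0.1
    x_hat = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)
    print("RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)))

With lam = 0 the problem is under-determined (25 equations, 100 unknowns); the
smoothness constraint is what makes the inverse problem well posed. The Huber
variant replaces the quadratic penalty on Dx with a quadratic-linear one so
that sharp precipitation fronts are not over-smoothed.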
A First Order Analysis of Lighting, Shading, and Shadows
The shading in a scene depends on a combination of many factors: how the lighting varies spatially across a surface, how it varies along different directions, the geometric curvature and reflectance properties of objects, and the locations of soft shadows. In this paper, we conduct a complete first-order, or gradient, analysis of lighting, shading, and shadows, showing how each factor separately contributes to scene appearance, and when it is important. Gradients are well suited for analyzing the intricate combination of appearance effects, since each gradient term corresponds directly to variation in a specific factor. First, we show how the spatial and directional gradients of the light field change as light interacts with curved objects. This extends the recent frequency analysis of Durand et al. to gradients, and has many advantages for operations, like bump mapping, that are difficult to analyze in the Fourier domain. Second, we consider the individual terms responsible for shading gradients, such as lighting variation, convolution with the surface BRDF, and the object's curvature. This analysis indicates the relative importance of the various terms, and shows precisely how they combine in shading. As one practical application, our theoretical framework can be used to adaptively sample images in high-gradient regions for efficient rendering. Third, we study the effects of soft shadows, computing accurate visibility gradients. We generalize previous work to arbitrary curved occluders, and develop a local framework that is easy to integrate with conventional ray-tracing methods. Our visibility gradients can be directly used in practical gradient-interpolation methods for efficient rendering.
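As a one-line illustration of how such gradient terms separate (our Lambertian
example, not an equation from the paper): for image intensity
I(x) = L(x) \cdot n(x), with L the incident lighting and n the surface normal
along the surface parameter x, the product rule splits the shading gradient
into a lighting-variation term and a curvature term,

    \nabla I \;=\; (\nabla L)^{\top} n \;+\; (\nabla n)^{\top} L ,

where \nabla n, the Jacobian of the normal, is exactly the surface curvature
(the shape operator). Richer BRDFs add a convolution term, the third
contribution named above.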