Discrete spherical means of directional derivatives and Veronese maps
We describe and study geometric properties of discrete circular and spherical
means of directional derivatives of functions, as well as discrete
approximations of higher order differential operators. For an arbitrary
dimension we present a general construction for obtaining discrete spherical
means of directional derivatives. The construction is based on using the
Minkowski's existence theorem and Veronese maps. Approximating the directional
derivatives by appropriate finite differences allows one to obtain finite
difference operators with good rotation invariance properties. In particular,
we use discrete circular and spherical means to derive discrete approximations
of various linear and nonlinear first- and second-order differential operators,
including discrete Laplacians. The practical potential of our approach is
demonstrated through applications to nonlinear filtering of digital images and
surface curvature estimation.
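In two dimensions, the idea of a discrete circular mean of directional derivatives can be sketched in a few lines. The sketch below (function names and the simple central-difference discretization are mine, not the paper's general Veronese-map construction) uses the fact that for n >= 3 equally spaced directions u_i, the average of the quadratic forms u_i^T H u_i equals trace(H)/2, so twice the mean of second directional derivatives approximates the Laplacian:

```python
import numpy as np

def circular_mean_laplacian(f, x, y, h=1e-2, n=8):
    """Approximate the 2D Laplacian of f at (x, y) by a discrete
    circular mean of second-order directional derivatives.
    For n >= 3 equally spaced directions u_i, the average of
    u_i^T H u_i is trace(H)/2, so twice the mean of the second
    directional derivatives recovers Delta f."""
    thetas = np.arange(n) * np.pi / n          # n directions in [0, pi)
    acc = 0.0
    for t in thetas:
        ux, uy = np.cos(t), np.sin(t)
        # central second difference along direction (ux, uy)
        acc += (f(x + h * ux, y + h * uy)
                - 2.0 * f(x, y)
                + f(x - h * ux, y - h * uy)) / h**2
    return 2.0 * acc / n

f = lambda x, y: x**2 + y**2                   # exact Laplacian is 4
print(circular_mean_laplacian(f, 0.3, -0.7))   # -> 4.0 up to rounding
```

Averaging over many directions, rather than only the two axis directions of the standard 5-point stencil, is what gives such operators their good rotation-invariance properties.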
The Local Structure of Space-Variant Images
Local image structure is widely used in theories of both machine and biological vision. The form of the differential operators describing this structure for space-invariant images has been well documented (e.g. Koenderink, 1984). Although space-variant coordinates are universally used in mammalian visual systems, the form of the operators in the space-variant domain has received little attention. In this report we derive the form of the most common differential operators and surface characteristics in the space-variant domain and show examples of their use. The operators include the Laplacian, the gradient and the divergence, as well as the fundamental forms of the image treated as a surface. We illustrate the use of these results by deriving the space-variant form of corner detection and image enhancement algorithms. The latter is shown to have interesting properties in the complex log domain, implicitly encoding a variable grid-size integration of the underlying PDE, allowing rapid enhancement of large-scale peripheral features while preserving high spatial frequencies in the fovea. (Office of Naval Research N00014-95-I-0409)
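The key identity behind the complex-log results can be checked numerically: under the conformal map w = log z, the Cartesian Laplacian becomes a uniform stencil in (rho, theta) times a radial weight, Delta_{x,y} f = exp(-2 rho) Delta_{rho,theta} F with rho = log r. A minimal sketch, with grid layout and names my own rather than the report's:

```python
import numpy as np

def cartesian_laplacian_logpolar(F, rho, drho, dtheta):
    """Cartesian Laplacian of an image F sampled on a complex-log
    (log-polar) grid: F[i, j] lives at (rho_i, theta_j), rho = log r,
    theta periodic.  Uses the conformal identity
        Delta_{x,y} f = exp(-2 rho) * Delta_{rho,theta} F,
    i.e. a uniform 5-point stencil in (rho, theta) times a radial
    weight.  Returns values at the interior rho samples."""
    d2_rho = (F[2:, :] - 2.0 * F[1:-1, :] + F[:-2, :]) / drho**2
    d2_theta = (np.roll(F, -1, axis=1) - 2.0 * F
                + np.roll(F, 1, axis=1))[1:-1, :] / dtheta**2
    return np.exp(-2.0 * rho[1:-1, None]) * (d2_rho + d2_theta)

# sanity check: f(x, y) = x^2 + y^2 = exp(2 rho), whose Laplacian is 4
rho = np.linspace(0.0, 0.1, 101)
F = np.exp(2.0 * rho)[:, None] * np.ones((1, 64))
L = cartesian_laplacian_logpolar(F, rho, drho=rho[1] - rho[0],
                                 dtheta=2 * np.pi / 64)
print(np.abs(L - 4.0).max())   # small: the identity holds on the grid
```

This uniform-stencil-plus-weight structure is the "variable grid-size integration" the abstract refers to: one PDE step on the log-polar grid covers large peripheral regions and small foveal ones simultaneously.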
Locally Adaptive Frames in the Roto-Translation Group and their Applications in Medical Imaging
Locally adaptive differential frames (gauge frames) are a well-known
effective tool in image analysis, used in differential invariants and
PDE-flows. However, at complex structures such as crossings or junctions, these
frames are not well-defined. Therefore, we generalize the notion of gauge
frames on images to gauge frames on data representations defined on the
extended space of positions and orientations, which we relate to data on the
roto-translation group SE(d), d = 2, 3. This allows us to define multiple
frames per position, one per orientation. We compute these frames via
exponential curve fits in the extended data representations in SE(d). These
curve fits minimize first- or second-order variational problems which are
solved by spectral decomposition of, respectively, a structure tensor or
Hessian of data on SE(d). We include these gauge frames in differential
invariants and crossing-preserving PDE-flows acting on extended data
representations, and we show their advantage compared to the standard
left-invariant frame on SE(d). Applications include
crossing-preserving filtering and improved segmentations of the vascular tree
in retinal images, and new 3D extensions of coherence-enhancing diffusion via
invertible orientation scores.
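For intuition, the image-domain ancestor of these gauge frames can be sketched directly: eigenvectors of a smoothed structure tensor give a per-pixel gradient/isophote frame. (The paper's contribution is lifting this to SE(d), where a crossing gets one frame per orientation instead of one ill-defined frame.) Function names and the Gaussian smoothing details below are my own, not the paper's:

```python
import numpy as np

def gauge_frames_2d(image, sigma=2.0):
    """Per-pixel locally adaptive (gauge) frame of a 2D image via the
    Gaussian-smoothed structure tensor.  The large-eigenvalue
    eigenvector points across edges (gradient direction), the
    small-eigenvalue one along them (isophote direction)."""
    gy, gx = np.gradient(image.astype(float))   # axis 0 is y, axis 1 is x

    def smooth(a):                              # separable Gaussian blur
        r = int(3 * sigma)
        t = np.arange(-r, r + 1)
        k = np.exp(-t**2 / (2 * sigma**2))
        k /= k.sum()
        a = np.apply_along_axis(np.convolve, 0, a, k, mode='same')
        return np.apply_along_axis(np.convolve, 1, a, k, mode='same')

    jxx, jxy, jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    T = np.stack([np.stack([jxx, jxy], -1),
                  np.stack([jxy, jyy], -1)], -2)  # (..., 2, 2), (x, y) basis
    w, v = np.linalg.eigh(T)                      # eigenvalues ascending
    return v[..., 1], v[..., 0]                   # gradient dir, isophote dir

# on a horizontal ramp the gradient-frame axis is (+/-1, 0)
img = np.tile(np.arange(32.0), (32, 1))
grad, iso = gauge_frames_2d(img)
print(np.abs(grad[16, 16]))   # -> [1. 0.]
```

At a crossing, this single per-pixel frame averages the two orientations away, which is exactly the failure mode the extension to SE(d) is designed to avoid.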
Left-invariant evolutions of wavelet transforms on the Similitude Group
Enhancement of multiple-scale elongated structures in noisy image data is
relevant for many biomedical applications but commonly used PDE-based
enhancement techniques often fail at crossings in an image. To get an overview
of how an image is composed of local multiple-scale elongated structures we
construct a multiple scale orientation score, which is a continuous wavelet
transform on the similitude group, SIM(2). Our unitary transform maps the space
of images onto a reproducing kernel space defined on SIM(2), allowing us to
robustly relate Euclidean (and scaling) invariant operators on images to
left-invariant operators on the corresponding continuous wavelet transform.
Rather than applying commonly used wavelet (soft-)thresholding techniques, we employ the
group structure in the wavelet domain to arrive at left-invariant evolutions
and flows (diffusion), for contextual crossing preserving enhancement of
multiple scale elongated structures in noisy images. We present experiments
that display benefits of our work compared to recent PDE techniques acting
directly on the images and to our previous work on left-invariant diffusions on
orientation scores defined on the Euclidean motion group.
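The forward transform onto SIM(2) can be sketched as follows: correlate the image with rotated and dilated copies of a directional mother wavelet, yielding a stack U[a, theta, y, x] over scale, orientation, and translation. The Gabor-style mother wavelet below is an illustrative stand-in, not the proper wavelets that make the paper's transform unitary and invertible:

```python
import numpy as np

def sim2_wavelet_transform(image, n_theta=8, scales=(1.0, 2.0)):
    """Minimal sketch of a wavelet transform on SIM(2): correlate the
    image with rotated/dilated copies of a directional mother wavelet
    (done in the Fourier domain), giving coefficients indexed by
    scale, orientation, and position."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    F = np.fft.fft2(image)
    out = np.empty((len(scales), n_theta, ny, nx))
    for si, a in enumerate(scales):
        for ti in range(n_theta):
            th = ti * np.pi / n_theta
            # the group element (a, th) acts on frequency coordinates
            u = a * (np.cos(th) * fx + np.sin(th) * fy)
            v = a * (-np.sin(th) * fx + np.cos(th) * fy)
            # Gabor-style directional wavelet in the Fourier domain
            psi_hat = np.exp(-((u - 0.15)**2 + v**2) / (2 * 0.05**2))
            out[si, ti] = np.real(np.fft.ifft2(F * psi_hat))
    return out
```

A vertically striped test image responds strongly in the theta = 0 channel and negligibly in the theta = pi/2 channel, which is the orientation selectivity that lets left-invariant flows on the transform preserve crossings.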
Surface Networks
We study data-driven representations for three-dimensional triangle meshes,
which are one of the prevalent objects used to represent 3D geometry. Recent
works have developed models that exploit the intrinsic geometry of manifolds
and graphs, namely Graph Neural Networks (GNNs) and their spectral variants,
which learn from the local metric tensor via the Laplacian operator. Despite
offering excellent sample complexity and built-in invariances, intrinsic
geometry alone is invariant to isometric deformations, making it unsuitable for
many applications. To overcome this limitation, we propose several upgrades to
GNNs to leverage extrinsic differential geometry properties of
three-dimensional surfaces, increasing their modeling power.
In particular, we propose to exploit the Dirac operator, whose spectrum
detects principal curvature directions; this is in stark contrast with the
classical Laplace operator, which directly measures mean curvature. We coin the
resulting models \emph{Surface Networks (SN)}. We prove that these models
define shape representations that are stable to deformation and to
discretization, and we demonstrate the efficiency and versatility of SNs on two
challenging tasks: temporal prediction of mesh deformations under non-linear
dynamics and generative models using a variational autoencoder framework with
encoders/decoders given by SNs.
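A minimal example of the intrinsic operator that spectral GNN variants learn from is the combinatorial (umbrella) graph Laplacian of a triangle mesh; the Dirac operator the paper proposes is a first-order alternative whose spectrum is sensitive to principal curvature directions. A toy sketch of the Laplacian baseline, with dense matrices and names of my choosing:

```python
import numpy as np

def graph_laplacian(n_vertices, faces):
    """Combinatorial (umbrella) graph Laplacian L = D - A of a triangle
    mesh: A is the vertex adjacency induced by face edges, D the degree
    matrix.  A purely intrinsic operator, hence blind to isometric
    (bending) deformations, which motivates extrinsic operators."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    return np.diag(A.sum(axis=1)) - A

# one-ring mesh: center vertex 0 with four symmetric neighbors
verts = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1]], float)
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
L = graph_laplacian(5, faces)
# L applied to positions vanishes at the center of a symmetric flat patch
print((L @ verts)[0])   # -> [0. 0.]
```

Applying L to vertex positions yields a discrete mean-curvature-like vector per vertex (zero on this flat patch), which is the sense in which the classical Laplacian "directly measures mean curvature" while missing curvature directions.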