Nilpotent Approximations of Sub-Riemannian Distances for Fast Perceptual Grouping of Blood Vessels in 2D and 3D
We propose an efficient approach for the grouping of local orientations
(points on vessels) via nilpotent approximations of sub-Riemannian distances
in the 2D and 3D roto-translation groups SE(2) and SE(3). In our distance
approximations we consider homogeneous norms on nilpotent groups that locally
approximate SE(2) and SE(3), and which are obtained via the exponential and
logarithmic maps on these groups. In a qualitative validation we show that the
norms provide accurate approximations of the true sub-Riemannian distances,
and we discuss their relation to the fundamental solution of the sub-Laplacian
on these groups.
The quantitative experiments further confirm the accuracy of the
approximations. Quantitative results are obtained by evaluating perceptual
grouping performance of retinal blood vessels in 2D images and curves in
challenging 3D synthetic volumes. The results show that 1) sub-Riemannian
geometry is essential for achieving top performance and 2) grouping via the
fast analytic approximations performs almost equally well as, or better than,
data-adaptive fast marching approaches.
Comment: 18 pages, 9 figures, 3 tables, in review at JMI
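To make the construction concrete, a rough sketch in code (not the exact norms or weights of the paper; the weight w and the fourth-power form are illustrative assumptions): the SE(2) logarithmic map yields Lie-algebra coordinates (c1, c2, c3), on which a homogeneous norm assigns degree 2 to the sideways direction c2 that is reachable only via commutators.

```python
import numpy as np

def se2_log(x, y, theta):
    """Logarithmic map of SE(2): group element (x, y, theta) -> (c1, c2, c3)."""
    if abs(theta) < 1e-10:          # near the identity the map reduces to the identity
        return x, y, theta
    half = theta / 2.0
    cot = np.cos(half) / np.sin(half)
    c1 = half * (cot * x + y)       # inverse of the SE(2) exponential's V-matrix
    c2 = half * (-x + cot * y)
    return c1, c2, theta

def homogeneous_norm(c1, c2, c3, w=1.0):
    """Illustrative homogeneous norm: c1 and c3 carry degree 1, c2 degree 2,
    so the norm is 1-homogeneous under the anisotropic dilations
    (c1, c2, c3) -> (t*c1, t^2*c2, t*c3) of the nilpotent approximation.
    The weight w is an assumed tuning parameter."""
    return (c1 ** 4 + (w * c3) ** 4 + (w ** 2) * c2 ** 2) ** 0.25
```

Such a closed-form norm is cheap to evaluate pairwise across all detected orientations, which is what enables fast perceptual grouping compared to solving a distance PDE per point.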
A PDE Approach to Data-driven Sub-Riemannian Geodesics in SE(2)
We present a new flexible wavefront propagation algorithm for the boundary
value problem for sub-Riemannian (SR) geodesics in the roto-translation group
SE(2) with a metric tensor depending on a smooth external cost C, computed
from image data. The method consists of a first step in which an SR-distance
map is computed as a viscosity solution of a Hamilton-Jacobi-Bellman (HJB)
system derived via Pontryagin's Maximum Principle (PMP). Subsequent backward
integration, again relying on PMP, gives the SR-geodesics. For C = 1 we show
that our method produces the global minimizers. Comparison with exact
solutions shows a remarkable accuracy of the SR-spheres and the SR-geodesics.
We present numerical computations of Maxwell points and cusp points, which we
again verify for the uniform cost case C = 1. Regarding image analysis
applications, tracking of elongated structures in retinal and synthetic images
shows that our line tracking generically deals with crossings. We show the
benefits of including the sub-Riemannian geometry.
Comment: Extended version of SSVM 2015 conference article "Data-driven
Sub-Riemannian Geodesics in SE(2)"
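As a simplified stand-in for the two-step scheme (a distance map, then backtracking), here is an isotropic eikonal solver on R^2 rather than the paper's SR system on SE(2); the Rouy-Tourin upwind update and steepest-descent backtracking are standard, but the grid, cost, and step sizes are illustrative assumptions.

```python
import numpy as np

def eikonal_sweep(cost, seed, h=1.0, n_sweeps=8):
    """Solve |grad W| = cost on a 2D grid by upwind Gauss-Seidel sweeping."""
    rows, cols = cost.shape
    W = np.full((rows, cols), np.inf)
    W[seed] = 0.0
    for _ in range(n_sweeps):
        for di in (1, -1):                       # alternate the four sweep orders
            for dj in (1, -1):
                for i in range(rows)[::di]:
                    for j in range(cols)[::dj]:
                        if (i, j) == seed:
                            continue
                        a = min(W[i - 1, j] if i > 0 else np.inf,
                                W[i + 1, j] if i < rows - 1 else np.inf)
                        b = min(W[i, j - 1] if j > 0 else np.inf,
                                W[i, j + 1] if j < cols - 1 else np.inf)
                        if np.isinf(a) and np.isinf(b):
                            continue             # no accepted neighbor yet
                        c = cost[i, j] * h
                        if abs(a - b) >= c:      # one-sided upwind update
                            w_new = min(a, b) + c
                        else:                    # two-sided quadratic update
                            w_new = 0.5 * (a + b + np.sqrt(2 * c * c - (a - b) ** 2))
                        W[i, j] = min(W[i, j], w_new)
    return W

def backtrack(W, start, seed, step=0.5, max_iter=1000):
    """Steepest descent on the distance map recovers the minimizing path."""
    p = np.array(start, float)
    path = [p.copy()]
    for _ in range(max_iter):
        if np.hypot(*(p - np.array(seed, float))) < 1.0:
            break
        i = int(round(min(max(p[0], 0), W.shape[0] - 1)))
        j = int(round(min(max(p[1], 0), W.shape[1] - 1)))
        gi = (W[min(i + 1, W.shape[0] - 1), j] - W[max(i - 1, 0), j]) / 2.0
        gj = (W[i, min(j + 1, W.shape[1] - 1)] - W[i, max(j - 1, 0)]) / 2.0
        g = np.array([gi, gj])
        if np.linalg.norm(g) == 0:
            break
        p = p - step * g / np.linalg.norm(g)     # descend toward the seed
        path.append(p.copy())
    return np.array(path)
```

In the paper's setting, the eikonal equation is replaced by an HJB system with a sub-Riemannian Hamiltonian on SE(2), and the backtracking follows the PMP dynamics rather than a plain gradient.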
Regular SE(3) Group Convolutions for Volumetric Medical Image Analysis
Regular group convolutional neural networks (G-CNNs) have been shown to
increase model performance and improve equivariance to different geometrical
symmetries. This work addresses the problem of SE(3), i.e., roto-translation
equivariance, on volumetric data. Volumetric image data is prevalent in many
medical settings. Motivated by the recent work on separable group convolutions,
we devise an SE(3) group convolution kernel separated into a continuous SO(3)
(rotation) kernel and a spatial kernel. We approximate equivariance to the
continuous setting by sampling uniform SO(3) grids. Our continuous SO(3) kernel
is parameterized via RBF interpolation on similarly uniform grids. We
demonstrate the advantages of our approach in volumetric medical image
analysis. Our SE(3) equivariant models consistently outperform CNNs and regular
discrete G-CNNs on challenging medical classification tasks and show
significantly improved generalization capabilities. Our approach achieves up to
a 16.5% gain in accuracy over regular CNNs.
Comment: 10 pages, 1 figure, 2 tables, accepted at MICCAI 2023; updated to the
camera-ready version
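The continuous separable SO(3) kernel is the paper's contribution; as a generic illustration of the regular group-convolution idea it builds on, here is a lifting correlation for the discrete planar rotation group p4 (the loop-based correlation and the kernel are illustrative assumptions, not the paper's SE(3) layer):

```python
import numpy as np

def lift_p4(image, kernel):
    """Lifting correlation from Z^2 to p4: correlate the image with the four
    90-degree rotated copies of the kernel, producing one orientation channel
    per group element."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((4, H - kh + 1, W - kw + 1))
    for r in range(4):
        k = np.rot90(kernel, r)                  # rotated kernel copy
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[r, i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out
```

A separable SE(3) layer in the spirit of the paper would replace the rotated-kernel copies with a factorized SO(3)-times-spatial kernel sampled on a uniform SO(3) grid, which is what keeps the volumetric computation tractable.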
On genuine invariance learning without weight-tying
In this paper, we investigate properties and limitations of invariance
learned by neural networks from the data compared to the genuine invariance
achieved through invariant weight-tying. To do so, we adopt a group theoretical
perspective and analyze invariance learning in neural networks without
weight-tying constraints. We demonstrate that even when a network learns to
correctly classify samples on a group orbit, the underlying decision-making in
such a model does not attain genuine invariance. Instead, learned invariance is
strongly conditioned on the input data, rendering it unreliable if the input
distribution shifts. We next demonstrate how to guide invariance learning
toward genuine invariance by regularizing the invariance of a model during
training. To this end, we propose several metrics to quantify learned
invariance: (i) predictive distribution invariance, (ii) logit invariance, and
(iii) saliency invariance similarity. We show that the invariance learned with
the invariance error regularization closely resembles the genuine invariance
of weight-tying models and reliably holds even under a severe input
distribution shift. Closer analysis of the learned invariance also reveals a
spectral decay phenomenon, whereby a network achieves invariance to a specific
transformation group by reducing its sensitivity to any input perturbation.
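A minimal sketch in the spirit of the logit-invariance metric (the paper's exact definitions may differ; the orbit sampling and deviation measure here are assumptions): evaluate a model on the group orbit of an input and measure how much its logits vary across that orbit.

```python
import numpy as np

def logit_invariance_error(model, x, transforms):
    """Mean squared deviation of model logits across the orbit
    {g(x) : g in transforms}. Zero iff the logits are exactly invariant;
    such a quantity can also serve as a training-time regularizer."""
    logits = np.stack([model(g(x)) for g in transforms])   # (|G|, n_outputs)
    return float(((logits - logits.mean(axis=0)) ** 2).mean())
```

For a genuinely invariant model the error is zero on every input, whereas a model with merely learned invariance typically shows small error on in-distribution inputs and large error after a distribution shift.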
Latent Field Discovery In Interacting Dynamical Systems With Neural Fields
Systems of interacting objects often evolve under the influence of field
effects that govern their dynamics, yet previous works have abstracted such
effects away and assumed that systems evolve in a vacuum. In this work, we
focus on discovering these fields, and infer them from the observed dynamics
alone, without directly observing them. We theorize the presence of latent
force fields, and propose neural fields to learn them. Since the observed
dynamics constitute the net effect of local object interactions and global
field effects, recently popularized equivariant networks are inapplicable, as
they fail to capture global information. To address this, we propose to
disentangle local object interactions -- which are equivariant
and depend on relative states -- from external global field effects -- which
depend on absolute states. We model interactions with equivariant graph
networks, and combine them with neural fields in a novel graph network that
integrates field forces. Our experiments show that we can accurately discover
the underlying fields in charged-particle settings, traffic scenes, and
gravitational n-body problems, and effectively use them to learn the system and
forecast future trajectories.
Comment: NeurIPS 2023. https://github.com/mkofinas/aethe
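The disentanglement can be sketched as follows (the inverse-square interaction and the field closure are illustrative assumptions, not the paper's learned modules): pairwise terms depend only on relative positions, while the field term depends on absolute positions.

```python
import numpy as np

def net_forces(pos, field=lambda x: np.zeros_like(x)):
    """Sum of equivariant pairwise interactions (relative states only)
    and a global field force evaluated at absolute positions.
    pos: (n, d) array of particle positions; field: R^d -> R^d."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rel = pos[i] - pos[j]
            r = np.linalg.norm(rel)
            f[i] += rel / r ** 3        # illustrative repulsive pairwise term
        f[i] += field(pos[i])           # absolute-state-dependent field effect
    return f
```

With the field set to zero the forces are translation equivariant; a nonzero field breaks that symmetry, which is exactly why a purely equivariant network cannot represent the field term.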
Attentive Group Equivariant Convolutional Networks
Although group convolutional networks are able to learn powerful
representations based on symmetry patterns, they lack explicit means to learn
meaningful relationships among them (e.g., relative positions and poses). In
this paper, we present attentive group equivariant convolutions, a
generalization of the group convolution, in which attention is applied during
the course of convolution to accentuate meaningful symmetry combinations and
suppress non-plausible, misleading ones. We indicate that prior work on visual
attention can be described as special cases of our proposed framework and show
empirically that our attentive group equivariant convolutional networks
consistently outperform conventional group convolutional networks on benchmark
image datasets. Simultaneously, we provide interpretability to the learned
concepts through the visualization of equivariant attention maps.
Comment: Proceedings of the 37th International Conference on Machine Learning
(ICML), 2020
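A minimal sketch of the gist (attention weights modulating group-convolution responses along the orientation axis); the per-orientation score vector is an illustrative assumption, and the paper applies attention during the convolution itself rather than after it:

```python
import numpy as np

def orientation_attention(feats, w):
    """Softmax attention over the group (orientation) axis of G-CNN features.
    feats: (G, H, W) responses; w: (G,) illustrative per-orientation scores.
    Large-score orientations are accentuated, small-score ones suppressed."""
    scores = w[:, None, None] * feats                 # (G, H, W)
    alpha = np.exp(scores - scores.max(axis=0))       # numerically stable softmax
    alpha /= alpha.sum(axis=0)                        # weights sum to 1 over G
    return alpha * feats
```

With all scores equal the attention is uniform and the features are merely rescaled, recovering an ordinary group-convolution response up to a constant.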
Roto-Translation Equivariant Convolutional Networks: Application to Histopathology Image Analysis
Rotation-invariance is a desired property of machine-learning models for
medical image analysis and in particular for computational pathology
applications. We propose a framework to encode the geometric structure of the
special Euclidean motion group SE(2) in convolutional networks to yield
translation and rotation equivariance via the introduction of SE(2)-group
convolution layers. This structure enables models to learn feature
representations with a discretized orientation dimension that guarantees that
their outputs are invariant under a discrete set of rotations. Conventional
approaches for rotation invariance rely mostly on data augmentation, but this
does not guarantee the robustness of the output when the input is rotated.
Moreover, trained conventional CNNs may require test-time rotation augmentation to
reach their full capability. This study is focused on histopathology image
analysis applications for which it is desirable that the arbitrary global
orientation information of the imaged tissues is not captured by the machine
learning models. The proposed framework is evaluated on three different
histopathology image analysis tasks (mitosis detection, nuclei segmentation and
tumor classification). We present a comparative analysis for each problem and
show that a consistent increase in performance can be achieved when using the
proposed framework.
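The guarantee that outputs are invariant under a discrete set of rotations comes from pooling over the discretized orientation dimension; a minimal sketch (the four-channel setup mirrors 90-degree rotations, and the choice of max pooling is an illustrative assumption):

```python
import numpy as np

def rotation_invariant_logits(orientation_logits):
    """Pool per-orientation logits of shape (G, n_classes) over the
    orientation axis. A rotation of the input cyclically shifts this axis,
    so the max-pooled output is invariant under that discrete set of
    rotations."""
    return orientation_logits.max(axis=0)
```

This is why the network's final prediction ignores the arbitrary global orientation of the imaged tissue while the intermediate features still carry orientation information.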
Stability Analysis of Fractal Dimension in Retinal Vasculature
Fractal dimension (FD) has been considered a potential biomarker for retina-based disease detection. However, conflicting findings are reported in the literature regarding the association of this biomarker with diseases. This motivates us to examine the stability of the FD under different (1) vessel segmentations obtained from human observers, (2) automatic segmentation methods, (3) threshold values, and (4) regions of interest. Our experiments show that the corresponding relative errors with respect to the reference values, computed per patient, are generally higher than the relative standard deviation of the reference values themselves (across all patients). The conclusion of this paper is that we cannot fully rely on the studied FD values, and we thus do not recommend their use in quantitative clinical applications.
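For reference, the FD of a binary vessel mask is typically estimated by box counting; a minimal sketch (the grid sizes and least-squares fit are standard, but the exact estimator and preprocessing used in the paper are not specified here):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate fractal dimension of a 2D binary mask as the slope of
    log N(s) versus log(1/s), where N(s) counts occupied s-by-s boxes."""
    H, W = mask.shape
    counts = []
    for s in sizes:
        n = sum(mask[i:i + s, j:j + s].any()      # box contains any vessel pixel
                for i in range(0, H, s)
                for j in range(0, W, s))
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

The paper's stability concern is visible even in this sketch: the estimate depends on the binary segmentation, any thresholding applied before it, and the region of interest over which the boxes are counted.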