Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies
In motion analysis and understanding, it is important to fit a suitable
model or structure to the temporal series of observed data, both to describe
motion patterns compactly and to discriminate between them. In an
unsupervised context, i.e., when no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. Volumetric approaches, in which motion is captured from multiple
cameras and a voxel-set representation of the body is built from the camera
views, have recently gained ground thanks to attractive features such as
inherent view-invariance and robustness to occlusion. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally
linear embedding (LLE) can be useful in this context, as they preserve
"protrusions", i.e., high-curvature regions of the 3D volume of articulated
shapes, while improving their separation in a lower-dimensional space, which
makes them easier to cluster. In this paper we therefore propose a spectral
approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data support the proposed method's ability to
cluster body parts consistently over time in a fully unsupervised fashion,
its robustness to sampling density and shape quality, and its potential for
bottom-up model construction.
Comment: 31 pages, 26 figures
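
The per-frame clustering step this abstract describes can be sketched with
off-the-shelf tools. Below is a minimal illustration, assuming scikit-learn's
LocallyLinearEmbedding as a stand-in for the paper's spectral formulation;
the synthetic "body" (a torso blob with two thin protrusions) and all
parameter values are assumptions for illustration, not the authors' setup,
and the temporal propagation and split/merge steps are omitted.

```python
# A minimal sketch of the spectral pipeline described above, using
# scikit-learn's LocallyLinearEmbedding as a stand-in for the paper's
# formulation. The synthetic "body" and all parameters are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Synthetic voxel-set stand-in: a torso blob plus two protruding limbs.
torso = rng.normal(0.0, 0.5, size=(400, 3))
arm = np.column_stack([np.linspace(0.5, 3.0, 150),
                       rng.normal(0.0, 0.1, 150),
                       rng.normal(0.0, 0.1, 150)])
leg = np.column_stack([rng.normal(0.0, 0.1, 150),
                       np.linspace(-0.5, -3.0, 150),
                       rng.normal(0.0, 0.1, 150)])
points = np.vstack([torso, arm, leg])

# LLE tends to stretch high-curvature protrusions apart, so the limbs
# become easier to separate in the embedding than in raw 3D coordinates.
low_dim = LocallyLinearEmbedding(n_neighbors=12,
                                 n_components=2).fit_transform(points)

# Cluster in the embedding space, one cluster per expected body part.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(low_dim)
print("cluster sizes:", np.bincount(labels))
```
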
Unsupervised Video Anomaly Detection with Diffusion Models Conditioned on Compact Motion Representations
This paper aims to address the unsupervised video anomaly detection (VAD)
problem, which involves classifying each frame in a video as normal or
abnormal, without any access to labels. To accomplish this, the proposed method
employs conditional diffusion models, where the input data is the
spatiotemporal features extracted from a pre-trained network, and the condition
is the features extracted from compact motion representations that summarize a
given video segment in terms of its motion and appearance. Our method utilizes
a data-driven threshold and considers a high reconstruction error as an
indicator of anomalous events. This study is the first to utilize compact
motion representations for VAD, and experiments conducted on two large-scale
VAD benchmarks demonstrate that they supply relevant information to the
diffusion model and consequently improve VAD performance relative to the
prior art. Importantly, our method exhibits better generalization across
datasets, notably outperforming both state-of-the-art and baseline methods.
The code of our method is available at
https://github.com/AnilOsmanTur/conditioned_video_anomaly_diffusion
Comment: Accepted to ICIAP 2023
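
The data-driven thresholding of reconstruction errors described above can be
sketched as follows. The conditional diffusion model is replaced by a
hypothetical `reconstruct` stub, and the feature shapes, noise level, and
percentile choice are illustrative assumptions rather than the paper's
settings.

```python
# A minimal sketch of reconstruction-error scoring with a data-driven
# threshold, as outlined above. The conditional diffusion model is stubbed
# by a hypothetical `reconstruct` function; all values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(features: np.ndarray) -> np.ndarray:
    """Stand-in for the diffusion model's conditional reconstruction of
    per-frame spatiotemporal features."""
    return features + rng.normal(0.0, 0.05, size=features.shape)

# Pretend per-frame features from a pre-trained backbone (1000 frames).
features = rng.normal(size=(1000, 512))
errors = np.linalg.norm(features - reconstruct(features), axis=1)

# Data-driven threshold: frames whose reconstruction error lies in the top
# tail of the empirical error distribution are flagged as anomalous.
threshold = np.percentile(errors, 95)  # illustrative percentile
anomalous = errors > threshold
print(f"flagged {anomalous.sum()} of {len(errors)} frames")
```
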
Modality Cycles with Masked Conditional Diffusion for Unsupervised Anomaly Segmentation in MRI
Unsupervised anomaly segmentation aims to detect patterns that are distinct
from any patterns processed during training, commonly called abnormal or
out-of-distribution patterns, without providing any associated manual
segmentations. Since anomalies during deployment can lead to model failure,
detecting them can enhance the reliability of models, which is valuable
in high-risk domains like medical imaging. This paper introduces Masked
Modality Cycles with Conditional Diffusion (MMCCD), a method that enables
segmentation of anomalies across diverse patterns in multimodal MRI. The method
is based on two fundamental ideas. First, we propose the use of cyclic modality
translation as a mechanism for enabling abnormality detection.
Image-translation models learn tissue-specific modality mappings, which are
characteristic of tissue physiology. Thus, these learned mappings fail to
translate tissues or image patterns that have never been encountered during
training, and the error enables their segmentation. Furthermore, we combine
image translation with a masked conditional diffusion model, which attempts to
"imagine" what tissue exists under a masked area, further exposing unknown
patterns as the generative model fails to recreate them. We evaluate our method
on a proxy task by training on healthy-looking slices of BraTS2021
multi-modality MRIs and testing on slices with tumors. We show that our method
compares favorably to previous unsupervised approaches based on image
reconstruction and denoising with autoencoders and diffusion models.
Comment: Accepted at the Multiscale Multimodal Medical Imaging workshop at
MICCAI 2023
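
The cyclic-translation idea can be illustrated with a toy sketch: a mapping
learned only on "healthy" intensities fails to round-trip unseen patterns,
so the cycle-reconstruction error localizes them. Both translators below are
hypothetical stubs (the paper uses a masked conditional diffusion model,
which this sketch omits), and the image, injected anomaly, and threshold are
assumptions for illustration.

```python
# A toy sketch of cyclic modality translation for anomaly segmentation, in
# the spirit of the method above. Both translators are hypothetical stubs
# that only behave correctly on the intensity range "seen in training".
import numpy as np

rng = np.random.default_rng(0)

def translate_t1_to_t2(img: np.ndarray) -> np.ndarray:
    """Stand-in for a learned T1->T2 mapping, valid only on [0, 1]."""
    return 1.0 - np.clip(img, 0.0, 1.0)

def translate_t2_to_t1(img: np.ndarray) -> np.ndarray:
    """Stand-in for the reverse T2->T1 mapping, valid only on [0, 1]."""
    return 1.0 - np.clip(img, 0.0, 1.0)

t1 = rng.random((128, 128))  # pretend healthy-looking T1 slice
t1[40:60, 40:60] += 2.0      # injected out-of-distribution region (tumor proxy)

# Cycle T1 -> T2 -> T1: intensities the mappings never learned do not
# survive the round trip, so the cycle error localizes the anomaly.
cycled = translate_t2_to_t1(translate_t1_to_t2(t1))
error_map = np.abs(t1 - cycled)

anomaly_mask = error_map > 0.1  # illustrative threshold
print("anomalous pixels:", int(anomaly_mask.sum()))
```
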