Linear-time Online Action Detection From 3D Skeletal Data Using Bags of Gesturelets
A sliding window is one direct way to extend a successful recognition system to
handle the more challenging detection problem. While action recognition decides
only whether or not an action is present in a pre-segmented video sequence,
action detection identifies the time interval where the action occurred in an
unsegmented video stream. Sliding-window approaches for action detection can,
however, be slow, as they maximize a classifier score over all possible
sub-intervals. Even though new schemes utilize dynamic programming to speed up
the search for the optimal sub-interval, they require offline processing on the
whole video sequence. In this paper, we propose a novel approach for online
action detection based on 3D skeleton sequences extracted from depth data. It
identifies the sub-interval with the maximum classifier score in linear time.
Furthermore, it is invariant to temporal scale variations and is suitable for
real-time applications with low latency.
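The linear-time sub-interval search can be illustrated with a Kadane-style scan over per-frame classifier scores. This is a minimal sketch, assuming scores are simply summed over frames; the paper's actual scoring and gesturelet features are not reproduced here:

```python
def best_subinterval(frame_scores):
    """Kadane-style scan: return (best_sum, start, end) for the
    contiguous sub-interval maximizing the summed classifier score.
    One pass, O(n) time, so it is usable in an online setting."""
    best_sum = float("-inf")
    best_range = (0, 0)
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(frame_scores):
        if cur_sum <= 0:          # restarting beats extending a non-positive run
            cur_sum, cur_start = s, i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, i)
    return best_sum, best_range[0], best_range[1]
```

In an online setting, the running `cur_sum`/`best_sum` state can be updated per incoming frame without revisiting earlier frames.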
Unsupervised Speech Representation Pooling Using Vector Quantization
With the advent of general-purpose speech representations from large-scale
self-supervised models, applying a single model to multiple downstream tasks is
becoming a de facto approach. However, the pooling problem remains: the length
of speech representations is inherently variable. Naive average pooling is often
used, even though it ignores characteristics of speech such as phonemes of
varying duration. Hence, we design a novel pooling method to
squash acoustically similar representations via vector quantization, which does
not require additional training, unlike attention-based pooling. Further, we
evaluate various unsupervised pooling methods on various self-supervised
models. We gather diverse methods scattered around speech and text to evaluate
on various tasks: keyword spotting, speaker identification, intent
classification, and emotion recognition. Finally, we quantitatively and
qualitatively analyze our method, comparing it with supervised pooling methods.
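The pooling idea can be sketched as below, assuming a precomputed codebook and NumPy arrays; the function name `vq_pool` and the run-merging details are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def vq_pool(frames, codebook):
    """Sketch of VQ-based pooling: assign each frame vector to its
    nearest codebook centroid, merge consecutive runs of frames that
    share a code into one averaged vector, then average the run
    vectors. A long phoneme thus counts once, not once per frame."""
    # nearest-centroid assignment via a (T, K) distance matrix
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    codes = d.argmin(axis=1)
    pooled, run = [], [frames[0]]
    for t in range(1, len(frames)):
        if codes[t] == codes[t - 1]:
            run.append(frames[t])          # same code: extend the run
        else:
            pooled.append(np.mean(run, axis=0))
            run = [frames[t]]
    pooled.append(np.mean(run, axis=0))
    return np.mean(pooled, axis=0)
```

No training is involved beyond building the codebook, which matches the abstract's contrast with attention-based pooling.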
Reconstruction and Scalable Detection and Tracking of 3D Objects
The task of detecting objects in images is essential for autonomous systems to categorize, comprehend, and eventually navigate or manipulate their environment. Since many applications demand not only the detection of objects but also the estimation of their exact poses, 3D CAD models can prove helpful, since they provide means for feature extraction and hypothesis refinement. This work therefore explores two paths: firstly, we look into methods to create richly textured and geometrically accurate models of real-life objects. Using these reconstructions as a basis, we investigate how to improve 3D object detection and pose estimation, focusing especially on scalability, i.e., the problem of dealing with multiple objects simultaneously.
Hybrid Scene Compression for Visual Localization
Localizing an image with respect to a 3D scene model is a core task for many
computer vision applications. An increasing number of real-world applications
of visual localization on mobile devices, e.g., Augmented Reality or autonomous
robots such as drones or self-driving cars, demand localization approaches to
minimize storage and bandwidth requirements. Compressing the 3D models used for
localization thus becomes a practical necessity. In this work, we introduce a
new hybrid compression algorithm that uses a given memory limit in a more
effective way. Rather than treating all 3D points equally, it represents a
small set of points with full appearance information and an additional, larger
set of points with compressed information. This enables our approach to obtain
a more complete scene representation without increasing the memory
requirements, leading to superior performance compared to previous
compression schemes. As part of our contribution, we show how to handle
ambiguous matches arising from point compression during RANSAC. Besides
outperforming previous compression techniques in terms of pose accuracy under
the same memory constraints, our compression scheme itself is also more
efficient. Furthermore, the localization rates and accuracy obtained with our
approach are comparable to state-of-the-art feature-based methods, while using
a small fraction of the memory.
Comment: Published at CVPR 201
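A hybrid scheme of this kind might be sketched as follows; `hybrid_compress`, the importance scores, and storing the remainder as codeword indices are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def hybrid_compress(descs, importance, codebook, n_full):
    """Sketch of hybrid scene compression: the n_full most important
    3D points keep their raw descriptors; every remaining point is
    stored only as the index of its nearest codeword, which costs a
    small integer instead of a full descriptor."""
    order = np.argsort(importance)[::-1]   # most important first
    full_ids, rest_ids = order[:n_full], order[n_full:]
    # quantize the remainder: nearest-codeword index per descriptor
    d = np.linalg.norm(descs[rest_ids][:, None] - codebook[None], axis=-1)
    return {
        "full": {int(i): descs[i] for i in full_ids},
        "quantized": dict(zip(rest_ids.tolist(), d.argmin(axis=1).tolist())),
    }
```

The memory budget then splits between a few exact points (for precise matching) and many quantized ones (for scene coverage), mirroring the trade-off the abstract describes.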
Spatial Deep Networks for Outdoor Scene Classification
Scene classification has become an increasingly popular topic in computer vision.
The techniques for scene classification can be widely used in many other aspects,
such as detection, action recognition, and content-based image retrieval. Recently,
the stationary property of images has been leveraged in conjunction with convolutional
networks to perform classification tasks. In the existing approach, one
random patch is extracted from each training image to learn filters for convolutional
processes. However, feature learning only from one random patch per image
is not robust, because patches selected from different areas of an image may
contain distinct scene objects, giving the features of those patches different
descriptive power. In this dissertation, focusing on deep learning techniques, we
propose a multi-scale network that utilizes multiple random patches and different
patch dimensions to learn feature representations for images in order to improve
the existing approach.
Although the multi-scale network achieves much better performance than the
existing approach, the lack of local features and spatial layout is a core
limitation of both methods. Therefore, we propose a novel Spatial Deep
Network (SDN) to further enhance the existing approach by exploiting the spatial
layout of the image and constraining the random patch extraction to be performed
in different areas of the image so as to effectively restrict the patches to hold the
necessary characteristics of different image areas. In this way, SDN yields compact
but discriminative features that incorporate both global descriptors and the local
spatial information for images. Experiment results show that SDN considerably
exceeds the existing approach and multi-scale networks and achieves competitive
performance with some widely used classification techniques on the OT dataset
(developed by Oliva and Torralba). In order to evaluate the robustness of the
proposed SDN, we also apply it to the content-based image retrieval on the Holidays
dataset, where our features attain much better retrieval performance but have much
lower feature dimensions compared to other state-of-the-art feature descriptors.
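The spatially constrained patch extraction could look roughly like this; the grid layout, patch size, and function name are illustrative assumptions, not the dissertation's exact settings:

```python
import numpy as np

def spatial_patches(image, grid=(2, 2), patch=8, rng=None):
    """Sketch of spatially constrained patch extraction: instead of
    one patch drawn anywhere, draw one random patch from each cell
    of a spatial grid, so the patch set preserves the image's
    spatial layout."""
    rng = rng or np.random.default_rng(0)
    H, W = image.shape[:2]
    ch, cw = H // grid[0], W // grid[1]
    patches = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            # random top-left corner, kept inside the current cell
            y = gy * ch + rng.integers(0, ch - patch + 1)
            x = gx * cw + rng.integers(0, cw - patch + 1)
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)
```

Features learned per cell can then be concatenated, combining global layout with local descriptors as the abstract suggests.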
Spatiotemporal visual analysis of human actions
In this dissertation we propose four methods for the recognition of human activities. In all four of
them, the representation of the activities is based on spatiotemporal features that are automatically
detected at areas where there is a significant amount of independent motion, that is, motion that is
due to ongoing activities in the scene. We propose the use of spatiotemporal salient points as features
throughout this dissertation. The algorithms presented, however, can be used with any kind of features,
as long as the latter are well localized and have a well-defined area of support in space and time. We
introduce the utilized spatiotemporal salient points in the first method presented in this dissertation.
By extending previous work on spatial saliency, we measure the variations in the information content of
pixel neighborhoods both in space and time, and detect the points at the locations and scales for which
this information content is locally maximized. In this way, an activity is represented as a collection of
spatiotemporal salient points. We propose an iterative linear space-time warping technique in order
to align the representations in space and time and propose to use Relevance Vector Machines (RVM)
in order to classify each example into an action category. In the second method proposed in this
dissertation we propose to enhance the acquired representations of the first method. More specifically,
we propose to track each detected point in time, and create representations based on sets of trajectories,
where each trajectory expresses how the information engulfed by each salient point evolves over time.
In order to deal with imperfect localization of the detected points, we augment the observation model
of the tracker with background information, acquired using a fully automatic background estimation
algorithm. In this way, the tracker favors solutions that contain a large number of foreground pixels.
In addition, we perform experiments where the tracked templates are localized on specific parts of the
body, like the hands and the head, and we further augment the tracker’s observation model using a
human skin color model. Finally, we use a variant of the Longest Common Subsequence algorithm
(LCSS) in order to acquire a similarity measure between the resulting trajectory representations, and
RVMs for classification. In the third method that we propose, we assume that neighboring salient
points follow a similar motion. This is in contrast to the previous method, where each salient point was
tracked independently of its neighbors. More specifically, we propose to extract a novel set of visual
descriptors that are based on geometrical properties of three-dimensional piece-wise polynomials. The
latter are fitted on the spatiotemporal locations of salient points that fall within local spatiotemporal
neighborhoods, and are assumed to follow a similar motion. The extracted descriptors are invariant to
translation and scaling in space-time; this invariance is ensured by coupling the neighborhood dimensions
to the scale at which the corresponding spatiotemporal salient points are detected. The descriptors that are
extracted across the whole dataset are subsequently clustered in order to create a codebook, which is
used in order to represent the overall motion of the subjects within small temporal windows. Finally, we use boosting in order to select the most discriminative of these windows for each class, and RVMs for
classification. The fourth and last method addresses the joint problem of localization and recognition
of human activities depicted in unsegmented image sequences. Its main contribution is the use of an
implicit representation of the spatiotemporal shape of the activity, which relies on the spatiotemporal
localization of characteristic ensembles of spatiotemporal features. The latter are localized around
automatically detected salient points. Evidence for the spatiotemporal localization of the activity
is accumulated in a probabilistic spatiotemporal voting scheme. During training, we use boosting in
order to create codebooks of characteristic feature ensembles for each class. Subsequently, we construct
class-specific spatiotemporal models, which encode where in space and time each codeword ensemble
appears in the training set. During testing, each activated codeword ensemble casts probabilistic
votes concerning the spatiotemporal localization of the activity, according to the information stored
during training. We use a Mean Shift Mode estimation algorithm in order to extract the most probable
hypotheses from each resulting voting space. Each hypothesis corresponds to a spatiotemporal volume
which potentially engulfs the activity, and is verified by performing action category classification with
an RVM classifier.
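The LCSS-based trajectory similarity used in the second method can be sketched with the standard dynamic program; the matching threshold and the normalization by the shorter trajectory are common choices, not necessarily the exact variant used in the dissertation:

```python
def lcss(traj_a, traj_b, eps=0.5):
    """Longest Common Subsequence for real-valued trajectories:
    two samples match when they lie within eps of each other.
    Returns the LCSS length normalized by the shorter trajectory,
    giving a similarity in [0, 1]."""
    n, m = len(traj_a), len(traj_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(traj_a[i - 1] - traj_b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1   # samples match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / min(n, m)
```

Unlike Euclidean distance, LCSS tolerates gaps and outlier samples, which suits the imperfectly localized salient-point trajectories described above.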