74 research outputs found
Second-order Temporal Pooling for Action Recognition
Deep learning models for video-based action recognition usually generate
features for short clips (consisting of a few frames); such clip-level features
are aggregated to video-level representations by computing statistics on these
features. Typically, zeroth-order (max) or first-order (average) statistics are
used. In this paper, we explore the benefits of using second-order statistics.
Specifically, we propose a novel end-to-end learnable feature aggregation
scheme, dubbed temporal correlation pooling that generates an action descriptor
for a video sequence by capturing the similarities between the temporal
evolution of clip-level CNN features computed across the video. Such a
descriptor, while being computationally cheap, also naturally encodes the
co-activations of multiple CNN features, thereby providing a richer
characterization of actions than their first-order counterparts. We also
propose higher-order extensions of this scheme by computing correlations after
embedding the CNN features in a reproducing kernel Hilbert space. We provide
experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained
datasets such as MPII Cooking activities and JHMDB, as well as the recent
Kinetics-600. Our results demonstrate the advantages of higher-order pooling
schemes, which, when combined with hand-crafted features (as is standard
practice), achieve state-of-the-art accuracy. Comment: Accepted in the
International Journal of Computer Vision (IJCV).
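The core aggregation idea admits a compact sketch: stack the clip-level features over time and replace max/average pooling with their correlation matrix. The NumPy snippet below is an illustrative plain second-order pooling step, not the paper's end-to-end learnable layer; the function and variable names are our own.

```python
import numpy as np

def temporal_correlation_pool(clip_features):
    """Second-order pooling sketch: aggregate T clip-level feature
    vectors (a T x D array) into a single D x D correlation descriptor
    capturing co-activations of CNN features across the video."""
    X = np.asarray(clip_features, dtype=float)
    X = X - X.mean(axis=0, keepdims=True)        # center over time
    C = X.T @ X / max(X.shape[0] - 1, 1)          # D x D covariance
    d = np.sqrt(np.clip(np.diag(C), 1e-12, None))
    return C / np.outer(d, d)                     # normalize to correlations

# toy video: 8 clips, each with a 4-dimensional CNN feature
rng = np.random.default_rng(0)
desc = temporal_correlation_pool(rng.standard_normal((8, 4)))
```

Unlike a D-dimensional average, the descriptor is D x D and symmetric, which is what makes it a "richer characterization" in the sense of the abstract.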
Going Deeper into Action Recognition: A Survey
Understanding human actions in visual data is tied to advances in
complementary research areas including object recognition, human dynamics,
domain adaptation and semantic segmentation. Over the last decade, human action
analysis evolved from earlier schemes that are often limited to controlled
environments to nowadays advanced solutions that can learn from millions of
videos and apply to almost all daily activities. Given the broad range of
applications from video surveillance to human-computer interaction, scientific
milestones in action recognition are reached ever more rapidly, quickly
rendering once state-of-the-art methods obsolete. This motivated us to
provide a comprehensive review of the notable steps taken towards recognizing
human actions. To this end, we start our discussion with the pioneering methods
that use handcrafted representations, and then, navigate into the realm of deep
learning based approaches. We aim to remain objective throughout this survey,
touching upon encouraging improvements as well as inevitable setbacks, in the
hope of raising fresh questions and motivating new research directions for the
reader.
Mathematical Methods Applied to Digital Image Processing
Introduction: Digital image processing (DIP) is an important research area since it spans a variety of applications. Although over the past few decades there has been a rapid rise in this field, there still remain issues to address. Examples include image coding, image restoration, 3D image processing, feature extraction and analysis, moving object detection, and face recognition. To deal with these issues, the use of sophisticated and robust mathematical algorithms plays a crucial role. The aim of this special issue is to provide an opportunity for researchers to publish their latest theoretical and technological achievements in mathematical methods and their various applications related to DIP. This special issue covers topics related to the development of mathematical methods and their applications. It has a total of twenty-four high-quality papers covering various important topics in DIP, including image preprocessing, image encoding/decoding, stereo image reconstruction, dimensionality and data size reduction, and applications
Egocentric Auditory Attention Localization in Conversations
In a noisy conversation environment such as a dinner party, people often
exhibit selective auditory attention, or the ability to focus on a particular
speaker while tuning out others. Recognizing who somebody is listening to in a
conversation is essential for developing technologies that can understand
social behavior and devices that can augment human hearing by amplifying
particular sound sources. The computer vision and audio research communities
have made great strides towards recognizing sound sources and speakers in
scenes. In this work, we take a step further by focusing on the problem of
localizing auditory attention targets in egocentric video, i.e., detecting whom
in the camera wearer's field of view the wearer is listening to. To tackle the new and
challenging Selective Auditory Attention Localization problem, we propose an
end-to-end deep learning approach that uses egocentric video and multichannel
audio to predict the heatmap of the camera wearer's auditory attention. Our
approach leverages spatiotemporal audiovisual features and holistic reasoning
about the scene to make predictions, and outperforms a set of baselines on a
challenging multi-speaker conversation dataset. Project page:
https://fkryan.github.io/saa
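As a heavily simplified illustration of the prediction target (not the paper's architecture), one can score each spatial location of a visual feature map against an audio-derived embedding and normalize the scores into an attention heatmap over the field of view; all names and shapes below are hypothetical.

```python
import numpy as np

def attention_heatmap(visual_feats, audio_embed):
    """Toy audiovisual fusion sketch: score each spatial cell of a
    visual feature map (H x W x C) against an audio embedding (C,)
    and softmax-normalize the scores into an attention heatmap."""
    scores = np.tensordot(visual_feats, audio_embed, axes=([2], [0]))  # H x W
    e = np.exp(scores - scores.max())      # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(1)
heat = attention_heatmap(rng.standard_normal((6, 8, 16)),
                         rng.standard_normal(16))
```

The real model additionally exploits spatiotemporal context and multichannel (spatial) audio cues; this sketch only shows what "predicting a heatmap of auditory attention" means as an output format.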
Sparse variational regularization for visual motion estimation
The computation of visual motion is a key component in numerous computer vision tasks such as object detection, visual object tracking and activity recognition. Despite extensive research effort, efficient handling of motion discontinuities, occlusions and illumination changes still remains elusive in visual motion estimation. The work presented in this thesis utilizes variational methods to handle the aforementioned problems because these methods allow the integration of various mathematical concepts into a single energy minimization framework. This thesis applies concepts from signal sparsity to variational regularization for visual motion estimation. The regularization is designed in such a way that it handles motion discontinuities and can detect object occlusions.
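A one-dimensional analogue of such an energy, combining a robust L1 data term with a total-variation regularizer whose sparse gradients preserve motion discontinuities, can be written down directly. This is an illustrative sketch, not the thesis's exact formulation.

```python
import numpy as np

def tv_l1_energy(u, I0, I1, lam=0.1):
    """Evaluate a sketch variational energy for a 1-D displacement
    field u: L1 data fidelity between I0 and the warped I1, plus a
    total-variation (sparse-gradient) smoothness penalty."""
    x = np.arange(len(I0), dtype=float)
    warped = np.interp(x + u, x, I1)      # warp I1 by the flow u
    data = np.abs(warped - I0).sum()      # robust L1 data term
    smooth = np.abs(np.diff(u)).sum()     # TV penalty: piecewise-constant flow
    return data + lam * smooth

# toy example: a signal translated by 2 samples
I0 = np.sin(np.linspace(0.0, 3.0, 64))
I1 = np.roll(I0, 2)
e_true = tv_l1_energy(np.full(64, 2.0), I0, I1)   # correct constant flow
e_zero = tv_l1_energy(np.zeros(64), I0, I1)       # motion ignored
```

Minimizing such an energy over u (rather than merely evaluating it, as here) is the subject of the variational methods the thesis develops.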
The Role of Riemannian Manifolds in Computer Vision: From Coding to Deep Metric Learning
A diverse number of tasks in computer vision and machine learning
benefit from representations of data that are compact yet
discriminative, informative, and robust to critical measurement conditions.
Two notable representations are offered by Region Covariance
Descriptors (RCovD) and linear subspaces which are naturally
analyzed through the manifold of Symmetric Positive Definite
(SPD) matrices and the Grassmann manifold, respectively, two
widely used types of Riemannian manifolds in computer vision.
As our first objective, we examine image and video-based
recognition applications where the local descriptors have the
aforementioned Riemannian structures, namely the SPD or linear
subspace structure. Initially, we provide a solution to compute a
Riemannian version of the conventional Vector of Locally
Aggregated Descriptors (VLAD), using the geodesic distance of the
underlying manifold as the nearness measure. Next, by having a
closer look at the resulting codes, we formulate a new concept
which we name Local Difference Vectors (LDV). LDVs enable us to
elegantly expand our Riemannian coding techniques to any
arbitrary metric as well as provide intrinsic solutions to
Riemannian sparse coding and its variants when local structured
descriptors are considered.
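For concreteness, one widely used geodesic-style nearness measure on the SPD manifold is the log-Euclidean distance; the sketch below shows the kind of distance such a Riemannian coding scheme can plug in (the thesis's exact metric choices may differ).

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix,
    via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(S1, S2):
    """Log-Euclidean distance between two SPD matrices: Frobenius
    norm of the difference of their matrix logarithms."""
    return np.linalg.norm(logm_spd(S1) - logm_spd(S2), 'fro')

A = np.diag([1.0, 4.0])
B = np.eye(2)                   # identity: its log is the zero matrix
d = log_euclidean_dist(A, B)    # = |log 4| = 2 log 2
```

Because the log map flattens the SPD manifold around the identity, Euclidean machinery (means, VLAD residuals) can be applied to the logarithms while respecting the manifold geometry.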
We then turn our attention to two special types of covariance
descriptors, namely infinite-dimensional RCovDs and rank-deficient
covariance matrices, for which the underlying Riemannian
structure, i.e. the manifold of SPD matrices, is largely out of
reach.
To overcome this difficulty, we propose to approximate the
infinite-dimensional RCovDs by making use of two feature
mappings, namely random Fourier features and the Nyström method.
As for rank-deficient covariance matrices, unlike most existing
approaches, which rely on predefined regularizers, we derive
positive definite kernels that can be decomposed into kernels on
the cone of SPD matrices and kernels on Grassmann manifolds, and
show their effectiveness for the image-set classification task.
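The first of the two feature mappings mentioned above can be illustrated compactly: Rahimi-Recht random Fourier features give an explicit finite-dimensional map whose inner products approximate the RBF kernel. This is a generic sketch of the technique, not the thesis's specific feature design.

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Approximate the RBF kernel k(x, y) = exp(-gamma ||x - y||^2)
    with an explicit map z(x) such that z(x) . z(y) ~ k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # frequencies sampled from the kernel's spectral density
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).standard_normal((5, 3))
Z = random_fourier_features(X, n_features=2000)
K_approx = Z @ Z.T
K_exact = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
```

The approximation error shrinks as O(1/sqrt(n_features)), which is what makes otherwise infinite-dimensional RCovDs computationally tractable.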
Furthermore, inspired by attractive properties of Riemannian
optimization techniques, we extend the recently introduced Keep
It Simple and Straightforward MEtric learning (KISSME) method to
the scenarios where input data is non-linearly distributed. To
this end, we make use of the infinite dimensional covariance
matrices and propose techniques for projecting onto the positive
cone in a Reproducing Kernel Hilbert Space (RKHS).
We also address the sensitivity of KISSME to the input
dimensionality. The KISSME algorithm depends heavily on
Principal Component Analysis (PCA) as a preprocessing step, which
can lead to difficulties, especially when the target
dimensionality is not meticulously set.
To address this issue, based on the KISSME algorithm, we develop
a Riemannian framework to jointly learn a mapping performing
dimensionality reduction and a metric in the induced space.
Lastly, in line with the recent trend in metric learning, we
devise end-to-end learning of a generic deep network for metric
learning using our derivation.
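For reference, the baseline KISSME estimator that these extensions build on admits a short sketch: a Mahalanobis matrix formed from the inverse covariances of similar and dissimilar pairwise difference vectors, projected onto the PSD cone. The thesis generalizes this to an RKHS and learns the projection jointly; the code below shows only the classic linear case.

```python
import numpy as np

def kissme_metric(diffs_sim, diffs_dis, eps=1e-6):
    """KISSME sketch: M = inv(Cov_similar) - inv(Cov_dissimilar) over
    pairwise difference vectors, followed by projection onto the cone
    of positive semidefinite matrices (clipping negative eigenvalues)."""
    def cov(D):
        return D.T @ D / len(D) + eps * np.eye(D.shape[1])
    M = np.linalg.inv(cov(diffs_sim)) - np.linalg.inv(cov(diffs_dis))
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None)) @ V.T   # PSD projection

rng = np.random.default_rng(2)
M = kissme_metric(0.1 * rng.standard_normal((200, 4)),   # tight similar pairs
                  rng.standard_normal((200, 4)))          # spread dissimilar pairs
```

The PSD projection is exactly the step the thesis lifts into an RKHS when the input data is non-linearly distributed.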
Reconstruction, Analysis and Editing of Dynamically Deforming 3D Surfaces
Dynamically deforming 3D surfaces play a major role in computer graphics. However, producing time-varying dynamic geometry at ever increasing detail is a time-consuming and costly process, and so a recent trend is to capture geometry data directly from the real world. In the first part of this thesis, I propose novel approaches for this research area. These approaches capture dense dynamic 3D surfaces from multi-camera systems in a particularly robust and accurate way. This provides highly realistic dynamic surface models for phenomena like moving garments and bulging muscles.
However, re-using, editing, or otherwise analyzing dynamic 3D surface data is not yet conveniently possible. To close this gap, the second part of this dissertation develops novel data-driven modeling and animation approaches. I first show a supervised data-driven approach for modeling human muscle deformations that scales to huge datasets and provides fine-scale, anatomically realistic deformations at high quality not attainable by previous methods. I then extend data-driven modeling to the unsupervised case, providing editing tools for a wider set of input data ranging from facial performance capture and full-body motion to muscle and cloth deformation. To this end, I introduce the concepts of sparsity and locality within a mathematical optimization framework. I also explore these concepts for constructing shape-aware functions that are useful for static geometry processing, registration, and localized editing.
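How sparsity and locality can enter such an optimization framework may be illustrated by a single proximal step that soft-thresholds per-vertex displacements, with a penalty that grows with distance from a component's center so that far-away vertices are zeroed out. This is an invented toy analogue, not the dissertation's actual formulation; all names are hypothetical.

```python
import numpy as np

def localized_prox(U, dists, lam=0.5):
    """Group soft-thresholding of per-vertex displacement vectors
    (N x 3) with a locality-weighted threshold: sparsity zeroes out
    small displacements, locality zeroes out distant ones."""
    mags = np.linalg.norm(U, axis=1, keepdims=True)        # per-vertex magnitude
    thresh = lam * (1.0 + dists)[:, None]                  # grows with distance
    scale = np.clip(1.0 - thresh / np.maximum(mags, 1e-12), 0.0, None)
    return U * scale                                        # shrink or zero out

rng = np.random.default_rng(3)
U = rng.standard_normal((100, 3))          # displacements of 100 vertices
dists = np.linspace(0.0, 5.0, 100)         # distance from a component center
C = localized_prox(U, dists)
```

The result is a deformation component that is both sparse (many exactly-zero vertices) and spatially localized, the two properties the dissertation builds its editing tools on.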
Super-resolution microscopy live cell imaging and image analysis
Novel fundamental research results provided new techniques going beyond the diffraction limit. These recent advances, known as super-resolution microscopy, have been awarded the Nobel Prize as they promise new discoveries in biology and life sciences. All these techniques rely on complex signal and image processing. The applicability in biology, and particularly for live cell imaging, remains challenging and needs further investigation. Focusing on image processing and analysis, the thesis is devoted to a significant enhancement of structured illumination microscopy (SIM) and super-resolution optical fluctuation imaging (SOFI) methods towards fast live cell and quantitative imaging. The thesis presents a novel image reconstruction method for both 2D and 3D SIM data, compatible with weak signals, and robust towards unwanted image artifacts. This image reconstruction is efficient under low light conditions, reduces phototoxicity and facilitates live cell observations. We demonstrate the performance of our new method by imaging long super-resolution video sequences of live U2-OS cells and improving cell particle tracking. We develop an adapted 3D deconvolution algorithm for SOFI, which suppresses noise and makes 3D SOFI live cell imaging feasible by reducing the number of required input images. We introduce a novel linearization procedure for SOFI maximizing the resolution gain and show that SOFI and PALM can both be applied to the same dataset, revealing more insights about the sample. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of the sample through the estimation of molecular parameters. For quantifying the outcome of our super-resolution methods, the thesis presents a novel methodology for objective image quality assessment measuring spatial resolution and signal-to-noise ratio in real samples.
We demonstrate our enhanced SOFI framework by high-throughput 3D imaging of live HeLa cells, acquiring a whole super-resolution 3D image in 0.95 s, by investigating focal adhesions in live MEF cells, by fast optical readout of fluorescently labelled DNA strands, and by unraveling the nanoscale organization of CD4 proteins on the plasma membrane of T-cells. Within the thesis, the unique open-source software packages SIMToolbox and SOFI simulation tool were developed to facilitate the implementation of super-resolution microscopy methods.
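The fluctuation principle behind SOFI can be illustrated in a few lines: the second-order cumulant is simply the pixel-wise temporal variance of a sequence of blinking emitters, which squares the PSF and thereby narrows it (by a factor of sqrt(2) for a Gaussian PSF). This is a textbook-level sketch, not the specific algorithms developed in the thesis.

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI sketch: the pixel-wise second cumulant
    (temporal variance) of an image sequence. Independent blinking
    makes the cumulant image proportional to the squared PSF."""
    return np.var(np.asarray(stack, dtype=float), axis=0)

# toy stack: one blinking emitter imaged with a 1-D Gaussian PSF
x = np.arange(41)
psf = np.exp(-((x - 20) ** 2) / (2 * 4.0 ** 2))
rng = np.random.default_rng(4)
on = rng.random(500) < 0.5                 # random on/off blinking trace
frames = on[:, None] * psf[None, :]        # 500 frames of the emitter
img2 = sofi2(frames)                       # variance image ~ psf**2
```

Since the variance image is proportional to psf squared, its profile falls off twice as fast (on a log scale) as the raw PSF, which is the resolution gain SOFI exploits.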