DocMIR: An automatic document-based indexing system for meeting retrieval
This paper describes the DocMIR system, which automatically captures, analyzes and indexes meetings, conferences, lectures, etc. by taking advantage of the documents projected during the events (e.g. slideshows, budget tables, figures). For instance, the system can apply these procedures to a lecture and automatically index the event according to the presented slides and their contents. For indexing, the system requires neither specific software installed on the presenter's computer nor any conscious intervention by the speaker during the presentation. The only material required by the system is the speaker's electronic presentation file; even if it is not provided, the system still temporally segments the presentation and offers a simple storyboard-like browsing interface. The system runs on several capture boxes connected to cameras and microphones that record events synchronously. Once the recording is over, indexing is performed automatically by analyzing the content of the captured video of the projected documents: the system detects scene changes, identifies the documents, computes their durations and extracts their textual content. Each captured image is identified against a repository containing all original electronic documents, the captured audio-visual data and the metadata created during post-production. Identification is based on document signatures, which hierarchically combine features of both the layout structure and the color distribution of the document images. Video segments are finally enriched with the textual content of the identified original documents, which facilitates query and retrieval without using OCR. The signature-based indexing method proposed in this article is robust, works with low-resolution images and can be applied to several other applications, including real-time document recognition, multimedia information retrieval and augmented reality systems.
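To make the signature-based identification step concrete, below is a minimal Python sketch of the general idea, assuming a signature that concatenates layout features (row/column ink-density profiles) with coarse colour histograms and a nearest-signature match against the repository. The feature choices and function names are illustrative assumptions, not the published DocMIR design.

```python
# Illustrative sketch only: layout profiles + colour histograms as a document
# signature, matched by nearest neighbour against a repository of signatures.
import numpy as np

def document_signature(image, bins=8):
    """Build a signature from layout structure and colour distribution.

    `image` is an H x W x 3 uint8 array of a (possibly low-resolution) document image.
    """
    gray = image.mean(axis=2)
    # Layout part: row/column ink-density profiles, resampled to a fixed length.
    row_profile = 1.0 - gray.mean(axis=1) / 255.0
    col_profile = 1.0 - gray.mean(axis=0) / 255.0
    layout = np.concatenate([
        np.interp(np.linspace(0, 1, 32), np.linspace(0, 1, len(row_profile)), row_profile),
        np.interp(np.linspace(0, 1, 32), np.linspace(0, 1, len(col_profile)), col_profile),
    ])
    # Colour part: coarse per-channel histograms.
    color = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(3)
    ])
    return np.concatenate([layout, color])

def identify(frame, repository):
    """Return the repository key whose stored signature is closest to the captured frame."""
    sig = document_signature(frame)
    return min(repository, key=lambda key: np.linalg.norm(sig - repository[key]))
```

Because neither component of such a signature depends on reading the text itself, matching of this kind remains feasible on low-resolution captures, consistent with the OCR-free retrieval described above.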
An Attention-driven Hierarchical Multi-scale Representation for Visual Recognition
Convolutional Neural Networks (CNNs) have revolutionized the understanding of visual content. This is mainly due to their ability to break an image down into smaller pieces, extract multi-scale localized features and compose them to construct highly expressive representations for decision making. However, the convolution operation is unable to capture long-range dependencies, such as arbitrary relations between pixels, since it operates on a fixed-size window. It may therefore not be suitable for discriminating subtle changes (e.g. fine-grained visual recognition). To this end, our proposed method captures high-level long-range dependencies by exploring Graph Convolutional Networks (GCNs), which aggregate information by establishing relationships among multi-scale hierarchical regions. These regions range from smaller (a closer look) to larger (a more distant look), and the dependencies between regions are modeled by an innovative attention-driven message propagation, guided by the graph structure to emphasize the neighborhoods of a given region. Our approach is simple yet extremely effective in solving both fine-grained and generic visual classification problems. It outperforms the state of the art by a significant margin on three datasets and is very competitive on the other two.
Comment: Accepted at the 32nd British Machine Vision Conference (BMVC) 2021
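As a rough illustration of attention-driven message propagation over a graph of multi-scale region features, the PyTorch sketch below shows one plausible layer with additive attention scores masked by the region graph. The dimensions, scoring function and usage example are assumptions for illustration, not the exact BMVC model.

```python
# Illustrative sketch: attention-weighted message passing among region features,
# restricted to graph neighbours defined by an adjacency mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionGraphAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, regions, adjacency):
        # regions:   (N, dim) features of hierarchical multi-scale regions
        # adjacency: (N, N) binary mask; 1 where two regions are neighbours
        h = self.proj(regions)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        scores = self.attn(pairs).squeeze(-1)                  # (N, N) pairwise scores
        scores = scores.masked_fill(adjacency == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                  # attention over neighbours
        return F.relu(alpha @ h)                               # aggregated messages

# Usage: 7 regions with 64-d features, connected in a simple chain with self-loops.
feats = torch.randn(7, 64)
adj = torch.eye(7) + torch.diag(torch.ones(6), 1) + torch.diag(torch.ones(6), -1)
out = RegionGraphAttention(64)(feats, adj)
```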
COGNITO: Activity monitoring and recovery
In order to test our hierarchical framework, we have obtained two datasets using an egocentric setup. These datasets consist of non-periodic manipulative tasks in an industrial context. All the sequences were captured with on-body sensors consisting of IMUs, a backpack-mounted RGB-D camera for a top view and a chest-mounted fisheye camera for a front view of the workbench. The first dataset covers a scenario of hammering nails and driving screws: subjects are asked to hammer 3 nails and drive 3 screws using the prescribed tools. The second dataset covers a bottle labelling and packaging scenario: participants are asked to attach labels to two bottles and then package them in the correct positions within a box, which requires opening the box, placing the bottles, closing the box and then marking the box as completed using a marker pen.
Coarse Temporal Attention Network (CTA-Net) for Driver’s Activity Recognition
There has been significant progress in recognizing traditional human activities from videos, focusing on highly distinctive actions involving discriminative body movements, body-object and/or human-human interactions. Drivers' activities are different, since they are executed by the same subject with similar body-part movements, resulting in only subtle changes. To address this, we propose a novel framework that exploits spatiotemporal attention to model these subtle changes. Our model is named Coarse Temporal Attention Network (CTA-Net), in which coarse temporal branches are introduced in a trainable glimpse network. The goal is to allow the glimpses to capture high-level temporal relationships, such as 'during', 'before' and 'after', by focusing on a specific part of a video. These branches also respect the topology of the temporal dynamics in the video, ensuring that different branches learn meaningful spatial and temporal changes. The model then uses an innovative attention mechanism to generate high-level, action-specific contextual information for activity recognition by exploring the hidden states of an LSTM. The attention mechanism learns to decide the importance of each hidden state for the recognition task by weighing the states when constructing the representation of the video. Our approach is evaluated on four publicly accessible datasets and outperforms the state of the art by a considerable margin with only RGB video as input.
Comment: Extended version of the paper accepted at WACV 2021
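The attention over LSTM hidden states described above can be illustrated with the minimal PyTorch sketch below, which scores each hidden state, softmax-normalizes the scores over time and uses the weighted sum as the video representation. The layer sizes, class count and usage example are placeholder assumptions, not the published CTA-Net configuration.

```python
# Illustrative sketch: attention-weighted pooling of LSTM hidden states into a
# clip-level representation for activity classification.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    def __init__(self, feat_dim, hidden_dim, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)      # importance of each hidden state
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, feat_dim) per-frame RGB features
        states, _ = self.lstm(frame_feats)         # (batch, time, hidden_dim)
        alpha = torch.softmax(self.score(states), dim=1)
        video_repr = (alpha * states).sum(dim=1)   # attention-weighted pooling over time
        return self.classifier(video_repr)

# Usage: 2 clips, 16 frames each, 512-d frame features, 10 driver activities.
logits = TemporalAttentionPool(512, 256, 10)(torch.randn(2, 16, 512))
```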