DocMIR: An automatic document-based indexing system for meeting retrieval
This paper describes the DocMIR system, which automatically captures, analyzes and indexes meetings, conferences, lectures, etc. by taking advantage of the documents projected during the events (e.g. slideshows, budget tables, figures). For instance, the system can automatically apply these procedures to a lecture and index the event according to the presented slides and their contents. For indexing, the system requires neither specific software installed on the presenter's computer nor any conscious intervention of the speaker throughout the presentation. The only material required is the speaker's electronic presentation file; even if it is not provided, the system still temporally segments the presentation and offers a simple storyboard-like browsing interface. The system runs on several capture boxes connected to cameras and microphones that record events synchronously. Once the recording is over, indexing is performed automatically by analyzing the content of the captured video containing the projected documents: detecting scene changes, identifying the documents, computing their duration and extracting their textual content. Each captured image is identified against a repository containing all original electronic documents, captured audio-visual data and metadata created during post-production. The identification is based on document signatures, which hierarchically structure features from both the layout structure and the color distributions of the document images. Video segments are finally enriched with the textual content of the identified original documents, which further facilitates query and retrieval without using OCR. The signature-based indexing method proposed in this article is robust, works with low-resolution images and can be applied to several other applications including real-time document recognition, multimedia IR and augmented reality systems
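To make the signature-based matching concrete, the sketch below is a minimal illustration, not the DocMIR implementation: the grid size, histogram bins, distance weighting and all function names are assumptions. It pairs a coarse layout descriptor with a color histogram and identifies a captured frame by finding the nearest signature in a repository of original slide images.

```python
# Illustrative sketch only (assumed names and parameters), in the spirit of the
# layout-plus-color signatures described above.
import numpy as np

def signature(img, grid=(4, 4), bins=8):
    """img: HxWx3 uint8 image. Returns (layout, color) signature parts."""
    h, w, _ = img.shape
    gh, gw = grid
    gray = img.mean(axis=2)
    # Layout part: mean brightness of each grid cell, row by row.
    layout = np.array([
        gray[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw].mean()
        for i in range(gh) for j in range(gw)
    ]) / 255.0
    # Color part: normalized per-channel intensity histograms.
    color = np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    color /= color.sum()
    return layout, color

def identify(frame, repository, w_layout=0.5):
    """Return (index, distance) of the repository slide closest to the frame."""
    f_layout, f_color = signature(frame)
    best, best_d = None, np.inf
    for idx, slide in enumerate(repository):
        s_layout, s_color = signature(slide)
        d = (w_layout * np.abs(f_layout - s_layout).mean()
             + (1 - w_layout) * np.abs(f_color - s_color).sum())
        if d < best_d:
            best, best_d = idx, d
    return best, best_d
```

Because the descriptor works on downsampled grid cells and global histograms rather than text, it tolerates the low-resolution captured frames mentioned above.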
An Attention-driven Hierarchical Multi-scale Representation for Visual Recognition
Convolutional Neural Networks (CNNs) have revolutionized the understanding of visual content. This is mainly due to their ability to break down an image into smaller pieces, extract multi-scale localized features and compose them to construct highly expressive representations for decision making. However, the convolution operation is unable to capture long-range dependencies, such as arbitrary relations between pixels, since it operates on a fixed-size window. Therefore, it may not be suitable for discriminating subtle changes (e.g. fine-grained visual recognition). To this end, our proposed method captures high-level long-range dependencies by exploring Graph Convolutional Networks (GCNs), which aggregate information by establishing relationships among multi-scale hierarchical regions. These regions range from smaller (closer look) to larger (far look), and the dependency between regions is modeled by an innovative attention-driven message propagation, guided by the graph structure to emphasize the neighborhoods of a given region. Our approach is simple yet extremely effective in solving both fine-grained and generic visual classification problems. It outperforms the state of the art by a significant margin on three datasets and is very competitive on the other two.
Comment: Accepted at the 32nd British Machine Vision Conference (BMVC) 2021
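To illustrate the idea of attention-driven message propagation between hierarchical regions, here is a minimal single-head sketch. It is an assumption-laden toy layer, not the paper's architecture: each multi-scale region is a graph node, and attention weights computed along the given adjacency decide how strongly a region aggregates its neighbours' features.

```python
# Toy attention-based message passing over a region graph (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAttentionLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        # Scores a concatenated (target, source) pair of projected features.
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        """x: (N, in_dim) region features; adj: (N, N) adjacency,
        nonzero where region j is a neighbour of region i in the hierarchy."""
        h = self.proj(x)                                   # (N, out_dim)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1)   # (N, N) logits
        # Mask non-edges so messages only flow along the graph structure.
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = F.softmax(e, dim=-1)                       # attention per neighbour
        alpha = torch.nan_to_num(alpha)                    # isolated nodes -> 0
        return F.relu(alpha @ h)                           # aggregated messages

# Toy usage: 7 regions (the whole image, 2 mid-scale crops, 4 fine crops).
feats = torch.randn(7, 256)
adj = torch.eye(7)
adj[0, 1:3] = adj[1:3, 0] = 1.0   # whole image <-> mid-scale regions
adj[1, 3:5] = adj[3:5, 1] = 1.0   # mid-scale region 1 <-> its fine crops
adj[2, 5:7] = adj[5:7, 2] = 1.0   # mid-scale region 2 <-> its fine crops
out = RegionAttentionLayer(256, 128)(feats, adj)   # (7, 128) refined features
```

The masking step is what the abstract calls being "guided by the graph structure": a region only attends to its hierarchical neighbours rather than to every other region.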
COGNITO: Activity monitoring and recovery
In order to test our hierarchical framework, we have obtained two datasets using an egocentric setup. These datasets consist of non-periodic manipulative tasks in an industrial context. All the sequences were captured with on-body sensors consisting of IMUs, a backpack-mounted RGB-D camera for a top view and a chest-mounted fisheye camera for a front view of the workbench. The first dataset is a hammering-nails-and-driving-screws scenario, in which subjects are asked to hammer 3 nails and drive 3 screws using prescribed tools. The second dataset is a labelling-and-packaging-bottles scenario, in which participants are asked to attach labels to two bottles and then package them in the correct positions within a box. This requires opening the box, placing the bottles, closing the box, and then marking the box as completed using a marker pen
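Purely for illustration, a synchronized sample from such a multi-sensor setup could be represented as below; the field names, shapes and alignment strategy are assumptions, not the actual recording format used for these datasets.

```python
# Hypothetical container for one synchronized multi-sensor sample.
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureFrame:
    timestamp: float        # seconds since recording start
    imu: np.ndarray         # (num_imus, 6) accelerometer + gyroscope readings
    rgbd_top: tuple         # ((H, W, 3) color, (H, W) depth) from the backpack camera
    fisheye_front: np.ndarray   # (H, W, 3) chest-mounted fisheye image

def nearest_frame(frames, t):
    """Pick the frame whose timestamp is closest to time t, a simple way to
    align task annotations with the synchronized sensor streams."""
    return min(frames, key=lambda f: abs(f.timestamp - t))
```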
A review of computer vision-based approaches for physical rehabilitation and assessment
The computer vision community has extensively researched the area of human motion analysis, which primarily focuses on pose estimation, activity recognition, pose or gesture recognition and so on. However, for many applications, such as monitoring the functional rehabilitation of patients with musculoskeletal or physical impairments, the requirement is to comparatively evaluate human motion. In this survey, we capture important literature on vision-based monitoring and physical rehabilitation that has focused on comparative evaluation of human motion over the past two decades and discuss the state of current research in this area. Unlike other reviews in this area, which are written from a clinical perspective, this article presents research from a computer vision application perspective. We propose our own taxonomy of computer vision-based rehabilitation and assessment research, which is further divided into sub-categories to capture the novelties of each work. The review discusses the challenges of this domain due to the wide range of human motion abnormalities and the difficulty of automatically assessing those abnormalities. Finally, suggestions on the future direction of research are offered
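As a toy illustration of what "comparative evaluation of human motion" can mean in practice (the joint-angle representation, the use of dynamic time warping and the normalization below are assumptions, not any specific surveyed method), one can score how far a patient's trajectory deviates from a reference performance while allowing for differences in movement speed.

```python
# Illustrative comparison of a patient trajectory against a reference exercise.
import numpy as np

def dtw_distance(ref, test):
    """ref, test: (T, J) arrays of J joint angles over time.
    Returns the mean per-frame deviation along the best temporal alignment."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)   # normalize by an upper bound on path length

# Example: a slower, noisier repetition still scores close to the reference.
reference = np.random.rand(100, 8)                                   # stand-in reference exercise
patient = np.repeat(reference, 2, axis=0) + 0.05 * np.random.randn(200, 8)
print(dtw_distance(reference, patient))   # lower score = closer to reference
```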