Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive problem-oriented review
of the advances in transfer learning has not only revealed the challenges in
transfer learning for visual recognition, but also identified the problems
(eight of the seventeen) that have been scarcely studied. This survey thus
presents not only an up-to-date technical review for researchers, but also a
systematic approach and a reference for machine learning practitioners to
categorise a real problem and look up a possible solution accordingly.
Crowdsourcing in Computer Vision
Computer vision systems require large amounts of manually annotated data to
properly learn challenging visual concepts. Crowdsourcing platforms offer an
inexpensive method to capture human knowledge and understanding, for a vast
number of visual perception tasks. In this survey, we describe the types of
annotations computer vision researchers have collected using crowdsourcing, and
how they have ensured that this data is of high quality while annotation effort
is minimized. We begin by discussing data collection on both classic (e.g.,
object recognition) and recent (e.g., visual story-telling) vision tasks. We
then summarize key design decisions for creating effective data collection
interfaces and workflows, and present strategies for intelligently selecting
the most important data instances to annotate. Finally, we conclude with some
thoughts on the future of crowdsourcing in computer vision.
Comment: A 69-page meta review of the field, Foundations and Trends in Computer Graphics and Vision, 201
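The survey's strategy of "intelligently selecting the most important data instances to annotate" is commonly realised as uncertainty-based active learning. As an illustrative sketch (not from the survey itself; the function name and data are ours), one can rank unlabeled samples by the entropy of a model's predicted class probabilities and send only the most uncertain ones to crowd workers:

```python
import numpy as np

def select_for_annotation(probs, k):
    """Pick the k most uncertain samples (highest predictive entropy)
    for crowd annotation; probs has shape (n_samples, n_classes)."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:k]

# Toy predictions for three samples over two classes
probs = np.array([[0.5, 0.5],   # maximally uncertain
                  [0.9, 0.1],   # confident
                  [0.6, 0.4]])
print(select_for_annotation(probs, 2))  # indices of the two most uncertain rows
```

This trades a small amount of model inference for a large reduction in annotation effort, which is the cost/quality balance the survey discusses.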
Efficient Diverse Ensemble for Discriminative Co-Tracking
Ensemble discriminative tracking utilizes a committee of classifiers to
label data samples, which are in turn used to retrain the tracker to
localize the target using the collective knowledge of the committee. Committee
members may vary in their features, memory update schemes, or training data;
however, it is inevitable that some members agree excessively because of large
overlaps in their version spaces. To remove this redundancy and achieve
effective ensemble learning, it is critical for the committee to include
consistent hypotheses that differ from one another, covering the version space
with minimal overlap. In this study, we propose an online ensemble tracker that
directly generates a diverse committee by generating an efficient set of
artificial training data. The artificial data is sampled from the empirical
distribution of samples taken from both the target and the background, and the
process is governed by query-by-committee to shrink the overlap between
classifiers. The experimental results demonstrate that the proposed scheme
outperforms conventional ensemble trackers on public benchmarks.
Comment: CVPR 2018 Submission
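The query-by-committee principle the abstract invokes can be sketched in a few lines. This is an illustrative toy (vote entropy over a binary-label committee, with names of our choosing), not the authors' tracker: samples on which the committee disagrees most are the ones worth using to push the members' version spaces apart.

```python
import numpy as np

def vote_entropy(votes):
    """Committee disagreement: entropy of the label votes per sample.
    votes has shape (n_members, n_samples) with 0/1 labels."""
    p = votes.mean(axis=0)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def qbc_select(votes, k):
    """Pick the k samples the committee disagrees on most; retraining
    members on such samples reduces the overlap of their hypotheses."""
    return np.argsort(vote_entropy(votes))[::-1][:k]

# 3 committee members vote on 4 candidate samples
votes = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 1, 1, 0]])
print(qbc_select(votes, 2))  # samples with split votes rank first
```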
Open-Fusion: Real-time Open-Vocabulary 3D Mapping and Queryable Scene Representation
Precise 3D environmental mapping is pivotal in robotics. Existing methods
often rely on predefined concepts during training or are time-intensive when
generating semantic maps. This paper presents Open-Fusion, a groundbreaking
approach for real-time open-vocabulary 3D mapping and queryable scene
representation using RGB-D data. Open-Fusion harnesses the power of a
pre-trained vision-language foundation model (VLFM) for open-set semantic
comprehension and employs the Truncated Signed Distance Function (TSDF) for
swift 3D scene reconstruction. By leveraging the VLFM, we extract region-based
embeddings and their associated confidence maps. These are then integrated with
3D knowledge from TSDF using an enhanced Hungarian-based feature-matching
mechanism. Notably, Open-Fusion delivers outstanding annotation-free 3D
segmentation for open-vocabulary queries without necessitating additional 3D training.
Benchmark tests on the ScanNet dataset against leading zero-shot methods
highlight Open-Fusion's superiority. Furthermore, it seamlessly combines the
strengths of region-based VLFM and TSDF, facilitating real-time 3D scene
comprehension that includes object concepts and open-world semantics. We
encourage readers to view the demos on our project page:
https://uark-aicv.github.io/OpenFusio
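The abstract's "Hungarian-based feature-matching mechanism" refers to solving an optimal one-to-one assignment between 2D region embeddings and 3D map segments. As a plain sketch of that idea (not Open-Fusion's enhanced variant; function name and data are ours), one can negate cosine similarity and hand it to a standard Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_regions(region_emb, segment_emb):
    """One-to-one matching of 2D region embeddings to 3D map segments
    by maximising cosine similarity via the Hungarian algorithm.
    region_emb: (n_regions, d); segment_emb: (n_segments, d)."""
    a = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    b = segment_emb / np.linalg.norm(segment_emb, axis=1, keepdims=True)
    cost = -a @ b.T  # negate similarity -> minimisation problem
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

r = np.array([[1.0, 0.0], [0.0, 1.0]])
s = np.array([[0.0, 1.0], [1.0, 0.0]])
# pairs region 0 with segment 1 and region 1 with segment 0
print(match_regions(r, s))
```

In the real system the cost would also fold in the per-region confidence maps the abstract mentions.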
An Unsupervised Framework for Online Spatiotemporal Detection of Activities of Daily Living by Hierarchical Activity Models
Automatic detection and analysis of human activities captured by various sensors (e.g. sequences of images captured by an RGB camera) play an essential role in various research fields in order to understand the semantic content of a captured scene. The main focus of earlier studies has been widely on the supervised classification problem, where a label is assigned to a given short clip. Nevertheless, in real-world scenarios, such as in Activities of Daily Living (ADL), the challenge is to automatically browse long-term (days and weeks) streams of video to identify segments whose semantics correspond to the model activities, along with their temporal boundaries. This paper proposes an unsupervised solution to address this problem by generating hierarchical models that combine global trajectory information with local dynamics of the human body. Global information helps in modeling the spatiotemporal evolution of long-term activities and hence their spatial and temporal localization. Moreover, the local dynamic information incorporates complex local motion patterns of daily activities into the models. Our proposed method is evaluated using realistic datasets captured from observation rooms in hospitals and nursing homes. The experimental data on a variety of monitoring scenarios in hospital settings reveals how this framework can be exploited to provide timely diagnosis and medical interventions for cognitive disorders such as Alzheimer's disease. The obtained results show that our framework is a promising attempt capable of generating activity models without any supervision.
Adaptive Multidimensional Fuzzy Sets for Texture Modeling
The modeling of the perceptual properties of texture plays a fundamental role in tasks where some interaction with subjects is needed. In order to handle the imprecision inherent in these properties, fuzzy sets defined on the domain of computational measures of the corresponding property are usually employed. In this sense, the most interesting approaches show that combining different measures as reference sets improves the texture characterization. However, the main drawback of these proposals is that they do not take into account the subjectivity associated with human perception. For example, the perception of a texture property may change depending on the user, and in addition, the image context may influence the global perception of a given property. In this paper, we propose to solve these problems by combining the use of several computational measures in a reference set with adaptation to the subjectivity of human perception. To do this, we propose a generic methodology that automatically transforms any multidimensional fuzzy set modeling a texture property to the particular perception of a new user or to the image context. For this purpose, the information given by the user, or extracted from the textures present in the image, is employed.
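The core ingredients, a fuzzy membership over a computational measure and an adaptation step that reshapes it to a user's perception, can be sketched in simplified one-dimensional form. This is only an illustrative toy under our own assumptions (a piecewise-linear membership and a support-shifting update), not the paper's methodology:

```python
def membership(x, a, b):
    """Piecewise-linear fuzzy membership for a texture property:
    0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def adapt(a, b, user_scores, measures):
    """Hypothetical adaptation step: shift the support [a, b] so the
    set's mean membership over the user's example textures matches
    the user's mean perceived score for the property."""
    target = sum(user_scores) / len(user_scores)
    current = sum(membership(m, a, b) for m in measures) / len(measures)
    shift = (current - target) * (b - a)  # move support toward agreement
    return a + shift, b + shift

# A user rates a texture with measure 0.5 as fully coarse (score 1.0),
# so the support slides left and that texture now has membership 1.
a2, b2 = adapt(0.0, 1.0, [1.0], [0.5])
print(membership(0.5, a2, b2))
```

The paper's actual transformation operates on multidimensional reference sets; the point here is only the direction of the adaptation, fitting the fuzzy set to the observer rather than the reverse.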