Multiple Instance Learning: A Survey of Problem Characteristics and Applications
Multiple instance learning (MIL) is a form of weakly supervised learning
where training instances are arranged in sets, called bags, and a label is
provided for the entire bag. This formulation is gaining interest because it
naturally fits various problems and makes it possible to leverage weakly labeled data.
Consequently, it has been used in diverse application fields such as computer
vision and document classification. However, learning from bags raises
important challenges that are unique to MIL. This paper provides a
comprehensive survey of the characteristics which define and differentiate the
types of MIL problems. Until now, these problem characteristics have not been
formally identified and described. As a result, the variations in performance
of MIL algorithms from one data set to another are difficult to explain. In
this paper, MIL problem characteristics are grouped into four broad categories:
the composition of the bags, the types of data distribution, the ambiguity of
instance labels, and the task to be performed. Methods specialized to address
each category are reviewed. Then, the extent to which these characteristics
manifest themselves in key MIL application areas are described. Finally,
experiments are conducted to compare the performance of 16 state-of-the-art MIL
methods on selected problem characteristics. This paper provides insight on how
the problem characteristics affect MIL algorithms, recommendations for future
benchmarking, and promising avenues for research.
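The bag formulation described above can be made concrete with a minimal sketch of the standard MIL assumption (a bag is positive if and only if at least one of its instances is positive). The function names and the max-pooling aggregation below are illustrative choices, not taken from the survey:

```python
# Minimal sketch of the standard multiple instance learning (MIL) assumption.
# Instances are grouped into bags; only the bag-level label is observed.

def bag_label(instance_labels):
    """Standard MIL assumption: a bag is positive iff any instance is positive."""
    return int(any(instance_labels))

def predict_bag(instance_scores, threshold=0.5):
    """One common aggregation (max-pooling): score a bag by its
    highest-scoring instance, then threshold. Other MIL methods
    aggregate differently; this is just one illustrative choice."""
    return int(max(instance_scores) > threshold)

# Toy bags: each bag is a list of (normally hidden) instance labels.
bags = [[0, 0, 1], [0, 0, 0], [1, 1, 0]]
print([bag_label(b) for b in bags])  # [1, 0, 1]
```

Note that the instance labels are shown here only to generate bag labels; in a real MIL setting the learner sees the instances and the bag label, but not the per-instance labels, which is the source of the label ambiguity the survey discusses.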
Semi-Supervised Deep Learning for Multi-Tissue Segmentation from Multi-Contrast MRI
Segmentation of thigh tissues (muscle, fat, inter-muscular adipose tissue
(IMAT), bone, and bone marrow) from magnetic resonance imaging (MRI) scans is
useful for clinical and research investigations in various conditions such as
aging, diabetes mellitus, obesity, metabolic syndrome, and their associated
comorbidities. Towards a fully automated, robust, and precise quantification of
thigh tissues, herein we designed a novel semi-supervised segmentation
algorithm based on deep network architectures. Built upon Tiramisu segmentation
engine, our proposed deep networks use variational and specially designed
targeted dropouts for faster and robust convergence, and utilize multi-contrast
MRI scans as input data. In our experiments, we have used 150 scans from 50
distinct subjects from the Baltimore Longitudinal Study of Aging (BLSA). The
proposed system made use of both labeled and unlabeled data with high efficacy
for training, and outperformed the current state-of-the-art methods with Dice
scores of 97.52%, 94.61%, 80.14%, 95.93%, and 96.83% for muscle, fat, IMAT,
bone, and bone marrow tissues, respectively. Our results indicate that the
proposed system can be useful for clinical research studies where volumetric
and distributional tissue quantification is pivotal and labeling is a
significant issue. To the best of our knowledge, the proposed system is the
first attempt at multi-tissue segmentation using a single end-to-end
semi-supervised deep learning framework for multi-contrast thigh MRI scans.
Comment: 20 pages, 9 figures, Journal of Signal Processing Systems
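The Dice scores reported above measure the overlap between a predicted segmentation mask and the ground-truth mask. A minimal sketch of the metric on flat binary masks (this is the standard definition, not the authors' implementation):

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks of equal length:
    2 * |pred ∩ target| / (|pred| + |target|).
    Returns 1.0 when both masks are empty (a common convention)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0

pred   = [1, 1, 0, 1, 0]
target = [1, 0, 0, 1, 1]
print(round(dice_score(pred, target), 3))  # intersection=2, total=6 -> 0.667
```

A Dice score of 97.52% for muscle thus means the predicted and reference muscle masks overlap almost completely; the lower IMAT score (80.14%) reflects how much harder thin, scattered adipose structures are to delineate.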
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions, and developing a robot that can
communicate smoothly with human users over the long term, both require an
understanding of the dynamics of symbol systems, which is therefore crucially important. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, that enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics