6,636 research outputs found
A comparative study of interactive segmentation with different number of strokes on complex images
Interactive image segmentation extracts an object of interest under the guidance of the user. This guidance is an iterative process that continues until the required object of interest has been segmented. Therefore, the user's input, and how the algorithm interprets that input, plays an essential role in the success of interactive segmentation. The most common type of user input in interactive segmentation is strokes, and different interactive segmentation algorithms use different numbers of strokes. However, the effect of the number of strokes on interactive segmentation has not been evaluated, and this paper intends to fill that gap. In this study, the input strokes were categorized into single, double, and multiple strokes. Using the same number of strokes on the object of interest and the background, three interactive segmentation algorithms are evaluated: i) Nonparametric Higher-order Learning (NHL), ii) Maximal Similarity-based Region Merging (MSRM), and iii) Graph-Based Manifold Ranking (GBMR), focusing on complex images from the Berkeley image dataset. This dataset contains a total of 12,000 test color images and ground truth images. Two types of complex images were selected for the experiment: images whose background color resembles the object of interest, and images in which the object of interest overlaps with other similar objects. It can be concluded that, in general, using more strokes as input improves segmentation accuracy.
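To make the stroke-as-seed idea concrete, here is a minimal sketch of stroke-seeded segmentation. It is a hypothetical illustration, not an implementation of NHL, MSRM, or GBMR: each stroke seeds a multi-source breadth-first search, and a pixel is claimed by whichever stroke's region reaches it first within an intensity tolerance.

```python
# Toy stroke-seeded segmentation: each stroke seeds a multi-source BFS, and a
# pixel is claimed by the first stroke label whose region reaches it within an
# intensity tolerance. Illustrative only; not one of the surveyed algorithms.
from collections import deque

def segment_from_strokes(image, strokes, tol=30):
    """image: 2D list of grey values; strokes: {label: [(row, col), ...]}."""
    h, w = len(image), len(image[0])
    labels = [[None] * w for _ in range(h)]
    queue = deque()
    for label, pixels in strokes.items():
        for r, c in pixels:             # stroke pixels are hard constraints
            labels[r][c] = label
            queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and labels[nr][nc] is None:
                # Grow only across pixels of similar intensity.
                if abs(image[nr][nc] - image[r][c]) <= tol:
                    labels[nr][nc] = labels[r][c]
                    queue.append((nr, nc))
    return labels

# Toy image: a bright object (value 200) on a dark background (value 10),
# with one single-pixel stroke on each.
img = [[200 if 1 <= r <= 2 and 1 <= c <= 2 else 10 for c in range(5)]
       for r in range(5)]
seg = segment_from_strokes(img, {"obj": [(1, 1)], "bg": [(0, 0)]})
```

Adding more strokes simply adds more seeds to the queue, which is one intuition for why extra strokes help on complex images: they anchor regions that a single seed cannot reach across a similarity barrier.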
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions and developing a robot that can
smoothly communicate with human users over the long term require an
understanding of the dynamics of symbol systems. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual,
haptic, and auditory information and acoustic speech signals, in a totally
unsupervised manner. Finally, we suggest future directions for research in SER.
Comment: submitted to Advanced Robotics
Mapping Topographic Structure in White Matter Pathways with Level Set Trees
Fiber tractography on diffusion imaging data offers rich potential for
describing white matter pathways in the human brain, but characterizing the
spatial organization in these large and complex data sets remains a challenge.
We show that level set trees---which provide a concise representation of the
hierarchical mode structure of probability density functions---offer a
statistically-principled framework for visualizing and analyzing topography in
fiber streamlines. Using diffusion spectrum imaging data collected on
neurologically healthy controls (N=30), we mapped white matter pathways from
the cortex into the striatum using a deterministic tractography algorithm that
estimates fiber bundles as dimensionless streamlines. Level set trees were used
for interactive exploration of patterns in the endpoint distributions of the
mapped fiber tracks and an efficient segmentation of the tracks that has
empirical accuracy comparable to standard nonparametric clustering methods. We
show that level set trees can also be generalized to model pseudo-density
functions in order to analyze a broader array of data types, including entire
fiber streamlines. Finally, resampling methods show the reliability of the
level set tree as a descriptive measure of topographic structure, illustrating
its potential as a statistical descriptor in brain imaging analysis. These
results highlight the broad applicability of level set trees for visualizing
and analyzing high-dimensional data like fiber tractography output.
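The core construction can be illustrated in one dimension. The sketch below is a toy version under simplifying assumptions (a histogram density estimate rather than the paper's streamline pipeline): sweep a level upward and count connected components of the upper level set {x : f(x) > level}; each component is a branch of the tree, and the maximum component count over levels is the number of density modes, i.e., leaves.

```python
# Toy 1-D level set tree sketch: histogram density estimate, then count
# connected components of the upper level set at a sweep of levels.
def histogram_density(samples, lo, hi, bins):
    """Simple histogram density estimate on [lo, hi)."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    norm = len(samples) * width
    return [c / norm for c in counts]

def components_above(density, level):
    """Number of connected runs of bins whose density exceeds `level`."""
    comps, inside = 0, False
    for f in density:
        if f > level and not inside:
            comps, inside = comps + 1, True
        elif f <= level:
            inside = False
    return comps

def n_modes(density, n_levels=50):
    """Leaves of the level set tree = max component count over levels."""
    top = max(density)
    return max(components_above(density, top * k / n_levels)
               for k in range(n_levels))

# Two well-separated clusters -> the tree has two leaves.
samples = [0.1, 0.12, 0.15, 0.18, 0.2, 0.8, 0.82, 0.85, 0.88, 0.9]
dens = histogram_density(samples, 0.0, 1.0, 10)
```

In the paper's setting the same sweep is applied to (pseudo-)densities over streamline endpoints, where components correspond to candidate fiber bundles rather than 1-D intervals.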
Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals
Human infants can discover words directly from unsegmented speech signals
without any explicitly labeled data. In this paper, we develop a novel machine
learning method called nonparametric Bayesian double articulation analyzer
(NPB-DAA) that can directly acquire language and acoustic models from observed
continuous speech signals. For this purpose, we propose an integrative
generative model that combines a language model and an acoustic model into a
single generative model called the "hierarchical Dirichlet process hidden
language model" (HDP-HLM). The HDP-HLM is obtained by extending the
hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by
Johnson et al. An inference procedure for the HDP-HLM is derived using the
blocked Gibbs sampler originally proposed for the HDP-HSMM. This procedure
enables the simultaneous and direct inference of language and acoustic models
from continuous speech signals. Based on the HDP-HLM and its inference
procedure, we developed a novel double articulation analyzer. By assuming
HDP-HLM as a generative model of observed time series data, and by inferring
latent variables of the model, the method can analyze latent double
articulation structure, i.e., hierarchically organized latent words and
phonemes, of the data in an unsupervised manner. The novel unsupervised double
articulation analyzer is called NPB-DAA.
The NPB-DAA can automatically estimate double articulation structure embedded
in speech signals. We also carried out two evaluation experiments using
synthetic data and actual human continuous speech signals representing Japanese
vowel sequences. In the word acquisition and phoneme categorization tasks, the
NPB-DAA outperformed a conventional double articulation analyzer (DAA) and
baseline automatic speech recognition system whose acoustic model was trained
in a supervised manner.
Comment: 15 pages, 7 figures, draft submitted to IEEE Transactions on
Autonomous Mental Development (TAMD)
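The "double articulation" structure the analyzer recovers can be sketched generatively. The toy below uses fixed finite word and phoneme inventories instead of the paper's nonparametric (HDP) priors, and a 1-D Gaussian in place of a real acoustic model; the lexicon and means are invented for illustration. A language model emits words, each word expands to a phoneme string, and each phoneme emits a variable number of acoustic frames (the semi-Markov duration):

```python
# Hedged two-layer generative sketch of double articulation:
# words -> phonemes -> noisy acoustic frames. Inventories are hypothetical;
# the HDP-HLM replaces these fixed tables with nonparametric priors.
import random

LEXICON = {                 # hypothetical word -> phoneme decomposition
    "aoi": ["a", "o", "i"],
    "aka": ["a", "k", "a"],
}
MEANS = {"a": 0.0, "o": 1.0, "i": 2.0, "k": 3.0}  # 1-D "acoustic" means

def generate(n_words, rng):
    """Sample words, expand to phonemes, emit noisy frames per phoneme."""
    words, frames = [], []
    for _ in range(n_words):
        w = rng.choice(sorted(LEXICON))   # unigram "language model"
        words.append(w)
        for ph in LEXICON[w]:
            dur = rng.randint(2, 4)       # semi-Markov phoneme duration
            frames += [rng.gauss(MEANS[ph], 0.1) for _ in range(dur)]
    return words, frames

rng = random.Random(0)
words, frames = generate(3, rng)
```

Inference in NPB-DAA runs this story in reverse: given only `frames`, the blocked Gibbs sampler must recover both the phoneme segmentation and the word segmentation, with neither layer labeled.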
DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks
In this paper, we propose DeepCut, a method to obtain pixelwise object
segmentations given an image dataset labelled with bounding box annotations. It
extends the approach of the well-known GrabCut method to include machine
learning by training a neural network classifier from bounding box annotations.
We formulate the problem as an energy minimisation problem over a
densely-connected conditional random field and iteratively update the training
targets to obtain pixelwise object segmentations. Additionally, we propose
variants of the DeepCut method and compare those to a naive approach to CNN
training under weak supervision. We test its applicability to solve brain and
lung segmentation problems on a challenging fetal magnetic resonance dataset
and obtain encouraging results in terms of accuracy.
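The iterative target-update loop at the heart of this scheme can be sketched in a heavily simplified form. In the toy below, a per-pixel Gaussian-mean intensity classifier stands in for the CNN and the dense CRF term is omitted entirely (both are simplifications of the paper's method): targets start as the bounding box itself, a classifier is fit to the current targets, and its predictions, clipped to the box, become the next round's targets.

```python
# Toy version of the DeepCut loop: alternate fitting a per-pixel classifier
# to the current targets and replacing the targets with its box-constrained
# predictions. A mean-intensity classifier stands in for the CNN; no CRF.
def fit_means(image, targets):
    """Mean intensity of current foreground and background targets."""
    fg = [v for v, t in zip(image, targets) if t == 1]
    bg = [v for v, t in zip(image, targets) if t == 0]
    return sum(fg) / len(fg), sum(bg) / len(bg)

def deepcut_toy(image, in_box, n_iter=5):
    targets = list(in_box)               # round 0: box == object
    for _ in range(n_iter):
        mu_fg, mu_bg = fit_means(image, targets)
        # Predict foreground where the pixel is closer to the fg mean,
        # but never outside the bounding box (the weak-label constraint).
        targets = [1 if b and abs(v - mu_fg) < abs(v - mu_bg) else 0
                   for v, b in zip(image, in_box)]
    return targets

# 1-D "image": object pixels ~9, background ~1; the box overshoots the object.
image  = [1, 1, 9, 9, 9, 1, 1, 1]
in_box = [0, 1, 1, 1, 1, 1, 0, 0]   # box covers object plus two bg pixels
seg = deepcut_toy(image, in_box)
```

Even this crude version shows the key behavior: background pixels inside the box are progressively relabeled, so the targets tighten from the box to the object over iterations, which is exactly the effect the CRF-regularized CNN targets achieve at scale.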