Focused Proofreading: Efficiently Extracting Connectomes from Segmented EM Images
Identifying complex neural circuitry from electron microscopic (EM) images
may help unlock the mysteries of the brain. However, identifying this circuitry
requires time-consuming, manual tracing (proofreading) due to the size and
intricacy of these image datasets, thus limiting state-of-the-art analysis to
very small brain regions. Potential avenues to improve scalability include
automatic image segmentation and crowdsourcing, but current efforts have had
limited success. In this paper, we propose a new strategy, focused
proofreading, that works with automatic segmentation and aims to limit
proofreading to the regions of a dataset that are most impactful to the
resulting circuit. We then introduce a novel workflow, which exploits
biological information such as synapses, and apply it to a large dataset in the
fly optic lobe. With our techniques, we achieve significant tracing speedups of
3-5x without sacrificing the quality of the resulting circuit. Furthermore, our
methodology makes the task of proofreading much more accessible and hence
potentially enhances the effectiveness of crowdsourcing.
Automated Quantitative Description of Spiral Galaxy Arm-Segment Structure
We describe a system for the automatic quantification of structure in spiral
galaxies. This enables translation of sky survey images into data needed to
help address fundamental astrophysical questions such as the origin of spiral
structure---a phenomenon that has eluded theoretical description despite 150
years of study (Sellwood 2010). The difficulty of automated measurement is
underscored by the fact that, to date, only manual efforts (such as the citizen
science project Galaxy Zoo) have been able to extract information about large
samples of spiral galaxies. An automated approach will be needed to eliminate
measurement subjectivity and handle the otherwise-overwhelming image quantities
(up to billions of images) from near-future surveys. Our approach automatically
describes spiral galaxy structure as a set of arcs, precisely describing spiral
arm segment arrangement while retaining the flexibility needed to accommodate
the observed wide variety of spiral galaxy structure. The largest existing
quantitative measurements were manually-guided and encompassed fewer than 100
galaxies, while we have already applied our method to more than 29,000
galaxies. Our output matches previous information, both quantitatively over
small existing samples, and qualitatively against human classifications from
Galaxy Zoo.
Comment: 9 pages; 4 figures; 2 tables; accepted to CVPR (Computer Vision and
Pattern Recognition), June 2012, Providence, Rhode Island, June 16-21, 2012
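The arc-based description above can be pictured with a logarithmic spiral, a common parametric model for spiral-arm segments. This is an illustrative sketch only; the paper's exact arc parameterization may differ, and the radius, pitch angle, and sampling below are assumed values.

```python
import math

def log_spiral_radius(r0, pitch_deg, theta):
    """Radius of a logarithmic spiral at angle theta (radians):
    r = r0 * exp(theta * tan(phi)), with pitch angle phi in degrees.
    An illustrative model, not the paper's exact arc parameterization."""
    return r0 * math.exp(theta * math.tan(math.radians(pitch_deg)))

# Sample one arm segment over a quarter turn (50 points, toy parameters).
arc = [log_spiral_radius(10.0, 15.0, i * (math.pi / 2) / 49) for i in range(50)]
```

For a positive pitch angle the radius grows monotonically with angle, which is what lets a fitted arc capture both the winding and the extent of an arm segment.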
Gray Image extraction using Fuzzy Logic
Fuzzy systems provide a fundamental methodology for representing and processing
uncertainty and imprecision in linguistic information. Fuzzy systems
that use fuzzy rules to represent the domain knowledge of a problem are known
as Fuzzy Rule Base Systems (FRBS). Meanwhile, image segmentation and
subsequent extraction from a noise-affected background, with the help of
various soft-computing methods, are relatively new and have become quite
popular. These methods include various Artificial Neural Network (ANN)
models (primarily supervised in nature), Genetic Algorithm (GA) based
techniques, and intensity-histogram-based methods. Providing an extraction
solution that works in unsupervised mode is an even more interesting
problem, and the literature suggests that effort in this direction remains
quite rudimentary. In the present article, we propose a novel
fuzzy-rule-guided technique that operates without any external intervention
during execution. Experimental results suggest that this approach is
efficient in comparison with other techniques extensively addressed in the
literature. To demonstrate the superior performance of our proposed
technique relative to its competitors, we use established metrics such as
Mean Squared Error (MSE), Mean Absolute Error (MAE), and Peak Signal-to-Noise
Ratio (PSNR).
Comment: 8 pages, 5 figures, Fuzzy Rule Base, Image Extraction, Fuzzy
Inference System (FIS), Membership Functions, Membership values, Image coding
and Processing, Soft Computing, Computer Vision. Accepted and published in
IEEE. arXiv admin note: text overlap with arXiv:1206.363
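For reference, the three metrics cited in the abstract can be sketched in a few lines of Python, taking grayscale images as flat lists of pixel intensities. This is an illustrative sketch, not the authors' code; the toy pixel values are assumptions.

```python
import math

def mse(a, b):
    """Mean Squared Error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mae(a, b):
    """Mean Absolute Error."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means a closer match."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)

# Toy 2x2 "images" flattened to lists (8-bit intensities).
ref = [0, 128, 255, 64]
out = [0, 120, 250, 70]
# mse(ref, out) = (0 + 64 + 25 + 36) / 4 = 31.25; mae(ref, out) = 19 / 4 = 4.75
```

PSNR is just MSE rescaled logarithmically against the peak intensity, which is why papers typically report it alongside MSE rather than instead of it.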
High compression image and image sequence coding
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis
Research in interactive scene analysis
Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography
Machine learning of hierarchical clustering to segment 2D and 3D images
We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
Comment: 15 pages, 8 figures
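The variation of information advocated above, VI(A, B) = H(A|B) + H(B|A), can be sketched over flat label lists as follows. This is a minimal illustration of the metric itself, not the authors' implementation.

```python
import math
from collections import Counter

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A|B) + H(B|A) between two labelings of the same
    pixels (given as flat label lists). Zero iff the segmentations
    agree up to label renaming; lower is better."""
    n = len(seg_a)
    count_a = Counter(seg_a)
    count_b = Counter(seg_b)
    count_ab = Counter(zip(seg_a, seg_b))
    vi = 0.0
    for (a, b), nab in count_ab.items():
        # Each joint cell contributes p(a,b) * [log(n_a/n_ab) + log(n_b/n_ab)],
        # which sums to H(A|B) + H(B|A) in nats.
        vi += (nab / n) * (math.log(count_a[a] / nab) + math.log(count_b[b] / nab))
    return vi
```

Unlike pixel accuracy, VI penalizes both splits (H(A|B)) and merges (H(B|A)) symmetrically, which is why it is a natural fit for scoring agglomeration errors in neural-tissue segmentation.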
Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
This paper presents a robotic pick-and-place system that is capable of
grasping and recognizing both known and novel objects in cluttered
environments. The key new feature of the system is that it handles a wide range
of object categories without needing any task-specific training data for novel
objects. To achieve this, it first uses a category-agnostic affordance
prediction algorithm to select and execute among four different grasping
primitive behaviors. It then recognizes picked objects with a cross-domain
image classification framework that matches observed images to product images.
Since product images are readily available for a wide range of objects (e.g.,
from the web), the system works out-of-the-box for novel objects without
requiring any additional training data. Exhaustive experimental results
demonstrate that our multi-affordance grasping achieves high success rates for
a wide variety of objects in clutter, and our recognition algorithm achieves
high accuracy for both known and novel grasped objects. The approach was part
of the MIT-Princeton Team system that took 1st place in the stowing task at the
2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are
available online at http://arc.cs.princeton.edu
Comment: Project webpage: http://arc.cs.princeton.edu Summary video:
https://youtu.be/6fG7zwGfIk
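The cross-domain matching step, comparing an observed image against product images, can be caricatured as nearest-neighbor search in a shared feature space. The feature vectors and product names below are toy assumptions standing in for the paper's learned embeddings, not its actual network or data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def match_product(observed_feat, product_feats):
    """1-nearest-neighbor matching: return the product id whose stored
    feature vector is most similar to the observed image's features."""
    return max(product_feats, key=lambda pid: cosine(observed_feat, product_feats[pid]))

# Toy 3-D features standing in for learned image embeddings.
products = {"tape": [1.0, 0.1, 0.0], "sponge": [0.0, 1.0, 0.2]}
```

Because product images are indexed ahead of time, recognizing a novel object reduces to one feature extraction plus a lookup, which is what lets the system skip per-object training data.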