DxNAT - Deep Neural Networks for Explaining Non-Recurring Traffic Congestion
Non-recurring traffic congestion is caused by temporary disruptions, such as
accidents, sports games, adverse weather, etc. We use data related to real-time
traffic speed, jam factors (a traffic congestion indicator), and events
collected over a year from Nashville, TN to train a multi-layered deep neural
network. The traffic dataset contains over 900 million data records. The
network is thereafter used to classify the real-time data and identify
anomalous operations. Compared with traditional approaches of using statistical
or machine learning techniques, our model reaches an accuracy of 98.73 percent
when identifying traffic congestion caused by football games. Our approach
first encodes the traffic across a region as a scaled image. After that the
image data from different timestamps is fused with event- and time-related
data. Then a crossover operator is used as a data augmentation method to
generate training datasets with more balanced classes. Finally, we use the
receiver operating characteristic (ROC) analysis to tune the sensitivity of the
classifier. We present the analysis of the training time and the inference time
separately.
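The crossover-based augmentation step described above can be sketched in a few lines; the function name, the fixed-length feature vectors, and the single-point crossover are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def crossover_augment(a, b, rng=None):
    """Synthesize a new same-class training sample by splicing two
    existing samples at a random crossover point (a hypothetical
    sketch of crossover-based class balancing)."""
    rng = np.random.default_rng(rng)
    point = int(rng.integers(1, len(a)))  # split somewhere inside the vector
    return np.concatenate([a[:point], b[point:]])

# Balance a minority class by generating an extra sample from two of its members:
minority = np.array([[1.0, 2.0, 3.0, 4.0],
                     [5.0, 6.0, 7.0, 8.0]])
new_sample = crossover_augment(minority[0], minority[1], rng=0)
```

Each position of the synthesized vector comes from one of the two parents, so the new sample stays inside the minority class's feature range.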
Learning Visual Attributes
We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as "red", "striped", or "spotted". The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic for stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.
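The likelihood-ratio criterion used for discriminative learning can be illustrated with a toy stand-in; diagonal Gaussians replace the paper's actual generative model, and all names and numbers below are assumptions for illustration:

```python
import numpy as np

def log_likelihood_ratio(x, mu_attr, var_attr, mu_bg, var_bg):
    """Score a feature vector by the log-likelihood ratio between an
    attribute model and a background model (here both diagonal
    Gaussians; a hypothetical stand-in for the paper's model)."""
    def log_gauss(x, mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return log_gauss(x, mu_attr, var_attr) - log_gauss(x, mu_bg, var_bg)

# A feature vector close to the attribute model scores positive:
x = np.array([0.9, 0.1])
score = log_likelihood_ratio(x,
                             np.array([1.0, 0.0]), np.array([0.1, 0.1]),  # attribute
                             np.array([0.0, 1.0]), np.array([0.1, 0.1]))  # background
```

A positive score means the segment pattern is better explained by the attribute model than by the background, which is the quantity the discriminative training optimizes.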
Self-Configuring and Evolving Fuzzy Image Thresholding
Every segmentation algorithm has parameters that need to be adjusted in order
to achieve good results. Evolving fuzzy systems for adjustment of segmentation
parameters have been proposed recently (Evolving fuzzy image segmentation --
EFIS) [1]. However, similar to any other algorithm, EFIS too suffers from a few
limitations when used in practice. As a major drawback, EFIS depends on
detection of the object of interest for feature calculation, a task that is
highly application-dependent. In this paper, a new version of EFIS is proposed
to overcome these limitations. The new EFIS, called self-configuring EFIS
(SC-EFIS), uses available training data to auto-configure the parameters that
are fixed in EFIS. In addition, the proposed SC-EFIS relies on a feature
selection process that does not require the detection of a region of interest
(ROI).
Comment: To appear in proceedings of The 14th International Conference on
Machine Learning and Applications (IEEE ICMLA 2015), Miami, Florida, USA, 2015
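As a rough illustration of the kind of parametric fuzzy thresholding that EFIS/SC-EFIS adjusts, a minimal fuzzy-entropy threshold selector (a Huang-Wang-style criterion, not the paper's algorithm) might look like:

```python
import numpy as np

def fuzzy_entropy_threshold(image, levels=256):
    """Choose a grey-level threshold by minimising fuzzy entropy:
    each pixel's membership to its own class grows as it nears the
    class mean, and a good threshold makes memberships crisp.
    This is a generic sketch, not EFIS itself."""
    img = np.asarray(image, dtype=float).ravel()
    spread = max(img.max() - img.min(), 1.0)
    best_t, best_h = 0, np.inf
    for t in range(1, levels - 1):
        bg, fg = img[img <= t], img[img > t]
        if bg.size == 0 or fg.size == 0:
            continue
        # Membership of each pixel to its own class, in (0.5, 1].
        mu = np.concatenate([1.0 / (1.0 + np.abs(bg - bg.mean()) / spread),
                             1.0 / (1.0 + np.abs(fg - fg.mean()) / spread)])
        mu = np.clip(mu, 1e-9, 1 - 1e-9)
        # Shannon-style fuzzy entropy; lower is crisper.
        h = -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
        if h < best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal toy image: dark background around 40, bright object around 200.
rng = np.random.default_rng(0)
image = np.concatenate([rng.normal(40, 5, 500), rng.normal(200, 5, 500)])
t = fuzzy_entropy_threshold(image)
```

The selected threshold falls between the two modes; in EFIS-style systems, parameters of such a thresholding step are the quantities tuned from feedback.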
Deeply-Supervised CNN for Prostate Segmentation
Prostate segmentation from Magnetic Resonance (MR) images plays an important
role in image-guided intervention. However, the lack of a clear boundary,
specifically at the apex and base, and the large variation in shape and texture
between images from different patients make the task very challenging. To
overcome these problems, in this paper, we propose a deeply supervised
convolutional neural network (CNN) utilizing the convolutional information to
accurately segment the prostate from MR images. The proposed model can
effectively detect the prostate region with additional deeply supervised layers
compared with other approaches. Since some information is lost after
convolution, it is necessary to pass the features extracted at early stages
to later stages. The experimental results show that our proposed method
achieves a significant improvement in segmentation accuracy compared to other
reported approaches.
Comment: Due to a crucial sign error in equation
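The core idea of deep supervision (auxiliary losses attached to intermediate layers, added to the final loss) can be sketched in a framework-free way; the auxiliary weight and the binary cross-entropy choice are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def deeply_supervised_loss(main_pred, aux_preds, target, aux_weight=0.3):
    """Combine the final output's loss with losses on auxiliary
    (side) outputs from earlier layers, so early layers receive
    a direct training signal. Hypothetical weighting."""
    def bce(p, y):  # binary cross-entropy per prediction map
        p = np.clip(p, 1e-7, 1 - 1e-7)
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return bce(main_pred, target) + aux_weight * sum(bce(p, target)
                                                     for p in aux_preds)

# Toy per-pixel predictions: one final output, one side output.
target = np.array([1.0, 0.0, 1.0])
main = np.array([0.9, 0.1, 0.8])
aux = [np.array([0.7, 0.3, 0.6])]
loss = deeply_supervised_loss(main, aux, target)
```

At training time the gradient of this combined loss reaches the early layers through both the main path and the side outputs, which is what counteracts the information lost through successive convolutions.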
Convolutional Feature Masking for Joint Object and Stuff Segmentation
The topic of semantic segmentation has witnessed considerable progress due to
the powerful features learned by convolutional neural networks (CNNs). The
current leading approaches for semantic segmentation exploit shape information
by extracting CNN features from masked image regions. This strategy introduces
artificial boundaries on the images and may impact the quality of the extracted
features. Besides, the operations on the raw image domain require running the
network thousands of times on a single image, which is time-consuming. In this
paper, we propose to exploit shape information via masking convolutional
features. The proposal segments (e.g., super-pixels) are treated as masks on
the convolutional feature maps. The CNN features of segments are directly
masked out from these maps and used to train classifiers for recognition. We
further propose a joint method to handle objects and "stuff" (e.g., grass, sky,
water) in the same framework. State-of-the-art results are demonstrated on
benchmarks of PASCAL VOC and new PASCAL-CONTEXT, with a compelling
computational speed.
Comment: IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
201
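Masking features instead of pixels can be shown in miniature; the strided downsampling of the mask is a crude stand-in for a proper projection between image and feature-map resolutions, and all shapes here are illustrative assumptions:

```python
import numpy as np

def masked_feature_pool(feature_map, segment_mask):
    """Apply a segment proposal as a mask on a (C, H, W) feature map
    and average-pool the surviving activations -- the convolutional
    feature masking idea in miniature. The image-resolution mask is
    downsampled to (H, W) by striding (a crude projection)."""
    C, H, W = feature_map.shape
    mh, mw = segment_mask.shape
    small = segment_mask[::mh // H, ::mw // W][:H, :W].astype(bool)
    if not small.any():
        return np.zeros(C)
    return feature_map[:, small].mean(axis=1)  # one feature vector per segment

# A 2-channel 4x4 feature map and a 16x16 top-left segment proposal:
features = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
mask = np.zeros((16, 16))
mask[:8, :8] = 1
pooled = masked_feature_pool(features, mask)
```

Because the masking happens on shared convolutional maps, the network is run once per image and each proposal only costs a cheap masked pooling, rather than a full forward pass per region.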