Object Detection based on Region Decomposition and Assembly
Region-based object detection infers object regions for one or more
categories in an image. Due to recent advances in deep learning and region
proposal methods, object detectors based on convolutional neural networks
(CNNs) have flourished and achieved promising detection results.
However, detection accuracy is often degraded by the low
discriminability of object CNN features caused by occlusions and inaccurate
region proposals. In this paper, we therefore propose a region decomposition
and assembly detector (R-DAD) for more accurate object detection.
In the proposed R-DAD, we first decompose an object region into multiple
small regions. To jointly capture the entire appearance and the part details of
the object, we extract CNN features within both the whole object region and the
decomposed regions. We then learn the semantic relations between the object and
its parts by combining the multi-region features stage by stage with region
assembly blocks, and use the combined high-level semantic features for object
classification and localization. In addition, for more accurate region
proposals, we propose a multi-scale proposal layer that can generate object
proposals of various scales. We integrate the R-DAD into several feature
extractors and demonstrate distinct performance improvements on PASCAL07/12 and
MSCOCO18 over recent convolutional detectors.
Comment: Accepted to 2019 AAAI Conference on Artificial Intelligence (AAAI)
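The decompose-then-assemble idea in the abstract can be sketched as follows. This is a minimal toy illustration under assumptions of our own, not the paper's architecture: the half-region split, the fixed 0.5 mixing weight, and the `decompose_box`/`assemble` helpers are hypothetical stand-ins for the learned region assembly blocks.

```python
import numpy as np

def decompose_box(box):
    """Split a box (x1, y1, x2, y2) into left/right and top/bottom halves."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    return {
        "left":   (x1, y1, cx, y2),
        "right":  (cx, y1, x2, y2),
        "top":    (x1, y1, x2, cy),
        "bottom": (x1, cy, x2, y2),
    }

def assemble(feat_a, feat_b, w=0.5):
    """Toy assembly block: a fixed weighted combination plus ReLU, standing
    in for the learned, stage-wise region assembly blocks of R-DAD."""
    return np.maximum(0.0, w * feat_a + (1.0 - w) * feat_b)

# Hypothetical pooled CNN features for the whole region and its parts.
rng = np.random.default_rng(0)
parts = {name: rng.standard_normal(256)
         for name in ("left", "right", "top", "bottom")}
whole = rng.standard_normal(256)

# Stage 1: merge horizontal and vertical part pairs.
h = assemble(parts["left"], parts["right"])
v = assemble(parts["top"], parts["bottom"])
# Stage 2: merge the part-level results; stage 3: combine with the whole.
part_feat = assemble(h, v)
final_feat = assemble(part_feat, whole)  # fed to classification/localization heads
print(final_feat.shape)  # (256,)
```

In the actual detector these combinations are learned jointly with the classification and localization heads; the sketch only shows the stage-by-stage data flow.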
Learning sound representations using trainable COPE feature extractors
Sound analysis research has mainly focused on speech and music
processing. The deployed methodologies are not suitable for the analysis of
sounds with varying background noise, in many cases with a very low
signal-to-noise ratio (SNR). In this paper, we present a method for the detection of patterns
of interest in audio signals. We propose novel trainable feature extractors,
which we call COPE (Combination of Peaks of Energy). The structure of a COPE
feature extractor is determined using a single prototype sound pattern in an
automatic configuration process, which is a type of representation learning. We
construct a set of COPE feature extractors, configured on a number of training
patterns. Then we take their responses to build feature vectors that we use in
combination with a classifier to detect and classify patterns of interest in
audio signals. We carried out experiments on four public data sets: MIVIA audio
events, MIVIA road events, ESC-10 and TU Dortmund data sets. The results that
we achieved (recognition rate equal to 91.71% on the MIVIA audio events, 94% on
the MIVIA road events, 81.25% on the ESC-10 and 94.27% on the TU Dortmund)
demonstrate the effectiveness of the proposed method and are higher than those
obtained by other existing approaches. The COPE feature extractors are highly
robust to variations of SNR. Real-time performance is achieved even
when a large number of feature values is computed.
Comment: Accepted for publication in Pattern Recognition
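The configure-then-respond workflow the abstract describes can be sketched roughly as follows. Everything here is a simplified assumption for illustration, not the actual COPE formulation: `energy_peaks`, the peak-constellation model, and the matching score are hypothetical stand-ins for how a single prototype pattern might configure an extractor whose response is then used as a feature.

```python
import numpy as np

def energy_peaks(signal, frame=16, thresh=0.5):
    """Short-time energy per frame, then simple local-maximum picking."""
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    energy = energy / (energy.max() + 1e-12)
    peaks = [i for i in range(1, n - 1)
             if energy[i] > thresh
             and energy[i] >= energy[i - 1] and energy[i] >= energy[i + 1]]
    return energy, peaks

def configure_cope(prototype, **kw):
    """'Configuration': store the constellation of energy peaks of a single
    prototype sound pattern as (frame offset, strength) pairs."""
    energy, peaks = energy_peaks(prototype, **kw)
    ref = peaks[0]
    return [(p - ref, energy[p]) for p in peaks]

def cope_response(model, signal, tol=1, **kw):
    """Response: best fraction of model peaks matched (within tol frames)
    in the input, trying each detected peak as the anchor."""
    _, peaks = energy_peaks(signal, **kw)
    score = 0.0
    for ref in peaks:
        matched = sum(any(abs((ref + off) - p) <= tol for p in peaks)
                      for off, _ in model)
        score = max(score, matched / len(model))
    return score

# Configure on a prototype with two energy bursts, then respond to inputs.
proto = np.zeros(256)
proto[32:48] = 1.0
proto[128:144] = 1.0
model = configure_cope(proto)
print(cope_response(model, proto))  # 1.0: the prototype matches itself
```

A bank of such extractors, each configured on a different training pattern, would yield one response per extractor; stacking these responses gives the feature vector passed to the classifier.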