A Review of Object Detection Models based on Convolutional Neural Network
Convolutional Neural Networks (CNNs) have become the state of the art for object
detection in images. In this chapter, we explain different state-of-the-art
CNN-based object detection models, categorizing them according to two
approaches: the two-stage approach and the one-stage approach. The chapter
traces advancements in object detection models from R-CNN to the recent
RefineDet, discusses the model description and training details of each model,
and draws a comparison among those models.
Comment: 17 pages, 11 figures, 1 table
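The two paradigms named above can be caricatured in a few lines of Python. The toy "image" (a grid of class labels) and all function names here are illustrative sketches, not taken from any of the reviewed models:

```python
# Toy "image": a 2-D grid where 0 is background and other values are object
# classes. Detection here means locating and labeling the non-zero cells.

def propose_regions(image):
    # Stage 1 of a two-stage detector (R-CNN family): class-agnostic
    # region proposals -- in this toy, every non-background cell.
    return [(r, c) for r, row in enumerate(image)
            for c, v in enumerate(row) if v != 0]

def classify_region(image, region):
    # Stage 2: classify each proposed region independently.
    r, c = region
    return (region, image[r][c])

def two_stage_detect(image):
    """Two-stage pipeline: propose, then classify."""
    return [classify_region(image, p) for p in propose_regions(image)]

def one_stage_detect(image):
    """One-stage pipeline (SSD/YOLO/RefineDet style): a single dense pass
    that predicts locations and classes jointly."""
    return [((r, c), v) for r, row in enumerate(image)
            for c, v in enumerate(row) if v != 0]
```

On this toy input both pipelines produce the same detections; the real trade-off the chapter discusses is between the accuracy of the two-stage cascade and the speed of the single dense pass.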
Outfit Recommender System
The online apparel retail market in the United States is worth about seventy-two billion US dollars, and recommendation systems on retail websites generate much of this revenue, so improving them can increase it further. Traditional clothing recommendations relied on lexical methods, but visual-based recommendations have gained popularity over the past few years. These involve processing a multitude of images using different image processing techniques, and to handle such a vast quantity of images, deep neural networks have been used extensively. With the help of fast Graphics Processing Units, these networks produce highly accurate results in a short amount of time. However, there are still ways in which clothing recommendations can be improved. We propose an event-based clothing recommendation system which uses object detection. We train a model to identify nine events/scenarios that a user might attend: White Wedding, Indian Wedding, Conference, Funeral, Red Carpet, Pool Party, Birthday, Graduation and Workout. We train another model to detect clothes, out of fifty-three categories of clothes worn at the event. Object detection achieves a mAP of 84.01. Nearest neighbors of the detected clothes are recommended to the user.
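The final step, recommending nearest neighbors of the detected clothes, can be sketched as a nearest-neighbor search over garment feature vectors. The catalog items, feature values, and the choice of Euclidean distance below are all hypothetical, used only to illustrate the idea:

```python
import math

# Hypothetical catalog: item name -> feature vector (e.g. a CNN embedding
# of the garment image). Values are made up for illustration.
catalog = {
    "navy_blazer": [0.9, 0.1, 0.0],
    "black_suit":  [0.7, 0.3, 0.2],
    "swim_trunks": [0.0, 0.9, 0.8],
}

def recommend(detected_vec, k=2):
    """Return the k catalog items nearest (Euclidean) to the detected
    garment's feature vector."""
    return sorted(catalog,
                  key=lambda name: math.dist(detected_vec, catalog[name]))[:k]
```

In a real system the catalog would hold embeddings produced by the detection network's backbone, and an approximate nearest-neighbor index would replace the exhaustive sort.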
Video Classification With CNNs: Using The Codec As A Spatio-Temporal Activity Sensor
We investigate video classification via a two-stream convolutional neural
network (CNN) design that directly ingests information extracted from
compressed video bitstreams. Our approach begins with the observation that all
modern video codecs divide the input frames into macroblocks (MBs). We
demonstrate that selective access to MB motion vector (MV) information within
compressed video bitstreams can also provide for selective, motion-adaptive, MB
pixel decoding (a.k.a., MB texture decoding). This in turn allows for the
derivation of spatio-temporal video activity regions at extremely high speed in
comparison to conventional full-frame decoding followed by optical flow
estimation. In order to evaluate the accuracy of a video classification
framework based on such activity data, we independently train two CNN
architectures on MB texture and MV correspondences and then fuse their scores
to derive the final classification of each test video. Evaluation on two
standard datasets shows that the proposed approach is competitive with the best
two-stream video classification approaches found in the literature. At the same
time: (i) a CPU-based realization of our MV extraction is over 977 times faster
than GPU-based optical flow methods; (ii) selective decoding is up to 12 times
faster than full-frame decoding; (iii) our proposed spatial and temporal CNNs
perform inference at 5 to 49 times lower cloud computing cost than the fastest
methods from the literature.
Comment: Accepted in IEEE Transactions on Circuits and Systems for Video
Technology. Extension of ICIP 2017 conference paper
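The fusion step described above, combining the scores of the independently trained texture and motion-vector CNNs, can be sketched as follows. Weighted averaging of per-class scores is one common late-fusion choice and is assumed here for illustration; it is not necessarily the paper's exact fusion rule:

```python
# Hypothetical per-class score vectors from the two streams: the MB-texture
# CNN and the motion-vector (MV) CNN. Late fusion combines them after each
# stream has produced its own prediction.

def fuse_scores(texture_scores, mv_scores, w=0.5):
    """Weighted average of the two streams' per-class scores.
    w is the weight given to the texture stream (assumed, not from the paper)."""
    return [w * t + (1 - w) * m for t, m in zip(texture_scores, mv_scores)]

def classify(texture_scores, mv_scores):
    """Final classification of a test video: argmax over the fused scores."""
    fused = fuse_scores(texture_scores, mv_scores)
    return fused.index(max(fused))  # predicted class index
```

Here a video that the texture stream finds ambiguous can still be classified correctly if the motion stream is confident, which is the usual motivation for score-level fusion.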
Wireless Interference Identification with Convolutional Neural Networks
The steadily growing use of license-free frequency bands requires reliable
coexistence management for deterministic medium utilization. For interference
mitigation, proper wireless interference identification (WII) is essential. In
this work we propose the first WII approach based upon deep convolutional
neural networks (CNNs). The CNN naively learns its features through
self-optimization during an extensive data-driven GPU-based training process.
We propose a CNN example based upon sensing snapshots with a limited
duration of 12.8 µs and an acquisition bandwidth of 10 MHz. The CNN
distinguishes between 15 classes, which represent packet transmissions of IEEE
802.11 b/g, IEEE 802.15.4 and IEEE 802.15.1 with overlapping frequency channels
within the
2.4 GHz ISM band. We show that the CNN outperforms state-of-the-art WII
approaches and achieves a classification accuracy greater than 95% for
signal-to-noise ratios of at least -5 dB.
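A quick sanity check on the snapshot dimensions: assuming complex (I/Q) baseband sampling at a rate equal to the 10 MHz acquisition bandwidth (an assumption about the setup, not stated above), a 12.8 µs snapshot contains 128 samples:

```python
# Snapshot size under the stated parameters, assuming complex (I/Q)
# baseband sampling so that sample rate == acquisition bandwidth.
bandwidth_hz = 10e6   # 10 MHz acquisition bandwidth
duration_s = 12.8e-6  # 12.8 microsecond sensing snapshot

# round() rather than int() guards against floating-point wobble
# (the product may land a hair below the exact value).
n_samples = round(bandwidth_hz * duration_s)
```

This gives a fixed-length vector of 128 complex samples per snapshot, a convenient input size for a CNN classifier.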