Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network
Drone systems have been deployed by various law enforcement agencies to
monitor hostiles, spy on foreign drug cartels, conduct border control
operations, etc. This paper introduces a real-time drone surveillance system to
identify violent individuals in public areas. The system first uses the Feature
Pyramid Network to detect humans from aerial images. The image region with the
human is used by the proposed ScatterNet Hybrid Deep Learning (SHDL) network
for human pose estimation. The orientations between the limbs of the estimated
pose are next used to identify the violent individuals. The proposed deep
network can learn meaningful representations quickly using ScatterNet and
structural priors with relatively few labeled examples. The system detects
violent individuals in real time by processing the drone images in the
cloud. This research also introduces the aerial violent individual dataset used
for training the deep network, which may encourage researchers interested in
applying deep learning to aerial surveillance. The pose estimation and
violent-individual identification performance is compared with
state-of-the-art techniques.

Comment: To appear in the Efficient Deep Learning for Computer Vision (ECV)
workshop at IEEE Computer Vision and Pattern Recognition (CVPR) 2018. YouTube
demo at: https://www.youtube.com/watch?v=zYypJPJipY
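The abstract's violence cue rests on the orientations between limbs of an estimated pose. A minimal sketch of that idea, with a toy pose and hypothetical joint/limb names (not the paper's SHDL network, which learns these cues from data):

```python
import numpy as np

def limb_angle(p_a, p_b):
    """Orientation (radians) of the limb running from joint p_a to joint p_b."""
    d = np.asarray(p_b, dtype=float) - np.asarray(p_a, dtype=float)
    return np.arctan2(d[1], d[0])

def limb_orientation_features(pose, limbs):
    """Each limb's angle plus all pairwise angle differences between limbs.
    A downstream classifier (here omitted) would consume this vector."""
    angles = np.array([limb_angle(pose[a], pose[b]) for a, b in limbs])
    diffs = angles[:, None] - angles[None, :]
    return np.concatenate([angles, diffs[np.triu_indices(len(limbs), k=1)]])

# Toy pose: a raised forearm (joint names are illustrative assumptions).
pose = {"shoulder": (0, 0), "elbow": (1, 0), "wrist": (1, 1)}
limbs = [("shoulder", "elbow"), ("elbow", "wrist")]
feats = limb_orientation_features(pose, limbs)
```

The pairwise angle differences make the feature invariant to a global rotation of the pose, which is one plausible reason orientations *between* limbs, rather than absolute limb angles, are used.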
Human Pose Estimation using Global and Local Normalization
In this paper, we address the problem of estimating the positions of human
joints, i.e., articulated pose estimation. Recent state-of-the-art solutions
model two key issues, joint detection and spatial configuration refinement,
together using convolutional neural networks. Our work mainly focuses on
spatial configuration refinement by reducing variations of human poses
statistically, which is motivated by the observation that the scattered
distribution of the relative locations of joints (e.g., the left wrist is
distributed nearly uniformly in a circular area around the left shoulder) makes
the learning of convolutional spatial models hard. We present a two-stage
normalization scheme, human body normalization and limb normalization, to make
the distribution of the relative joint locations compact, resulting in easier
learning of convolutional spatial models and more accurate pose estimation. In
addition, our empirical results show that incorporating multi-scale supervision
and multi-scale fusion into the joint detection network is beneficial.
Experimental results demonstrate that our method consistently outperforms
state-of-the-art methods on the benchmarks.

Comment: ICCV201
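The body-normalization stage described above can be illustrated with a small geometric sketch: translate, rotate, and scale 2D joints so a chosen torso axis becomes a canonical unit vertical segment, compacting the distribution of relative joint locations. The function name and the choice of torso endpoints are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def normalize_body(joints, top_idx, bottom_idx):
    """Map 2D joints so the torso axis (top joint -> bottom joint)
    becomes the unit segment from (0, 0) to (0, 1)."""
    j = np.asarray(joints, dtype=float)
    top, bottom = j[top_idx], j[bottom_idx]
    axis = bottom - top
    scale = np.linalg.norm(axis)           # torso length
    theta = np.arctan2(axis[0], axis[1])   # rotation taking the axis onto +y
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (j - top) @ R.T / scale

# Toy skeleton: two joints standing in for, e.g., neck and pelvis.
pose = np.array([[2.0, 3.0], [4.0, 5.0]])
normalized = normalize_body(pose, top_idx=0, bottom_idx=1)
```

After this step, joint locations are expressed in a body-centric frame, so a spatial model need not account for global translation, rotation, or scale; the paper's second stage applies an analogous normalization per limb.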
Automatic nesting seabird detection based on boosted HOG-LBP descriptors
Seabird populations are considered an important and accessible indicator of the health of marine environments: variations have been linked with climate change and pollution [1]. However, manual monitoring of large populations is labour-intensive and requires significant investment of time and effort. In this paper, we propose a novel detection system for monitoring a specific population of Common Guillemots on Skomer Island, West Wales (UK). We incorporate two types of features, Histograms of Oriented Gradients (HOG) and Local Binary Patterns (LBP), to capture the edge/local shape information and the texture information of nesting seabirds. Optimal features are selected from a large HOG-LBP feature pool by boosting techniques, to compute a compact representation suitable for the SVM classifier. A comparative study of two kinds of detectors, i.e., a whole-body detector and a head-beak detector, as well as their fusion, is presented. When the proposed method is applied to seabird detection, consistent and promising results are achieved. © 2011 IEEE
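Of the two descriptors above, the LBP component is simple enough to sketch directly: each pixel is coded by thresholding its 8 neighbours against the centre value, and the histogram of codes becomes the texture feature. This is a minimal basic-LBP sketch (no uniform patterns, no multi-scale radius, and no HOG/boosting/SVM stages, all of which the paper adds):

```python
import numpy as np

def lbp_histogram(img):
    """256-bin histogram of basic 3x3 Local Binary Pattern codes
    for a single-channel image."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        # Neighbour plane shifted by (dy, dx) relative to the centre.
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

hist = lbp_histogram(np.ones((8, 8)))  # flat texture: every code is 255
```

In a full pipeline, such histograms would be computed over a grid of cells, concatenated with HOG blocks, pruned by boosting, and fed to the SVM.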
MoDeep: A Deep Learning Framework Using Motion Features for Human Pose Estimation
In this work, we propose a novel and efficient method for articulated human
pose estimation in videos using a convolutional network architecture, which
incorporates both color and motion features. We propose a new human body pose
dataset, FLIC-motion, that extends the FLIC dataset with additional motion
features. We apply our architecture to this dataset and report significantly
better performance than current state-of-the-art pose detection systems.
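One common way to combine color and motion features, sketched here as a hypothetical input-preparation step (the abstract does not specify MoDeep's exact fusion), is to concatenate an RGB frame with a 2-channel optical-flow field along the channel axis before feeding the network:

```python
import numpy as np

def stack_color_and_motion(rgb, flow):
    """Form a 5-channel network input from an HxWx3 RGB frame and an
    HxWx2 flow field (dx, dy). Hypothetical sketch, not MoDeep's layout."""
    assert rgb.shape[:2] == flow.shape[:2], "frames must align spatially"
    return np.concatenate([rgb.astype(float), flow.astype(float)], axis=-1)

x = stack_color_and_motion(np.zeros((4, 4, 3)), np.ones((4, 4, 2)))
```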
Human Pose Estimation using Deep Consensus Voting
In this paper we consider the problem of human pose estimation from a single
still image. We propose a novel approach where each location in the image votes
for the position of each keypoint using a convolutional neural net. The voting
scheme allows us to utilize information from the whole image, rather than rely
on a sparse set of keypoint locations. Using dense, multi-target votes, not
only produces good keypoint predictions, but also enables us to compute
image-dependent joint keypoint probabilities by looking at consensus voting.
This differs from most previous methods where joint probabilities are learned
from relative keypoint locations and are independent of the image. We finally
combine the keypoints votes and joint probabilities in order to identify the
optimal pose configuration. We demonstrate competitive performance on the MPII
Human Pose and Leeds Sports Pose datasets.
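The voting scheme above can be sketched mechanically: every image location casts a weighted vote for where it believes a keypoint lies, the votes accumulate in a heatmap, and the peak is the prediction. In the paper the offsets and weights come from a convolutional net; here they are supplied as plain arrays, and all names are illustrative assumptions:

```python
import numpy as np

def vote_keypoint(offsets, weights):
    """offsets[y, x] = predicted (dy, dx) from pixel (y, x) to a keypoint;
    weights[y, x] = that pixel's vote confidence. Returns the (y, x)
    argmax of the accumulated vote map."""
    H, W = weights.shape
    heat = np.zeros((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            ty = int(round(y + offsets[y, x, 0]))
            tx = int(round(x + offsets[y, x, 1]))
            if 0 <= ty < H and 0 <= tx < W:   # discard out-of-image votes
                heat[ty, tx] += weights[y, x]
    return np.unravel_index(np.argmax(heat), heat.shape)

# Toy case: every pixel votes (with equal weight) for the point (2, 3).
ys, xs = np.mgrid[0:5, 0:5]
offsets = np.stack([2 - ys, 3 - xs], axis=-1)
peak = vote_keypoint(offsets, np.ones((5, 5)))
```

Because every location contributes, the estimate draws on evidence from the whole image rather than a sparse set of detections, which is the property the abstract emphasizes.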