An Optimized and Fast Scheme for Real-time Human Detection using Raspberry Pi
This paper was presented at the International Conference on Digital Image Computing: Techniques and Applications (DICTA 2016).
Real-time human detection is a challenging task due to appearance variance, occlusion, and rapidly changing content; it therefore requires efficient hardware and optimized software. This paper presents a real-time human detection scheme on a Raspberry Pi. An efficient algorithm for human detection is proposed that processes only regions of interest (ROI) obtained from foreground estimation. Different numbers of scales are considered for computing Histogram of Oriented Gradients (HOG) features over the selected ROI. A support vector machine (SVM) classifies the HOG feature vectors into human and non-human regions, and detected human regions are further filtered by analyzing the area of overlapping regions. Considering the limited capabilities of the Raspberry Pi, the proposed scheme is evaluated using six different testing schemes on the Town Centre and CAVIAR datasets. Of these six, Single Window with two Scales (SW2S) processes 3 frames per second with acceptably lower accuracy than the original HOG. The proposed algorithm is about 8 times faster than the original multi-scale HOG and is recommended for real-time human detection on a Raspberry Pi.
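The core of the pipeline described above, orientation histograms scored by a linear SVM, can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the bin count and the pre-trained weights `w`, `b` are illustrative assumptions.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Unsigned-gradient orientation histogram for one cell (a basic HOG building block)."""
    patch = patch.astype(float)
    # Central-difference gradients with replicated borders.
    gx = np.empty_like(patch)
    gy = np.empty_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    gx[:, 0] = patch[:, 1] - patch[:, 0]
    gx[:, -1] = patch[:, -1] - patch[:, -2]
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    gy[0, :] = patch[1, :] - patch[0, :]
    gy[-1, :] = patch[-1, :] - patch[-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0    # unsigned orientation in [0, 180)
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())      # magnitude-weighted voting
    return hist

def linear_svm_score(feature, w, b):
    """Decision value of a (hypothetical pre-trained) linear SVM; positive means human."""
    return float(feature @ w + b)
```

In a full detector, such cell histograms are block-normalized and concatenated over a window before scoring; restricting the windows to foreground ROIs is what gives the paper its speed-up.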
Human Detection using Feature Fusion Set of LBP and HOG
Human detection has become one of the major aspects of real-time modern systems, whether in driverless vehicles, disaster management, or surveillance. Multiple machine learning approaches have been used to find an efficient and effective way of detecting humans. The proposed method mainly addresses the pose-variance problem of human detection and reduces the redundancy that slows a system down. To solve the pose-variance and redundancy problems, mutation and crossover concepts are applied over Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG) feature sets to generate a final fused set. This feature fusion set of LBP and HOG is then fed into a Support Vector Machine (SVM) for classification. To improve detector performance, an unsupervised framework is used for learning, and non-maximum suppression is applied as post-processing to suppress overlapping and redundant windows. The INRIA dataset is used for training and testing. The proposed method is compared with HOG, LBP, and HOG-LBP techniques, and the results show that our method outperforms them.
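The non-maximum suppression step mentioned above is a standard greedy procedure; a minimal NumPy sketch (the IoU threshold of 0.5 is an assumed, conventional value) looks like this:

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over [x1, y1, x2, y2] boxes; keeps the highest-scoring windows."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]          # best-scoring window first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection-over-union of the kept box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]       # drop windows overlapping the kept one
    return keep
```

For example, two heavily overlapping detections of the same person collapse to the single higher-scoring window, while a distant detection survives.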
A robust and efficient video representation for action recognition
This paper introduces a state-of-the-art video representation and applies it
to efficient action recognition and detection. We first propose to improve the
popular dense trajectory features by explicit camera motion estimation. More
specifically, we extract feature point matches between frames using SURF
descriptors and dense optical flow. The matches are used to estimate a
homography with RANSAC. To improve the robustness of homography estimation, a
human detector is employed to remove outlier matches from the human body as
human motion is not constrained by the camera. Trajectories consistent with the
homography are considered as due to camera motion, and thus removed. We also
use the homography to cancel out camera motion from the optical flow. This
results in significant improvement on motion-based HOF and MBH descriptors. We
further explore the recent Fisher vector as an alternative feature encoding
approach to the standard bag-of-words histogram, and consider different ways to
include spatial layout information in these encodings. We present a large and
varied set of evaluations, considering (i) classification of short basic
actions on six datasets, (ii) localization of such actions in feature-length
movies, and (iii) large-scale recognition of complex events. We find that our
improved trajectory features significantly outperform previous dense
trajectories, and that Fisher vectors are superior to bag-of-words encodings
for video recognition tasks. In all three tasks, we show substantial
improvements over the state of the art.
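The camera-motion step described above (SURF/optical-flow matches, homography via RANSAC, outlier trajectories removed) can be sketched with a plain DLT-plus-RANSAC loop in NumPy. This is a generic illustration under assumed parameters (200 iterations, 2-pixel reprojection threshold), not the paper's tuned implementation:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: homography from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector of the stacked constraints
    return H / H[2, 2]

def project(H, pts):
    """Apply a homography to Nx2 points (homogeneous divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    q = pts_h @ H.T
    return q[:, :2] / q[:, 2:3]

def ransac_homography(src, dst, n_iter=200, thresh=2.0, rng=None):
    """RANSAC: sample 4 matches, fit H, keep the model with most reprojection inliers."""
    rng = rng or np.random.default_rng(0)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_H, best_inl = None, np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inl = err < thresh
        if inl.sum() > best_inl.sum():
            best_H, best_inl = H, inl
    return best_H, best_inl
```

In the paper's setting, matches falling inside human-detector boxes would be excluded before this loop, and trajectories consistent with the recovered H (the inliers) are attributed to camera motion and removed.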
Articulated Clinician Detection Using 3D Pictorial Structures on RGB-D Data
Reliable human pose estimation (HPE) is essential to many clinical
applications, such as surgical workflow analysis, radiation safety monitoring
and human-robot cooperation. Proposed methods for the operating room (OR) rely
either on foreground estimation using a multi-camera system, which is a
challenge in real ORs due to color similarities and frequent illumination
changes, or on wearable sensors or markers, which are invasive and therefore
difficult to introduce in the room. Instead, we propose a novel approach based
on Pictorial Structures (PS) and on RGB-D data, which can be easily deployed in
real ORs. We extend the PS framework in two ways. First, we build robust and
discriminative part detectors using both color and depth images. We also
present a novel descriptor for depth images, called histogram of depth
differences (HDD). Second, we extend PS to 3D by proposing 3D pairwise
constraints and a new method that makes exact inference tractable. Our approach
is evaluated for pose estimation and clinician detection on a challenging RGB-D
dataset recorded in a busy operating room during live surgeries. We conduct a
series of experiments to study the different part detectors in conjunction with
the various 2D or 3D pairwise constraints. Our comparisons demonstrate that 3D
PS with RGB-D part detectors significantly improves the results in a visually
challenging operating environment.
Comment: the supplementary video is available at https://youtu.be/iabbGSqRSg
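The abstract gives only the name of the HDD descriptor; the exact formulation is in the paper. As a rough sketch of the idea (histogramming depth differences between pixel pairs at fixed offsets), with the offsets and bin thresholds below chosen purely for illustration:

```python
import numpy as np

def hdd_descriptor(depth_patch, offsets=((0, 4), (4, 0), (4, 4)),
                   thresholds=(-50.0, -10.0, 10.0, 50.0)):
    """Sketch of a histogram-of-depth-differences feature: for each offset,
    bin the depth difference between every pixel and its offset neighbour."""
    d = np.asarray(depth_patch, float)
    h, w = d.shape
    n_bins = len(thresholds) + 1
    feats = []
    for dy, dx in offsets:
        diff = (d[dy:, dx:] - d[:h - dy, :w - dx]).ravel()
        idx = np.searchsorted(np.asarray(thresholds), diff)   # bin index per pixel pair
        hist = np.bincount(idx, minlength=n_bins).astype(float)
        feats.append(hist / max(1, diff.size))                # normalize per offset
    return np.concatenate(feats)
```

Unlike raw color histograms, such depth-difference statistics are insensitive to the illumination changes the abstract cites as a key difficulty in real operating rooms.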
Efficient smile detection by Extreme Learning Machine
Smile detection is a specialized task in facial expression analysis with applications such as photo selection, user experience analysis, and patient monitoring. As one of the most important and informative expressions, a smile conveys underlying emotional states such as joy, happiness, and satisfaction. In this paper, an efficient smile detection approach is proposed based on the Extreme Learning Machine (ELM). Faces are first detected, and a holistic flow-based face registration is applied that needs no manual labeling or key-point detection. ELM is then used to train the classifier. The proposed smile detector is tested with different feature descriptors on publicly available databases, including real-world face images. Comparisons against benchmark classifiers, including Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), suggest that the proposed ELM-based smile detector generally performs better and is very efficient. Compared to state-of-the-art smile detectors, the proposed method achieves competitive results without preprocessing or manual registration.
Online learning and detection of faces with low human supervision
The final publication is available at link.springer.com.
We present an efficient, online, and interactive approach for computing a classifier, called Wild Lady Ferns (WiLFs), for face learning and detection with little human supervision. On the one hand, WiLFs combine online boosting and extremely randomized trees (Random Ferns) to progressively compute an efficient and discriminative classifier. On the other hand, WiLFs use an interactive human-machine approach that combines two complementary learning strategies to considerably reduce the degree of human supervision during learning. The first strategy is query-by-boosting active learning, which requests human assistance on difficult samples as a function of classifier confidence; the second is memory-based learning, which uses Exemplar-based Nearest Neighbors (ENN) to assist the classifier automatically. A pre-trained Convolutional Neural Network (CNN) provides the high-level feature descriptors used for ENN. The proposed approach is therefore fast (WiLFs run at 1 FPS using code that is not fully optimized), accurate (detection rates over 82% on complex datasets), and labor-saving (human assistance percentages below 20%).
As a byproduct, we demonstrate that WiLFs also perform semi-automatic annotation during learning: while the classifier is being computed, WiLFs discover face instances in input images, which are subsequently used for training the classifier online. The advantages of our approach are demonstrated on synthetic and publicly available databases, showing detection rates comparable to offline approaches that require much larger amounts of hand-labeled training data.
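The interplay of the two supervision-reducing strategies can be sketched as a simple decision rule: trust the boosted classifier when its score is confident, fall back to exemplar nearest neighbours when a close exemplar exists, and only otherwise query the human. The thresholds and distance metric below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def assist_or_query(feature, score, exemplars, exemplar_labels,
                    conf_thresh=1.0, nn_thresh=0.5):
    """Return (label, source) for one sample during online learning.
    label is None when a human annotation must be requested."""
    if abs(score) >= conf_thresh:
        return np.sign(score), "classifier"       # confident boosted score
    d = np.linalg.norm(exemplars - feature, axis=1)
    j = int(np.argmin(d))
    if d[j] <= nn_thresh:
        return exemplar_labels[j], "memory"       # ENN-style automatic assist
    return None, "human"                          # active-learning query
```

Only the third branch costs human effort, which is how assistance percentages stay below 20% while the classifier is still being trained.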
Face Detection with Effective Feature Extraction
There is an abundant literature on face detection due to its important role
in many vision applications. Since Viola and Jones proposed the first real-time
AdaBoost based face detector, Haar-like features have been adopted as the
method of choice for frontal face detection. In this work, we show that simple
features other than Haar-like features can also be applied for training an
effective face detector. Since a single feature is not discriminative enough to
separate faces from difficult non-faces, we further improve the generalization
performance of our simple features by introducing feature co-occurrences. We
demonstrate that our proposed features yield a performance improvement compared
to Haar-like features. In addition, our findings indicate that features play a
crucial role in the ability of the system to generalize.
Comment: 7 pages. Conference version published in Asian Conf. Comp. Vision 201
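The co-occurrence idea can be illustrated generically: given weak binary feature responses, conjoin every pair into an AND feature, so the classifier can exploit joint patterns no single feature captures. This is a sketch of the general technique, not the paper's specific feature construction:

```python
import numpy as np

def cooccurrence_features(binary_feats):
    """Augment weak binary features with all pairwise AND co-occurrences."""
    f = np.asarray(binary_feats, dtype=int)
    n = f.shape[-1]
    i, j = np.triu_indices(n, k=1)        # all unordered feature pairs
    pairs = f[..., i] & f[..., j]         # fires only when both features fire
    return np.concatenate([f, pairs], axis=-1)
```

For n base features this yields n + n(n-1)/2 dimensions, so in practice boosting-style feature selection is used to keep only the discriminative conjunctions.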
Beyond Physical Connections: Tree Models in Human Pose Estimation
Simple tree models for articulated objects have prevailed over the last decade.
However, it is also believed that these simple tree models cannot capture the
large variations found in many scenarios, such as human pose estimation.
This paper attempts to address three questions: 1) are simple tree models
sufficient? more specifically, 2) how to use tree models effectively in human
pose estimation? and 3) how shall we use combined parts together with single
parts efficiently?
Assume we have a set of single parts and combined parts, and that the goal is
to estimate a joint distribution over their locations. Surprisingly, we find
that no latent variables are introduced on the Leeds Sport Dataset (LSP) when
learning latent trees for the deformable model, which aims at approximating the
joint distribution of body part locations using a minimal tree structure. This
suggests one can straightforwardly use a mixed representation of single and
combined parts to approximate their joint distribution in a simple tree model.
As such, one only needs to build Visual Categories of the combined parts, and
then perform inference on the learned latent tree. Our method outperformed the
state of the art on the LSP, both in the scenarios when the training images are
from the same dataset and from the PARSE dataset. Experiments on animal images
from the VOC challenge further support our findings.
Comment: CVPR 201
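What makes tree models attractive is that exact max-sum inference over part locations is tractable. As a toy sketch of the idea for the simplest tree (a star with one root and independent children, quadratic "spring" deformation costs), with all scores and offsets below invented for illustration:

```python
import numpy as np

def star_model_inference(root_scores, part_scores, offsets, positions):
    """Exact max-sum inference in a star-shaped tree model: for each root
    hypothesis, each part picks its best location independently; then the
    best root configuration is returned as (total score, [root, parts...])."""
    positions = np.asarray(positions, float)
    best_total, best_cfg = -np.inf, None
    for r, rs in enumerate(root_scores):
        total, cfg = rs, [r]
        for scores, off in zip(part_scores, offsets):
            ideal = positions[r] + off                        # expected part location
            pair = -np.sum((positions - ideal) ** 2, axis=1)  # spring deformation cost
            k = int(np.argmax(scores + pair))
            total += scores[k] + pair[k]
            cfg.append(k)
        if total > best_total:
            best_total, best_cfg = total, cfg
    return best_total, best_cfg
```

A general tree is handled the same way with dynamic programming from the leaves to the root; the paper's contribution is choosing which single and combined parts to place in such a tree so that it approximates the joint distribution well.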