
    Face Detection with Effective Feature Extraction

    There is an abundant literature on face detection due to its important role in many vision applications. Since Viola and Jones proposed the first real-time AdaBoost-based face detector, Haar-like features have been adopted as the method of choice for frontal face detection. In this work, we show that simple features other than Haar-like features can also be used to train an effective face detector. Since a single feature is not discriminative enough to separate faces from difficult non-faces, we further improve the generalization performance of our simple features by introducing feature co-occurrences. We demonstrate that our proposed features yield a performance improvement compared to Haar-like features. In addition, our findings indicate that features play a crucial role in the ability of the system to generalize. Comment: 7 pages. Conference version published in Asian Conf. Comp. Vision 201
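
    The abstract above does not give implementation details, so the following is only a hedged sketch of the general idea: binarize a handful of simple (non-Haar) features, form co-occurrence features by pairing them, and let a boosted ensemble of decision stumps pick out the discriminative combinations. The pixel-difference features, the toy data, and all parameter values are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: AdaBoost over co-occurrences of simple binary features.
# The pixel-difference features and toy data are hypothetical, not the authors' design.
import numpy as np
from itertools import combinations
from sklearn.ensemble import AdaBoostClassifier  # default base learner is a decision stump

def binary_pixel_features(patches, pairs, threshold=0.0):
    """Binarize simple pixel-difference features: 1 if I(p1) - I(p2) > threshold."""
    flat = patches.reshape(len(patches), -1)
    return np.stack([(flat[:, a] - flat[:, b] > threshold).astype(np.uint8)
                     for a, b in pairs], axis=1)

def cooccurrence_features(binary_feats):
    """AND every pair of binary features; a co-occurrence fires only when both fire."""
    idx = list(combinations(range(binary_feats.shape[1]), 2))
    return np.stack([binary_feats[:, i] & binary_feats[:, j] for i, j in idx], axis=1)

# Toy stand-in data: 200 random 24x24 "patches" with made-up face / non-face labels.
rng = np.random.default_rng(0)
patches = rng.random((200, 24, 24))
labels = rng.integers(0, 2, size=200)

pairs = [tuple(rng.integers(0, 24 * 24, size=2)) for _ in range(20)]
X = cooccurrence_features(binary_pixel_features(patches, pairs))
clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```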

    Asymmetric Pruning for Learning Cascade Detectors

    Cascade classifiers are one of the most important contributions to real-time object detection. Nonetheless, many challenging problems arise in training cascade detectors. One common issue is that each node classifier is trained with a symmetric learning criterion, yet a low misclassification error rate does not guarantee the optimal node learning goal of a cascade classifier, i.e., an extremely high detection rate with a moderate false positive rate. In this work, we present a new approach to training an effective node classifier in a cascade detector. The algorithm is based on two key observations: 1) redundant weak classifiers can be safely discarded; 2) the final detector should satisfy the asymmetric learning objective of the cascade architecture. To achieve this, we separate classifier training into two steps: finding a pool of discriminative weak classifiers/features, and then training the final classifier by pruning weak classifiers that contribute little to the asymmetric learning criterion (asymmetric classifier construction). This model-reduction approach accelerates learning while still meeting the pre-determined learning objective. Experimental results on both face and car data sets verify the effectiveness of the proposed algorithm. On the FDDB face data set, our approach achieves state-of-the-art performance, demonstrating its advantage. Comment: 14 pages
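
    The two-step training described above can be illustrated with a small, hedged sketch: boost a pool of weak classifiers first, then keep only as many of them (plus a node threshold) as are needed to meet an asymmetric node goal such as a very high detection rate at a moderate false positive rate. For simplicity this sketch prunes a suffix of the boosted ensemble rather than selecting an arbitrary subset, so it approximates the idea rather than reproducing the paper's algorithm; the data and target rates are placeholders.

```python
# Hedged sketch of the two-step idea: boost a pool of weak classifiers, then keep the
# shortest prefix (plus a node threshold) that meets an asymmetric node goal.
# This prunes a suffix only, which simplifies the pruning described above.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def prune_for_node_goal(clf, X, y, min_detection=0.995, max_fpr=0.5):
    """Return (n_weak_kept, threshold, achieved_fpr) for the first boosted prefix that
    keeps at least `min_detection` of positives while pushing FPR below `max_fpr`."""
    pos, neg = (y == 1), (y == 0)
    for t, scores in enumerate(clf.staged_decision_function(X), start=1):
        thr = np.quantile(scores[pos], 1.0 - min_detection)   # keep ~all positives
        fpr = np.mean(scores[neg] >= thr)
        if fpr <= max_fpr:
            return t, thr, fpr
    return clf.n_estimators, thr, fpr                          # fall back to full ensemble

# Toy validation data standing in for face / non-face feature vectors.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 50))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=200).fit(X, y)
n_used, thr, fpr = prune_for_node_goal(clf, X, y)
print(f"kept {n_used} of 200 weak classifiers, threshold {thr:.3f}, FPR {fpr:.3f}")
```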

    A Modified Adaboost Algorithm to Reduce False Positives in Face Detection

    We present a modified AdaBoost algorithm for face detection that aims to reduce the false-positive detection rate. We built a new AdaBoost weighting system that considers both the total error of each weak classifier and its classification probability. The probability is determined by computing the positive and negative classification errors of each weak classifier. The new weighting system gives higher weights to weak classifiers with the best positive classifications, which reduces false positives during detection. Experimental results reveal that the original AdaBoost and the proposed method achieve comparable face detection rates, while the proposed method reduces false positives by almost a factor of four.
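
    The abstract does not spell out the exact weighting formula, so the sketch below is only a hypothetical illustration of the general idea: a hand-rolled AdaBoost loop in which each weak classifier's vote is computed from its total weighted error and then scaled down when its error on the negative class (the source of false positives) is high. The specific penalty term `alpha *= (1 - fp_err)` is an assumption made for illustration, not the paper's rule.

```python
# Hypothetical sketch of an AdaBoost variant whose classifier weights also reflect
# per-class error; the exact weighting used in the paper is not given in the abstract.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_with_false_positive_penalty(X, y, rounds=50, eps=1e-10):
    """y in {-1, +1}. Each stump's vote (alpha) uses the standard AdaBoost term and is
    then scaled down when the stump's weighted error on negatives is high (assumption)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        miss = pred != y
        err = np.clip(np.dot(w, miss), eps, 1 - eps)                    # total weighted error
        neg_mass = max(np.dot(w, y == -1), eps)
        fp_err = np.clip(np.dot(w, miss & (y == -1)) / neg_mass, eps, 1 - eps)
        alpha = 0.5 * np.log((1 - err) / err) * (1.0 - fp_err)          # illustrative penalty
        w *= np.exp(-alpha * y * pred)                                  # usual re-weighting
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def predict(stumps, alphas, X):
    return np.sign(sum(a * s.predict(X) for s, a in zip(stumps, alphas)))

# Toy usage with made-up data.
X = np.random.default_rng(2).normal(size=(300, 10))
y = np.where(X[:, 0] > 0, 1, -1)
stumps, alphas = boost_with_false_positive_penalty(X, y)
```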

    A PAR-1–dependent orientation gradient of dynamic microtubules directs posterior cargo transport in the Drosophila oocyte

    A PAR-1–mediated bias in microtubule organization in the Drosophila oocyte underlies posterior-directed mRNA transport

    Detecting Curved Objects Against Cluttered Backgrounds

    Detecting curved objects against cluttered backgrounds is a hard problem in computer vision. We present new low-level and mid-level features designed for these environments. The low-level features are fast to compute because they employ an integral-image approach, which makes them especially useful in real-time applications. The mid-level features are built from the low-level features and are optimized for curved-object detection. We test the usefulness of these features by designing an object detection algorithm around them: the mid-level features are converted into weak classifiers, which are then combined into a strong classifier using AdaBoost. The resulting strong classifier is tested on the problem of detecting heads with shoulders. On a database of over 500 images of people, cropped to contain the head and shoulders and with a diverse set of backgrounds, the detection rate is 90%, while the false positive rate on a database of 500 negative images is less than 2%.
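
    The integral-image trick mentioned above is what makes such low-level features cheap: after a single cumulative-sum pass over the image, the sum of any axis-aligned rectangle costs only four array lookups. A minimal, generic sketch (not the thesis code) follows.

```python
# Minimal sketch of the integral-image trick: one cumulative-sum pass, then any
# axis-aligned rectangle sum in O(1) time via four lookups.
import numpy as np

def integral_image(img):
    """Zero-padded integral image: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using four lookups into the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 4) == img[1:3, 1:4].sum()
```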

    Automatic detection and prevention of cyberbullying

    The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but they also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that, on average, 20% to 40% of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. The specific participants in a cyberbullying conversation (harasser, victim, or bystander) are also identified to support the analysis of the interactions involved. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and of fine-grained cyberbullying categories.
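
    As a generic illustration of the kind of proof-of-concept experiment such a corpus enables (not the authors' system), a bag-of-words classifier can be trained to assign posts to fine-grained categories; the tiny example corpus below is a placeholder standing in for the annotated Dutch data.

```python
# Generic proof-of-concept sketch (not the authors' system): a bag-of-words classifier
# for flagging potentially harmful posts, assuming a labeled corpus of (text, label) pairs.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Placeholder data; a real experiment would use the annotated corpus described above.
posts = ["you are an idiot", "see you at practice tomorrow",
         "nobody likes you, just leave", "great goal today!"]
labels = ["insult", "other", "threat", "other"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["leave or else"]))
```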

    Detecting Rip Currents from Images

    Rip current images are useful for climate studies but time-consuming to annotate manually across thousands of images. Object detection is a possible solution for automatic annotation because of its success and popularity in identifying regions of interest in images, such as human faces. Like faces, rip currents have distinct features that set them apart from other areas of an image, such as the more generic patterns of the surf zone. Many distinct object detection methods have been applied in face detection research. In this thesis, the best fit for a rip current object detector is found by comparing these methods, which are further improved with Haar features created specifically for rip current images. The compared methods include maximum distance from the average, support vector machines, convolutional neural networks, the Viola-Jones object detector, and a meta-learner. The results are compared in terms of accuracy, false positive rate, and detection rate. Viola-Jones has the best baseline performance, achieving a detection rate of 0.88 with only 15 false positives on a test set of 53 rip currents. The meta-learner integrates the new Haar features, which are developed in accordance with the original Viola-Jones algorithm. AdaBoost, used as a feature-ranking algorithm, shows that the new Haar features extract more meaningful information from rip current images than some of the existing features. When these features are applied, the meta-classifier improves on the stand-alone Viola-Jones detector, reducing its false positives by 47% while retaining a similar computational cost and detection rate.
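
    For context, applying a trained Viola-Jones-style cascade at test time typically looks like the hedged sketch below, which assumes a cascade has already been trained offline (for example with OpenCV's opencv_traincascade tool) on rip-current patches; the model and image file names are hypothetical placeholders.

```python
# Sketch of applying a Viola-Jones-style cascade with OpenCV, assuming a cascade was
# trained offline on rip-current patches. Filenames below are hypothetical placeholders.
import cv2

cascade = cv2.CascadeClassifier("rip_current_cascade.xml")   # hypothetical trained model
img = cv2.imread("beach_scene.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical input image

detections = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(48, 48))
for (x, y, w, h) in detections:
    print(f"candidate rip current at x={x}, y={y}, size {w}x{h}")
```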

    From filters to features: Scale-space analysis of edge and blur coding in human vision

    To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO
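
    A rough 1-D sketch may help make the two-stage scheme described above concrete: an odd-symmetric first-derivative Gaussian filter, half-wave rectification, then a second-derivative Gaussian stage, with the edge read off as the peak of the scale-space response map. This is a simplified reading of the model, not the authors' published code; in particular, the sigma-squared scale normalization is an assumption made here so that the peak scale grows with edge blur.

```python
# Simplified 1-D sketch of a two-stage, half-wave-rectified filtering scheme; the
# sigma**2 scale normalization is an assumption, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_scale_space(luminance, scales):
    """Return a (scale, position) response map; the peak's position marks the edge,
    and the peak scale increases with the edge's blur."""
    rows = []
    for sigma in scales:
        d1 = np.maximum(gaussian_filter1d(luminance, sigma, order=1), 0.0)  # stage 1 + rectify
        d2 = -gaussian_filter1d(d1, sigma, order=2)                         # on-centre stage 2
        rows.append(sigma ** 2 * np.maximum(d2, 0.0))                       # normalized response
    return np.array(rows)

x = np.arange(512, dtype=float)
blurred_edge = 0.5 * (1 + np.tanh((x - 256) / 10.0))   # a smooth, blurred step edge
scales = np.arange(1, 40)
R = edge_scale_space(blurred_edge, scales)
s_idx, p_idx = np.unravel_index(R.argmax(), R.shape)
print("edge located at x =", p_idx, "; response peaks at scale sigma =", scales[s_idx])
```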