525 research outputs found

    Speeding up Adaboost object detection with motion segmentation and Haar feature acceleration

    A key challenge in a surveillance system is the object detection task. Object detection in general is a non-trivial problem. A sub-problem within the broader context of object detection that many researchers focus on is face detection. Numerous techniques have been proposed for face detection. One of the better-performing algorithms was proposed by Viola et al. This algorithm is based on AdaBoost and uses Haar features to detect objects. The main reasons for its popularity are its very low false positive rates and the fact that the classifier network can be trained for any detection task. The use of Haar basis functions to represent key object features is the key to its success. The basis functions are organized as a network to form a strong classifier. To detect objects, this technique divides each input image into non-overlapping sub-windows and the strong classifier is applied to each sub-window to detect the presence of an object. The process is repeated at multiple scales of the input image to detect objects of various sizes. In this thesis we propose an object detection system that uses object segmentation as a preprocessing step. We use the Mixture of Gaussians (MoG) method proposed by Stauffer et al. for object segmentation. One key advantage of using segmentation to extract image regions of interest is that it reduces the number of search windows sent to the detection task, thereby reducing the computational complexity and the execution time. Moreover, owing to the computational complexity of both the segmentation and detection algorithms used in the system, we propose hardware architectures for accelerating the key computationally intensive blocks. In this thesis we propose a hardware architecture for MoG and also for a key compute-intensive block within the AdaBoost algorithm corresponding to the Haar feature computation.
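    A minimal software sketch of the pipeline idea described in this abstract, assuming OpenCV 4.x: a stock MOG2 background subtractor prunes the frame to moving regions before a pre-trained Haar cascade runs inside them. This only illustrates how segmentation reduces the search windows; it is not the thesis's hardware implementation, and the input file name is a placeholder.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                   # MoG foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((3, 3), np.uint8))       # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,  # OpenCV 4.x API
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:          # skip tiny blobs
            continue
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        # The multi-scale Haar cascade runs only inside the moving region,
        # instead of over every sub-window of the full frame.
        faces = cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        for fx, fy, fw, fh in faces:
            cv2.rectangle(frame, (x + fx, y + fy),
                          (x + fx + fw, y + fy + fh), (0, 255, 0), 2)
cap.release()
```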

    Real-time embedded eye detection system

    The detection of a person’s eyes is a basic task in applications as important as iris recognition in biometric identification or fatigue detection in driving assistance systems. Current commercial and research systems use software frameworks that require a dedicated computer, whose power consumption, size, and price are significantly large. This paper presents a hardware-based embedded solution for eye detection in real time. From an algorithmic point of view, the popular Viola-Jones approach has been redesigned to enable a highly parallel, single-pass image-processing implementation. Synthesized and implemented in an All-Programmable System-on-Chip (AP SoC), this proposal allows us to process more than 88 frames per second (fps), with the classifier taking less than 2 ms per image. Experimental validation has been successfully addressed in an iris recognition system that works with walking subjects. In this case, the prototype module includes a CMOS digital imaging sensor providing 16 Mpixel images, and it outputs a stream of detected eyes as 640 × 480 images. Experiments for determining the accuracy of the proposed system in terms of eye detection are performed on the CASIA-Iris-distance V4 database. Significantly, they show that the accuracy in terms of eye detection is 100%. This work has been partially developed within the project RTI2018-099522-B-C4X, funded by the Gobierno de España and FEDER funds, and the ARMORI project (CEIATECH-10) funded by the University of Málaga. Portions of the research in this paper use the CASIA-Iris V4 collected by the Chinese Academy of Sciences - Institute of Automation (CASIA).
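    For reference, a software-only sketch of the detection step this system implements in hardware, assuming OpenCV's stock Haar eye cascade: detected eyes are cropped as fixed 640 × 480 patches, mirroring the output format described above. The input file name is a placeholder and no claim is made about matching the paper's accuracy or timing.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
img = cv2.imread("walking_subject.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame

eyes = eye_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
for i, (x, y, w, h) in enumerate(eyes):
    cx, cy = x + w // 2, y + h // 2                       # centre of the detection
    x0 = min(max(cx - 320, 0), max(img.shape[1] - 640, 0))  # clamp 640x480 crop
    y0 = min(max(cy - 240, 0), max(img.shape[0] - 480, 0))
    crop = img[y0:y0 + 480, x0:x0 + 640]
    cv2.imwrite(f"eye_{i}.png", crop)                     # one image per detected eye
```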

    Detecting pedestrians in surveillance videos based on Convolutional Neural Network and Motion


    Fast Face Detector Training Using Tailored Views

    Face detection is an important task in computer vision and often serves as the first step for a variety of applications. State-of-the-art approaches use efficient learning algorithms and train on large amounts of manually labeled imagery. Acquiring appropriate training images, however, is very time-consuming and does not guarantee that the collected training data is representative in terms of data variability. Moreover, available data sets are often acquired under controlled settings, restricting, for example, scene illumination or 3D head pose to a narrow range. This paper takes a look into the automated generation of adaptive training samples from a 3D morphable face model. Using statistical insights, the tailored training data guarantees full data variability and is enriched by arbitrary facial attributes such as age or body weight. Moreover, it can automatically adapt to environmental constraints, such as illumination or viewing angle of recorded video footage from surveillance cameras. We use the tailored imagery to train a new many-core implementation of Viola-Jones’ AdaBoost object detection framework. The new implementation is not only faster but also enables the use of multiple feature channels, such as color features, at training time. In our experiments we trained seven view-dependent face detectors and evaluated them on the Face Detection Data Set and Benchmark (FDDB). Our experiments show that the use of tailored training imagery outperforms state-of-the-art approaches on this challenging dataset.
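    A rough sketch of the tailored-sampling idea, under stated assumptions: pose and illumination are drawn from ranges matched to the target footage while shape and expression coefficients come from the morphable-model prior. The function render_face(), the coefficient dimensions, and the parameter ranges are hypothetical stand-ins; the paper's actual model and renderer are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_face(shape_coeffs, expr_coeffs, attributes, yaw_deg, light_elev_deg):
    """Placeholder renderer: a real implementation would rasterize the 3DMM."""
    return np.zeros((64, 64), dtype=np.uint8)

def sample_tailored_view(yaw_range=(-15.0, 15.0), light_elev_range=(20.0, 70.0)):
    """Draw one synthetic training face matching an assumed camera setup."""
    shape_coeffs = rng.standard_normal(80)        # identity coefficients from the prior
    expr_coeffs = rng.standard_normal(30)         # expression coefficients
    attributes = {"age": rng.uniform(18, 80),     # arbitrary facial attributes
                  "weight": rng.uniform(50, 110)}
    yaw = rng.uniform(*yaw_range)                 # view-dependent pose bin
    light_elev = rng.uniform(*light_elev_range)   # scene illumination constraint
    return render_face(shape_coeffs, expr_coeffs, attributes, yaw, light_elev)

positives = [sample_tailored_view() for _ in range(1000)]  # synthetic positive set
```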

    Acceleration and energy consumption optimization in cascading classifiers for face detection on low-cost ARM big.LITTLE asymmetric architectures

    This paper proposes a mechanism to accelerate and optimize the energy consumption of face detection software based on Haar-like cascading classifiers, taking advantage of the features of low-cost Asymmetric Multicore Processors (AMPs) with a limited power budget. A modelling and task scheduling/allocation strategy is proposed in order to efficiently make use of the existing features of big.LITTLE ARM processors, including: (I) source-code adaptation for parallel computing, which enables code acceleration by applying the OmpSs programming model, a task-based programming model that handles data dependencies between tasks in a transparent fashion; (II) different OmpSs task allocation policies which take into account the processor asymmetry and can dynamically set processing resources in a more efficient way based on their particular features. The proposed mechanism can be efficiently applied to take advantage of the processing elements available on low-cost and low-energy multi-core embedded devices executing object detection algorithms based on cascading classifiers. Although these classifiers yield the best results for detection algorithms in the field of computer vision, their high computational requirements prevent them from being used on these devices under real-time constraints. Finally, we compare the energy efficiency of a heterogeneous architecture based on asymmetric multicore processors with suitable task scheduling against that of a homogeneous symmetric architecture.
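    A conceptual analogue of the asymmetry-aware allocation idea, written in Python rather than OmpSs: heavy per-stripe classifier tasks are pinned to the big cluster and lighter tasks to the LITTLE cluster. The core ID sets are board-specific assumptions (here cores 0-3 LITTLE and 4-7 big, as on some Exynos big.LITTLE SoCs), detect_stripe() is a stand-in for the real cascade work, and this is not the paper's scheduler.

```python
import os
from concurrent.futures import ProcessPoolExecutor

LITTLE_CORES = {0, 1, 2, 3}   # assumed LITTLE cluster core IDs
BIG_CORES = {4, 5, 6, 7}      # assumed big cluster core IDs

def pin_to(cores):
    # Linux-only: restrict this worker process to the given core set.
    os.sched_setaffinity(0, cores)

def detect_stripe(stripe_id):
    # Placeholder for running the Haar cascade over one image stripe.
    return stripe_id

if __name__ == "__main__":
    # Two pools model the two allocation targets; a real policy would decide
    # at run time which tasks go where based on their measured cost.
    with ProcessPoolExecutor(max_workers=4, initializer=pin_to,
                             initargs=(BIG_CORES,)) as big_pool, \
         ProcessPoolExecutor(max_workers=4, initializer=pin_to,
                             initargs=(LITTLE_CORES,)) as little_pool:
        heavy = [big_pool.submit(detect_stripe, i) for i in range(0, 8)]
        light = [little_pool.submit(detect_stripe, i) for i in range(8, 16)]
        results = [f.result() for f in heavy + light]
```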