2 research outputs found

    Scale And Pose Invariant Real-time Face Detection And Tracking

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2008
    In this study, face detection and tracking were implemented using one of the most popular recent appearance-based methods, a combination of the AdaBoost algorithm, the Integral Image technique and cascaded classifiers. Faces were trained for five different poses (left, left+45°, frontal, right+45° and right). The CAMSHIFT algorithm was used for face tracking because of its speed and ease of implementation in real-time applications. To keep the cost of image analysis from degrading real-time performance, the work was parallelised across two threads, a main thread and a child thread: the child thread periodically detects faces in the captured frames, while the main thread processes every incoming frame, combines it with the child thread's latest result, and displays it on the user's screen. The resulting face detection and tracking system was implemented successfully and achieved high detection and tracking rates in tests on three image databases and one video database.
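
    The detect-then-track pipeline described in the abstract can be illustrated with a short OpenCV sketch. It uses OpenCV's stock frontal-face Haar cascade and a hue-histogram CamShift tracker as stand-ins: the thesis trains its own cascades for five poses, and the capture source, colour thresholds and termination criteria below are illustrative assumptions rather than values taken from the thesis.

    # Minimal detect-then-track sketch: a Haar cascade finds a face, then
    # CamShift follows it using a hue histogram of the detected region.
    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)          # webcam or a video file path
    track_window = None                # (x, y, w, h) of the face being tracked
    roi_hist = None
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        if track_window is None:
            # Detection step: scan the whole frame with the cascade classifier.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                track_window = (x, y, w, h)
                # Build a hue histogram of the detected face for CamShift.
                roi = hsv[y:y + h, x:x + w]
                mask = cv2.inRange(roi, np.array((0., 60., 32.)),
                                   np.array((180., 255., 255.)))
                roi_hist = cv2.calcHist([roi], [0], mask, [180], [0, 180])
                cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        else:
            # Tracking step: back-project the histogram and run CamShift.
            back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
            pts = cv2.boxPoints(rot_rect).astype(np.int32)
            cv2.polylines(frame, [pts], True, (0, 255, 0), 2)

        cv2.imshow("face", frame)
        if cv2.waitKey(1) & 0xFF == 27:    # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()

    In a real-time setting the detector would be re-run periodically, or whenever CamShift loses the target, instead of only once; that is exactly what the main/child thread split described in the abstract addresses, sketched next.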
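
    The main/child parallelisation can be sketched with Python's threading module: the child thread runs the slow detector on the most recent frame while the main thread keeps grabbing and displaying frames annotated with the child's last result. The thread API, shared variables and locking scheme here are assumptions for illustration, not the thesis's actual implementation.

    # Child thread detects periodically; main thread displays every frame.
    import threading
    import time
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    latest_frame = None        # most recent frame, written by the main thread
    latest_faces = []          # last detection result, written by the child thread
    lock = threading.Lock()
    running = True

    def detector():
        global latest_faces
        while running:
            with lock:
                frame = None if latest_frame is None else latest_frame.copy()
            if frame is None:
                time.sleep(0.01)
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, 1.1, 5)
            with lock:
                latest_faces = list(faces)

    child = threading.Thread(target=detector, daemon=True)
    child.start()

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        with lock:
            latest_frame = frame.copy()
            faces = list(latest_faces)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break

    running = False
    cap.release()
    cv2.destroyAllWindows()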

    Hand gesture recognition in uncontrolled environments

    Human Computer Interaction has long relied on mechanical devices to feed information into computers, with low efficiency. With recent developments in image processing and machine learning methods, the computer vision community is ready to develop the next generation of Human Computer Interaction methods, including Hand Gesture Recognition methods. A comprehensive Hand Gesture Recognition based semantic-level Human Computer Interaction framework for uncontrolled environments is proposed in this thesis. The framework contains novel methods for Hand Posture Recognition, Hand Gesture Recognition and Hand Gesture Spotting. The Hand Posture Recognition method in the proposed framework is capable of recognising predefined still hand postures against cluttered backgrounds. Texture features are used in conjunction with Adaptive Boosting to form a novel feature selection scheme, which can effectively detect and select discriminative texture features from the training samples of the posture classes. A novel Hand Tracking method called Adaptive SURF Tracking is proposed in this thesis. Texture key points are used to track multiple hand candidates in the scene. This tracking method matches texture key points of hand candidates across adjacent frames to calculate the movement directions of hand candidates. With the gesture trajectories provided by the Adaptive SURF Tracking method, a novel classifier called the Partition Matrix is introduced to perform gesture classification in uncontrolled environments with multiple hand candidates. The trajectories of all hand candidates, extracted from the original video at different frame rates, are used to analyse the movements of hand candidates. An alternative gesture classifier based on a Convolutional Neural Network is also proposed; the inputs to the network are approximate trajectory images reconstructed from the tracking results of the Adaptive SURF Tracking method. For Hand Gesture Spotting, a forward spotting scheme is introduced to detect the starting and ending points of the predefined gestures in continuously signed gesture videos. A Non-Sign Model is also proposed to simulate meaningless hand movements between the meaningful gestures. The proposed framework performs well under unconstrained scene settings, including frontal occlusions, background distractions and changing lighting conditions. Moreover, it is invariant to changing scales, speeds and locations of the gesture trajectories.
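
    The tracking idea described above, matching texture key points between adjacent frames to estimate how a hand candidate moves, can be sketched as follows. ORB is used here as a freely available stand-in for SURF (SURF lives in OpenCV's non-free xfeatures2d module), and the movement_direction helper, the match filtering and the grayscale-ROI assumption are illustrative; the Adaptive SURF Tracking method in the thesis involves more than this minimal matcher.

    # Estimate the displacement of a hand candidate between two adjacent frames
    # by matching keypoint descriptors inside its region of interest.
    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=300)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def movement_direction(prev_roi, next_roi):
        """Mean (dx, dy) of matched keypoints between two grayscale uint8 ROIs."""
        kp1, des1 = orb.detectAndCompute(prev_roi, None)
        kp2, des2 = orb.detectAndCompute(next_roi, None)
        if des1 is None or des2 is None:
            return None
        matches = matcher.match(des1, des2)
        if not matches:
            return None
        # Keep only the strongest matches to suppress outliers.
        matches = sorted(matches, key=lambda m: m.distance)[:30]
        disp = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
        return np.mean(disp, axis=0)    # (dx, dy) in pixels

    Accumulating such per-frame displacements for every hand candidate yields the gesture trajectories that a downstream classifier, such as the Partition Matrix or the network sketched below, can consume.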
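
    The alternative Convolutional Neural Network classifier takes reconstructed trajectory images as input. The sketch below assumes 64x64 single-channel trajectory images, ten gesture classes and a small two-block PyTorch architecture; none of these specifics come from the thesis, which does not state its network layout here.

    # A small CNN that maps a trajectory image to gesture-class logits.
    import torch
    import torch.nn as nn

    class TrajectoryCNN(nn.Module):
        def __init__(self, num_classes: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Example: classify a batch of 8 reconstructed trajectory images.
    model = TrajectoryCNN(num_classes=10)
    logits = model(torch.randn(8, 1, 64, 64))
    print(logits.shape)    # torch.Size([8, 10])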