120 research outputs found

    Tracking of vehicle license plates in video sequences using the DE-Condensation algorithm

    Get PDF
    Automated vehicle identification (AVI) remains an important research issue and continues to draw attention in the machine vision community. Its potential commercial applications include automatic barrier systems, automatic payment of parking or highway toll fees, automatic locating of stolen vehicles, automatic calculation of traffic volume, and so on. The license plate enables us to identify a vehicle and its owner, and license plate recognition is the most effective method for vehicle identification. A suitable and promising solution is visual recognition of the license plate from a camera view. This approach is attractive because it does not require vehicles to carry additional equipment such as special RF transmitters, so such systems can be deployed in the field without additional cost. However, visual license plate detection and recognition is a very difficult task: vehicles run in an outdoor environment, where lighting conditions can change rapidly, weather conditions can cause poor image quality, license plates can be dirty or in poor condition, and occlusions occur frequently. Visual License Plate Recognition (VLPR) systems may therefore fail because of uncontrollable external conditions. Besides the challenging nature of the problem, the high dimensionality of the VLPR problem may impose a significant computational load on the target processing platform.

    The aim of this work is 3D tracking of the license plate, i.e. determining the state (spatial position and 3D orientation) of the plate from sequential frames of the video. This can be accomplished in a brute-force way by testing every possible orientation and translation and then selecting the one that best fits the current frame. If all six degrees of spatial freedom of the object are to be determined, the state space of the object is six-dimensional. Setting the number of possible values of each degree of freedom to 100, tracking by brute force then requires 100^6 = 10^12 comparisons of a state with the image data. Even with such a limited resolution and a six-dimensional state space, it is clear that real-time tracking by brute force is computationally infeasible. That is where stochastic tracking becomes meaningful. A stochastic process is one whose behaviour is non-deterministic, in that the next state of the environment is partially but not fully determined by the previous state. Instead of comparing every possible configuration of the object with each video frame, the idea behind stochastic tracking is to make a set of guesses of the state, compare these guesses with the current frame, and use the result of this comparison as the basis for a new set of guesses when the next frame arrives. The new guesses are made by selecting the best guesses from the last frame and applying a model of the movement of the object from one frame to the next. The set of guesses (called particles or samples) will converge, frame by frame, around the correct state of the object.

    In recent years there has been great interest in applying Particle Filtering to computer vision problems. This specialized Particle Filtering method for computer vision is known as Condensation, or Sequential Importance Sampling. The Condensation algorithm uses factored sampling together with a given dynamic model to propagate an entire probability distribution for the object's position and shape over time, and it can perform robust tracking of object motion. On the other hand, its convergence greatly depends on the trade-off between the number of particles (hypotheses) and the fitness of the dynamic model; for example, when the dynamics are complex or poorly modelled, thousands of samples are usually required for real applications. In order to improve the performance of the Condensation algorithm, the DE-Condensation algorithm is proposed, which is an integration of the Differential Evolution and Condensation algorithms. DE-Condensation is used for estimating and tracking the 3D position and orientation of license plates from a monocular camera view. The performance and computational load of the Extended Kalman filter, the Condensation algorithm, the Genetic Condensation algorithm and the DE-Condensation algorithm are compared; in these experiments the DE-Condensation algorithm performs considerably better than the other three. Keywords: License Plate Tracking, Condensation, DE-Condensation.
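    The Condensation loop described above (resample the best guesses, predict with a motion model, re-weight against the new frame) can be sketched as a generic sequential importance resampling filter. The code below is only a minimal illustration under assumed interfaces: the 6-D state layout, the `likelihood` function and the Gaussian random-walk motion model are placeholders, not the thesis's actual observation or dynamic models.

```python
import numpy as np

STATE_DIM = 6  # x, y, z translation + roll, pitch, yaw (assumed layout)

def likelihood(frame, state):
    """Placeholder observation model: score how well a hypothesised plate
    pose `state` explains the current `frame`. A real system would project
    the plate into the image and compare against edge/appearance features."""
    return np.exp(-0.5 * np.sum(state ** 2))  # dummy score for illustration

def condensation_step(particles, weights, frame, motion_noise=0.05, rng=None):
    """One Condensation (SIR) iteration: resample, predict, re-weight."""
    rng = rng or np.random.default_rng()
    n = len(particles)

    # 1. Resample: keep the most plausible guesses from the previous frame.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]

    # 2. Predict: apply the dynamic model (here a simple random walk).
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)

    # 3. Measure: re-weight each hypothesis against the new frame.
    weights = np.array([likelihood(frame, p) for p in particles])
    weights /= weights.sum()
    return particles, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, (500, STATE_DIM))
    weights = np.full(500, 1.0 / 500)
    for frame in range(10):                     # stand-in for video frames
        particles, weights = condensation_step(particles, weights, frame, rng=rng)
    estimate = np.average(particles, axis=0, weights=weights)
    print("estimated 6-D plate state:", estimate)
```

    The DE-Condensation variant discussed in the abstract would additionally refine the particle set with Differential Evolution between the prediction and measurement steps; that refinement is omitted from this sketch.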

    Video analysis based vehicle detection and tracking using an MCMC sampling framework

    Full text link
    This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Instead, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, this study introduces a likelihood model that combines appearance analysis with information from motion parallax. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
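    The MCMC alternative to plain importance sampling that the article advocates can be illustrated with a bare-bones Metropolis-Hastings estimation step. This is only a sketch under assumed interfaces: the state layout, the `log_likelihood` function and the random-walk proposal are placeholders, not the article's combined appearance/motion-parallax likelihood or its MRF interaction model.

```python
import numpy as np

def log_likelihood(state, observation):
    """Placeholder likelihood: a Gaussian around an (assumed) observed
    vehicle state. The paper instead combines appearance analysis with
    motion-parallax information."""
    return -0.5 * np.sum((state - observation) ** 2)

def mh_track_step(observation, init_state, n_samples=1000, step=0.1, rng=None):
    """Estimate a vehicle state with Metropolis-Hastings sampling.
    A random-walk proposal explores the state space; samples concentrate
    where the likelihood is high, avoiding the degeneracy of plain
    importance sampling in higher dimensions."""
    rng = rng or np.random.default_rng()
    state = np.asarray(init_state, dtype=float)
    logp = log_likelihood(state, observation)
    samples = []
    for _ in range(n_samples):
        proposal = state + rng.normal(0.0, step, state.shape)
        logp_new = log_likelihood(proposal, observation)
        if np.log(rng.random()) < logp_new - logp:   # accept/reject test
            state, logp = proposal, logp_new
        samples.append(state.copy())
    return np.mean(samples, axis=0)                  # posterior-mean estimate

if __name__ == "__main__":
    obs = np.array([12.0, 5.0, 30.0])                # e.g. x, y, depth (assumed)
    print(mh_track_step(obs, init_state=[10.0, 4.0, 28.0]))
```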

    Visual Servoing

    Get PDF
    The goal of this book is to introduce current vision applications by leading researchers around the world and to offer knowledge that can also be applied widely in other fields. It collects the main current studies on machine vision and makes a strong case for the applications in which machine vision is employed. The contents demonstrate how machine vision theory is realized in different fields. Beginners will find it easy to follow developments in visual servoing, while engineers, professors and researchers can study the chapters and adapt the methods to further applications.

    Visual Perception for Manipulation and Imitation in Humanoid Robots

    Get PDF
    This thesis deals with visual perception for manipulation and imitation in humanoid robots. In particular, real-time applicable methods for object recognition and pose estimation as well as for markerless human motion capture have been developed. The only sensor used was a small-baseline stereo camera system (approximately human eye distance). An extensive experimental evaluation has been performed on simulated as well as real image data from real-world scenarios using the humanoid robot ARMAR-III.

    Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms

    Get PDF
    This work is a contribution to understanding multi-object traffic scenes from video sequences. All data are provided by a camera system mounted on top of the autonomous driving platform AnnieWAY. The proposed probabilistic generative model reasons jointly about the 3D scene layout and the 3D location and orientation of objects in the scene. In particular, the scene topology, geometry and traffic activities are inferred from short video sequences.

    Mechatronic Systems

    Get PDF
    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in manufacturing time and cost. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics with applications in various fields, such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics to the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of Chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.

    Object Tracking

    Get PDF
    Object tracking consists in estimating the trajectories of moving objects in a sequence of images. Automating computer object tracking is a difficult task: the dynamics of the many changing parameters that represent the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods and systems. Both the state of the art of object tracking methods and new trends in research are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The intention of the editor was to follow up the very rapid progress in the development of methods as well as the extension of their applications.

    A Trainable System for Object Detection in Images and Video Sequences

    Get PDF
    This thesis presents a general, trainable system for object detection in static images and video sequences. The core system finds a certain class of objects in static images of completely unconstrained, cluttered scenes without using motion, tracking, or handcrafted models and without making any assumptions about the scene structure or the number of objects in the scene. The system takes a set of positive and negative example images as training input, transforms the pixel images to a Haar wavelet representation, and uses a support vector machine classifier to learn the difference between in-class and out-of-class patterns. To detect objects in out-of-sample images, we do a brute-force search over all the subwindows in the image. This system is applied to face, people, and car detection with excellent results. For our extensions to video sequences, we augment the core static detection system in several ways: 1) extending the representation to five frames, 2) implementing an approximation to a Kalman filter, and 3) modeling detections in an image as a density and propagating this density through time according to measured features. In addition, we present a real-time version of the system that is currently running in a DaimlerChrysler experimental vehicle. As part of this thesis, we also present a system that, instead of detecting full patterns, uses a component-based approach. We find it to be more robust to occlusions, rotations in depth, and severe lighting conditions for people detection than the full-body version. We also experiment with various other representations, including pixels and principal components, and show results that quantify how the number of features, color, and gray level affect performance.
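    The core detection strategy (a learned classifier evaluated by brute-force search over all subwindows) can be sketched as below. This is only an illustration under stated assumptions: the raw-pixel `features` function and the plain linear SVM stand in for the thesis's Haar wavelet representation and trained classifier, and the window size and toy data are invented for the example.

```python
import numpy as np
from sklearn.svm import LinearSVC

WIN_H, WIN_W = 32, 32          # assumed detection-window size

def features(patch):
    """Stand-in for the Haar wavelet representation: raw pixels, flattened."""
    return patch.astype(float).ravel()

def train_detector(pos_patches, neg_patches):
    """Learn in-class vs. out-of-class patterns from example windows."""
    X = np.array([features(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC().fit(X, y)

def detect(image, clf, stride=8):
    """Brute-force search over all subwindows, as in the core static system."""
    hits = []
    for top in range(0, image.shape[0] - WIN_H + 1, stride):
        for left in range(0, image.shape[1] - WIN_W + 1, stride):
            patch = image[top:top + WIN_H, left:left + WIN_W]
            if clf.decision_function([features(patch)])[0] > 0:
                hits.append((top, left))
    return hits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = [rng.random((WIN_H, WIN_W)) + 1.0 for _ in range(20)]   # toy positives
    neg = [rng.random((WIN_H, WIN_W)) for _ in range(20)]         # toy negatives
    clf = train_detector(pos, neg)
    print(detect(rng.random((64, 64)), clf))
```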