8 research outputs found

    Methodology for modeling a 3D dynamic environment based on Bayes' theorem

    METHODS OF MODELING A 3D DYNAMIC ENVIRONMENT BASED ON BAYES' THEOREM / A. HOSPAD. We propose a new probabilistic approach to building spatial representations of a dynamic environment from 3D laser measurements. Unlike most previously developed techniques, which are computationally costly for this problem, the proposed method can be applied in real time even in the presence of a large number of dynamic objects. Existing methods for analysing foreground activity in an image generally do not take into account the uncertainty introduced during sensing. We show that the problem of detecting dynamic objects can be solved online by means of a sequential Bayesian framework, in which all parameters involved in the detection process have a probabilistic interpretation. When used in real-world conditions, the results obtained by the proposed method can be applied to various tasks such as robot navigation, map building, and localization.
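    A minimal sketch of the kind of sequential Bayesian update described above, applied per cell of a 3D laser grid; the sensor-model probabilities, prior, and threshold below are illustrative assumptions, not the authors' implementation.

        import numpy as np

        # Hypothetical sensor model: probability that a new scan "hits" the
        # previously mapped surface in a cell, given that the cell is dynamic
        # vs. static (illustrative values, not from the paper).
        P_HIT_GIVEN_DYNAMIC = 0.65
        P_HIT_GIVEN_STATIC = 0.95

        def logit(p):
            return np.log(p / (1.0 - p))

        def sequential_bayes_update(log_odds, hits):
            # One recursive Bayes step over all cells: add the log-likelihood
            # ratio of the observation under "dynamic" vs. "static".
            l_hit = np.log(P_HIT_GIVEN_DYNAMIC / P_HIT_GIVEN_STATIC)
            l_miss = np.log((1 - P_HIT_GIVEN_DYNAMIC) / (1 - P_HIT_GIVEN_STATIC))
            return log_odds + np.where(hits, l_hit, l_miss)

        # Usage: start from a prior of 0.2 that any cell is dynamic, fold in
        # scans, then flag cells that are now more likely dynamic than static.
        cells = np.full(1000, logit(0.2))
        for _ in range(10):
            scan_hits = np.random.rand(1000) > 0.3   # stand-in for real scan data
            cells = sequential_bayes_update(cells, scan_hits)
        dynamic_mask = cells > logit(0.5)

    In this sketch each update only adds a log-likelihood ratio per observed cell, so the cost per scan is linear in the number of observed cells, which is what makes an online, real-time formulation plausible.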

    CORRECTING FALSE SEGMENTATION IN VIDEO USING IMAGE OVER-SEGMENTATION

    Moving-object detection is a fundamental step in many vision-based applications, and background subtraction is the typical method. When the scene exhibits significant dynamism, a method based on a mixture of Gaussians offers a good balance between accuracy and complexity, but it suffers from two kinds of false segmentation: moving shadows incorrectly detected as objects, and some actual moving objects not detected at all. In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments; its goal is to simplify and/or change the representation of the image into something more meaningful and easier to analyse. Here, colour clustering based on k-means together with image over-segmentation is used to partition the input frame into patches, shadow suppression is performed in the HSV colour space, and the outputs of the mixture of Gaussians are combined with the colour-clustered regions in a module for area-confidence measurement. In this way, the two major segmentation errors can be corrected. Experimental results show that the proposed approach can significantly enhance segmentation results.
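    A rough sketch of the main ingredients described above, assuming OpenCV: a mixture-of-Gaussians foreground mask, shadow suppression in HSV space, and k-means colour clustering of the frame into patches whose foreground status is decided by an area-confidence vote. The thresholds, cluster count, and confidence rule are placeholder assumptions, not the paper's exact module.

        import cv2
        import numpy as np

        mog = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

        def segment_frame(frame, k=8):
            # 1. Mixture-of-Gaussians foreground mask (OpenCV marks shadows as 127).
            fg = mog.apply(frame)
            moving = (fg == 255).astype(np.uint8)

            # 2. HSV shadow suppression: discard pixels that are merely a darkened
            #    version of the background (value ratio inside a shadow band).
            v = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[..., 2].astype(np.float32)
            bg_v = cv2.cvtColor(mog.getBackgroundImage(),
                                cv2.COLOR_BGR2HSV)[..., 2].astype(np.float32)
            ratio = v / (bg_v + 1e-6)
            not_shadow = ((ratio < 0.5) | (ratio > 0.9)).astype(np.uint8)  # placeholder band
            moving &= not_shadow

            # 3. k-means colour clustering to over-segment the frame into patches.
            pixels = frame.reshape(-1, 3).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
            _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
            patches = labels.reshape(frame.shape[:2])

            # 4. Area-confidence vote: keep a patch if enough of it is moving.
            keep = np.zeros_like(moving)
            for p in range(k):
                mask = patches == p
                if moving[mask].mean() > 0.4:            # placeholder confidence
                    keep[mask] = 1
            return keep * 255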

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for Statistical Background Modeling. In the process of developing our framework we also focus on two other topics: motion trajectory estimation for global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework is used for dynamic scene understanding and for recognition of individuals and threats from image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, in which we employ GMMs that are optimal with respect to information complexity criteria. Moving objects are segmented out through background subtraction using the computed background model. This technique produces superior results to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of only slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. To address this, a framework for SR Image Reconstruction of moving objects with such high levels of displacement is developed. Our assumption is that the LR images differ from each other due to the local motion of the objects and the global motion of the scene imposed by a non-stationary imaging system. Contrary to traditional SR approaches, we employ several steps: suppression of the global motion; motion segmentation accompanied by background subtraction to extract the moving objects; suppression of the local motion of the segmented regions; and super-resolving the accumulated information coming from the moving objects rather than the whole scene. This results in a reliable offline SR Image Reconstruction tool which handles several types of dynamic scene change, compensates for the effects of the camera system, and provides data redundancy by removing the background. The framework proved superior to state-of-the-art algorithms, which make no significant effort toward dynamic scene representation with non-stationary camera systems.
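    A toy illustration of the information-guided background modelling idea: fit a per-pixel GMM to an intensity history and pick the number of components by an information criterion. BIC from scikit-learn is used here as a stand-in for the information-complexity criterion of the dissertation, and the threshold and synthetic data are assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_background_gmm(pixel_history, max_components=5):
            # Fit GMMs with 1..max_components components to one pixel's intensity
            # history and keep the one with the lowest information criterion.
            x = np.asarray(pixel_history, dtype=np.float64).reshape(-1, 1)
            best, best_score = None, np.inf
            for k in range(1, max_components + 1):
                gmm = GaussianMixture(n_components=k).fit(x)
                score = gmm.bic(x)           # proxy for information complexity
                if score < best_score:
                    best, best_score = gmm, score
            return best

        def is_foreground(gmm, value, log_density_threshold=-8.0):
            # A new observation is foreground if the selected background model
            # explains it poorly (placeholder threshold).
            return gmm.score_samples([[float(value)]])[0] < log_density_threshold

        # Usage on synthetic data: a mostly static pixel plus a brief passing object.
        history = np.concatenate([np.random.normal(100, 3, 300),
                                  np.random.normal(180, 5, 20)])
        model = fit_background_gmm(history)
        print(is_foreground(model, 101), is_foreground(model, 250))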

    On-board three-dimensional object tracking: Software and hardware solutions

    We describe a real-time system for recognizing and tracking 3D objects such as UAVs, airplanes, and fighters with an optical sensor. Given a 2D image, the system has to perform background subtraction and recognize the relative rotation, scale, and translation of the object in order to sustain a prescribed topology of the fleet. In the thesis, a comparative study of different algorithms and a performance evaluation are carried out under time and accuracy constraints. For the background subtraction task we evaluate frame differencing, the approximate median filter, and the mixture of Gaussians, and we propose a classification approach based on neural network methods. For object detection we analyze the performance of invariant moments, the scale-invariant feature transform, and the affine scale-invariant feature transform. Various tracking algorithms are evaluated, including mean shift with variable- and fixed-size windows, the scale-invariant feature transform, Harris corner features, and fast full search based on the fast Fourier transform. We develop an algorithm for calculating relative rotation and scale change based on Zernike moments. Based on the design criteria, a selection is made for on-board implementation. The candidate techniques have been implemented on the Texas Instruments TMS320DM642 EVM board. It is shown in the thesis that 14 frames per second can be processed, which supports real-time implementation of the tracking system within reasonable accuracy limits.
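    Of the background-subtraction baselines evaluated above, the approximate median filter is simple enough to sketch in a few lines; the step size and threshold below are illustrative assumptions, not the thesis's settings.

        import numpy as np

        def approximate_median_update(background, frame, step=1):
            # Nudge each background pixel one step toward the current frame;
            # over time this converges to a running median of the sequence.
            bg = background.astype(np.int16)
            bg += step * np.sign(frame.astype(np.int16) - bg)
            return np.clip(bg, 0, 255).astype(np.uint8)

        def foreground_mask(background, frame, threshold=30):
            diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
            return (diff > threshold).astype(np.uint8) * 255

        # Usage with a stream of grayscale frames (e.g. from cv2.VideoCapture):
        #   bg = first_frame.copy()
        #   for frame in frames:
        #       mask = foreground_mask(bg, frame)
        #       bg = approximate_median_update(bg, frame)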