
    Stand-Alone Objective Segmentation Quality Evaluation

    The identification of objects in video sequences, that is, video segmentation, plays a major role in emerging interactive multimedia services, such as those enabled by the ISO MPEG-4 and MPEG-7 standards. In this context, assessing how well the identified objects match the application targets, that is, evaluating the segmentation quality, is of crucial importance. Video segmentation technology has received considerable attention in the literature, with algorithms proposed for various types of applications. However, the segmentation quality of those algorithms is often evaluated in an ad hoc way, and no well-established solution is available. In fact, the field of objective segmentation quality evaluation is still maturing; further efforts have been made recently, mainly following the emergence of the MPEG object-based coding and description standards. This paper discusses the problem of objective segmentation quality evaluation in its most difficult scenario: stand-alone evaluation, that is, when no reference segmentation is available for comparative evaluation. In particular, objective metrics are proposed for the evaluation of stand-alone segmentation quality for both individual objects and overall segmentation partitions.
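
    As a hedged illustration of what a stand-alone (reference-free) quality cue can look like, the following Python sketch scores an object mask by the color contrast between the pixels just inside and just outside its boundary. The function name, the boundary width and the normalization are assumptions of this example, not the metrics proposed in the paper.

        import numpy as np
        from scipy import ndimage

        def boundary_contrast(frame, mask, width=2):
            # frame: H x W x 3 image (assumed 8-bit RGB); mask: H x W boolean object mask.
            mask = mask.astype(bool)
            inner = mask & ~ndimage.binary_erosion(mask, iterations=width)
            outer = ndimage.binary_dilation(mask, iterations=width) & ~mask
            if not inner.any() or not outer.any():
                return 0.0
            inside = frame[inner].mean(axis=0)    # mean color just inside the boundary
            outside = frame[outer].mean(axis=0)   # mean color just outside the boundary
            # High contrast across the boundary suggests it coincides with a real object edge.
            return float(np.linalg.norm(inside - outside) / np.sqrt(3 * 255.0 ** 2))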

    On the evaluation of background subtraction algorithms without ground-truth

    J. C. San Miguel and J. M. Martínez, "On the evaluation of background subtraction algorithms without ground-truth", in 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance, 2013, pp. 180-187. In video-surveillance systems, the moving object segmentation stage (commonly based on background subtraction) has to deal with several issues such as noise, shadows and multimodal backgrounds. Its failures are therefore inevitable, and automatic evaluation of its output is desirable for online analysis. In this paper, we propose a hierarchy of existing performance measures for video object segmentation that are not based on ground truth. Four measures based on color and motion are then selected and examined in detail with different segmentation algorithms and standard test sequences for video object segmentation. Experimental results show that color-based measures perform better than motion-based measures and that background multimodality heavily reduces the accuracy of all the evaluation results. This work is partially supported by the Spanish Government (TEC2007-65400 SemanticVideo), by Cátedra Infoglobal-UAM for “Nuevas Tecnologías de video aplicadas a la seguridad”, by the Consejería de Educación of the Comunidad de Madrid and by the European Social Fund.
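
    One way to read "measures not based on ground truth" is sketched below: a toy color-based check that detected foreground pixels differ from the background model while detected background pixels match it. The function name, the threshold tau and the scoring scheme are illustrative assumptions, not the four measures examined in the paper.

        import numpy as np

        def model_agreement_score(frame, background_model, fg_mask, tau=30.0):
            # frame, background_model: H x W x 3 images; fg_mask: H x W boolean detection result.
            diff = np.linalg.norm(frame.astype(np.float32)
                                  - background_model.astype(np.float32), axis=-1)
            fg = fg_mask.astype(bool)
            fg_ok = float((diff[fg] > tau).mean()) if fg.any() else 0.0       # foreground really changed
            bg_ok = float((diff[~fg] <= tau).mean()) if (~fg).any() else 0.0  # background really static
            return 0.5 * (fg_ok + bg_ok)   # 1.0 = segmentation fully consistent with the model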

    The reconstructed residual error: a novel segmentation evaluation measure for reconstructed images in tomography

    In this paper, we present the reconstructed residual error, which evaluates the quality of a given segmentation of a reconstructed image in tomography. This novel evaluation method, which is independent of the methods that were used to reconstruct and segment the image, is applicable to segmentations that are based on the density of the scanned object. It provides a spatial map of the errors in the segmented image, based on the projection data. The reconstructed residual error is a reconstruction of the difference between the recorded data and the forward projection of that segmented image. The properties and applications of the algorithm are v
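
    A minimal sketch of the idea as described above, assuming a parallel-beam geometry and scikit-image's radon/iradon for the forward projection and reconstruction (the paper's measure is independent of the particular reconstruction algorithm; the parameter names densities and theta are assumptions of this example):

        import numpy as np
        from skimage.transform import radon, iradon

        def reconstructed_residual_error(sinogram, segmented, densities, theta):
            # sinogram: recorded projection data acquired at angles theta (parallel beam).
            # segmented: label image; densities: dict mapping each label to its assumed density.
            density_image = np.zeros(segmented.shape, dtype=np.float64)
            for label, rho in densities.items():
                density_image[segmented == label] = rho
            # Residual = recorded data minus the forward projection of the segmented image.
            residual_sinogram = sinogram - radon(density_image, theta=theta, circle=True)
            # Reconstructing the residual yields a spatial map of the segmentation errors.
            return iradon(residual_sinogram, theta=theta, circle=True)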

    Framework for near real time feature detection from the atmospheric imaging assembly images of the solar dynamics observatory

    The study of the variability of the solar corona and the monitoring of its traditional regions (Coronal Holes, Quiet Sun and Active Regions) are of great importance in astrophysics as well as for Space Weather applications. The Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO) provides high-resolution images of the Sun at different wavelengths at a rate of approximately one every 10 seconds, a great resource for solar monitoring. Today, the process of identifying features and estimating their properties is still carried out manually, in an iterative fashion, to verify the detection results. We introduce a complete, automated image-processing pipeline, starting with raw data and ending with quantitative, high-level feature parameters. We implement two multichannel unsupervised algorithms that automatically segment EUV AIA solar images into Coronal Holes, Quiet Sun and Active Regions in near real time. We also develop a post-processing method that deals with fragments in a segmented image through spatial-validity-based compact clustering. The segmentation results are consistent with well-known algorithms and databases. The parameters extracted from the segments, such as area, closely follow the solar activity pattern. Moreover, the methods developed within the proposed framework are generic enough to allow the study of any solar feature (e.g. Coronal Bright Points), provided that the feature can be deduced from AIA images.
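
    As a rough, hedged stand-in for the multichannel unsupervised segmentation described above (the paper's own algorithms and its spatial-validity-based post-processing are not reproduced here), a plain per-pixel k-means over the co-registered AIA channels already illustrates the idea:

        import numpy as np
        from sklearn.cluster import KMeans

        def segment_solar_disk(channels, n_classes=3, seed=0):
            # channels: list of co-registered 2-D AIA images taken at different EUV wavelengths.
            cube = np.stack(channels, axis=-1).astype(np.float64)          # H x W x C
            h, w, c = cube.shape
            features = np.log1p(np.clip(cube.reshape(-1, c), 0, None))     # compress dynamic range
            km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
            labels = km.fit_predict(features).reshape(h, w)
            # Relabel clusters by mean brightness: 0 = darkest (holes), 2 = brightest (active).
            means = [features[labels.ravel() == k].mean() for k in range(n_classes)]
            rank = np.argsort(np.argsort(means))
            return rank[labels]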

    Estimación de fiabilidad del seguimiento de objetos en vídeo

    In this master's thesis we propose an approach to estimate the reliability of video object tracking algorithms (trackers). This estimation consists in determining, during online execution of the tracker and in the absence of ground-truth data (manual annotations of the ideal tracking results), the instants in which the algorithm successfully tracks the desired object. First, the state of the art on reliability estimation is reviewed and some of the existing techniques are analyzed. The work then focuses on the description of the proposed approach, whose main aim is to detect the frames of change in which the tracker loses or recovers the target. For this purpose, a set of features related to the shape, motion and appearance of the tracked object is used: the existence of a frame of change is assumed whenever these features show sudden variations. To identify these atypical feature values, an anomaly detection strategy is proposed. A state machine is then used to decide in each frame whether tracking is correct or incorrect (estimated reliability). Finally, the proposed approach is evaluated on six different trackers and compared against the most relevant related techniques, using a dataset that includes the most common problems in video object tracking.
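
    The following sketch illustrates that pipeline in miniature, assuming a per-frame feature vector is already available: sudden variations are flagged with a simple z-score anomaly test, and a two-state machine toggles between "tracking OK" and "lost" at each flagged frame of change. The thresholds, window sizes and toggling rule are illustrative assumptions, not the thesis's exact design.

        import numpy as np

        def estimate_track_reliability(features, z_thresh=3.0, window=30, warmup=5):
            # features: (n_frames, n_features) array of per-frame shape/motion/appearance cues.
            feats = np.asarray(features, dtype=np.float64)
            ok, states = True, []
            for t in range(len(feats)):
                lo = max(0, t - window)
                if t - lo >= warmup:
                    mu = feats[lo:t].mean(axis=0)
                    sd = feats[lo:t].std(axis=0) + 1e-6
                    if (np.abs(feats[t] - mu) / sd > z_thresh).any():   # sudden variation detected
                        ok = not ok                                     # frame of change: toggle state
                states.append(ok)                                       # True = tracking judged correct
            return states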

    Reinforced Segmentation of Images Containing One Object of Interest

    In many image-processing applications, a single object of interest must be segmented. The techniques used for segmentation vary depending on the particular situation and the specifications of the problem at hand. In methods that rely on a learning process, the lack of a sufficient number of training samples is usually an obstacle, especially when the samples need to be prepared manually by an expert. The performance of other methods may suffer from the frequent user interactions required to determine the critical segmentation parameters. Moreover, none of the existing approaches use permanent online feedback from the user to evaluate the generated results. Considering these factors, a new multi-stage image segmentation system based on Reinforcement Learning (RL) is introduced as the main contribution of this research. In this system, the RL agent takes specific actions, such as changing the task parameters, to modify the quality of the segmented image. The approach starts with a limited number of training samples and improves its performance over time. Expert knowledge is continuously incorporated to increase the segmentation capabilities of the method. Learning occurs first offline, through interactions with a simulation environment, and later online, through interactions with the user. The offline mode uses a limited number of manually segmented samples to provide the segmentation agent with basic information about the application domain. After this mode, the agent can choose appropriate parameter values for the different processing tasks based on its accumulated knowledge. The online mode then guarantees that the system keeps training and becomes more accurate the more the user works with it. During this mode, the agent captures the user's preferences and learns how to change the segmentation parameters so that the best result is achieved. By combining these two learning modes, the RL agent allows us to identify the decisive parameters for the entire segmentation process.
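
    To make the RL formulation concrete, here is a toy sketch in which an agent tunes a single segmentation threshold by Q-learning, with the reward supplied by an external quality function (offline: similarity to a manually segmented sample; online: user feedback). segment_fn and quality_fn are assumed callables; the thesis's multi-stage, multi-parameter system is considerably richer than this single-parameter example.

        import numpy as np

        def q_learn_threshold(image, segment_fn, quality_fn, episodes=200,
                              alpha=0.1, gamma=0.9, eps=0.2, seed=0):
            # segment_fn(image, threshold) -> mask; quality_fn(mask) -> reward in [0, 1].
            rng = np.random.default_rng(seed)
            n_states, actions = 16, (-1, 0, 1)             # threshold bins and moves (down/keep/up)
            q = np.zeros((n_states, len(actions)))
            for _ in range(episodes):
                s = int(rng.integers(n_states))
                for _ in range(20):                        # short episode of adjustments
                    a = int(rng.integers(len(actions))) if rng.random() < eps else int(q[s].argmax())
                    s2 = int(np.clip(s + actions[a], 0, n_states - 1))
                    threshold = (s2 + 0.5) * 256.0 / n_states
                    r = quality_fn(segment_fn(image, threshold))   # evaluated quality as reward
                    q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
                    s = s2
            best = int(q.max(axis=1).argmax())
            return (best + 0.5) * 256.0 / n_states         # best-valued threshold found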

    Quality-Driven video analysis for the improvement of foreground segmentation

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Date of defense: 15-06-2018. It was partially supported by the Spanish Government (TEC2014-53176-R, HAVideo).

    Advances in semantic-guided and feedback-based approaches for video analysis

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, September 201