
    Mid-level feature set for specific event and anomaly detection in crowded scenes

    Proceedings of the 20th IEEE International Conference on Image Processing (ICIP 2013), Melbourne, Australia, September 15-18, 2013.
    In this paper we propose a system for the automatic detection of specific events and abnormal behaviors in crowded scenes. In particular, we focus on the parametrization, proposing a set of mid-level spatio-temporal features that successfully model the characteristic motion of typical events in crowd behavior. Furthermore, because some features are more suitable than others for modeling specific events of interest, we also present an automatic feature-selection process. Our experiments show that the proposed feature set works well for both explicit event detection and distance-based anomaly detection. The results on PETS for explicit event detection are generally better than those previously reported. For anomaly detection, the performance of the proposed method is comparable to that of the state-of-the-art method on PETS and substantially better than that reported for the Web dataset.
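    As a rough illustration of the distance-based anomaly detection setting described in this abstract, the sketch below scores new spatio-temporal descriptors by their distance to descriptors observed in normal footage. It is not the authors' implementation: the feature dimensionality, the nearest-neighbour scoring, and the percentile threshold are all assumptions made for the example.

```python
import numpy as np

def anomaly_scores(test_feats, normal_feats):
    """Score each test descriptor by its Euclidean distance to the nearest
    descriptor seen in normal (training) footage; larger = more anomalous."""
    dists = np.linalg.norm(
        test_feats[:, None, :] - normal_feats[None, :, :], axis=-1
    )
    return dists.min(axis=1)

# Hypothetical usage with random stand-ins for mid-level spatio-temporal descriptors
rng = np.random.default_rng(0)
normal = rng.normal(size=(400, 32))        # descriptors from normal crowd behavior
held_out = rng.normal(size=(100, 32))      # held-out normal descriptors for calibration
test = rng.normal(loc=0.8, size=(50, 32))  # descriptors from a new, possibly anomalous clip

threshold = np.percentile(anomaly_scores(held_out, normal), 95)  # calibrated on normal data
flags = anomaly_scores(test, normal) > threshold
print(f"{flags.sum()} of {len(flags)} descriptors flagged as anomalous")
```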

    Image-based approach for early assessment of heart failure

    In diagnosing heart disease, the estimation of cardiac performance indices requires accurate segmentation of the left ventricle (LV) wall from cine cardiac magnetic resonance (CMR) images. MR imaging is noninvasive and generates clear images; however, it is impractical to manually process the huge number of images generated in order to calculate the performance indices. In this dissertation, we introduce novel, fast, robust, bi-directionally coupled parametric deformable models that are capable of segmenting the LV wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of the LV wall while tracking the evolution of the parametric deformable models' control points. We tested the proposed segmentation approach on 15 data sets from 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground-truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and a mean AD value of 2.16±0.60 mm, compared with two other level-set methods that achieve mean DSC values of 0.904±0.033 and 0.885±0.02, and mean AD values of 2.86±1.35 mm and 5.72±4.70 mm, respectively. In addition, a novel framework for assessing both 3D functional strain and wall thickening from 4D cine cardiac magnetic resonance (CCMR) imaging is introduced. This approach is primarily based on using geometrical features to track the LV wall during the cardiac cycle. The 4D tracking approach consists of two main steps: (i) the surface points on the LV wall are tracked by solving a 3D Laplace equation between two subsequent LV surfaces; and (ii) the locations of the tracked LV surface points are iteratively adjusted through an energy-minimization cost function using a generalized Gauss-Markov random field (GGMRF) image model, in order to remove inconsistencies and preserve the anatomy of the heart wall during tracking. The circumferential strains are then calculated directly from the locations of the tracked LV surface points. In addition, myocardial wall thickening is estimated by co-allocating the corresponding points (matches) between the endocardium and epicardium surfaces of the LV wall using the solution of the 3D Laplace equation. Experimental results on in vivo data confirm the accuracy and robustness of our method, and comparative results demonstrate that our approach outperforms 2D wall-thickening estimation approaches.
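    For readers unfamiliar with the evaluation measures quoted above, the following minimal sketch computes a Dice similarity coefficient between binary masks and a symmetric average distance between sampled contours. It illustrates the metrics only, not the dissertation's segmentation or tracking pipeline; the toy masks and the cdist-based distance are assumptions made for the example.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_coefficient(seg, gt):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def average_contour_distance(contour_a, contour_b):
    """Symmetric mean point-to-contour distance between two sampled contours
    (arrays of shape (N, 2) in image or physical coordinates)."""
    d = cdist(contour_a, contour_b)  # all pairwise point distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Hypothetical toy example on a single 2-D slice
gt = np.zeros((64, 64), dtype=bool)
gt[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool)
auto[22:42, 21:41] = True
print("DSC:", round(dice_coefficient(auto, gt), 3))
```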

    The use of the Kullback-Leibler Divergence and the Generalized Divergence as a similarity measure in CBIR systems

    Content-based image retrieval is important for various purposes, such as disease diagnosis from computerized tomography. The social and economic relevance of image retrieval systems has created the need for their improvement. In this context, content-based image retrieval systems are composed of two stages: feature extraction and similarity measurement. The similarity stage remains a challenge because of the wide variety of similarity functions, which can be combined with the different techniques present in the retrieval process and return results that are not always the most satisfactory. The functions most commonly used to measure similarity are the Euclidean and Cosine measures, but some researchers have noted limitations of these conventional proximity functions in the similarity-search step. For that reason, the Bregman divergences (Kullback-Leibler and Generalized I-divergence) have attracted the attention of researchers because of their flexibility in similarity analysis. Thus, the aim of this research was to conduct a comparative study of the Bregman divergences against the Euclidean and Cosine functions in the similarity step of content-based image retrieval, examining the advantages and disadvantages of each function. To this end, a content-based image retrieval system was built in two stages, offline and online, using the BSM, FISM, BoVW, and BoVW-SPM approaches. With this system, three groups of experiments were carried out on the Caltech101, Oxford, and UK-bench databases. The performance of the content-based image retrieval system with the different similarity functions was evaluated using Mean Average Precision, normalized Discounted Cumulative Gain, precision at k, and precision x recall. Finally, this study shows that the Bregman divergences (Kullback-Leibler and Generalized) obtain better results than the Euclidean and Cosine measures, with significant gains for content-based image retrieval. (Master's dissertation; supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.)
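    To make the compared similarity functions concrete, here is a minimal sketch of the Euclidean and Cosine measures alongside the Kullback-Leibler and Generalized I-divergences, applied to bag-of-visual-words histograms. The histogram size, the epsilon smoothing, and the rank_database helper are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

EPS = 1e-12  # guards against log(0) and division by zero

def euclidean(p, q):
    return np.linalg.norm(p - q)

def cosine_distance(p, q):
    return 1.0 - np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + EPS)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between normalized histograms."""
    p, q = p + EPS, q + EPS
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def generalized_i_divergence(p, q):
    """Generalized I-divergence (the Bregman divergence of x*log(x)) for
    non-negative feature vectors that need not sum to one."""
    p, q = p + EPS, q + EPS
    return float(np.sum(p * np.log(p / q) - p + q))

def rank_database(query_hist, database_hists, dissimilarity=kl_divergence):
    """Return database indices ordered from most to least similar to the query."""
    scores = [dissimilarity(query_hist, h) for h in database_hists]
    return np.argsort(scores)

# Hypothetical usage with random stand-ins for BoVW histograms (500 visual words)
rng = np.random.default_rng(1)
database = rng.random((1000, 500))
query = rng.random(500)
print("Top-5 matches:", rank_database(query, database)[:5])
```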