    Content-sensitive superpixel generation with boundary adjustment.

    Superpixel segmentation has become a crucial tool in many image processing and computer vision applications. In this paper, a novel content-sensitive superpixel generation algorithm with boundary adjustment is proposed. First, local image entropy is used to measure the amount of information in the image, and that information is distributed evenly across the seeds: more seeds are placed in content-dense regions to reduce under-segmentation, and fewer seeds are placed in content-sparse regions to increase computational efficiency. Second, Prim's algorithm is adopted to generate uniform superpixels efficiently. Third, a boundary adjustment strategy with an adaptive distance further refines the superpixels. Experimental results on the Berkeley Segmentation Database show that our method outperforms competing methods under the evaluation metrics considered.
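
    The entropy-driven seed placement lends itself to a short sketch. The following Python snippet is illustrative only: local_entropy, place_seeds, the block size, and the histogram bin count are our own assumptions, not the paper's exact formulation. It distributes a seed budget across image blocks in proportion to their local entropy, so content-dense blocks receive more seeds.

        import numpy as np

        def local_entropy(gray, block=16):
            """Shannon entropy of intensity histograms over non-overlapping blocks."""
            h, w = gray.shape
            ent = np.zeros((h // block, w // block))
            for i in range(ent.shape[0]):
                for j in range(ent.shape[1]):
                    patch = gray[i*block:(i+1)*block, j*block:(j+1)*block]
                    hist, _ = np.histogram(patch, bins=32, range=(0, 256))
                    p = hist / hist.sum()
                    p = p[p > 0]
                    ent[i, j] = -(p * np.log2(p)).sum()
            return ent

        def place_seeds(gray, n_seeds=200, block=16):
            """Give each seed roughly the same share of image information:
            content-dense (high-entropy) blocks receive more seeds."""
            ent = local_entropy(gray, block)
            quota = ent / (ent.sum() + 1e-9) * n_seeds   # fractional seeds per block
            rng = np.random.default_rng(0)
            seeds = []
            for (i, j), q in np.ndenumerate(quota):
                for _ in range(int(round(q))):
                    seeds.append((i * block + rng.integers(block),
                                  j * block + rng.integers(block)))
            return np.array(seeds)

    These seeds would then initialize the Prim-style region growing and the boundary adjustment described in the abstract.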

    A framework for saliency estimation over multiple iterations in different image domains

    Advisor: Alexandre Xavier Falcão. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Salient object detection estimates the objects that most stand out in an image. Available unsupervised saliency estimators rely on a predetermined set of assumptions about how humans perceive saliency to create discriminating features. By fixing these preselected assumptions as an integral part of their models, such methods cannot be easily extended to specific settings or different image domains. We therefore propose a superpixel-based ITerative Saliency Estimation fLexible Framework (ITSELF) that allows any user-defined assumptions to be added to the model when required. Thanks to recent advances in superpixel segmentation algorithms, saliency maps can be used to improve superpixel delineation. By combining a saliency-based superpixel algorithm with a superpixel-based saliency estimator, we propose a novel saliency/superpixel self-improving loop to enhance saliency maps iteratively. We compare ITSELF to two state-of-the-art saliency estimators on five metrics and six datasets, four with natural images and two with biomedical images. Experiments show that our approach is more robust than the compared methods, presenting competitive results on natural image datasets and outperforming them on biomedical image datasets.
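
    A minimal sketch of the self-improving loop follows, with SLIC standing in for the saliency-aware superpixel algorithm and a toy color-contrast rule standing in for ITSELF's user-defined saliency assumptions; superpixel_saliency and itself_loop are hypothetical names, not the framework's API.

        import numpy as np
        from skimage.segmentation import slic

        def superpixel_saliency(image, labels):
            # Toy assumption: superpixels whose mean color is far from the
            # image's global mean color are salient.
            global_mean = image.reshape(-1, image.shape[-1]).mean(0)
            sal = np.zeros(labels.max() + 1)
            for sp in np.unique(labels):
                sal[sp] = np.linalg.norm(image[labels == sp].mean(0) - global_mean)
            return sal[labels] / (sal.max() + 1e-9)   # per-pixel map in [0, 1]

        def itself_loop(image, n_iters=3, n_segments=400):
            # Alternate delineation and estimation; here the saliency map only
            # modulates the segment budget, whereas the actual framework feeds
            # it into a saliency-aware superpixel algorithm.
            sal = None
            for _ in range(n_iters):
                labels = slic(image, n_segments=n_segments, compactness=10)
                sal = superpixel_saliency(image.astype(float), labels)
                n_segments = int(n_segments * (1.0 + 0.5 * sal.mean()))
            return sal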

    Superpixel based sea ice segmentation with high-resolution optical images: analysis and evaluation.

    By grouping pixels with visual coherence, superpixel algorithms provide an alternative to the regular pixel grid for precise and efficient image segmentation. In this paper, a multi-stage model is used for sea ice segmentation from high-resolution optical imagery, including pre-processing to enhance contrast and suppress noise, superpixel generation and classification, and post-processing to refine the segmented results. Four superpixel algorithms are evaluated within the framework, with high-resolution imagery of the Chukchi Sea used for validation. Quantitative evaluations of segmentation quality and floe size distribution, along with visual comparisons for several selected regions of interest, are presented. Overall, the model with TS-SLIC yields the best results, with a segmentation accuracy of 98.19% on average, and adheres to the ice edges well.
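
    The four stages map naturally onto off-the-shelf scikit-image calls. The sketch below is a loose analogue of such a pipeline, not the paper's implementation: SLIC stands in for TS-SLIC, and a simple brightness threshold stands in for the superpixel classifier.

        import numpy as np
        from skimage import exposure, filters, morphology, segmentation

        def segment_sea_ice(gray, n_segments=1000, ice_thresh=0.55):
            # 1. Pre-processing: contrast enhancement and noise suppression.
            g = exposure.equalize_adapthist(gray)
            g = filters.gaussian(g, sigma=1)
            # 2. Superpixel generation (SLIC here; the paper's best was TS-SLIC).
            labels = segmentation.slic(g, n_segments=n_segments,
                                       compactness=0.1, channel_axis=None)
            # 3. Superpixel classification: bright superpixels become ice.
            ice = np.zeros_like(g, dtype=bool)
            for sp in np.unique(labels):
                mask = labels == sp
                if g[mask].mean() > ice_thresh:
                    ice[mask] = True
            # 4. Post-processing: drop spurious small components.
            return morphology.remove_small_objects(ice, min_size=64)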

    Connecting the Dots: Graph Neural Network Powered Ensemble and Classification of Medical Images

    Deep learning models have demonstrated remarkable results for various computer vision tasks, including the realm of medical imaging. However, their application in the medical domain is limited due to the requirement for large amounts of training data, which can be both challenging and expensive to obtain. To mitigate this, pre-trained models have been fine-tuned on domain-specific data, but such an approach can suffer from inductive biases. Furthermore, deep learning models struggle to learn the relationship between spatially distant features and their importance, as convolution operations treat all pixels equally. Pioneering a novel solution to this challenge, we employ the Image Foresting Transform to optimally segment images into superpixels. These superpixels are subsequently transformed into graph-structured data, enabling the proficient extraction of features and modeling of relationships using Graph Neural Networks (GNNs). Our method harnesses an ensemble of three distinct GNN architectures to boost its robustness. In our evaluations targeting pneumonia classification, our methodology surpassed prevailing Deep Neural Networks (DNNs) in performance, all while drastically cutting down on the parameter count. This not only trims down the expenses tied to data but also accelerates training and minimizes bias. Consequently, our proposition offers a sturdy, economically viable, and scalable strategy for medical image classification, significantly diminishing dependency on extensive training data sets.
    Comment: Our code is available at https://github.com/aryan-at-ul/AICS_2023_submissio
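
    The superpixel-to-graph conversion is the core of the method and is easy to sketch. In the snippet below, SLIC stands in for the Image Foresting Transform, mean intensity is a placeholder node feature, and SuperpixelGCN is one illustrative architecture of the kind the paper ensembles in threes; none of the names are from the authors' code.

        import numpy as np
        import torch
        from torch_geometric.nn import GCNConv
        from skimage.segmentation import slic   # stand-in for IFT superpixels

        def image_to_graph(gray, n_segments=75):
            # Node per superpixel (feature: mean intensity); edge per pair of
            # superpixels that touch.  slic labels start at 1, hence the shift.
            labels = slic(gray, n_segments=n_segments, compactness=0.1,
                          channel_axis=None) - 1
            x = np.array([[gray[labels == i].mean()] for i in np.unique(labels)],
                         dtype=np.float32)
            pairs = np.concatenate(
                [np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()]),
                 np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()])], axis=1)
            edges = {(int(u), int(v)) for u, v in pairs.T if u != v}
            edges |= {(v, u) for u, v in edges}           # make it undirected
            edge_index = torch.tensor(sorted(edges)).t().contiguous()
            return torch.from_numpy(x), edge_index

        class SuperpixelGCN(torch.nn.Module):
            # One member of what would be a three-architecture ensemble.
            def __init__(self, hidden=32, classes=2):
                super().__init__()
                self.c1 = GCNConv(1, hidden)
                self.c2 = GCNConv(hidden, hidden)
                self.head = torch.nn.Linear(hidden, classes)

            def forward(self, x, edge_index):
                h = self.c1(x, edge_index).relu()
                h = self.c2(h, edge_index).relu()
                return self.head(h.mean(dim=0))   # mean-pooled graph logits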

    A clustering approach based on the divide-and-conquer technique and optimum-path forest

    Advisor: Alexandre Xavier Falcão. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Data clustering is one of the main challenges when solving Data Science problems. Despite its progress over almost one century of research, clustering algorithms still fail to identify groups naturally related to the semantics of the problem. Moreover, advances in data acquisition, communication, and storage technologies add crucial challenges with a considerable increase in data, which most techniques do not handle. We address these issues by proposing a divide-and-conquer approach to a clustering technique that is unique in finding one group per dome of the probability density function of the data: the Optimum-Path Forest (OPF) clustering algorithm. In OPF clustering, samples are taken as nodes of a graph whose arcs connect the k-nearest neighbors in the feature space. The nodes are weighted by their probability density values, and a connectivity map is maximized such that each maximum of the probability density function becomes the root of an optimum-path tree (cluster). The best value of k is estimated by optimization within an application-specific interval of values. The problem with this method is that a high number of samples makes the algorithm prohibitive, due to the memory required to store the graph and the computational time to obtain the clusters for the best value of k. Since the existing solutions lead to ineffective results, we revisit the problem by proposing a two-level divide-and-conquer approach. At the first level, the dataset is divided into smaller subsets (blocks) and the samples belonging to each block are grouped by the OPF algorithm. Then, the representative samples (more specifically, the roots of the optimum-path forest) are taken to a second level, where they are clustered again. Finally, the group labels obtained in the second level are transferred to all samples of the dataset through their first-level representatives. With this approach, we can use all samples, or at least many of them, in the unsupervised learning process without affecting the grouping performance; the procedure is therefore less likely to lose relevant grouping information. We show that our proposal obtains satisfactory results in two scenarios, image segmentation and general data clustering, in comparison with popular baselines. In the first scenario, our technique achieves better results than the others on all tested image databases. In the second, it obtains outcomes similar to an optimized version of the traditional OPF clustering algorithm.
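
    The two-level scheme is independent of the underlying clusterer, so it can be sketched with k-means standing in for OPF; two_level_clustering and its parameters are illustrative assumptions, not the dissertation's implementation.

        import numpy as np
        from sklearn.cluster import KMeans   # stand-in for the OPF clusterer

        def two_level_clustering(X, n_blocks=8, k_block=10, k_final=5, seed=0):
            rng = np.random.default_rng(seed)
            blocks = np.array_split(rng.permutation(len(X)), n_blocks)
            reps, rep_of = [], np.empty(len(X), dtype=int)
            # Level 1: cluster each block; keep its centers as representatives
            # (the analogue of the optimum-path forest roots).
            for b in blocks:
                km = KMeans(n_clusters=k_block, n_init=10,
                            random_state=seed).fit(X[b])
                rep_of[b] = km.labels_ + len(reps)    # global representative ids
                reps.extend(km.cluster_centers_)
            # Level 2: cluster the representatives, then propagate their labels
            # back to every sample through its level-1 representative.
            final = KMeans(n_clusters=k_final, n_init=10,
                           random_state=seed).fit(np.array(reps))
            return final.labels_[rep_of]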

    Recursive Inference for Prediction of Objects in Urban Environments

    Future advancements in robotic navigation and mapping rest to a large extent on robust, efficient, and more advanced semantic understanding of the surrounding environment. Existing semantic mapping approaches typically consider a small number of semantic categories and require complex inference or a large number of training examples to achieve desirable performance. In the proposed work we present an efficient approach for predicting the locations of generic objects in urban environments by means of semantic segmentation of a video into object and non-object categories. We exploit widely available exemplars of non-object categories (such as road, buildings, and vegetation) and use geometric cues indicative of the presence of object boundaries to gather evidence about objects regardless of their category. We formulate the object/non-object semantic segmentation problem in the Conditional Random Field framework, where the structure of the graph is induced by a minimum spanning tree computed over a 3D point cloud, yielding an efficient algorithm for exact inference. The chosen 3D representation naturally lends itself to online recursive belief updates with a simple soft data association mechanism. We carry out extensive experiments on videos of urban environments acquired by a moving vehicle and show quantitatively and qualitatively the benefits of our proposal.
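
    The key computational point, that a tree-structured CRF admits exact MAP inference by dynamic programming, can be sketched as follows. The sketch assumes a connected kNN graph over the point cloud, Euclidean edge weights, a two-label Potts pairwise term, and precomputed unary costs; none of these choices are claimed to match the paper's exact model.

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order
        from scipy.spatial import cKDTree

        def mst_map_labels(points, unary, pairwise_cost=0.5, knn=6):
            """points: (n, 3) coordinates; unary: (n, 2) label costs (lower
            is better).  Builds the MST of a kNN graph and runs exact
            min-sum dynamic programming on the resulting tree."""
            n = len(points)
            dist, nbr = cKDTree(points).query(points, k=knn + 1)
            rows = np.repeat(np.arange(n), knn)
            g = coo_matrix((dist[:, 1:].ravel() + 1e-9,
                            (rows, nbr[:, 1:].ravel())), shape=(n, n))
            mst = minimum_spanning_tree(g)

            order, parent = breadth_first_order(mst + mst.T, 0, directed=False)
            msg = np.zeros((n, 2))            # accumulated child messages (costs)
            for v in order[::-1]:             # leaves-to-root pass
                if parent[v] < 0:
                    continue                  # the root sends no message
                cost = unary[v] + msg[v]
                for lp in (0, 1):
                    msg[parent[v], lp] += min(cost[l] + pairwise_cost * (l != lp)
                                              for l in (0, 1))
            labels = np.empty(n, dtype=int)
            for v in order:                   # root-to-leaves decoding
                cost = unary[v] + msg[v]
                if parent[v] >= 0:
                    cost = cost + pairwise_cost * (np.arange(2) != labels[parent[v]])
                labels[v] = int(np.argmin(cost))
            return labels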

    Doctor of Philosophy in Computing

    Image segmentation is the problem of partitioning an image into disjoint segments that are perceptually or semantically homogeneous. As one of the most fundamental computer vision problems, image segmentation is used as a primary step for high-level vision tasks, such as object recognition and image understanding, and has even wider applications in interdisciplinary areas, such as longitudinal brain image analysis. Hierarchical models have gained popularity as a key component in image segmentation frameworks. By imposing structure, a hierarchical model can efficiently utilize features from larger image regions and make optimal inference for final segmentation feasible. We develop a hierarchical merge tree (HMT) model for image segmentation. Motivated by the application to large-scale segmentation of neuronal structures in electron microscopy (EM) images, our model provides a compact representation of region merging hypotheses and utilizes higher-order information for efficient segmentation inference. Taking advantage of supervised learning, our model is free from parameter tuning and outperforms previous state-of-the-art methods on both two-dimensional (2D) and three-dimensional (3D) EM image data sets without any change. We also extend HMT to the hierarchical merge forest (HMF) model. By identifying region correspondences, HMF utilizes inter-section information to correct intra-section errors and improves 2D EM segmentation accuracy. HMT is a generic segmentation model, which we demonstrate by applying it to natural image segmentation problems. We propose a constrained conditional model formulation with a globally optimal inference algorithm for HMT and an iterative merge tree sampling algorithm that significantly improves its performance. Experimental results show our approach achieves state-of-the-art accuracy for object-independent image segmentation. Finally, we propose a semi-supervised HMT (SSHMT) model to reduce the high demand for labeled data by supervised learning. We introduce a differentiable unsupervised loss term that enforces consistent boundary predictions and develop a Bayesian learning model that combines supervised and unsupervised information. We show that with a very small amount of labeled data, SSHMT consistently performs close to the supervised HMT with full labeled data sets and significantly outperforms HMT trained with the same labeled subsets.
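
    The merge-tree construction at the heart of HMT is straightforward to sketch: greedily merge the most similar adjacent regions and record each merge as a parent node over its children. The feature, merge cost, and update rule below are illustrative assumptions; the actual model scores merges with a learned boundary classifier and resolves the tree with constrained optimal inference.

        def build_merge_tree(feats, edges):
            # feats: region id -> scalar feature (e.g. mean intensity);
            # edges: (id, id) adjacencies between initial regions.
            feats = dict(feats)
            adj = {frozenset(e) for e in edges if e[0] != e[1]}
            children, next_id = {}, max(feats) + 1
            while adj:
                e = min(adj, key=lambda f: abs(feats[max(f)] - feats[min(f)]))
                a, b = sorted(e)
                children[next_id] = (a, b, abs(feats[a] - feats[b]))
                feats[next_id] = (feats[a] + feats[b]) / 2   # sketch-only update
                # Re-wire surviving adjacencies to the new parent region.
                adj = {f if not (f & e) else frozenset((f - e) | {next_id})
                       for f in adj if f != e}
                next_id += 1
            return children

        # Toy usage: four regions in a row; the two dark and the two bright
        # regions merge first, then the tree closes at the root.
        tree = build_merge_tree({0: 0.1, 1: 0.15, 2: 0.8, 3: 0.85},
                                [(0, 1), (1, 2), (2, 3)])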