
    Unsupervised level set parameterization using multi-scale filtering

    This paper presents a novel framework for unsupervised level set parameterization using multi-scale filtering. A standard multi-scale, directional filtering algorithm is used to capture orientation coherence in edge regions. The latter is encoded in entropy-based image 'heatmaps', which weight the forces guiding level set evolution. Experiments are conducted on two large benchmark databases as well as on real proteomics images. The results demonstrate that the proposed framework accelerates contour convergence while achieving segmentation quality comparable to that of an empirically optimized parameterization.
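
    A minimal Python sketch of the idea, using NumPy and scikit-image: a Gabor filter bank stands in for the paper's directional filters, and all function names, parameters and the force term are illustrative assumptions rather than the authors' implementation.

        import numpy as np
        from skimage.filters import gabor

        def orientation_entropy_heatmap(image, n_orientations=8, frequency=0.2):
            # Directional (Gabor) filter bank: one response magnitude per orientation.
            thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
            mags = []
            for theta in thetas:
                real, imag = gabor(image, frequency=frequency, theta=theta)
                mags.append(np.hypot(real, imag))
            responses = np.stack(mags)
            # Per-pixel distribution over orientations, then its Shannon entropy.
            p = responses / (responses.sum(axis=0, keepdims=True) + 1e-12)
            entropy = -(p * np.log(p + 1e-12)).sum(axis=0)
            # Low entropy = one dominant orientation (a coherent edge); invert
            # so the heatmap is large where orientation is coherent.
            h = 1.0 - entropy / np.log(n_orientations)
            return (h - h.min()) / (h.max() - h.min() + 1e-12)

        def level_set_step(phi, force, heatmap, dt=0.5):
            # The heatmap locally scales the force driving the contour, so
            # evolution speeds up where edge evidence is coherent.
            return phi + dt * heatmap * force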

    Image Analysis and Processing with Applications in Proteomics and Medicine

    This thesis introduces unsupervised image analysis algorithms for the segmentation of several types of images, with an emphasis on proteomics and medical images. Segmentation is a challenging task in computer vision, with essential applications in biomedical engineering, remote sensing, robotics and automation. Typically, the target region is separated from the remaining image regions using defining features such as intensity, texture, color or motion cues. In this light, multiple segments are generated, and selecting the most significant ones becomes a contentious decision, as it hinges heavily on heuristic considerations. Moreover, the separation of the target regions is impeded by several daunting factors, such as background clutter, noise and artifacts, and occlusions of multiple target regions. This thesis focuses on image segmentation using deformable models, specifically region-based Active Contours (ACs), because of their strong mathematical foundation and their appealing properties.
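
    As a point of reference for the region-based Active Contours the thesis builds on, here is a minimal NumPy sketch of a single Chan-Vese-style update step; the function and its parameters are illustrative and not taken from the thesis.

        import numpy as np

        def chan_vese_step(phi, image, dt=0.5, lam1=1.0, lam2=1.0):
            # Mean intensity inside (phi > 0) and outside the current contour.
            inside = phi > 0
            c1 = image[inside].mean() if inside.any() else 0.0
            c2 = image[~inside].mean() if (~inside).any() else 0.0
            # Region force: each pixel is pushed toward the region whose mean
            # intensity it better matches; no edge information is required.
            force = -lam1 * (image - c1) ** 2 + lam2 * (image - c2) ** 2
            return phi + dt * force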

    Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    Automated source extraction and parameterization represent a crucial challenge for next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper we present a new algorithm, dubbed CAESAR (Compact And Extended Source Automated Recognition), to detect and parameterize extended sources in radio interferometric maps. It is based on a pre-filtering stage, which performs image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parameterization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR as a modular software library that also includes different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australia Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the ASKAP-EMU survey. The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane, and compared with existing algorithms. When compared to a human-driven analysis, the algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
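
    The following Python/SciPy sketch illustrates the general shape of such a pipeline under simplifying assumptions: median filtering stands in for CAESAR's pre-filtering and compact-source suppression, and plain connected-component labelling replaces its adaptive superpixel clustering; all names and defaults here are hypothetical.

        import numpy as np
        from scipy import ndimage as ndi

        def extract_diffuse_sources(radio_map, bkg_box=64, compact_scale=5, nsigma=3.0):
            # Pre-filtering: a large-box median approximates the local background,
            # while a small-scale median suppresses compact (point-like) sources.
            background = ndi.median_filter(radio_map, size=bkg_box)
            smoothed = ndi.median_filter(radio_map, size=compact_scale)
            residual = smoothed - background
            # Threshold at nsigma times a robust noise estimate (MAD).
            sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
            mask = residual > nsigma * sigma
            # Final segmentation: connected components stand in for the
            # superpixel clustering stage of the paper.
            labels, n_sources = ndi.label(mask)
            return labels, n_sources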

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of the primary visual cortex of primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to posit that neural activity has to represent sensory data efficiently with respect to the statistics of natural scenes. Furthermore, it is believed that such efficient coding is achieved through competition across neurons, generating a sparse representation, that is, one in which a relatively small number of neurons is simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, homeostasis provides an optimal balance for the representation of natural images within the population of neurons: competition in sparse coding is optimized when it is fair. By helping to optimize statistical competition across neurons, homeostasis is crucial to providing a more efficient solution to the emergence of independent components.
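
    A rough NumPy sketch of the general idea, not the paper's algorithm: matching pursuit provides the sparse coding step, and a crude multiplicative gain rule stands in for the paper's cooperative homeostasis; every name and constant here is an illustrative assumption.

        import numpy as np

        def learn_gains(X, D, n_active=5, eta=0.01, n_epochs=10):
            # D: dictionary of unit-norm atoms, shape (n_atoms, n_features).
            # Each neuron (atom) gets a gain; homeostasis nudges gains so that
            # all neurons are selected equally often, keeping competition fair.
            n_atoms = D.shape[0]
            gains = np.ones(n_atoms)
            counts = np.full(n_atoms, 1e-12)
            for _ in range(n_epochs):
                for x in X:
                    residual = x.astype(float).copy()
                    for _ in range(n_active):
                        # Matching pursuit with gain-modulated competition.
                        scores = gains * (D @ residual)
                        k = int(np.argmax(np.abs(scores)))
                        residual -= (D[k] @ residual) * D[k]
                        counts[k] += 1.0
                    # Over-used neurons are handicapped, under-used ones boosted.
                    gains *= np.exp(-eta * (counts / counts.mean() - 1.0))
            return gains, counts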

    Dynamic quantum clustering: a method for visual exploration of structures in data

    A given set of data points in some feature space may be associated with a Schrödinger equation whose potential is determined by the data. This is known to lead to good clustering solutions. Here we extend this approach into a full-fledged dynamical scheme using a time-dependent Schrödinger equation. Moreover, we approximate this Hamiltonian formalism by a truncated calculation within a set of Gaussian wave functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploring relationships among data points through observation of varying dynamical distances among points and the convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition or feature filtering.
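
    For intuition, here is a NumPy sketch of the static precursor of this scheme: gradient descent of point replicas in the data-derived quantum potential, a simplification that omits the paper's coherent-state time evolution; all names and step sizes are illustrative.

        import numpy as np

        def quantum_potential(x, data, sigma=1.0):
            # Parzen "wavefunction" psi(x) = sum_i exp(-|x - x_i|^2 / (2 sigma^2));
            # the potential for which psi is a ground state is, up to a constant,
            # V(x) = sum_i w_i |x - x_i|^2 / (2 sigma^2 psi(x)).
            d2 = ((x - data) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return (w * d2).sum() / (2.0 * sigma ** 2 * w.sum() + 1e-300)

        def descend(data, sigma=1.0, lr=0.2, n_steps=100, eps=1e-4):
            # Replicas of the data points slide downhill in V by finite-difference
            # gradient descent; replicas that end up together form one cluster.
            pts = data.astype(float).copy()
            for _ in range(n_steps):
                for i, p in enumerate(pts):
                    grad = np.empty_like(p)
                    for k in range(p.size):
                        e = np.zeros_like(p)
                        e[k] = eps
                        grad[k] = (quantum_potential(p + e, data, sigma)
                                   - quantum_potential(p - e, data, sigma)) / (2 * eps)
                    pts[i] = p - lr * grad
            return pts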