20,735 research outputs found

    A graph-based mathematical morphology reader

    Full text link
    This survey paper aims to provide a "literary" anthology of mathematical morphology on graphs. It describes in the English language many ideas stemming from a large number of different papers, hence providing a unified view of an active and diverse field of research.

    A Fast, Memory-Efficient Alpha-Tree Algorithm using Flooding and Tree Size Estimation

    Get PDF
    The alpha-tree represents an image as a hierarchical set of alpha-connected components. Computation of alpha-trees suffers from high computational and memory requirements compared with similar component-tree algorithms such as the max-tree. Here we introduce a novel alpha-tree algorithm using 1) a flooding algorithm for computational efficiency and 2) tree size estimation (TSE) for memory efficiency. In TSE, an exponential decay model was fitted to normalized tree sizes as a function of the normalized root mean squared deviation (NRMSD) of edge-dissimilarity distributions, and the model was used to estimate the optimum memory allocation size for alpha-tree construction. An experiment on 1256 images shows that our algorithm runs 2.27 times faster than that of Ouzounis and Soille thanks to the flooding algorithm, and TSE reduced the average memory allocation of the proposed algorithm by 40.4%, eliminating 86.0% of unused allocated memory at a negligible computational cost.
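
    The abstract gives enough detail to sketch the shape of TSE-style memory estimation. Below is a minimal Python sketch, assuming an exponential decay model size ≈ a·exp(−b·NRMSD) + c fitted with scipy; the training values, initial guesses, and safety margin are illustrative assumptions, not the paper's actual fit.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay_model(nrmsd, a, b, c):
        # Exponential decay: normalized tree size shrinks as the NRMSD of the
        # edge-dissimilarity distribution grows.
        return a * np.exp(-b * nrmsd) + c

    # Hypothetical training data: per-image NRMSD of the edge-dissimilarity
    # distribution and the observed tree size divided by the pixel count.
    nrmsd_obs = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.50])
    norm_size = np.array([1.95, 1.60, 1.25, 0.90, 0.70, 0.62])

    params, _ = curve_fit(decay_model, nrmsd_obs, norm_size, p0=(1.5, 5.0, 0.5))

    def estimate_allocation(num_pixels, nrmsd, safety=1.1):
        """Estimate how many tree nodes to pre-allocate for a new image."""
        predicted = decay_model(nrmsd, *params) * num_pixels
        return int(np.ceil(predicted * safety))

    print(estimate_allocation(num_pixels=1024 * 1024, nrmsd=0.15))
    ```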

    Efficiently Tracking Homogeneous Regions in Multichannel Images

    Full text link
    We present a method for tracking Maximally Stable Homogeneous Regions (MSHR) in images with an arbitrary number of channels. MSHR are conceptually very similar to Maximally Stable Extremal Regions (MSER) and Maximally Stable Color Regions (MSCR), but can also be applied to hyperspectral and color images while remaining extremely efficient. The presented approach makes use of the edge-based component-tree, which can be calculated in linear time. In the tracking step, the MSHR are localized by matching them to the nodes in the component-tree. We use rotationally invariant region and gray-value features that can be calculated through first- and second-order moments at low computational complexity. Furthermore, we use a weighted feature vector to improve the data association in the tracking step. The algorithm is evaluated on a collection of different tracking scenes from the literature. Furthermore, we present two different applications: 2D object tracking and the 3D segmentation of organs. (Comment: to be published in ICPRS 2017 proceedings.)
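
    The abstract does not spell out the exact feature set, but rotation-invariant features built from first- and second-order moments are standard. A minimal Python sketch, assuming a single-channel image and a boolean region mask; the specific feature vector (area, covariance eigenvalues, mean and standard deviation of gray values) is an illustrative assumption, not the paper's:

    ```python
    import numpy as np

    def region_features(mask, image):
        """Rotation-invariant region features from first- and second-order moments.

        mask  : boolean array marking the region's pixels
        image : gray-value array of the same shape
        """
        ys, xs = np.nonzero(mask)
        area = xs.size                    # zeroth-order moment
        cx, cy = xs.mean(), ys.mean()     # first-order moments -> centroid
        dx, dy = xs - cx, ys - cy
        # Second-order central moments form the covariance matrix of the shape.
        cov = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                        [np.mean(dx * dy), np.mean(dy * dy)]])
        # Eigenvalues of the covariance are invariant to rotation of the region.
        lam = np.linalg.eigvalsh(cov)
        vals = image[ys, xs]
        return np.array([area, lam[0], lam[1], vals.mean(), vals.std()])

    mask = np.zeros((16, 16), bool); mask[4:10, 4:8] = True
    img = np.random.rand(16, 16)
    print(region_features(mask, img))
    ```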

    A Physiologically Based System Theory of Consciousness

    Get PDF
    A system which uses large numbers of devices to perform a complex functionality is forced to adopt a simple functional architecture by the need to construct copies of, repair, and modify the system. A simple functional architecture means that functionality is partitioned into relatively equal-sized components on many levels of detail down to the device level, a mapping exists between the different levels, and exchange of information between components is minimized. In the instruction architecture, functionality is partitioned on every level into instructions, which exchange unambiguous system information and therefore output system commands. The von Neumann architecture is a special case of the instruction architecture in which instructions are coded as unambiguous system information. In the recommendation (or pattern extraction) architecture, functionality is partitioned on every level into repetition elements, which can freely exchange ambiguous information and therefore output only system action recommendations, which must compete for control of system behavior. Partitioning is optimized to the best tradeoff between even partitioning and minimum cost of distributing data. Natural pressures deriving from the need to construct copies under DNA control, to recover from errors, failures, and damage, and to add new functionality derived from random mutations have resulted in biological brains being constrained to adopt the recommendation architecture. The resultant hierarchy of functional separations can be the basis for understanding psychological phenomena in terms of physiology. A theory of consciousness is described based on the recommendation architecture model for biological brains. Consciousness is defined at a high level in terms of sensory-independent image sequences, including self-images, with the role of extending the search of records of individual experience for behavioral guidance in complex social situations. Functional components of this definition of consciousness are developed, and it is demonstrated that these components can be translated through subcomponents to descriptions in terms of known and postulated physiological mechanisms.

    New Approaches for Data-mining and Classification of Mental Disorder in Brain Imaging Data

    Get PDF
    Brain imaging data are incredibly complex, and new information is being learned as approaches to mine these data are developed. In addition to studying the healthy brain, new approaches are needed for using this information to provide insight into complex mental illnesses such as schizophrenia. Functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) are two well-known neuroimaging approaches that provide complementary information, both of which produce a huge amount of data that are not easily modelled. Currently, diagnosis of mental disorders is based on a patient's self-reported experiences and observed behavior over the longitudinal course of the illness. There is great interest in identifying biologically based markers of illness, rather than relying on symptoms, which are a very indirect manifestation of the illness. The hope is that biological markers will lead to earlier diagnosis and improved treatment as well as reduced costs. Understanding mental disorders is a challenging task due to the complexity of brain structure and function, overlapping features between disorders, small numbers of data sets for training, heterogeneity within disorders, and a very large amount of high-dimensional data. This doctoral work proposes machine learning and data mining based algorithms to detect abnormal functional network connectivity patterns of patients with schizophrenia and distinguish them from healthy controls using 1) independent components obtained from task-related fMRI data, 2) functional network correlations based on resting state and a hierarchy of tasks, and 3) functional network correlations in both fMRI and MEG data. The abnormal activation patterns of the functional network correlations of patients are characterized using a statistical analysis and then used as input to classification algorithms. The framework presented in this doctoral study achieves a good characterization of schizophrenia and provides an initial step towards designing an objective biological marker-based diagnostic test for schizophrenia. The methods we develop can also help us to more fully leverage available imaging technology in order to better understand the mystery of the human brain, the most complex organ in the human body.
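
    As a rough illustration of the final classification step only, here is a minimal Python sketch using scikit-learn on synthetic data: functional network correlation matrices are vectorized (upper triangle) and fed to a linear SVM. The data, network count, and classifier choice are assumptions for illustration; the dissertation's actual statistical characterization and pipeline are not reproduced here.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def fnc_features(corr):
        """Vectorize a functional-network-correlation matrix: keep only the
        upper triangle (off-diagonal), since the matrix is symmetric."""
        iu = np.triu_indices_from(corr, k=1)
        return corr[iu]

    # Synthetic stand-in data: 40 subjects, correlations among 20 networks.
    n_subjects, n_networks = 40, 20
    X = np.stack([fnc_features(np.corrcoef(rng.normal(size=(n_networks, 50))))
                  for _ in range(n_subjects)])
    y = rng.integers(0, 2, size=n_subjects)   # 0 = control, 1 = patient

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```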

    Automation Process for Morphometric Analysis of Volumetric CT Data from Pulmonary Vasculature in Rats

    Get PDF
    With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating the A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation, and structural hierarchical ordering, with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages, including significantly improved speed and minimized operator interaction and bias. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured with intense operator intervention.
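
    The abstract does not name the hierarchical ordering scheme used; Strahler ordering is one common choice for pulmonary arterial trees, and a minimal Python sketch of it on a toy tree follows (the tree encoding and example values are assumptions for illustration):

    ```python
    def strahler_order(children, node):
        """Compute the Strahler order of `node` in a vessel tree.

        children: dict mapping each node to a list of its child segments.
        A leaf has order 1; a parent whose two highest child orders are
        equal gets that order + 1, otherwise it keeps the maximum child order.
        """
        kids = children.get(node, [])
        if not kids:
            return 1
        orders = sorted((strahler_order(children, k) for k in kids), reverse=True)
        if len(orders) > 1 and orders[0] == orders[1]:
            return orders[0] + 1
        return orders[0]

    # Tiny example tree: root 0 branches into 1 and 2; 1 branches into 3 and 4.
    tree = {0: [1, 2], 1: [3, 4]}
    print(strahler_order(tree, 0))   # -> 2
    ```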

    Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning

    Full text link
    Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In these media, dynamic and still elements are juxtaposed to create an artistic and narrative experience. Creating a high-quality, aesthetically pleasing cinemagraph requires isolating objects in a semantically meaningful way and then selecting good start times and looping periods for those objects to minimize visual artifacts (such as tearing). To achieve this, we present a new technique that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful. Given a scene with multiple objects, there are many cinemagraphs one could create. Our method evaluates these multiple candidates and presents the best one, as determined by a model trained to predict human preferences in a collaborative way. We demonstrate the effectiveness of our approach with multiple results and a user study. (Comment: To appear in ICCV 2017. Total 17 pages including the supplementary material.)
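
    The paper's optimization is not detailed in the abstract, but one standard ingredient of loop selection is scoring a candidate (start, period) pair by the mismatch between the loop's endpoints inside the region. A minimal Python sketch under that assumption; the scoring function, brute-force search, and stand-in data are illustrative, not the authors' method:

    ```python
    import numpy as np

    def best_loop(frames, mask, min_period=8):
        """Pick a start frame and loop period for one region of a video.

        frames: array of shape (T, H, W); mask: boolean (H, W) region mask.
        Scores each candidate loop by the mismatch between its first frame
        and the frame just past its end, a common proxy for visible seams.
        """
        T = frames.shape[0]
        best, best_cost = None, np.inf
        for start in range(T):
            for period in range(min_period, T - start):
                seam = frames[start] - frames[start + period]
                cost = np.mean(np.abs(seam[mask]))
                if cost < best_cost:
                    best, best_cost = (start, period), cost
        return best

    video = np.random.rand(40, 32, 32)            # stand-in video
    region = np.zeros((32, 32), bool); region[8:24, 8:24] = True
    print(best_loop(video, region))
    ```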

    Distributed Connected Component Filtering and Analysis in 2-D and 3-D Tera-Scale Data Sets

    Get PDF
    Connected filters and multi-scale tools are region-based operators acting on the connected components of an image. Component trees are image representations that allow these operations to be performed efficiently, as they represent the inclusion relationship of the connected components hierarchically. This paper presents disccofan (DIStributed Connected COmponent Filtering and ANalysis), a new method that extends the previous 2-D implementation of the Distributed Component Forests (DCFs) to handle 3-D processing and higher-dynamic-range data sets. disccofan combines shared and distributed memory techniques to efficiently compute component trees, user-defined attribute filters, and multi-scale analysis. Compared to similar methods, disccofan is faster and scales better on low and moderate dynamic range images, and it is the only method with a speed-up larger than 1 on a realistic, astronomical floating-point data set. It achieves a speed-up of 11.20 using 48 processes to compute the DCF of a 162-gigapixel, single-precision floating-point 3-D data set, while reducing the memory used by a factor of 22. This approach is suitable for performing attribute filtering and multi-scale analysis on very large 2-D and 3-D data sets, up to single-precision floating-point values.
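
    disccofan targets distributed component forests on tera-scale data, which is far beyond a short sketch; as a single-machine illustration of what a connected attribute filter does, here is a minimal Python area filter on a binary image (the function name and threshold are assumptions for illustration):

    ```python
    import numpy as np
    from scipy import ndimage

    def area_filter(binary, min_area):
        """Remove connected components with fewer than `min_area` pixels.

        A connected (attribute) filter: it only ever deletes whole connected
        components, so it never creates new edges in the image.
        """
        labels, _ = ndimage.label(binary)
        sizes = np.bincount(labels.ravel())   # component areas; index 0 = background
        keep = sizes >= min_area
        keep[0] = False                       # background is never a component
        return keep[labels]

    img = np.zeros((8, 8), bool)
    img[1:3, 1:3] = True    # 4-pixel component: kept
    img[6, 6] = True        # 1-pixel component: removed
    print(area_filter(img, min_area=2).sum())   # -> 4
    ```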