483 research outputs found

    Gap Filling of 3-D Microvascular Networks by Tensor Voting

    We present a new algorithm that merges discontinuities in 3-D images of tubular structures exhibiting undesirable gaps. The proposed method is intended mainly for large 3-D images of microvascular networks. To recover the true network topology, the gaps between the closest discontinuous vessels must be filled, and the algorithm presented in this paper aims at achieving this goal. It is based on skeletonization of the segmented network followed by a tensor voting step, and it can merge the most common kinds of discontinuities found in microvascular networks. It is robust, easy to use, and relatively fast. The microvascular network images were obtained by synchrotron tomography at the European Synchrotron Radiation Facility and show samples of intracortical networks. Representative results are illustrated.
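    As an illustration of the decay function at the heart of tensor voting, the hedged Python sketch below scores how strongly two skeleton endpoints "vote" for each other along an osculating arc and bridges the gap when the mutual saliency is high. The function names, the 45-degree angular cutoff, and the bridging threshold are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def stick_vote_saliency(p_voter, t_voter, p_receiver, sigma=10.0, c=None):
            """Saliency of the stick vote cast by a skeleton endpoint at another point.

            p_voter, p_receiver : 3-D coordinates, shape (3,)
            t_voter             : unit tangent of the vessel at the voting endpoint
            sigma               : scale of analysis (voxels), controls how far votes reach
            """
            if c is None:
                # curvature penalty commonly paired with this decay function
                c = -16.0 * np.log(0.1) * (sigma - 1.0) / np.pi ** 2
            v = p_receiver - p_voter
            l = np.linalg.norm(v)
            if l == 0.0:
                return 1.0
            # angle between the voter's tangent and the line joining the two endpoints
            cos_theta = min(abs(float(np.dot(t_voter, v / l))), 1.0)
            theta = np.arccos(cos_theta)
            if theta > np.pi / 4:                    # votes beyond 45 degrees are discarded
                return 0.0
            if theta < 1e-6:                         # straight continuation
                s, kappa = l, 0.0
            else:
                s = theta * l / np.sin(theta)        # arc length of the osculating circle
                kappa = 2.0 * np.sin(theta) / l      # curvature of that circle
            return float(np.exp(-(s ** 2 + c * kappa ** 2) / sigma ** 2))

        def should_bridge(end_a, tan_a, end_b, tan_b, sigma=10.0, threshold=0.3):
            """Bridge two discontinuous vessel endpoints if their mutual votes are salient."""
            s_ab = stick_vote_saliency(end_a, tan_a, end_b, sigma)
            s_ba = stick_vote_saliency(end_b, tan_b, end_a, sigma)
            return min(s_ab, s_ba) >= threshold

        # toy example: two endpoints about 8 voxels apart with nearly collinear tangents
        a, ta = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
        b, tb = np.array([8.0, 1.0, 0.0]), np.array([-1.0, 0.0, 0.0])
        print(should_bridge(a, ta, b, tb, sigma=10.0))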

    Stability, Structure and Scale: Improvements in Multi-modal Vessel Extraction for SEEG Trajectory Planning

    Purpose: Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories by maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving the robustness and accuracy of vessel extraction within a SEEG planning system. Methods: The developed method accounts for the scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency of vesselness responses. The proposed measure allows multiple image modalities to be combined into a single image that is used within the planning system to visualise critical vessels. Results: Twelve paired datasets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, a statistically significant improvement over a semi-automated, single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Conclusions: Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning with reduced patient morbidity.
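    The evaluation above is reported as a Dice similarity coefficient between the extracted vessels and a reference segmentation. A minimal sketch of that metric for binary vessel masks, with made-up toy arrays, might look like this:

        import numpy as np

        def dice_coefficient(seg_a, seg_b):
            """Dice similarity coefficient between two binary segmentations (e.g. vessel masks)."""
            a = np.asarray(seg_a, dtype=bool)
            b = np.asarray(seg_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            total = a.sum() + b.sum()
            return 1.0 if total == 0 else 2.0 * intersection / total

        # toy example: two overlapping vessel masks in a small 3-D volume
        mask_auto = np.zeros((4, 4, 4), dtype=bool)
        mask_auto[1:3, 1:3, 1:3] = True
        mask_rater = np.zeros((4, 4, 4), dtype=bool)
        mask_rater[1:3, 1:3, :3] = True
        print(round(dice_coefficient(mask_auto, mask_rater), 3))   # 0.8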

    Extracting Perceptual Structure in Dot Patterns: An Integrated Approach

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Air Force Office of Scientific Research / AFOSR 86-000

    An Edge-finder based on fuzzy perceptual grouping

    Much recent research in computer vision has been aimed at the recognition of objects in scenes. Perceptual grouping has demonstrated its importance in creating descriptions of objects: elementary edge descriptors are grouped into more complex structures based on relations such as proximity, parallelism, symmetry, and junction. However, exact determination of these relations is impossible, because the relations are inherently uncertain and the output of low-level processing is imperfect. In this paper, an edge-finding scheme based on fuzzy perceptual grouping is introduced. The geometrical relations are treated as fuzzy relations, and the grouping operations built upon these fuzzy relations are discussed. The results of this edge-finder and their comparison with those produced by human observers are illustrated. The software implementation is also described.
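    To make the idea of geometric relations as fuzzy relations concrete, here is a minimal sketch in which proximity and parallelism between two edge fragments are graded membership values combined with a minimum t-norm. The membership functions, scale constant, and combination rule are illustrative assumptions, not the paper's definitions.

        import numpy as np

        def fuzzy_proximity(gap, scale=5.0):
            """Membership in the fuzzy relation 'proximate': 1 for touching fragments, decaying with gap size (pixels)."""
            return float(np.exp(-(gap / scale) ** 2))

        def fuzzy_parallelism(angle_deg):
            """Membership in the fuzzy relation 'parallel': 1 at 0 degrees, 0 at 90 degrees."""
            return float(np.cos(np.radians(angle_deg)) ** 2)

        def grouping_score(gap, angle_deg):
            """Combine the fuzzy relations with a t-norm (minimum) to score a candidate grouping."""
            return min(fuzzy_proximity(gap), fuzzy_parallelism(angle_deg))

        # two nearly collinear edge fragments, 2 pixels apart and 5 degrees off parallel
        print(round(grouping_score(gap=2.0, angle_deg=5.0), 3))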

    Dynamics of Attention in Depth: Evidence from Multi-Element Tracking

    The allocation of attention in depth is examined using a multi-element tracking paradigm. Observers are required to track a predefined subset of two to eight elements in displays containing up to sixteen identical moving elements. We first show that depth cues, such as binocular disparity and occlusion through T-junctions, improve performance in a multi-element tracking task in the case where element boundaries are allowed to intersect in the depiction of motion in a single fronto-parallel plane. We also show that allocating attention across two perceptually distinguishable planar surfaces, either fronto-parallel or receding at a slanting angle and defined by coplanar elements, is easier than allocating attention within a single surface. The same result was not found when attention had to be deployed across items of two color populations rather than of a single color. Our results suggest that, when surface information does not suffice to distinguish between targets and distractors embedded in these surfaces, dividing attention across two surfaces aids in tracking moving targets. National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)

    INVESTIGATING TYPE-TOKEN REGRESSION AND ITS POTENTIAL FOR AUTOMATED TEXT DISCRIMINATION

    The motivation of the present paper is based on the intuition that the sole use of lexical density data, relative to text samples from different languages, authors, linguistic domains, etc., might be a potential indicator for automated text discrimination. In order to look for a reliable and valid lexical density index, we shall review and clarify the mathematical relationship between types (word forms) and tokens (words) by discussing and constructing adequate regression models that might help to differentiate text types from each other. Additionally, we shall use multivariate statistical models (cluster analysis and discriminant analysis) to complement the mathematical lexical density regression model (the TYT formula).

    The motivation of this article arises from the intuition that the use of lexical density alone, for text samples belonging to different languages, authors, linguistic domains, etc., may be potentially valid for discriminating texts automatically. In order to find a valid and reliable lexical density index, we have reviewed and clarified the mathematical relationship between types (word forms) and tokens (words) so as to construct adequate regression models that allow us to distinguish text types. In addition, we have made use of multivariate statistical models (cluster analysis and discriminant analysis) in order to complement and optimise the mathematical lexical density regression model (the TYT formula).
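    The TYT formula itself is not reproduced in this abstract. As a generic illustration of a type-token regression of the kind discussed, the sketch below fits a Heaps'-law-style power curve V = k * N^beta to cumulative type and token counts; the function name and the toy measurements are assumptions.

        import numpy as np

        def fit_type_token_curve(tokens, types):
            """Fit V = k * N**beta (a Heaps'-law-style type-token regression) by least squares in log space.

            tokens : cumulative token counts N at successive sample points
            types  : corresponding cumulative type (word-form) counts V
            """
            log_n, log_v = np.log(tokens), np.log(types)
            beta, log_k = np.polyfit(log_n, log_v, 1)
            return np.exp(log_k), beta

        # toy measurements from a growing text sample (made-up numbers)
        tokens = np.array([100, 500, 1000, 5000, 10000])
        types = np.array([64, 220, 370, 1150, 1900])
        k, beta = fit_type_token_curve(tokens, types)
        print(f"V ~ {k:.2f} * N^{beta:.2f}")   # the exponent characterises how lexical richness grows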

    Towards an Empirically-based Model of Age-graded Behaviour: Trac(ing) linguistic malleability across the entire adult life-span

    Previous panel research has provided individual evidence for aspects of the U-shaped pattern, but these studies typically rely on sampling the same speaker at two points in time, usually in close proximity. As a result, our knowledge about the patterning of age-graded variables across the entire adult life-span is limited. What is needed, thus, is a data-set that captures ongoing linguistic malleability in the individual speaker across all “life experiences that give age meaning” (Eckert 1997:167). Our study is the first to add real-time evidence on an age-graded variable across the lifespan as a whole. We present the results of a novel dynamic data-set that allows us to model speakers’ linguistic choices between ages 19 and 78. We illustrate the age-graded patterns in our data and draw attention to the complex, socially niched ways in which speakers react to age-specific expectations.

    Automated Extraction of Road Information from Mobile Laser Scanning Data

    Effective planning and management of transportation infrastructure requires adequate geospatial data. Existing geospatial data acquisition techniques based on conventional route surveys are very time consuming, labor intensive, and costly. Mobile laser scanning (MLS) technology enables the rapid collection of enormous volumes of highly dense, irregularly distributed, accurate geo-referenced data in the form of three-dimensional (3D) point clouds. Today, more and more commercial MLS systems are available for transportation applications. However, many transportation engineers have neither an interest in the raw 3D point cloud data nor the means to transform such data into the computer-aided design (CAD)-formatted geometric road information they work with. Therefore, automated methods and software tools for rapid and accurate extraction of 2D/3D road information from MLS data are urgently needed. This doctoral dissertation deals with the development and implementation of a novel strategy for the automated extraction of road information from MLS data. The main features of this strategy include: (1) the extraction of road surfaces from large volumes of MLS point clouds, (2) the generation of 2D geo-referenced feature (GRF) images from the road-surface data, (3) the exploitation of the point density and intensity of MLS data for road-marking extraction, and (4) the extension of tensor voting (TV) to curvilinear pavement crack extraction. In accordance with this strategy, a RoadModeler prototype with three computerized algorithms was developed: (1) road-surface extraction, (2) road-marking extraction, and (3) pavement-crack extraction. Four main contributions of this development can be summarized as follows. Firstly, a curb-based approach to road-surface extraction assisted by the vehicle's trajectory is proposed and implemented. The vehicle's trajectory and the curbs that separate road surfaces from sidewalks are used to efficiently separate road-surface points from the large volume of MLS data, and the accuracy of the extracted road surfaces is validated against manually selected reference points. Secondly, the extracted road enables accurate detection of road markings and cracks for transportation applications in road traffic safety. To improve computational efficiency, the extracted 3D road data are converted into 2D image data, termed a GRF image, which serves as input to an automated road-marking extraction algorithm and an automated crack detection algorithm. Thirdly, the automated road-marking extraction algorithm applies point-density-dependent multi-threshold segmentation to the GRF image to overcome the unevenly distributed intensity caused by the scanning range, the incidence angle, and the surface characteristics of the illuminated object; a morphological operation then deals with noise and incompleteness in the extracted road markings. Fourthly, the automated crack extraction algorithm applies an iterative tensor voting (ITV) algorithm to the GRF image for crack enhancement. Tensor voting, a perceptual organization method capable of extracting curvilinear structures from a noisy and corrupted background, is thereby extended to the field of crack detection. The successful development of the three algorithms suggests that the RoadModeler strategy offers a solution to the automated extraction of road information from MLS data. Recommendations are given for future research and development to take this progress beyond the prototype stage and towards everyday use.
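    As a concrete, simplified stand-in for the GRF-image generation step described above, the sketch below projects extracted road-surface points onto a 2-D grid and averages the laser intensity per cell, keeping the per-cell point count as a density channel. The grid resolution, function names, and toy point cloud are assumptions, not the dissertation's implementation.

        import numpy as np

        def rasterize_to_grf(points_xyz, intensity, cell_size=0.05):
            """Project road-surface points onto a 2-D grid, averaging intensity per cell."""
            xy = points_xyz[:, :2]
            origin = xy.min(axis=0)
            cols_rows = np.floor((xy - origin) / cell_size).astype(int)
            n_cols, n_rows = cols_rows.max(axis=0) + 1
            sums = np.zeros((n_rows, n_cols))
            counts = np.zeros((n_rows, n_cols))
            np.add.at(sums, (cols_rows[:, 1], cols_rows[:, 0]), intensity)
            np.add.at(counts, (cols_rows[:, 1], cols_rows[:, 0]), 1)
            grf = np.zeros_like(sums)
            occupied = counts > 0
            grf[occupied] = sums[occupied] / counts[occupied]
            return grf, counts                     # counts doubles as the point-density channel

        # toy point cloud: 1000 random road points over a 2 m x 1 m patch with fake reflectance
        rng = np.random.default_rng(0)
        pts = rng.uniform([0.0, 0.0, 0.0], [2.0, 1.0, 0.02], size=(1000, 3))
        refl = rng.uniform(0.0, 255.0, size=1000)
        image, density = rasterize_to_grf(pts, refl, cell_size=0.1)
        print(image.shape, int(density.max()))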

    Intrinsic Dimension Estimation: Relevant Techniques and a Benchmark Framework

    When dealing with datasets comprising high-dimensional points, it is usually advantageous to discover some structure in the data. A fundamental piece of information needed for this purpose is the minimum number of parameters required to describe the data while minimizing the information loss. This number, usually called the intrinsic dimension, can be interpreted as the dimension of the manifold from which the input data are assumed to be drawn. Owing to its usefulness in many theoretical and practical problems, the concept of intrinsic dimension has gained considerable attention in the scientific community in recent decades, motivating the large number of intrinsic dimensionality estimators proposed in the literature. However, the problem is still open, since most techniques cannot efficiently deal with datasets drawn from manifolds of high intrinsic dimension that are nonlinearly embedded in higher-dimensional spaces. This paper surveys some of the most interesting, widely used, and advanced state-of-the-art methodologies. Unfortunately, since no benchmark database exists in this research field, an objective comparison among different techniques is not possible. Consequently, we suggest a benchmark framework and apply it to comparatively evaluate relevant state-of-the-art estimators.
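    The survey does not single out one estimator, but as one illustrative member of the family it covers, here is a hedged NumPy sketch of a Levina-Bickel-style maximum-likelihood intrinsic dimension estimator; the neighbourhood size, the brute-force distance computation, and the toy data are assumptions made for the example.

        import numpy as np

        def mle_intrinsic_dimension(data, k=10):
            """Maximum-likelihood intrinsic dimension estimate (Levina-Bickel-style).

            data : (n_samples, n_features) array of points
            k    : number of nearest neighbours used per point
            """
            d = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)              # exclude self-distances
            knn = np.sort(d, axis=1)[:, :k]          # distances to the k nearest neighbours
            # per-point estimate: inverse mean log-ratio to the k-th neighbour distance
            log_ratios = np.log(knn[:, -1:] / knn[:, :-1])
            m_hat = (k - 1) / log_ratios.sum(axis=1)
            return float(m_hat.mean())

        # toy check: points on a 2-D linear subspace embedded in 5-D should score close to 2
        rng = np.random.default_rng(1)
        plane = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5))
        print(round(mle_intrinsic_dimension(plane, k=10), 2))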