
    A region-based algorithm for automatic bone segmentation in volumetric CT

    In Computed Tomography (CT), bone segmentation is considered an important step to extract bone parameters, which are frequently useful for computer-aided diagnosis, surgery and treatment of many diseases such as osteoporosis. Consequently, the development of accurate and reliable segmentation techniques is essential, since it often has a great impact on quantitative image analysis and diagnosis outcome. This chapter presents an automated multistep approach for bone segmentation in volumetric CT datasets. It starts with a three-dimensional (3D) watershed operation on the image gradient magnitude. The outcome of the watershed algorithm is an over-partitioned image of many 3D regions that can be merged, yielding a meaningful image partitioning. In order to reduce the number of regions, a merging procedure was performed that merges neighbouring regions presenting a mean intensity distribution difference within ±15%. Finally, once all bones had been distinguished in high contrast, the final 3D bone segmentation was achieved by selecting all regions with bone fragments, using the information retrieved by a threshold mask. The bone contours were accurately defined according to the watershed region outlines instead of the thresholding segmentation result. This new method was tested by segmenting the rib cage on 185 CT images, acquired at the São João Hospital of Porto (Portugal), and evaluated using the Dice similarity coefficient as a statistical validation metric, leading to a mean coefficient score of 0.89. This could represent a step forward towards accurate and automatic quantitative analysis in clinical environments, decreasing time consumption, user dependence and subjectivity. The authors acknowledge the Foundation for Science and Technology (FCT), Portugal, for the fellowships with the references SFRH/BD/74276/2010, SFRH/BD/68270/2010 and SFRH/BPD/46851/2008. This work was also supported by FCT R&D project PTDC/SAU-BEB/103368/2008
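    The Dice similarity coefficient used above for validation is straightforward to compute from two binary masks. A minimal NumPy sketch (the toy masks below are illustrative, not the hospital data):

```python
import numpy as np

def dice_coefficient(seg, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = seg.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, truth).sum() / total

# Toy example: a 4-pixel mask versus a 6-pixel mask overlapping in 4 pixels
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_coefficient(a, b))  # 2*4/(4+6) = 0.8
```

    A score of 1.0 means the segmentation and the ground truth coincide exactly, so the mean score of 0.89 reported above indicates substantial overlap with the manual reference.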

    Part decomposition of 3D surfaces

    This dissertation describes a general algorithm that automatically decomposes real-world scenes and objects into visual parts. The input to the algorithm is a 3D triangle mesh that approximates the surfaces of a scene or object. This geometric mesh completely specifies the shape of interest. The output of the algorithm is a set of boundary contours that dissect the mesh into parts where these parts agree with human perception. In this algorithm, shape alone defines the location of a boundary contour for a part. The algorithm leverages a human vision theory known as the minima rule, which states that human visual perception tends to decompose shapes into parts along lines of negative curvature minima. Specifically, the minima rule governs the location of part boundaries, and as a result the algorithm is known as the Minima Rule Algorithm. Previous computer vision methods have attempted to implement this rule but have used pseudo measures of surface curvature. Thus, these prior methods are not true implementations of the rule. The Minima Rule Algorithm is a three-step process that consists of curvature estimation, mesh segmentation, and quality evaluation. These steps have led to three novel algorithms known as Normal Vector Voting, Fast Marching Watersheds, and Part Saliency Metric, respectively. For each algorithm, this dissertation presents both the supporting theory and experimental results. The results demonstrate the effectiveness of the algorithm using both synthetic and real data and include comparisons with previous methods from the research literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art
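    The minima rule's negative-curvature criterion can be illustrated on a single mesh edge: with triangles wound consistently so their normals face outward, a shared edge is concave (a part-boundary candidate) when the opposite vertex of one triangle lies on the outward-normal side of the other. A simplified sketch of this test, not the dissertation's Normal Vector Voting estimator:

```python
import numpy as np

def is_concave_edge(a, b, c, d):
    """Shared edge (a, b) between triangle (a, b, c) and a neighbouring
    triangle whose far vertex is d; (a, b, c) is wound counter-clockwise
    as seen from outside the surface. The edge is concave (a valley fold,
    hence a minima-rule boundary candidate) when d lies on the
    outward-normal side of the plane of (a, b, c)."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    n = np.cross(b - a, c - a)            # outward normal of (a, b, c)
    return float(np.dot(n, d - a)) > 0.0

# Flat triangle in the z=0 plane, neighbour's far vertex lifted above it:
# the fold between them is a valley (concave edge)
print(is_concave_edge([0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, -1, 0.5]))  # True
```

    A full implementation would walk every interior edge of the mesh and chain the concave ones into closed part-boundary contours.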

    Challenges in imaging and predictive modeling of rhizosphere processes

    Background: Plant-soil interaction is central to human food production and ecosystem function. Thus, it is essential not only to understand, but also to develop predictive mathematical models which can be used to assess how climate and soil management practices will affect these interactions. Scope: In this paper we review the current developments in structural and chemical imaging of rhizosphere processes within the context of multiscale mathematical image-based modeling. We outline areas that need more research and areas which would benefit from more detailed understanding. Conclusions: We conclude that the combination of structural and chemical imaging with modeling is an incredibly powerful tool which is fundamental for understanding how plant roots interact with soil. We emphasize the need for more researchers to be attracted to this area, which is so fertile for future discoveries. Finally, model building must go hand in hand with experiments. In particular, there is a real need to integrate rhizosphere structural and chemical imaging with modeling for a better understanding of the rhizosphere processes, leading to models which explicitly account for pore-scale processes

    General Road Detection Algorithm, a Computational Improvement

    This article proposes a method improving the Locally Adaptive Soft-Voting (LASV) algorithm of Kong et al., described in "General road detection from a single image". This algorithm aims to detect and segment the road in structured and unstructured environments. Evaluation of our method over different image datasets shows that it is sped up by up to 32 times and precision is improved by up to 28% compared to the original method. This enables our method to come closer to real-time requirements

    Optimization of facade segmentation based on layout priors

    We propose an algorithm that provides a pixel-wise classification of building facades. Building facades provide a rich environment for testing semantic segmentation techniques. They come in a variety of styles affecting appearance and layout. On the other hand, they exhibit a degree of stability in the arrangement of structures across different instances. Furthermore, a single image is often composed of a repetitive architectural pattern. We integrate appearance, layout and repetition cues in a single energy function that is optimized through the TRW-S algorithm to provide a classification of superpixels. The appearance energy is based on scores of a Random Forest classifier. The feature space is composed of higher-level vectors encoding distance to structure clusters. Layout priors are obtained from locations and structural adjacencies in training data. In addition, priors result from translational symmetry cues acquired from the scene itself through clustering via the α-expansion graph-cut algorithm. Our results are on par with the state of the art. We are able to fine-tune classifications at the superpixel level, while most methods model all architectural features with bounding rectangles

    Information extraction from sensor networks using the Watershed transform algorithm

    Wireless sensor networks are an effective tool to provide fine resolution monitoring of the physical environment. Sensors generate continuous streams of data, which leads to several computational challenges. As sensor nodes become increasingly active devices, with more processing and communication resources, various methods of distributed data processing and sharing become feasible. The challenge is to extract information from the gathered sensory data with a specified level of accuracy in a timely and power-efficient manner. This paper presents a new solution to distributed information extraction that makes use of the morphological Watershed algorithm. The Watershed algorithm dynamically groups sensor nodes into homogeneous network segments with respect to their topological relationships and their sensing-states. This setting allows network programmers to manipulate groups of spatially distributed data streams instead of individual nodes. This is achieved by using network segments as programming abstractions on which various query processes can be executed. To this end, we present a reformulation of the global Watershed algorithm. The modified Watershed algorithm is fully asynchronous, where sensor nodes can autonomously process their local data in parallel and in collaboration with neighbouring nodes. Experimental evaluation shows that the presented solution is able to considerably reduce query resolution cost without sacrificing the quality of the returned results. When compared to similar purpose schemes, such as "Logical Neighborhood", the proposed approach reduces the total query resolution overhead by up to 57.5%, reduces the number of nodes involved in query resolution by up to 59%, and reduces the setup convergence time by up to 65.1%
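    The grouping behaviour of the Watershed transform can be illustrated with a centralized toy on a one-dimensional stream of readings: each position follows steepest descent to a local minimum, and all positions draining to the same minimum form one homogeneous segment. This is only a single-machine sketch of the underlying idea, not the paper's fully asynchronous in-network reformulation:

```python
def watershed_labels(readings):
    """Label each position in a 1-D list of sensor readings with the
    index of the local minimum reached by steepest descent; positions
    sharing a label form one catchment-basin segment. Plateaus of equal
    readings are left unmerged in this simplified version."""
    n = len(readings)
    labels = [0] * n
    for start in range(n):
        i = start
        while True:
            best = i
            for j in (i - 1, i + 1):      # strictly lower neighbours
                if 0 <= j < n and readings[j] < readings[best]:
                    best = j
            if best == i:                 # reached a local minimum
                break
            i = best
        labels[start] = i
    return labels

stream = [3, 2, 1, 2, 4, 3, 1, 2]
print(watershed_labels(stream))  # [2, 2, 2, 2, 2, 6, 6, 6] -> two basins
```

    In the distributed setting each node would run this descent locally, exchanging readings only with its radio neighbours, which is what makes segment formation cheap relative to centralized query schemes.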

    Quantitative analysis of the epithelial lining architecture in radicular cysts and odontogenic keratocysts

    BACKGROUND: This paper describes a quantitative analysis of the cyst lining architecture in radicular cysts (of inflammatory aetiology) and odontogenic keratocysts (thought to be developmental or neoplastic), including the latter's two counterparts: solitary and associated with the Basal Cell Naevus Syndrome (BCNS). METHODS: Epithelial linings from 150 images (from 9 radicular cysts, 13 solitary keratocysts and 8 BCNS keratocysts) were segmented into theoretical cells using a semi-automated partition based on the intensity of the haematoxylin stain, which defined exclusive areas relative to each detected nucleus. Various morphometrical parameters were extracted from these "cells" and epithelial layer membership was computed using a systematic clustering routine. RESULTS: Statistically significant differences were observed across the 3 cyst types at both the morphological and architectural levels of the lining. Case-wise discrimination between radicular cysts and keratocysts was highly accurate (with an error of just 3.3%). However, the odontogenic keratocyst subtypes could not be reliably separated into the original classes, achieving discrimination rates only slightly above random allocation (60%). CONCLUSION: The methodology presented is able to provide new measures of epithelial architecture and may help to characterise and compare tissue spatial organisation, as well as provide useful procedures for automating certain aspects of histopathological diagnosis

    Automatic Detection of Critical Dermoscopy Features for Malignant Melanoma Diagnosis

    Improved methods are provided for computer-aided analysis of identifying features of skin lesions from digital images of the lesions. Improved preprocessing of the image is provided that 1) eliminates artifacts that occlude or distort skin lesion features and 2) identifies groups of pixels within the skin lesion that represent features and/or facilitate the quantification of features, including improved digital hair removal algorithms. Improved methods for analyzing lesion features are also provided

    Histological Quantification in Temporal Lobe Epilepsy

    Approximately 30 percent of epilepsy patients suffer from refractory temporal lobe epilepsy (TLE), which is commonly treated with resection of the epileptogenic tissue. However, surgical treatment presents many challenges in locating the epileptogenic focus, and thus not all patients become seizure-free following surgery. Advances in techniques can lead to improved localization of the epileptogenic zone and may be validated by correlating MRI with the neuropathology of the excised cortical tissue. Focal cortical dysplasias are a neuropathological group of cortical malformations that are often found in cases of refractory epilepsy; however, they are subtle and difficult to quantify. The purpose of this research is to employ histology image analysis techniques to better characterize these abnormalities at the neuronal and laminar level, allowing for correlative MRI-histology studies and improved lesion detection in medically intractable TLE