
    Automatic Leaf Extraction from Outdoor Images

    Automatic plant recognition and disease analysis may be streamlined by an image of a complete, isolated leaf as an initial input. Segmenting leaves from natural images is a hard problem: cluttered and complex backgrounds, often composed of other leaves, are commonplace. Furthermore, leaf appearance is highly dependent upon illumination and viewing perspective. To address these issues we propose a methodology which exploits the leaves' venous systems in tandem with other low-level features. Background and leaf markers are created using colour, intensity and texture. Two approaches, watershed and graph-cut, are investigated and their results compared. Primary-secondary vein detection and protrusion-notch removal are applied to refine the extracted leaf. The efficacy of our approach is demonstrated against existing work. Comment: 13 pages, India-UK Advanced Technology Centre of Excellence in Next Generation Networks, Systems and Services (IU-ATC), 201
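    A minimal sketch of the marker-plus-watershed idea described above, using OpenCV. The colour thresholds and morphological marker construction are illustrative stand-ins for the paper's colour, intensity and texture features, and the file path is a placeholder.

```python
# Illustrative marker-based watershed leaf segmentation (not the paper's method).
import cv2
import numpy as np

def segment_leaf(path):
    img = cv2.imread(path)                        # BGR image of an outdoor leaf scene
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Rough foreground marker: strongly green, well-saturated pixels (assumption).
    fg = cv2.inRange(hsv, (25, 60, 40), (95, 255, 255))
    fg = cv2.erode(fg, np.ones((15, 15), np.uint8))

    # Rough background marker: pixels far from any candidate leaf region.
    bg = cv2.bitwise_not(cv2.dilate(fg, np.ones((51, 51), np.uint8)))

    markers = np.zeros(img.shape[:2], np.int32)
    markers[bg > 0] = 1                           # background label
    markers[fg > 0] = 2                           # leaf label

    cv2.watershed(img, markers)                   # flood from the two marker sets
    return (markers == 2).astype(np.uint8) * 255  # binary leaf mask

mask = segment_leaf("leaf.jpg")                   # hypothetical input image
cv2.imwrite("leaf_mask.png", mask)
```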

    Multi-level Trainable Segmentation for Measuring Gestational and Yolk Sacs from Ultrasound Images

    As a non-hazardous and non-invasive approach to medical diagnostic imaging, ultrasound serves as an ideal candidate for tracking and monitoring pregnancy development. One critical assessment during the first trimester of the pregnancy is the size measurement of the Gestation Sac (GS) and the Yolk Sac (YS) from ultrasound images. Such measurements tend to give a strong indication of the viability of the pregnancy. This paper proposes a novel multi-level trainable segmentation method to achieve three objectives in the following order: (1) segmenting and measuring the GS, (2) automatically identifying the stage of pregnancy, and (3) segmenting and measuring the YS. The first level segmentation employs a trainable segmentation technique based on the histogram of oriented gradients to segment the GS and estimate its size. This is then followed by an automatic identification of the pregnancy stage based on histogram analysis of the content of the segmented GS. The second level segmentation is used after that to detect the YS and extract its relevant size measurements. A trained neural network classifier is employed to perform the segmentation at both levels. The effectiveness of the proposed solution has been evaluated by comparing the automatic size measurements of the GS and YS against the ones obtained by a gynaecologist. Experimental results on 199 ultrasound images demonstrate the effectiveness of the proposal in producing accurate measurements as well as identifying the correct stage of pregnancy.
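    An illustrative sketch (not the authors' implementation) of the first-level pattern described above: a HOG descriptor per image patch fed to a small neural-network classifier. The patch size, network shape and the scikit-image/scikit-learn APIs are assumptions.

```python
# Patch-wise HOG features + neural network classifier, as a generic sketch.
import numpy as np
from skimage.feature import hog
from skimage.util import view_as_windows
from sklearn.neural_network import MLPClassifier

PATCH = 32  # assumed patch size in pixels

def patch_features(image):
    """HOG descriptor for every non-overlapping PATCH x PATCH block of a 2D image."""
    wins = view_as_windows(image, (PATCH, PATCH), step=PATCH)
    feats = [hog(w, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
             for row in wins for w in row]
    return np.array(feats)

# X_train: features from labelled patches, y_train: 1 = sac, 0 = background.
# Preparation of annotated training scans is omitted here.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# clf.fit(patch_features(train_image), y_train)
# patch_labels = clf.predict(patch_features(test_image))
```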

    Latest developments in 3D analysis of geomaterials by Morpho+

    At the Centre for X-ray Tomography of Ghent University (Belgium) (www.ugct.ugent.be), besides hardware development for high-resolution X-ray CT scanners, considerable progress is being made in the field of 3D analysis of the scanned samples. Morpho+ is a flexible 3D analysis software package which provides the necessary petrophysical parameters of the scanned samples in 3D. Although Morpho+ was originally designed to provide any kind of 3D parameter, it contains some features especially designed for the analysis of geomaterial properties such as porosity, partial porosity, pore-size distribution, grain size, grain orientation and surface determination. Additionally, the results of the 3D analysis can be visualized, which makes it possible to understand and interpret the analysis results in a straightforward way. The complementarity between high-quality X-ray CT images and flexible 3D software is opening up new gateways in the study of geomaterials.
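    A minimal sketch of one such measurement, assuming the reconstructed CT stack is available as a NumPy volume: total porosity and equivalent pore diameters from a thresholded, labelled pore space. This uses scipy rather than Morpho+ itself, and the threshold is an assumption.

```python
# Porosity and pore-size distribution from a segmented 3D CT volume (illustrative).
import numpy as np
from scipy import ndimage

def porosity_and_pore_sizes(volume, pore_threshold):
    pores = volume < pore_threshold              # assume low grey values are pore space
    porosity = pores.mean()                      # pore voxels / total voxels

    labels, n = ndimage.label(pores)             # face-connected pore regions
    voxels = np.bincount(labels.ravel())[1:]     # voxel count per pore (skip background)
    diameters = (6.0 * voxels / np.pi) ** (1.0 / 3.0)  # equivalent-sphere diameters, voxel units
    return porosity, diameters

# volume = tifffile.imread("scan.tif")           # hypothetical reconstructed stack
# phi, d = porosity_and_pore_sizes(volume, pore_threshold=90)
```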

    Analysis and evaluation of fragment size distributions in rock blasting at the Erdenet Mine

    Master's Project (M.S.), University of Alaska Fairbanks, 2015. Rock blasting is one of the most important operations in mining. It significantly affects the subsequent comminution processes and, therefore, is critical to successful mining production. In this study, for the evaluation of the blasting performance at the Erdenet Mine, we analyzed rock fragment size distributions with the digital image processing method. The uniformities of rock fragments and the mean fragment sizes were determined and applied in the Kuz-Ram model. Statistical prediction models were also developed based on the field measured parameters. The results were compared with the Kuz-Ram model predictions and the digital image processing measurements. A total of twenty-eight images from eleven blasting patterns were processed, and rock size distributions were determined by the Split-Desktop program in this study. Based on the rock mass and explosive properties and the blasting parameters, the rock fragment size distributions were also determined with the Kuz-Ram model and compared with the measurements by digital image processing. Furthermore, in order to improve the prediction of rock fragment size distributions at the mine, regression analyses were conducted and statistical models were developed for the estimation of the uniformity and characteristic size. The results indicated that there were discrepancies between the digital image measurements and those estimated by the Kuz-Ram model. The uniformity indices of the image processing measurements varied from 0.76 to 1.90, while those estimated by the Kuz-Ram model were from 1.07 to 1.13. The mean fragment size of the Kuz-Ram model prediction was 97.59% greater than the mean fragment size of the image processing. The multivariate nonlinear regression analyses conducted in this study indicated that rock uniaxial compressive strength and elastic modulus, explosive energy input in the blasting, bench height to burden ratio and blast area per hole were significant predictor variables in determining the fragment characteristic size and the uniformity index. The regression models developed based on the above predictor variables showed much closer agreement with the measurements.
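    A short sketch of the Rosin-Rammler form that fragment-size curves in the Kuz-Ram framework rely on, relating the characteristic size and uniformity index discussed above to the cumulative passing fraction. The numeric values in the example are illustrative only, chosen within the range of uniformity indices reported for the image-processing measurements.

```python
# Rosin-Rammler size distribution used in Kuz-Ram style fragmentation analysis.
import numpy as np

def rosin_rammler_passing(x, xc, n):
    """Cumulative fraction of fragments smaller than size x (same units as xc)."""
    return 1.0 - np.exp(-((np.asarray(x, dtype=float) / xc) ** n))

def x50_from_characteristic(xc, n):
    """50%-passing size implied by characteristic size xc and uniformity index n."""
    return xc * np.log(2.0) ** (1.0 / n)

sizes_cm = np.array([5, 10, 20, 40, 80])          # illustrative screen sizes
print(rosin_rammler_passing(sizes_cm, xc=30.0, n=1.2))
print(x50_from_characteristic(30.0, 1.2))
```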

    Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory

    We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.
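    The paper ships its own Python code; the following is a separate, much smaller illustration of the persistence computation that guides such a cancellation strategy, restricted to 0-dimensional persistence of local minima of a 2D array via union-find with the elder rule. It is not the authors' algorithm for cubical complexes.

```python
# 0-dimensional persistence of local minima (union-find sketch, not the paper's code).
import numpy as np

def minima_persistence(img):
    h, w = img.shape
    order = np.argsort(img, axis=None, kind="stable")  # pixel indices, low to high
    parent = -np.ones(h * w, dtype=int)                # -1 = pixel not yet processed
    birth = np.empty(h * w)
    pairs = []                                         # (persistence, birth value)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for idx in order:
        y, x = divmod(idx, w)
        v = img[y, x]
        roots = set()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and parent[ny * w + nx] != -1:
                roots.add(find(ny * w + nx))
        if not roots:                                  # a new local minimum is born
            parent[idx] = idx
            birth[idx] = v
        else:                                          # join existing components
            keep = min(roots, key=lambda r: birth[r])  # elder rule: oldest survives
            parent[idx] = keep
            for r in roots:
                if r != keep:                          # younger component dies here
                    pairs.append((v - birth[r], birth[r]))
                    parent[r] = keep
    return sorted(pairs, reverse=True)                 # most persistent minima first

print(minima_persistence(np.array([[3., 1., 3.], [2., 4., 0.], [3., 2., 3.]])))
```

    Pairs with small persistence correspond to critical cells that a persistence-informed strategy would cancel first, keeping only topologically significant features.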

    Spectral-spatial classification of n-dimensional images in real-time based on segmentation and mathematical morphology on GPUs

    The objective of this thesis is to develop efficient schemes for spectral-spatial n-dimensional image classification. By efficient schemes, we mean schemes that produce good classification results in terms of accuracy, as well as schemes that can be executed in real-time on low-cost computing infrastructures, such as the Graphics Processing Units (GPUs) shipped in personal computers. The n-dimensional images include images with two and three dimensions, such as images coming from the medical domain, and also images ranging from ten to hundreds of dimensions, such as the multi- and hyperspectral images acquired in remote sensing. In image analysis, classification is a regularly used method for information retrieval in areas such as medical diagnosis, surveillance, manufacturing and remote sensing, among others. In addition, as hyperspectral images have become widely available in recent years owing to the reduction in the size and cost of the sensors, the number of applications at lab scale, such as food quality control, art forgery detection, disease diagnosis and forensics, has also increased. Although there are many spectral-spatial classification schemes, most are computationally inefficient in terms of execution time. In addition, the need for efficient computation on low-cost computing infrastructures is increasing in line with the incorporation of technology into everyday applications. In this thesis we have proposed two spectral-spatial classification schemes: one based on segmentation and the other based on wavelets and mathematical morphology. These schemes were designed with the aim of producing good classification results and they perform better, in terms of accuracy, than other schemes found in the literature based on segmentation and mathematical morphology. Additionally, it was necessary to develop techniques and strategies for efficient GPU computing, for example, a block-asynchronous strategy, resulting in an efficient implementation on GPU of the aforementioned spectral-spatial classification schemes. The optimal GPU parameters were analyzed and different data partitioning and thread block arrangements were studied to exploit the GPU resources. The results show that the GPU is an adequate computing platform for on-board processing of hyperspectral information.
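    A hedged CPU sketch of the general spectral-spatial pattern mentioned above: a morphological profile built on the first principal component is stacked with the spectral bands, then each pixel is classified. It is not the thesis' GPU implementation; the radii, the SVM classifier and the scikit-image/scikit-learn APIs are assumptions.

```python
# Morphological-profile features + pixel-wise classification (illustrative, CPU only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from skimage.morphology import opening, closing, disk

def morphological_profile(cube, radii=(2, 4, 8)):
    """cube: (rows, cols, bands) hyperspectral image -> stacked feature cube."""
    rows, cols, bands = cube.shape
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, bands)).reshape(rows, cols)
    profile = [opening(pc1, disk(r)) for r in radii] + \
              [closing(pc1, disk(r)) for r in radii]
    return np.dstack([cube, pc1[..., None]] + [p[..., None] for p in profile])

# features = morphological_profile(hyperspectral_cube)        # hypothetical input cube
# clf = SVC().fit(features[train_mask], labels[train_mask])   # pixel-wise training
# predicted = clf.predict(features.reshape(-1, features.shape[-1]))
```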

    A Novel Gaussian Extrapolation Approach for 2D Gel Electrophoresis Saturated Protein Spots

    Analysis of images obtained from two-dimensional gel electrophoresis (2D-GE) is a topic of utmost importance in bioinformatics research, since currently available commercial and academic software has proven to be neither completely effective nor fully automatic, often requiring manual revision and refinement of computer generated matches. In this work, we present an effective technique for the detection and the reconstruction of over-saturated protein spots. Firstly, the algorithm reveals overexposed areas, where spots may be truncated, and plateau regions caused by smeared and overlapping spots. Next, it reconstructs the correct distribution of pixel values in these overexposed areas and plateau regions, using a two-dimensional least-squares fitting based on a generalized Gaussian distribution. Pixel correction in saturated and smeared spots allows more accurate quantification, providing more reliable image analysis results. The method is validated for processing highly exposed 2D-GE images, comparing reconstructed spots with the corresponding non-saturated image, demonstrating that the algorithm enables correct spot quantification.
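    A rough sketch of the reconstruction idea, with a plain 2D Gaussian standing in for the paper's generalized Gaussian: fit only the unsaturated pixels of a spot patch by least squares and fill the clipped plateau from the fitted surface. The saturation level and initial parameters are assumptions.

```python
# Least-squares reconstruction of a saturated spot with a 2D Gaussian (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

def reconstruct_spot(patch, saturation_level):
    y, x = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    ok = patch < saturation_level                 # pixels that are not clipped
    p0 = (patch.max(), patch.shape[1] / 2, patch.shape[0] / 2, 3.0, 3.0, patch.min())
    popt, _ = curve_fit(gauss2d, (x[ok], y[ok]), patch[ok], p0=p0, maxfev=5000)
    fitted = gauss2d((x, y), *popt).reshape(patch.shape)
    return np.where(ok, patch, fitted)            # replace only the saturated pixels
```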