20 research outputs found

    Fuzzy-Based Histogram Partitioning for Bi-Histogram Equalisation of Low Contrast Images

    Conventional histogram equalisation (CHE), though simple and widely used for contrast enhancement, fails to preserve the mean brightness and natural appearance of images. Most improved histogram equalisation (HE) methods perform better in terms of one or two metrics but sacrifice performance in terms of others. In this paper, a novel fuzzy-based bi-HE method is proposed which equalises low contrast images optimally in terms of all considered metrics. The novelty of the proposed method lies in the selection of a fuzzy threshold value using a level-snip technique, which is then used to partition the histogram into segments. The segmented sub-histograms, as in other bi-HE methods, are equalised independently and then combined. Simulation results show that, for a wide range of test images, the proposed method improves contrast while preserving other characteristics and provides a good trade-off among all the considered performance metrics. This work was supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant DF-374-135-1441.
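The fuzzy level-snip thresholding itself is not detailed in the abstract, so the sketch below assumes a plain threshold value standing in for it; it only illustrates the bi-HE core the abstract describes: partition the histogram at a threshold, equalise each sub-histogram within its own grey-level range, and recombine. All names are illustrative, not from the paper.

```python
def equalise_segment(hist, lo, hi):
    """Map grey levels in [lo, hi] back onto [lo, hi] via the segment's
    own cumulative histogram, as in classic histogram equalisation."""
    total = sum(hist[lo:hi + 1])
    if total == 0:                        # empty segment: identity mapping
        return {g: g for g in range(lo, hi + 1)}
    mapping, cum = {}, 0
    for g in range(lo, hi + 1):
        cum += hist[g]
        mapping[g] = lo + round((hi - lo) * cum / total)
    return mapping

def bi_histogram_equalise(pixels, threshold, levels=256):
    """Bi-HE: split the grey-level histogram at `threshold`, equalise the
    two sub-histograms independently, then recombine.  `threshold` stands
    in for the paper's fuzzy level-snip value."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    mapping = equalise_segment(hist, 0, threshold)
    mapping.update(equalise_segment(hist, threshold + 1, levels - 1))
    return [mapping[p] for p in pixels]
```

Because each segment is remapped only within its own range, pixels darker than the threshold stay dark and pixels brighter stay bright, which is how bi-HE methods preserve mean brightness.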

    Prominent region of interest contrast enhancement for knee MR images: data from the OAI

    Osteoarthritis is the most common form of arthritis, affecting 30.8 million adults in 2015. Magnetic resonance imaging (MRI) plays a key role in providing direct visualisation and quantitative measurement of knee cartilage to monitor osteoarthritis progression. However, the visual quality of MRI data can be degraded by poor background luminance, complex human knee anatomy, and indistinct tissue contrast. Typical histogram equalisation methods have proven unsuitable for biomedical images because their steep cumulative distribution function (CDF) mapping curves can cause severe washout and distortion of subject details. In this paper, the prominent region of interest contrast enhancement method (PROICE) is proposed to separate the original histogram of a 16-bit biomedical image into two Gaussians covering the dark-pixel region and the bright-pixel region respectively. After obtaining the mean of the brighter region, where our ROI (the knee cartilage) falls, this mean becomes a break point for constructing two Bezier transform curves separately. The Bezier curves are then combined to replace the typical CDF curve in equalising the original histogram. The enhanced image preserves knee features as well as region of interest (ROI) mean brightness. Image enhancement performance tests show that PROICE achieves the highest peak signal-to-noise ratio (PSNR = 24.747 ± 1.315 dB), the lowest absolute mean brightness error (AMBE = 0.020 ± 0.007) and a notably high structural similarity index (SSIM = 0.935 ± 0.019). In other words, PROICE considerably outperforms the other approaches in noise reduction, perceived image quality and precision, and shows great potential to visually assist physicians in their diagnosis and decision-making process.
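The abstract does not specify how PROICE places the Bezier control points, so the sketch below is only a generic illustration of the idea of replacing a CDF mapping curve with a quadratic Bezier transfer segment; the control-point choice (window midpoint raised by `lift`) is purely hypothetical.

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

def bezier_transfer(lo, hi, lift, samples=1024):
    """Grey-level mapping over [lo, hi] from one quadratic Bezier segment
    anchored at (lo, lo) and (hi, hi).  Unlike a steep CDF curve, the
    curve's shape is bounded by its control point, which limits washout.
    The control point used here is illustrative, not PROICE's."""
    mid = (lo + hi) / 2
    ctrl = (mid, mid + lift)
    table = {}
    for i in range(samples + 1):
        x, y = quadratic_bezier((lo, lo), ctrl, (hi, hi), i / samples)
        table.setdefault(round(x), min(hi, max(lo, round(y))))
    return table
```

Two such tables, one below the break point and one above it, could then be concatenated in place of a single CDF curve, mirroring the two-curve construction the abstract describes.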

    Improved stereo matching algorithm based on census transform and dynamic histogram cost computation

    Stereo matching is a significant subject in stereo vision algorithms. Traditional taxonomies suffer from several issues in the stereo correspondence process, such as radiometric distortion, discontinuity, and low accuracy in low-texture regions. This new taxonomy improves the local stereo matching algorithm through dynamic cost computation for disparity map measurement. The method utilises modified dynamic cost computation in the matching cost stage: a modified Census Transform with a dynamic histogram is used to build the cost volume. Adaptive bilateral filtering is applied to retain image depth and edge information in the cost aggregation stage. Winner Takes All (WTA) optimisation is applied in disparity selection, and a left-right check with adaptive bilateral median filtering is employed for final refinement. On the standard Middlebury dataset, the proposed method achieves better accuracy and outperforms several other state-of-the-art algorithms.

    Mathematical Morphology for Quantification in Biological & Medical Image Analysis

    Mathematical morphology is an established field of image processing first introduced as an application of set and lattice theories. Originally used to characterise particle distributions, mathematical morphology has gone on to become a core tool underpinning such important analysis methods as skeletonisation and the watershed transform. In this thesis, I introduce a selection of new image analysis techniques based on mathematical morphology. Utilising assumptions of shape, I propose a new approach for the enhancement of vessel-like objects in images: the bowler-hat transform. Built upon morphological operations, this approach handles challenges such as junctions well and is robust against noise. The bowler-hat transform is shown to give better results than competing methods on challenging data such as retinal/fundus imagery. Building further on morphological operations, I introduce two novel methods for particle and blob detection. The first is developed in the context of colocalisation, a standard biological assay, and the second, based on Hilbert-Edge Detection And Ranging (HEDAR), concerns nuclei detection and counting in fluorescence microscopy. These methods are shown to produce accurate and informative results for sub-pixel and supra-pixel object counting in complex and noisy biological scenarios. I also propose a new approach for the automated extraction and measurement of object thickness for intricate and complicated vessels, such as the brain vasculature in medical images. This pipeline depends on two key technologies: semi-automated segmentation by advanced level-set methods and automatic thickness calculation based on morphological operations. This approach is validated, with results demonstrating the broad range of challenges posed by these images and the possible limitations of the pipeline.
This thesis represents a significant contribution to the field of image processing using mathematical morphology, and the methods within are transferable to a range of complex challenges present across biomedical image analysis.
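The bowler-hat transform itself is not specified in the abstract. As background only, the sketch below shows the classic white top-hat on a 1D signal, representative of the kind of morphological operations (erosion, dilation, opening) such enhancement transforms are built from; names and the structuring-element size are illustrative.

```python
def erode(signal, size):
    """Greyscale erosion: minimum over a sliding flat structuring element
    of width 2*size + 1 (edges clamped to the signal)."""
    n = len(signal)
    return [min(signal[max(i - size, 0):min(i + size + 1, n)])
            for i in range(n)]

def dilate(signal, size):
    """Greyscale dilation: maximum over the same window."""
    n = len(signal)
    return [max(signal[max(i - size, 0):min(i + size + 1, n)])
            for i in range(n)]

def white_top_hat(signal, size):
    """Signal minus its morphological opening (erosion then dilation):
    keeps bright structures narrower than the structuring element and
    removes the broader background."""
    opened = dilate(erode(signal, size), size)
    return [s - o for s, o in zip(signal, opened)]
```

A narrow bright peak survives the top-hat while a plateau wider than the element is flattened to zero, which is the shape-selectivity that vessel-enhancement transforms exploit.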

    Foetal echocardiographic segmentation

    Congenital heart disease affects just under one per cent of all live births [1]. Those defects that manifest themselves as changes to the cardiac chamber volumes are the motivation for the research presented in this thesis. Blood volume measurements in vivo require delineation of the cardiac chambers, and manual tracing of foetal cardiac chambers is very time consuming and operator dependent. This thesis presents a multi-region level set snake deformable model, applied in both 2D and 3D, which can automatically adapt to some extent to ultrasound noise such as attenuation, speckle and partial occlusion artefacts. The algorithm presented is named Mumford Shah Sarti Collision Detection (MSSCD). The level set methods presented in this thesis have an optional shape prior term for constraining the segmentation by a template registered to the image in the presence of shadowing and heavy noise. When applied to real data in the absence of the template, the MSSCD algorithm is initialised from seed primitives placed at the centre of each cardiac chamber. The voxel statistics inside each chamber are determined before evolution. The MSSCD stops at open boundaries between two chambers as the two approaching level set fronts meet. This has significance when determining volumes for all cardiac compartments, since cardiac indices assume that each chamber is treated in isolation. Comparison of the segmentation results from the implemented snakes, including a previous level set method from the foetal cardiac literature, shows that in both 2D and 3D, on both real and synthetic data, the MSSCD formulation is better suited to these types of data. All the algorithms tested in this thesis are within 2 mm of manually traced segmentations of the foetal cardiac datasets. This corresponds to less than 10% of the length of a foetal heart.
In addition to comparison with manual tracings, all the amorphous deformable model segmentations in this thesis are validated using a physical phantom. The volume estimate of the phantom from the MSSCD segmentation is within 13% of the physically determined volume.
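The MSSCD formulation itself is not given in the abstract. As a minimal sketch of the Mumford-Shah/Chan-Vese-style region term that drives such level set fronts, the hypothetical function below computes, per pixel, the force that pushes the front to claim pixels resembling the interior mean; evolving one such front per seeded chamber and halting where two fronts meet gives collision-detection behaviour of the kind described above.

```python
def region_force(intensity, inside):
    """Chan-Vese-style region term from the Mumford-Shah family:
    (I - c_out)^2 - (I - c_in)^2 per pixel, where c_in / c_out are the
    mean intensities inside and outside the current front.  A positive
    value pushes the front outward to claim the pixel; a negative value
    pushes it back."""
    in_vals = [v for v, m in zip(intensity, inside) if m]
    out_vals = [v for v, m in zip(intensity, inside) if not m]
    c_in = sum(in_vals) / len(in_vals)
    c_out = sum(out_vals) / len(out_vals)
    return [(v - c_out) ** 2 - (v - c_in) ** 2 for v in intensity]
```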

    BioTwist - overcoming severe distortions in ridge-based biometrics for successful identification

    Biometrics rely on a physical trait's permanence and stability over time, as well as its individuality, robustness and ease of capture. Challenges arise when working with newborns or infants because of the tininess and fragility of an infant's features, their uncooperative nature and their rapid growth. The last of these is particularly relevant when one tries to verify an infant's identity against a capture of a biometric taken at an earlier age. Finding a physical trait that is feasible for infants is often referred to as the infant biometric problem. This thesis explores the quality aspect of adult fingermarks and the correlation between image quality and a mark's usefulness for an ongoing forensic investigation, and researches various aspects of the "ballprint" as an infant biometric. The ballprint, the friction ridge skin area of the foot pad under the big toe, exhibits properties similar to the fingerprint, but possesses larger physical structures and a greater number of features. We collected a longitudinal ballprint database from 54 infants within three days of birth, at two months, at six months and at two years. We observed that the skin of a newborn's foot dries and cracks, so the ridge lines are often not visible to the naked eye and an adult fingerprint scanner cannot capture them. This thesis presents the physiological discovery that the ballprint grows isotropically during infancy, and that this growth can be well approximated by a linear function of the infant's age. Fingerprint technology developed for adult fingerprints can match ballprints if they are adjusted by a physical feature (the inter-ridge spacing) to a size similar to adult fingerprints. The growth in ballprint inter-ridge spacing mirrors infant growth in length/height. When growth is compensated for by isotropic rescaling, impressive verification scores are achieved even for captures taken 22 months apart.
The scores improve even further when low-quality prints are rejected; removing the bottom third improves the Equal Error Rate from 1-2% to 0%. In conclusion, this thesis demonstrates that the ballprint is a feasible solution to the infant biometric problem.
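The growth compensation described above can be sketched as a simple isotropic rescaling keyed on inter-ridge spacing. The functions and the spacing figures in the example are illustrative, not values from the thesis.

```python
def growth_scale_factor(infant_spacing, reference_spacing):
    """Isotropic rescaling factor that brings an infant ballprint to a
    reference ridge density (e.g. adult fingerprint spacing), so that
    standard fingerprint matchers can be reused.  Spacings are mean
    inter-ridge distances in pixels."""
    return reference_spacing / infant_spacing

def rescale_coords(minutiae, factor):
    """Apply the isotropic factor to minutia (x, y) coordinates,
    compensating for growth between captures taken months apart."""
    return [(x * factor, y * factor) for x, y in minutiae]
```

Because the growth is isotropic, a single scalar factor suffices; no directional warping of the print is needed before matching.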

    Image processing by region extraction using a clustering approach based on color

    This thesis describes an image segmentation technique based on watersheds, a clustering technique which does not use spatial information but relies on multispectral images. These are captured using a monochrome camera and narrow-band filters; we call this color segmentation, although it does not use color in a physiological sense. A major part of the work is testing the method developed using different color images. Starting with a general discussion of image processing, the different techniques used in image segmentation are reviewed, and the application of mathematical morphology to image processing is discussed. The use of watersheds as a clustering technique in two-dimensional color space is discussed, and system performance is illustrated. The method can be improved for industrial applications by using normalized color to eliminate the problem of shadows. These methods are extended to segment the image into regions recursively. Different types of color images, including both man-made and natural color images, have been used to illustrate performance. There is a brief discussion and a simple illustration showing how segmentation can be used in image compression, and of the application of pyramidal data structures in clustering for coarse segmentation. The thesis concludes with an investigation of methods which can be used to improve these segmentation results, including edge extraction, texture extraction, and recursive merging.
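The normalised-color step mentioned above is the standard chromaticity transform; a minimal sketch (names illustrative):

```python
def normalised_rgb(r, g, b):
    """Chromaticity coordinates: each channel divided by total intensity.
    A shadowed patch is (roughly) a scaled-down copy of the lit patch, so
    both map to the same normalised colour, which is why this step
    suppresses shadows before clustering in colour space."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0, 0.0
    return r / s, g / s, b / s
```

Since the three coordinates sum to one, only two are independent, which is consistent with clustering in a two-dimensional colour space as described above.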

    Automated Analysis of X-ray Images for Cargo Security

    Customs and border officers are overwhelmed by the hundreds of millions of cargo containers that constitute the backbone of the global supply chain, any one of which could contain a security- or customs-related threat. Searching for these threats is akin to searching for needles in an ever-growing field of haystacks. This thesis considers novel automated image analysis methods to automate or assist elements of cargo inspection. The four main contributions of this thesis are as follows. First, methods are proposed for the measurement and correction of detector wobble in large-scale transmission radiography using Beam Position Detectors (BPDs). Wobble is estimated from BPD measurements using a Random Regression Forest (RRF) model, Bayesian-fused with a prior estimate from an auto-regression (AR). A series of image corrections is then derived, and it is shown that 87% of image error due to wobble can be corrected. This is the first proposed method for correction of wobble in large-scale transmission radiography. Second, a Threat Image Projection (TIP) framework is proposed for training, probing and evaluating Automated Threat Detection (ATD) algorithms. The TIP method is validated experimentally, and a method is proposed to test whether algorithms can learn to exploit TIP artefacts. Third, a system for Empty Container Verification (ECV) is proposed. The system, trained using TIP, is based on Random Forest (RF) classification of image patches according to fixed geometric features and container location. The method outperforms previously reported results and is able to detect very small amounts of synthetically concealed smuggled contraband. Finally, a method for ATD is proposed, based on a deep Convolutional Neural Network (CNN) trained from scratch using TIP, which exploits the material information encoded within dual-energy X-ray images to suppress false alarms. The system offers a 100-fold improvement in the false positive rate over prior work.
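The abstract does not give the thesis's exact fusion model, so the sketch below shows only the textbook Bayesian fusion of two Gaussian estimates (inverse-variance weighting), which is the general pattern behind fusing an RRF measurement with an AR prior; names are illustrative.

```python
def fuse(prior_mean, prior_var, meas_mean, meas_var):
    """Bayesian fusion of two Gaussian estimates of the same quantity
    (here: a wobble offset from the AR prior and from the RRF
    measurement).  Inverse-variance weighting gives the posterior mean,
    and the posterior variance is smaller than either input's, so each
    fusion step can only sharpen the estimate."""
    k = prior_var / (prior_var + meas_var)     # weight on the measurement
    mean = prior_mean + k * (meas_mean - prior_mean)
    var = (1.0 - k) * prior_var
    return mean, var
```

With equal variances the fused mean is simply the average of the two estimates, and the variance halves.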

    Archaeological geophysical prospection in peatland environments.

    Waterlogged sites in peat often preserve organic material, both in the form of artefacts and palaeoenvironmental evidence, as a result of the prevailing anaerobic environment. After three decades of excavation and large-scale study projects in the UK, the subdiscipline of wetland archaeology is rethinking theoretical approaches to these environments. Wetland sites are generally discovered while they are being damaged or destroyed by human activity. The survival in situ of these important sites is also threatened by drainage, agriculture, erosion and climate change as the deposits cease to be anaerobic. Sites are lost without ever being discovered as the nature of the substrate changes. A prospection tool is badly needed for these wetland areas, as conventional prospection methods such as aerial photography, field walking and remote sensing are not able to detect sites under the protective overburden. This thesis presents research undertaken between 2007 and 2010 at Bournemouth University. It aimed to examine the potential of conventional geophysical survey methods (resistivity, gradiometry, ground penetrating radar and frequency-domain electromagnetic) as site prospection and landscape investigation tools in peatland environments. It examines previous attempts to prospect peatland sites, both in archaeology and environmental science. These attempts show that, under the right circumstances, archaeological and landscape features can be detected by these methods, but that the reasons why the techniques often fail are not well understood. Eight case-study sites were surveyed using a combination of conventional techniques. At three of the sites, ground-truthing work in the form of excavations, bulk sampling and coring was undertaken to validate the survey interpretations. This was followed up by laboratory analysis of the physical and chemical properties of the peat and mineral soils encountered.
The key conclusion of the case-study work is that conventional geophysical prospection tools are capable of detecting archaeological features in peatland environments, but that the nature of the deposits encountered creates challenges in interpretation. Too few previous surveys have been adequately ground-truthed to allow inferences and cross-comparisons. The upland case studies demonstrated that geophysical survey on shallow types of upland peat using conventional techniques yields useful information about prehistoric landscapes. The situation in the lowlands is more complex. In shallow peat without minerogenic layers, timber detection is possible. There are indications that in saturated peat the chemistry of the peat and pore water causes responses in the geophysical surveys, which could be developed as a proxy means to detect or monitor archaeological remains. On sites where the sediments are more complex or affected by desiccation, timbers were not detected with the methods attempted; however, important landscape features were, and there are indications that geophysical surveys could be used as part of management and conservation strategies. This thesis concludes that geophysical prospection contributes to theoretically informed wetland archaeology as a tool for site detection, landscape interpretation, and conservation. Future research should aim to further our understanding of the relationship between geophysical response and peatland geochemistry, alongside a more extensive programme of surveys and ground-truthing work to improve survey methodologies and archaeological interpretations.