1,330 research outputs found

    Geometric Morphology of Granular Materials

We present a new method to transform the spectral pixel information of a micrograph into an affine geometric description, which allows us to analyze the morphology of granular materials. We use spectral and pulse-coupled neural network based segmentation techniques to generate blobs, and a newly developed algorithm to extract dilated contours. A constrained Delaunay tessellation of the contour points results in a triangular mesh. This mesh is the basic ingredient of the Chordal Axis Transform, which provides a morphological decomposition of shapes. Such a decomposition allows for grain separation and the efficient computation of the statistical features of granular materials.

Comment: 6 pages, 9 figures. For more information visit http://www.nis.lanl.gov/~bschlei/labvis/index.htm
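As a minimal sketch of the chordal-axis idea described above: triangulate the contour points and join the midpoints of interior triangle edges. This is a simplification under stated assumptions, not the paper's implementation: scipy's Delaunay routine is unconstrained (unlike the constrained tessellation the authors use), so triangles outside the shape are filtered with a point-in-polygon test, and terminal triangles are ignored.

```python
import numpy as np
from scipy.spatial import Delaunay
from matplotlib.path import Path

def chordal_axis(contour_pts):
    """Approximate chordal-axis segments for an ordered (N, 2) contour."""
    n = len(contour_pts)
    tri = Delaunay(contour_pts)          # unconstrained; an approximation
    poly = Path(contour_pts)
    # Keep only triangles whose centroid lies inside the shape.
    centroids = contour_pts[tri.simplices].mean(axis=1)
    keep = tri.simplices[poly.contains_points(centroids)]
    segments = []
    for a, b, c in keep:
        # An edge is internal unless its endpoints are contour neighbours.
        mids = [(contour_pts[i] + contour_pts[j]) / 2
                for i, j in [(a, b), (b, c), (c, a)]
                if abs(i - j) not in (1, n - 1)]
        # Chord the triangle by joining midpoints of its internal edges.
        segments += [(mids[k], mids[k + 1]) for k in range(len(mids) - 1)]
    return segments
```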

The investigation of the characterisation of flotation froths and design of a machine vision system for monitoring the operation of a flotation cell in ore concentration

Electrical and Electronic Engineering

This dissertation investigates the application of digital image processing techniques in the development of a machine vision system that is capable of characterising the froth structures prevalent on the surface of industrial flotation cells. At present, there is no instrument available that can measure the size and shape of the bubbles that constitute the surface froth. For this reason, research into a vision-based system for surface froth characterisation has been undertaken. Being able to measure bubble size and shape would have far-reaching consequences, not only in enhancing the understanding of the flotation process but also in the control and optimisation of flotation cells.
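As an illustration only (the abstract does not detail the dissertation's own segmentation method), one standard way to measure bubble size and shape from a froth image is watershed segmentation seeded from the bright crest of each bubble. The scikit-image calls, file name, and all parameter values below are assumptions.

```python
import numpy as np
from skimage import io, filters, feature, segmentation, measure

img = io.imread("froth.png", as_gray=True)        # hypothetical input image
smooth = filters.gaussian(img, sigma=2)
# Bright bubble crests act as markers; watershed floods outward until
# regions meet at the dark lamellae between bubbles.
coords = feature.peak_local_max(smooth, min_distance=15)
markers = np.zeros(img.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = segmentation.watershed(-smooth, markers)
for region in measure.regionprops(labels):
    # Area as a size measure; eccentricity (0 = circle) as a shape measure.
    print(region.area, region.eccentricity)
```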

    Applying physical science techniques and CERN technology to an unsolved problem in radiation treatment for cancer: the multidisciplinary ‘VoxTox’ research programme

The VoxTox research programme has applied expertise from the physical sciences to the problem of radiotherapy toxicity, bringing together expertise from engineering, mathematics, high energy physics (including the Large Hadron Collider), medical physics and radiation oncology. In our initial cohort of 109 men treated with curative radiotherapy for prostate cancer, daily image guidance computed tomography (CT) scans have been used to calculate delivered dose to the rectum, as distinct from planned dose, using an automated approach. Clinical toxicity data have been collected, allowing us to address the hypothesis that delivered dose provides a better predictor of toxicity than planned dose.

JES was supported by Cancer Research UK through the Cambridge Cancer Centre. NGB, ASP and MG are supported by the National Institute of Health Research Cambridge Biomedical Research Centre. KH, MR, AMB, EW and SJB were supported by the VoxTox Research Programme, funded by Cancer Research UK. DJN is supported by Addenbrooke’s Charitable Trust and Cancer Research UK through the Cambridge Cancer Centre. FMB was supported by the Science and Technology Facilities Council. MPDS and RJ were part supported by the VoxTox Research Programme, funded by Cancer Research UK. LS is supported by the Armstrong Trust. XC was supported by the Isaac Newton Trust. CBS acknowledges support from the EPSRC Centre for Mathematical and Statistical Analysis of Multimodal Clinical Imaging, the Leverhulme Trust, the EU-RISE project CHiPS and the Cantab Capital Institute for the Mathematics of Information. NT was supported by a Gates-Cambridge Scholarship, funded by the Bill and Melinda Gates Foundation, and PLY and SYKS by the Singapore Government.
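A minimal sketch of the comparison the programme describes: accumulate per-fraction dose to a structure from the daily image-guidance CTs, then contrast the accumulated total with the planned dose. The array names, file names, and fraction count are placeholders for illustration, not the VoxTox pipeline itself.

```python
import numpy as np

n_fractions = 37                                     # assumed schedule length
planned_total = np.load("planned_dose_rectum.npy")   # hypothetical dose grid (Gy)

delivered_total = np.zeros_like(planned_total)
for f in range(n_fractions):
    # In VoxTox this step recomputes dose on the daily guidance CT;
    # here it is stubbed out as a precomputed per-fraction grid.
    delivered_total += np.load(f"delivered_dose_fx{f:02d}.npy")

# Voxelwise discrepancy between delivered and planned dose to the rectum.
diff = delivered_total - planned_total
print("mean dose difference (Gy):", diff.mean())
print("max overdose (Gy):", diff.max())
```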

    An investigation of a pattern recognition system to analyse and classify dried fruit

Includes bibliographical references.

Both the declining cost and increasing capabilities of specialised computer hardware for image processing have enabled computer vision systems to become a viable alternative to human visual inspection in industrial applications. In this thesis a vision system that will analyse and classify dried fruit is investigated. In human visual inspection of dried fruit, the colour of the fruit is often the main determinant of its grade; in specific cases the presence of blemishes and geometrical faults is also incorporated in order to determine the fruit grade. A colour model that would successfully represent the colour variations within dried fruit grades was investigated. The selected colour feature space formed the basis of a classification system which automatically allocated a sample unit of dried fruit to one specific grade. Various classification methods were investigated, and that which best suited the system data and parameters was selected and evaluated using test sets of three types of dried fruit. In order to successfully grade dried fruit, a number of additional problems had to be catered for: the red/brown coloured central core area of dried peaches had to be removed from the colour analysis, and black blemishes on dried pears had to be isolated and sized in order to supplement the colour classifier in the final classification of the pear. The core area of a dried peach was isolated using the morphological top-hat transform, and black blemishes on pears were isolated using colour histogram thresholding techniques. The test results indicated that although colour classification was the major determinant in the grading of dried fruit, other characteristics of the fruit had to be incorporated to achieve successful final classification results. These characteristics may differ between types of dried fruit, but in the case of dried apricots, dried peaches and dried pears they include peach core-area removal, fruit geometry validation, and dried-pear blemish isolation and sizing.
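The abstract names the morphological top-hat transform for isolating the peach core; a minimal sketch of that operation follows. The structuring-element radius, threshold, and file name are illustrative assumptions, and the black top-hat variant is assumed because the red/brown core is darker than the surrounding flesh.

```python
from skimage import io, morphology

img = io.imread("peach.png", as_gray=True)       # hypothetical image
# Black top-hat = closing(img) - img: it responds to dark regions smaller
# than the structuring element, here the core against the lighter flesh.
core_response = morphology.black_tophat(img, morphology.disk(25))
core_mask = core_response > 0.2                  # assumed threshold
# Pixels in core_mask would then be excluded from the colour analysis.
```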

    Quantitative Optical Studies of Oxidative Stress in Rodent Models of Eye and Lung Injuries

Optical imaging techniques have emerged as essential tools for reliable assessment of organ structure, biochemistry, and metabolic function. The recognition of metabolic markers for disease diagnosis has rekindled significant interest in the development of optical methods to measure the metabolism of the organ. The objective of my research was to employ optical imaging tools and to implement signal and image processing techniques capable of quantifying cellular metabolism for the diagnosis of diseases in human organs such as the eyes and lungs. To accomplish this goal, three different tools, a cryoimager, a fluorescence microscope, and an optical coherence tomography system, were utilized to study physiological metabolic markers and early structural changes due to injury in vitro, ex vivo, and at cryogenic temperatures. Cryogenic studies of eye injuries in animal models were performed using a fluorescence cryoimager to monitor two endogenous mitochondrial fluorophores, NADH (nicotinamide adenine dinucleotide) and FAD (flavin adenine dinucleotide). The mitochondrial redox ratio (NADH/FAD), which is correlated with the level of oxidative stress, is an optical biomarker. The spatial distribution of the mitochondrial redox ratio in injured eyes at different durations of the disease was delineated. This spatiotemporal information was helpful in investigating the heterogeneity of ocular oxidative stress during disease and its association with retinopathy. To study the metabolism of the eye tissue, the retinal layer was targeted, which required high resolution imaging of the eye as well as the development of a segmentation algorithm to quantitatively monitor and measure the metabolic redox state of the retina. To achieve a high signal-to-noise ratio in fluorescence image acquisition, the imaging was performed at cryogenic temperatures, which increased the quantum yield of the intrinsic fluorophores. Microscopy studies of cells were accomplished using an inverted fluorescence microscope. Fixed slides of retina tissue as well as exogenous fluorophores in live lung cells were imaged using fluorescence and time-lapse microscopy. Image processing techniques were developed to quantify subtle changes in the morphological parameters of the retinal vasculature network for early detection of injury. This image cytometry tool was capable of segmenting vascular cells, calculating vasculature features (area, caliber, branch points, fractal dimension, and acellular capillaries), and classifying healthy and injured retinas. Using time-lapse microscopy, the dynamics of cellular ROS (reactive oxygen species) concentration were quantified and modeled in ROS-mediated lung injuries. A new methodology and an experimental protocol were designed to quantify changes of oxidative stress under different stress conditions and to localize the site of ROS in an uncoupled state of pulmonary artery endothelial cells (PAECs). Ex vivo studies of the lung were conducted using a spectral-domain optical coherence tomography (SD-OCT) system, and 3D scanned images of the lung were acquired. An image segmentation algorithm was developed to study the dynamics of structural changes in the lung alveoli in real time. Quantifying the structural dynamics provided information to diagnose pulmonary diseases and to evaluate the severity of lung injury. The implemented software was able to quantify and present changes in alveolar compliance in lung injury models, including edema.
In conclusion, optical instrumentation, combined with signal and image processing techniques, provides quantitative physiological and structural information reflecting disease progression due to oxidative stress. This tool offers a unique capability to identify early points of intervention, which play a vital role in the early detection of eye and lung injuries. The future goal of this research is to translate optical imaging to clinical settings, and to transfer the instruments developed for animal models to the bedside for patient diagnosis.
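The redox ratio used throughout the work above is defined as NADH/FAD; a minimal pixelwise sketch of that computation is below. The channel file names and the epsilon guard are assumptions for illustration.

```python
import numpy as np

nadh = np.load("nadh_channel.npy").astype(float)   # hypothetical channel images
fad = np.load("fad_channel.npy").astype(float)

eps = 1e-6                        # avoid division by zero in dark pixels
redox_ratio = nadh / (fad + eps)  # higher ratio ~ more reduced metabolic state
print("mean redox ratio:", redox_ratio.mean())
```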

    Shapes from Pixels

In today's digital world, sampling is at the heart of any signal acquisition device. Imaging devices are ubiquitous examples that capture two-dimensional visual signals and store them as the pixels of discrete images. The main concern is whether and how the pixels provide an exact, or at least a fair, representation of the original visual signal in the continuous domain. This motivates the design of exact reconstruction or approximation techniques for a target class of images. Such techniques benefit different imaging tasks such as super-resolution, deblurring and compression. This thesis focuses on the reconstruction of visual signals representing a shape over a background, from their samples. Shape images have only two intensity values. However, the filtering effect caused by the sampling kernel of imaging devices smooths out the sharp transitions in the image and results in samples with varied intensity levels. To trace back the shape boundaries, we need strategies to reconstruct the original bilevel image. But abrupt intensity changes along the shape boundaries, as well as diverse shape geometries, make reconstruction of this class of signals very challenging. Curvelets and contourlets have been shown to be efficient multiresolution representations for the class of shape images, which motivates the approximation of shape images in these domains. In the first part of this thesis, we study generalized sampling and infinite-dimensional compressed sensing to approximate a signal in a domain that is known to provide a sparse or efficient representation for the signal, given its samples in a different domain. We show that generalized sampling, due to its linearity, is incapable of generating good approximations of shape images from a limited number of samples. Infinite-dimensional compressed sensing is a more promising approach; however, the concept of random sampling in this scheme does not apply to the shape reconstruction problem. Next, we propose a sampling scheme for shape images with finite rate of innovation (FRI). More specifically, we model the shape boundaries as a subset of an algebraic curve defined by an implicit bivariate polynomial. We show that the image parameters are solutions of a set of linear equations whose coefficients are the image moments. We then replace conventional moments with more stable generalized moments that are adjusted to the given sampling kernel. This leads to successful reconstruction of shapes with moderate complexity from samples generated with realistic sampling kernels and in the presence of moderate noise levels. Our next contribution is a scheme for recovering shapes with smooth boundaries from a set of samples. The reconstructed image is constrained to regenerate the same samples (consistency) as well as to form a bilevel image. We initially formulate the problem as minimizing the shape perimeter over the set of consistent shapes. Next, we relax the non-convex shape constraint to transform the problem into minimizing the total variation over consistent non-negative-valued images. We introduce a requirement, called reducibility, that guarantees equivalence between the two problems, and we illustrate that reducibility effectively sets a requirement on the minimum sampling density. Finally, we study a related problem in Boolean algebra: Boolean compressed sensing, the recovery of a sparse Boolean vector from a few collective binary tests. We study a formulation of this problem as a binary linear program, which is NP-hard. To overcome the computational burden, we can relax the binary constraint on the variables and apply rounding to the solution. We replace the rounding procedure with a randomized algorithm and show that the proposed algorithm considerably improves the success rate with only a slight increase in computational cost.
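A hedged sketch of the relax-and-round idea just described: the binary program is relaxed to a linear program over [0, 1], and the fractional solution is rounded randomly, keeping the sparsest candidate consistent with all tests. The OR-style test model (a test is positive iff it contains at least one active entry), the scipy solver, and the trial count are illustrative assumptions, not the thesis's exact algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def boolean_cs(A, y, trials=100, rng=np.random.default_rng(0)):
    """A: (m, n) 0/1 test matrix; y: (m,) 0/1 outcomes under an OR model."""
    m, n = A.shape
    pos, neg = A[y == 1], A[y == 0]
    # Relaxation: minimise sum(x) with 0 <= x <= 1, every positive test
    # covered (row sum >= 1) and every negative test empty (row sum == 0).
    res = linprog(c=np.ones(n),
                  A_ub=-pos if len(pos) else None,
                  b_ub=-np.ones(len(pos)) if len(pos) else None,
                  A_eq=neg if len(neg) else None,
                  b_eq=np.zeros(len(neg)) if len(neg) else None,
                  bounds=[(0, 1)] * n)
    assert res.status == 0, "LP infeasible"
    x_frac = res.x
    # Randomised rounding: sample 0/1 vectors with P[x_i = 1] = x_frac[i]
    # and keep the sparsest one that reproduces all test outcomes.
    best = None
    for _ in range(trials):
        x = (rng.random(n) < x_frac).astype(int)
        if np.array_equal((A @ x > 0).astype(int), y):
            if best is None or x.sum() < best.sum():
                best = x
    return best
```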

    Combining Image Processing with Signal Processing to Improve Transmitter Geolocation Estimation

This research develops an algorithm which combines image processing with signal processing to improve transmitter geolocation capability. A building extraction algorithm is compiled from current techniques in order to provide the locations of rectangular buildings within an aerial, orthorectified RGB image to a geolocation algorithm. The geolocation algorithm relies on measured TDOA data from multiple ground sensors to locate a transmitter by searching a grid of possible transmitter locations within the image region. At each evaluated grid point, theoretical TDOA values are computed for comparison to the measured TDOA values. To compute the theoretical values, the shortest path length between the transmitter and each of the sensors is determined. The building locations are used to determine whether the LOS path between these two points is obstructed and, if so, what the shortest reflected path length would be. The grid location producing theoretical TDOA values closest to the measured TDOA values is the result of the algorithm. Measured TDOA data is simulated in this thesis. The performance of the thesis method is compared to that of a current geolocation method that uses Taylor series expansion to solve for the intersection of hyperbolic curves created by the TDOA data. The average online runtime of the thesis simulations ranges from around 20 seconds to around 2 minutes, while the Taylor series method takes only about 0.02 seconds. The thesis method also includes an offline runtime of up to 30 minutes for a given image region and sensor configuration. The thesis method improves transmitter geolocation error by an average of 44 m, or 53%, in the obstructed simulation cases when compared with the current Taylor series method. However, in cases where all sensors have a direct LOS, the current method performs more accurately. Therefore, the thesis method is most applicable to missions requiring tracking of slower-moving targets in an urban environment with stationary sensors.
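To make the grid-search step concrete, here is a minimal free-space sketch: at each candidate point the theoretical TDOAs (relative to a reference sensor) are compared with the measured ones, and the point with the smallest squared error wins. The straight-line path assumption is a simplification; the thesis additionally tests for building obstructions and substitutes reflected path lengths, which this sketch omits.

```python
import numpy as np

C = 3e8                                    # propagation speed (m/s)

def locate(sensors, tdoa_meas, grid_pts):
    """sensors: (S, 2) positions; tdoa_meas: (S-1,) TDOAs relative to
    sensor 0; grid_pts: (G, 2) candidate transmitter locations."""
    best, best_err = None, np.inf
    for p in grid_pts:
        d = np.linalg.norm(sensors - p, axis=1)    # path length to each sensor
        tdoa_theo = (d[1:] - d[0]) / C             # TDOA vs reference sensor
        err = np.sum((tdoa_theo - tdoa_meas) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best
```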

    Object-based video representations: shape compression and object segmentation

Object-based video representations are considered useful for easing the process of multimedia content production and enhancing user interactivity in multimedia productions. Object-based video presents several new technical challenges, however. Firstly, as with conventional video representations, compression of the video data is a requirement. For object-based representations, it is necessary to compress the shape of each video object as it moves in time, which amounts to the compression of moving binary images. This is achieved by the use of a technique called context-based arithmetic encoding. The technique is applied to rectangular pixel blocks and as such is consistent with the standard tools of video compression. The block-based application also facilitates the exploitation of temporal redundancy in the sequence of binary shapes. For the first time, context-based arithmetic encoding is used in conjunction with motion compensation to provide inter-frame compression. The method, described in this thesis, has been thoroughly tested throughout the MPEG-4 core experiment process and, owing to favourable results, has been adopted as part of the MPEG-4 video standard. The second challenge lies in the acquisition of the video objects. Under normal conditions, a video sequence is captured as a sequence of frames and there is no inherent information about what objects are in the sequence, let alone information relating to the shape of each object. Some means of segmenting semantic objects from general video sequences is required. For this purpose, several image analysis tools may be of help and, in particular, it is believed that video object tracking algorithms will be important. A new tracking algorithm is developed based on piecewise polynomial motion representations and statistical estimation tools, e.g. the expectation-maximisation method and the minimum description length principle.
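To illustrate the core of context-based arithmetic encoding of binary shapes, the sketch below estimates P(pixel = 1) conditioned on a causal neighbourhood; such conditional probabilities are what drive the arithmetic coder. The 3-pixel template and the count-based model are deliberate simplifications (MPEG-4 shape coding uses a larger template and an adaptive binary arithmetic coder), so treat this as a toy model of the idea, not the standardised method.

```python
import numpy as np
from collections import defaultdict

def context_probabilities(bitmap):
    """bitmap: 2-D 0/1 array. Returns P(pixel = 1 | causal context)."""
    counts = defaultdict(lambda: [1, 1])          # Laplace-smoothed [n0, n1]
    padded = np.pad(bitmap, 1)                    # zero border for edge pixels
    h, w = bitmap.shape
    for r in range(h):
        for c in range(w):
            # Causal context: left, upper-left, and upper neighbours
            # (pixel (r, c) sits at padded[r + 1, c + 1]).
            ctx = (padded[r + 1, c], padded[r, c], padded[r, c + 1])
            counts[ctx][bitmap[r, c]] += 1
    return {ctx: n1 / (n0 + n1) for ctx, (n0, n1) in counts.items()}
```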