
    Artificial neural network-statistical approach for PET volume analysis and classification

    The increasing number of imaging studies and the prevailing application of positron emission tomography (PET) in clinical oncology have led to a real need for efficient PET volume handling and for new volume analysis approaches to aid clinicians in diagnosis, treatment planning, and assessment of response to therapy. A novel automated system for oncological PET volume analysis is proposed in this work. The proposed intelligent system deploys two types of artificial neural network (ANN) for classifying PET volumes: the first is a competitive neural network (CNN), whereas the second is based on a learning vector quantisation neural network (LVQNN). Furthermore, the Bayesian information criterion (BIC) is used to assess the optimal number of classes for each PET data set and to assist the ANN blocks in achieving accurate analysis by providing the best number of classes. The system was evaluated using experimental phantom studies (NEMA IEC image quality body phantom), simulated PET studies using the Zubal phantom, and clinical studies representative of non-small cell lung cancer and pharyngolaryngeal squamous cell carcinoma. The proposed analysis methodology of clinical oncological PET data has shown promising results and can successfully classify and quantify malignant lesions. This study was supported by the Swiss National Science Foundation under Grant SNSF 31003A-125246, the Geneva Cancer League, and the Indo Swiss Joint Research Programme ISJRP 138866.
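
    The abstract above pairs BIC-based selection of the number of classes with a competitive/LVQ classification stage. The sketch below illustrates that general idea only, assuming scikit-learn is available: a Gaussian mixture is fitted for each candidate class count to score BIC, and k-means stands in for the competitive/LVQ stage. Function and variable names (classify_pet_volume, pet_volume, k_max) are illustrative, not the authors' implementation.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.cluster import KMeans

        def classify_pet_volume(pet_volume, k_max=8, seed=0):
            """Flatten a 3D PET volume, choose the class count by BIC, return voxel labels."""
            x = pet_volume.reshape(-1, 1).astype(float)
            bics = []
            for k in range(2, k_max + 1):
                gmm = GaussianMixture(n_components=k, random_state=seed).fit(x)
                bics.append((gmm.bic(x), k))
            best_k = min(bics)[1]  # class count with the smallest BIC
            # The competitive / LVQ-style stage is approximated here by k-means,
            # which likewise learns one prototype (codebook vector) per class.
            labels = KMeans(n_clusters=best_k, n_init=10, random_state=seed).fit_predict(x)
            return labels.reshape(pet_volume.shape), best_k

        # Synthetic 16x16x16 volume with one "hot" region standing in for a lesion
        vol = np.random.normal(1.0, 0.1, (16, 16, 16))
        vol[4:8, 4:8, 4:8] += 3.0
        labels, best_k = classify_pet_volume(vol)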

    A survey of outlier detection methodologies

    Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences, and can identify erroneous observations and remove their contaminating effect, purifying the data set for further processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of Computer Science and Statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
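
    As a small, purely illustrative example of one "principled" statistical approach of the kind such surveys cover (not a method proposed in the paper), the sketch below flags outliers using the median absolute deviation; the threshold and scaling constant are conventional choices.

        import numpy as np

        def mad_outliers(x, threshold=3.5):
            """Return a boolean mask of points whose modified z-score exceeds the threshold."""
            x = np.asarray(x, dtype=float)
            med = np.median(x)
            mad = np.median(np.abs(x - med))
            if mad == 0:
                return np.zeros_like(x, dtype=bool)
            modified_z = 0.6745 * (x - med) / mad  # 0.6745 relates MAD to a normal std-dev
            return np.abs(modified_z) > threshold

        data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]  # last point is anomalous
        print(mad_outliers(data))                  # [False False False False False  True]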

    Quality Measurements on Quantised Meshes

    In computer graphics, the triangle mesh has emerged as the ubiquitous shape representation for 3D modelling and visualisation applications. Triangle meshes often undergo compression by specialised algorithms for the purposes of storage and transmission. During the compression process, the coordinates of the mesh vertices are quantised using fixed-point arithmetic, which can potentially alter the visual quality of the 3D model. Indeed, if the number of bits per vertex coordinate is too low, the mesh will be deemed by the user as visually too coarse, as quantisation artifacts become perceptible. Therefore, there is a need for quality metrics that can predict the visual appearance of a triangle mesh at a given level of vertex coordinate quantisation.
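
    A minimal sketch of the quantisation step described above: vertex coordinates are mapped to n-bit fixed-point values inside the mesh's bounding box, and a simple RMS geometric error is reported. A perceptual quality metric of the kind the abstract calls for would go well beyond RMS; this only illustrates the mechanism, and the function name and bit depths are illustrative.

        import numpy as np

        def quantise_vertices(vertices, bits=10):
            """Quantise an (N, 3) array of vertex coordinates to `bits` per coordinate."""
            v = np.asarray(vertices, dtype=float)
            lo, hi = v.min(axis=0), v.max(axis=0)
            levels = (1 << bits) - 1
            q = np.round((v - lo) / (hi - lo) * levels)  # integer grid indices
            return q / levels * (hi - lo) + lo           # back to model space

        verts = np.random.rand(1000, 3)
        for b in (8, 10, 12):
            rms = np.sqrt(np.mean((verts - quantise_vertices(verts, b)) ** 2))
            print(f"{b} bits per coordinate -> RMS error {rms:.2e}")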

    Digital signal processing for the analysis of fetal breathing movements


    A robust framework for medical image segmentation through adaptable class-specific representation

    Medical image segmentation is an increasingly important component in virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer-based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has been growing over the last decades, and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo section data. These developments have given rise to an increasing need for better automatic and semi-automatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework, known as ACSR (Adaptable Class-Specific Representation), is developed first for 2D colour cryo section segmentation. This is achieved through the development of a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantization. The framework is extended to accommodate 3D volume segmentation of cryo section data and subsequently segmentation of single- and multi-channel greyscale MRI data; for the latter, the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation. Results on both cryo section and MRI data compare favourably to existing methods, demonstrating robustness both to common artefacts and to multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
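
    Since the framework combines class-specific neighbourhood sampling with Learning Vector Quantization, a generic textbook LVQ1 sketch is given below. It is not the thesis's ACSR/PGA implementation; the synthetic 3-feature vectors simply stand in for sampled point-neighbourhood descriptors, and all names are illustrative.

        import numpy as np

        def train_lvq1(x, y, prototypes, proto_labels, lr=0.05, epochs=20, seed=0):
            """Move the nearest prototype toward (same class) or away from (other class) each sample."""
            rng = np.random.default_rng(seed)
            p = prototypes.copy()
            for _ in range(epochs):
                for i in rng.permutation(len(x)):
                    j = np.argmin(np.linalg.norm(p - x[i], axis=1))  # best-matching prototype
                    step = lr * (x[i] - p[j])
                    p[j] += step if proto_labels[j] == y[i] else -step
            return p

        def predict_lvq(x, prototypes, proto_labels):
            d = np.linalg.norm(x[:, None, :] - prototypes[None, :, :], axis=2)
            return proto_labels[np.argmin(d, axis=1)]

        # Two synthetic "tissue classes" in a 3-feature (e.g. colour) space
        rng = np.random.default_rng(1)
        x = np.vstack([rng.normal(0.2, 0.05, (200, 3)), rng.normal(0.7, 0.05, (200, 3))])
        y = np.array([0] * 200 + [1] * 200)
        protos = train_lvq1(x, y, x[[0, 200]].copy(), np.array([0, 1]))
        print(predict_lvq(x[:3], protos, np.array([0, 1])))  # expected: [0 0 0]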

    High accuracy ultrasonic degradation monitoring

    This thesis is concerned with maximising the precision of permanently installed ultrasonic time-of-flight sensors. Numerous sources of uncertainty affecting the measurement precision were considered, and a measurement protocol was suggested to minimise variability. The repeatability that can be achieved with the described measurement protocol was verified in simulations, in laboratory corrosion experiments and in various other experiments. One of the most significant and complex problems affecting the precision, inner wall surface roughness, was also investigated, and a signal processing method was proposed that improves the accuracy of estimated wall thickness loss rates by an order of magnitude compared to standard methods. It was found that the error associated with temperature effects is the most significant among typical experimental sources of uncertainty (e.g. coherent noise and coupling stability). By implementing temperature compensation, it was shown in laboratory experiments that wall thickness can be estimated with a standard deviation of less than 20 nm when temperature is stable (within 0.1 °C) using the signal processing protocol described in this thesis. In more realistic corrosion experiments, where temperature changes were of the order of 4 °C, it was shown that a wall thickness loss of 1 micron can be detected reliably by applying the same measurement protocol. Another major issue affecting both accuracy and precision is changing inner wall surface morphology. Ultrasonic wave reflections from rough inner surfaces result in distorted signals, and these distortions significantly affect the accuracy of wall thickness estimates. A new signal processing method, Adaptive Cross-Correlation (AXC), was described to mitigate the effects of such distortions. It was shown that AXC reduces measurement errors of wall thickness loss rates by an order of magnitude compared to standard signal processing methods, so that mean wall loss can be accurately determined. When wall thickness loss is random and spatially uniform, 90% of wall thickness loss rates measured using AXC lie within 7.5 ± 18% of the actual slope. This means that with mean corrosion rates of 1 mm/year, the wall thickness loss estimate with AXC would be of the order of 0.75-1.1 mm/year. In addition, the feasibility of increasing the accuracy of wall thickness loss rate measurements even further was demonstrated using multiple sensors to measure a single wall thickness loss rate. It was shown that measurement errors can be decreased to 30% of the variability of a single sensor. The main findings of this thesis have led to 1) a solid understanding of the numerous factors that affect the accuracy and precision of wall thickness loss monitoring, 2) a robust signal acquisition protocol, and 3) AXC, a post-processing technique that improves the monitoring accuracy by an order of magnitude. This will benefit corrosion mitigation around the world, which is estimated to cost a developed nation in excess of 2-5% of its GDP. The presented techniques help to reduce the time needed to detect industrially actionable corrosion rates of 0.1 mm/year to a few days. They therefore help to minimise the risk of process fluid leakage and increase overall confidence in asset management.
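
    The following sketch shows how a time-of-flight shift between two backwall echoes can be estimated by cross-correlation and converted to a wall-thickness change. It is a generic correlation estimator, not the thesis's AXC method; the sound velocity, sample rate and simulated echo are illustrative values only.

        import numpy as np

        def thickness_change(echo_ref, echo_new, fs=100e6, velocity=5900.0):
            """Estimate wall-thickness change (m) from the lag between two echo signals."""
            echo_ref = echo_ref - np.mean(echo_ref)
            echo_new = echo_new - np.mean(echo_new)
            xcorr = np.correlate(echo_new, echo_ref, mode="full")
            lag = np.argmax(xcorr) - (len(echo_ref) - 1)  # delay in samples
            dt = lag / fs                                  # change in round-trip time
            return velocity * dt / 2.0                     # pulse-echo: halve the path

        # Simulated echo delayed by 3 samples (~88 micrometres at 5900 m/s, 100 MHz)
        t = np.arange(400)
        ref = np.exp(-((t - 200) / 10.0) ** 2) * np.sin(0.5 * t)
        new = np.roll(ref, 3)
        print(f"Estimated thickness change: {thickness_change(ref, new) * 1e6:.1f} um")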

    The development of artificial neural networks for the analysis of market research and electronic nose data

    This thesis details research carried out into the application of unsupervised neural network and statistical clustering techniques to market research interview survey analysis. The objective of the research was to develop mathematical mechanisms to locate and quantify internal clusters with definite commonality within the data sets. As the data sets being used were binary, this commonality was expressed in terms of identical question answers. Unsupervised neural network paradigms are investigated, along with statistical clustering techniques, and the theory of clustering in a binary space is also examined. Attempts to improve the clarity of output of Self-Organising Maps (SOM) consisted of several stages of investigation, culminating in the conception of the Interrogative Memory Structure (IMS). IMS proved easy to use, fast in operation, and consistently produced results with the highest degree of commonality when tested against SOM, Adaptive Resonance Theory (ART1) and FASTCLUS. ART1 performed well when clusters were measured using general metrics. During the course of the research a supervised technique, the Vector Memory Array (VMA), was developed. VMA was tested against Back Propagation (BP), using data sets provided by the Warwick electronic nose project, and consistently produced higher classification accuracies. The main advantage of VMA is its speed of operation: in testing it produced results in minutes compared to hours for the BP method, giving speed increases in the region of 100:1.
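
    A small illustration of the "commonality" notion used above: for a cluster of binary survey responses, count the questions answered identically by every respondent. The clustering techniques themselves (SOM, ART1, IMS, VMA) are not reproduced here; this only sketches how such a cluster might be scored, under assumed data and naming.

        import numpy as np

        def commonality(cluster):
            """Fraction of questions on which all respondents in the cluster agree."""
            c = np.asarray(cluster, dtype=int)
            identical = np.all(c == c[0], axis=0)  # True where every row matches row 0
            return identical.mean()

        group = [[1, 0, 1, 1, 0],
                 [1, 0, 1, 0, 0],
                 [1, 0, 1, 1, 0]]
        print(commonality(group))  # 0.8 -> respondents agree on 4 of 5 questions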

    An investigation into the prognosis of electromagnetic relays.

    Electrical contacts provide a well-proven solution for switching various loads in a wide variety of applications, such as power distribution, control applications, automotive and telecommunications. However, electrical contacts are known for limited reliability owing to degradation of the switching contacts caused by arcing and fretting; essentially, the life of the device may be determined by the limited life of the contacts. Failure to trip, spurious tripping and contact welding can, in critical applications such as control systems for avionics and nuclear power, cause significant costs due to downtime, as well as safety implications. Prognostics provides a way to assess the remaining useful life (RUL) of a component based on its current state of health and its anticipated future usage and operating conditions. In this thesis, the effects of contact wear on a set of electromagnetic relays used in an avionic power controller are examined, along with how contact resistance, combined with a prognostic approach, can be used to ascertain the RUL of the device. Two methodologies are presented: first, a physics-based model (PbM) of the degradation using the predicted material loss due to arc damage; second, a computationally efficient technique that uses posterior degradation data to form a state-space model in real time via a Sliding Window Recursive Least Squares (SWRLS) algorithm. Health monitoring using the presented techniques can provide knowledge of impending failure in high-reliability applications where the risks associated with loss of functionality are too high to endure. The future states of the system have been estimated based on particle-filter and Kalman-filter projections of the models via a Bayesian framework. Performance of the prognostic health management algorithms over the contacts' life has been quantified using performance evaluation metrics, and model predictions have been correlated with experimental data. Prognostic metrics including Prognostic Horizon (PH), alpha-lambda (α-λ), and Relative Accuracy have been used to assess the performance of the damage proxies, and a comparison of the two models is made.
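
    A minimal sketch of the data-driven idea described above: fit a degradation trend over a sliding window of recent contact-resistance measurements and extrapolate to a failure threshold to estimate RUL. A plain windowed least-squares fit is used here instead of the recursive (SWRLS) update, and no Kalman or particle filtering is applied; the threshold, units and simulated data are illustrative assumptions.

        import numpy as np

        def estimate_rul(cycles, resistance, window=50, threshold=0.1):
            """Return estimated switching cycles remaining until resistance reaches `threshold` ohms."""
            c = np.asarray(cycles, dtype=float)[-window:]
            r = np.asarray(resistance, dtype=float)[-window:]
            slope, intercept = np.polyfit(c, r, 1)  # linear trend over the window
            if slope <= 0:
                return np.inf                       # no upward degradation trend yet
            cycles_at_failure = (threshold - intercept) / slope
            return max(cycles_at_failure - c[-1], 0.0)

        # Example: contact resistance creeping up from 0.05 ohm with measurement noise
        rng = np.random.default_rng(0)
        cyc = np.arange(0, 10_000, 100)
        res = 0.05 + 4e-6 * cyc + rng.normal(0, 1e-3, cyc.size)
        print(f"Estimated RUL: {estimate_rul(cyc, res):.0f} cycles")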