Artificial neural network-statistical approach for PET volume analysis and classification
Copyright © 2012 The Authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article has been made available through the Brunel Open Access Publishing Fund.
The increasing number of imaging studies and the prevailing application of positron emission tomography (PET) in clinical oncology have created a real need for efficient PET volume handling and for new volume analysis approaches to aid clinicians in diagnosis, treatment planning, and assessment of response to therapy. A novel automated system for oncological PET volume analysis is proposed in this work. The proposed intelligent system deploys two types of artificial neural network (ANN) for classifying PET volumes: the first is a competitive neural network (CNN), whereas the second is based on a learning vector quantisation neural network (LVQNN). Furthermore, the Bayesian information criterion (BIC) is used to assess the optimal number of classes for each PET data set, assisting the ANN blocks to achieve accurate analysis by providing the best number of classes. The system was evaluated using experimental phantom studies (the NEMA IEC image quality body phantom), simulated PET studies using the Zubal phantom, and clinical studies representative of non-small cell lung cancer and pharyngolaryngeal squamous cell carcinoma. The proposed analysis methodology for clinical oncological PET data has shown promising results and can successfully classify and quantify malignant lesions.
This study was supported by the Swiss National Science Foundation under Grant SNSF 31003A-125246, the Geneva Cancer League, and the Indo Swiss Joint Research Programme ISJRP 138866.
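The abstract gives no implementation detail, but the role BIC plays in choosing the number of classes can be illustrated generically. The following is a minimal sketch, not the paper's system: a 1-D Gaussian-mixture EM fit scored by BIC over candidate class counts, with synthetic intensities standing in for PET voxel data.

```python
import numpy as np

def gmm_bic(x, k, iters=200):
    """Fit a 1-D Gaussian mixture by EM and return its Bayesian information
    criterion, BIC = -2 * log-likelihood + n_params * ln(n)."""
    n = len(x)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # deterministic quantile init
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each intensity value.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = resp.sum(axis=0) + 1e-9
        w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-3
    loglik = np.log(dens.sum(axis=1)).sum()
    return -2 * loglik + (3 * k - 1) * np.log(n)    # k means, k variances, k-1 weights

rng = np.random.default_rng(1)
# Synthetic voxel intensities drawn from two populations (background vs lesion).
x = np.concatenate([rng.normal(0.0, 0.5, 150), rng.normal(10.0, 0.5, 150)])
best_k = min(range(1, 5), key=lambda k: gmm_bic(x, k))
```

For this clearly bimodal data, the BIC minimum selects two classes: adding further components improves the likelihood less than the parameter penalty costs.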
A survey of outlier detection methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
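As a concrete instance of the statistical end of the spectrum surveyed here, a minimal sketch of a robust distance-based detector, the MAD-based modified z-score; the data and threshold below are illustrative, not from the survey:

```python
import numpy as np

def mad_outliers(x, thresh=3.5):
    """Flag points whose modified z-score, based on the median absolute
    deviation (MAD), exceeds `thresh`."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    # 0.6745 makes the MAD consistent with the standard deviation for
    # normally distributed data (Iglewicz & Hoaglin's modified z-score).
    mz = 0.6745 * (x - med) / (mad + 1e-12)
    return np.abs(mz) > thresh

# A sensor trace with one gross instrument error at index 6.
readings = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 55.0, 10.0]
flags = mad_outliers(readings)
```

Because the median and MAD are themselves insensitive to the outlier, the detector is not masked by the very point it is trying to find, unlike a mean/standard-deviation rule.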
Quality Measurements on Quantised Meshes
In computer graphics, the triangle mesh has emerged as the ubiquitous shape representation for 3D modelling and visualisation applications. Triangle meshes often undergo compression by specialised algorithms for the purposes of storage and transmission. During the compression process, the coordinates of the vertices of the triangle mesh are quantised using fixed-point arithmetic. Potentially, this can alter the visual quality of the 3D model. Indeed, if the number of bits per vertex coordinate is too low, the mesh will be deemed by the user as visually too coarse, as quantisation artifacts will become perceptible. Therefore, there is a need for the development of quality metrics that will enable us to predict the visual appearance of a triangle mesh at a given level of vertex coordinate quantisation.
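The quantisation step described above is easy to make concrete. A minimal sketch, assuming uniform fixed-point quantisation over a known bounding interval; the vertex values and bit depths are illustrative:

```python
def quantise(coords, bits, lo, hi):
    """Uniformly quantise coordinates to `bits`-bit fixed point over [lo, hi]."""
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    return [lo + round((c - lo) / step) * step for c in coords]

# One coordinate of a few vertices, normalised to the unit bounding interval.
verts = [0.123456, 0.654321, 0.999999]
errors = {}
for bits in (8, 12):
    q = quantise(verts, bits, 0.0, 1.0)
    errors[bits] = max(abs(v - qv) for v, qv in zip(verts, q))
```

The geometric error is bounded by half a quantisation step, so each extra bit roughly halves the worst-case displacement; the open question the abstract raises is how that displacement maps to *perceived* quality.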
Computer-Generated Holography for Areal Additive Manufacture
With a market of approximately $10B, additive manufacture (AM) is an exciting next-generation technology with the promise of significant environmental and societal impact. AM promises to help reduce emissions and waste during manufacture while improving sustainability. Widely used in applications from hip implants to jet engines, AM remains the domain of experts due to the material and thermal challenges encountered.
AM in metals is dominated by Laser Powder Bed Fusion (L-PBF). Powder is spread in layers tens of microns thick and selectively melted by scanning a small laser spot heat source over the bed.
Traditional AM systems have limited ability to manage or compensate for the heat generated. The rapidly moving heat-source spot results in high thermal cycling and is a major influence on residual stress and distortion. Mechanical limitations in the galvoscanner mean that over- or under-heating is common and can lead to voids, boiling and spatter. The scale difference between the part size and the spot size means that predictive modelling is beyond the scope of even today's best computing clusters. These factors have led to a frequent inability to ensure part quality without physical prototyping and destructive testing.
This thesis sets out initial research into creating a radically new AM process that uses computer-generated holography (CGH) to produce complex light patterns in a single pulse. Projecting power to the whole layer at once will mean that the thermal properties of the powders before and after writing can be factored into the processed hologram and part design. It will also significantly reduce thermal gradients and melt-pool instability.
The fields of additive manufacture and computer-generated holography are introduced in Chapter 1. Chapters 2 and 3 then provide more detail on CGH and AM modelling respectively. The first deliverable, a reusable software package capable of generating holograms, is presented in Chapter 4. Algorithms developed for the project are introduced in Section 4.3. The first project demonstrator, an AM machine capable of printing in resins using holographic projection, is discussed in Section 6.2. This shows performance comparable to modern 3D printing machines and highlights the applicability of computer-generated holography to areal processes. Section 6.3 then discusses the ongoing development of a metal powder demonstrator. As this PhD forms the first stage of a larger project, only preliminary work on the powder demonstrator is discussed. Chapter 7 then draws conclusions and outlines the way forward for future research.
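The thesis's own algorithms are presented in Chapter 4; as generic background, the classic Gerchberg-Saxton iteration is the standard starting point for computing a phase-only hologram whose far-field replay approximates a target intensity. A minimal sketch, with all parameters illustrative:

```python
import numpy as np

def gerchberg_saxton(target, iters=30, seed=0):
    """Classic Gerchberg-Saxton: iterate between the hologram plane
    (phase-only constraint) and the replay plane (target amplitude)."""
    rng = np.random.default_rng(seed)
    t_amp = np.sqrt(target)
    # Scale target amplitude so its energy matches a unit-amplitude hologram.
    t_amp *= np.sqrt(target.size) / np.linalg.norm(t_amp)
    holo = np.exp(1j * rng.uniform(0, 2 * np.pi, target.shape))
    errs = []
    for _ in range(iters):
        replay = np.fft.fft2(holo, norm="ortho")
        errs.append(np.linalg.norm(np.abs(replay) - t_amp) / np.linalg.norm(t_amp))
        replay = t_amp * np.exp(1j * np.angle(replay))   # impose target amplitude
        holo = np.fft.ifft2(replay, norm="ortho")
        holo = np.exp(1j * np.angle(holo))               # phase-only constraint
    return holo, errs

# A smooth Gaussian spot as the target far-field intensity pattern.
y, x = np.mgrid[-16:16, -16:16]
target = np.exp(-(x**2 + y**2) / 50.0)
holo, errs = gerchberg_saxton(target)
```

The replay-plane error decreases over the iterations, and the result is a pure phase mask, which is exactly the constraint a phase-only spatial light modulator imposes.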
The appendices then provide an in-depth discussion of algorithm performance (Appendices A and B). Appendices C and D discuss digressions into the implementation. Appendices E and F present a laser-induced damage threshold (LIDT) measurement system that was developed. Finally, Appendices G and H provide more detail on the software developed, and Appendix I gives links to additional project resources.
EP/T008369/1; EP/L016567/1; EP/V055003/
A robust framework for medical image segmentation through adaptable class-specific representation
Medical image segmentation is an increasingly important component in virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer-based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has been growing over the last decades, and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo section data. These developments have given rise to an increasing need for better automatic and semi-automatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework, known as ACSR (Adaptable Class-Specific Representation), is developed in the first case for 2D colour cryo section segmentation. This is achieved through the development of a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantisation. The framework is extended to accommodate 3D volume segmentation of cryo section data and subsequently segmentation of single- and multi-channel greyscale MRI data. For the latter, the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation. Results on both cryo section and MRI data compare favourably to existing methods, demonstrating robustness both to common artefacts and to multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
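The Learning Vector Quantisation component can be sketched generically. The following minimal LVQ1 illustration, with synthetic 2-D features and one prototype per class (not the ACSR implementation), shows the core update rule: the nearest prototype is attracted to a correctly labelled sample and repelled otherwise.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20, seed=0):
    """LVQ1: for each sample, move the nearest prototype toward it if the
    labels match, away from it if they do not."""
    rng = np.random.default_rng(seed)
    P = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(P - X[i], axis=1)
            j = d.argmin()
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])
    return P

def lvq_predict(X, P, proto_labels):
    """Classify each sample by the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return proto_labels[d.argmin(axis=1)]

rng = np.random.default_rng(1)
# Two synthetic tissue classes in a 2-D feature space.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P0 = np.array([[1.0, 1.0], [2.0, 2.0]])   # illustrative initial prototypes
labels = np.array([0, 1])
P = lvq1_train(X, y, P0, labels)
acc = (lvq_predict(X, P, labels) == y).mean()
```

In a framework like the one described, the class-specific sampling stage would supply the labelled feature vectors; here they are simply drawn from two Gaussians.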
High accuracy ultrasonic degradation monitoring
This thesis is concerned with maximising the precision of permanently installed ultrasonic time of flight sensors. Numerous sources of uncertainty affecting the measurement precision were considered and a measurement protocol was suggested to minimise variability. The repeatability that can be achieved with the described measurement protocol was verified in simulations and in laboratory corrosion experiments as well as various other experiments. One of the most significant and complex problems affecting the precision, inner wall surface roughness, was also investigated and a signal processing method was proposed to improve the accuracy of estimated wall thickness loss rates by an order of magnitude compared to standard methods.
It was found that the error associated with temperature effects is the most significant among typical experimental sources of uncertainty (e.g. coherent noise and coupling stability). By implementing temperature compensation, it was shown in laboratory experiments that wall thickness can be estimated with a standard deviation of less than 20 nm when temperature is stable (within 0.1 °C) using the signal processing protocol described in this thesis. In more realistic corrosion experiments, where temperature changes were of the order of 4 °C, it was shown that a wall thickness loss of 1 micron can be detected reliably by applying the same measurement protocol.
Another major issue affecting both accuracy and precision is changing inner wall surface morphology. Ultrasonic wave reflections from rough inner surfaces result in distorted signals. These distortions significantly affect the accuracy of wall thickness estimates. A new signal processing method, Adaptive Cross-Correlation (AXC), was described to mitigate the effects of such distortions. It was shown that AXC reduces measurement errors of wall thickness loss rates by an order of magnitude compared to standard signal processing methods, so that mean wall loss can be accurately determined. When wall thickness loss is random and spatially uniform, 90% of wall thickness loss rates measured using AXC lie within 7.5 ± 18% of the actual slope. This means that with a mean corrosion rate of 1 mm/year, the wall thickness loss rate estimated with AXC would be of the order of 0.75-1.1 mm/year.
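AXC itself is the thesis's contribution; the standard cross-correlation baseline it refines can, however, be sketched in a few lines. The pulse shape, sampling rate and delay below are illustrative:

```python
import numpy as np

def xcorr_delay(ref, sig, fs):
    """Estimate the delay of `sig` relative to `ref` (in seconds) from the
    peak of their cross-correlation, the standard time-of-flight estimator."""
    c = np.correlate(sig, ref, mode="full")
    lag = c.argmax() - (len(ref) - 1)
    return lag / fs

fs = 100e6                                   # 100 MHz sampling rate
t = np.arange(0, 5e-6, 1 / fs)
# A Gaussian-windowed 5 MHz tone burst as the reference echo.
pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 1e-6) / 4e-7) ** 2)
shift = 37                                   # extra time of flight, in samples
echo = np.roll(pulse, shift)
delay = xcorr_delay(pulse, echo, fs)
```

With a clean, undistorted echo the correlation peak recovers the delay exactly; the difficulty the thesis addresses is that rough-surface reflections distort the echo shape, biasing this peak, which is what an adaptive scheme must correct for.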
In addition, the feasibility of increasing the accuracy of wall thickness loss rate measurements even further was demonstrated using multiple sensors for measuring a single wall thickness loss rate. It was shown that measurement errors can be decreased to 30% of the variability of a single sensor.
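The reduction from fusing multiple sensors follows the familiar 1/√N behaviour of averaging independent errors, which a short simulation illustrates; the noise level and sensor count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 1.0                      # mm/year wall thickness loss rate
n_sensors, n_trials = 10, 2000
# Each sensor measures the same rate with independent noise (std 0.2 mm/year).
single = true_rate + rng.normal(0, 0.2, (n_trials, n_sensors))
fused = single.mean(axis=1)          # fuse by averaging the sensor estimates
ratio = fused.std() / single.std()   # expected to approach 1/sqrt(10) ~ 0.32
```

For ten independent sensors the fused variability is about 32% of a single sensor's, in line with the roughly 30% figure reported above, assuming the sensor errors really are independent.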
The main findings of this thesis have led to 1) a solid understanding of the numerous factors that affect accuracy and precision of wall thickness loss monitoring, 2) a robust signal acquisition protocol, and 3) AXC, a post-processing technique that improves monitoring accuracy by an order of magnitude. This will benefit corrosion mitigation around the world; corrosion is estimated to cost a developed nation 2-5% of its GDP. The presented techniques help to reduce the response time for detecting industrially actionable corrosion rates of 0.1 mm/year to a few days. They therefore help to minimise the risk of process fluid leakage and increase overall confidence in asset management.
The design of an effective sensor fusion model for condition monitoring systems of turning processes
High energy prices and the increasing requirements for quality and low cost of products have created an urgent need to implement new technologies in current automated manufacturing environments. Condition monitoring systems for manufacturing processes have been recognised in recent years as one of the essential technologies that provide a competitive advantage in many manufacturing environments. This research aims to develop an effective sensor fusion model for turning processes for the detection of tool wear. Multiple sensors combined with a novelty detection algorithm and Learning Vector Quantisation (LVQ) neural networks are used in this research to detect tool wear and provide diagnostic and prognostic information. A novel approach, termed ASPST (Automated Sensor and Signal Processing Selection System for Turning), is used to select the most appropriate sensors and signal processing methods. The aim is to reduce the number of sensors needed in the overall system and thereby reduce cost. The ASPST approach is based on simplifying complex sensory signals into a group of Sensory Characteristic Features (SCFs) and evaluating the sensitivity of these SCFs in detecting tool wear. A wide range of sensory signals (cutting forces, strain, acceleration, acoustic emission and sound) and signal processing methods are implemented to verify the capability of the approach. A cost reduction method is also implemented, based on eliminating the least utilised sensor in an attempt to reduce the overall cost of the system without sacrificing the capability of the condition monitoring system. The experimental results show that the suggested approach provides a responsive and effective solution for monitoring tool wear in turning with reduced time and cost.
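The SCF-sensitivity idea can be sketched generically: compute simple features per signal window and rank them by correlation with measured wear. This is an illustrative reconstruction, not the ASPST implementation; the feature set, signals and wear values are synthetic:

```python
import numpy as np

def sensory_features(signal):
    """A few simple Sensory Characteristic Features (SCFs) of one window."""
    return {
        "rms": np.sqrt(np.mean(signal ** 2)),
        "peak": np.max(np.abs(signal)),
        "std": np.std(signal),
    }

def rank_scfs(windows, wear):
    """Rank SCFs by |Pearson correlation| with the measured tool wear."""
    names = ("rms", "peak", "std")
    feats = {k: np.array([sensory_features(w)[k] for w in windows]) for k in names}
    corrs = {k: abs(np.corrcoef(feats[k], wear)[0, 1]) for k in names}
    order = sorted(corrs, key=corrs.get, reverse=True)
    return order, corrs

rng = np.random.default_rng(0)
wear = np.linspace(0.0, 0.3, 20)                      # mm of flank wear
# Synthetic force signals whose amplitude grows with wear, plus noise.
windows = [(1.0 + 5.0 * w) * rng.normal(0, 1.0, 256) for w in wear]
ranking, corrs = rank_scfs(windows, wear)
```

Keeping only the top-ranked SCFs, and dropping sensors whose SCFs never rank highly, is the kind of cost-reduction step the abstract describes.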
The development of artificial neural networks for the analysis of market research and electronic nose data
This thesis details research carried out into the application of unsupervised neural network and statistical clustering techniques to market research interview survey analysis. The objective of the research was to develop mathematical mechanisms to locate and quantify internal clusters with definite commonality within the data sets. As the data sets being used were binary, this commonality was expressed in terms of identical question answers. Unsupervised neural network paradigms are investigated, along with statistical clustering techniques. The theory of clustering in a binary space is also examined.
Attempts to improve the clarity of output of Self-Organising Maps (SOM) consisted of several stages of investigation, culminating in the conception of the Interrogative Memory Structure (IMS). IMS proved easy to use, fast in operation, and consistently produced results with the highest degree of commonality when tested against SOM, Adaptive Resonance Theory (ART1) and FASTCLUS. ART1 performed well when clusters were measured using general metrics. During the course of the research a supervised technique, the Vector Memory Array (VMA), was developed. VMA was tested against Back Propagation (BP), using data sets provided by the Warwick electronic nose project, and consistently produced higher classification accuracies. The main advantage of VMA is its speed of operation: in testing it produced results in minutes compared to hours for the BP method, giving speed increases in the region of 100:1.
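The notion of commonality as identical question answers is easy to make concrete. A minimal sketch using greedy leader clustering in Hamming space; the clustering rule and data are illustrative, and none of IMS, SOM or ART1 is implemented here:

```python
def commonality(cluster):
    """Number of questions answered identically by every member of a cluster."""
    return sum(len(set(col)) == 1 for col in zip(*cluster))

def hamming(a, b):
    """Number of differing answers between two binary response vectors."""
    return sum(x != y for x, y in zip(a, b))

def greedy_cluster(responses, max_dist):
    """Leader clustering in binary space: assign each respondent to the first
    cluster whose leader is within `max_dist` answers, else start a new one."""
    clusters = []
    for r in responses:
        for c in clusters:
            if hamming(c[0], r) <= max_dist:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

# Four respondents answering six yes/no questions.
responses = [
    (1, 0, 1, 1, 0, 1),
    (1, 0, 1, 0, 0, 1),
    (0, 1, 0, 0, 1, 0),
    (0, 1, 0, 0, 1, 1),
]
clusters = greedy_cluster(responses, max_dist=2)
scores = [commonality(c) for c in clusters]
```

Here the four respondents fall into two clusters, each agreeing on five of the six questions; a commonality score of this kind is the quantity the thesis's techniques are compared on.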
An investigation into the prognosis of electromagnetic relays.
Electrical contacts provide a well-proven solution for switching various loads in a wide variety of applications, such as power distribution, control applications, automotive and telecommunications. However, electrical contacts are known for limited reliability due to degradation of the switching contacts caused by arcing and fretting. Essentially, the life of the device may be determined by the limited life of the contacts. Failure to trip, spurious tripping and contact welding can, in critical applications such as control systems for avionics and nuclear power applications, cause significant costs due to downtime, as well as safety implications.
Prognostics provides a way to assess the remaining useful life (RUL) of a component based on its current state of health and its anticipated future usage and operating conditions. In this thesis, the effects of contact wear on a set of electromagnetic relays used in an avionic power controller are examined, and it is shown how contact resistance, combined with a prognostic approach, can be used to ascertain the RUL of the device.
Two methodologies are presented: firstly, a physics-based model (PbM) of the degradation using the predicted material loss due to arc damage; secondly, a computationally efficient technique using posterior degradation data to form a state space model in real time via a Sliding Window Recursive Least Squares (SWRLS) algorithm.
Health monitoring using the presented techniques can provide knowledge of impending failure in high-reliability applications where the risks associated with loss of functionality are too high to endure. The future states of the system have been estimated based on particle-filter and Kalman-filter projections of the models via a Bayesian framework. Performance of the prognostic health management algorithm over the contacts' life has been quantified using performance evaluation metrics. Model predictions have been correlated with experimental data. Prognostic metrics including Prognostic Horizon (PH), alpha-lambda (α-λ) and Relative Accuracy have been used to assess the performance of the damage proxies, and a comparison of the two models is made.
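The sliding-window trend-fitting step can be sketched generically. The following uses a plain batch least-squares fit over the latest window rather than a true recursive (SWRLS) update, and synthetic contact-resistance data; all parameters are illustrative:

```python
import numpy as np

def sliding_window_slope(t, r, window):
    """Least-squares slope and intercept of the latest `window` samples,
    a simple batch stand-in for a sliding-window RLS update."""
    tw, rw = t[-window:], r[-window:]
    A = np.vstack([tw, np.ones_like(tw)]).T
    slope, intercept = np.linalg.lstsq(A, rw, rcond=None)[0]
    return slope, intercept

def remaining_useful_life(t, r, threshold, window=20):
    """Extrapolate the fitted linear degradation trend to a failure threshold."""
    slope, intercept = sliding_window_slope(t, r, window)
    if slope <= 0:
        return np.inf                # no degradation trend detected
    return (threshold - intercept) / slope - t[-1]

rng = np.random.default_rng(0)
t = np.arange(0.0, 50.0)             # switching cycles (thousands)
# Contact resistance drifting up linearly with wear, plus measurement noise.
r = 1.0 + 0.02 * t + rng.normal(0, 0.01, t.size)
rul = remaining_useful_life(t, r, threshold=2.0)
```

The true crossing of the 2.0 threshold is at t = 50, so the estimate lands close to one cycle-unit of remaining life; a filter-based projection, as used in the thesis, would additionally carry the uncertainty of that estimate forward.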