372 research outputs found

    A multilayered approach to the automatic analysis of the multifocal electroretinogram

    The multifocal electroretinogram (mfERG) provides spatial and temporal information on the retina’s function in an objective manner, making it a valuable tool for monitoring a wide range of retinal abnormalities. Analysis of this clinical test can, however, be both difficult and subjective, particularly if recordings are contaminated with noise, for example from muscle movement or blinking. This can sometimes result in inconsistencies in the interpretation process. An automated and objective method for analysing the mfERG would be beneficial, for example in multi-centre clinical trials where large volumes of data require quick and consistent interpretation. The aim of this thesis was therefore to develop a system capable of standardising mfERG analysis. A series of methods aimed at achieving this are presented. These include a technique for grading the quality of a recording, both during and after a test, and several approaches for stating whether a waveform contains a physiological response or no significant retinal function. Different techniques are also utilised to report whether a response is within normal latency and amplitude values. The integrity of a recording was assessed by viewing the raw, uncorrelated data in the frequency domain; clear differences between acceptable and unacceptable recordings were revealed. A scale ranging from excellent to unreportable was defined for the recording quality, first in terms of noise resulting from blinking and loss of fixation, and second, for muscle noise. Fifty mfERG tests of varying recording quality were graded using this method, with particular emphasis on the distinction between a test which should or should not be reported. Three experts also assessed the mfERG recordings independently; the grading provided by the experts was compared with that of the system. Three approaches were investigated to classify a mfERG waveform as ‘response’ or ‘no response’ (i.e. whether or not it contained a physiological response): artificial neural networks (ANN); analysis of the frequency domain profile; and the signal to noise ratio. These techniques were then combined using an ANN to provide a final classification of ‘response’ or ‘no response’. Two methods were studied to differentiate responses which were delayed from those within normal timing limits: ANN and spline fitting. Again, the output of each was combined to provide a latency classification for the mfERG waveform. Finally, spline fitting was utilised to classify responses as ‘decreased in amplitude’ or ‘not decreased’. A total of 1000 mfERG waveforms were subsequently analysed by an expert; these represented a wide variety of retinal function and quality. Classifications stated by the system were compared with those of the expert to assess its performance. An agreement of 94% was achieved between the experts and the system when making the distinction between tests which should or should not be reported. The final system classified 95% of the 1000 mfERG waveforms correctly as ‘response’ or ‘no response’. Of those said to represent an area of functioning retina, it concurred with the expert for 93% of the responses when categorising them as normal or abnormal in terms of their P1 amplitude and latency. The majority of misclassifications were made when analysing waveforms with a P1 amplitude or latency close to the boundary between normal and abnormal. It was evident that the multilayered system has the potential to provide an objective and automated assessment of the mfERG test; this would not replace the expert but could provide an initial analysis for the expert to review.
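    As a concrete illustration of one of the three cues named in this abstract (the signal-to-noise ratio), the sketch below thresholds the power in an assumed response window against a later noise window. The sampling rate, window boundaries and decision threshold are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of an SNR-based 'response' / 'no response' decision for a
# single mfERG waveform. All numeric parameters are assumptions for
# illustration only.
import numpy as np

def snr_response_test(waveform, fs=1200.0, signal_window=(0.01, 0.06),
                      noise_window=(0.10, 0.15), threshold_db=3.0):
    """Return (snr_db, label) where label is 'response' or 'no response'."""
    t = np.arange(len(waveform)) / fs
    sig = waveform[(t >= signal_window[0]) & (t < signal_window[1])]
    noise = waveform[(t >= noise_window[0]) & (t < noise_window[1])]
    snr_db = 10.0 * np.log10(np.mean(sig ** 2) / np.mean(noise ** 2))
    return snr_db, ("response" if snr_db > threshold_db else "no response")

# Usage with a synthetic waveform: a damped oscillation plus additive noise.
rng = np.random.default_rng(0)
t = np.arange(0, 0.15, 1 / 1200.0)
wave = np.exp(-t / 0.02) * np.sin(2 * np.pi * 30 * t) + 0.05 * rng.standard_normal(t.size)
print(snr_response_test(wave))
```

    In the thesis this cue is only one input; it is combined with an ANN and a frequency-domain profile before a final classification is made.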

    DEVELOPMENT OF METHODOLOGIES FOR RAMAN SPECTRAL ANALYSIS OF HUMAN SALIVA FOR DETECTION OF ORAL CANCER

    Oral cancer is one of the most common malignancies worldwide, with between 350,000 and 400,000 new cases reported each year. Early detection, followed by appropriate treatment, can increase cure rates to 80–90% and greatly improve quality of life by minimising extensive, debilitating treatments. Usually, the clinical diagnosis of most head and neck neoplasms, including oral cancer, is performed through time-consuming and invasive biopsies followed by histological examination of the excised tissue, which may cause psychological trauma and carry a risk of infection for patients. In addition, histological grading can be subjective, as it is based on subtle morphological changes. In this context, saliva is gaining interest as a diagnostic fluid, since it represents a non-invasive, safe, cheap source of complex biomolecular information that can easily be obtained from the oral cavity. In parallel, increased effort is being devoted to developing less invasive early diagnostic modalities for oral cancer, of which novel optical systems, such as Raman spectroscopy, hold great promise. The overall aim of this study is to develop methodologies for the analysis of human saliva using Raman spectroscopy with future applicability to oral cancer diagnosis. In order to optimise the measurement protocol, a number of different microscope configurations, source lasers, and substrates were trialled. Once the measurement protocol was optimised, it was validated using artificial saliva and real human saliva. The individual saliva constituent components, as well as the artificial saliva itself, were characterised and recorded. Following the standardisation protocol, real human whole saliva samples collected using two different collection methods were subjected to centrifugal filtration. The Raman signal from whole saliva was acquired and analysed using statistical tools, demonstrating the potential for diagnostic applications. The Raman spectroscopic profiles of patients with saliva samples of different oral dysplastic pathologies, such as epithelial oral dysplasia and oral cancer, were then further analysed and spectroscopically assessed. Finally, confounding factors, such as smoking habits and alcohol consumption, were also assessed in terms of their influence on the Raman classification of these pathologies. This research showed that Raman spectroscopy was able to successfully discriminate stimulated saliva samples from healthy volunteers and patients with oral cancer or potentially malignant lesions, highlighting the weak influence of confounding factors such as gender, age, smoking and alcohol consumption. However, further studies are still required to improve classification among the different dysplasia grades.
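    The abstract refers to analysing Raman spectra with statistical tools without prescribing a specific chain. The sketch below shows one common spectral classification pipeline (vector normalisation, PCA, then linear discriminant analysis); the steps, parameters and synthetic data are assumptions for illustration, not the thesis’s exact methodology.

```python
# A minimal sketch of a common Raman spectral classification pipeline.
# Preprocessing choices and model parameters are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_spectra(spectra, labels, n_components=10):
    """spectra: (n_samples, n_wavenumbers) array; labels: e.g. 'control'/'cancer'."""
    # L2-normalise each spectrum so the model reflects spectral shape, not intensity.
    X = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
    model = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                          LinearDiscriminantAnalysis())
    scores = cross_val_score(model, X, labels, cv=5)
    return model.fit(X, labels), scores

# Usage with synthetic data standing in for measured saliva spectra.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 900))           # 60 spectra, 900 wavenumber bins
y = np.repeat(["control", "cancer"], 30)
_, scores = classify_spectra(X, y)
print("cross-validated accuracy:", scores.mean())
```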

    Condition monitoring systems : a systematic literature review on machine-learning methods improving offshore-wind turbine operational management

    Information is key. Offshore wind farms are installed with supervisory control and data acquisition (SCADA) systems that gather valuable information. Determining the precise condition of an asset is essential to achieving the expected operational lifetime and efficiency, and equipment fault detection is necessary to achieve this. This paper presents a systematic literature review of machine learning methods applied to condition monitoring systems using vibration information and SCADA data together. It covers conventional vibration-based methods, such as fast Fourier transforms, through to five prominent supervised learning regression models: artificial neural networks, support vector regression, Bayesian networks, random forests and k-nearest neighbours. The review specifically looks at how conventional vibration data can be combined with SCADA data to determine an asset’s condition.
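    The fusion idea described in this abstract can be sketched as follows: spectral features from a vibration signal are concatenated with SCADA channels and fed to one of the reviewed regressors (here a random forest). The feature choice, segment lengths and the health-index target are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of combining vibration FFT features with SCADA data for a
# condition-indicator regression. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def vibration_features(signal, n_peaks=5):
    """Return the amplitudes of the n_peaks strongest spectral lines."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    return np.sort(spectrum)[-n_peaks:]

def build_feature_matrix(vibration_segments, scada_rows):
    """Each row: FFT peak amplitudes joined with SCADA readings (power, temperature, wind speed, ...)."""
    return np.array([np.concatenate([vibration_features(v), s])
                     for v, s in zip(vibration_segments, scada_rows)])

# Synthetic example: 200 short vibration segments with three SCADA channels each.
rng = np.random.default_rng(2)
vib = [rng.standard_normal(4096) for _ in range(200)]
scada = rng.normal(size=(200, 3))
health = rng.uniform(0, 1, size=200)      # stand-in condition indicator

X = build_feature_matrix(vib, scada)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, health)
print("training R^2:", model.score(X, health))
```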

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The proceedings of the MAVEBA Workshop, held every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to the clinical diagnosis and classification of vocal pathologies.

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the interface between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Computer audition for emotional wellbeing

    This thesis is focused on the application of computer audition (i.e., machine listening) methodologies for monitoring states of emotional wellbeing. Computer audition is a growing field and has been successfully applied to an array of use cases in recent years. There are several advantages to audio-based computational analysis; for example, audio can be recorded non-invasively, stored economically, and can capture rich information on happenings in a given environment, e.g., human behaviour. With this in mind, maintaining emotional wellbeing is a challenge for humans, and emotion-altering conditions, including stress and anxiety, have become increasingly common in recent years. Such conditions manifest in the body, inherently changing how we express ourselves. Research shows these alterations are perceivable within vocalisation, suggesting that speech-based audio monitoring may be valuable for developing artificially intelligent systems that target improved wellbeing. Furthermore, computer audition applies machine learning and other computational techniques to audio understanding, and so by combining computer audition with applications in the domain of computational paralinguistics and emotional wellbeing, this research concerns the broader field of empathy for Artificial Intelligence (AI). To this end, speech-based audio modelling that incorporates and understands paralinguistic wellbeing-related states may be a vital cornerstone for improving the degree of empathy that an artificial intelligence has. To summarise, this thesis investigates the extent to which speech-based computer audition methodologies can be utilised to understand human emotional wellbeing. A fundamental background on the fields in question as they pertain to emotional wellbeing is first presented, followed by an outline of the applied audio-based methodologies. Next, detail is provided for several machine learning experiments focused on emotional wellbeing applications, including analysis and recognition of under-researched phenomena in speech, e.g., anxiety and markers of stress. Core contributions from this thesis include the collection of several related datasets, hybrid fusion strategies for an emotional gold standard, novel machine learning strategies for data interpretation, and an in-depth acoustic-based computational evaluation of several human states. All of these contributions focus on ascertaining the advantage of audio in the context of modelling emotional wellbeing. Given the sensitive nature of human wellbeing, the ethical implications involved with developing and applying such systems are discussed throughout.
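    As a generic illustration of speech-based recognition of a wellbeing-related state, the sketch below summarises each recording with MFCC statistics and trains a support-vector classifier. The feature set, model, file names and labels are assumptions chosen for brevity; the thesis’s own datasets, features and fusion strategies are not reproduced here.

```python
# Minimal sketch of speech-based classification of a wellbeing-related state.
# File names and labels below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def mfcc_stats(path, sr=16_000, n_mfcc=13):
    """Summarise one recording as per-coefficient MFCC means and standard deviations."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_state_classifier(paths, labels):
    """labels: e.g. 'anxious' vs. 'calm' annotations, one per recording."""
    X = np.vstack([mfcc_stats(p) for p in paths])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return model.fit(X, labels)

# Hypothetical usage (the recordings and annotations are placeholders):
# model = train_state_classifier(["rec_001.wav", "rec_002.wav"], ["anxious", "calm"])
# print(model.predict([mfcc_stats("rec_003.wav")]))
```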

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010