
    Processing of image sequences from fundus camera

    The aim of my master's thesis was to propose a method of retinal sequence analysis that evaluates the quality of each frame. The theoretical part also covers the properties of retinal sequences and the registration of images from the fundus camera. In the practical part, the image-quality assessment method is implemented, tested on real retinal sequences, and its success rate is evaluated. The thesis also assesses the impact of this method on the registration of retinal images.
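    The abstract does not detail the quality metric itself. As a hedged illustration, a simple sharpness proxy such as the mean gradient magnitude can rank frames of a retinal sequence; the function names and threshold below are assumptions, not the thesis's actual method:

```python
import numpy as np

def sharpness_score(frame: np.ndarray) -> float:
    """Mean gradient magnitude as a simple sharpness proxy.

    Higher values indicate a sharper, better-focused frame.
    """
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def select_frames(frames, threshold):
    """Keep only frames whose sharpness exceeds the threshold."""
    return [f for f in frames if sharpness_score(f) >= threshold]
```

    A defocused, nearly flat frame scores lower than one with strong intensity gradients, so thresholding this score can discard blurred frames before registration.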

    Combining Shape and Learning for Medical Image Analysis

    Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities, i.e. pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
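    On the shape-modelling side, multi-atlas segmentation fuses several atlas label maps after registering them to the target image. A minimal sketch of the simplest fusion rule, per-voxel majority voting (assuming the atlases are already registered), could look like:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse co-registered atlas label maps by per-voxel majority vote.

    label_maps: list of integer arrays of identical shape, one per atlas,
    assumed to be already registered to the target image.
    """
    stacked = np.stack(label_maps)  # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then pick the winner.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)
```

    In practice the vote is often weighted by local registration quality, but the unweighted rule above conveys the idea.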

    Classification of THz pulse signals using two-dimensional cross-correlation feature extraction and non-linear classifiers

    This work provides a performance comparison of four different machine learning classifiers applied to terahertz (THz) transient time-domain sequences associated with pixelated images of different powder samples: a multinomial logistic regression with ridge estimators (MLR) classifier, k-nearest neighbours (KNN), support vector machine (SVM) and naïve Bayes (NB). Although the six substances considered have similar optical properties, their complex insertion loss at the THz part of the spectrum differs significantly because of differences in their frequency-dependent THz extinction coefficients as well as in their refractive indices and scattering properties. As scattering can be unquantifiable in many spectroscopic experiments, classification based solely on differences in complex insertion loss can be inconclusive. The problem is addressed using two-dimensional (2-D) cross-correlations between background and sample interferograms; these ensure good noise suppression in the datasets and provide a range of statistical features that are subsequently used as inputs to the above classifiers. A cross-validation procedure is adopted to assess the performance of the classifiers. First, measurements of samples with thicknesses of 2 mm were classified, followed by samples of 4 mm and then 3 mm, and the success rate and consistency of each classifier were recorded. In addition, mixtures with thicknesses of 2 and 4 mm, as well as of 2, 3 and 4 mm, were presented simultaneously to all classifiers. This approach provided further cross-validation of the classification consistency of each algorithm.
    The results confirm the superiority in classification accuracy and robustness of the MLR (lowest accuracy 88.24%) and KNN (lowest accuracy 90.19%) algorithms, which consistently outperformed the SVM (lowest accuracy 74.51%) and NB (lowest accuracy 56.86%) classifiers for the same number of feature vectors across all studies. The work establishes a general methodology for assessing the performance of other hyperspectral dataset classifiers on the basis of 2-D cross-correlations in far-infrared spectroscopy or other parts of the electromagnetic spectrum. It also advances the wider proliferation of automated THz imaging systems across new application areas, e.g. biomedical imaging, industrial processing and quality control, where interpretation of hyperspectral images is still under development.
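    The 2-D cross-correlation feature extraction can be sketched in a few lines. This is a hedged illustration only (the specific statistical features used in the paper may differ): it computes the full correlation surface via zero-padded FFTs and summarizes it with simple moments plus the peak location.

```python
import numpy as np

def xcorr2(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Full 2-D cross-correlation via zero-padded FFTs (pure NumPy)."""
    shape = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    fa = np.fft.rfft2(a, shape)
    # Correlation equals convolution with the second input flipped.
    fb = np.fft.rfft2(np.flip(b), shape)
    return np.fft.irfft2(fa * fb, shape)

def xcorr_features(background: np.ndarray, sample: np.ndarray) -> np.ndarray:
    """Summarize the correlation surface of background and sample
    interferograms with simple statistics plus the peak position."""
    s = xcorr2(background, sample)
    peak_row, peak_col = np.unravel_index(np.argmax(s), s.shape)
    return np.array([s.max(), s.mean(), s.std(), peak_row, peak_col])
```

    Feature vectors of this kind are what the MLR, KNN, SVM and NB classifiers would then be trained on.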

    Classification of Corneal Nerve Images Using Machine Learning Techniques

    Recent research shows that small nerve fiber damage is an early indicator of neuropathy. These small nerve fibers are present in the human cornea and can be visualized through the use of a corneal confocal microscope. A series of images can be acquired from the subbasal nerve plexus of the cornea. Before the images can be quantified for nerve loss, a human expert manually traces the nerves in the image and then classifies the image as having neuropathy or not. Some nerve tracing algorithms are available in the literature, but none of them are reported as being used in clinical practice. An alternative practice is to visually classify the image for neuropathy without quantification. In this paper, we evaluate the potential of various machine learning techniques for automating corneal nerve image classification. First, the images are down-sampled using the discrete wavelet transform, filtering and a number of morphological operations. The resulting binary image is used for extracting characteristic features of the image. This is followed by training the classifier on the extracted features. The trained classifier is then used for predicting the state of the nerves in the images. Our experiments yield a classification accuracy of 0.91, reflecting the effectiveness of the proposed method.
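    The pre-processing pipeline can be illustrated with a minimal sketch. Here the one-level Haar approximation band stands in for the paper's discrete wavelet transform, and the two features below are illustrative assumptions, not the paper's actual feature set:

```python
import numpy as np

def haar_approx(img: np.ndarray) -> np.ndarray:
    """One level of the 2-D Haar DWT: keep the low-pass (approximation)
    band, which halves each image dimension -- simple wavelet down-sampling."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

def nerve_features(img: np.ndarray, threshold: float) -> np.ndarray:
    """Binarize the down-sampled image and extract two simple features:
    the fraction of candidate nerve pixels and their mean intensity."""
    small = haar_approx(img)
    binary = small > threshold
    frac = binary.mean()
    mean_val = small[binary].mean() if binary.any() else 0.0
    return np.array([frac, mean_val])
```

    A classifier would then be trained on such feature vectors, one per image.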

    A review of domain adaptation without target labels

    Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around mapping, projecting and representing features such that a source classifier performs well on the target domain, while inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research.
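    The sample-based family can be made concrete with a small sketch: estimate importance weights w(x) = p_target(x) / p_source(x) and use them to re-weight the source-domain loss. The Gaussian kernel density estimator below is one simple choice among many, and the bandwidth is an assumed hyperparameter:

```python
import numpy as np

def kde(points: np.ndarray, samples: np.ndarray, bandwidth: float) -> np.ndarray:
    """Gaussian kernel density estimate built from `points`,
    evaluated at `samples` (both arrays of shape (n, d))."""
    d = samples[:, None, :] - points[None, :, :]
    k = np.exp(-np.sum(d ** 2, axis=-1) / (2 * bandwidth ** 2))
    return k.mean(axis=1)

def importance_weights(source_x, target_x, bandwidth=1.0, eps=1e-12):
    """w(x) = p_target(x) / p_source(x) evaluated at the source samples;
    these weights re-balance the source loss toward the target domain."""
    return kde(target_x, source_x, bandwidth) / (kde(source_x, source_x, bandwidth) + eps)
```

    Source points lying in regions dense under the target distribution receive large weights, while points the target domain never produces are down-weighted toward zero.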

    Image-guided port placement for minimally invasive cardiac surgery

    Minimally invasive surgery is becoming popular for a number of interventions. Use of robotic surgical systems in coronary artery bypass intervention offers many benefits to patients but is limited by remaining challenges in port placement. Choosing the entry ports for the robotic tools has a large impact on the outcome of the surgery, and can be assisted by pre-operative planning and intra-operative guidance techniques. In this thesis, pre-operative 3D computed tomography (CT) imaging is used to plan minimally invasive robotic coronary artery bypass (MIRCAB) surgery. From a patient database, port placement optimization routines are implemented and validated. Computed port placement configurations approximated past expert-chosen configurations with an error of 13.7 ± 5.1 mm. Following optimization, statistical classification was used to assess patient candidacy for MIRCAB. Various pattern recognition techniques were used to predict MIRCAB success and could be used in the future to reduce conversion rates to conventional open-chest surgery. Gaussian, Parzen-window and nearest-neighbour classifiers all proved able to detect ‘candidate’ and ‘non-candidate’ MIRCAB patients. Intra-operative registration and laser projection of port placements was validated on a phantom and then evaluated in four patient cases. An image-guided laser projection system was developed to map port placement plans from pre-operative 3D images. Port placement mappings on the phantom setup were accurate with an error of 2.4 ± 0.4 mm. In the patient cases, projections remained within 1 cm of computed port positions. Misregistered port placement mappings in human trials were due mainly to the rigid-body registration assumption and can be improved by non-rigid techniques. Overall, this work presents an integrated approach for: 1) pre-operative port placement planning and classification of incoming MIRCAB patients; and 2) intra-operative guidance of port placement. Effective translation of these techniques to the clinic will enable MIRCAB as a more efficacious and accessible procedure.
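    Of the classifiers listed, the nearest-neighbour rule is the simplest to sketch. Assuming feature vectors describing past patients and binary candidacy labels (an illustrative setup, not the thesis's exact configuration), a 1-NN prediction looks like:

```python
import numpy as np

def nn_classify(train_x: np.ndarray, train_y: np.ndarray,
                query: np.ndarray) -> np.ndarray:
    """1-nearest-neighbour prediction: each query point receives the
    label of its closest training point (Euclidean distance)."""
    # Pairwise distances, shape (n_query, n_train), via broadcasting.
    d = np.linalg.norm(train_x[None, :, :] - query[:, None, :], axis=-1)
    return train_y[d.argmin(axis=1)]
```

    A new patient's feature vector is thus labelled ‘candidate’ or ‘non-candidate’ according to the most similar patient in the database.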

    Depth Segmentation Method for Cancer Detection in Mammography Images

    Breast cancer detection remains a subject of intense research and ongoing debate. Mammography has long been the mainstay of breast cancer detection and is the only screening test proven to reduce mortality. Computer-aided diagnosis (CAD) systems have the potential to assist radiologists in the early detection of cancer. Many techniques have been introduced based on SVM classifiers, spatial- and frequency-domain methods, active contours and k-NN clustering, but these methods have drawbacks in terms of signal-to-noise ratio and efficiency. The quality of cancer cell detection depends on the segmentation of the mammography image. Here a new segmentation method is proposed that segments the image depth-wise and also implements colour-based segmentation. This simplifies feature identification and the detection of malignant and benign cells, and increases the efficiency of detecting early stages of breast cancer in mammography images. A relative signal enhancement technique is also applied to high-dynamic-range images. A Markov random field (MRF) is used for the depth segmentation because it can model the intensity inhomogeneities occurring in mammography images, which helps locate the tumor. DOI: 10.17762/ijritcc2321-8169.15023
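    An MRF-based segmentation of this kind can be sketched with iterated conditional modes (ICM) over a Potts smoothness prior. The class means, beta and iteration count below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def icm_segment(img: np.ndarray, means, beta: float = 1.0, iters: int = 5):
    """Iterated conditional modes for a Potts-prior MRF segmentation.

    Each pixel label minimizes a data term (squared distance to the class
    mean) plus beta times the number of disagreeing 4-neighbours, which
    encourages smooth regions despite intensity inhomogeneities.
    """
    # Initialize each pixel with the nearest class mean.
    labels = np.abs(img[..., None] - np.asarray(means)).argmin(-1)
    for _ in range(iters):
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                costs = []
                for k, m in enumerate(means):
                    data = (img[i, j] - m) ** 2
                    smooth = sum(labels[a, b] != k
                                 for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                                 if 0 <= a < img.shape[0] and 0 <= b < img.shape[1])
                    costs.append(data + beta * smooth)
                labels[i, j] = int(np.argmin(costs))
    return labels
```

    With a sufficiently large beta, isolated noisy pixels are pulled toward the label of their neighbourhood, which is the smoothing behaviour the MRF prior provides.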

    Text Classification

    There is an abundance of text data in this world, but most of it is raw. We need to extract information from this data to make use of it. One way to extract this information from raw text is to apply informative labels drawn from a pre-defined fixed set, i.e. text classification. In this thesis, we focus on the general problem of text classification and work towards solving challenges associated with binary/multi-class/multi-label classification. More specifically, we deal with the problems of (i) zero-shot labels during testing; (ii) active learning for text screening; (iii) multi-label classification under low supervision; (iv) structured label spaces; (v) classifying pairs of words in raw text, i.e. relation extraction. For (i), we use a zero-shot classification model that utilizes independently learned semantic embeddings. Regarding (ii), we propose a novel active learning algorithm that reduces the problem of bias in naive active learning algorithms. For (iii), we propose a neural candidate-selector architecture that starts from a set of high-recall candidate labels to obtain high-precision predictions. In the case of (iv), we propose an attention-based neural tree decoder that recursively decodes an abstract into the ontology tree. For (v), we propose using second-order relations that are derived by explicitly connecting pairs of words via context token(s) for improved relation extraction. We use a wide variety of both traditional and deep machine learning tools. More specifically, we use traditional machine learning models like multi-valued linear regression and logistic regression for (i, ii), deep convolutional neural networks for (iii), recurrent neural networks for (iv) and transformer networks for (v).
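    The zero-shot setting (i) can be sketched with nearest-label-embedding classification: a document is assigned the label whose semantic embedding is most similar to the document embedding, so labels unseen during training need only an embedding. The cosine-similarity rule here is an illustrative assumption, not the thesis's exact model:

```python
import numpy as np

def zero_shot_classify(doc_vec, label_vecs, label_names):
    """Assign the label whose semantic embedding has the highest cosine
    similarity to the document embedding; unseen labels only need a vector."""
    l = np.asarray(label_vecs, dtype=float)
    d = np.asarray(doc_vec, dtype=float)
    sims = (l @ d) / (np.linalg.norm(l, axis=1) * np.linalg.norm(d) + 1e-12)
    return label_names[int(np.argmax(sims))]
```

    Adding a brand-new label at test time amounts to appending one row to `label_vecs`; no retraining of the document encoder is needed.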