
    Fusion based analysis of ophthalmologic image data

    The paper presents an overview of the image analysis activities of the Brno DAR group in the medical application area of retinal imaging. Illumination correction and SNR enhancement by registered averaging are briefly described as preprocessing steps; mono- and multimodal registration methods developed for specific types of ophthalmological images are then presented, together with methods for segmentation of the optic disc, the retinal vessel tree, and autofluorescence areas. Finally, the methods designed for nerve fibre layer detection and evaluation on retinal images, utilising different combined texture analysis approaches and several types of classifiers, are shown. The results in all areas are briefly commented on in the respective sections. To emphasise methodological aspects, the methods and results are ordered according to consecutive phases of processing rather than divided according to individual medical applications.
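The registered-averaging preprocessing mentioned here exploits a standard statistical fact: averaging N aligned acquisitions of the same scene attenuates uncorrelated noise by roughly a factor of √N. A minimal sketch on purely synthetic data (perfect registration and Gaussian noise are assumptions for illustration, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" image (a smooth synthetic pattern stands in for a retina).
x = np.linspace(0.0, 1.0, 128)
truth = np.outer(np.sin(4 * x), np.cos(3 * x))

# N registered acquisitions corrupted by independent Gaussian noise.
N, sigma = 16, 0.2
frames = [truth + rng.normal(0.0, sigma, truth.shape) for _ in range(N)]

# Averaging the registered frames suppresses noise by about 1/sqrt(N).
mean_frame = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - truth)
noise_avg = np.std(mean_frame - truth)
print(noise_single / noise_avg)  # close to sqrt(16) = 4
```

With 16 frames the residual noise standard deviation drops to about a quarter of the single-frame level, which is why averaging is attractive as an SNR-enhancement step before segmentation.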

    FCM Clustering Algorithms for Segmentation of Brain MR Images

    The study of brain disorders requires accurate tissue segmentation of magnetic resonance (MR) brain images, which is very important for detecting tumors, edema, and necrotic tissues. Segmentation of brain images, especially into the three main tissue types, Cerebrospinal Fluid (CSF), Gray Matter (GM), and White Matter (WM), plays an important role in computer-aided neurosurgery and diagnosis. Brain images mostly contain noise, intensity inhomogeneity, and weak boundaries; accurate segmentation of brain images therefore remains a challenging area of research. This paper presents a review of fuzzy c-means (FCM) clustering algorithms for the segmentation of brain MR images. The review covers a detailed analysis of FCM-based algorithms with intensity inhomogeneity correction and noise robustness. Different methods for modifying the standard fuzzy objective function, with updating of the membership and cluster centroids, are also discussed.
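The standard FCM iteration that the reviewed algorithms modify alternates between a membership update and a centroid update. A minimal 1-D sketch (intensity means, spreads, and the quantile initialisation are illustrative assumptions, not values from the paper):

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=50):
    """Standard fuzzy c-means on a 1-D intensity sample x."""
    # Initialise centroids at spread-out quantiles of the data.
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = 1.0 / (d ** p * (d ** -p).sum(axis=0))
        um = u ** m
        # Centroid update: c_i = sum_k u_ik^m x_k / sum_k u_ik^m.
        centers = um @ x / um.sum(axis=1)
    return centers, u

# Illustrative example: three intensity populations standing in for
# CSF / GM / WM (the means and noise level are made up for demonstration).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(mu, 5.0, 500) for mu in (30.0, 100.0, 170.0)])
centers, u = fcm(x)
print(np.sort(centers))   # roughly [30, 100, 170]
```

The reviewed variants keep this alternating structure but add bias-field terms or spatial neighbourhood penalties to the objective for inhomogeneity correction and noise robustness.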

    Computer aided analysis of inflammatory muscle disease using magnetic resonance imaging

    Inflammatory muscle disease (myositis) is characterised by inflammation and a gradual increase in muscle weakness. Diagnosis typically requires a range of clinical tests, including magnetic resonance imaging of the thigh muscles to assess disease severity. In the past, this has been measured by manually counting the number of muscles affected. In this work, a computer-aided analysis of inflammatory muscle disease is presented to help doctors diagnose and monitor the disease. Methods to quantify the level of oedema and fat infiltration from magnetic resonance scans are proposed, and the disease quantities determined are shown to correlate positively with expert medical opinion. The methods have been designed and tested on a database of clinically acquired T1 and STIR sequences, and are shown to be robust despite suboptimal image quality. General background information is first introduced, giving an overview of the medical, technical, and theoretical topics necessary to understand the problem domain. Next, a detailed introduction to the physics of magnetic resonance imaging is given. A review of important literature from similar and related domains is presented, with valuable insights that are utilised at a later stage. Scans are carefully pre-processed to bring all slices into a common frame of reference, and the methods to quantify the level of oedema and fat infiltration are defined and shown to have good positive correlation with expert medical opinion. A number of validation tests are performed with re-scanned subjects to indicate the level of repeatability. The disease quantities, together with statistical features from the T1-STIR joint histogram, are used for automatic classification of the disease severity. Automatic classification is shown to be successful on out-of-sample data for both the oedema and fat infiltration problems.
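The T1-STIR joint histogram mentioned above bins each voxel by its pair of intensities in the two co-registered sequences; statistical features (such as the joint entropy shown here) can then be computed from it. A minimal sketch on synthetic stand-in data (the images, bin count, and choice of feature are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for co-registered, intensity-normalised T1 and STIR slices.
# Real scans would first be pre-processed into a common frame of reference.
t1 = rng.normal(0.5, 0.1, (64, 64)).clip(0.0, 1.0)
stir = (0.8 * t1 + rng.normal(0.0, 0.05, t1.shape)).clip(0.0, 1.0)

# Joint histogram over a fixed grid of intensity bins.
hist, _, _ = np.histogram2d(t1.ravel(), stir.ravel(),
                            bins=32, range=[[0.0, 1.0], [0.0, 1.0]])
p = hist / hist.sum()

# One example statistical feature: the joint (Shannon) entropy in bits.
nz = p[p > 0]
joint_entropy = -(nz * np.log2(nz)).sum()
print(round(joint_entropy, 2))
```

Features of this kind summarise how the two sequences co-vary, which is what makes them usable as classifier inputs for grading disease severity.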

    Statistical analysis for longitudinal MR imaging of dementia

    Serial magnetic resonance (MR) imaging can reveal structural atrophy in the brains of subjects with neurodegenerative diseases such as Alzheimer’s Disease (AD). Methods of computational neuroanatomy allow the detection of statistically significant patterns of brain change over time and/or over multiple subjects. The focus of this thesis is the development and application of statistical and supporting methodology for the analysis of three-dimensional brain imaging data. There is a particular emphasis on longitudinal data, though much of the statistical methodology is more general. New methods of voxel-based morphometry (VBM) are developed for serial MR data, employing combinations of tissue segmentation and longitudinal non-rigid registration. The methods are evaluated using novel quantitative metrics based on simulated data. Contributions to general aspects of VBM are also made, including a publication concerning guidelines for reporting VBM studies, and another examining an issue in the selection of which voxels to include in the statistical analysis mask for VBM of atrophic conditions. Research is carried out into the statistical theory of permutation testing for application to multivariate general linear models, and is then used to build software for the analysis of multivariate deformation- and tensor-based morphometry data, efficiently correcting for the multiple comparison problem inherent in voxel-wise analysis of images. Monte Carlo simulation studies extend results available in the literature regarding the different strategies available for permutation testing in the presence of confounds. Theoretical aspects of longitudinal deformation- and tensor-based morphometry are explored, such as the options for combining within- and between-subject deformation fields. Practical investigation of several different methods and variants is performed for a longitudinal AD study.
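Permutation testing corrects the voxel-wise multiple comparison problem by comparing each observed statistic against the null distribution of the *maximum* statistic over all voxels, obtained by re-randomising group labels. A minimal univariate sketch (toy data, a simple difference-of-means statistic, and the effect size are illustrative assumptions, not the thesis's multivariate GLM machinery):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 subjects x 100 "voxels"; group B has a real effect at voxel 0.
n_a, n_b, n_vox = 10, 10, 100
a = rng.normal(0.0, 1.0, (n_a, n_vox))
b = rng.normal(0.0, 1.0, (n_b, n_vox))
b[:, 0] += 2.5                      # planted group difference

data = np.vstack([a, b])
labels = np.array([0] * n_a + [1] * n_b)

def stat(d, lab):
    """Voxel-wise difference of group means (a simple test statistic)."""
    return d[lab == 1].mean(axis=0) - d[lab == 0].mean(axis=0)

observed = stat(data, labels)

# Null distribution of the maximum statistic across voxels: comparing each
# observed value against it controls the family-wise error rate.
n_perm = 2000
max_null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(labels)
    max_null[i] = stat(data, perm).max()

p_corrected = (max_null[None, :] >= observed[:, None]).mean(axis=1)
print(p_corrected[0])   # small: the planted effect survives correction
```

The multivariate case replaces the difference of means with a GLM-derived statistic, but the max-statistic logic for family-wise error control is the same.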

    Combining global and local information for the segmentation of MR images of the brain

    Magnetic resonance imaging can provide high-resolution volumetric images of the brain with exceptional soft tissue contrast. These factors allow the complex structure of the brain to be clearly visualised. This has led to the development of quantitative methods to analyse neuroanatomical structures. In turn, this has promoted the use of computational methods to automate and improve these techniques. This thesis investigates methods to accurately segment MR images of the brain. The use of global and local image information is considered, where global information includes image intensity distributions, means, and variances, and local information is based on the relationship between spatially neighbouring voxels. Methods are explored that aim to improve the classification and segmentation of MR images of the brain by combining these elements. Some common artefacts in MR brain images can be seriously detrimental to image analysis methods. Methods to correct for these artefacts are assessed by exploring their effect, first with some well-established classification methods and then with methods that combine global information with local information in the form of a Markov random field model. Another characteristic of MR images is the partial volume effect, which occurs where signals from different tissues become mixed over the finite volume of a voxel. This effect is demonstrated and quantified using a simulation. Analysis methods that address these issues are tested on simulated and real MR images. They are also applied to study the structure of the temporal lobes in a group of patients with temporal lobe epilepsy. The results emphasise the benefits and limitations of applying these methods to a problem of this nature. The work in this thesis demonstrates the advantages of using global and local information together in the segmentation of MR brain images and proposes a generalised framework that allows this information to be combined in a flexible way.
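The partial volume effect described above can be simulated by mixing two pure-tissue intensities according to the fraction of each tissue inside a voxel. A minimal sketch (the tissue means and noise level are illustrative assumptions, not values from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative pure-tissue mean intensities (arbitrary units).
gm, wm = 100.0, 160.0

# Fraction of WM inside each boundary voxel, swept from pure GM to pure WM.
frac = np.linspace(0.0, 1.0, 11)

# A partial-volume voxel records the volume-weighted mix of both signals,
# plus acquisition noise.
signal = (1.0 - frac) * gm + frac * wm
noisy = signal + rng.normal(0.0, 2.0, frac.shape)

# A 50/50 voxel looks like neither tissue: its value sits between the class
# means, which is why hard classification fails at tissue boundaries.
print(signal[5])   # 130.0
```

This is the behaviour that motivates soft (fractional) tissue models over hard voxel labels at boundaries.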

    Deep learning in food category recognition

    Integrating artificial intelligence with food category recognition has been a field of research interest for the past few decades. It is potentially one of the next steps in revolutionizing human interaction with food. The modern advent of big data and the development of data-oriented fields like deep learning have provided advancements in food category recognition. With increasing computational power and ever-larger food datasets, the approach’s full potential has yet to be realized. This survey provides an overview of methods that can be applied to various food category recognition tasks, including detecting type, ingredients, quality, and quantity. We survey the core components for constructing a machine learning system for food category recognition, including datasets, data augmentation, hand-crafted feature extraction, and machine learning algorithms. We place a particular focus on the field of deep learning, including the utilization of convolutional neural networks, transfer learning, and semi-supervised learning. We provide an overview of relevant studies to promote further developments in food category recognition for research and industrial applications. Funding: MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8).
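Data augmentation, one of the core components the survey covers, synthesises training variety by applying label-preserving transforms to each image. A minimal sketch of two common transforms (the image size, crop size, and batch size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng, crop=56):
    """Random horizontal flip plus random crop, two standard augmentations
    used when training image-recognition CNNs (sizes here are illustrative)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                        # horizontal flip
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)           # random crop origin
    left = rng.integers(0, w - crop + 1)
    return img[top:top + crop, left:left + crop]

# A stand-in 64x64 RGB "food image".
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
batch = np.stack([augment(img, rng) for _ in range(8)])
print(batch.shape)  # (8, 56, 56, 3)
```

Each pass through the training set then sees slightly different views of the same images, which is what reduces overfitting on small food datasets.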

    Imaging of the Breast

    Early detection of breast cancer combined with targeted therapy offers the best outcome for breast cancer patients. This volume deals with a wide range of new technical innovations for improving breast cancer detection, diagnosis, and therapy. There is a special focus on improvements in mammographic image quality, image analysis, magnetic resonance imaging of the breast, and molecular imaging. A chapter on targeted therapy explores the option of less radical postoperative therapy for women with early, screen-detected breast cancers.

    Hardware-accelerated algorithms in visual computing

    This thesis presents new parallel algorithms which accelerate computer vision methods through the use of graphics processors (GPUs), and evaluates them with respect to their speed, scalability, and the quality of their results. It covers the fields of homogeneous and anisotropic diffusion processes, diffusion-based image inpainting, optic flow, and halftoning. Along the way, it compares different solvers for homogeneous diffusion and presents a novel 'extended' box filter. Moreover, it suggests using the fast explicit diffusion scheme (FED) as an efficient and flexible solver for nonlinear and in particular anisotropic parabolic diffusion problems on graphics hardware. For elliptic diffusion-like processes, it recommends cascadic FED or fast Jacobi schemes. The presented optic flow algorithm is one of the fastest yet highly accurate techniques. Finally, it presents a novel halftoning scheme which yields state-of-the-art results for many applications in image processing and computer graphics.
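The explicit scheme that FED builds on updates each pixel by a discrete Laplacian; FED accelerates it by running cycles of such steps with specially chosen varying step sizes. A minimal sketch of a single stable explicit step for homogeneous diffusion (a plain CPU/NumPy illustration of the numerical scheme, not the thesis's GPU implementation):

```python
import numpy as np

def explicit_diffusion_step(u, tau=0.25):
    """One explicit homogeneous diffusion step u += tau * Laplacian(u) with
    reflecting (Neumann) boundaries; tau <= 0.25 keeps the 2-D scheme stable."""
    p = np.pad(u, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return u + tau * lap

rng = np.random.default_rng(0)
u0 = rng.random((32, 32))
u = u0.copy()
for _ in range(100):
    u = explicit_diffusion_step(u)

# Diffusion with reflecting boundaries preserves the mean gray value while
# smoothing away variation.
print(round(float(u.var()), 4))
```

Because every pixel update is independent, exactly this stencil structure maps well onto GPUs, which is what makes explicit schemes like FED attractive on graphics hardware.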

    Adaptive Nonlocal Signal Restoration and Enhancement Techniques for High-Dimensional Data

    The large number of practical applications involving digital images has motivated significant interest in restoration solutions that improve the visual quality of the data in the presence of various acquisition and compression artifacts. Digital images are the results of an acquisition process based on the measurement of a physical quantity of interest incident upon an imaging sensor over a specified period of time. The quantity of interest depends on the targeted imaging application. Common imaging sensors measure the number of photons impinging over a dense grid of photodetectors in order to produce an image similar to what is perceived by the human visual system. Different applications focus on the part of the electromagnetic spectrum not visible to the human visual system, and thus require different sensing technologies to form the image. In all cases, even with the advance of technology, raw data is invariably affected by a variety of inherent and external disturbing factors, such as the stochastic nature of the measurement processes or challenging sensing conditions, which may cause, e.g., noise, blur, geometrical distortion, and color aberration. In this thesis we introduce two filtering frameworks for video and volumetric data restoration based on the BM3D grouping and collaborative filtering paradigm. In its general form, the BM3D paradigm leverages the correlation present within a nonlocal "group" composed of mutually similar basic filtering elements, e.g., patches, to attain an enhanced sparse representation of the group in a suitable transform domain, where the energy of the meaningful part of the signal can be separated from that of the noise through coefficient shrinkage. We argue that the success of this approach largely depends on the form of the basic filtering elements used, which in turn defines the subsequent spectral representation of the nonlocal group.
    Thus, the main contribution of this thesis consists in tailoring specific basic filtering elements to the inherent characteristics of the processed data at hand. Specifically, we embed the local spatial correlation present in volumetric data through 3-D cubes, and the local spatial and temporal correlation present in videos through 3-D spatiotemporal volumes, i.e. sequences of 2-D blocks following a motion trajectory. The foundational aspect of this work is the analysis of the particular spectral representation of these elements. Specifically, our frameworks stack mutually similar 3-D patches along an additional fourth dimension, thus forming a 4-D data structure. By doing so, an effective group spectral description can be formed, as the phenomena acting along different dimensions in the data can be precisely localized along different spectral hyperplanes, and thus different shrinkage strategies can be applied to different spectral coefficients to achieve the desired filtering results. This constitutes a decisive difference from the shrinkage traditionally employed in BM3D algorithms, where different hyperplanes of the group spectrum are shrunk subject to the same degradation model. Different image processing problems rely on different observation models and typically require specific algorithms to filter the corrupted data. As a further contribution of this thesis, we show that our high-dimensional filtering model makes it possible to target heterogeneous noise models, e.g., those characterized by spatial and temporal correlation, signal-dependent distributions, spatially varying statistics, and non-white power spectral densities, without essential modifications to the algorithm structure. As a result, we develop state-of-the-art methods for a variety of fundamental image processing problems, such as denoising, deblocking, enhancement, deflickering, and reconstruction, which also find practical applications in consumer, medical, and thermal imaging.
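The grouping-and-shrinkage idea can be sketched in one dimension: stack mutually similar noisy patches along a new axis, transform across both axes so that their similarity concentrates signal energy into a few coefficients, hard-threshold the rest, and invert. This is only a schematic analogue (the FFT, threshold rule, and synthetic data are illustrative assumptions; BM3D-family methods use different transforms and shrinkage rules):

```python
import numpy as np

rng = np.random.default_rng(0)

# A clean 1-D patch and a "group" of mutually similar noisy observations
# stacked along an additional axis, as in the grouping step.
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False))
sigma = 0.3
group = np.stack([clean + rng.normal(0.0, sigma, clean.size) for _ in range(8)])

# Collaborative filtering: transform across BOTH axes, so the similarity along
# the stacking axis concentrates the signal into a few large coefficients.
spec = np.fft.fft2(group)
thr = 3.0 * sigma * np.sqrt(group.size)     # ~3x the noise coefficient std
spec[np.abs(spec) < thr] = 0.0              # hard-threshold shrinkage
denoised = np.real(np.fft.ifft2(spec))

err_noisy = np.mean((group - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_noisy, err_denoised)
```

Because the signal coefficients sit far above the noise floor while noise spreads evenly over the spectrum, thresholding removes most of the noise energy; the thesis's contribution is choosing the patch geometry and per-hyperplane shrinkage so this separation holds for 4-D groups under heterogeneous noise models.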