
    Tumor Segmentation and Classification Using Machine Learning Approaches

    Medical image processing has recently developed progressively in both methodologies and applications, increasing its serviceability in health care management. Modern medical image processing employs various methods to diagnose tumors, driven by burgeoning demand in the related industry. This study uses the PG-DBCWMF, the HV region method, and CTSIFT extraction to identify brain tumors together with pancreatic tumors. In terms of efficiency, precision, creativity, and other factors, these strategies offer improved performance in therapeutic settings. The suggested method combines the three techniques: PG-DBCWMF, the HV region algorithm, and CTSIFT extraction. The PG-DBCWMF (Patch Group Decision Couple Window Median Filter) works well in the preprocessing stage and eliminates noise. The HV region technique precisely calculates the vertical and horizontal angles of the known images. CTSIFT is a feature extraction method that recognizes the affected area of tumor images. The experimental evaluation used brain tumor and pancreatic tumor databases, on which the method produces the best PSNR, MSE, and other results.
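    The median-filter preprocessing step can be illustrated in miniature. The sketch below is a plain sliding-window median filter in Python, not the PG-DBCWMF patch-group variant described in the abstract; the function name and window size are illustrative assumptions:

```python
import numpy as np

def median_filter2d(img, k=3):
    # Pad with edge values so the output keeps the input size,
    # then replace each pixel by the median of its k x k neighbourhood
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single impulse ("salt") pixel on a flat patch is removed entirely
img = np.zeros((5, 5))
img[2, 2] = 255.0
clean = median_filter2d(img)
```

    Because the median discards outliers rather than averaging them in, impulse noise vanishes without blurring flat regions, which is why median-type filters are common in this preprocessing role.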

    Improved Wavelet Threshold for Image De-noising

    With the development of communication and network technology, and the rising popularity of digital electronic products, images have become an important carrier of access to outside information. However, images are vulnerable to noise interference during collection, transmission and storage, which decreases image quality. Therefore, image noise reduction is necessary to obtain higher-quality images. Owing to its multi-resolution analysis, decorrelation, low entropy, and flexible choice of bases, the wavelet transform has become a powerful tool in the field of image de-noising, and it has developed rapidly in applied mathematics. De-noising methods based on the wavelet transform have been proposed and have achieved good results, but shortcomings remain. Traditional threshold functions have deficiencies in image de-noising: a hard threshold function is discontinuous, whereas a soft threshold function causes a constant deviation. To address these shortcomings, this paper proposes a method for removing image noise. First, the method decomposes the noisy image to obtain the wavelet coefficients. Second, the improved threshold function is applied to the high-frequency coefficients. Finally, the de-noised image is reconstructed from the estimated wavelet coefficients. Experimental results show that the proposed method is better than traditional hard-threshold and soft-threshold de-noising, in terms of both objective measures and subjective visual effects.
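    The hard/soft threshold distinction the abstract refers to is standard and can be sketched directly (the coefficient values and threshold here are illustrative):

```python
import numpy as np

def hard_threshold(w, t):
    # Keep coefficients whose magnitude exceeds t; zero the rest.
    # Discontinuous at |w| = t, which can cause artifacts.
    return w * (np.abs(w) > t)

def soft_threshold(w, t):
    # Shrink every surviving coefficient toward zero by t.
    # Continuous, but biases large coefficients by a constant t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([-3.0, -0.5, 0.2, 1.5])  # toy wavelet coefficients
h = hard_threshold(w, 1.0)
s = soft_threshold(w, 1.0)
```

    Improved threshold functions of the kind proposed in the paper typically interpolate between these two behaviours: continuous like the soft rule, but with vanishing bias for large coefficients like the hard rule.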

    Integrated Modelling Approach for Enhancing Brain MRI with Flexible Pre-Processing Capability

    Assuring the information quality of the input medical image is a critical step in offering highly precise and reliable diagnosis of clinical conditions in humans. Such assurance becomes even more important when dealing with an important organ like the brain. Magnetic Resonance Imaging (MRI) is one of the most trusted mediums for investigating the brain. Looking at existing trends in brain MRI research, it is observed that researchers are more inclined to investigate advanced problems, e.g. segmentation, localization, and classification, on image datasets. Less work has been carried out on image pre-processing, which potentially affects the later diagnostic stages. Therefore, this paper introduces a novel integrated image enhancement model that is capable of solving different and discrete pre-processing problems in order to offer a highly improved and enhanced brain MRI. The comparative outcomes exhibit the advantage of its simple implementation strategy.
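    The paper does not specify its enhancement pipeline here, but one common pre-processing step of the kind it targets is contrast enhancement. As a hedged illustration only (not the paper's algorithm), histogram equalization can be written in a few lines of numpy:

```python
import numpy as np

def hist_equalize(img, levels=256):
    # Map each grey level through the normalised cumulative histogram,
    # spreading a narrow intensity range across the full dynamic range
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

# A low-contrast toy "scan" occupying only levels 100-103
img = np.array([[100, 101], [102, 103]], dtype=np.uint8)
out = hist_equalize(img)
```

    After equalization the four grey levels are spread over most of the 0-255 range, which is the effect an enhancement stage aims for before segmentation or classification.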

    Quantitative diffusion MRI with application to multiple sclerosis

    Diffusion MRI (dMRI) is a uniquely non-invasive probe of biological tissue properties, increasingly able to provide access to ever more intricate structural and microstructural tissue information. Imaging biomarkers that reveal pathological alterations can help advance our knowledge of complex neurological disorders such as multiple sclerosis (MS), but depend on both high quality image data and robust post-processing pipelines. The overarching aim of this thesis was to develop methods to improve the characterisation of brain tissue structure and microstructure using dMRI. Two distinct avenues were explored. In the first approach, network science and graph theory were used to identify core human brain networks with improved sensitivity to subtle pathological damage. A novel consensus subnetwork was derived using graph partitioning techniques to select nodes based on independent measures of centrality, and was better able to explain cognitive impairment in relapsing-remitting MS patients than either full brain or default mode networks. The influence of edge weighting scheme on graph characteristics was explored in a separate study, which contributes to the connectomics field by demonstrating how study outcomes can be affected by an aspect of network design often overlooked. The second avenue investigated the influence of image artefacts and noise on the accuracy and precision of microstructural tissue parameters. Correction methods for the echo planar imaging (EPI) Nyquist ghost artefact were systematically evaluated for the first time in high b-value dMRI, and the outcomes were used to develop a new 2D phase-corrected reconstruction framework with simultaneous channel-wise noise reduction appropriate for dMRI. The technique was demonstrated to alleviate biases associated with Nyquist ghosting and image noise in dMRI biomarkers, but has broader applications in other imaging protocols that utilise the EPI readout. 
I truly hope the research in this thesis will influence and inspire future work in the wider MR community
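    The centrality-based node selection underlying the consensus subnetwork can be sketched on a toy graph. This uses plain degree strength on an arbitrary 5-node adjacency matrix, purely as an assumption for illustration; the thesis combines several independent centrality measures with graph partitioning:

```python
import numpy as np

# Toy symmetric adjacency matrix for a 5-node network (no self-loops);
# node 0 is deliberately wired as a hub
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

strength = A.sum(axis=1)               # degree/strength centrality per node
core = np.argsort(strength)[::-1][:3]  # keep the 3 most central nodes
subnet = A[np.ix_(core, core)]         # adjacency of the core subnetwork
```

    Restricting analysis to such a core subnetwork is the basic mechanism by which sensitivity to subtle, centrally located damage can be improved over whole-brain networks.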

    Efficient reconfigurable architectures for 3D medical image compression

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US), has generated a massive amount of volumetric data. This has provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In these fields, medical image compression is important, since both efficient storage and transmission of data through high-bandwidth digital communication lines are of crucial importance. Despite their advantages, most 3-D medical imaging algorithms are computationally intensive, with matrix transformation as the most fundamental operation involved in the transform-based methods. Therefore, there is a real need for high-performance systems, whilst keeping architectures flexible to allow for quick upgradeability with real-time applications. Moreover, in order to obtain efficient solutions for large volumes of medical data, an efficient implementation of these operations is of significant importance. Reconfigurable hardware, in the form of field programmable gate arrays (FPGAs), has been proposed as a viable system building block in the construction of high-performance systems at an economical price. Consequently, FPGAs seem an ideal candidate to harness and exploit their inherent advantages, such as massive parallelism capabilities, multimillion gate counts, and special low-power packages. The key achievements of the work presented in this thesis are summarised as follows. Two architectures for the 3-D Haar wavelet transform (HWT) have been proposed, based on transpose-based computation and partial reconfiguration, suitable for 3-D medical imaging applications. These applications require continuous hardware servicing, and as a result dynamic partial reconfiguration (DPR) has been introduced.
    A comparative study of both non-partial and partial reconfiguration implementations has shown that DPR offers many advantages and leads to a compelling solution for implementing computationally intensive applications such as 3-D medical image compression. Using DPR, several large systems are mapped to small hardware resources, and the area, power consumption and maximum frequency are optimised and improved. Moreover, an FPGA-based architecture of the finite Radon transform (FRAT) with three design strategies has been proposed: direct implementation of pseudo-code with a sequential or pipelined description, and a block random access memory (BRAM)-based method. An analysis with various medical imaging modalities has been carried out. Results obtained for an image de-noising implementation using FRAT exhibit promising performance in reducing Gaussian white noise in medical images. In terms of hardware implementation, promising trade-offs between maximum frequency, throughput and area are also achieved. Furthermore, a novel hardware implementation of a 3-D medical image compression system with context-based adaptive variable length coding (CAVLC) has been proposed. An evaluation of the 3-D integer transform (IT) and the discrete wavelet transform (DWT) with lifting scheme (LS) for the transform blocks reveals that the 3-D IT has better computational complexity than the 3-D DWT, whilst the 3-D DWT with LS offers lossless compression that is significantly useful for medical image compression. Additionally, an architecture of CAVLC that is capable of compressing high-definition (HD) images in real time without any buffer between the quantiser and the entropy coder is proposed. Through judicious parallelisation, promising results have been obtained with limited resources. In summary, this research tackles the issues of massive 3-D medical volume data that require compression, as well as hardware implementation to accelerate the slowest operations in the system.
    Results obtained also reveal a significant achievement in terms of architecture efficiency and application performance. This work was supported by the Ministry of Higher Education Malaysia (MOHE), Universiti Tun Hussein Onn Malaysia (UTHM) and the British Council.
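    The core operation behind the 3-D HWT architectures, one level of the Haar transform, is small enough to sketch in software (the toy signal is illustrative; the separable 3-D version applies the same step along each axis in turn):

```python
import numpy as np

def haar_1d(x):
    # One level of the orthonormal Haar transform: pairwise scaled
    # averages (approximation band) and differences (detail band)
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

x = np.array([4.0, 2.0, 6.0, 6.0])
avg, det = haar_1d(x)
```

    The transform is orthonormal, so signal energy is preserved across the two bands; identical neighbouring samples produce zero detail coefficients, which is what makes the HWT attractive for compressing smooth volumetric data.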

    Adaptive nonlocal and structured sparse signal modeling and applications

    Features based on sparse representation, especially using the synthesis dictionary model, have been heavily exploited in signal processing and computer vision. Many applications such as image and video denoising, inpainting, demosaicing, super-resolution, magnetic resonance imaging (MRI), and computed tomography (CT) reconstruction have been shown to benefit from adaptive sparse signal modeling. However, synthesis dictionary learning typically involves expensive sparse coding and learning steps. Recently, sparsifying transform learning received interest for its cheap computation and its optimal updates in the alternating algorithms. Prior works on transform learning have certain limitations, including (1) limited model richness and structure for handling diverse data, (2) lack of non-local structure, and (3) lack of effective extension to high-dimensional or streaming data. This dissertation focuses on advanced data-driven sparse modeling techniques, especially with nonlocal and structured sparse signal modeling. In the first work of this dissertation, we propose a methodology for learning, dubbed Flipping and Rotation Invariant Sparsifying Transforms (FRIST), to better represent natural images that contain textures with various geometrical directions. The proposed alternating FRIST learning algorithm involves efficient optimal updates. We provide a convergence guarantee, and demonstrate the empirical convergence behavior of the proposed FRIST learning approach. Preliminary experiments show the promising performance of FRIST learning for image sparse representation, segmentation, denoising, robust inpainting, and compressed sensing-based magnetic resonance image reconstruction. Next, we present an online high-dimensional sparsifying transform learning method for spatio-temporal data, and demonstrate its usefulness with a novel video denoising framework, dubbed VIDOSAT. 
    The proposed method is based on our previous work on online sparsifying transform learning, which has low computational and memory costs and can potentially handle streaming video. Combined with a block matching (BM) technique, the learned model can effectively adapt to video data with various motions. The patches are constructed either from corresponding 2D patches in successive frames or using an online block matching technique. The proposed online video denoising requires little memory and offers efficient processing. Numerical experiments analyze the contribution of the various components of the proposed video denoising scheme by "switching off" these components - for example, fixing the transform to be the 3D DCT rather than a learned transform. Other experiments compare the proposed methods to prior schemes, such as dictionary learning-based schemes and the state-of-the-art VBM3D and VBM4D, on several video data sets, demonstrating their promising performance. In the third part of the dissertation, we propose a joint sparse and low-rank model, dubbed STROLLR, to better represent natural images. Patch-based methods exploit local patch sparsity, whereas other works apply low-rankness of grouped patches to exploit non-local image structure. However, using either approach alone usually limits performance in image restoration applications. In order to fully utilize both the local and non-local image properties, we develop an image restoration framework using a transform learning scheme with joint low-rank regularization. The approach owes some of its computational efficiency and good performance to the use of transform learning for adaptive sparse representation, rather than the popular synthesis dictionary learning algorithms, which involve approximation of NP-hard sparse coding and expensive learning steps.
    We demonstrate the proposed framework in various applications, including image denoising, inpainting, and compressed sensing-based magnetic resonance imaging. Results show promising performance compared to state-of-the-art competing methods. Last, we extend the effective joint sparsity and low-rankness model from image to video applications. We propose a novel video denoising method based on an online tensor reconstruction scheme with a joint adaptive sparse and low-rank model, dubbed SALT. An efficient and unsupervised online unitary sparsifying transform learning method is introduced to impose adaptive sparsity on the fly. We develop an efficient 3D spatio-temporal data reconstruction framework based on the proposed online learning method, which exhibits low latency and can potentially handle streaming videos. To the best of our knowledge, this is the first work that combines adaptive sparsity and low-rankness for video denoising, and the first that solves the proposed problem in an online fashion. We demonstrate video denoising results on commonly used videos from public datasets. Numerical experiments show that the proposed video denoising method outperforms competing methods.
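    The low-rankness of grouped similar patches that STROLLR and SALT exploit can be illustrated with a truncated SVD on a toy patch matrix. The data, noise level, and rank below are illustrative assumptions; the actual methods solve a joint sparse and low-rank optimisation rather than a single truncation:

```python
import numpy as np

def lowrank_approx(M, r):
    # Best rank-r approximation in the Frobenius norm: truncated SVD
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
patches = np.outer(np.ones(8), np.arange(16.0))  # rank-1 group of similar patches
noisy = patches + 0.1 * rng.standard_normal(patches.shape)
denoised = lowrank_approx(noisy, r=1)
```

    Because the clean patch group is (near) low-rank while noise spreads across all singular directions, truncation suppresses most of the noise, the same intuition that motivates low-rank regularization of matched patch groups.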

    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent progress on basic GSP tools, including methods for sampling, filtering and graph learning. Next, we review progress in several application areas of GSP, including the processing and analysis of sensor network data and biological data, and applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. Comment: To appear, Proceedings of the IEEE
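    One of the basic GSP tools mentioned, graph filtering, can be shown minimally in the graph spectral domain (the 4-node path graph and the signal are illustrative choices):

```python
import numpy as np

# Path graph on 4 nodes; a graph signal assigns one value per vertex
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

# Laplacian eigenvectors define the graph Fourier basis;
# eigenvalues order the "graph frequencies" from smooth to oscillatory
evals, evecs = np.linalg.eigh(L)

x = np.array([1.0, -1.0, 1.0, -1.0])    # highly oscillatory graph signal
x_hat = evecs.T @ x                     # graph Fourier transform
x_hat[2:] = 0.0                         # low-pass: drop high graph frequencies
x_smooth = evecs @ x_hat                # inverse transform
```

    The Laplacian quadratic form x.T @ L @ x measures signal variation across edges, so the filtered signal is provably smoother on the graph; this generalises classical low-pass filtering to irregular domains.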

    Independent component analysis and source analysis of auditory evoked potentials for assessment of cochlear implant users

    Source analysis of the Auditory Evoked Potential (AEP) has previously been used to evaluate the maturation of the auditory system in both adults and children; in the same way, this technique can be applied to ongoing EEG recordings, in response to frequency-specific acoustic stimuli, from children with cochlear implants (CI). This is done in order to objectively assess the performance of this electronic device and the maturation of the child's hearing. However, these recordings are contaminated by an artifact produced by the normal operation of the CI; this artifact makes the detection and analysis of AEPs much harder and generates errors in the source analysis process. The artifact can be spatially filtered using Independent Component Analysis (ICA); in this research, three different ICA algorithms were compared in order to establish the algorithm best suited to removing the CI artifact. Additionally, we show that pre-processing the EEG recording with a temporal ICA algorithm facilitates not only the identification of the AEP peaks but also the source analysis procedure. From the results obtained in this research, and a limited dataset of CI versus normal-hearing recordings, it is possible to conclude that the AEP source locations change from the inferior temporal areas in the first 2 years after implantation to the superior temporal area after three years of CI use, close to the locations obtained in normal-hearing children. It is intended that the results of this research be used as an objective technique for a general evaluation of the performance of children with CIs.
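    The spatial-filtering idea, removing the CI artifact component after a decomposition, can be sketched with a synthetic mixture. Everything here is an illustrative assumption: the sources, the mixing matrix, and the premise that ICA has already identified the artifact as component 1; no actual ICA algorithm is run:

```python
import numpy as np

t = np.linspace(0, 1, 200)
neural = np.sin(2 * np.pi * 5 * t)               # stand-in AEP-like source
artifact = np.sign(np.sin(2 * np.pi * 40 * t))   # stand-in CI-artifact source
S = np.vstack([neural, artifact])                # sources x time
A = np.array([[1.0, 0.8],
              [0.6, 1.0],
              [0.9, 0.4]])                       # mixing: channels x sources
X = A @ S                                        # "recorded" 3-channel EEG

# Spatial filtering: zero the artifact component's mixing column and
# reconstruct the recording from the remaining components
A_clean = A.copy()
A_clean[:, 1] = 0.0
X_clean = A_clean @ S
```

    In practice ICA estimates both the unmixing matrix and the sources from X alone, and the artifact component is identified by its waveform and scalp topography before being projected out in exactly this way.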

    Network models in neuroimaging: a survey of multimodal applications

    Mapping brain structure and function is one of the hardest problems in science. Different imaging modalities, in particular those based on magnetic resonance imaging (MRI), can shed light on how the brain is organised and how its functions unfold, but a theoretical framework is needed. In recent years, using network models and graph theory to represent brain structure and function has become a major trend in neuroscience. In this review, we outline how network modelling has been used in neuroimaging, clarifying the underlying mathematical concepts and the consequent methodological choices. The major findings are then presented for structural, functional and multimodal applications. We conclude by outlining the current issues and the perspectives for the immediate future.