
    Securing DICOM images based on adaptive pixel thresholding approach

    This paper presents a novel, efficient two-region selective encryption approach that exploits the statistical properties of medical images to adaptively segment Digital Imaging and Communications in Medicine (DICOM) images into regions using a thresholding technique in the spatial domain. The approach uses adaptive pixel thresholding, in which thresholds for the same DICOM modality, anatomical part, and pixel-intensity range are extracted off-line. The extracted thresholds are then evaluated objectively and subjectively to select the most accurate threshold for the corresponding pixel-intensity range. In the on-line stage, DICOM images are segmented into a Region Of Interest (ROI) and a Region Of Background (ROB) based on their pixel intensities using the adopted thresholds. The ROI is then encrypted using the Advanced Encryption Standard (AES), while the ROB is encrypted using XXTEA. The main goal of the proposed approach is to reduce the encryption processing time overhead compared with the naïve approach, in which all image pixels are encrypted using AES, thereby achieving a trade-off between processing time and a high level of security. The encryption time of the proposed approach can save up to 60% of the naïve encryption time for DICOM images with a small-to-medium ROI.
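    The region split at the heart of the approach can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the threshold value, pixel list, and function name are invented for the example, and the AES/XXTEA encryption steps are only indicated in comments.

```python
# Toy sketch of the two-region split (hypothetical threshold value):
# pixels at or above the threshold form the ROI (strong cipher, e.g. AES);
# the remainder form the ROB (lighter-weight cipher, e.g. XXTEA).

def split_regions(pixels, threshold):
    """Partition a flat pixel list into ROI and ROB index lists."""
    roi = [i for i, p in enumerate(pixels) if p >= threshold]
    rob = [i for i, p in enumerate(pixels) if p < threshold]
    return roi, rob

image = [12, 240, 5, 200, 180, 30, 255, 8]   # toy 8-pixel "image"
roi, rob = split_regions(image, threshold=100)
# roi holds the diagnostically relevant pixel indices; rob the background
```

    In the paper's pipeline the threshold itself is selected off-line per modality and anatomy; here it is simply hard-coded for illustration.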

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application in which information must be extracted from images. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing are of vital importance. This is evident from the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
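    The entropy measure that gives the Special Issue its name is usually Shannon entropy computed over an image's intensity histogram. The following toy example is not drawn from any article in the issue; it simply shows the quantity being discussed.

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (in bits) of a pixel-intensity distribution."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two equally likely intensities -> 1 bit; a constant image -> 0 bits.
flat = [0, 0, 0, 0, 255, 255, 255, 255]
entropy_bits = shannon_entropy(flat)
```

    Higher entropy indicates a more uniform intensity distribution, which is why entropy serves both as a texture descriptor in medical imaging and as a quality measure for encrypted images.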

    Computer integrated system: medical imaging & visualization

    The intent behind this book’s conception is to present research work using a user-centred design approach. Due to space constraints, the story of the journey included in this book is relatively brief. However, we believe that it adequately represents that journey, from its humble beginnings in 2008 to the point where it visualizes future trends amongst both researchers and practitioners across the Computer Science and Medical disciplines. This book aims not only to present a representative sampling of real-world collaboration between these disciplines but also to provide insights into the different aspects of using real-world Computer Assisted Medical applications. Readers and potential clients should find the information particularly useful in analyzing the benefits of collaboration between these two fields and the products of their institutions. The work discussed here is a compilation of the work of several PhD students under my supervision, who have since graduated and produced several publications in journals or conference proceedings. As their work has already been published, this book focuses on the research methodology, based on medical technology, used in their research. The research work presented in this book partially encompasses the work under the MOA for collaborative Research and Development in the field of Computer Assisted Surgery and Diagnostics pertaining to Thoracic and Cardiovascular Diseases between UPM, UKM and IJN, spanning five years beginning from 15 Feb 2013.

    A graph-based approach for the retrieval of multi-modality medical images

    Medical imaging has revolutionised modern medicine and is now an integral aspect of diagnosis and patient monitoring. The development of new imaging devices for a wide variety of clinical cases has spurred an increase in the data volume acquired in hospitals. These large data collections offer opportunities for search-based applications in evidence-based diagnosis, education, and biomedical research. However, conventional search methods that operate upon manual annotations are not feasible at this data volume. Content-based image retrieval (CBIR) is an image search technique that uses automatically derived visual features as search criteria and has demonstrable clinical benefits. However, very few studies have investigated the CBIR of multi-modality medical images, which are making a monumental impact in healthcare, e.g., combined positron emission tomography and computed tomography (PET-CT) for cancer diagnosis. In this thesis, we propose a new graph-based method for the CBIR of multi-modality medical images. We derive a graph representation that emphasises the spatial relationships between modalities by structurally constraining the graph based on image features, e.g., the spatial proximity of tumours and organs. We also introduce a graph similarity calculation algorithm that prioritises the relationships between tumours and related organs. To enable effective human interpretation of retrieved multi-modality images, we also present a user interface that displays graph abstractions alongside complex multi-modality images. Our results demonstrated that our method achieved high precision when retrieving images on the basis of tumour location within organs. The evaluation of our proposed UI design through user surveys revealed that it improved the ability of users to interpret and understand the similarity between retrieved PET-CT images. The work in this thesis advances the state-of-the-art by enabling a novel approach for the retrieval of multi-modality medical images.

    A Review on Brain Tumor Segmentation Based on Deep Learning Methods with Federated Learning Techniques

    Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results on computer vision problems such as image classification and segmentation. Brain tumor segmentation has become a prevalent task in medical imaging, determining tumor location, size, and shape using automated methods. Many researchers have explored various machine and deep learning approaches to find an optimal solution based on convolutional methods. In this review paper, we discuss the most effective segmentation techniques based on datasets that are widely used and publicly available. We also survey federated learning methodologies that enhance global segmentation performance while preserving privacy. A comprehensive literature review, based on a study of more than 100 papers, generalizes the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and a client-based federated model-training strategy. Based on this review, future researchers will be able to identify promising paths toward solving these issues.
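    The privacy argument for federated training rests on aggregating model parameters rather than patient scans. The canonical aggregation rule is FedAvg (weighted averaging by client dataset size); the sketch below is a generic illustration of that rule, not the review's specific client-based strategy, and the weights and sizes are made up.

```python
# Minimal FedAvg sketch: each hospital trains locally, then the server
# averages parameter vectors weighted by local dataset size, so raw
# scans never leave a site.

def federated_average(client_weights, client_sizes):
    """Size-weighted average of per-client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]

# Two hypothetical hospitals with different amounts of local data:
global_model = federated_average(
    client_weights=[[1.0, 0.0], [3.0, 2.0]],
    client_sizes=[100, 300],
)
# -> [2.5, 1.5]: the larger client contributes proportionally more
```

    Real segmentation networks have millions of parameters rather than two, but the aggregation step is the same elementwise average.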

    A Method for the Geometric Analysis of Rugose Coral Growth Ridges as Paleoenvironmental Indicators in the Middle Devonian Hungry Hollow Member of Widder Formation, Michigan Basin

    Skeletons of Devonian rugose corals feature submillimetre-scale growth ridges on their outer surface (epitheca) that record the successive positions of the coral polyp during longitudinal corallite growth. Specimens of the rugose corals Eridophyllum and Cystiphylloides from the Hungry Hollow Member of the Middle Devonian Widder Formation were sectioned longitudinally and imaged by SEM, and image processing techniques were applied to extract a line representing the epithecal surface. Local extrema found through peak detection allowed growth ridges to be represented as simplified triangles, so that geometric measurements (area, length) could be related to coral growth and analyzed with reference to possible paleoenvironmental cycles. This research has produced an objective method for the extraction of growth-ridge data from a two-dimensional coral slice, although slice location was found to influence results. Results show potential sub-monthly bundles of ~15-17 ridges, not previously observed, suggesting a lunar/tidal influence on coral growth.
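    The two computational steps named in the abstract, peak detection on the extracted epithecal line and triangle-based geometry, can be sketched as follows. The height profile and functions are invented illustrations, not the study's code.

```python
# Toy sketch: find local maxima on a 1-D epithecal height profile, then
# measure a simplified ridge triangle (e.g. peak plus flanking minima).

def local_maxima(profile):
    """Indices of strict local maxima in a 1-D height profile."""
    return [
        i for i in range(1, len(profile) - 1)
        if profile[i - 1] < profile[i] > profile[i + 1]
    ]

def triangle_area(x1, y1, x2, y2, x3, y3):
    """Area of a ridge triangle via the shoelace formula."""
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

heights = [0, 2, 1, 3, 0, 4, 1]   # toy epithecal height profile
peaks = local_maxima(heights)     # ridge crest positions
```

    On real SEM-derived profiles one would first smooth the trace and set a minimum prominence so that imaging noise is not counted as a ridge.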

    On the Application of PSpice for Localised Cloud Security

    The work reported in this thesis commenced with a review of methods for creating random binary sequences for encoding data locally by the client before storing it in the Cloud. The first method reviewed investigated evolutionary computing software which generated noise-producing functions from natural noise, a highly speculative, novel idea since noise is stochastic. Nevertheless, a function was created which generated noise to seed chaos oscillators producing random binary sequences, and this research led to a circuit-based one-time pad key chaos encoder for encrypting data. Circuit-based delay chaos oscillators, initialised with sampled electronic noise, were simulated in a linear circuit simulator called PSpice. Many simulation problems were encountered because of the nonlinear nature of chaos, but they were solved by creating new simulation parts, tools, and simulation paradigms. Simulation data from a range of chaos sources was exported and analysed using Lyapunov analysis, which identified two sources that produced one-time pad sequences with maximum entropy. This led to an encoding system which generated unlimited, unique, random one-time pad encryption keys of infinitely long period, matched to the plaintext data length. The keys were studied for maximum entropy and passed a suite of stringent, internationally accepted statistical tests for randomness. A prototype containing two delay chaos sources initialised by electronic noise was produced on a double-sided printed circuit board and generated more than 200 Mbits of OTPs. According to Vladimir Kotelnikov in 1941 and Claude Shannon in 1945, one-time pad sequences are theoretically perfect and unbreakable, provided specific rules are adhered to. Two other techniques for generating random binary sequences were researched: a new circuit element, the memristor, incorporated in a Chua chaos oscillator, and a fractional-order Lorenz chaos system with order less than three.
Quantum computing will present many problems for cryptographic system security when existing systems are upgraded in the near future. The only existing encoding system that will resist cryptanalysis by quantum computers is the unconditionally secure one-time pad.
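    The one-time pad encoding step itself is a simple XOR of the plaintext with an equal-length key, and decryption is the same operation. In the sketch below, `secrets.token_bytes` stands in for the thesis's chaos-derived key source; the function name and message are illustrative.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR the plaintext with a key of exactly the same length (OTP)."""
    assert len(key) == len(plaintext), "OTP key must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"DICOM"
key = secrets.token_bytes(len(message))   # stand-in for a chaos-derived key
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)  # XOR is its own inverse
```

    The unconditional security result holds only under the rules the abstract alludes to: the key must be truly random, at least as long as the message, kept secret, and never reused.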

    Denoising Low-Dose CT Images using Multi-frame techniques

    This study examines potential methods of reducing the X-ray radiation dose of Computed Tomography (CT) using multi-frame low-dose CT images. Even though a single-frame low-dose CT image is not very useful diagnostically due to excessive noise, we have found that by using multi-frame low-dose CT images we can denoise these images quite significantly at a lower radiation dose. We have proposed two approaches leveraging these multi-frame low-dose CT denoising techniques. In our first method, we proposed a blind source separation (BSS) based CT image denoising method using a multi-frame low-dose image sequence. Using the BSS technique, we estimated the independent image component and noise components from the image sequences. The extracted image component is then further denoised using a nonlocal groupwise denoiser named BM3D, which uses the mean standard deviation of the noise components. We have also proposed an extension of this method using a window-splitting technique. In our second method, we leveraged the power of deep learning to introduce a collaborative technique that trains multiple Noise2Noise generators simultaneously and learns the image representation from LDCT images. We presented three models using this Collaborative Network (CN) principle, employing two generators (CN2G), three generators (CN3G), and a hybrid of three generators (HCN3G) consisting of the BSS denoiser with one of the CN generators. The CN3G model showed better performance than the CN2G model in terms of denoised image quality, at the expense of an additional LDCT image. The HCN3G model combines the advantages of both by training three collaborative generators using only two LDCT images, leveraging our first proposed method based on blind source separation (BSS) and the block-matching 3-D (BM3D) filter. Using these multi-frame techniques, we can reduce the radiation dose quite significantly without losing significant image detail, especially in low-contrast areas.
Among all our methods, the HCN3G model performs best in terms of PSNR, SSIM, and material noise characteristics, while CN2G and CN3G perform better in terms of contrast difference. In the HCN3G model, we combined two of our methods in a single technique. In addition, we introduced the Collaborative Network (CN) and collaborative loss terms in the L2 loss calculation in our second method, which is a significant contribution of this research study.
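    The statistical intuition behind all multi-frame denoising is that averaging N independent noisy observations of the same pixel shrinks the noise standard deviation by a factor of sqrt(N). This simulation is a generic illustration of that principle, not the study's BSS or Noise2Noise pipeline; the sigma, frame count, and trial count are arbitrary.

```python
import random
import statistics

random.seed(42)

def averaged_pixel_std(n_frames, trials=2000, sigma=10.0):
    """Empirical std of a pixel averaged over n_frames noisy frames."""
    means = [
        statistics.fmean(random.gauss(0.0, sigma) for _ in range(n_frames))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

single_std = averaged_pixel_std(1)    # one low-dose frame: full noise
multi_std = averaged_pixel_std(16)    # 16 frames: roughly sigma / 4
```

    The learned methods in the thesis aim to beat this plain-averaging baseline, recovering detail from only two or three low-dose frames instead of sixteen.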