
    Invariant Scattering Transform for Medical Imaging

    The invariant scattering transform introduces a new area of research that merges signal processing with deep learning for computer vision. Deep learning algorithms can now solve a variety of problems in the medical sector. Medical images are used to detect diseases such as brain cancer or tumors, Alzheimer's disease, breast cancer, Parkinson's disease, and many others. During the pandemic back in 2020, machine learning and deep learning played a critical role in detecting COVID-19, including mutation analysis, prediction, diagnosis, and decision making. Medical images such as X-rays, magnetic resonance imaging (MRI), and CT scans are used to detect disease. The scattering transform is another deep learning method for medical imaging: it builds useful signal representations for image classification. It is a wavelet-based technique that is effective for medical image classification problems. This research article discusses the scattering transform as an efficient system for medical image analysis, in which features are computed by scattering the signal information through a deep convolutional network. A step-by-step case study is presented in this work.
    Comment: 11 pages, 8 figures and 1 table

    Invariant Scattering Transform for Medical Imaging

    Over the years, the Invariant Scattering Transform (IST) has become popular for medical image analysis; it computes wavelet transforms with Convolutional Neural Network (CNN)-style architectures to capture the scale and orientation of patterns in the input signal. IST aims to be invariant to transformations common in medical images, such as translation, rotation, scaling, and deformation, and is used to improve performance in medical imaging applications such as segmentation, classification, and registration; it can also be integrated into machine learning pipelines for disease detection, diagnosis, and treatment planning. Additionally, combining IST with deep learning approaches has the potential to leverage their complementary strengths and enhance medical image analysis outcomes. This study provides an overview of IST in medical imaging, covering the types of IST, their applications, limitations, and potential directions for future researchers and practitioners.
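The first-order structure described in these two abstracts (band-pass filtering, a modulus nonlinearity, then low-pass pooling for local translation invariance) can be sketched as follows. This is a minimal illustration, not the papers' implementation: it uses real difference-of-Gaussian filters in place of complex Morlet wavelets, and the scale parameters are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scattering_first_order(image, sigmas=(1.0, 2.0, 4.0), pool_sigma=8.0):
    """Zeroth- and first-order scattering sketch.

    S0 = low-pass(image); for each scale s,
    S1_s = low-pass(|band-pass_s(image)|).
    Real difference-of-Gaussian band-pass filters stand in for
    complex wavelets (a simplification for illustration).
    """
    coeffs = [gaussian_filter(image, pool_sigma)]          # S0
    for s in sigmas:
        band = gaussian_filter(image, s) - gaussian_filter(image, 2 * s)
        coeffs.append(gaussian_filter(np.abs(band), pool_sigma))  # S1
    return np.stack(coeffs)
```

The modulus before pooling is what lets the low-pass filter retain energy that plain linear smoothing would cancel out, which is the core idea behind the transform's stability to small deformations.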

    Computer-Assisted Algorithms for Ultrasound Imaging Systems

    Ultrasound imaging works on the principle of transmitting ultrasound waves into the body and reconstructing images of internal organs from the strength of the echoes. Ultrasound imaging is considered safe and economical and can image organs in real time, which makes it a widely used diagnostic imaging modality in health care. It covers a broad spectrum of medical diagnostics, including diagnosis of the kidney, liver, and pancreas, fetal monitoring, etc. Currently, diagnosis through ultrasound scanning is clinic-centered, and patients who need an ultrasound scan have to visit a hospital. The services of an ultrasound system are constrained to hospitals and have not translated to their potential in remote health-care and point-of-care diagnostics due to the high form factor, shortage of sonographers, low signal-to-noise ratio, high diagnostic subjectivity, etc. In this thesis, we address these issues with the objective of making ultrasound imaging more reliable for point-of-care and remote health-care applications. To achieve this goal, we propose (i) computer-assisted algorithms to improve diagnostic accuracy and assist semi-skilled persons in scanning, (ii) speckle suppression algorithms to improve the diagnostic quality of ultrasound images, (iii) a reliable telesonography framework to address the shortage of sonographers, and (iv) a programmable portable ultrasound scanner to operate in point-of-care and remote health-care applications.
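Point (ii) above concerns speckle suppression. The thesis does not specify its algorithm here, but a classic adaptive approach of the kind it builds on is the Lee filter, sketched below under assumed parameters (window size and noise variance are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=5, noise_var=0.05):
    """Classic Lee speckle filter sketch: blend the local mean with the
    raw pixel, weighted by local variance, so flat regions get smoothed
    while edges (high local variance) are preserved."""
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img * img, win)
    var = np.maximum(sq_mean - mean * mean, 0.0)
    gain = var / (var + noise_var)      # ~0 in flat areas, ~1 near edges
    return mean + gain * (img - mean)
```

On a noisy but structurally flat region the output variance drops markedly, which is the intended trade-off: speckle is averaged away where there is no anatomical detail to protect.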

    Single-pixel, single-photon three-dimensional imaging

    The 3D recovery of a scene is a crucial task with many real-life applications, such as self-driving vehicles, X-ray tomography, and virtual reality. The recent development of time-resolving detectors sensitive to single photons has allowed the recovery of 3D information at high frame rates with unprecedented capabilities. Combined with a timing system, single-photon-sensitive detectors enable 3D image recovery by measuring the time of flight (ToF) of the photons scattered back by the scene with millimetre depth resolution. Current ToF 3D imaging techniques rely on scanning detection systems or multi-pixel sensors. Here, we discuss an approach that simplifies the hardware of current ToF 3D imaging techniques by using a single-pixel, single-photon-sensitive detector and computational imaging algorithms. The 3D imaging approaches discussed in this thesis do not require mechanical moving parts, as standard Lidar systems do. The single-pixel detector reduces the pixel complexity to a single unit and offers several advantages in terms of size, flexibility, wavelength range, and cost. The experimental results demonstrate the 3D recovery of hidden scenes with sub-second acquisition times, also allowing real-time 3D recovery of non-line-of-sight scenes. We also introduce the concept of intelligent Lidar, a 3D imaging paradigm based solely on the temporal trace of the returning photons and a data-driven 3D retrieval algorithm.
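The basic ToF relation underlying the abstract is depth = c·t/2, where t is the measured round-trip time of a returning photon. A minimal sketch of range estimation from a single-pixel photon-timing histogram (bin width and histogram length are assumed values, not the thesis's setup):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_histogram(counts, bin_width_s):
    """Estimate range from a photon-timing histogram: take the peak
    time-of-flight bin (bin centre) and halve the round trip."""
    t_peak = (np.argmax(counts) + 0.5) * bin_width_s
    return C * t_peak / 2.0
```

With 100 ps bins the depth quantization is about 15 mm, which is why the millimetre resolutions quoted above require finer timing or sub-bin peak fitting rather than a bare argmax.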

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In the process of reading the present volume, the reader will appreciate the richness of their methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas
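The entropy in the volume's title is, in its simplest image-analysis form, the Shannon entropy of the grey-level histogram. A minimal sketch (8-bit grey levels assumed):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram: a global
    measure of how spread out the intensity distribution is."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())
```

A constant image scores 0 bits; an image with intensities spread uniformly over 256 levels approaches the 8-bit maximum, which is why entropy is also used as a quick check of contrast or of encryption quality in the image-security papers the volume mentions.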

    Detecting microcalcification clusters in digital mammograms: Study for inclusion into computer aided diagnostic prompting system

    Among the signs of breast cancer encountered in digital mammograms, radiologists point to microcalcification clusters (MCCs). Their detection is a challenging problem from both the medical and the image processing points of view. This work presents two concurrent methods for MCC detection and studies their possible inclusion in a computer-aided diagnostic prompting system. The first uses a Wavelet Domain Hidden Markov Tree (WHMT) to model microcalcification edges; the model differentiates between MC and non-MC edges based on weighted maximum likelihood (WML) values, and objects are classified using spatial filters. The second method employs the SUSAN edge detector in the spatial domain for mammogram segmentation; objects are classified as calcifications using another set of spatial filters and a feedforward neural network (NN). The same distance filter is employed in both methods to find true clusters. The two methods are analyzed on 54 image regions from mammograms selected randomly from the DDSM database, including benign and cancerous cases as well as cases that are hard from both the radiologists' and the computer's perspectives. WHMT/WML detects 98.15% true positive (TP) MCCs at a 1.85% false positive (FP) rate, whereas the SUSAN/NN method achieves 94.44% TP at the same 1.85% FP rate. The comparison of the two methods favors WHMT/WML for computer-aided diagnostic prompting. It also confirms the low false positive rates of both methods, meaning fewer biopsy tests per patient.
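The "distance filter" shared by both methods groups individual calcification detections into clusters. The paper does not give its exact rule, but a plausible sketch is transitive grouping of detections closer than a spacing threshold, keeping only groups large enough to count as a cluster (both thresholds below are illustrative, not the paper's values):

```python
import numpy as np

def cluster_detections(points, max_dist=10.0, min_size=3):
    """Group candidate microcalcification coordinates whose pairwise
    spacing is below max_dist (union-find), then keep groups with at
    least min_size members as true clusters."""
    points = np.asarray(points, float)
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= max_dist:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_size]
```

Isolated detections fall below min_size and are discarded, which is exactly how such a filter suppresses scattered false positives before prompting.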

    Characterization of breast tissues in density and effective atomic number basis via spectral X-ray computed tomography

    Differentiation of breast tissues is challenging in X-ray imaging because tissues may share similar, or even the same, linear attenuation coefficients μ. Spectral computed tomography (CT) allows for more quantitative characterization in terms of tissue density and effective atomic number by exploiting the energy dependence of μ. In this work, 5 mastectomy samples and a phantom with inserts mimicking breast soft tissues were evaluated in a retrospective study. The samples were imaged at three monochromatic energy levels in the range of 24-38 keV at 5 mGy per scan using a propagation-based phase-contrast setup at the SYRMEP beamline of the Italian national synchrotron Elettra. A custom-made algorithm incorporating CT reconstructions of an arbitrary number of spectral energy channels was developed to extract the density and effective atomic number of adipose, fibro-glandular, pure glandular, tumor, and skin tissue from regions selected by a radiologist. Preliminary results suggest that spectral CT can enhance tissue differentiation. Adipose, fibro-glandular, and tumorous tissues were found to have average effective atomic numbers of 5.94 ± 0.09, 7.03 ± 0.012, and 7.40 ± 0.10 and densities of 0.90 ± 0.02, 0.96 ± 0.02, and 1.07 ± 0.03 g/cm³, respectively, and can be better distinguished when both quantitative values are observed together.
    Comment: 26 pages, 7 figures, submitted to Physics in Medicine and Biology
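The exploitation of the energy dependence of μ can be illustrated with a textbook two-energy basis decomposition. This is not the paper's custom multi-channel algorithm, just a sketch under a strongly simplified model: a photoelectric-like term scaling as E⁻³ plus a roughly energy-flat Compton-like term, with the two energies taken from the paper's 24-38 keV range.

```python
import numpy as np

def decompose_two_energy(mu1, mu2, e1_keV, e2_keV):
    """Two-energy basis decomposition sketch.

    Model mu(E) = a * E**-3 (photoelectric-like) + b (Compton-like),
    then solve the resulting 2x2 linear system for the basis weights
    (a, b), which relate to effective atomic number and density.
    """
    A = np.array([[e1_keV ** -3, 1.0],
                  [e2_keV ** -3, 1.0]])
    return np.linalg.solve(A, np.array([mu1, mu2]))
```

With measurements at more than two energies, as in the paper's three-channel setup, the same system becomes overdetermined and is solved by least squares, which is what makes the extra channels improve the density and Z estimates.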

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering biometrics management policies, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.