
    An Evolved Wavelet Library Based on Genetic Algorithm

    As the size of captured images increases, there is a need for robust image compression algorithms that meet the bandwidth limitations of transmission channels and preserve image resolution without considerable loss of quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel; quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA): one for the whole image except the edge areas, and one for the portions near the edges (i.e., global and local filters). Images are first separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As the GA may settle in a local maximum, a new shuffling operator is introduced to prevent this. The GA used to evolve the filter coefficients primarily focuses on maximizing the peak signal-to-noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform existing methods by 0.31 dB in average PSNR and 0.39 dB in maximum PSNR.
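The fitness criterion named in this abstract, PSNR, has a standard definition that can be sketched generically (pure Python; the 8-pixel example strips are hypothetical, not the authors' data):

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences (e.g. flattened 8-bit grayscale images)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Hypothetical 8-pixel strips, every pixel off by one gray level (MSE = 1)
orig = [52, 55, 61, 66, 70, 61, 64, 73]
rec  = [51, 56, 60, 67, 69, 62, 63, 74]
print(round(psnr(orig, rec), 2))     # prints 48.13
```

For scale, the 0.31 dB average gain reported above corresponds to roughly a 7% multiplicative reduction in mean squared error.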

    DIGITAL IMAGE PROCESSING FOR ULTRASONIC THERAPY AND TENDINOUS INJURY

    In this master's thesis, several digital image processing techniques are explored for their potential in evaluating brightness-mode (B-mode) ultrasound images. Many processing techniques are currently used during ultrasound visualization in cardiovascular applications, mammography, and three-dimensional ultrasound systems; however, approaches that aid the clinician in the diagnostic assessment of tendinous and ligamentous injuries are more limited. Consequently, the methods employed here aim to reduce dependence on clinician judgment alone in assessing the healing stage and mechanical properties of tendinous injuries. Initial work focused on using entropy in texture analysis to relate a tendon's appearance in an ultrasound image to its mechanical integrity. Confounding effects such as motion artifacts and user selection of the region of interest limited the applicability of small analysis regions, but general trends were observed when the entire visualized tendon or superficial background region was selected. Entropy calculations suggested a significant change in texture pattern for tendinous regions compared with the selected background regions. To reduce the impact of motion artifacts and the dependence of the texture analysis on manual identification of regions of interest, a Matlab® script was developed to isolate the tendinous regions of interest for further analysis. The segmentation methods employed relied on a moving-window Fourier transform to compare local parameters in the image to a predefined window of tendinous tissue.
Further assessment of each local region benefited from parameterization of the local window's properties, capturing mean pixel intensity, local variation in pixel intensity, and local directional consistency derived from the spatial-frequency patterns observed in the Fourier transforms, compared via the circular Earth Mover's Distance. Results of the segmentation algorithm indicated directional consistency within the tendinous regions, and changes in the speckle pattern were observed in the images derived from mean intensity and local pixel-intensity variation. However, non-tendinous regions were also identified by their directional consistency, limiting the applicability of the current process for isolating tendinous regions. Calculations of the circular Earth Mover's Distance improved slightly with the inclusion of temporal averaging and image registration, but still require improvement before clinical application can be realized.
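The entropy-based texture measure central to this thesis can be sketched as Shannon entropy over a region's gray-level histogram (a standard estimator; the thesis's exact formulation and windowing may differ):

```python
import math
from collections import Counter

def region_entropy(pixels):
    """Shannon entropy (bits) of the gray-level histogram of a region
    of interest: low for uniform echo patterns, high for busy speckle."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform_region  = [128] * 64        # featureless patch  -> 0 bits
speckled_region = list(range(64))   # 64 distinct levels -> 6 bits
```

A tendinous region with organized fiber texture would be expected to sit between these two extremes, which is what makes the measure usable as a relative indicator.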

    The Application of Tomographic Reconstruction Techniques to Ill-Conditioned Inverse Problems in Atmospheric Science and Biomedical Imaging

    A methodology is presented for creating tomographic reconstructions from various projection data, and the relevance of the results to applications in atmospheric science and biomedical imaging is analyzed. The fundamental differences between transform and iterative methods are described, and the properties of the imaging configurations are addressed. The presented results are particularly suited to highly ill-conditioned inverse problems in which the imaging data are restricted by poor angular coverage, limited detector arrays, or insufficient access to the imaging region. The class of reconstruction algorithms commonly used in sparse tomography, the algebraic reconstruction techniques, is presented, analyzed, and compared. These algorithms are iterative in nature, and their accuracy depends significantly on the initialization of the algorithm, the so-called initial guess. Considerable research was conducted into novel initialization techniques as a means of improving accuracy. The main body of this work comprises three smaller papers describing the application of the presented methods to atmospheric and medical imaging modalities. The first paper details the measurement of mesospheric airglow emissions at two camera sites operated by Utah State University. Reconstructions of vertical airglow emission profiles are presented, including three-dimensional models of the layer formed using a novel fanning technique. The second paper describes the application of the method to the imaging of polar mesospheric clouds (PMCs) by NASA's Aeronomy of Ice in the Mesosphere (AIM) satellite. The contrasting elements of straight-line and diffusive tomography are also discussed in the context of ill-conditioned imaging problems. A number of developing modalities in medical tomography use near-infrared light, which interacts strongly with biological tissue and produces significant optical scattering.
To perform tomography on the diffused signal, simulations describing the sporadic photon migration must be incorporated into the algorithm. The third paper presents a novel Monte Carlo technique derived from the optical-scattering solution for spheroidal particles designed to mimic mitochondria and deformed cell nuclei. Simulated results of optical diffusion are presented. The potential for improving existing imaging modalities through continued development of sparse tomography and optical-scattering methods is discussed.
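The algebraic reconstruction techniques discussed in this abstract iterate row-by-row projections; a minimal Kaczmarz-style sketch on a toy two-pixel "image" (the 2×2 ray matrix is purely illustrative) shows where the initial guess enters:

```python
def kaczmarz(A, b, x0, sweeps=50):
    """One classic member of the algebraic reconstruction family:
    cyclically project the current estimate onto the hyperplane of
    each ray equation A[i] . x = b[i]."""
    x = list(x0)  # the initial guess, whose choice the dissertation studies
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            scale = (bi - dot) / norm2
            x = [xi + scale * r for xi, r in zip(x, row)]
    return x

# Toy "projections": two rays through a 2-pixel image whose true values are (1, 2)
A = [[1.0, 1.0], [1.0, -1.0]]   # ray sums along two directions
b = [3.0, -1.0]
x = kaczmarz(A, b, x0=[0.0, 0.0])   # converges to the true image
```

With few rays and many pixels the system becomes underdetermined, and the iterate converges toward the solution nearest the initial guess, which is why initialization matters so much in sparse tomography.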

    Texture representation using wavelet filterbanks

    Texture analysis is a fundamental issue in image analysis and computer vision. While considerable research has been carried out in the texture analysis domain, problems relating to texture representation have been addressed only partially, and active research is continuing. The vast majority of algorithms for texture analysis make an explicit or implicit assumption that all images are captured under the same measurement conditions, such as orientation and illumination. These assumptions are often unrealistic in practical applications. This dissertation addresses the viewpoint-invariance problem in texture classification by introducing a rotated wavelet filterbank. The proposed filterbank, in conjunction with a standard wavelet filterbank, provides better freedom of orientation tuning for texture analysis. This allows one to obtain texture features that are invariant with respect to texture rotation and linear grayscale transformation. In this study, the energy estimates of channel outputs commonly used as texture features in texture classification are transformed into a set of viewpoint-invariant features. Texture properties that have a physical connection with human perception are taken into account in this transformation. Experiments were conducted using natural texture image sets that have been used to evaluate other successful approaches, in order to facilitate comparison; the proposed feature set outperformed previously published methods. A channel selection method is also proposed to minimize computational complexity and improve performance in a texture segmentation algorithm. Results demonstrating the validity of the approach are presented using experimental ultrasound tendon images.
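The "energy estimates of channel outputs" used as raw texture features can be sketched for a single 1-D channel, with Haar filters standing in for the dissertation's standard and rotated wavelet filterbanks (illustrative only):

```python
def channel_energy(signal, kernel):
    """Mean squared output ('energy estimate') of one filterbank
    channel -- the raw quantity that is later transformed into a
    viewpoint-invariant texture feature."""
    out = []
    for i in range(len(signal) - len(kernel) + 1):
        out.append(sum(k * s for k, s in zip(kernel, signal[i:i + len(kernel)])))
    return sum(v * v for v in out) / len(out)

# Haar analysis pair (a stand-in for the actual wavelet filterbanks)
lowpass  = [0.7071,  0.7071]
highpass = [0.7071, -0.7071]

smooth = [1, 1, 1, 1, 1, 1]      # flat texture: energy lands in the lowpass channel
edgy   = [1, -1, 1, -1, 1, -1]   # oscillating texture: highpass channel dominates
```

Rotation invariance is then obtained not from any single channel but by combining energies across orientation-tuned channels, which is the role of the rotated filterbank in the dissertation.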

    Machine learned boundary definitions for an expert's tracing assistant in image processing

    Department Head: Anton Willem Bohm. Includes bibliographical references (pages 178-184).
    Most image processing work addressing boundary definition tasks embeds the assumption that an edge in an image corresponds to the boundary of interest in the world. In straightforward imagery this is true; however, it is not always the case. There are images in which edges are indistinct or obscured, and these images can only be segmented by a human expert. The work in this dissertation addresses the range of imagery between those two extremes: straightforward images and those requiring human guidance for appropriate segmentation. By freeing systems of a priori edge definitions and building in a mechanism to learn the needed boundary definitions, systems can perform better and be more broadly applicable. This dissertation presents the construction of such a boundary-learning system and demonstrates the validity of this premise on real data. A framework was created in which expert-provided boundary exemplars are used to create training data, which in turn are used by a neural network to learn the task and replicate the expert's boundary-tracing behavior. This is the framework of the Expert's Tracing Assistant (ETA) system. For a representative set of nine structures in the Visible Human imagery, ETA was compared and contrasted with two state-of-the-art, user-guided methods: Intelligent Scissors (IS) and Active Contour Models (ACM). Each method was used to define a boundary, and the distances between these boundaries and an expert's ground truth were compared. Across independent trials there is natural variation in an expert's boundary tracing, and this degree of variation served as the benchmark against which the three methods were compared. For simple structural boundaries, all methods were equivalent. However, in more difficult cases, ETA replicated the expert's boundary significantly better than either IS or ACM.
In these cases, where the expert's judgement was most called into play to bound the structure, ACM and IS could not adapt to the boundary character used by the expert, while ETA could.
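Scoring a method against the expert's ground truth requires a distance between two traced boundaries; one common choice, sketched below, is the mean nearest-point distance (the dissertation's exact metric may differ):

```python
import math

def mean_boundary_distance(traced, truth):
    """Average, over points on the traced boundary, of the distance to
    the nearest ground-truth point -- a simple way to score a tracing
    method against an expert's boundary."""
    total = 0.0
    for tx, ty in traced:
        total += min(math.hypot(tx - gx, ty - gy) for gx, gy in truth)
    return total / len(traced)

truth  = [(0, 0), (1, 0), (2, 0), (3, 0)]   # expert's polyline (toy example)
traced = [(0, 1), (1, 1), (2, 1), (3, 1)]   # method's result, 1 pixel off everywhere
```

Comparing such scores against the expert's own trial-to-trial variation, as the dissertation does, tells you whether a method's error is within the noise floor of human tracing.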

    Diffusion Models for Medical Image Analysis: A Comprehensive Survey

    Denoising diffusion models, a class of generative models, have lately garnered immense interest across deep-learning problems. A diffusion probabilistic model defines a forward diffusion stage in which the input data are gradually perturbed over several steps by adding Gaussian noise, and then learns to reverse the diffusion process to retrieve the desired noise-free data from noisy samples. Diffusion models are widely appreciated for their strong mode coverage and the quality of their generated samples, despite their known computational burden. Capitalizing on advances in computer vision, the field of medical imaging has also seen growing interest in diffusion models. To help researchers navigate this profusion, this survey provides a comprehensive overview of diffusion models in medical image analysis. Specifically, we introduce the theoretical foundation and fundamental concepts behind diffusion models and the three generic diffusion modelling frameworks: diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We then provide a systematic taxonomy of diffusion models in the medical domain and propose a multi-perspective categorization based on application, imaging modality, organ of interest, and algorithm. To this end, we cover extensive applications of diffusion models in the medical domain. Furthermore, we highlight the practical use cases of selected approaches, discuss the limitations of diffusion models in the medical domain, and propose several directions to fulfil the demands of this field. Finally, we gather the surveyed studies with their available open-source implementations at https://github.com/amirhossein-kz/Awesome-Diffusion-Models-in-Medical-Imaging.
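The forward (noising) stage described here has a closed form per timestep; a minimal sketch, assuming a linear beta schedule (the 1e-4 to 0.02 range follows the common DDPM convention, not anything specific to this survey):

```python
import math
import random

def forward_diffuse(x0, t, alpha_bar):
    """Closed-form forward step of a diffusion probabilistic model:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I)."""
    a = alpha_bar[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * random.gauss(0.0, 1.0)
            for x in x0]

# Assumed linear beta schedule and its cumulative alpha products
T = 100
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]
alpha_bar, prod = [], 1.0
for b in betas:
    prod *= (1.0 - b)
    alpha_bar.append(prod)

noisy = forward_diffuse([0.5, -0.5], t=T - 1, alpha_bar=alpha_bar)  # mostly noise
```

The reverse process that the network learns simply inverts this chain step by step, which is what the survey's three frameworks formalize in different ways.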

    Wavelet-Neural Network Based Image Compression System for Colour Images

    Human beings use many kinds of images, such as medical, satellite, telescope, and painting images, as well as computer-generated graphics and animation. To use these images practically, image compression plays an essential role in transmission and storage. In this research, a wavelet-based image compression technique is used. Various wavelet filters are available, and the choice of filter has a considerable impact on compression performance: the filter that suits one image may not be the best for another. Image characteristics are expected to be usable parameters for selecting among the available wavelet filters. The main objective of this research is to develop an automatic wavelet-based colour image compression system using a neural network. The system should select the appropriate wavelet for image compression based on image features. To reach this goal, the study observes the cause-effect relation between image features and wavelet codec (compression-decompression) performance. The images are compressed with different families of wavelets, and statistical hypothesis testing with non-parametric tests is used to establish the cause-effect relation between image features and the wavelet codec performance measurements. The image features used are image gradients, namely the image activity measure (IAM) and the spatial frequency (SF) values of each colour component. The research also selects the most appropriate wavelet for colour image compression, based on these image features, using an artificial neural network (ANN). The IAM and SF values are used as the input, and the wavelet filters as the output (target) in network training. This research asserts that cause-effect relations exist between image features and the wavelet codec performance measurements.
Furthermore, the study reveals that the parameters investigated can be used to select appropriate wavelet filters. An automatic wavelet-based colour image compression system using a neural network is developed, and it gives considerably good results.
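The two input features named in this abstract, IAM and SF, have standard first-difference definitions; the sketch below uses common formulations for a single colour component, and the thesis's exact normalizations may differ:

```python
import math

def iam(img):
    """Image activity measure: mean absolute difference between
    horizontally and vertically adjacent pixels (one common definition)."""
    rows, cols = len(img), len(img[0])
    total = 0.0
    for r in range(rows):
        for c in range(cols - 1):
            total += abs(img[r][c] - img[r][c + 1])
    for r in range(rows - 1):
        for c in range(cols):
            total += abs(img[r][c] - img[r + 1][c])
    return total / (rows * cols)

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from row-wise and column-wise
    first differences of pixel intensity."""
    rows, cols = len(img), len(img[0])
    rf = sum((img[r][c] - img[r][c - 1]) ** 2
             for r in range(rows) for c in range(1, cols)) / (rows * cols)
    cf = sum((img[r][c] - img[r - 1][c]) ** 2
             for r in range(1, rows) for c in range(cols)) / (rows * cols)
    return math.sqrt(rf + cf)
```

Both measures are zero for a flat patch and grow with local intensity change, which is exactly the kind of gradient-style signal the thesis feeds into the ANN to pick a wavelet family.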

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, its primary goal being to expedite and improve the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningfully segregating 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, the framework exploits the information obtained by detecting edges inherent in the data: using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, together with the initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics into the final output segmentation. Experimental results, compared against published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we extend the methodology into a multi-resolution framework, demonstrated on color images.
    Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (magnetic resonance imaging / computed tomography) volumes.
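The initial, gradient-driven labeling stage described in this abstract can be sketched as thresholding a gradient map and flood-filling the low-gradient pixels (a scalar stand-in for the paper's vector gradient detector; the threshold value is hypothetical):

```python
from collections import deque

def gradient_magnitude(img):
    """Simple forward-difference gradient magnitude over a 2-D
    grayscale image given as a list of rows."""
    rows, cols = len(img), len(img[0])
    g = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            dx = img[r][c + 1] - img[r][c] if c + 1 < cols else 0
            dy = img[r + 1][c] - img[r][c] if r + 1 < rows else 0
            g[r][c] = (dx * dx + dy * dy) ** 0.5
    return g

def initial_regions(img, thresh=1.0):
    """Flood-fill connected groups of low-gradient pixels into an
    initial region map; high-gradient (edge) pixels stay unlabeled (0)
    for a later refinement stage, as in the abstract's pipeline."""
    g = gradient_magnitude(img)
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] or g[r][c] >= thresh:
                continue
            q = deque([(r, c)])
            labels[r][c] = next_label
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not labels[ny][nx] and g[ny][nx] < thresh):
                        labels[ny][nx] = next_label
                        q.append((ny, nx))
            next_label += 1
    return labels
```

On a toy image of two flat halves separated by an intensity step, this yields one label per half with the edge column left unlabeled, mirroring how the proposed framework defers edge pixels to dynamic segment generation and refinement.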