
    Performance Analysis of Set Partitioning in Hierarchical Trees (SPIHT) Algorithm for a Family of Wavelets Used in Color Image Compression

    Get PDF
    With the spurt in the amount of data (image, video, audio, speech, and text) available on the net, there is a huge demand for memory and bandwidth savings. This must be achieved while maintaining a quality and fidelity of the data acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among all image compression algorithms based on the wavelet transform and zero-tree quantization, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement and yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with the Daubechies, Coiflet, Symlet, Bi-orthogonal, Reverse Bi-orthogonal and Discrete Meyer wavelet types. The resulting image quality is measured objectively, using peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in image size is quantified by the compression ratio (CR).
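The two figures of merit the abstract names, PSNR and CR, are simple to compute. Below is a minimal NumPy sketch; the function names, toy image, and byte counts are our own illustration, not taken from the paper.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(uncompressed_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size."""
    return uncompressed_bytes / compressed_bytes

# Toy example: an 8-bit image and a slightly perturbed "reconstruction".
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
rec = np.clip(img + rng.normal(0.0, 2.0, size=img.shape), 0, 255)
print(psnr(img, rec))                    # around 42 dB for noise sigma = 2
print(compression_ratio(64 * 64, 512))   # 8.0
```

A higher PSNR means the reconstruction is closer to the original; a higher CR means a smaller compressed file, and the paper's experiments trade these off across wavelet families.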

    MathWeb: A Concurrent Image Analysis Tool Suite for Multi-spectral Data Fusion

    Get PDF
    This paper describes a preliminary approach to the fusion of multi-spectral image data for the analysis of cervical cancer. The long-term goal of this research is to define spectral signatures and automatically detect cancer cell structures. The approach combines a multi-spectral microscope with an image analysis tool suite, MathWeb. The tool suite incorporates a concurrent Principal Component Transform (PCT) that is used to fuse the multi-spectral data. This paper describes the general approach and the concurrent PCT algorithm. The algorithm is evaluated from both the perspective of image quality and performance scalability
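The Principal Component Transform used for fusion projects each pixel's spectral vector onto the dominant eigenvector of the band covariance matrix. A minimal single-threaded NumPy sketch (the concurrent implementation and any MathWeb API are not reproduced here; `pct_fuse` and the random test stack are our own illustration):

```python
import numpy as np

def pct_fuse(bands):
    """Fuse co-registered spectral bands (shape B x H x W) into one image
    by projecting every pixel's spectrum onto the first principal
    component of the B x B band covariance matrix."""
    b, h, w = bands.shape
    X = bands.reshape(b, -1).astype(float)   # one row per band
    X -= X.mean(axis=1, keepdims=True)       # centre each band
    cov = X @ X.T / (X.shape[1] - 1)         # B x B covariance
    _, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                     # dominant spectral direction
    return (pc1 @ X).reshape(h, w)

# Toy stack of 4 spectral bands of a 32 x 32 scene.
rng = np.random.default_rng(1)
stack = rng.normal(size=(4, 32, 32))
fused = pct_fuse(stack)
print(fused.shape)  # (32, 32)
```

The concurrency discussed in the paper would parallelize the covariance accumulation and the projection over image tiles; the transform itself is unchanged.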

    Integration of HeartSmart Kids into Clinical Practice: A Quality Improvement Project

    Get PDF
    Presented to the Faculty of the University of Alaska, Anchorage in partial fulfillment of requirements for the degree of MASTER OF SCIENCE, FAMILY PRACTICE NURSE. In 2009, the Centers for Medicare & Medicaid Services (CMS) established “Meaningful Use” regulations through an incentive program, as part of the American Recovery and Reinvestment Act of 2009 (Gance-Cleveland, Gilbert, Gilbert, Dandreaux, & Russell, 2014). Meaningful Use (MU) is tied to reimbursement and focuses on how the Electronic Health Record (EHR) is being used (Centers for Disease Control and Prevention, 2012). The goal of MU is to transform the use of the EHR from a documentation tool to a data reservoir that allows for meaningful reviews and interpretations of the quality of care (Gance-Cleveland et al., 2014). Contents: Project / Background / Significance / Review of Literature / Problem Overview / Problem Statement / Purpose / Design / Method / Plan Do Study Act (PDSA) / Ethical Considerations / Significance to Nursing / Dissemination / Conclusion

    Undersampling reconstruction in parallel and single coil imaging with COMPaS -- COnvolutional Magnetic Resonance Image Prior with Sparsity regularization

    Full text link
    Purpose: To propose COMPaS, a learning-free Convolutional Network that combines a Deep Image Prior (DIP) with transform-domain sparsity constraints to reconstruct undersampled Magnetic Resonance Imaging (MRI) data without previous training of the network. Methods: COMPaS uses a U-Net as a DIP for undersampled MR data in the image domain. Reconstruction is constrained by data fidelity to k-space measurements and by transform-domain sparsity, such as Total Variation (TV) or Wavelet transform sparsity. Two-dimensional MRI data from the public FastMRI dataset with Cartesian undersampling in the phase-encoding direction were reconstructed for acceleration rates (R) from R = 2 to R = 8 for single coil and multicoil data. Performance of the proposed architecture was compared to Parallel Imaging with Compressed Sensing (PICS). Results: COMPaS outperforms standard PICS algorithms by reducing ghosting artifacts and yielding higher quantitative reconstruction quality metrics in multicoil imaging settings and especially in single coil k-space reconstruction. Furthermore, COMPaS can reconstruct multicoil data without explicit knowledge of coil sensitivity profiles. Conclusion: COMPaS uses a training-free convolutional network as a DIP in MRI reconstruction and combines it with transform-domain sparsity regularization. It is a competitive algorithm for parallel imaging and a novel tool for accelerating single coil MRI. Comment: 13 pages, 8 figures, 2 tables
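The objective the Methods section describes — data fidelity to the acquired k-space lines plus a TV sparsity penalty — can be sketched in a few lines of NumPy. This is our own single-coil, magnitude-image simplification (function name, weight, and toy mask are assumptions, not the paper's implementation):

```python
import numpy as np

def compas_style_loss(image, kspace_meas, mask, tv_weight=1e-3):
    """K-space data fidelity on the sampled lines plus an anisotropic
    Total Variation penalty, in the spirit of the COMPaS objective."""
    kspace_pred = np.fft.fft2(image)
    fidelity = np.sum(np.abs(mask * (kspace_pred - kspace_meas)) ** 2)
    tv = (np.sum(np.abs(np.diff(image, axis=0))) +
          np.sum(np.abs(np.diff(image, axis=1))))
    return fidelity + tv_weight * tv

# R = 2 Cartesian mask: keep every other phase-encoding line.
n = 32
mask = np.zeros((n, n))
mask[::2, :] = 1.0
truth = np.outer(np.hanning(n), np.hanning(n))   # smooth toy phantom
kspace = mask * np.fft.fft2(truth)               # "acquired" data
print(compas_style_loss(truth, kspace, mask) >= 0.0)  # True
```

In the paper this loss is minimized over the weights of an untrained U-Net whose output is `image`, which is what makes the prior "learning-free".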

    Template matching method for the analysis of interstellar cloud structure

    Full text link
    The structure of the interstellar medium can be characterised at large scales in terms of its global statistics (e.g. power spectra) and at small scales by the properties of individual cores. Interest has been increasing in structures at intermediate scales, resulting in a number of methods being developed for the analysis of filamentary structures. We describe the application of the generic template-matching (TM) method to the analysis of maps. Our aim is to show that it provides a fast and still relatively robust way to identify elongated structures or other image features. We present the implementation of a TM algorithm for map analysis. The results are compared against the rolling Hough transform (RHT), one of the methods previously used to identify filamentary structures. We illustrate the method by applying it to Herschel surface brightness data. The performance of the TM method is found to be comparable to that of RHT, but TM appears to be more robust regarding the input parameters, for example those related to the selected spatial scales. Small modifications of TM enable one to target structures at different size and intensity levels. In addition to elongated features, we demonstrate the possibility of using TM to identify other types of structures as well. The TM method is a viable tool for data quality control, exploratory data analysis, and even quantitative analysis of structures in image data. Comment: 12 pages, accepted to A&
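At its core, template matching slides a template over the map and scores each position by normalized cross-correlation. A brute-force NumPy sketch (the paper's TM implementation, template shapes, and scale handling are not reproduced; this toy plants a patch in a random image and recovers its location):

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation; returns the score map
    and the (row, col) of the best-matching window's top-left corner."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    h, w = image.shape
    scores = np.full((h - th + 1, w - tw + 1), -1.0)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            win = image[r:r + th, c:c + tw]
            win = win - win.mean()
            denom = np.linalg.norm(win) * tn
            if denom > 0:
                scores[r, c] = float(np.sum(win * t) / denom)
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return scores, best

rng = np.random.default_rng(2)
img = rng.normal(size=(40, 40))
tpl = img[10:17, 22:29].copy()        # plant the template in the image
scores, best = match_template(img, tpl)
best_rc = tuple(map(int, best))
print(best_rc)  # (10, 22)
```

For filament detection one would use elongated templates at a range of orientations and keep, per pixel, the best-scoring orientation — which is where the comparison to the rolling Hough transform comes in.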

    Eyes in the sky, smart techs on the ground

    Get PDF
    Unmanned aerial systems (UAS) – or drone-based technologies – have the potential to transform smallholder farming and help increase crop production. As a tool of precision agriculture, UAS provide farmers with real-time, actionable data on their land, crops and livestock, and help maximise input efficiency, minimise environmental impacts, optimise produce quality, and minimise risks. CTA is assisting African start-ups in acquiring the capacity to deliver UAS services to smallholders, under the project Transforming Africa’s agriculture; eyes in the sky, smart techs on the ground.

    Strategic information quality utilizing the House of Quality

    Get PDF
    Living in the Age of Knowledge means living in search of innovation; that is, of quality information and high value-added knowledge that can lead companies and individuals to the spotlight in a highly competitive and globalized world. Information is considered to be the raw material for creating knowledge, which, in turn, adds value to organizations, promotes innovation and puts the spotlight on organizations. For this reason, the objective of this study is to use a tool to analyze the quality of organizational strategic information in two phases. In the first phase, data and sources will be assessed; in the second phase, strategic information and information guidance practices will be analyzed. Thereby, the study provides analyses of an organization a step in advance, so as to improve its processes and tools and truly transform its strategic information into competitive advantage.

    Image Denoising Using Digital Image Curvelet

    Get PDF
    Image reconstruction is one of the most important areas of image processing. As many scientific experiments result in datasets corrupted with noise, either because of the data acquisition process or because of environmental effects, denoising is a necessary first pre-processing step in analyzing such datasets. There are several different approaches to denoising images; despite similar visual effects, there are subtle differences between denoising, de-blurring, smoothing and restoration. Although the discrete wavelet transform (DWT) is a powerful tool in image processing, it has three serious disadvantages: shift sensitivity, poor directionality and lack of phase information. To overcome these disadvantages, a method is proposed based on the Curvelet transform, which has a very high degree of directional specificity. It provides approximate shift invariance and directionally selective filters while preserving the usual properties of perfect reconstruction and computational efficiency, with good, well-balanced frequency responses; these properties are lacking in the traditional wavelet transform. Curvelet reconstructions exhibit higher perceptual quality than Wavelet-based reconstructions, offering visually sharper images and, in particular, higher-quality recovery of edges and of faint linear and curvilinear features. The Curvelet reconstruction does not contain the quantity of disturbing artifacts along edges that we see in the Wavelet reconstruction. Digital implementations of newly developed multiscale representation systems, namely the Curvelet, Ridgelet and Contourlet transforms, are used for denoising the image. We apply these digital transforms to the problem of restoring an image from noisy data and compare our results with those obtained from well-established methods based on the thresholding of Wavelet coefficients.
Keywords: Curvelet Transform, Discrete Wavelet Transform, Ridgelet Transform, Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE)
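The Wavelet-coefficient-thresholding baseline the abstract compares against works the same way in every transform domain: transform, shrink the detail coefficients, invert. A minimal one-level Haar version in pure NumPy (our own illustration with a 1-D signal and a hand-picked threshold, standing in for the 2-D Curvelet/Ridgelet machinery):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(signal, threshold):
    """One-level orthonormal Haar DWT, soft-threshold the detail band,
    then invert. Signal length must be even."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2.0)  # approximation
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2.0)  # detail
    d = soft(d, threshold)
    out = np.empty(len(signal))
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, 256))
noisy = clean + rng.normal(0.0, 0.3, size=256)
den = haar_denoise(noisy, threshold=0.5)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

Curvelet and Ridgelet denoising replace the Haar analysis/synthesis pair with their own transforms; the shrinkage step is unchanged, which is why the comparison in the paper isolates the effect of the representation itself.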

    Undecimated Wavelet Transform for Word Embedded Semantic Marginal Autoencoder in Security improvement and Denoising different Languages

    Full text link
    By combining the undecimated wavelet transform with a Word Embedded Semantic Marginal Autoencoder (WESMA), this research study provides a novel strategy for improving security measures and denoising multiple languages. The incorporation of these strategies is intended to address the issues of robustness, privacy, and multilingualism in data processing applications. The undecimated wavelet transform is used as a feature extraction tool to identify prominent language patterns and structural qualities in the input data. By employing this transform, the proposed system can capture significant information while preserving the temporal and spatial links within the data. This improves security measures by increasing the system's ability to detect anomalies, discover hidden patterns, and distinguish between legitimate content and dangerous threats. The Word Embedded Semantic Marginal Autoencoder also functions as an intelligent framework for dimensionality and noise reduction. The autoencoder effectively learns the underlying semantics of the data and reduces noise components by exploiting word embeddings and semantic context. As a result, data quality and accuracy are increased in subsequent processing stages. The suggested methodology is tested using a diversified dataset that includes several languages and security scenarios. The experimental results show that the proposed approach is effective in attaining security enhancement and denoising capabilities across multiple languages. The system is robust in dealing with linguistic variances, producing consistent outcomes regardless of the language used. Furthermore, incorporating the undecimated wavelet transform considerably improves the system's ability to efficiently address complex security concerns.
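What distinguishes the undecimated (stationary) wavelet transform from the ordinary DWT is the absence of downsampling: both filter outputs keep the input length, so features stay aligned with the input and the transform is shift-invariant. A one-level Haar sketch (our own toy illustration; the paper's filters and the WESMA pipeline are not reproduced):

```python
import numpy as np

def undecimated_haar_level(x):
    """One undecimated Haar level: lowpass and highpass filtering with
    no downsampling, so both outputs keep the input length."""
    approx = np.convolve(x, [0.5, 0.5], mode="same")   # smooth trend
    detail = np.convolve(x, [0.5, -0.5], mode="same")  # local changes
    return approx, detail

x = np.array([1.0, 1.0, 4.0, 4.0, 1.0, 1.0])
a, d = undecimated_haar_level(x)
print(len(a) == len(x) and len(d) == len(x))  # True
```

The detail band peaks at the jumps in `x`, which is the kind of localized structural feature the study feeds to the autoencoder; deeper levels would use upsampled (à trous) filters.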