
    Image compression based on 2D Discrete Fourier Transform and matrix minimization algorithm

    In the present era of the internet and multimedia, image compression techniques are essential to improve image and video performance in terms of storage space, network bandwidth usage, and secure transmission. A number of image compression methods are available, with largely differing compression ratios and coding complexity. In this paper, we propose a new method for compressing high-resolution images based on the Discrete Fourier Transform (DFT) and the Matrix Minimization (MM) algorithm. The method transforms an image by the DFT, yielding its real and imaginary components. A quantization process is applied to both components independently, aiming to increase the number of high-frequency coefficients. The real component matrix is separated into Low Frequency Coefficients (LFC) and High Frequency Coefficients (HFC). Finally, the MM algorithm followed by arithmetic coding is applied to the LFC and HFC matrices. The decompression algorithm decodes the data in reverse order, using a sequential search algorithm to decode the data from the MM matrix. Thereafter, all decoded LFC and HFC values are combined into one matrix, followed by the inverse DFT. Results demonstrate that the proposed method yields compression ratios above 98% for structured light images with good image reconstruction, and that it compares favorably with the JPEG technique in terms of compression ratio and image quality.
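
    A minimal sketch of the compression front end in Python, assuming NumPy's FFT as the 2D DFT, a single uniform quantization step, and an 8x8 low-frequency corner as the LFC region; the three MM keys below are illustrative placeholders, not the paper's values:

```python
import numpy as np

def mm_encode(coeffs, keys):
    """Matrix Minimization step: fold each triplet of coefficients into one value."""
    pad = (-len(coeffs)) % 3                      # zero-pad to a multiple of three
    padded = np.concatenate([coeffs, np.zeros(pad)])
    return padded.reshape(-1, 3) @ np.asarray(keys)

# Toy stand-in for a structured light image.
img = np.fromfunction(lambda y, x: np.sin(x / 4.0) + y / 64.0, (64, 64))

# 1) Transform by 2D DFT, yielding real and imaginary components.
F = np.fft.fft2(img)

# 2) Quantize both components independently (the step size is a free parameter).
q = 50.0
real_q = np.round(F.real / q)
imag_q = np.round(F.imag / q)

# 3) Separate the real component into LFC (low-frequency corner) and HFC.
low = np.zeros(real_q.shape, dtype=bool)
low[:8, :8] = True
lfc, hfc = real_q[low], real_q[~low]

# 4) MM-encode the HFC values with three keys; arithmetic coding would follow.
codes = mm_encode(hfc, keys=(1.0, 0.1, 0.01))
print(f"{hfc.size} HFC values -> {codes.size} MM codes")
```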

    Joint image encryption and compression schemes based on hexa-coding

    This research proposes a new joint image compression and encryption method based on a modified JPEG technique combined with the Hexa-Coding algorithm. The compression algorithm starts by dividing an image into 8x8 blocks; the DCT (Discrete Cosine Transform) is then applied to all blocks independently, followed by uniform quantization. The size of each block is further reduced by eliminating insignificant coefficients, and arithmetic coding is applied to compress the residual coefficients. Finally, hexa-encoding is applied to the compressed data to further reduce its size as well as to provide encryption, which is accomplished with five different random keys. Decompression uses a searching method called the FMSA (Fast Matching Search Algorithm) to decode the previously compressed data, followed by arithmetic decoding to retrieve the residual coefficients. These residuals are padded with zeros to rebuild the original 8x8 blocks, and finally the inverse DCT is applied to reconstruct an approximation of the original image. Experimental results show that the proposed method achieves compression ratios of up to 99% while maintaining high visual image quality compared with the JPEG technique.
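
    A sketch of the compression path in Python, assuming SciPy's DCT and a simple row-major coefficient cut in place of a true zigzag scan; the hexa-encoding step folds six residual coefficients into one value weighted by five random keys, which is one plausible reading of the abstract rather than the paper's stated rule:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2D type-II DCT with orthonormal scaling, as in JPEG."""
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16)).astype(float)

Q = 20.0      # uniform quantization step
keep = 10     # keep only the first `keep` coefficients per block (row-major,
              # a simplification of the zigzag order used by JPEG)

codes = []
for by in range(0, 16, 8):
    for bx in range(0, 16, 8):
        block = img[by:by+8, bx:bx+8] - 128            # JPEG-style level shift
        coeffs = np.round(dct2(block) / Q).flatten()
        codes.append(coeffs[:keep])                    # drop insignificant tail

# Hexa-encoding (assumed form): each group of six coefficients is folded into
# one value using five random keys, which also act as the encryption secret.
keys = np.concatenate([[1.0], rng.uniform(10, 100, size=5)])
flat = np.concatenate(codes)
flat = np.concatenate([flat, np.zeros((-len(flat)) % 6)])  # pad to multiple of 6
hexa = flat.reshape(-1, 6) @ keys
print(len(flat), "coefficients ->", len(hexa), "hexa codes")
```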

    Low energy consumption in MANET network

    The aim of this paper is to design and develop an energy-efficient MANET (EEM) protocol for wireless networks. In a MANET, source nodes forward packets under an ad-hoc routing protocol, and energy consumption grows with network size: fewer nodes yield lower energy usage, while adding nodes increases the total energy consumed. The designed system was tested with 9, 12, 15, and 18 nodes using Network Simulator 2 (NS-2). In the baseline behaviour, a source node must restart transmission repeatedly, which wastes energy, reduces effectiveness and available packet space, lowers throughput, and increases end-to-end delay. The proposed solution modifies the protocol so that a node does not forward a request as soon as it arrives; it first checks its residual energy (battery lifetime) and forwards only while that energy remains above a threshold. The designed MANET is evaluated for network-level and per-node energy consumption while avoiding configurations with too few nodes. The comparison shows that the proposed protocol outperforms the existing 802.11-based scheme in terms of battery lifetime, energy consumption, throughput, and power transmission. We also compare the EEM and AODV routing protocols across several measured parameters.
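
    A toy model of the threshold rule in Python, assuming a fixed per-hop transmission cost and three-hop routes; all numbers are illustrative stand-ins for the NS-2 experiments:

```python
import random

def simulate(num_nodes, packets=500, threshold=0.2, tx_cost=0.01, seed=1):
    """Toy energy model: forwarding a packet costs `tx_cost` per hop, and a
    node refuses to forward once its residual energy (battery lifetime) would
    drop below `threshold`, as in the modified protocol described above."""
    random.seed(seed)
    energy = [1.0] * num_nodes                 # normalized battery per node
    delivered = 0
    for _ in range(packets):
        route = random.sample(range(num_nodes), k=3)
        if all(energy[n] - tx_cost > threshold for n in route):
            for n in route:
                energy[n] -= tx_cost
            delivered += 1
    return delivered, num_nodes - sum(energy)  # packets delivered, energy used

for n in (9, 12, 15, 18):                      # node counts used in the NS-2 tests
    d, used = simulate(n)
    print(f"{n:2d} nodes: delivered={d:3d}, energy consumed={used:.2f}")
```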

    Quadtree partitioning scheme of color image based

    Image segmentation is an essential complementary process in digital image processing and computer vision, yet most applications rely on simple techniques, such as fixed partitioning schemes and global thresholding, because of their simplicity and popularity, in spite of their inefficiency. This paper introduces a new split-merge segmentation process for a quadtree scheme of colour images, based on exploiting the spatial information embedded within each band and the spectral information between bands. The results show that this technique is efficient in terms of segmentation quality and time, and that it can be used within standard techniques as an alternative to a fixed partitioning scheme.
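
    A minimal split-phase sketch in Python, assuming a per-band standard-deviation test for homogeneity (the abstract does not specify the exact spatial and spectral criteria); the merge pass over spectrally similar neighbouring leaves is noted but omitted:

```python
import numpy as np

def split(img, y, x, size, thresh, min_size, leaves):
    """Recursively quarter a square region until every colour band is
    homogeneous (per-band standard deviation below `thresh`) or the
    minimum block size is reached."""
    region = img[y:y+size, x:x+size]
    if size <= min_size or region.std(axis=(0, 1)).max() < thresh:
        leaves.append((y, x, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            split(img, y + dy, x + dx, half, thresh, min_size, leaves)

# Toy two-band "colour" image: left half dark, right half bright.
img = np.zeros((64, 64, 2))
img[:, 32:] = 1.0

leaves = []
split(img, 0, 0, 64, thresh=0.1, min_size=4, leaves=leaves)
print(len(leaves), "quadtree leaves")   # a merge pass over spectrally similar
                                        # neighbours would follow in the full scheme
```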

    Buckling of Nonprismatic Column on Varying Elastic Foundation with Arbitrary Boundary Conditions

    Citation: Ahmad A. Ghadban, Ahmed H. Al-Rahmani, Hayder A. Rasheed, and Mohammed T. Albahttiti, “Buckling of Nonprismatic Column on Varying Elastic Foundation with Arbitrary Boundary Conditions,” Mathematical Problems in Engineering, vol. 2017, Article ID 5976098, 14 pages, 2017. doi:10.1155/2017/5976098

    Buckling of nonprismatic single columns with arbitrary boundary conditions resting on a nonuniform elastic foundation may be considered the most generalized treatment of the subject. The buckling differential equation for such columns is extremely difficult to solve analytically. Thus, the authors propose a numerical approach that discretizes the column into a finite number of segments, each with constant E (modulus of elasticity), I (moment of inertia), and k (subgrade stiffness). An exact analytical solution is then derived for each prismatic segment resting on a uniform elastic foundation, and the segments are assembled into a matrix from which the critical buckling load is obtained. The derived formulation accounts for different end boundary conditions. Validation against analytical solutions from the literature shows excellent agreement, and further examples demonstrate the power and versatility of the developed approach and some of its many possible applications.
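
    For a single prismatic segment with pinned ends on a uniform Winkler foundation, the classical closed-form result that such segment-wise solutions build on can be evaluated directly; a short Python sketch with illustrative material and foundation values (the paper's matrix assembly across segments and its general boundary conditions are not reproduced here):

```python
import math

def pcr_pinned_on_foundation(E, I, L, k, max_mode=50):
    """Critical buckling load of a prismatic pinned-pinned column on a uniform
    Winkler foundation (classical result): the sinusoidal mode w = sin(n*pi*x/L)
    gives P(n) = n^2*pi^2*E*I/L^2 + k*L^2/(n^2*pi^2); the critical load is the
    minimum of P(n) over the mode number n."""
    return min(n**2 * math.pi**2 * E * I / L**2 + k * L**2 / (n**2 * math.pi**2)
               for n in range(1, max_mode + 1))

# Illustrative segment properties (not taken from the paper).
E = 200e9   # modulus of elasticity, Pa
I = 8.0e-6  # moment of inertia, m^4
L = 6.0     # segment length, m
k = 1.0e5   # subgrade stiffness, N/m^2
print(f"P_cr ~ {pcr_pinned_on_foundation(E, I, L, k) / 1e3:.1f} kN")
```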

    Image Compression for Quality 3D Reconstruction

    A 3D mesh can be reconstructed from multiple viewpoint images or from a single structured light image. Lossy compression of such images by standard techniques such as JPEG at high compression ratios leads to 3D reconstructions adversely affected by artifacts and missing vertices. In this paper we demonstrate an improved algorithm capable of high compression ratios without adversely affecting 3D reconstruction and with minimal data loss. The compression algorithm starts by applying a block DCT over the input image, after which the transformed data are quantized using an optimized quantization matrix. The quantized coefficients of each block are arranged as a 1D array and saved alongside the other blocks' data in a larger matrix of coefficients. The DC coefficients are subjected to a first-order difference, whose values form the residual array. The AC coefficients are reduced by eliminating zeros and saving the non-zero values in a reduced coefficients array, using a header mask of 0 (for a block of zeros) and 1 (for a block of non-zeros). Finally, arithmetic coding is applied to both the coefficients and residual arrays. At the decompression stage, the coefficients matrix is regenerated by scanning the coefficients array and examining the headers to substitute zero and non-zero data; the residual array is then combined with these data to restore the original DC values, and the IDCT is applied to obtain the original image. The proposed algorithm has been tested with images of varying sizes in the context of 3D reconstruction. Results demonstrate that it is superior to traditional JPEG at higher compression ratios, with high perceptual image quality and the ability to reconstruct 3D models more effectively, both for structured light images and for sequences of multiple viewpoint images.
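
    A condensed sketch of the compression side in Python, assuming SciPy's DCT, a single scalar quantizer standing in for the paper's optimized quantization matrix, and row-major coefficient order instead of a zigzag scan:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(b):
    """2D type-II DCT with orthonormal scaling."""
    return dct(dct(b.T, norm='ortho').T, norm='ortho')

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16)).astype(float)
img[:8, :8] = 128.0   # one flat block, so its AC coefficients quantize to zero
Q = 40.0              # scalar stand-in for the optimized quantization matrix

dc, mask, nonzero_ac = [], [], []
for by in range(0, 16, 8):
    for bx in range(0, 16, 8):
        c = np.round(dct2(img[by:by+8, bx:bx+8] - 128) / Q).flatten()
        dc.append(c[0])
        ac = c[1:]
        if np.any(ac):
            mask.append(1)            # header bit: block carries non-zero AC data
            nonzero_ac.append(ac)
        else:
            mask.append(0)            # all-zero AC block: nothing is stored

# First-order difference of the DC terms gives the residual array
# (arithmetic coding of the mask, AC data, and residuals would follow).
residual = np.diff(dc, prepend=0.0)
print("mask:", mask, "| DC residuals:", residual)
```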

    Quick sequential search algorithm used to decode high-frequency matrices

    This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value, based on three different keys. Decoding/decompression uses a search method called the QSS (Quick Sequential Search) decoding algorithm, presented in this research and based on sequential search, to recover the exact coefficients. The decoded data are then saved in an auxiliary array. The basic idea behind the auxiliary array is to hold all possible decoded coefficients, so that another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. Experimental results show that the proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
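
    A small sketch of the encode/decode idea in Python, assuming coefficients bounded in [-8, 8] and base-17 weights as the three keys (an illustrative choice; the abstract does not give the key-generation rule). The linear scan below is the conventional sequential search over the auxiliary array that QSS accelerates:

```python
from itertools import product

KEYS = (1, 17, 289)   # base-17 weights: with coefficients in [-8, 8], every
                      # triplet maps to a unique code (balanced base-17 digits)

def mm_encode(a, b, c):
    """Fold three high-frequency coefficients into a single value."""
    return a * KEYS[0] + b * KEYS[1] + c * KEYS[2]

# Auxiliary array: every possible decoded triplet, ordered by code, so a
# decoder can recover the exact coefficients by scanning for a matching code.
AUX = sorted((mm_encode(a, b, c), (a, b, c))
             for a, b, c in product(range(-8, 9), repeat=3))

def decode(code):
    """Conventional sequential search over the auxiliary array."""
    for candidate, triplet in AUX:
        if candidate == code:
            return triplet
    raise ValueError("code not found")

code = mm_encode(-3, 7, 5)
print(code, "->", decode(code))   # exact recovery of the original triplet
```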

    Primary Wound Closure Versus Delayed Primary Wound Closure in Complicated Appendicitis in Tikrit Teaching Hospital

    Background: Two methods are used routinely for wound management following an appendectomy: delayed primary closure, which involves packing the open wound for 4-5 days before closing it, and primary closure, which means closing the wound at the time of surgery. Primary closure has the potential benefit of rapid wound healing, eliminating painful and time-consuming dressings and reducing overall hospital costs. Objective: To compare the incidence of wound infection after primary wound closure and delayed primary closure in patients with complicated appendicitis. Patients and Methods: A total of 78 patients with complicated appendicitis (gangrenous, perforated, or with abscess) admitted to the surgical wards of Tikrit Teaching Hospital over a period of 12 months (January 2013 to January 2014) were included in the study; 45 were male and 33 female, with ages ranging from 17 to 55 years. The patients were adult males and non-pregnant adult females who underwent appendectomy for complicated appendicitis. Results: Age and gender were not significant factors affecting wound closure type (p=0.772 and p=0.942, respectively). The mean duration of symptoms in patients with delayed primary closure (5.9±3.11 days) was significantly longer (p=0.037) than with primary closure (2.81±2.07 days). The duration of surgery was significantly longer (p=0.021) for delayed primary closure (37.98±6.7 minutes) than for primary closure (22.71±8.11 minutes). The mean duration of hospital stay was significantly shorter (p=0.030) for delayed primary closure (6.84±1.71 days) than for primary closure (8.7±0.94 days). Conclusion: Better overall results were associated with the strategy of delayed primary closure, despite the relatively longer operative time and the greater effort required of the surgeon; this benefits both the patient and the surgeon.