460 research outputs found

    Image compression based on 2D Discrete Fourier Transform and matrix minimization algorithm

    In the present era of the internet and multimedia, image compression techniques are essential to improve image and video performance in terms of storage space, network bandwidth usage, and secure transmission. A number of image compression methods are available with largely differing compression ratios and coding complexity. In this paper, we propose a new method for compressing high-resolution images based on the Discrete Fourier Transform (DFT) and the Matrix Minimization (MM) algorithm. The method consists of transforming an image by the DFT, yielding real and imaginary components. A quantization process is applied to both components independently, aiming to increase the number of high-frequency coefficients. The real component matrix is separated into Low Frequency Coefficients (LFC) and High Frequency Coefficients (HFC). Finally, the MM algorithm followed by arithmetic coding is applied to the LFC and HFC matrices. The decompression algorithm decodes the data in reverse order. A sequential search algorithm is used to decode the data from the MM matrix. Thereafter, all decoded LFC and HFC values are combined into one matrix, followed by the inverse DFT. Results demonstrate that the proposed method yields compression ratios of over 98% for structured light images with good image reconstruction. Moreover, the proposed method compares favorably with the JPEG technique in terms of compression ratio and image quality.
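
    As a rough illustration of the transform-and-quantize front end described above, the Python/NumPy sketch below applies a 2D DFT, quantizes the real and imaginary components independently, and separates the real component into LFC and HFC regions. The quantization step, the LFC window size, and the corner-based frequency split are illustrative assumptions rather than values taken from the paper; the MM and arithmetic-coding stages are omitted.

        import numpy as np

        def dft_quantize(image, q_step=16.0, lfc_size=32):
            """Sketch of the DFT front end: transform, quantize the real and
            imaginary parts independently, and split the real part into low-
            and high-frequency regions. q_step and lfc_size are illustrative."""
            F = np.fft.fft2(image.astype(np.float64))   # 2D DFT of a grayscale image
            real_q = np.round(F.real / q_step)          # quantized real component
            imag_q = np.round(F.imag / q_step)          # quantized imaginary component

            # Treat the top-left corner of the unshifted spectrum as the LFC
            # region; everything else in the real component is HFC.
            lfc = real_q[:lfc_size, :lfc_size].copy()
            hfc = real_q.copy()
            hfc[:lfc_size, :lfc_size] = 0
            return lfc, hfc, imag_q                     # MM + arithmetic coding follow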

    Joint image encryption and compression schemes based on hexa-coding

    This research proposes a new image compression and encryption method based on a modified JPEG technique combined with the Hexa-Coding algorithm. The compression algorithm starts by dividing an image into 8x8 blocks; the DCT (Discrete Cosine Transform) is then applied to each block independently, followed by uniform quantization. Additionally, the size of each block is reduced by eliminating insignificant coefficients, and arithmetic coding is applied to compress the residual coefficients. Finally, Hexa-encoding is applied to the compressed data to further reduce its size as well as to provide encryption. The encryption is based on five different random keys. The decompression uses a search method called the FMSA (Fast Matching Search Algorithm) to decode the previously compressed data, followed by arithmetic decoding to retrieve the residual coefficients. These residuals are padded with zeros to rebuild the original 8x8 blocks. Finally, the inverse DCT is applied to reconstruct an approximation of the original image. The experimental results showed that the proposed compression and decompression scheme achieves up to a 99% compression ratio while maintaining high visual image quality compared with the JPEG technique.
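
    The block-transform front end of this scheme can be sketched as follows in Python/NumPy. The Hexa-Coding stage, the five random keys, and the FMSA decoder are not shown because their internals are not specified in the abstract, and the quantization step and significance threshold below are illustrative assumptions.

        import numpy as np
        from scipy.fft import dctn

        def block_dct_quantize(image, q_step=20.0, threshold=1):
            """Sketch: 8x8 block DCT, uniform quantization, and elimination of
            insignificant coefficients. q_step and threshold are assumed values."""
            h, w = image.shape
            blocks = []
            for r in range(0, h - h % 8, 8):
                for c in range(0, w - w % 8, 8):
                    block = image[r:r + 8, c:c + 8].astype(np.float64)
                    coeffs = dctn(block, norm='ortho')    # 2D DCT-II of the block
                    q = np.round(coeffs / q_step)         # uniform quantization
                    q[np.abs(q) <= threshold] = 0         # drop insignificant coefficients
                    blocks.append(q)
            return blocks  # arithmetic coding and Hexa-encoding would follow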

    Experimental Study About the Effect of Iron Slag on the Shear Strength of Reinforced Concrete Beams Without Shear Reinforcement

    In recent decades, researchers have become increasingly interested in the environmental benefits of reusing materials that are harmful to the environment in various fields, including construction. Slag is one of the industrial wastes that harm the environment, and many studies have examined its use as a substitute for building materials because it can improve the properties of concrete. In this study, iron slag was used as a partial replacement for coarse and fine aggregate in different proportions in order to determine its effect on the shear strength of reinforced concrete beams. Ten reinforced concrete beams without shear reinforcement (1100 × 100 × 200 mm) were tested: iron slag replaced coarse aggregate at ratios of 10%, 20%, 30% and 40%; replaced fine aggregate at ratios of 10%, 20% and 30%; replaced both coarse and fine aggregate at 7.5% each in one specimen; and replaced both coarse and fine aggregate at 15% each in another specimen. The test results show that the optimal proportion for replacing coarse aggregate with slag is 20%, which produced a 24.58% increase in peak load, while the optimal proportion for replacing fine aggregate with slag is likewise 20%, producing a 24.3% increase in peak load.

    Low energy consumption in MANET network

    The aim of this paper is to design and develop an energy-efficient MANET in a wireless network. A MANET is a significant and effective wireless ad-hoc routing environment in which source nodes forward packets, and its energy consumption depends strongly on the number of ad-hoc nodes: a smaller number of nodes gives lower energy consumption, while increasing the number of nodes increases the energy consumed by the network. The designed MANET is tested with 9, 12, 15 and 18 nodes using Network Simulator 2 (NS-2). When a source node has to restart repeatedly, energy is wasted, effectiveness drops, packet space is reduced, throughput falls, and end-to-end delay increases. The proposed solution is that a node does not forward a request as soon as it arrives; it first checks its remaining energy (battery lifetime) and forwards only while the node's energy remains greater than a threshold. The designed MANET evaluates network-wide and per-node energy consumption while avoiding operation with too few nodes in the network. Comparing energy consumption per node shows that the proposed MANET is superior to the existing 802.11-based framework in terms of battery lifetime, energy consumption, throughput, and transmission power. We also compare the EEM and AODV routing protocols using different measured parameters.
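
    A minimal Python sketch of the threshold-based forwarding rule described above is given below. It is not the paper's EEM implementation; the energy cost per transmission and the threshold value are assumed for illustration.

        TX_COST = 0.02            # assumed energy drained per forwarded packet (J)
        ENERGY_THRESHOLD = 0.5    # assumed minimum residual energy needed to forward (J)

        class Node:
            def __init__(self, node_id, energy=2.0):
                self.node_id = node_id
                self.energy = energy              # residual battery energy

            def forward(self, packet):
                # Forward only while residual energy exceeds the threshold
                if self.energy <= ENERGY_THRESHOLD:
                    return False                  # drop: battery too low
                self.energy -= TX_COST            # account for the transmission cost
                return True

        # Example: total energy consumed grows with the number of nodes on a route
        route = [Node(i) for i in range(9)]
        delivered = all(n.forward("RREQ") for n in route)
        consumed = sum(2.0 - n.energy for n in route)
        print(delivered, round(consumed, 3))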

    A new generation for intelligent anti-internet worm early system detection

    A worm needs only a host computer with an Internet address and any of several vulnerabilities to create a serious threat environment. We propose an intelligent early-detection mechanism for internet worms. The mechanism combines three techniques: Failure Connection Detection (FCD), which detects internet worms, including stealthy worms that have already infected a computer, using an Artificial Immune System; Traffic Signature Detection (TSD), which is responsible for extracting the worm's traffic signature; and DNA Filtering Detection (DNAFD), which converts the traffic signature into a DNA signature and sends it to all computers connected to the router, creating a firewall against new worms. The proposed algorithm can detect difficult stealthy internet worms as well as unknown internet worms.
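
    The pipeline below is an illustrative Python sketch of how the three stages could be chained; the class names, the failed-connection ratio, the signature heuristic, and the binary-to-DNA mapping are all hypothetical and are not taken from the paper.

        class FailureConnectionDetector:
            def suspicious(self, host):
                # Flag hosts with a high rate of failed connections, a common
                # symptom of a scanning worm (ratio threshold is assumed).
                return host["failed"] / max(host["total"], 1) > 0.6

        class TrafficSignatureDetector:
            def extract_signature(self, payloads):
                # Crude heuristic: the most frequent payload is the signature.
                return max(set(payloads), key=payloads.count)

        class DNAFilter:
            _MAP = {"00": "A", "01": "C", "10": "G", "11": "T"}
            def to_dna(self, signature):
                bits = "".join(f"{b:08b}" for b in signature)
                return "".join(self._MAP[bits[i:i + 2]] for i in range(0, len(bits), 2))

        host = {"failed": 80, "total": 100,
                "payloads": [b"\x90\x90EXPLOIT", b"\x90\x90EXPLOIT", b"GET /"]}
        if FailureConnectionDetector().suspicious(host):
            sig = TrafficSignatureDetector().extract_signature(host["payloads"])
            print(DNAFilter().to_dna(sig))   # DNA signature pushed to the router firewall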

    Using Altman and Sherrod Z-Score Models to Detect Financial Failure for the Banks Listed on the Iraqi Stock Exchange (ISE) Between 2009 – 2013

    Purpose: The purpose of the study was to examine the validity of the Altman Z-Score and Sherrod Z-Score models in financial failure prediction. To achieve the study's goal, references from various authors who have reviewed this topic were used. Theoretical framework: The study highlights the importance of analyzing the various notions of financial failure and distress. Because of the potential effects on the wealth of creditors, stockholders, and society as a whole, academics and researchers consider a company's distress and bankruptcy to be among the most important issues to study, and many have sought methods to identify and forecast distress and failure so that company survival and continuity can be protected before disaster happens. Design/methodology/approach: The Altman and Sherrod Z-Score models, both multi-discriminant models, were employed to predict the financial position of ten ISE banks between 2009 and 2013 and to determine whether the banks listed on the ISE are exposed to financial failure. Ten of the forty-six banks listed on the ISE were selected, and the study used only secondary data obtained from the chosen banks' financial statements on the ISE. Findings: Based on Altman's Z-Score model, the study finds that certain banks are particularly exposed to failure. In contrast, the Sherrod Z-Score model indicates that the chosen banks have some issues, but they are minor and the risk of bankruptcy is low. Research, scientific and social implications: By using a failure prediction model, it is possible to estimate the likelihood that banks will experience financial failure in the future, and investors could use this information to guide their decision-making. Originality/value: The study demonstrates the value of financial failure prediction models for Iraqi commercial banks, explains these models, and shows the extent to which investors can benefit from them.
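
    For reference, the classic (1968) Altman Z-Score is a weighted sum of five financial ratios, sketched below in Python. The study may use a variant adapted to banks, and the Sherrod model's coefficients are deliberately not reproduced here; the figures in the usage example are made up.

        def altman_z(working_capital, retained_earnings, ebit,
                     market_value_equity, sales, total_assets, total_liabilities):
            """Original 1968 Altman Z-Score for publicly traded manufacturing firms."""
            x1 = working_capital / total_assets
            x2 = retained_earnings / total_assets
            x3 = ebit / total_assets
            x4 = market_value_equity / total_liabilities
            x5 = sales / total_assets
            return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

        # Illustrative figures only; conventional zones: Z > 2.99 "safe",
        # 1.81-2.99 "grey", Z < 1.81 "distress".
        z = altman_z(working_capital=120, retained_earnings=200, ebit=90,
                     market_value_equity=450, sales=600,
                     total_assets=1000, total_liabilities=550)
        print(round(z, 2))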

    Quadtree partitioning scheme of color image based

    Image segmentation is an essential complementary process in digital image processing and computer vision, yet it mostly relies on simple techniques, such as fixed partitioning schemes and global thresholding, because of their simplicity and popularity, in spite of their inefficiency. This paper introduces a new split-merge segmentation process for a quadtree scheme of colour images, based on exploiting the spatial and spectral information embedded within the bands and between the bands, respectively. The results show that this technique is efficient in terms of segmentation quality and time, and that it can be used within standard techniques as an alternative to a fixed partitioning scheme.
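
    The splitting half of such a scheme can be sketched as follows in Python/NumPy, assuming a simple per-band variance test as the homogeneity criterion; the paper's actual split and merge criteria, which also exploit inter-band spectral information, are not reproduced here, and the threshold and minimum block size are illustrative.

        import numpy as np

        def quadtree_split(block, threshold=100.0, min_size=8):
            """Recursively split a colour block until each leaf is homogeneous.
            Returns (row, col, height, width) leaves relative to the block."""
            h, w = block.shape[:2]
            homogeneous = max(block[..., b].var() for b in range(block.shape[2])) <= threshold
            if homogeneous or h <= min_size or w <= min_size:
                return [(0, 0, h, w)]
            hh, hw = h // 2, w // 2
            leaves = []
            for dr, dc in [(0, 0), (0, hw), (hh, 0), (hh, hw)]:
                sub = block[dr:dr + hh, dc:dc + hw]
                leaves += [(dr + r, dc + c, lh, lw)
                           for r, c, lh, lw in quadtree_split(sub, threshold, min_size)]
            return leaves

        image = np.random.randint(0, 256, (64, 64, 3)).astype(np.float64)
        print(len(quadtree_split(image)))   # number of quadtree leaves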

    Mathematical Modelling of the Relationship between Two Different Temperament Classifications: During the Covid-19 Pandemic

    In medicine, it is well known that healthy individuals have different physical and mental characteristics. Ancient Indian medicine (Ayurveda) and traditional Persian-Arabic Unani medicine have two distinct approaches for classifying human subjects according to their temperaments. Individual temperament is an important foundation for personalized medicine, which can help in the prevention and treatment of many diseases, including COVID-19. This paper attempts to explore, using mathematical modelling, the relationship between the central temperament concepts of these systems, known as Prakruti in Ayurveda and Mizaj in Unani practice. The results of the mathematical modelling can be readily adopted for the development of algorithms that can be applied in medical informatics. To this end, a substantial literature review was carried out; based on previous research, the essential parameters for establishing the relationship were identified and hypotheses were framed. Mathematical modelling was then used to propose the existence of a relationship between the parameters of these ancient and rich medical systems, and the hypotheses were validated through the mathematically driven model. DOI: 10.28991/esj-2021-01258

    Image Compression for Quality 3D Reconstruction

    A 3D mesh can be reconstructed from multiple viewpoint images or from a single structured light image. Lossy compression of such images by standard techniques such as JPEG at high compression ratios leads to 3D reconstruction being adversely affected by artifacts and missing vertices. In this paper, we demonstrate an improved algorithm capable of high compression ratios without adversely affecting 3D reconstruction and with minimal data loss. The compression algorithm starts by applying a block DCT over the input image, with the transformed data quantized using an optimized quantization matrix. The quantized coefficients of each block are arranged as a 1D array and saved with the other blocks' data in a larger matrix of coefficients. The DC coefficients are subjected to a first-order difference, whose values are referred to as the residual array. The AC coefficients are reduced by eliminating zeros and saving the non-zero values in a reduced coefficients array, using a mask of 0 (for a block of zeros) and 1 (for a block of non-zeros). Finally, arithmetic coding is applied to both the coefficients and residual arrays. At the decompression stage, the coefficients matrix is regenerated by scanning the coefficients array and examining the mask headers to substitute zero and non-zero data. This matrix is then combined with the residual array to obtain the original DC values, and the IDCT is applied to obtain the reconstructed image. The proposed algorithm has been tested with images of varying sizes in the context of 3D reconstruction. Results demonstrate that it is superior to traditional JPEG at higher compression ratios, with high perceptual image quality and the ability to reconstruct the 3D models more effectively, both for structured light images and for sequences of multiple viewpoint images.
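
    The DC/AC bookkeeping described above can be sketched in Python/NumPy as follows: the DC column is differenced into the residual array, and a per-block 0/1 mask records which blocks have all-zero AC coefficients so that only non-zero blocks are stored. The exact array layout is an assumption, and the quantization and arithmetic-coding stages are omitted.

        import numpy as np

        def encode_blocks(coeffs):
            """coeffs: one row of 64 quantized DCT coefficients per 8x8 block."""
            dc = coeffs[:, 0].astype(np.int64)
            residual = np.diff(dc, prepend=0)                 # first-order difference (residual array)
            ac = coeffs[:, 1:]
            mask = np.any(ac != 0, axis=1).astype(np.uint8)   # 1 = block has non-zero AC data
            reduced = ac[mask == 1].ravel()                   # reduced coefficients array
            return residual, mask, reduced                    # arithmetic coding would follow

        def decode_blocks(residual, mask, reduced):
            dc = np.cumsum(residual)                          # invert the first-order difference
            ac = np.zeros((mask.size, 63), dtype=reduced.dtype)
            ac[mask == 1] = reduced.reshape(-1, 63)
            return np.column_stack([dc, ac])

        # Round-trip check on toy data
        blocks = np.zeros((4, 64), dtype=np.int64)
        blocks[0, 0], blocks[0, 5] = 50, -3
        blocks[1, 0] = 48
        r, m, q = encode_blocks(blocks)
        assert np.array_equal(decode_blocks(r, m, q), blocks)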

    Quick sequential search algorithm used to decode high-frequency matrices

    This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value; this is accomplished using three different keys. Decoding/decompression uses a search method presented in this research, the Quick Sequential Search (QSS) decoding algorithm, which is based on sequential search and recovers the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to hold all possible decoded coefficients, so that another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results show that the proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
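
    The encode/decode round trip can be illustrated with the Python sketch below: three coefficients are folded into one value using three keys, and the decoder searches the range of possible coefficient triplets until the encoded value is matched. The key values and the coefficient range are assumptions for illustration, not the paper's key-generation method, and a plain sequential search stands in for the faster QSS ordering.

        from itertools import product

        KEYS = (1, 137, 9573)          # assumed keys chosen so that sums are unique over the range
        COEFF_RANGE = range(-10, 11)   # assumed range of quantized high-frequency values

        def mm_encode(triplet):
            a, b, c = triplet
            return KEYS[0] * a + KEYS[1] * b + KEYS[2] * c

        def sequential_decode(value):
            # Plain sequential search over all candidate triplets; the QSS
            # algorithm orders/prunes this search to find the match faster.
            for a, b, c in product(COEFF_RANGE, repeat=3):
                if KEYS[0] * a + KEYS[1] * b + KEYS[2] * c == value:
                    return (a, b, c)
            raise ValueError("value not decodable within the assumed range")

        encoded = mm_encode((3, -1, 2))
        print(encoded, sequential_decode(encoded))   # 19012 (3, -1, 2)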