23 research outputs found

    A new lossless method of Huffman coding for text data compression and decompression process with FPGA implementation

    Digital compression, which reduces data size, is important because of bandwidth restrictions. A compression technique is also called source coding: it represents data using fewer bits than the uncompressed form requires, without degrading the quality of the text, and thereby reduces the number of bits needed for storage or transmission across different media; it likewise makes storing large volumes of information easier. In this study, the proposed Huffman design includes an encoder and a decoder based on a new binary tree that improves memory usage for text compression. A saving percentage of approximately 4.95% was achieved with the suggested method. The Huffman encoder and decoder were written in Verilog HDL, the design modules were functionally verified and simulated with the ModelSim simulator from Mentor Graphics, and the Huffman design was implemented on an FPGA
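Although the paper's design is written in Verilog, the encode/decode behavior it describes can be sketched in Python; the heap-based tree construction and the sample string below are illustrative, not taken from the paper:

```python
import heapq
from collections import Counter

def build_codes(text):
    """Build a Huffman code table (symbol -> bit string); assumes
    the text contains at least two distinct symbols."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees prepends one more bit to every code inside them.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        count += 1
        heapq.heappush(heap, (f1 + f2, count, merged))
    return heap[0][2]

def encode(text, codes):
    return "".join(codes[s] for s in text)

def decode(bits, codes):
    """Walk the bit string, emitting a symbol whenever a full code matches."""
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

A saving percentage in the abstract's sense can then be computed as `100 * (1 - len(bits) / (8 * len(text)))`, comparing the bit stream against 8-bit uncompressed characters.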

    Noise Level Estimation for Digital Images Using Local Statistics and Its Applications to Noise Removal

    In this paper, an automatic estimation technique for additive white Gaussian noise is proposed. The technique is built on the local statistics of Gaussian noise. In digital signal processing, noise estimation is a pivotal step on which many signal processing tasks rely. The main aim of this paper is to design a patch-based estimation technique that estimates the noise level in natural images and uses it in a blind noise removal method. The estimation process selects the most contaminated patches of the tested images and analyzes them using principal component analysis (PCA). The suggested noise level estimation technique is shown to be superior to state-of-the-art noise estimation and noise removal algorithms, producing the best performance in most cases in terms of PSNR, IQI and visual perception
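A minimal numpy sketch of the patch-plus-PCA idea follows; the paper's exact patch-selection rule is not reproduced here, so this version uses all overlapping patches and takes the smallest eigenvalue of the patch covariance as the noise variance, which is the standard PCA noise-estimation heuristic:

```python
import numpy as np

def estimate_noise_sigma(img, patch=7):
    """Estimate the additive Gaussian noise standard deviation of a
    grayscale image from the PCA spectrum of its patches."""
    h, w = img.shape
    rows = []
    # Collect every overlapping patch as one flattened row vector.
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            rows.append(img[i:i + patch, j:j + patch].ravel())
    X = np.array(rows, dtype=float)
    X -= X.mean(axis=0)
    cov = X.T @ X / len(X)
    eigvals = np.linalg.eigvalsh(cov)  # ascending order
    # The signal occupies the leading principal directions; the smallest
    # eigenvalue approximates the noise variance sigma^2.
    return float(np.sqrt(max(eigvals[0], 0.0)))
```

On a smooth synthetic image plus sigma = 10 noise, the estimate lands near 10 (slightly biased low for finite patch counts).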

    VLSI implementation of Huffman design using FPGA with a comprehensive analysis of power restrictions

    Lossless compression is important in information theory as well as in today's IT field, and lossless Huffman coding is the most widely used design in the compression arena. However, Huffman coding has a limitation: it depends on the stream of symbols appearing in a file. Huffman coding assigns very few bits to a symbol with a high probability of occurrence and the largest number of bits to a symbol with a low probability of occurrence. In this work, a hardware implementation of static Huffman coding for data compression has been designed; the hardware contains both an encoder and a decoder. An Altera DE2 board was used to implement the text data compression. Experiments in a simulated environment and the real-time FPGA implementation, together with Synopsys power analysis, show that the constraints are fulfilled and that the target buffer length of the design is appropriate. The power consumption achieved by the proposed algorithm was 0.0161 mW at a frequency of 20 MHz and 0.1426 mW at 180 MHz, within the design limits. The proposed design was implemented with both ASIC and FPGA design methodologies; 130 nm standard cell libraries were used for the ASIC implementation of the encoder and decoder architectures. The compression and decompression architecture was written in Verilog HDL, compiled with Quartus II 11.1 Web Edition (32-bit), simulated with ModelSim-Altera 10.0c (Quartus II 11.1) Starter Edition, and implemented in real time on the Altera DE2 FPGA
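The near-linear scaling of the reported power with clock frequency (0.1426 / 0.0161 ≈ 8.9 for a 9× frequency step from 20 MHz to 180 MHz) matches the standard dynamic-power model P = αCV²f. A sketch with illustrative activity factor, capacitance and supply values, none of which are taken from the paper:

```python
def dynamic_power_mw(alpha, c_farads, v_volts, f_hz):
    """Dynamic (switching) power P = alpha * C * V^2 * f, returned in mW.
    alpha: switching activity factor, C: switched capacitance,
    V: supply voltage, f: clock frequency."""
    return alpha * c_farads * v_volts ** 2 * f_hz * 1e3

# Illustrative values: alpha = 0.1, C = 1 pF, V = 1.2 V.
p_20mhz = dynamic_power_mw(0.1, 1e-12, 1.2, 20e6)
p_180mhz = dynamic_power_mw(0.1, 1e-12, 1.2, 180e6)
```

With everything else held constant, the model predicts exactly a 9× power increase for the 9× frequency step, close to the ratio reported in the abstract.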

    Denoising of natural images based on non-linear threshold filtering using the discrete wavelet transform

    Denoising (noise reduction) of a natural image contaminated with additive white Gaussian noise is an important preprocessing step for many visualization techniques and still a challenging problem for researchers. This paper deals with a threshold estimation technique that reduces the noise in natural images using the discrete wavelet transform. The value of the threshold, the way it is applied in the algorithm (the derivation of the thresholding function), and the choice of mother wavelet are pivotal issues in wavelet-based denoising. The results show that the proposed denoising algorithm, based on a semi-soft threshold, outperforms traditional wavelet denoising techniques in terms of visual quality and subjective scales: it preserves the edges and ridge details of the reconstructed image and the quality of the visual shape. Execution time was taken into consideration as well; the new algorithm presents competitive results compared with the standard methods against which it was evaluated, such as the Wiener filter, SureShrink, OracleShrink, BM3D and BayesShrink
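The semi-soft (firm) threshold interpolates between hard and soft thresholding. A numpy sketch using the common two-threshold form; the paper's exact derivation of the thresholding function may differ:

```python
import numpy as np

def semi_soft_threshold(x, t1, t2):
    """Firm (semi-soft) threshold with t1 < t2: coefficients with
    |x| <= t1 are zeroed (like hard/soft), coefficients with |x| > t2
    are kept untouched (like hard), and values in between are shrunk
    linearly (like soft)."""
    x = np.asarray(x, dtype=float)
    a = np.abs(x)
    mid = np.sign(x) * t2 * (a - t1) / (t2 - t1)
    return np.where(a <= t1, 0.0, np.where(a > t2, x, mid))
```

In a wavelet denoiser this function would be applied to the detail coefficients of each subband before the inverse transform.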

    Hand detection and segmentation using smart path tracking fingers as features and expert system classifier

    Nowadays, hand gesture recognition (HGR) is becoming popular due to applications such as remote control by hand and security for access control. One of the major problems of HGR is the lack of accuracy in hand detection and segmentation. In this paper, a new hand detection algorithm is presented that tracks the fingers smartly along a planned path. Tracking is accomplished by assuming a point at the top middle of the image containing the object; this point slides a few pixels down to become a reference point and then branches into two slopes, left and right. Along these slopes the fingers are scanned to extract flip-numbers, which serve as features to be classified by an expert system. Experiments were conducted on 100 images of 10 individuals containing hands in cluttered backgrounds, using the Leap Motion and Microsoft Kinect hand acquisition dataset. The recorded accuracy depends on the complexity of the flip-number setting: 96%, 84% and 81% were achieved with 6, 7 and 8 flip-numbers respectively, a high level of accuracy compared with existing techniques

    CMOS technology for increasing efficiency of clock gating techniques using tri-state buffer

    Clock gating is an effective technique for decreasing dynamic power dissipation in synchronous designs. One way to realize this goal is to mask the clock that goes to circuitry not needed at a given time. This paper presents a comparative analysis of clock gating techniques in an 8-bit arithmetic logic unit (ALU). The new clock gating method provides a solution to problems in the existing techniques: the proposed gating-signal generation circuit uses a tri-state buffer in a negative latch design instead of OR gate logic. While performing the same function, this circuit saves more power and reduces the area used, without affecting design performance. The minimum power gain realized was 6.4% of total power consumption when running at a 20 MHz frequency, with a 0.9% area overhead. The proposed method was implemented using an ASIC design methodology, with 130 nm standard cell technology libraries used for the ASIC implementation. Furthermore, the architecture of the ALU was written in Verilog HDL (Quartus II 11.1 Web Edition, 32-bit) and simulated with ModelSim-Altera 10.0c (Quartus II 11.1 Starter Edition). Finally, the design reduces hardware complexity and clock power alike
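The glitch hazard that latch-based clock gating avoids can be modeled at the behavioral level. This small Python simulation uses the generic negative-latch-plus-AND scheme; the paper's tri-state-buffer variant is not modeled here, only the gating behavior both circuits implement:

```python
def gated_clock(clk_wave, en_wave):
    """Latch-based clock gating: the enable is captured by a
    level-sensitive latch that is transparent while clk is low, then
    ANDed with the clock. Because the enable can only change the
    latched value while clk is low, the gated clock is glitch-free
    even if the raw enable toggles while clk is high."""
    latched = 0
    out = []
    for clk, en in zip(clk_wave, en_wave):
        if clk == 0:          # latch transparent on clock low
            latched = en
        out.append(clk & latched)
    return out
```

With the enable dropped during a high clock phase, the gated clock still completes the current pulse cleanly instead of producing a runt pulse.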

    Recognition system for leaf images based on its leaf contour and centroid

    The recognition of plants is directly associated with society's life. Leaves are proven to be a feasible source of information for identifying plant species. Traditionally, leaf recognition is carried out by human experts; unfortunately, that is a time-consuming and low-effectiveness process, and classification by such conventional procedures is complicated and, because of the long-established botanical vocabulary involved, frustrating for non-experts. Thus, the rapid developments in digital imaging, computer vision, and object detection and recognition encourage scientists to work towards plant species recognition based on image processing technology. In this study, an image processing algorithm that extracts the shape structure of tested plants is presented. The technique is invariant to shift, rotation and scaling, and includes filtering processes. The leaf contours are classified using a support vector machine (SVM): similar contour sequences of the same plant carry the same features, while different plants produce different contour sequences; accordingly, the SVM is applied as the classifier of the plant leaves. In the experimental part, results obtained on the Flavia dataset demonstrate that the suggested technique has high recognition efficiency compared to state-of-the-art methods, especially for complicated image features such as ridges, edges, lines, curves and intricate contours
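The contour-and-centroid idea in the title can be sketched as a scale-invariant distance signature; the sampling count and max-normalization below are illustrative choices, and the SVM classification stage is omitted:

```python
import numpy as np

def contour_signature(points, n_samples=8):
    """Centroid-to-contour distance signature: distances from the shape
    centroid to sampled contour points, normalized by their maximum so
    the feature vector is invariant to scaling (and, by construction,
    to translation)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    sig = d[idx]
    return sig / sig.max()
```

Two leaves of the same species traced at different scales would yield (near-)identical signatures, which is what makes the feature usable for classification.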

    Sequential parameterizing affine projection (SPAP) windowing length for acoustic echo cancellation on speech accents identification

    Echo cancellation has always been among the preprocessing steps performed before signals are converted to feature vectors and passed to pattern classification; this is the usual flow of speech identification. To obtain the cleanest possible signal, adaptive echo cancellation removes the echo, as well as the noise that deteriorates the signals and the final classification results. Tuning the windowing length can further improve the cleaned signal obtained after noise or echo cancellation. By preconfiguring the windowing length through the proposed sequential parameterizing affine projection (SPAP) technique, extending the normal length of 200 ms to 400 ms, the word error rate (WER) and equal error rate (EER) are reduced and the accuracies increase by around 5-10 percentage points compared with the echoed signal
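An affine projection adaptive filter, the core of the cancellation stage described above, can be sketched in a few lines of numpy. The filter length, projection order, step size and regularization below are illustrative defaults, not the paper's SPAP parameterization:

```python
import numpy as np

def apa_cancel(x, d, taps=8, order=2, mu=0.5, delta=1e-4):
    """Affine projection adaptive filter: x is the far-end reference,
    d is the microphone signal (echo plus near-end). Returns the error
    signal, i.e. d with the estimated echo removed."""
    w = np.zeros(taps)
    e_out = np.zeros(len(d))
    for n in range(taps + order - 1, len(d)):
        # Rows of X are the last `order` input regressor vectors,
        # newest sample first in each row.
        X = np.array([x[n - k - taps + 1:n - k + 1][::-1]
                      for k in range(order)])
        dn = d[n - order + 1:n + 1][::-1]
        e = dn - X @ w
        # Regularized affine projection update of the filter weights.
        w += mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(order), e)
        e_out[n] = e[0]
    return e_out
```

With `order=1` this reduces to NLMS; larger projection orders speed up convergence on correlated inputs such as speech.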

    Classification of eye abnormality using statistical parameters in texture features of corneal arcus image

    The corneal arcus (CA) is a white-grey deposit that appears as a circular ring at the iris limbus, caused by a lipid disorder in the bloodstream. This sign is an indicator of diseases such as coronary heart disease, diabetes, and hypertension. This paper demonstrates the classification of the CA as an indicator of hyperlipidemia. The experiment uses two sets of sample data, consisting of normal and abnormal (CA) eyes, and classifies each group. Classification begins with normalization of the eye images (as part of pre-processing) to obtain the region of interest (ROI). The next step extracts the image texture using the grey level co-occurrence matrix (GLCM) technique and computes statistical parameters from it. These features are then fed into the classifier for training, testing and validation. In these experiments, excellent results were obtained with the proposed framework: using a Bayesian regularization (BR) classifier, the classification achieved a sensitivity of 94%, a specificity of 100%, and an accuracy of 97.78%. Applications/Improvements: based on the results obtained, the proposed system successfully classifies images with CA signs, showing that the method can be applied in a non-invasive test to detect CA images and identify the presence of hypercholesterolemia
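The GLCM-plus-statistics step can be sketched for a single offset; the number of grey levels, the horizontal distance-1 offset, and the two statistics shown (contrast and energy) are illustrative choices rather than the paper's full feature set:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Grey level co-occurrence matrix for a horizontal (0 degree,
    distance 1) offset, followed by two classic Haralick statistics:
    contrast and energy (angular second moment)."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of each left/right neighbor pair.
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()
    idx = np.arange(levels)
    di, dj = np.meshgrid(idx, idx, indexing="ij")
    contrast = float(((di - dj) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    return contrast, energy
```

A flat region yields zero contrast and maximal energy, while a fine checkerboard texture yields high contrast; such statistics are what separate normal from CA-ringed iris textures in the paper's pipeline.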