
    Work design improvement at Miroad Rubber Industries Sdn. Bhd.

    Erul Food Industries, known as Salaiport Industry, is a family-owned company established in July 2017. Salaiport Industry has moved to a new location at Pedas, Negeri Sembilan; previously it operated in-house at Pagoh, Johor. This small company's main business is producing frozen smoked meats: smoked beef, smoked quail, smoked catfish and smoked duck, with smoked beef as the main frozen product. The quantity of frozen smoked meat produced by Salaiport Industry depends on customer demand. The company usually produces 40 kg to 60 kg a day and operates four to five days a week, so it produces approximately 80 kg to 120 kg per week. One complete production cycle usually takes two days, because on the first day the company only receives the meat from the supplier and freezes it for use the next day.

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies the various ways to represent image data in the most compact and efficient manner possible while still allowing the image to be reproduced without any loss. One of the most efficient strategies used in lossless compression is to introduce entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is simply and reversibly applied to continuous-tone images for the purpose of extracting differences between neighboring pixels, and its implementation does not introduce data expansion. Traditional as well as innovative prediction methods are included for the creation of inputs for the exclusive-or based decorrelation filter. The results of the filter are then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase of the algorithm because it does not require the storage of code tables or probabilities and because it is lower in complexity than other popular options such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch dictionary coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification is that the coder can use multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or based decorrelation filter, when combined with the modified Lempel-Ziv-Welch dictionary coder, provides compression comparable to algorithms that represent the current standard in lossless compression. The proposed algorithm's compression performance is below the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but above the Zip implementation of the Deflate algorithm by 24%. The proposed algorithm uses the exclusive-or operator in the modeling phase and modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
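
    As a rough sketch of the decorrelation idea described above (not the paper's exact pipeline, and assuming an arbitrary left-neighbour predictor), the Python snippet below XORs each pixel with its predicted neighbour; the mapping is exactly reversible and does not expand the data. The modified Lempel-Ziv-Welch coding stage is omitted.

        import numpy as np

        def xor_decorrelate(image: np.ndarray) -> np.ndarray:
            """XOR each pixel with its left neighbour (first column left unchanged).

            A hypothetical stand-in for the prediction step: any predictor works,
            as long as the decoder can recompute the same prediction.
            """
            out = image.copy()
            out[:, 1:] ^= image[:, :-1]   # residuals cluster near 0 for smooth images
            return out

        def xor_recorrelate(residual: np.ndarray) -> np.ndarray:
            """Invert xor_decorrelate exactly; no information is lost or expanded."""
            out = residual.copy()
            for col in range(1, out.shape[1]):
                out[:, col] ^= out[:, col - 1]
            return out

        if __name__ == "__main__":
            img = np.random.randint(0, 256, size=(4, 6), dtype=np.uint8)
            assert np.array_equal(xor_recorrelate(xor_decorrelate(img)), img)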

    Compression of Textual Column-Oriented Data

    Column-oriented data are well suited for compression. Since values of the same column are stored contiguously on disk, the information entropy is lower than in the physical data organization of conventional databases. There are many useful light-weight compression techniques targeted at specific data types and domains, such as integers and small lists of distinct values, respectively. However, compression of textual values formed by skewed and high-cardinality words is usually restricted to variations of the LZ compression algorithm. So far there are no empirical evaluations that verify how other sophisticated compression methods handle columnar data that stores text. In this paper we shed light on this subject by revisiting the concepts behind those algorithms. We also analyse how they behave in terms of compression ratio and speed when dealing with textual columns whose values appear in adjacent positions.
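
    As a minimal sketch of one of the light-weight techniques mentioned above, the snippet below dictionary-encodes a textual column, replacing each string with a small integer code; the function names and sample data are illustrative only. This scheme works well for low-cardinality columns, which is why skewed, high-cardinality text usually falls back to LZ-style compression instead.

        def dictionary_encode(column):
            """Replace each string value with an integer code (light-weight columnar scheme)."""
            dictionary = {}          # value -> code
            codes = []
            for value in column:
                if value not in dictionary:
                    dictionary[value] = len(dictionary)
                codes.append(dictionary[value])
            return dictionary, codes

        def dictionary_decode(dictionary, codes):
            """Rebuild the original column from the dictionary and the code stream."""
            inverse = {code: value for value, code in dictionary.items()}
            return [inverse[c] for c in codes]

        column = ["paris", "rome", "paris", "paris", "oslo", "rome"]
        d, codes = dictionary_encode(column)
        assert dictionary_decode(d, codes) == column   # lossless
        print(d, codes)   # {'paris': 0, 'rome': 1, 'oslo': 2} [0, 1, 0, 0, 2, 1]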

    An N-Square Approach for Reduced Complexity Non-Binary Encoding

    There is always a need for data compression to facilitate easy transmission and storage. Several lossy and lossless techniques have been developed over the past few decades. Lossless techniques allow compression without any loss of information. In this paper, we propose a new algorithm for lossless compression. Our experimental results show that the proposed algorithm performs compression in fewer iterations than existing non-binary Huffman coding without affecting the average number of digits required to represent the symbols, thereby reducing the complexity of the compression process.
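
    For context, the baseline against which the proposed method is compared is non-binary (k-ary) Huffman coding; a sketch of that baseline, assuming a ternary code alphabet and illustrative frequencies, is shown below. The paper's N-square algorithm itself is not reproduced here.

        import heapq

        def kary_huffman_lengths(freqs, k=3):
            """Code lengths (number of k-ary digits per symbol) from standard
            non-binary Huffman coding; the baseline only, not the proposed method."""
            # Pad with zero-frequency dummy symbols so every merge combines exactly k nodes.
            heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
            for _ in range((1 - len(heap)) % (k - 1)):
                heap.append((0, len(heap), [None]))
            heapq.heapify(heap)
            depth = {s: 0 for s in freqs}
            tie = len(heap)                  # unique tiebreaker so tuples never compare lists
            while len(heap) > 1:
                total, members = 0, []
                for _ in range(k):
                    f, _, syms = heapq.heappop(heap)
                    total += f
                    members += syms
                for s in members:
                    if s is not None:
                        depth[s] += 1        # each merge adds one digit to the member's code
                heapq.heappush(heap, (total, tie, members))
                tie += 1
            return depth

        print(kary_huffman_lengths({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))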

    Optimized Adaptive Huffman Coding for PAPR Reduction in OFDM Systems

    The main drawback of OFDM systems is their high peak-to-average power ratio (PAPR). Adaptive Huffman coding is applied to decrease the PAPR. At the transmitter side, the data are encoded with two techniques, Huffman coding and Adaptive Huffman coding, and mapped with 16-QAM and 16-PSK. The PAPR results of Huffman and Adaptive Huffman coding with 16-QAM and 16-PSK are compared. Simulation results show that Adaptive Huffman coding combined with 16-QAM yields a lower PAPR than Huffman coding and than Adaptive Huffman coding with 16-PSK.
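
    As an illustration of the metric being optimized (a toy example, not the paper's transmitter chain), the snippet below maps random data to 16-QAM symbols, forms one OFDM symbol with an IFFT, and computes its PAPR in dB; the subcarrier count and mapping details are assumptions.

        import numpy as np

        def papr_db(signal: np.ndarray) -> float:
            """Peak-to-average power ratio of a complex baseband signal, in dB."""
            power = np.abs(signal) ** 2
            return 10 * np.log10(power.max() / power.mean())

        rng = np.random.default_rng(seed=1)
        n_subcarriers = 64                                    # assumed value, not from the paper

        # Illustrative 16-QAM mapping: independent 4-level I and Q components, unit average power.
        levels = np.array([-3.0, -1.0, 1.0, 3.0])
        symbols = (rng.choice(levels, n_subcarriers)
                   + 1j * rng.choice(levels, n_subcarriers)) / np.sqrt(10)

        ofdm_symbol = np.fft.ifft(symbols)                    # one OFDM symbol in the time domain
        print(f"PAPR = {papr_db(ofdm_symbol):.2f} dB")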

    More Efficient Algorithms and Analyses for Unequal Letter Cost Prefix-Free Coding

    There is a large literature devoted to the problem of finding an optimal (min-cost) prefix-free code with an encoding alphabet of unequal letter costs. While there is no known polynomial-time algorithm for solving the problem optimally, there are many good heuristics that all provide additive errors with respect to optimal. The additive error in these algorithms usually depends linearly upon the largest encoding letter size. This paper was motivated by the problem of finding optimal codes when the encoding alphabet is infinite. Because the largest letter cost is then infinite, the previous analyses could give infinite error bounds. We provide a new algorithm that works with infinite encoding alphabets. When restricted to the finite-alphabet case, our algorithm often provides better error bounds than the best previously known. Comment: 29 pages; 9 figures
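
    For intuition about the objective only (a toy under assumed Morse-like letter costs, not the paper's algorithm): with unequal letter costs, the cost of a codeword is the sum of its letter costs, and the goal is to minimize the probability-weighted cost over all prefix-free codes, as the small example below shows.

        # Toy illustration of the unequal letter-cost objective (not the paper's algorithm).
        letter_cost = {'.': 1, '-': 3}          # hypothetical costs of the two encoding letters

        def codeword_cost(word: str) -> int:
            return sum(letter_cost[c] for c in word)

        def expected_cost(code: dict, probs: dict) -> float:
            """Probability-weighted cost of a prefix-free code: sum of p_i * cost(codeword_i)."""
            return sum(probs[s] * codeword_cost(w) for s, w in code.items())

        probs = {'a': 0.5, 'b': 0.3, 'c': 0.2}
        # Two prefix-free candidates over the same letters; the second assigns the cheap
        # letter to the most probable symbol, lowering the expected cost.
        code_1 = {'a': '-', 'b': '.-', 'c': '..'}
        code_2 = {'a': '.', 'b': '-.', 'c': '--'}
        print(expected_cost(code_1, probs), expected_cost(code_2, probs))   # 3.1 vs 2.9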

    Binary image compression using run length encoding and multiple scanning techniques

    While run length encoding is a popular technique for binary image compression, a raster (line-by-line) scanning technique is almost always assumed, and scant attention has been given to the possibility of using other techniques to scan an image as it is encoded. This thesis looks at five different image scanning techniques and how their relationship to image features and scanning density (resolution) affects the overall compression that can be achieved with run length encoding. This thesis also compares the performance of run length encoding with an application of Huffman coding for binary image compression. To realize these goals, a complete system of computer routines, the Image Scanning and Compression (ISC) System, has been developed and is now available for continued research in the area of binary image compression.
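
    As a minimal sketch of the baseline technique, assuming a raster scan only (the thesis's other scanning orders and the ISC system are not reproduced here), the code below flattens a binary image line by line and run-length encodes the resulting bit sequence.

        import numpy as np

        def rle_raster(image: np.ndarray):
            """Run-length encode a binary image using a raster (row-by-row) scan.

            Returns the value of the first pixel plus the list of run lengths;
            alternative scan orders would only change how the image is flattened.
            """
            bits = image.flatten()                   # raster scan: row by row
            runs, count = [], 1
            for prev, cur in zip(bits[:-1], bits[1:]):
                if cur == prev:
                    count += 1
                else:
                    runs.append(count)
                    count = 1
            runs.append(count)
            return int(bits[0]), runs

        image = np.array([[0, 0, 1, 1, 1],
                          [1, 0, 0, 0, 1]], dtype=np.uint8)
        print(rle_raster(image))   # (0, [2, 4, 3, 1]) -> two 0s, four 1s, three 0s, one 1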