1,372 research outputs found

    Multiple bottlenecks sorting criterion at initial sequence in solving permutation flow shop scheduling problem

    This paper proposes a heuristic that applies a bottleneck-based concept to initial sequence determination with the objective of makespan minimization. Earlier studies found that scheduling becomes complicated when the number of machines m exceeds 2, at which point the problem is NP-hard. To date, the Nawaz-Enscore-Ham (NEH) algorithm is still recognized as the best heuristic for makespan minimization in this scheduling environment, so this study treated NEH as the benchmark heuristic for evaluation. The bottleneck-based approach was used to identify the critical processing machine that leads to high completion times. An experiment involving m = 4 machines and n jobs (n = 6, 10, 15, 20) was simulated in Microsoft Excel simple programming to solve the permutation flowshop scheduling problem. The overall computational results demonstrated that the bottleneck machine M4 performed best in minimizing the makespan across all problem data sets.
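    The abstract does not spell out the NEH procedure it benchmarks against; as background, a minimal Python sketch of the standard NEH insertion heuristic and the flowshop makespan recurrence it relies on might look as follows (the job/machine data are made up, not the paper's test set).

```python
# Minimal sketch of the classic NEH heuristic for permutation flowshop
# makespan minimization (illustrative background; not the paper's
# bottleneck-based variant).

def makespan(sequence, p):
    """Makespan of `sequence`, where p[job][machine] are processing times."""
    m = len(p[0])
    completion = [0.0] * m
    for job in sequence:
        for k in range(m):
            prev_machine = completion[k - 1] if k > 0 else 0.0
            # C[j][k] = max(C[j-1][k], C[j][k-1]) + p[j][k]
            completion[k] = max(completion[k], prev_machine) + p[job][k]
    return completion[-1]

def neh(p):
    """NEH: order jobs by decreasing total processing time, then insert each
    job at the position that minimizes the partial makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for job in order[1:]:
        candidates = [seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

if __name__ == "__main__":
    # 6 jobs x 4 machines of made-up processing times (hypothetical data).
    p = [[5, 3, 6, 2], [4, 7, 2, 5], [3, 2, 4, 6],
         [6, 4, 3, 3], [2, 5, 5, 4], [4, 3, 2, 7]]
    print(neh(p))
```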

    Zerotree design for image compression: toward weighted universal zerotree coding

    We consider the problem of optimal, data-dependent zerotree design for use in weighted universal zerotree codes for image compression. A weighted universal zerotree code (WUZC) is a data compression system that replaces the single, data-independent zerotree of Said and Pearlman (see IEEE Transactions on Circuits and Systems for Video Technology, vol.6, no.3, p.243-50, 1996) with an optimal collection of zerotrees for good image coding performance across a wide variety of possible sources. We describe the weighted universal zerotree encoding and design algorithms but focus primarily on the problem of optimal, data-dependent zerotree design. We demonstrate the performance of the proposed algorithm by comparing, at a variety of target rates, the performance of a Said-Pearlman-style code using the standard zerotree to the performance of the same code using a zerotree designed with our algorithm. The comparison is made without entropy coding. The proposed zerotree design algorithm achieves, on a collection of combined text and gray-scale images, up to 4 dB performance improvement over a Said-Pearlman zerotree.
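    As background for the zerotree structures being designed here, a toy Python sketch of the basic zerotree test (a node is a zerotree root when its coefficient and every descendant are insignificant against the current threshold) could look like the following; the tree layout and coefficient values are hypothetical, and this is not the weighted universal design algorithm itself.

```python
# Toy illustration of the zerotree concept used by EZW/SPIHT-style coders:
# a node is a "zerotree root" when its coefficient and every descendant's
# coefficient are insignificant with respect to the current threshold.
# The tree shape and coefficient values below are hypothetical.

def is_zerotree_root(node, coeff, children, threshold):
    if abs(coeff[node]) >= threshold:
        return False
    return all(
        is_zerotree_root(child, coeff, children, threshold)
        for child in children.get(node, ())
    )

if __name__ == "__main__":
    # Coefficients indexed by node id; `children` maps parent -> child ids.
    coeff = {0: 34.0, 1: 3.0, 2: -2.0, 3: 1.5, 4: 0.5, 5: -1.0, 6: 20.0}
    children = {0: [1, 2, 6], 1: [3, 4], 2: [5]}
    for root in (1, 2, 6):
        print(root, is_zerotree_root(root, coeff, children, threshold=4.0))
```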

    Hyperspectral image compression: adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding

    Hyperspectral images present specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and some wavelet-based compression algorithms have been used successfully for hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. It is then shown that a specific fixed decomposition has almost the same performance while being more practical in terms of complexity, and that this decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas with different statistical properties.
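    A minimal numpy sketch of what an anisotropic 3-D decomposition (deeper along the spectral axis than along the spatial axes) can look like is shown below; the Haar filter and the level counts are illustrative assumptions, not the decomposition optimized in the paper.

```python
import numpy as np

def haar_step(x, axis):
    """One Haar analysis step along `axis` (length assumed even):
    returns the (approximation, detail) halves."""
    x = np.moveaxis(x, axis, 0)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.moveaxis(approx, 0, axis), np.moveaxis(detail, 0, axis)

def anisotropic_decompose(cube, spectral_levels=4, spatial_levels=2):
    """Decompose the spectral axis (axis 0) more deeply than the two spatial
    axes, always re-splitting the running approximation band."""
    details = []
    approx = cube
    for _ in range(spectral_levels):          # deep 1-D spectral splits
        approx, d = haar_step(approx, axis=0)
        details.append(("spectral", d))
    for _ in range(spatial_levels):           # shallower 2-D spatial splits
        lo, hi = haar_step(approx, axis=1)
        ll, lh = haar_step(lo, axis=2)
        hl, hh = haar_step(hi, axis=2)
        details.append(("spatial", (lh, hl, hh)))
        approx = ll
    return approx, details

if __name__ == "__main__":
    cube = np.random.rand(64, 16, 16)     # bands x rows x cols, made-up sizes
    approx, details = anisotropic_decompose(cube)
    print(approx.shape, len(details))     # (4, 4, 4) and 6 detail groups
```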

    Best Basis for Joint Representation: the Median of Marginal Best Bases for Low Cost Information Exchanges in Distributed Signal Representation

    The paper addresses the selection of the best representations for distributed and/or dependent signals. Given an indexed, tree-structured library of bases and a semi-collaborative distribution scheme with minimum information exchange (emission and reception of one single index corresponding to a marginal best basis), the paper proposes the median basis computed on a set of marginal best bases for joint representation or fusion of distributed/dependent signals. The paper provides algorithms for computing this median basis with respect to standard tree-structured libraries of bases such as wavelet packet bases or cosine trees. These algorithms are effective when an additive information cost is under consideration. Experimental results on distributed signal compression confirm worthwhile properties of the median of marginal best bases with respect to the ideal best joint basis, the latter being underdetermined in practice except when a full collaboration scheme is considered.
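    For context, the marginal best-basis searches that feed the proposed median are typically the standard bottom-up comparison under an additive cost (Coifman-Wickerhauser style); a small Python sketch under that assumption is given below, over a hypothetical packet tree of coefficient arrays.

```python
import numpy as np

def additive_cost(coeffs, eps=1e-12):
    """Additive, entropy-like information cost of a coefficient array."""
    e = coeffs ** 2
    p = e / (e.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def best_basis(tree, node=""):
    """Bottom-up best-basis search on a binary packet tree.

    `tree` maps a node label ("" = root, then "0"/"1" children, "00", ...)
    to its coefficient array. Returns (cost, selected node labels)."""
    left, right = node + "0", node + "1"
    own = additive_cost(tree[node])
    if left not in tree:                      # leaf: keep this node
        return own, [node]
    lc, lnodes = best_basis(tree, left)
    rc, rnodes = best_basis(tree, right)
    if lc + rc < own:                         # children beat the parent
        return lc + rc, lnodes + rnodes
    return own, [node]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 2-level packet tree of coefficient arrays.
    tree = {"": rng.normal(size=64),
            "0": rng.normal(size=32), "1": rng.normal(size=32),
            "00": rng.normal(size=16), "01": rng.normal(size=16),
            "10": rng.normal(size=16), "11": rng.normal(size=16)}
    print(best_basis(tree))
```

    In the semi-collaborative scheme described above, each sensor would transmit only the index of its own marginal best basis; the median-of-bases computation itself, which is the paper's contribution, is not sketched here.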

    Quality-Driven Cross Layer Design for Multimedia Security over Resource Constrained Wireless Sensor Networks

    The strong need for security guarantees, e.g., integrity and authenticity, as well as privacy and confidentiality in wireless multimedia services has driven the development of an emerging research area in low-cost Wireless Multimedia Sensor Networks (WMSNs). Unfortunately, conventional encryption and authentication techniques cannot be applied directly to WMSNs due to inherent challenges such as extremely limited energy, computing, and bandwidth resources. This dissertation provides a quality-driven security design and resource allocation framework for WMSNs. Its contribution bridges the interdisciplinary gap between high-layer multimedia signal processing and low-layer computer networking. It formulates the generic problem of quality-driven multimedia resource allocation in WMSNs and proposes a cross-layer solution. The fundamental methodologies of multimedia selective encryption and stream authentication, and their application to digital image and video compression standards, are presented. New multimedia selective encryption and stream authentication schemes are proposed at the application layer, which significantly reduce encryption/authentication complexity. In addition, network resource allocation methodologies at the lower layers are studied extensively. An unequal error protection-based network resource allocation scheme is proposed to achieve best-effort media quality with integrity and energy-efficiency guarantees. Performance evaluation results show that this cross-layer framework achieves a considerable energy-quality-security gain by jointly designing multimedia selective encryption/stream authentication and communication resource allocation.
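    To make the idea of selective encryption concrete, a toy Python sketch is given below: only a perceptually critical coefficient block is XOR-encrypted, leaving the rest of the bitstream in the clear. The SHA-256 counter-mode keystream and the choice of block are illustrative assumptions, not the dissertation's scheme.

```python
import hashlib
import numpy as np

def keystream(key: bytes, nbytes: int) -> bytes:
    """SHA-256 counter-mode keystream (toy construction, illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def xor_block(coeffs: np.ndarray, key: bytes) -> np.ndarray:
    """XOR-encrypt (or decrypt) one coefficient block; a selective scheme
    would leave the remaining coefficients unencrypted."""
    raw = np.asarray(coeffs, dtype=np.float32).tobytes()
    mixed = bytes(a ^ b for a, b in zip(raw, keystream(key, len(raw))))
    return np.frombuffer(mixed, dtype=np.float32).reshape(np.shape(coeffs))

if __name__ == "__main__":
    # Pretend this 8x8 block is the perceptually critical approximation
    # subband of a transformed frame (an illustrative choice).
    block = np.random.rand(8, 8).astype(np.float32)
    encrypted = xor_block(block, key=b"demo key")
    decrypted = xor_block(encrypted, key=b"demo key")  # XOR is its own inverse
    print(np.array_equal(decrypted, block))             # True
```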

    Adaptive Variable Degree-k Zero-Trees for Re-Encoding of Perceptually Quantized Wavelet-Packet Transformed Audio and High Quality Speech

    This paper proposes a fast, efficient, and scalable algorithm, called "adaptive variable degree-k zero-trees" (AVDZ), for re-encoding perceptually quantized wavelet-packet transform (WPT) coefficients of audio and high-quality speech. The quantization process takes into account some basic perceptual considerations and achieves good subjective quality with low complexity. The performance of the proposed AVDZ algorithm is compared with two other zerotree-based schemes: (1) the embedded zerotree wavelet (EZW) coder and (2) set partitioning in hierarchical trees (SPIHT). Since EZW and SPIHT are designed for image compression, some modifications are incorporated into these schemes to better match them to audio signals. It is shown that the proposed modifications can improve their performance by about 15-25%. Furthermore, it is concluded that the proposed AVDZ algorithm outperforms these modified versions in terms of both average output bit-rates and computation times.
    Comment: 30 pages (double spaced), 15 figures, 5 tables, ISRN Signal Processing (in press)

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
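    For reference, the Slepian–Wolf theorem underpinning DVC states that two correlated sources X and Y can be encoded separately and decoded jointly without loss whenever the rates satisfy:

```latex
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y)
```

    Wyner–Ziv extends this result to lossy coding with side information available at the decoder, which is what allows DVC to keep the sensor-node encoder lightweight and shift most of the computational burden to the decoder.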

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.