3,135 research outputs found

    Multiple bottlenecks sorting criterion at initial sequence in solving permutation flow shop scheduling problem

    This paper proposes a heuristic that applies a bottleneck-based concept to the determination of the initial sequence, with the objective of makespan minimization. Earlier studies found that scheduling becomes complicated when more than two machines are involved (m > 2), a case known to be NP-hard. To date, the Nawaz-Enscore-Ham (NEH) algorithm is still recognized as the best-performing heuristic for makespan minimization in this environment, so this study treated it as the benchmark for evaluation. The bottleneck-based approach was used to identify the critical processing machine that leads to a high completion time. An experiment with four machines (m = 4) and n jobs (n = 6, 10, 15, 20) was simulated using simple programming in Microsoft Excel to solve the permutation flowshop scheduling problem. The overall computational results demonstrated that the bottleneck machine M4 performed best in minimizing the makespan for all problem data sets.
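    The makespan recurrence and the NEH insertion step behind this kind of study can be illustrated with a short Python sketch. The processing times below are invented toy data, and the sketch shows plain NEH rather than the bottleneck-based initial ordering proposed in the paper.

        def makespan(sequence, proc_times):
            """Completion time of the last job on the last machine for a given job order."""
            m = len(proc_times[0])
            completion = [0.0] * m  # completion[k]: finish time of the previous job on machine k
            for job in sequence:
                for k in range(m):
                    start = max(completion[k], completion[k - 1] if k > 0 else 0.0)
                    completion[k] = start + proc_times[job][k]
            return completion[-1]

        def neh(proc_times):
            """NEH: sort jobs by decreasing total processing time, then insert each job
            at the position of the partial sequence that minimises the makespan."""
            n = len(proc_times)
            order = sorted(range(n), key=lambda j: -sum(proc_times[j]))
            seq = [order[0]]
            for job in order[1:]:
                seq = min((seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
                          key=lambda s: makespan(s, proc_times))
            return seq, makespan(seq, proc_times)

        # Toy instance: 6 jobs on 4 machines (invented processing times).
        times = [[5, 3, 6, 4], [2, 7, 3, 5], [4, 4, 4, 6],
                 [6, 2, 5, 3], [3, 5, 2, 7], [7, 3, 4, 2]]
        print(neh(times))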

    Improving fusion of surveillance images in sensor networks using independent component analysis


    A reduced-reference perceptual image and video quality metric based on edge preservation

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric that accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback to the system controller. The original image/video sequence, prior to compression and transmission, is usually not available at the receiver side, so the receiver must rely on an objective video quality metric that needs no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to the edge and contour information of an image underpins our proposed reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric. © 2012 Martini et al.
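    As a rough illustration of comparing edge information between a reference and a distorted image, the sketch below computes a generic edge-preservation score. It is not the specific RR metric of the paper (which transmits only reduced side information); the gradient edge detector and the threshold choice are illustrative assumptions.

        import numpy as np

        def edge_map(img):
            """Gradient-magnitude edge map (a stand-in for any edge detector)."""
            gy, gx = np.gradient(img.astype(float))
            return np.hypot(gx, gy)

        def edge_preservation_score(reference, distorted):
            """Fraction of reference edge pixels that survive in the distorted image."""
            e_ref = edge_map(reference)
            e_dis = edge_map(distorted)
            thresh = e_ref.mean() + e_ref.std()  # crude, reference-derived edge threshold
            b_ref, b_dis = e_ref > thresh, e_dis > thresh
            return np.logical_and(b_ref, b_dis).sum() / max(int(b_ref.sum()), 1)

        # Tiny demo: a step edge and a horizontally blurred version of it.
        ref = np.zeros((64, 64))
        ref[:, 32:] = 1.0
        blurred = (ref + np.roll(ref, 1, axis=1) + np.roll(ref, -1, axis=1)) / 3.0
        print(edge_preservation_score(ref, blurred))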

    3D medical volume segmentation using hybrid multiresolution statistical approaches

    This article is available through the Brunel Open Access Publishing Fund. Copyright © 2010 S. AlZu’bi and A. Amira. 3D volume segmentation is the process of partitioning voxels into 3D regions (subvolumes) that represent meaningful physical entities and are easier to analyze and use in subsequent applications. Multiresolution analysis (MRA) enables an image to be represented at several levels of resolution or blurring; because of this property, wavelets have been deployed in image compression, denoising, and classification. This paper focuses on the implementation of efficient medical volume segmentation techniques. Multiresolution analysis, including 3D wavelet and ridgelet transforms, is used for feature extraction, and the extracted features are modeled with Hidden Markov Models (HMMs) to segment the volume slices. A comparative study of 2D and 3D techniques reveals that the 3D methodologies can accurately detect the region of interest (ROI). Automatic segmentation is achieved using HMMs; the ROI is detected accurately, but the computation is time-consuming.
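    A minimal sketch of the general pipeline, 3D wavelet feature extraction followed by statistical labelling, is given below. It uses a synthetic volume, PyWavelets' dwtn for the 3D decomposition, and a two-component Gaussian mixture as a simple stand-in for the HMM labelling step described in the paper.

        import numpy as np
        import pywt
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)

        # Synthetic 3D "volume": noisy background with a brighter cubic region of interest.
        volume = rng.normal(0.0, 1.0, (32, 32, 32))
        volume[8:24, 8:24, 8:24] += 3.0

        # One level of 3D wavelet decomposition: eight subbands ('aaa' ... 'ddd').
        subbands = pywt.dwtn(volume, "haar")

        # Per-voxel feature vector at the coarse resolution: one value from each subband.
        features = np.stack([subbands[k] for k in sorted(subbands)], axis=-1)
        X = features.reshape(-1, features.shape[-1])

        # Two-class Gaussian mixture as a simple stand-in for the HMM labelling step.
        labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
        label_volume = labels.reshape(features.shape[:3])

        # Which label corresponds to the ROI is arbitrary; inspect the cluster sizes.
        print("voxels assigned to each class:", np.bincount(label_volume.ravel()))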

    Low Bit-rate Color Video Compression using Multiwavelets in Three Dimensions

    In recent years, wavelet-based video compression has become a major focus of research because of the advantages it provides. More recently, a growing body of studies has explored the use of multiple scaling functions and multiple wavelets with desirable properties in various fields, from image de-noising to compression. In terms of data compression, multiple scaling functions and wavelets offer greater flexibility in coefficient quantization at high compression ratios than a comparable single wavelet. The purpose of this research is to investigate the possible improvement of scalable wavelet-based color video compression at low bit rates by using three-dimensional multiwavelets. The first part of this work covered the development of the spatio-temporal decomposition process for multiwavelets and the implementation of an efficient 3D SPIHT encoder/decoder as a common platform for evaluating two well-known multiwavelet systems against a comparable single wavelet in low bit-rate color video compression. The second part involved the development of a motion-compensated 3D compression codec and a modified SPIHT algorithm designed specifically for this codec by incorporating an advantage of the 2D SPIHT design into the 3D SPIHT coder. In an experiment comparing their performance, the 3D motion-compensated codec with unmodified 3D SPIHT achieved gains of 0.3 dB to 4.88 dB over a conventional 2D wavelet-based motion-compensated codec using 2D SPIHT in the coding of 19 endoscopy sequences at a compression ratio of 1/40. The effectiveness of the modified SPIHT algorithm was verified in a second experiment, in which it was used to re-encode the 4 of the 19 sequences with the lowest performance gains and improved them by 0.5 dB to 1.0 dB. The last part of the investigation examined the effect of multiwavelet packets on 3D video compression, as well as the effects of coding multiwavelet packets according to the frequency order and energy content of individual subbands.

    The JPEG2000 still image compression standard

    The development of standards (emerging and established) by the International Organization for Standardization (ISO), the International Telecommunications Union (ITU), and the International Electrotechnical Commission (IEC) for audio, image, and video, for both transmission and storage, has led to worldwide activity in developing hardware and software systems and products applicable to a number of diverse disciplines [7], [22], [23], [55], [56], [73]. Although the standards implicitly address the basic encoding operations, there is freedom and flexibility in the actual design and development of devices. This is because the standards specify only the syntax and semantics of the bit stream for decoding, their main objective being compatibility and interoperability among systems (hardware/software) manufactured by different companies. There is, thus, much room for innovation and ingenuity. Since the mid-1980s, members from both the ITU and the ISO have been working together to establish a joint international standard for the compression of grayscale and color still images. This effort has been known as JPEG, the Joint Photographic Experts Group.

    MRI On the Fly: Accelerating MRI Imaging Using LDA Classification with LDB Feature Extraction

    To improve MRI acquisition time, we explored the use of linear discriminant analysis (LDA) and local discriminant bases (LDB) for the task of classifying MRI images from a minimal set of signal acquisitions. Our algorithm has both off-line and on-line components. The off-line component uses the k-basis algorithm to partition a set of training images (all from a particular region of a patient) into classes. For each class, we find a basis by applying the best-basis algorithm to the images in that class, and we keep these bases for use by the on-line process. We then apply LDB to the training set with the class assignments, determining the best discriminant basis for the set. We rank the basis coordinates according to discriminating power and retain the top M coordinates, which index the basis functions with the most discriminating capability, for the on-line algorithm. Finally, we train LDA on these transformed coordinates, producing a classifier for the images. With the off-line requirements complete, we can take advantage of the simplicity and speed of the on-line mechanism to acquire an image in a similar region of the patient. We need to acquire only the M important coordinates of the image in the discriminant basis to create a "scout image." This image, which can be acquired quickly since M is much smaller than the number of measurements needed to fill in the values of the 256 by 256 pixels, is then sent through the classifier furnished by LDA, which in turn assigns a class to the image. Returning to the list of bases that we kept from the k-basis algorithm, we find the optimal basis for the particular class at hand. We then acquire the image using that optimal basis, omitting the coefficients with the least truncation error. The complete image can then be quickly reconstructed using the inverse wavelet packet transform. The power of our algorithm is that the on-line task is fast and simple, while the computational complexity lies mostly in the off-line task, which needs to be done only once for images in a certain region. In addition, our algorithm only makes use of the flexibility of existing MRI hardware, so no modifications to the hardware design are needed.
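    The off-line/on-line split described above can be sketched on synthetic data: rank coordinates by a Fisher-style discriminating score (a stand-in for the LDB ranking), keep the top M, and train an LDA classifier on them; the scout classification then needs only those M coordinates. The data, the score, and the scikit-learn classifier below are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)

        # Fake "training images" from two classes, flattened to coefficient vectors.
        # In the paper these would be discriminant-basis coefficients of MRI slices.
        n_per_class, n_coeffs, M = 50, 1024, 32
        class0 = rng.normal(0.0, 1.0, (n_per_class, n_coeffs))
        class1 = rng.normal(0.0, 1.0, (n_per_class, n_coeffs))
        class1[:, :64] += 1.5  # a few coordinates carry the class difference
        X = np.vstack([class0, class1])
        y = np.repeat([0, 1], n_per_class)

        # Rank coordinates by a simple Fisher-style score and keep the top M.
        mu0, mu1 = class0.mean(0), class1.mean(0)
        var = class0.var(0) + class1.var(0) + 1e-12
        score = (mu0 - mu1) ** 2 / var
        top = np.argsort(score)[::-1][:M]

        # Off-line: train LDA on the retained coordinates.
        clf = LinearDiscriminantAnalysis().fit(X[:, top], y)

        # On-line: a "scout" acquisition measures only the M retained coordinates.
        scout = rng.normal(0.0, 1.0, n_coeffs)
        scout[:64] += 1.5  # simulate a class-1 scan
        print("predicted class:", clf.predict(scout[top][None, :])[0])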

    Using wavelet packet transformation for image compression

    The need for storing and transferring digital image data is still growing. Compression is necessary to make effective use of transmission and storage capacity. This Bachelor's Thesis deals with a compression method based on the wavelet packet transform, which is derived from the wavelet transform. It focuses especially on selecting the best basis from the full wavelet packet tree. The thesis compares six best-basis and near-best-basis selection criteria by R. R. Coifman, M. V. Wickerhauser, and C. Taswell, using a program implemented in the Matlab environment. The program was created only for testing and demonstration purposes and therefore has certain limitations: it processes only black-and-white images of limited resolution. After the best-basis search algorithm is applied with the different criteria, the test image is thresholded with the same coefficient. The mean squared error between the original and compressed image is used to compare the resulting quality. The criterion that achieves the smallest mean squared error is considered the best best-basis search criterion in terms of visual quality. The results show that some of Taswell's functions bring a considerable improvement in visual image quality, at the price of a lower compression ratio compared with the classical Shannon entropy.
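    The best-basis idea the thesis evaluates can be sketched in a few lines: split a signal recursively, compare the entropy cost of a node with the summed cost of its children, and keep whichever is cheaper. The sketch below uses a 1-D Haar packet and a Shannon-style cost purely for illustration; the thesis works on 2-D images and compares six different cost criteria.

        import numpy as np

        def haar_split(x):
            """One level of the orthonormal Haar transform: approximation and detail."""
            x = np.asarray(x, dtype=float)
            a = (x[0::2] + x[1::2]) / np.sqrt(2)
            d = (x[0::2] - x[1::2]) / np.sqrt(2)
            return a, d

        def shannon_cost(c):
            """Shannon-style additive entropy cost of a coefficient vector."""
            p = c[c != 0] ** 2
            return -np.sum(p * np.log(p))

        def best_basis(x, max_level):
            """Keep a node if its cost beats the summed cost of its best children,
            otherwise descend into the children (recursive best-basis search)."""
            if max_level == 0 or len(x) < 2:
                return [x], shannon_cost(x)
            a, d = haar_split(x)
            basis_a, cost_a = best_basis(a, max_level - 1)
            basis_d, cost_d = best_basis(d, max_level - 1)
            own_cost = shannon_cost(x)
            if own_cost <= cost_a + cost_d:
                return [x], own_cost
            return basis_a + basis_d, cost_a + cost_d

        signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)
        leaves, cost = best_basis(signal, max_level=4)
        print(len(leaves), "leaf subbands, total cost", round(cost, 3))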

    Fast watermarking of MPEG-1/2 streams using compressed-domain perceptual embedding and a generalized correlator detector

    A novel technique is proposed for watermarking of MPEG-1 and MPEG-2 compressed video streams. The proposed scheme is applied directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to avoid degradation of the video quality. The watermark is detected without the use of the original video sequence. A modified correlation-based detector is introduced that applies nonlinear preprocessing before correlation. Experimental evaluation demonstrates that the proposed scheme is able to withstand several common attacks. The resulting watermarking system is very fast and therefore suitable for copyright protection of compressed video.
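    The detector idea, correlation preceded by a nonlinear preprocessing step, can be sketched as follows. The additive spread-spectrum embedding, the sign limiter, and the fixed threshold are illustrative assumptions rather than the scheme of the paper, which embeds directly in multiplexed MPEG streams with perceptual scaling.

        import numpy as np

        rng = np.random.default_rng(0)

        def embed(coeffs, watermark, alpha=0.5):
            """Additive spread-spectrum embedding (a real scheme would scale alpha
            per coefficient using a perceptual model)."""
            return coeffs + alpha * watermark

        def detect(coeffs, watermark, threshold=3.0):
            """Correlation detector with nonlinear (sign) preprocessing.
            The hard limiter suppresses heavy-tailed host coefficients before correlating."""
            z = np.sign(coeffs)
            statistic = z @ watermark / np.sqrt(len(watermark))
            return statistic, bool(statistic > threshold)

        host = rng.standard_t(df=3, size=10_000)     # heavy-tailed stand-in for DCT coefficients
        wm = rng.choice([-1.0, 1.0], size=host.size)  # pseudo-random bipolar watermark
        marked = embed(host, wm)

        print(detect(marked, wm))   # statistic well above the threshold: watermark present
        print(detect(host, wm))     # unmarked host: statistic near zero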