
    Improving mobile color 2D-barcode JPEG image readability using DCT coefficient distributions

    Two-dimensional (2D) barcodes are becoming a pervasive interface for mobile devices such as camera smartphones. Often, only monochrome 2D barcodes are used, owing to their robustness in the uncontrolled operating environment of smartphones. Nonetheless, color 2D barcodes for camera smartphones are seeing emerging use. Most smartphones capture and store such 2D-barcode images in the baseline JPEG format. As a lossy compression technique, JPEG introduces a fair amount of error into the captured barcode images. In this paper, we analyze the Discrete Cosine Transform (DCT) coefficient distributions of generalized 2D barcodes whose colored data cells use 4, 8, or 10 colors. Using these distributions, we improve the JPEG compression of such mobile barcode images: by altering the JPEG compression parameters based on the DCT coefficient distribution of the barcode images, our scheme produces JPEG images with higher PSNR than the baseline implementation. We have also applied the improved scheme to a 10-color 2D-barcode system and analyzed its performance against the default and alternative JPEG schemes, finding that it provides a marked improvement in the successful decoding of the 10-color system.
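    The first step of such an analysis, gathering the per-block DCT coefficients whose distribution drives the retuned quantization, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and toy image are hypothetical.

```python
import numpy as np
from scipy.fft import dct

# Hypothetical sketch: collect the 8x8-block DCT coefficients of an image,
# the raw material for the coefficient-distribution analysis described above.
def block_dct_coefficients(img, block=8):
    """Return an array of shape (n_blocks, block, block) of 2D-DCT coefficients."""
    h, w = img.shape
    h, w = h - h % block, w - w % block   # crop to a multiple of the block size
    coeffs = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = img[r:r + block, c:c + block].astype(float) - 128.0  # JPEG level shift
            # separable 2D DCT-II with orthonormal scaling
            d = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
            coeffs.append(d)
    return np.array(coeffs)

# Toy usage on a synthetic flat "data cell" image
img = np.full((16, 16), 200, dtype=np.uint8)
c = block_dct_coefficients(img)
print(c.shape)                       # (4, 8, 8)
# A flat block carries all its energy in the DC coefficient
print(np.allclose(c[0, 1:, :], 0))  # True
```

    From arrays like `c`, one would then histogram each spatial frequency across blocks to obtain the per-frequency distributions used to adjust the quantization tables.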

    Fast watermarking of MPEG-1/2 streams using compressed-domain perceptual embedding and a generalized correlator detector

    A novel technique is proposed for watermarking MPEG-1 and MPEG-2 compressed video streams. The scheme is applied directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to avoid degrading the video quality. The watermark is detected without the use of the original video sequence; a modified correlation-based detector is introduced that applies nonlinear preprocessing before correlation. Experimental evaluation demonstrates that the proposed scheme withstands several common attacks. The resulting watermarking system is very fast and therefore suitable for copyright protection of compressed video.
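    The idea of a correlator with nonlinear preprocessing can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's detector: it uses a sign nonlinearity (one common choice for heavy-tailed host signals) and synthetic Laplacian host data; all names and thresholds are hypothetical.

```python
import numpy as np

# Hedged sketch of a generalized correlator: apply a nonlinearity (here,
# sign) to the received signal before correlating with the pseudorandom
# +/-1 watermark, which improves robustness to heavy-tailed host noise.
def generalized_correlator(signal, wm, threshold=0.05):
    pre = np.sign(signal)          # nonlinear preprocessing step
    stat = np.mean(pre * wm)       # normalized correlation statistic
    return stat, stat > threshold

rng = np.random.default_rng(0)
wm = rng.choice([-1.0, 1.0], size=10_000)
host = rng.laplace(scale=1.0, size=10_000)   # heavy-tailed host signal

stat_absent, _ = generalized_correlator(host, wm)            # no watermark
stat_present, detected = generalized_correlator(host + 0.5 * wm, wm)
print(detected)   # True: watermark found in the marked signal
```

    Because detection only needs the watermark sequence and the received coefficients, no original video is required, consistent with the blind detection described above.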

    Joint temporal and contemporaneous aggregation of random-coefficient AR(1) processes with infinite variance

    We discuss joint temporal and contemporaneous aggregation of N independent copies of a random-coefficient AR(1) process driven by i.i.d. innovations in the domain of normal attraction of an α-stable distribution, 0 < α ≤ 2, as both N and the time scale n tend to infinity, possibly at different rates. Assuming that the tail distribution function of the random autoregressive coefficient regularly varies at the unit root with exponent β > 0, we show that, for β < max(α, 1), the joint aggregate displays a variety of stable and non-stable limit behaviors, with stability index depending on α, β, and the mutual increase rate of N and n. The paper extends the results of Pilipauskaitė and Surgailis (2014) from α = 2 to 0 < α < 2.
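    The aggregation scheme being studied can be made concrete by simulation. The sketch below is illustrative only: it takes the Gaussian case α = 2, picks an example β, and draws the autoregressive coefficient a so that P(a > 1 − x) = x^β near the unit root; the normalization is crude, since the correct scaling depends on α, β, and the N/n rate, which is precisely what the paper determines.

```python
import numpy as np

# Illustrative simulation (alpha = 2, Gaussian innovations): aggregate N
# independent random-coefficient AR(1) paths contemporaneously, then
# accumulate over time. beta is an assumed example value.
def aggregate_ar1(N=200, n=500, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # 1 - a ~ Beta(beta, 1), so P(a > 1 - x) = x**beta: regular variation
    # at the unit root with exponent beta
    a = 1.0 - rng.beta(beta, 1.0, size=N)
    total = np.zeros(n)
    for ai in a:
        eps = rng.standard_normal(n)
        x = np.zeros(n)
        for t in range(1, n):          # X_t = a * X_{t-1} + eps_t
            x[t] = ai * x[t - 1] + eps[t]
        total += x                     # contemporaneous aggregation
    return np.cumsum(total) / (N * n)  # crude joint temporal normalization

path = aggregate_ar1()
print(path.shape)   # (500,)
```

    Varying β relative to α and the growth rates of N and n is what produces the different stable and non-stable limits described above.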

    No-reference quality assessment of H.264/AVC encoded video

    WOS:000283952100005 (Web of Science Accession No.); "Prémio Científico ISCTE-IUL 2011".
    This paper proposes a no-reference quality assessment metric for digital video subject to H.264/Advanced Video Coding (AVC) encoding. The proposed metric comprises two main steps: coding error estimation and perceptual weighting of this error. Error estimates are computed in the transform domain, assuming that discrete cosine transform (DCT) coefficients are corrupted by quantization noise. The DCT coefficient distributions are modeled using Cauchy or Laplace probability density functions, whose parameterization is performed using the quantized coefficient data and quantization steps. Parameter estimation is based on a maximum-likelihood method combined with linear prediction. The linear prediction scheme takes advantage of the correlation between parameter values at neighboring DCT spatial frequencies. As for the perceptual weighting module, it is based on a spatiotemporal contrast sensitivity function applied in the DCT domain that compensates for image-plane movement by considering the movements of the human eye, namely smooth pursuit, natural drift, and saccadic movements. The video-related inputs for the perceptual model are the motion vectors and the frame rate, which are also extracted from the encoded video. Subjective video quality assessment tests have been carried out in order to validate the metric. A set of 11 video sequences, spanning a wide range of content, was encoded at different bitrates, and the outcome was subjected to quality evaluation. Results show that the quality scores computed by the proposed algorithm correlate well with the mean opinion scores obtained in the subjective assessment.
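    The core of the parameter estimation step, fitting a Laplace density to DCT coefficients, can be sketched in simplified form. This is not the paper's estimator: for clarity it uses unquantized data, where the maximum-likelihood scale is just the mean absolute value, and omits both the quantization-aware likelihood and the linear prediction across neighboring frequencies.

```python
import numpy as np

# Simplified sketch: ML fit of the scale b of a zero-mean Laplace(0, b)
# density to a set of DCT coefficients. For unquantized data the ML
# estimate is the mean absolute deviation.
def laplace_ml_scale(coeffs):
    return np.mean(np.abs(coeffs))

rng = np.random.default_rng(1)
sample = rng.laplace(loc=0.0, scale=2.0, size=100_000)
b_hat = laplace_ml_scale(sample)
print(round(b_hat, 2))   # close to the true scale 2.0
```

    In the quantized setting described above, only the reconstruction levels and quantization steps are observed, so the likelihood must integrate the density over each quantization bin rather than evaluate it at the raw samples.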

    Adaptive Non-uniform Compressive Sampling for Time-varying Signals

    In this paper, adaptive non-uniform compressive sampling (ANCS) of time-varying signals that are sparse in a proper basis is introduced. ANCS employs the measurements of previous time steps to distribute the sensing energy among coefficients more intelligently. To this end, a Bayesian inference method is proposed that requires no prior knowledge of the coefficients' importance levels or of the signal's sparsity. Our numerical simulations show that ANCS achieves the desired non-uniform recovery of the signal. Moreover, if the signal is sparse in the canonical basis, ANCS can significantly reduce the number of required measurements.
    Comment: 6 pages, 8 figures, Conference on Information Sciences and Systems (CISS 2017), Baltimore, Maryland.
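    The notion of distributing sensing energy non-uniformly can be sketched as column-wise scaling of a random measurement matrix. This toy is an assumption, not the paper's Bayesian procedure: the importance weights are simply given, standing in for the estimates ANCS would infer from previous time steps.

```python
import numpy as np

# Hedged sketch of non-uniform sensing: scale the columns of a Gaussian
# measurement matrix by per-coefficient importance weights, so more
# sensing energy lands on coefficients believed to be active.
def nonuniform_measurements(x, m, weights, seed=0):
    rng = np.random.default_rng(seed)
    n = x.size
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    A *= weights                  # per-coefficient sensing energy
    return A, A @ x               # measurement matrix and measurements

n, m = 100, 30
x = np.zeros(n)
x[[3, 17, 42]] = [5.0, -2.0, 3.0]            # sparse in the canonical basis
w = np.full(n, 0.2)
w[[3, 17, 42]] = 1.0                         # assumed importance estimates
A, y = nonuniform_measurements(x, m, w)
print(y.shape)   # (30,)
```

    A sparse recovery solver applied to (A, y) would then favor the highly weighted coordinates, which is the non-uniform recovery behavior the simulations above demonstrate.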