
    Design and evaluation of a data-dependent low-power 8x8 DCT/IDCT

    Traditional fast Discrete Cosine Transform (DCT)/Inverse DCT (IDCT) algorithms have focused on reducing arithmetic complexity and have fixed run-time complexity regardless of the input. Recently, data-dependent signal processing has been applied to the DCT/IDCT, yielding algorithms with variable run-time complexity. A new two-dimensional 8 x 8 low-power DCT/IDCT design is implemented in VHDL by applying the data-dependent signal-processing concept to a traditional fixed-complexity fast DCT/IDCT algorithm. To reduce power, the design is based on Loeffler's fast algorithm, which requires a low number of multiplications. On top of that, zero bypassing, data segmentation, input truncation, and hardwired canonical signed-digit (CSD) multipliers are used to reduce run-time computation, and hence the switching activity and power. When synthesized using Canadian Microelectronics Corporation 3-V 0.35 µm CMOS technology, this FDCT/IDCT design consumes 122.7/124.9 mW at a clock frequency of 40 MHz and a processing rate of 320 Msample/s. With other previously reported high-performance FDCT/IDCT designs scaled to the same 0.35 µm technology, the proposed design features lower switching capacitance per sample, i.e., it is more power-efficient.* *This work is supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) postgraduate scholarship and NSERC research grants.
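
    The hardwired CSD multipliers mentioned above replace general multipliers with a few shift-and-add terms. Below is a minimal Python sketch of the idea (not the authors' VHDL): a constant is recoded into canonical signed-digit form, with digits in {-1, 0, +1} and no two adjacent nonzero digits, so multiplication reduces to shifts, adds, and subtracts. The coefficient value is a hypothetical fixed-point example.

        def to_csd(value: int, bits: int = 16) -> list[int]:
            """Recode a non-negative integer into CSD digits, LSB first."""
            digits = []
            while value != 0 and len(digits) < bits:
                if value & 1:                 # odd: emit +1 or -1
                    d = 2 - (value & 3)       # +1 if value % 4 == 1, else -1
                    digits.append(d)
                    value -= d
                else:
                    digits.append(0)
                value >>= 1
            return digits

        def csd_multiply(x: int, digits: list[int]) -> int:
            """Multiply x by a CSD-encoded constant using only shifts/adds."""
            return sum(d * (x << i) for i, d in enumerate(digits) if d != 0)

        # Example: cos(pi/16) ~ 0.9808 in 8-bit fixed point is 251 = 0b11111011,
        # needing 7 add terms in plain binary but only 3 in CSD (256 - 4 - 1).
        digits = to_csd(251)
        assert len([d for d in digits if d]) == 3   # three shift-add terms
        assert csd_multiply(3, digits) == 3 * 251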

    Rate control and constant quality rate control for MPEG video compression and transcoding

    The focus of this thesis is the design of rate-control (RC) algorithms for constant-quality (CQ) video encoding and transcoding, where CQ is measured by the variance of quality in PSNR (peak signal-to-noise ratio). By modeling DCT coefficients as having Laplacian distributions, Laplacian rate and distortion models are developed for MPEG-4 encoding and transcoding. These models accurately estimate the rate and distortion (in PSNR) of MPEG-4 compressed bitstreams. The rate model is applied to a CBR (constant bit rate) encoding algorithm, which offers better or similar PSNR compared to the Q2 [7] algorithm with lower variation in bitrate; thus, it outperforms Q2. These models are then applied to CQ video coding and transcoding. Most CBR control algorithms aim to produce a bitstream that meets a certain bitrate with the highest quality. Due to the non-stationary nature of video sequences, the quality of the compressed sequence changes over time, which is undesirable to end-users. To address this problem, six CQ encoding algorithms are proposed: the first two are VBR (variable bit rate) algorithms with a fixed target quality (FTQ), the next two are CBR algorithms with FTQ, and the last two are CBR algorithms with a dynamic target quality (DTQ). Within each group of two, the quality is controlled either at the frame level (using the Laplacian rate/distortion model) or at the macroblock level (using the actual distortions). With the success of these algorithms, the CQ DTQ encoding algorithms are extended to MPEG-4 video transcoding (bitrate reduction with requantization). These CQ transcoding algorithms handle problems that are unique to transcoders, such as the lack of the original sequence and requantization. Similar to their encoding counterparts, these CQ transcoding algorithms have an extra degree of freedom to balance the quality variation against the accuracy to the target bitrate and the average quality. Simulation results indicate that these algorithms offer lower PSNR variance with similar or lower average PSNR and bitrate when compared with Q2T and TM5T (transcoding versions of Q2 and TM5). Besides the MPEG-4 CQ RC algorithms, an MPEG-2 rate-control algorithm is also developed based on TM5. It aims at improving the subjective quality as measured by Watson's DVQ (digital video quality) metric, and it provides a better DVQ than TM5. However, since Watson's DVQ metric is not a standard way to estimate subjective quality, PSNR is still used in the rest of the research.
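
    A minimal numerical sketch of the Laplacian modeling idea (the thesis's exact model forms are not reproduced here): fit a Laplacian parameter to DCT coefficients, estimate the rate of uniform quantization from the entropy of the bin probabilities, and estimate distortion from the reconstruction error. The 255^2 PSNR reference assumes 8-bit video, and coefficient-domain MSE equals pixel-domain MSE only for an orthonormal DCT.

        import numpy as np

        def laplacian_rate_distortion(coeffs, q):
            """Estimate rate (bits/coefficient) and PSNR when Laplacian-modeled
            DCT coefficients are uniformly quantized with step q."""
            lam = 1.0 / np.mean(np.abs(coeffs))          # ML Laplacian parameter
            k = np.arange(-2048, 2049)                   # quantizer bin indices
            cdf = lambda x: np.where(x < 0, 0.5 * np.exp(lam * x),
                                     1.0 - 0.5 * np.exp(-lam * x))
            p = cdf((k + 0.5) * q) - cdf((k - 0.5) * q)  # Laplacian bin masses
            p = p[p > 0]
            rate = -np.sum(p * np.log2(p))               # entropy of bin indices
            mse = np.mean((coeffs - np.round(coeffs / q) * q) ** 2)
            return rate, 10.0 * np.log10(255.0 ** 2 / mse)

        # Example with synthetic Laplacian "coefficients":
        coeffs = np.random.default_rng(0).laplace(0.0, 4.0, 100_000)
        print(laplacian_rate_distortion(coeffs, q=8.0))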

    Sparse Complementary Pairs with Additional Aperiodic ZCZ Property

    This paper presents a novel class of complex-valued sparse complementary pairs (SCPs), each containing a number of zero entries and possessing an additional zero-correlation zone (ZCZ) property for the aperiodic autocorrelations and cross-correlations of the two constituent sequences. Direct constructions of SCPs and their mutually orthogonal mates based on restricted generalized Boolean functions are proposed. It is shown that such SCPs exist with arbitrary lengths and controllable sparsity levels, making them a disruptive sequence candidate for modern low-complexity, low-latency, and low-storage signal processing applications.
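
    The defining complementary condition is easy to check numerically. The sketch below does not reproduce the paper's Boolean-function constructions; the classic length-4 binary Golay pair serves as a dense stand-in. It verifies that the aperiodic autocorrelation sums of a pair vanish at every nonzero shift; a ZCZ check would test the same correlations over a window of shifts.

        import numpy as np

        def aperiodic_corr(a, b):
            """Aperiodic cross-correlation of a and b at shifts 0..len-1."""
            n = len(a)
            return np.array([np.sum(a[:n - u] * np.conj(b[u:])) for u in range(n)])

        def is_complementary_pair(a, b):
            """Pair condition: autocorrelation sums vanish at all nonzero shifts."""
            s = aperiodic_corr(a, a) + aperiodic_corr(b, b)
            return np.allclose(s[1:], 0.0)

        a = np.array([1, 1, 1, -1], dtype=complex)
        b = np.array([1, 1, -1, 1], dtype=complex)
        print(is_complementary_pair(a, b))   # True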

    Supervised Collective Classification for Crowdsourcing

    Crowdsourcing utilizes the wisdom of crowds for collective classification via information (e.g., labels of an item) provided by labelers. Current crowdsourcing algorithms are mainly unsupervised methods that are unaware of the quality of crowdsourced data. In this paper, we propose a supervised collective classification algorithm that aims to identify reliable labelers from the training data (e.g., items with known labels). The reliability (i.e., weighting factor) of each labeler is determined via a saddle point algorithm. Results on several crowdsourced datasets show that supervised methods can achieve better classification accuracy than unsupervised methods, and that our proposed method outperforms other algorithms. Comment: to appear in IEEE Global Communications Conference (GLOBECOM) Workshop on Networking and Collaboration Issues for the Internet of Everything.
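
    A minimal sketch of the general setup (the paper's saddle-point reliability estimation is not reproduced; the accuracy-based log-odds weights below are a simple supervised stand-in): labeler reliabilities are learned from items with known labels and then used in a weighted vote.

        import numpy as np

        def estimate_weights(train_labels, truth):
            """Weight each labeler by the log-odds of accuracy on training items.
            train_labels: (m, n) matrix in {-1, +1}; truth: (n,) true labels."""
            acc = np.clip(np.mean(train_labels == truth, axis=1), 1e-3, 1 - 1e-3)
            return np.log(acc / (1.0 - acc))

        def weighted_vote(labels, weights):
            """Reliability-weighted majority vote over labels in {-1, +1}."""
            return np.sign(weights @ labels)

        rng = np.random.default_rng(0)
        truth = rng.choice([-1, 1], size=50)
        # Three labelers: two fairly accurate, one adversarial.
        flip = lambda p: np.where(rng.random(50) < p, -truth, truth)
        labels = np.stack([flip(0.1), flip(0.2), flip(0.9)])
        w = estimate_weights(labels, truth)   # adversary gets a negative weight
        print(np.mean(weighted_vote(labels, w) == truth))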

    Enhanced Cross Z-Complementary Set and Its Application in Generalized Spatial Modulation

    Generalized spatial modulation (GSM) is a novel multiple-antenna technique offering flexibility among spectral efficiency, energy efficiency, and the cost of RF chains. In this paper, a novel class of sequence sets, called enhanced cross Z-complementary sets (E-CZCSs), is proposed for efficient training sequence design in broadband GSM systems. Specifically, an E-CZCS consists of multiple CZCSs possessing front-end and tail-end zero-correlation zones (ZCZs), whereby any two distinct CZCSs have a tail-end ZCZ when a novel type of cross-channel aperiodic correlation sum is considered. The theoretical upper bound on the ZCZ width is first derived, upon which optimal E-CZCSs with flexible parameters are constructed. For optimal channel estimation over frequency-selective channels, we introduce and evaluate a novel GSM training framework employing the proposed E-CZCSs.
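
    A small sketch of how a tail-end ZCZ could be checked numerically. The paper's exact definition of the cross-channel aperiodic correlation sum is not reproduced here; the helper below assumes a common form, a sum of aperiodic cross-correlations over corresponding constituent sequences of two sets.

        import numpy as np

        def aperiodic_corr(a, b):
            """Aperiodic cross-correlation of a and b at shifts 0..len-1."""
            n = len(a)
            return np.array([np.sum(a[:n - u] * np.conj(b[u:])) for u in range(n)])

        def cross_channel_sum(set1, set2):
            """Assumed cross-channel sum: aperiodic cross-correlations summed
            over corresponding constituent sequences of two CZCSs."""
            return sum(aperiodic_corr(a, b) for a, b in zip(set1, set2))

        def tail_zcz_width(corr_sum, tol=1e-9):
            """Number of largest shifts over which the sum is numerically zero."""
            width = 0
            for v in corr_sum[::-1]:
                if abs(v) > tol:
                    break
                width += 1
            return width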

    LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

    Distributed learning systems have enabled training large-scale models over large amounts of data in significantly shorter time. In this paper, we focus on decentralized distributed deep learning systems and aim to achieve differential privacy with a good convergence rate and low communication cost. To achieve this goal, we propose a new learning algorithm, LEASGD (Leader-Follower Elastic Averaging Stochastic Gradient Descent), which is driven by a novel leader-follower topology and a differential privacy model. We provide a theoretical analysis of the convergence rate and of the trade-off between performance and privacy in the private setting. The experimental results show that LEASGD outperforms the state-of-the-art decentralized learning algorithm DPSGD by achieving steadily lower loss within the same number of iterations and by reducing the communication cost by 30%. In addition, LEASGD spends a smaller differential privacy budget and reaches higher final accuracy than DPSGD in the private setting.
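
    A rough sketch of the elastic-averaging idea behind such a leader-follower scheme. The paper's exact update rule, noise calibration, and topology are not reproduced; the step below is a generic EASGD-style update, with Gaussian noise on exchanged parameters standing in as a hypothetical privacy mechanism.

        import numpy as np

        def leasgd_step(leader, followers, grads, lr=0.05, rho=0.1, sigma=0.01,
                        rng=np.random.default_rng(0)):
            """One hypothetical leader-follower elastic-averaging step.
            leader: leader parameters; followers: list of follower parameters;
            grads: local gradients, grads[0] belonging to the leader."""
            new_followers = []
            for x, g in zip(followers, grads[1:]):
                noisy_leader = leader + rng.normal(0.0, sigma, leader.shape)
                # Local SGD step plus an elastic pull toward the (noisy) leader.
                new_followers.append(x - lr * (g + rho * (x - noisy_leader)))
            # The leader descends its own gradient and is pulled toward followers.
            pull = sum(x - leader for x in followers)
            new_leader = leader - lr * (grads[0] - rho * pull)
            return new_leader, new_followers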

    NMD-12: A New Machine-Learning Derived Screening Instrument to Detect Mild Cognitive Impairment and Dementia

    Introduction: Using machine learning techniques, we developed a brief questionnaire to aid neurologists and neuropsychologists in the screening of mild cognitive impairment (MCI) and dementia. Methods: With reduction of the survey size as a goal, feature selection based on information gain was performed to rank the contribution of the 45 items corresponding to patient responses to the specified questions. The most important items were used to build the optimal screening model based on accuracy, practicality, and interpretability. The diagnostic accuracy for discriminating normal cognition (NC), MCI, very mild dementia (VMD), and dementia was validated in the test group. Results: The screening model (NMD-12) was constructed from the 12 items ranked highest in feature selection. Receiver operating characteristic (ROC) analysis showed that the area under the curve (AUC) in the test group was 0.94 for discriminating NC vs. MCI, 0.88 for MCI vs. VMD, 0.97 for MCI vs. dementia, and 0.96 for VMD vs. dementia. Discussion: The NMD-12 model developed and validated in this study provides healthcare professionals with a simple and practical screening tool that accurately differentiates NC, MCI, VMD, and dementia.
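
    The selection step described above (rank 45 items by information gain, keep the top 12, then fit a classifier) can be sketched as follows. The data here are synthetic placeholders, and logistic regression is a hypothetical stand-in, since the abstract does not name the final model family.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.linear_model import LogisticRegression

        # Synthetic placeholders: 45 questionnaire items per patient (X) and a
        # 4-class diagnosis label (0=NC, 1=MCI, 2=VMD, 3=dementia).
        rng = np.random.default_rng(0)
        X = rng.integers(0, 4, size=(500, 45)).astype(float)
        y = rng.integers(0, 4, size=500)

        # Rank items by information gain (mutual information with the label)
        # and keep the 12 highest-ranked, mirroring the NMD-12 selection step.
        gain = mutual_info_classif(X, y, discrete_features=True, random_state=0)
        top12 = np.argsort(gain)[::-1][:12]

        # Fit a simple classifier on the reduced item set.
        model = LogisticRegression(max_iter=1000).fit(X[:, top12], y)
        print(sorted(top12))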