
    Design and evaluation of a data-dependent low-power 8x8 DCT/IDCT

    Traditional fast Discrete Cosine Transform (DCT) / Inverse DCT (IDCT) algorithms have focused on reducing arithmetic complexity and have fixed run-time complexities regardless of the input. Recently, data-dependent signal processing has been applied to the DCT/IDCT, yielding algorithms with variable run-time complexities. A new two-dimensional 8 x 8 low-power DCT/IDCT design is implemented in VHDL by applying the data-dependent signal-processing concept to a traditional fixed-complexity fast DCT/IDCT algorithm. To reduce power, the design is based on Loeffler's fast algorithm, which requires few multiplications. On top of that, zero bypassing, data segmentation, input truncation, and hardwired canonical signed-digit (CSD) multipliers are used to reduce run-time computation, and hence the switching activity and power. When synthesized in Canadian Microelectronics Corporation 3-V 0.35 µm CMOS technology, the FDCT/IDCT design consumes 122.7/124.9 mW at a clock frequency of 40 MHz and a processing rate of 320 Msample/s. With other designs scaled to the same 0.35 µm technology, the proposed design features lower switching capacitance per sample, i.e. it is more power-efficient than previously reported high-performance FDCT/IDCT designs.* *This work is supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) post-graduate scholarship and NSERC research grants.
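    As a rough illustration of two of the power-reduction ideas named above, the following Python sketch multiplies an input sample by a fixed coefficient in its canonical signed-digit (shift-and-add) form and bypasses all work when the input is zero. The hardware itself is a VHDL design; this sketch only mirrors the arithmetic, and the function names are mine.

```python
def to_csd(c):
    """Canonical signed-digit (non-adjacent form) digits of an
    integer constant c > 0, least-significant digit first."""
    digits = []
    while c != 0:
        if c & 1:
            d = 2 - (c & 3)   # +1 if c % 4 == 1, -1 if c % 4 == 3
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def csd_multiply(x, digits):
    """Shift-and-add multiply by a CSD-encoded constant, with
    zero bypassing: a zero input skips all switching activity."""
    if x == 0:
        return 0
    return sum(d * (x << k) for k, d in enumerate(digits) if d)

# 7 = 8 - 1 needs two shift-adds in CSD instead of three in plain binary.
assert csd_multiply(5, to_csd(7)) == 35
```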

    Rate control and constant quality rate control for MPEG video compression and transcoding

    The focus of this thesis is the design of rate-control (RC) algorithms for constant-quality (CQ) video encoding and transcoding, where CQ is measured by the variance of quality in PSNR (peak signal-to-noise ratio). By modeling DCT coefficients with Laplacian distributions, Laplacian rate and distortion models are developed for MPEG-4 encoding and transcoding. These models accurately estimate the rate and distortion (in PSNR) of MPEG-4 compressed bitstreams. The rate model is applied to a CBR (constant bit rate) encoding algorithm, which offers better or similar PSNR compared to the Q2 algorithm [7] with lower variation in bitrate, and thus outperforms Q2. The models are then applied to CQ video coding and transcoding. Most CBR control algorithms aim to produce a bitstream that meets a certain bitrate with the highest quality. Due to the non-stationary nature of video sequences, the quality of the compressed sequence changes over time, which is undesirable to end-users. To address this problem, six CQ encoding algorithms are proposed: the first two are VBR (variable bit rate) algorithms with a fixed target quality (FTQ), the next two are CBR algorithms with FTQ, and the last two are CBR algorithms with a dynamic target quality (DTQ). Within each group of two, quality is controlled either at the frame level (using the Laplacian rate/distortion model) or at the macroblock level (using the actual distortions). Building on these algorithms, the CQ DTQ encoding algorithms are extended to MPEG-4 video transcoding (bitrate reduction with requantization). These CQ transcoding algorithms handle problems unique to transcoders, such as the absence of the original sequence and requantization. Like their encoding counterparts, the CQ transcoding algorithms have an extra degree of freedom to balance quality variation against accuracy to the target bitrate and average quality. Simulation results indicate that these algorithms offer lower PSNR variance with similar or lower average PSNR and bitrate when compared with Q2T and TM5T (the transcoding versions of Q2 and TM5). Besides the MPEG-4 CQ RC algorithms, an MPEG-2 rate-control algorithm is also developed based on TM5. It aims at improving the subjective quality measured by Watson's DVQ (digital video quality) metric, and it provides a better DVQ than TM5. However, since Watson's DVQ metric is not a standard way to estimate subjective quality, PSNR is still used in the rest of the research.
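    To make the modeling idea concrete, here is a minimal Python sketch of a Laplacian rate estimate of the kind the thesis builds on: the Laplacian scale is fitted from the DCT coefficients, and the entropy of the uniformly quantized source serves as the rate estimate. The exact model and quantizer in the thesis may differ; the names and bin count are illustrative.

```python
import numpy as np

def laplacian_rate(coeffs, Q, nbins=256):
    """Estimated rate (bits/coefficient) of DCT coefficients modeled
    as Laplacian and quantized with a uniform midtread step Q."""
    b = np.mean(np.abs(coeffs))          # ML estimate of the Laplacian scale
    p0 = 1.0 - np.exp(-Q / (2.0 * b))    # probability of the zero bin
    n = np.arange(1, nbins)
    pn = 0.5 * (np.exp(-(n - 0.5) * Q / b) - np.exp(-(n + 0.5) * Q / b))
    probs = np.concatenate(([p0], pn, pn))   # zero bin, +n bins, -n bins
    probs = probs[probs > 1e-12]
    return float(-(probs * np.log2(probs)).sum())

# Coarser quantization should need fewer bits:
x = np.random.default_rng(0).laplace(scale=4.0, size=10_000)
assert laplacian_rate(x, Q=16) < laplacian_rate(x, Q=4)
```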

    Sparse Complementary Pairs with Additional Aperiodic ZCZ Property

    This paper presents a novel class of complex-valued sparse complementary pairs (SCPs), each containing a number of zero entries and possessing an additional zero-correlation zone (ZCZ) property for the aperiodic autocorrelations and cross-correlations of the two constituent sequences. Direct constructions of SCPs and their mutually orthogonal mates based on restricted generalized Boolean functions are proposed. It is shown that such SCPs exist with arbitrary lengths and controllable sparsity levels, making them a disruptive sequence candidate for modern low-complexity, low-latency, and low-storage signal processing applications.
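    For readers unfamiliar with the terminology, the following Python sketch spells out the aperiodic correlation sums involved, using a classic (non-sparse) Golay complementary pair whose autocorrelation sums vanish at every non-zero shift; the paper's SCPs additionally contain zero entries and impose a ZCZ on the individual auto- and cross-correlations. The helper names are mine.

```python
import numpy as np

def ap_corr(a, b, u):
    """Aperiodic cross-correlation of sequences a, b at shift u >= 0."""
    return np.sum(a[:len(a) - u] * np.conj(b[u:]))

def autocorr_sum(a, b, u):
    """Aperiodic autocorrelation sum of the pair (a, b) at shift u."""
    return ap_corr(a, a, u) + ap_corr(b, b, u)

# Classic length-4 binary Golay complementary pair:
a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])
print([int(autocorr_sum(a, b, u)) for u in range(4)])  # [8, 0, 0, 0]
```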

    Supervised Collective Classification for Crowdsourcing

    Crowdsourcing utilizes the wisdom of crowds for collective classification via information (e.g., labels of an item) provided by labelers. Current crowdsourcing algorithms are mainly unsupervised methods that are unaware of the quality of crowdsourced data. In this paper, we propose a supervised collective classification algorithm that aims to identify reliable labelers from the training data (e.g., items with known labels). The reliability (i.e., weighting factor) of each labeler is determined via a saddle point algorithm. Results on several crowdsourced datasets show that supervised methods can achieve better classification accuracy than unsupervised methods, and that our proposed method outperforms the other algorithms. Comment: to appear in IEEE Global Communications Conference (GLOBECOM) Workshop on Networking and Collaboration Issues for the Internet of Everything.
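    A minimal sketch of the supervised idea, assuming binary labels: labelers are weighted using items whose true labels are known, and test items are decided by a weighted majority vote. Note that the paper derives the weights via a saddle-point algorithm; plain training accuracy is used below only as a stand-in, and all names are hypothetical.

```python
import numpy as np

def labeler_weights(train_votes, truth):
    """Weight each labeler by accuracy on the training items.
    train_votes: (n_items, n_labelers) matrix of labels in {0, 1};
    truth: length-n_items vector of known labels."""
    return (train_votes == truth[:, None]).mean(axis=0)

def weighted_vote(test_votes, w):
    """Weighted majority vote over {0, 1} labels."""
    return (test_votes @ w > w.sum() / 2).astype(int)

# Labeler 0 is reliable, labeler 1 is mostly noise:
truth = np.array([0, 1, 1, 0, 1])
votes = np.array([[0, 1], [1, 0], [1, 1], [0, 0], [1, 0]])
w = labeler_weights(votes, truth)   # -> [1.0, 0.4]
print(weighted_vote(votes, w))      # recovers [0, 1, 1, 0, 1]
```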

    Enhanced Cross Z-Complementary Set and Its Application in Generalized Spatial Modulation

    Generalized spatial modulation (GSM) is a novel multiple-antenna technique offering flexibility among spectral efficiency, energy efficiency, and the cost of RF chains. In this paper, a novel class of sequence sets, called enhanced cross Z-complementary sets (E-CZCSs), is proposed for efficient training sequence design in broadband GSM systems. Specifically, an E-CZCS consists of multiple CZCSs possessing front-end and tail-end zero-correlation zones (ZCZs), whereby any two distinct CZCSs have a tail-end ZCZ when a novel type of cross-channel aperiodic correlation sum is considered. The theoretical upper bound on the ZCZ width is first derived, upon which optimal E-CZCSs with flexible parameters are constructed. For optimal channel estimation over frequency-selective channels, we introduce and evaluate a novel GSM training framework employing the proposed E-CZCSs.
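    The following Python sketch, with hypothetical names, shows the kind of check implied by the tail-end ZCZ property: the component-wise aperiodic cross-correlations of two pairs are added, and the sum is required to vanish over the last `width` shifts. The paper's exact definition of the cross-channel correlation sum may normalize or index differently.

```python
import numpy as np

def ap_corr(a, b, u):
    """Aperiodic cross-correlation of sequences a, b at shift u >= 0."""
    return np.sum(a[:len(a) - u] * np.conj(b[u:]))

def cross_channel_sum(pair1, pair2, u):
    """Component-wise aperiodic cross-correlation sum of two CZCSs."""
    (a1, b1), (a2, b2) = pair1, pair2
    return ap_corr(a1, a2, u) + ap_corr(b1, b2, u)

def has_tail_end_zcz(pair1, pair2, width, tol=1e-9):
    """True if the last `width` shifts form a zero-correlation zone."""
    L = len(pair1[0])
    return all(abs(cross_channel_sum(pair1, pair2, u)) < tol
               for u in range(L - width, L))
```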

    An Interpretable Generalization Mechanism for Accurately Detecting Anomaly and Identifying Networking Intrusion Techniques

    Recent advancements in Intrusion Detection Systems (IDS) integrating Explainable AI (XAI) methodologies have led to notable improvements in system performance via precise feature selection. However, a thorough understanding of cyber-attacks requires inherently explainable decision-making processes within the IDS. In this paper, we present the Interpretable Generalization Mechanism (IG), poised to revolutionize IDS capabilities. IG discerns coherent patterns, making it interpretable in distinguishing between normal and anomalous network traffic. Further, the synthesis of coherent patterns sheds light on intricate intrusion pathways, providing essential insights for cybersecurity forensics. In experiments with the real-world datasets NSL-KDD, UNSW-NB15, and UKM-IDS20, IG is accurate even at a low training-to-test ratio. With a 10%-to-90% split, IG achieves Precision (PRE) = 0.93, Recall (REC) = 0.94, and Area Under Curve (AUC) = 0.94 on NSL-KDD; PRE = 0.98, REC = 0.99, and AUC = 0.99 on UNSW-NB15; and PRE = 0.98, REC = 0.98, and AUC = 0.99 on UKM-IDS20. Notably, on UNSW-NB15, IG achieves REC = 1.0 and at least PRE = 0.98 from the 40%-to-60% split onward; on UKM-IDS20, it achieves REC = 1.0 and at least PRE = 0.88 from the 20%-to-80% split onward. Importantly, on UKM-IDS20, IG successfully identifies all three anomalous instances without prior exposure, demonstrating its generalization capability. These results and inferences are reproducible. In sum, IG showcases superior generalization by performing consistently well across diverse datasets and training-to-test ratios (from 10%-to-90% to 90%-to-10%), and excels in identifying novel anomalies without prior exposure. Its interpretability is enhanced by coherent evidence that accurately distinguishes both normal and anomalous activities, significantly improving detection accuracy and reducing false alarms, thereby strengthening IDS reliability and trustworthiness.
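    To make the evaluation protocol (not the IG mechanism itself) concrete, the sketch below scores a classifier at the paper's hardest 10%-to-90% training-to-test split; the synthetic data and the random forest are placeholders for the real datasets and for IG.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data; in the paper X, y come from NSL-KDD, UNSW-NB15, or UKM-IDS20.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

# 10%-to-90% training-to-test split:
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.10, stratify=y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)  # stand-in for IG
pred = clf.predict(X_te)
print("PRE", precision_score(y_te, pred),
      "REC", recall_score(y_te, pred),
      "AUC", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```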

    LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

    Distributed learning systems have enabled training large-scale models over large amounts of data in significantly shorter time. In this paper, we focus on decentralized distributed deep learning systems and aim to achieve differential privacy with a good convergence rate and low communication cost. To this end, we propose a new learning algorithm, LEASGD (Leader-Follower Elastic Averaging Stochastic Gradient Descent), which is driven by a novel leader-follower topology and a differential privacy model. We provide a theoretical analysis of the convergence rate and of the trade-off between performance and privacy in the private setting. Experimental results show that LEASGD outperforms the state-of-the-art decentralized learning algorithm DPSGD by achieving steadily lower loss within the same number of iterations and by reducing communication cost by 30%. In addition, LEASGD spends a smaller differential-privacy budget and attains higher final accuracy than DPSGD in the private setting.
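    A minimal sketch of one follower update in the spirit of LEASGD, assuming the leader-follower topology described above: a local SGD step, an elastic pull toward the leader's parameters, and Gaussian noise added to whatever is communicated, for differential privacy. The constants and names are illustrative, not the paper's exact scheme.

```python
import numpy as np

def follower_step(w, w_leader, grad, lr=0.05, rho=0.1, sigma=0.01, rng=None):
    """One LEASGD-style follower update (illustrative).
    w, w_leader: current follower/leader parameter vectors;
    grad: stochastic gradient at w; sigma: DP noise scale."""
    rng = rng or np.random.default_rng()
    w = w - lr * grad                  # local stochastic gradient step
    w = w - rho * (w - w_leader)       # elastic attraction toward the leader
    shared = w + rng.normal(scale=sigma, size=w.shape)  # noise the sent copy
    return w, shared                   # keep w locally, communicate `shared`
```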