
    Relay Assisted Cooperative OSTBC Communication with SNR Imbalance and Channel Estimation Errors

    In this paper, a two-hop relay-assisted cooperative Orthogonal Space-Time Block Code (OSTBC) transmission scheme is considered for the downlink of a cellular system, where the base station (BS) and the relay station (RS) cooperate and transmit data to the user equipment (UE) in a distributed fashion. We analyze the impact of the SNR imbalance between the BS-UE and RS-UE links, as well as of imperfect channel estimation at the UE receiver. The performance is analyzed in the presence of Rayleigh flat fading, and our results show that the SNR imbalance does not affect the spatial diversity order, whereas channel estimation errors have a larger impact on system performance. Simulation results are then provided to confirm the analysis. Comment: 5 pages, 3 figures, IEEE 69th Vehicular Technology Conference
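
    A minimal Monte Carlo sketch of the kind of setup the abstract describes, assuming BPSK over a 2x1 distributed Alamouti code (the simplest OSTBC), an SNR imbalance between the BS-UE and RS-UE branches, and Gaussian channel-estimation error at the UE. The parameters and noise model are illustrative assumptions, not the paper's exact system model.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_ber(snr_bs_db, snr_rs_db, est_err_var, n_blocks=100_000):
        """BER of BPSK with distributed 2x1 Alamouti decoding and imperfect CSI."""
        g1 = 10 ** (snr_bs_db / 10.0)   # average SNR on the BS-UE branch
        g2 = 10 ** (snr_rs_db / 10.0)   # average SNR on the RS-UE branch (imbalanced)
        bits = rng.integers(0, 2, size=(n_blocks, 2))
        s = 2.0 * bits - 1.0            # BPSK symbols s1, s2 per Alamouti block

        def cn(scale):                  # circularly symmetric complex Gaussian samples
            return scale * (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks))

        h1, h2 = cn(np.sqrt(g1 / 2)), cn(np.sqrt(g2 / 2))   # Rayleigh flat fading per branch
        n1, n2 = cn(np.sqrt(0.5)), cn(np.sqrt(0.5))         # unit-variance AWGN

        # Two received samples per Alamouti block
        r1 = h1 * s[:, 0] + h2 * s[:, 1] + n1
        r2 = -h1 * s[:, 1] + h2 * s[:, 0] + n2               # conj() dropped since BPSK is real

        # Imperfect channel estimates: true channel plus independent Gaussian error
        h1_hat = h1 + cn(np.sqrt(est_err_var / 2))
        h2_hat = h2 + cn(np.sqrt(est_err_var / 2))

        # Standard Alamouti combining using the (imperfect) estimates
        s1_hat = np.conj(h1_hat) * r1 + h2_hat * np.conj(r2)
        s2_hat = np.conj(h2_hat) * r1 - h1_hat * np.conj(r2)
        dec = np.stack([s1_hat.real > 0, s2_hat.real > 0], axis=1).astype(int)
        return np.mean(dec != bits)

    # Example: 10 dB imbalance between the two branches and mild estimation error
    print(simulate_ber(snr_bs_db=10, snr_rs_db=0, est_err_var=0.05))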

    Summary Statistic Privacy in Data Sharing

    We study a setting where a data holder wishes to share data with a receiver without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation). The data holder achieves this by passing the data through a randomization mechanism. We propose summary statistic privacy, a metric for quantifying the privacy risk of such a mechanism based on the worst-case probability of an adversary guessing the distributional secret within some threshold. Defining distortion as a worst-case Wasserstein-1 distance between the real and released data, we prove lower bounds on the tradeoff between privacy and distortion. We then propose a class of quantization mechanisms that can be adapted to different data distributions. We show that the quantization mechanism's privacy-distortion tradeoff matches our lower bounds under certain regimes, up to small constant factors. Finally, we demonstrate on real-world datasets that the proposed quantization mechanisms achieve better privacy-distortion tradeoffs than alternative privacy mechanisms.
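
    A hedged sketch of a quantization-style release mechanism in the spirit of the abstract: samples are snapped to the centres of width-delta bins on a randomly offset grid, which caps each sample's shift (and hence the empirical Wasserstein-1 distortion) at delta/2. The bin placement and hidden random offset are illustrative assumptions, not the paper's construction; the privacy guarantee itself is quantified in the paper.

    import numpy as np

    def quantize_release(data, delta, rng=None):
        """Snap each sample to the centre of a width-`delta` bin on a grid whose
        random offset is not revealed to the receiver."""
        if rng is None:
            rng = np.random.default_rng()
        offset = rng.uniform(0.0, delta)
        return np.floor((data - offset) / delta) * delta + offset + delta / 2.0

    rng = np.random.default_rng(1)
    real = rng.normal(loc=3.2, scale=1.0, size=10_000)   # the mean is the distributional secret
    released = quantize_release(real, delta=2.0, rng=rng)

    print(np.max(np.abs(released - real)))                # never exceeds delta / 2 = 1.0
    print(real.mean(), released.mean())                   # receiver only sees coarsened samples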

    Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding

    This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose "Skeleton-of-Thought" (SoT), which guides LLMs to first generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-up (up to 2.39x across 11 different LLMs), but it can also potentially improve the answer quality on several question categories in terms of diversity and relevance. SoT is an initial attempt at data-centric optimization for efficiency, and reveals the potential of pushing LLMs to think more like a human for answer quality. Comment: Technical report, work in progress
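
    A hedged sketch of the two-stage flow the abstract describes: first request a short skeleton, then expand each point with concurrent requests. `call_llm` is a placeholder for whatever completion API is used; the prompt templates and regex parsing are illustrative assumptions, not the paper's exact prompts.

    import asyncio
    import re

    async def call_llm(prompt: str) -> str:
        """Stand-in for a real completion API call; replace with your client."""
        await asyncio.sleep(0.1)                    # simulate network latency
        if "skeleton" in prompt:
            return "1. First point\n2. Second point\n3. Third point"
        return "[expanded text for one skeleton point]"

    async def skeleton_of_thought(question: str) -> str:
        # Stage 1: ask for a short numbered skeleton of the answer.
        skeleton = await call_llm(
            f"Question: {question}\n"
            "Write only a numbered skeleton of 3-5 short points for the answer."
        )
        points = re.findall(r"^\s*\d+\.\s*(.+)$", skeleton, flags=re.MULTILINE) or [skeleton]

        # Stage 2: expand all skeleton points concurrently, one request per point.
        tasks = [
            call_llm(f"Question: {question}\nExpand point {i + 1}: {p}\nWrite 1-2 sentences.")
            for i, p in enumerate(points)
        ]
        expansions = await asyncio.gather(*tasks)
        return "\n\n".join(expansions)

    print(asyncio.run(skeleton_of_thought("Why does parallel decoding cut latency?")))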

    The physio-biochemical characterization reflected different calcium utilization efficiency between the sensitive and tolerant peanut accessions under calcium deficiency

    Peanut yield in southern China is usually limited by calcium deficiency in soil. Most previous studies have found that small-seed varieties show higher tolerance than large-seed varieties (e.g., Virginia type) under calcium deficiency; however, our preliminary research found that sensitive varieties also exist among small-seed counterparts. Few studies have characterized low-calcium tolerance among genetically diverse small-seed germplasms, and the differences in physiological characteristics between sensitive and tolerant varieties have not been reported yet. To better understand such differences, the current study first collected and characterized a diverse germplasm panel of 50 small-seed peanut genotypes via a 2-year field trial, followed by physiological characterization of sensitive (HN032) and tolerant (HN035) peanut genotypes under calcium deficiency. The adverse effects of calcium deficiency on calcium uptake and distribution were much larger in HN032 than in HN035. In detail, calcium uptake in the aboveground part (leaves and stems) was reduced by 16.17% and 33.66%, and in the underground part (roots and pods) by 13.69% and 68.09%, under calcium deficiency for HN035 and HN032, respectively. The calcium distribution rate in the pods of HN035 was 2.74 times higher than in HN032, and the calcium utilization efficiency in the pods of HN035 was 1.68 and 1.37 times that of HN032 under calcium deficiency and sufficiency, respectively. In addition, under calcium deficiency the activities of the antioxidant enzymes SOD, POD, and CAT, as well as the MDA content, increased significantly in the leaves of HN032, and peanut yield was significantly reduced by 22.75%, whereas HN035 showed no significant changes in antioxidant enzyme activities, MDA content, or yield. Higher calcium absorption and utilization efficiency may therefore be the key factors maintaining peanut yield under calcium-deficient conditions in tolerant genotypes. This study lays a solid foundation for selecting low-calcium-tolerant varieties in future peanut breeding.

    Selective Pre-training for Private Fine-tuning

    Suppose we want to train text prediction models in email clients or word processors. The models must preserve the privacy of user data and adhere to a specific fixed size to meet memory and inference-time requirements. We introduce a generic framework to solve this problem. Specifically, we are given a public dataset $D_\text{pub}$ and a private dataset $D_\text{priv}$ corresponding to a downstream task $T$. How should we pre-train a fixed-size model $M$ on $D_\text{pub}$ and fine-tune it on $D_\text{priv}$ such that the performance of $M$ with respect to $T$ is maximized and $M$ satisfies differential privacy with respect to $D_\text{priv}$? We show that pre-training on a subset of the dataset $D_\text{pub}$ that brings the public distribution closer to the private distribution is a crucial ingredient for maximizing the transfer learning abilities of $M$ after pre-training, especially in regimes where model sizes are relatively small. Besides performance improvements, our framework also shows that with careful pre-training and private fine-tuning, smaller models can match the performance of much larger models, highlighting the promise of differentially private training as a tool for model compression and efficiency.
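
    A hedged sketch of the subset-selection step the abstract highlights. Scoring public examples by cosine similarity to the centroid of private-example embeddings is an illustrative assumption, not the paper's criterion, and the subsequent differentially private fine-tuning (e.g., with DP-SGD) is not shown here.

    import numpy as np

    def select_public_subset(pub_emb, priv_emb, keep_fraction=0.1):
        """Return indices of the public examples whose embeddings are closest
        (by cosine similarity) to the centroid of the private embeddings."""
        centroid = priv_emb.mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        pub_norm = pub_emb / np.linalg.norm(pub_emb, axis=1, keepdims=True)
        scores = pub_norm @ centroid                      # cosine similarity per public example
        k = int(keep_fraction * len(pub_emb))
        return np.argsort(scores)[::-1][:k]

    # Toy usage with random vectors standing in for text embeddings of D_pub and D_priv.
    rng = np.random.default_rng(0)
    pub = rng.standard_normal((1000, 64))
    priv = rng.standard_normal((100, 64)) + 0.5           # private distribution shifted away from public
    idx = select_public_subset(pub, priv)
    print(len(idx), "public examples selected for pre-training")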