
    Relay Assisted Cooperative OSTBC Communication with SNR Imbalance and Channel Estimation Errors

    In this paper, a two-hop relay-assisted cooperative Orthogonal Space-Time Block Code (OSTBC) transmission scheme is considered for the downlink of a cellular system, where the base station (BS) and the relay station (RS) cooperate and transmit data to the user equipment (UE) in a distributed fashion. We analyze the impact of the SNR imbalance between the BS-UE and RS-UE links, as well as of imperfect channel estimation at the UE receiver. The performance is analyzed in the presence of Rayleigh flat fading, and our results show that the SNR imbalance does not affect the spatial diversity order. Channel estimation errors, on the other hand, have a larger impact on system performance. Simulation results are then provided to confirm the analysis.
    Comment: 5 pages, 3 figures, IEEE 69th Vehicular Technology Conference
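    A minimal Monte-Carlo sketch of such a setup is given below, assuming a distributed Alamouti (2x1) OSTBC, BPSK signalling, unit-variance receiver noise, and a simple additive Gaussian model for the channel-estimation error. It illustrates the ingredients named in the abstract (SNR imbalance between the two links, imperfect CSI, Rayleigh flat fading), not the paper's exact system model.

```python
# Sketch: distributed Alamouti OSTBC over Rayleigh flat fading with an SNR
# imbalance between the BS-UE and RS-UE links and imperfect CSI at the UE.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ber(snr_bs_db, snr_rs_db, est_err_var, n_blocks=200_000):
    """BER of a 2x1 Alamouti link with per-branch average SNRs and noisy channel estimates."""
    snr_bs = 10 ** (snr_bs_db / 10)   # BS-UE link SNR (linear)
    snr_rs = 10 ** (snr_rs_db / 10)   # RS-UE link SNR (linear)

    # BPSK symbols, two per Alamouti block
    bits = rng.integers(0, 2, size=(n_blocks, 2))
    s = 2 * bits - 1

    # Rayleigh flat fading, scaled so E|h|^2 equals each link's average SNR
    h1 = np.sqrt(snr_bs / 2) * (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks))
    h2 = np.sqrt(snr_rs / 2) * (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks))

    # Unit-variance AWGN at the UE over the two symbol periods
    n1 = (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks)) / np.sqrt(2)
    n2 = (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks)) / np.sqrt(2)

    # Alamouti transmission: [s1, s2] then [-s2*, s1*]
    r1 = h1 * s[:, 0] + h2 * s[:, 1] + n1
    r2 = -h1 * np.conj(s[:, 1]) + h2 * np.conj(s[:, 0]) + n2

    # Imperfect channel estimates: true channel plus complex Gaussian error
    e = np.sqrt(est_err_var / 2)
    h1_hat = h1 + e * (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks))
    h2_hat = h2 + e * (rng.standard_normal(n_blocks) + 1j * rng.standard_normal(n_blocks))

    # Standard Alamouti combining using the (noisy) estimates
    s1_hat = np.conj(h1_hat) * r1 + h2_hat * np.conj(r2)
    s2_hat = np.conj(h2_hat) * r1 - h1_hat * np.conj(r2)

    detected = np.stack([np.real(s1_hat) > 0, np.real(s2_hat) > 0], axis=1)
    return np.mean(detected != bits)

# 10 dB imbalance between the two links, with and without estimation error
print(simulate_ber(snr_bs_db=15, snr_rs_db=5, est_err_var=0.0))
print(simulate_ber(snr_bs_db=15, snr_rs_db=5, est_err_var=0.5))
```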

    Summary Statistic Privacy in Data Sharing

    We study a setting where a data holder wishes to share data with a receiver without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation). The data holder achieves this by passing the data through a randomization mechanism. We propose summary statistic privacy, a metric that quantifies the privacy risk of such a mechanism via the worst-case probability of an adversary guessing the distributional secret within some threshold. Defining distortion as a worst-case Wasserstein-1 distance between the real and released data, we prove lower bounds on the tradeoff between privacy and distortion. We then propose a class of quantization mechanisms that can be adapted to different data distributions. We show that the quantization mechanism's privacy-distortion tradeoff matches our lower bounds under certain regimes, up to small constant factors. Finally, we demonstrate on real-world datasets that the proposed quantization mechanisms achieve better privacy-distortion tradeoffs than alternative privacy mechanisms.
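    The sketch below conveys the flavour of a quantization mechanism for one distributional secret (the mean), assuming a simple shift-to-grid rule and an empirical Wasserstein-1 distance as the distortion measure. It is not the paper's construction; the function and parameter names are illustrative.

```python
# Illustrative mechanism: hide the mean of a dataset by shifting it so its
# empirical mean lands on a coarse grid of spacing delta. Every true mean in a
# bin of width delta maps to the same released mean, while the shift keeps the
# Wasserstein-1 distortion at most delta/2.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)

def quantize_mean_mechanism(x, delta):
    """Release a copy of x whose empirical mean is snapped to a multiple of delta."""
    mu = x.mean()
    mu_released = delta * np.round(mu / delta)   # quantized (released) mean
    return x + (mu_released - mu)                # pure shift of the data

# Secret: the true mean of the data-generating distribution
x = rng.normal(loc=3.37, scale=1.0, size=10_000)
x_released = quantize_mean_mechanism(x, delta=1.0)

print("true empirical mean:    ", x.mean())
print("released empirical mean:", x_released.mean())
# Distortion: empirical Wasserstein-1 distance between real and released data,
# which for a pure shift equals the size of the shift.
print("W1 distortion:          ", wasserstein_distance(x, x_released))
```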

    Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding

    This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose "Skeleton-of-Thought" (SoT), which guides LLMs to first generate the skeleton of the answer and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-ups (up to 2.39x across 11 different LLMs), but it can also potentially improve the answer quality on several question categories in terms of diversity and relevance. SoT is an initial attempt at data-centric optimization for efficiency, and it reveals the potential of pushing LLMs to think more like a human for better answer quality.
    Comment: Technical report, work in progress
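    A minimal sketch of this two-stage prompting pattern is shown below, assuming a generic llm_call() wrapper around whatever LLM API is available (stubbed out here so the example runs offline) and thread-based parallel expansion of the skeleton points. The prompts and helper names are illustrative, not the paper's.

```python
# Skeleton-of-Thought pattern: one call to get a skeleton, then parallel calls
# to expand each skeleton point.
import re
from concurrent.futures import ThreadPoolExecutor

def llm_call(prompt: str) -> str:
    # Placeholder for a real LLM API or batched-decoding call.
    return f"[model output for: {prompt[:60]}...]"

def skeleton_of_thought(question: str, max_points: int = 5) -> str:
    # Stage 1: ask for a short skeleton of the answer (a numbered list of points).
    skeleton = llm_call(
        f"Question: {question}\n"
        f"Give a skeleton of the answer as at most {max_points} numbered points, "
        "3-5 words each, no details."
    )
    points = re.findall(r"^\s*\d+\.\s*(.+)$", skeleton, flags=re.MULTILINE) or [skeleton]

    # Stage 2: expand every skeleton point in parallel; the end-to-end latency is
    # roughly one skeleton call plus one (parallel) expansion call.
    def expand(point: str) -> str:
        return llm_call(
            f"Question: {question}\nSkeleton point: {point}\n"
            "Expand this point into 1-2 sentences."
        )

    with ThreadPoolExecutor(max_workers=len(points)) as pool:
        expansions = list(pool.map(expand, points))

    return "\n".join(f"{i+1}. {p}\n   {e}" for i, (p, e) in enumerate(zip(points, expansions)))

print(skeleton_of_thought("Why does parallel decoding reduce LLM latency?"))
```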

    Efficiently Computing Similarities to Private Datasets

    Many methods in differentially private model training rely on computing the similarity between a query point (such as public or synthetic data) and private data. We abstract out this common subroutine and study the following fundamental algorithmic problem: given a similarity function $f$ and a large high-dimensional private dataset $X \subset \mathbb{R}^d$, output a differentially private (DP) data structure which approximates $\sum_{x \in X} f(x,y)$ for any query $y$. We consider the cases where $f$ is a kernel function, such as $f(x,y) = e^{-\|x-y\|_2^2/\sigma^2}$ (also known as DP kernel density estimation), or a distance function such as $f(x,y) = \|x-y\|_2$, among others. Our theoretical results improve upon prior work and give better privacy-utility tradeoffs as well as faster query times for a wide range of kernels and distance functions. The unifying approach behind our results is to leverage 'low-dimensional structures' present in the specific functions $f$ that we study, using tools such as provable dimensionality reduction, approximation theory, and one-dimensional decompositions of the functions. Our algorithms empirically exhibit improved query times and accuracy over the prior state of the art. We also present an application to DP classification. Our experiments demonstrate that the simple methodology of classifying based on average similarity is orders of magnitude faster than prior DP-SGD-based approaches at comparable accuracy.
    Comment: To appear at ICLR 202
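    A rough sketch of one way such a DP data structure can look is given below, assuming the Gaussian kernel $f(x,y) = e^{-\|x-y\|_2^2/\sigma^2}$, a random-Fourier-feature summary, and the Gaussian mechanism. This is a generic baseline for intuition, not the construction analyzed in the paper, and all parameter choices are illustrative.

```python
# DP kernel-sum data structure: privatize a random-Fourier-feature summary once,
# then answer sum_x exp(-||x-y||^2 / sigma^2) for any query y from it.
import numpy as np

rng = np.random.default_rng(2)

class DPKernelSum:
    def __init__(self, X, sigma, eps, delta, n_features=2048):
        n, d = X.shape
        # Random Fourier features for k(x, y) = exp(-||x-y||^2 / sigma^2)
        self.W = rng.normal(scale=np.sqrt(2.0) / sigma, size=(d, n_features))
        self.b = rng.uniform(0, 2 * np.pi, size=n_features)
        phi = np.sqrt(2.0 / n_features) * np.cos(X @ self.W + self.b)  # (n, D)

        # Each point's feature vector has L2 norm <= sqrt(2), so the feature sum
        # has add/remove sensitivity sqrt(2); apply the Gaussian mechanism.
        sensitivity = np.sqrt(2.0)
        noise_std = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
        self.S = phi.sum(axis=0) + rng.normal(scale=noise_std, size=n_features)
        self.n_features = n_features

    def query(self, y):
        phi_y = np.sqrt(2.0 / self.n_features) * np.cos(y @ self.W + self.b)
        return float(phi_y @ self.S)   # approximates sum_x k(x, y)

# Tiny usage example with synthetic "private" data.
X = rng.normal(size=(5000, 16))
ds = DPKernelSum(X, sigma=4.0, eps=1.0, delta=1e-6)
y = rng.normal(size=16)
exact = np.exp(-np.sum((X - y) ** 2, axis=1) / 4.0**2).sum()
print("exact kernel sum:", exact, " DP estimate:", ds.query(y))
```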

    An Unsupervised Machine Learning Scheme for Index-Based CSI Feedback in Wi-Fi

    With the ever-increasing demand for high-speed wireless data transmission, beamforming techniques have proven crucial in improving the data rate and the signal-to-noise ratio (SNR) at the receiver. However, they require feedback mechanisms that introduce an information overhead and increase system complexity, potentially challenging the efficiency and capacity of modern wireless networks. This paper investigates novel index-based feedback mechanisms that aim at reducing the beamforming feedback overhead in Wi-Fi links. The proposed methods mitigate the overhead by generating a set of candidate beamforming vectors using an unsupervised learning-based framework. The amount of feedback information required is thus reduced by transmitting the index of a candidate instead of the entire beamforming matrix. We explore several methods that consider different representations of the data in the candidate set. In particular, we propose five different ways to generate and represent the candidate sets, which consider the covariance matrices of the channel, serialize the feedback matrix, and account for the effective distance, among others. Additionally, we discuss the implications of using partial information in the compressed beamforming feedback on the link performance and compare it with the newly proposed index-based methods. Extensive IEEE 802.11 standard-compliant simulation results show that the proposed methods effectively minimize the feedback overhead, enhancing the throughput while maintaining adequate link performance.
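    The toy sketch below illustrates the index-feedback idea itself, assuming a pre-shared codebook of K candidate beamforming vectors (random unit vectors stand in for the learned candidates in the paper) and selection by beamforming gain; the names and sizes are illustrative.

```python
# Index-based beamforming feedback: pick the best candidate from a shared
# codebook and feed back only its index (log2(K) bits) instead of the full report.
import numpy as np

rng = np.random.default_rng(3)

def unit(v):
    return v / np.linalg.norm(v)

n_tx, K = 8, 64
# Pre-shared candidate set; in the paper it is generated by unsupervised learning.
codebook = np.array([unit(rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx))
                     for _ in range(K)])

# Receiver side: estimate the channel and the ideal MRT beamformer.
h = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
v_ideal = unit(h.conj())

# Select the candidate with the largest beamforming gain |h^T v|.
gains = np.abs(codebook @ h)
idx = int(np.argmax(gains))
print(f"feedback index: {idx} ({int(np.log2(K))} bits)")
print("gain of ideal beamformer:   ", np.abs(h @ v_ideal))
print("gain of selected candidate: ", gains[idx])
```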

    Enhanced Index-Based Feedback Overhead Reduction for WLANs

    Compressed beamforming is used in the current Wi-Fi standard to reduce the beamforming feedback overhead (BFO). However, with each new amendment of the standard, the number of supported antennas in Wi-Fi devices increases, leading to increased BFO and hampering the throughput despite the use of compressed beamforming. In this paper, a novel index-based method is presented to reduce the BFO in Wi-Fi links. In particular, a k-means clustering-based approach is presented to generate candidate beamforming feedback matrices, thereby reducing the BFO to only the index of the selected candidate matrix. With extensive simulation results, we compare the newly proposed method with the IEEE 802.11be baseline and our previously published index-based method. We show a gain of approximately 54% in throughput at high signal-to-noise ratio (SNR) against the IEEE 802.11be baseline. Our comparison also shows a gain of approximately 4 dB over our previously published method at a packet error rate (PER) of 0.01 using MCS index 11. Additionally, we discuss the impact of the distance metric chosen for clustering, as well as of the candidate selection, on the link performance.
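    The sketch below illustrates the codebook-construction step with scikit-learn's k-means over synthetic training beamformers, stacking real and imaginary parts so the standard Euclidean distance applies. It is a generic illustration under assumed parameters, not the paper's exact pipeline or distance metric.

```python
# Build a candidate beamforming codebook by k-means clustering of training
# beamformers; feedback then reduces to a log2(K)-bit index into this codebook.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

n_tx, K, n_train = 8, 64, 20_000

# Training beamformers: normalized MRT vectors from random Rayleigh channels.
H = rng.standard_normal((n_train, n_tx)) + 1j * rng.standard_normal((n_train, n_tx))
V = H.conj() / np.linalg.norm(H, axis=1, keepdims=True)

# Stack real and imaginary parts so k-means can cluster the complex vectors.
V_real = np.hstack([V.real, V.imag])
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(V_real)

# The K centroids (re-normalized) form the shared candidate codebook.
centers = km.cluster_centers_[:, :n_tx] + 1j * km.cluster_centers_[:, n_tx:]
codebook = centers / np.linalg.norm(centers, axis=1, keepdims=True)
print("codebook shape:", codebook.shape, "-> feedback bits:", int(np.log2(K)))
```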