54 research outputs found
A tight upper bound on channel capacity for visible light communications
Since the optical wireless channel in visible light communication (VLC) systems is subject to non-negativity and average-optical-power constraints on the transmitted signal, the classic Shannon channel capacity formula is not applicable to VLC systems. To derive a simple closed-form upper bound on channel capacity, the sphere-packing argument has been applied previously. However, there is an obvious gap between the existing sphere-packing upper bounds and the lower bounds at high optical signal-to-noise ratios (OSNRs), which is mainly caused by the inaccurate mathematical approximation of the intrinsic volumes of the simplex. In this letter, a tight sphere-packing upper bound is derived with a new approximation method. Numerical results demonstrate that, compared to the existing sphere-packing upper bounds, our proposed upper bound is tighter at high OSNRs.
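The letter's specific bound is not reproduced in this abstract, but the role such closed-form expressions play can be illustrated with the classical lower bound for the average-power-constrained IM/DD channel, C >= (1/2) log2(1 + e P^2 / (2 pi sigma^2)). The sketch below simply evaluates it across OSNRs; the function name is hypothetical and OSNR is taken as P/sigma in dB, which is an assumption of this sketch rather than the letter's definition.

```python
import numpy as np

def capacity_lower_bound(osnr_db, sigma=1.0):
    """Classical lower bound for the average-power-constrained IM/DD channel:
    C >= 0.5 * log2(1 + e * P^2 / (2*pi*sigma^2)) bits per channel use.
    Here OSNR is taken as P/sigma, expressed in dB (an assumption)."""
    p = sigma * 10 ** (osnr_db / 10.0)
    return 0.5 * np.log2(1 + np.e * p**2 / (2 * np.pi * sigma**2))

osnrs = np.arange(0, 31, 5)
for snr, c in zip(osnrs, capacity_lower_bound(osnrs.astype(float))):
    print(f"OSNR {snr:2d} dB -> C >= {c:.2f} bits")
```

Comparing an upper bound against a curve like this at high OSNR is exactly how the tightness claim in the letter would be checked numerically.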
Gaussian Differential Privacy on Riemannian Manifolds
We develop an advanced approach for extending Gaussian Differential Privacy
(GDP) to general Riemannian manifolds. The concept of GDP stands out as a
prominent privacy definition that strongly warrants extension to manifold
settings, due to its central limit properties. By harnessing the power of the
renowned Bishop-Gromov theorem in geometric analysis, we propose a Riemannian
Gaussian distribution that integrates the Riemannian distance, allowing us to
achieve GDP in Riemannian manifolds with bounded Ricci curvature. To the best
of our knowledge, this work marks the first instance of extending the GDP
framework to accommodate general Riemannian manifolds, encompassing curved
spaces, and circumventing the reliance on tangent space summaries. We provide a
simple algorithm to evaluate the privacy budget on any one-dimensional
manifold and introduce a versatile Markov Chain Monte Carlo (MCMC)-based
algorithm to compute it on any Riemannian manifold with constant
curvature. Through simulations on one of the most prevalent manifolds in
statistics, the unit sphere, we demonstrate the superior utility of our
Riemannian Gaussian mechanism in comparison to the previously proposed
Riemannian Laplace mechanism for implementing GDP.
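As a concrete illustration of the MCMC route on a one-dimensional manifold, the sketch below (hypothetical names, not the paper's code) runs a Metropolis-Hastings sampler for a Riemannian Gaussian density proportional to exp(-d(x, mu)^2 / (2 sigma^2)) on the unit circle, with d the geodesic (arc-length) distance:

```python
import numpy as np

def geodesic_dist_s1(x, y):
    """Geodesic distance between two angles on the unit circle."""
    d = np.abs(x - y) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def sample_riemannian_gaussian_s1(mu, sigma, n_samples, rng, step=0.5):
    """Metropolis-Hastings sampler for a density proportional to
    exp(-d(x, mu)^2 / (2*sigma^2)) on the circle."""
    x = mu
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = (x + rng.normal(0.0, step)) % (2 * np.pi)
        log_ratio = (geodesic_dist_s1(x, mu) ** 2
                     - geodesic_dist_s1(prop, mu) ** 2) / (2 * sigma**2)
        if np.log(rng.uniform()) < log_ratio:
            x = prop
        out[i] = x
    return out

rng = np.random.default_rng(0)
samples = sample_riemannian_gaussian_s1(mu=1.0, sigma=0.3, n_samples=5000, rng=rng)
```

On the circle the geodesic distance replaces the Euclidean one in the exponent, which is the key change the Riemannian Gaussian distribution makes.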
An optimal scaling scheme for DCO-OFDM based visible light communications
DC-biased optical orthogonal frequency-division multiplexing (DCO-OFDM) is widely used in visible light communication (VLC) systems to provide high-data-rate transmission. As intensity modulation with direct detection (IM/DD) is employed to modulate the OFDM signal, scaling up the amplitude of the signal increases the effective transmitted electrical power, but more of the signal is likely to be clipped due to the limited dynamic range of LEDs, resulting in severe clipping distortion. Thus, it is crucial to scale the signal to strike a tradeoff between the effective electrical power and the clipping distortion. In this paper, an optimal scaling scheme is proposed to maximize the received signal-to-noise-plus-distortion ratio (SNDR) under a radiated-optical-power constraint in a practical scenario where the DC bias is fixed for a desired dimming level. Simulation results show that the system with the optimal scaling factor outperforms that with a fixed scaling factor under different equivalent noise powers in terms of bit error ratio (BER) performance.
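The tradeoff described above can be sketched numerically. The toy Monte-Carlo search below (all names and parameter values are illustrative assumptions, not the paper's scheme) scales a Gaussian time-domain signal, applies a fixed DC bias and the LED clipping range, and picks the scaling factor that maximizes a Bussgang-style SNDR estimate:

```python
import numpy as np

def sndr_for_scaling(alpha, dc_bias=0.5, i_max=1.0, noise_power=0.01,
                     n=100_000, seed=0):
    """Monte-Carlo SNDR estimate when a zero-mean, unit-power Gaussian
    OFDM-like signal is scaled by alpha, DC-biased, and clipped to the
    LED dynamic range [0, i_max]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)
    tx = np.clip(alpha * x + dc_bias, 0.0, i_max)
    # Bussgang decomposition: tx = k*x + d, with d uncorrelated with x
    k = np.dot(tx, x) / np.dot(x, x)
    d = tx - k * x - tx.mean()          # distortion, DC removed
    return (k ** 2) / (d.var() + noise_power)

alphas = np.linspace(0.05, 1.0, 20)
sndrs = [sndr_for_scaling(a) for a in alphas]
best = float(alphas[int(np.argmax(sndrs))])
print(f"best scaling factor ~ {best:.2f}")
```

Small alpha wastes electrical power while large alpha is dominated by clipping distortion, so the SNDR peaks at an interior scaling factor, which is the tradeoff the optimal scheme exploits.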
Online Local Differential Private Quantile Inference via Self-normalization
Based on binary inquiries, we develop an algorithm to estimate population
quantiles under Local Differential Privacy (LDP). By self-normalizing, our
algorithm provides asymptotically normal estimation with valid inference,
resulting in tight confidence intervals without the need to estimate nuisance
parameters. Our proposed method can be conducted fully online, leading to high
computational efficiency and minimal storage requirements. We also prove an
optimality result by an elegant application of a central limit theorem of
Gaussian Differential Privacy (GDP) when targeting the frequently encountered
median estimation problem. With mathematical proofs and extensive numerical
testing, we demonstrate the validity of our algorithm both theoretically and
experimentally.
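A minimal sketch of the binary-inquiry idea (hypothetical names; the paper's actual estimator and its self-normalized inference are not reproduced here): privatize each indicator 1{x <= theta} with randomized response, debias it on the server side, and take Robbins-Monro steps toward the target quantile:

```python
import numpy as np

def online_ldp_quantile(stream, tau=0.5, eps=2.0, theta0=0.0, seed=0):
    """Online quantile estimation from binary inquiries under eps-LDP.
    Each user reports the bit 1{x <= theta} through randomized response;
    the server debiases it and takes a Robbins-Monro step."""
    rng = np.random.default_rng(seed)
    p = np.exp(eps) / (1.0 + np.exp(eps))   # prob. of reporting truthfully
    theta = theta0
    avg = 0.0
    for t, x in enumerate(stream, start=1):
        bit = float(x <= theta)
        if rng.uniform() > p:                # flip with prob. 1 - p
            bit = 1.0 - bit
        debiased = (bit - (1.0 - p)) / (2.0 * p - 1.0)
        theta -= (debiased - tau) / t ** 0.6  # stochastic-approximation step
        avg += (theta - avg) / t              # running average of iterates
    return avg

data = np.random.default_rng(1).normal(0.0, 1.0, 20000)
est = online_ldp_quantile(data, theta0=2.0)
print(f"LDP median estimate: {est:.3f}")
```

Each user's raw value never leaves the device, only one privatized bit per user is transmitted, and the server keeps constant state, matching the fully online, low-storage setting described above.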
Interpreting Distributional Reinforcement Learning: A Regularization Perspective
Distributional reinforcement learning~(RL) is a class of state-of-the-art
algorithms that estimate the whole distribution of the total return rather than
only its expectation. Despite the remarkable performance of distributional RL,
a theoretical understanding of its advantages over expectation-based RL remains
elusive. In this paper, we attribute the superiority of distributional RL to
its regularization effect in terms of the value distribution information
beyond its expectation. Firstly, by leveraging a variant of the gross
error model in robust statistics, we decompose the value distribution into its
expectation and the remaining distribution part. As such, the extra benefit of
distributional RL compared with expectation-based RL is mainly interpreted as
the impact of a \textit{risk-sensitive entropy regularization} within the
Neural Fitted Z-Iteration framework. Meanwhile, we establish a bridge between
the risk-sensitive entropy regularization of distributional RL and the vanilla
entropy in maximum entropy RL, focusing specifically on actor-critic
algorithms. It reveals that distributional RL induces a corrected reward
function and thus promotes a risk-sensitive exploration against the intrinsic
uncertainty of the environment. Finally, extensive experiments corroborate the
role of the regularization effect of distributional RL and uncover the mutual
impact of different entropy regularizations. Our research paves the way towards
better interpreting the efficacy of distributional RL algorithms, especially
through the lens of regularization.
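The decomposition described above can be illustrated on a categorical (C51-style) return distribution: split it into its expectation and a mean-centered remainder, whose Shannon entropy is the kind of quantity interpreted as a regularizer (a toy illustration, not the Neural Fitted Z-Iteration framework; shifting the support does not change the entropy, so the full distribution's entropy equals the residual's):

```python
import numpy as np

def decompose_return_distribution(atoms, probs):
    """Split a categorical return distribution into its expectation and a
    mean-centered residual, and report the distribution's Shannon entropy."""
    mean = float(np.dot(atoms, probs))
    centered_atoms = atoms - mean          # support of the residual part
    entropy = -float(np.sum(probs * np.log(probs + 1e-12)))
    return mean, centered_atoms, entropy

atoms = np.linspace(-10.0, 10.0, 51)       # C51-style fixed support
probs = np.exp(-0.5 * ((atoms - 2.0) / 3.0) ** 2)
probs /= probs.sum()
mean, residual_support, ent = decompose_return_distribution(atoms, probs)
print(f"E[Z] = {mean:.3f}, residual entropy = {ent:.3f} nats")
```

Expectation-based RL would keep only `mean`; the residual distribution carries the extra information whose effect the paper interprets as risk-sensitive entropy regularization.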
M-estimation in Low-rank Matrix Factorization: a General Framework
Many problems in science and engineering can be reduced to the recovery of an unknown large matrix from a small number of random linear measurements. Matrix factorization is arguably the most popular approach for low-rank matrix recovery. Many methods have been proposed using different loss functions, for example the most widely used L_2 loss, more robust choices such as the L_1 and Huber losses, and the quantile and expectile losses for skewed data. All of them can be unified into the framework of M-estimation. In this paper, we present a general framework of low-rank matrix factorization based on M-estimation in statistics. The framework mainly involves two steps: firstly, we apply Nesterov's smoothing technique to obtain an optimal smooth approximation of a non-smooth loss function, such as the L_1 and quantile losses; secondly, we exploit an alternating updating scheme along with Nesterov's momentum method at each step to minimize the smoothed loss function. A strong theoretical convergence guarantee has been developed for the general framework, and extensive numerical experiments have been conducted to illustrate the performance of the proposed algorithm.
Learning Privately over Distributed Features: An ADMM Sharing Approach
Distributed machine learning has been widely studied in order to handle the exploding amount of data. In this paper, we study an important yet less-studied distributed learning problem where features are inherently distributed, or vertically partitioned, among multiple parties, and sharing of raw data or model parameters among parties is prohibited due to privacy concerns. We propose an ADMM sharing framework to approach risk minimization over distributed features, where each party only needs to share a single value for each sample in the training process, thus minimizing the data leakage risk. We introduce a novel differentially private ADMM sharing algorithm and bound the privacy guarantee with carefully designed noise perturbation. Experiments based on a prototype system show that the proposed ADMM algorithms converge efficiently and robustly, demonstrating an advantage over gradient-based methods, especially for data sets with high-dimensional features.
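The sharing pattern can be sketched for a ridge-regression loss over two vertically partitioned feature blocks, where each party transmits only its per-sample contribution A_k @ x_k. This is a simplified, non-private sketch of a sharing-style ADMM; the names and the specific ADMM variant are illustrative assumptions, not the paper's private algorithm:

```python
import numpy as np

def admm_sharing_ridge(feature_blocks, y, lam=0.1, rho=1.0, iters=100):
    """ADMM over vertically partitioned features: party k holds A_k and its
    own coefficients x_k, and shares only the per-sample vector A_k @ x_k.
    Solves min 0.5*||sum_k A_k x_k - y||^2 + 0.5*lam*sum_k ||x_k||^2."""
    n = y.shape[0]
    xs = [np.zeros(A.shape[1]) for A in feature_blocks]
    shares = [np.zeros(n) for _ in feature_blocks]  # the only shared values
    z = np.zeros(n)
    u = np.zeros(n)                                 # scaled dual variable
    for _ in range(iters):
        for k, A in enumerate(feature_blocks):
            c = sum(shares) - shares[k] - z + u     # all party k needs to see
            xs[k] = -np.linalg.solve(lam / rho * np.eye(A.shape[1]) + A.T @ A,
                                     A.T @ c)
            shares[k] = A @ xs[k]
        s = sum(shares)
        z = (y + rho * (s + u)) / (1.0 + rho)
        u = u + s - z
    return xs

rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 4))
w = rng.normal(size=7)
y = np.concatenate([A1, A2], axis=1) @ w
xs = admm_sharing_ridge([A1, A2], y, lam=1e-3)
resid = np.linalg.norm(A1 @ xs[0] + A2 @ xs[1] - y)
```

Raw features and coefficients never leave their party; the paper's differentially private variant would additionally perturb the shared per-sample values with calibrated noise.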
Marker-assisted pyramiding of two brown planthopper resistance genes, Bph3 and Bph27(t), into elite rice cultivars
Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving
With widening deployments of natural language processing (NLP) in daily life, inherited social biases from NLP models have become more severe and problematic. Previous studies have shown that word embeddings trained on human-generated corpora have strong gender biases that can produce discriminative results in downstream tasks.
Previous debiasing methods focus mainly on modeling bias and only implicitly consider semantic information, while completely overlooking the complex underlying causal structure between bias and semantic components. To address these issues, we propose a novel methodology that leverages a causal inference framework to effectively remove gender bias. The proposed method allows us to construct and analyze the complex causal mechanisms facilitating gender information flow while retaining oracle semantic information within word embeddings. Our comprehensive experiments show that the proposed method achieves state-of-the-art results in gender-debiasing tasks. In addition, our method yields better performance in word similarity evaluation and various extrinsic downstream NLP tasks.
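For contrast with the causal approach, the classical projection-based baseline it improves on (hard debiasing in the style of Bolukbasi et al., not the paper's method) can be sketched in a few lines; the embeddings below are random stand-ins, not trained vectors:

```python
import numpy as np

def gender_direction(emb, pairs):
    """Estimate a gender direction as the leading singular vector of
    differences between gendered word pairs (hard-debias style)."""
    diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]

def hard_debias(vec, g):
    """Remove the component of vec along the (unit-normalized) direction g."""
    g = g / np.linalg.norm(g)
    return vec - np.dot(vec, g) * g

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "man", "woman", "doctor"]}
g = gender_direction(emb, [("he", "she"), ("man", "woman")])
debiased = hard_debias(emb["doctor"], g)
```

Such a projection removes the gender component but also discards any legitimate semantic content aligned with it, which is precisely the shortcoming a causal treatment of the bias and semantic components aims to avoid.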