Detection for 5G-NOMA: An Online Adaptive Machine Learning Approach
Non-orthogonal multiple access (NOMA) has emerged as a promising radio access
technique for enabling the performance enhancements promised by
fifth-generation (5G) networks in terms of connectivity, low latency, and high
spectrum efficiency. In the NOMA uplink, successive interference cancellation
(SIC) based detection with device clustering has been suggested. In the case of
multiple receive antennas, SIC can be combined with the minimum mean-squared
error (MMSE) beamforming. However, there exists a tradeoff between the NOMA
cluster size and the incurred SIC error: larger clusters lead to larger errors
but are desirable from a spectrum-efficiency and connectivity point of
view. We propose a novel online-learning-based detection scheme for the NOMA uplink.
In particular, we design an online adaptive filter in the sum space of linear
and Gaussian reproducing kernel Hilbert spaces (RKHSs). Such a sum space design
is robust against variations of a dynamic wireless network that can deteriorate
the performance of a purely nonlinear adaptive filter. We demonstrate by
simulations that the proposed method outperforms the MMSE-SIC based detection
for large cluster sizes.
Comment: Accepted at ICC 201
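The sum-space design described above can be sketched as an online kernel LMS filter whose estimate is the sum of a linear term and a growing Gaussian-kernel expansion, each updated by a stochastic gradient step. This is only an illustrative sketch of the sum-space idea with assumed step sizes and kernel width; the paper's detector additionally operates inside the SIC receiver chain.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

class SumSpaceKLMS:
    """Online adaptive filter in the sum of a linear and a Gaussian RKHS.

    The estimate is f(x) = w^T x + sum_i alpha_i k(x_i, x); both components
    are updated by a stochastic gradient step on the instantaneous squared
    error.  Step sizes and kernel width are illustrative assumptions.
    """
    def __init__(self, dim, eta_lin=0.2, eta_ker=0.5, sigma=0.7):
        self.w = np.zeros(dim)   # coefficient of the linear-RKHS component
        self.centers = []        # dictionary of the Gaussian-RKHS component
        self.alphas = []
        self.eta_lin, self.eta_ker, self.sigma = eta_lin, eta_ker, sigma

    def predict(self, x):
        ker = sum(a * gaussian_kernel(c, x, self.sigma)
                  for a, c in zip(self.alphas, self.centers))
        return self.w @ x + ker

    def update(self, x, d):
        e = d - self.predict(x)          # instantaneous error
        self.w += self.eta_lin * e * x   # linear part: plain LMS step
        self.centers.append(np.asarray(x, dtype=float).copy())
        self.alphas.append(self.eta_ker * e)  # kernel part: new dictionary entry
        return e
```

The linear component keeps tracking gross channel variations even when the Gaussian dictionary becomes stale, which is the robustness argument made in the abstract; a practical implementation would also sparsify the dictionary.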
Distributed Adaptive Learning with Multiple Kernels in Diffusion Networks
We propose an adaptive scheme for distributed learning of nonlinear functions
by a network of nodes. The proposed algorithm consists of a local adaptation
stage utilizing multiple kernels with projections onto hyperslabs and a
diffusion stage to achieve consensus on the estimates over the whole network.
Multiple kernels are incorporated to enhance the approximation of functions
with both high- and low-frequency components, as are common in practical scenarios.
We provide a thorough convergence analysis of the proposed scheme based on the
metric of the Cartesian product of multiple reproducing kernel Hilbert spaces.
To this end, we introduce a modified consensus matrix considering this specific
metric and prove its equivalence to the ordinary consensus matrix. Moreover, the
use of hyperslabs enables a significant reduction of the computational demand
with only a minor loss in performance. Numerical evaluations with synthetic
and real data demonstrate the efficacy of the proposed algorithm compared to
state-of-the-art schemes.
Comment: Double-column, 15 pages, 10 figures, submitted to IEEE Trans. Signal Processing
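The adapt-then-combine structure above can be illustrated with a minimal single-process simulation: each node projects its coefficient vector onto the hyperslab defined by its newest sample (skipping any update when the estimate is already accurate enough), and a combination matrix then averages neighbouring estimates. The dictionary, kernel widths, hyperslab width `eps`, and combination matrix below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def multikernel_features(x, centers, sigmas):
    """Feature vector of Gaussian kernels with several widths, so that both
    low- and high-frequency components of the target can be captured."""
    return np.concatenate(
        [np.exp(-(centers - x) ** 2 / (2.0 * s ** 2)) for s in sigmas])

def hyperslab_project(w, phi, d, eps):
    """Metric projection of w onto the hyperslab {v : |phi^T v - d| <= eps}.
    When w already lies inside, it is returned unchanged at negligible cost."""
    r = float(phi @ w) - d
    if abs(r) <= eps:
        return w
    return w - ((abs(r) - eps) * np.sign(r) / float(phi @ phi)) * phi

def adapt_then_combine(W, data, C, centers, sigmas, eps=0.05):
    """One diffusion round: every node k projects its row W[k] onto the
    hyperslab of its newest sample (x, d), then the row-stochastic
    combination matrix C averages the intermediate estimates."""
    psi = np.stack([
        hyperslab_project(W[k], multikernel_features(x, centers, sigmas), d, eps)
        for k, (x, d) in enumerate(data)])
    return C @ psi
```

Because the projection is the identity whenever the current estimate is within `eps` of the new sample, many rounds cost only one inner product per node, which is the computational saving mentioned in the abstract.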
Early stopping and non-parametric regression: An optimal data-dependent stopping rule
The strategy of early stopping is a regularization technique based on
choosing a stopping time for an iterative algorithm. Focusing on non-parametric
regression in a reproducing kernel Hilbert space, we analyze the early stopping
strategy for a form of gradient descent applied to the least-squares loss
function. We propose a data-dependent stopping rule that does not involve
hold-out or cross-validation data, and we prove upper bounds on the squared
error of the resulting function estimate, measured in either the empirical
L^2 norm or the population L^2 norm. These upper bounds lead to minimax-optimal rates for various
kernel classes, including Sobolev smoothness classes and other forms of
reproducing kernel Hilbert spaces. We show through simulation that our stopping
rule compares favorably to two other stopping rules, one based on hold-out data
and the other based on Stein's unbiased risk estimate. We also establish a
tight connection between our early stopping strategy and the solution path of a
kernel ridge regression estimator.
Comment: 29 pages, 4 figures
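A caricature of early-stopped kernel gradient descent: diagonalising the kernel matrix shows that t gradient steps shrink each eigen-coordinate of the response vector by (1 - eta*mu_i)^t, so a stopping time can be chosen from the empirical eigenvalues alone, without hold-out data. The rule below, which stops when an eigenvalue-based bias-variance proxy stops decreasing, is a simplified stand-in for the paper's data-dependent rule, not its actual construction (it also assumes the noise level is known).

```python
import numpy as np

def gaussian_gram(x, sigma=0.5):
    """Gram matrix of the Gaussian kernel on 1-D inputs."""
    return np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * sigma ** 2))

def early_stop_kernel_gd(K, y, noise_sigma, eta=0.5, t_max=5000):
    """Gradient descent on the empirical least-squares loss over an RKHS,
    stopped by a simple eigenvalue-based risk proxy.

    In the eigenbasis of K/n, the fitted values after t steps are
    (1 - (1 - eta*mu_i)^t) z_i, so the unfitted-signal (bias) and
    fitted-noise (variance) proxies are computable in closed form.
    """
    n = len(y)
    mu, U = np.linalg.eigh(K / n)
    mu = np.clip(mu, 0.0, None)      # guard against tiny negative eigenvalues
    z = U.T @ y                      # eigen-coordinates of the responses
    prev_risk = np.inf
    for t in range(1, t_max + 1):
        shrink = (1.0 - eta * mu) ** t
        risk = (np.sum((shrink * z) ** 2)                       # bias proxy
                + noise_sigma ** 2 * np.sum((1 - shrink) ** 2)  # variance proxy
                ) / n
        if risk > prev_risk:         # proxy stopped decreasing: step back
            shrink = (1.0 - eta * mu) ** (t - 1)
            return U @ ((1 - shrink) * z), t - 1
        prev_risk = risk
    return U @ ((1 - shrink) * z), t_max
```

Since the Gram matrix of a Gaussian kernel has unit diagonal, the eigenvalues of K/n sum to one, so eta = 0.5 keeps every factor 1 - eta*mu_i in (0, 1) and the iteration stable.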
On-line regression competitive with reproducing kernel Hilbert spaces
We consider the problem of on-line prediction of real-valued labels, assumed
bounded in absolute value by a known constant, of new objects from known
labeled objects. The prediction algorithm's performance is measured by the
squared deviation of the predictions from the actual labels. No stochastic
assumptions are made about the way the labels and objects are generated.
Instead, we are given a benchmark class of prediction rules some of which are
hoped to produce good predictions. We show that for a wide range of
infinite-dimensional benchmark classes one can construct a prediction algorithm
whose cumulative loss over the first N examples does not exceed the cumulative
loss of any prediction rule in the class plus O(sqrt(N)); the main differences
from the known results are that we do not impose any upper bound on the norm of
the considered prediction rules and that we achieve an optimal leading term in
the excess loss of our algorithm. If the benchmark class is "universal" (dense
in the class of continuous functions on each compact set), this provides an
on-line non-stochastic analogue of universally consistent prediction in
non-parametric statistics. We use two proof techniques: one is based on the
Aggregating Algorithm and the other on the recently developed method of
defensive forecasting.
Comment: 37 pages, 1 figure
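The flavour of such an on-line predictor can be sketched with clipped kernel ridge regression: at each round, predict with the regularized least-squares solution over the examples seen so far and clip the prediction to the known label bound Y. This is a close relative of, not identical to, the Aggregating Algorithm predictor analysed in the paper, and the kernel, regularization constant, and bound below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """Gaussian kernel on R^d."""
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

class OnlineClippedKRR:
    """On-line regression via kernel ridge regression with clipping.

    Labels are assumed bounded in absolute value by the known constant Y,
    so every prediction is clipped to [-Y, Y].  Naive O(N^3) refit per
    round; a real implementation would update the inverse incrementally.
    """
    def __init__(self, a=1.0, Y=1.0, sigma=1.0):
        self.a, self.Y, self.sigma = a, Y, sigma
        self.X, self.y = [], []

    def predict(self, x):
        """Predict the label of x from all previously observed pairs."""
        if not self.X:
            return 0.0
        k = np.array([gaussian_kernel(xi, x, self.sigma) for xi in self.X])
        K = np.array([[gaussian_kernel(u, v, self.sigma) for v in self.X]
                      for u in self.X])
        gamma = np.array(self.y) @ np.linalg.solve(
            K + self.a * np.eye(len(self.X)), k)
        return float(np.clip(gamma, -self.Y, self.Y))

    def observe(self, x, y):
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(y))
```

Clipping can only reduce the squared loss against bounded labels, which is why the known bound on the labels appears explicitly in the protocol described above.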