Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks
In this paper, a sequential probing method for interference constraint
learning is proposed that allows a centralized Cognitive Radio Network (CRN)
to access the frequency band of a Primary User (PU) in an underlay cognitive
scenario with a designed PU protection specification. The main idea is that the
CRN probes the PU and subsequently eavesdrops on the reverse PU link to acquire
the binary ACK/NACK packet. This feedback indicates whether the probing-induced
interference is harmful and can be used to learn the PU interference
constraint. The cognitive part of this sequential probing process is the
selection of the power levels of the Secondary Users (SUs), which aims to learn
the PU interference constraint with a minimum number of probing attempts while
limiting the number of harmful probing-induced interference events, or
equivalently of NACK packet observations, over a time window. This constrained
design problem is studied within the Active Learning (AL) framework, and an
optimal solution is derived and implemented with a sophisticated, accurate, and
fast Bayesian learning method, Expectation Propagation (EP). The
performance of this solution is demonstrated through numerical simulations
and compared with modified versions of AL techniques we developed in earlier
work.

Comment: 14 pages, 6 figures, submitted to IEEE JSTSP Special Issue on Machine
Learning for Cognition in Radio Communications and Radar
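The probing loop described in this abstract can be caricatured in one dimension: a scalar interference power threshold is learned from binary ACK/NACK feedback under a budget on harmful (NACK) events. This is only a minimal sketch; the paper works with multi-SU power vectors and an EP-based Bayesian learner, and `probe`, `learn_threshold`, and every number below are illustrative assumptions, not the authors' method.

```python
def probe(power, true_threshold):
    """Simulated PU feedback: True means ACK (interference tolerable),
    False means NACK (harmful interference)."""
    return power <= true_threshold

def learn_threshold(true_threshold, lo=0.0, hi=10.0,
                    nack_budget=3, max_probes=20):
    """Bisection-style active probing of a scalar interference power
    threshold under a NACK budget (toy stand-in for the paper's
    constrained active-learning formulation)."""
    nacks = 0
    for _ in range(max_probes):
        if hi - lo < 1e-3:
            break
        mid = 0.5 * (lo + hi)
        if probe(mid, true_threshold):
            lo = mid                 # ACK: threshold lies above mid
        else:
            nacks += 1               # NACK: harmful probe observed
            hi = mid                 # threshold lies below mid
            if nacks >= nack_budget:
                break                # NACK budget exhausted, stop probing
    return 0.5 * (lo + hi), nacks
```

Each probe halves the uncertainty interval, which mirrors the abstract's goal of learning the constraint with few probing attempts while capping the number of NACK observations.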
Generative Adversarial Positive-Unlabelled Learning
In this work, we consider the task of classifying binary positive-unlabeled
(PU) data. Existing discriminative learning based PU models attempt to seek
an optimal reweighting strategy for the U data so that a decent decision
boundary can be found. However, given limited P data, conventional PU models
tend to suffer from overfitting when adapted to very flexible deep neural
networks. In contrast, we are the first to propose a new paradigm for the
binary PU task from the perspective of generative learning, by leveraging
powerful generative adversarial networks (GANs). Our generative
positive-unlabeled (GenPU) framework incorporates an array of discriminators
and generators that are endowed with different roles in simultaneously
producing realistic positive and negative samples. We provide a theoretical
analysis to justify that, at equilibrium, GenPU is capable of recovering both
the positive and negative data distributions. Moreover, we show that GenPU is
generalizable and closely related to semi-supervised classification. Given
rather limited P data, experiments on both synthetic and real-world datasets
demonstrate the effectiveness of the proposed framework. With infinite
realistic and diverse sample streams generated from GenPU, a very flexible
classifier can then be trained using deep neural networks.

Comment: 8 pages
Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
We consider the problem of learning a binary classifier from only positive
and unlabeled observations (called PU learning). Recent studies in PU learning
have shown superior performance both theoretically and empirically. However,
most existing algorithms may not be suitable for large-scale datasets because
they require repeated computations of a large Gram matrix or massive
hyperparameter optimization. In this paper, we propose a computationally
efficient and theoretically grounded PU learning algorithm. The proposed
algorithm produces a closed-form classifier when the hypothesis space is a
closed ball in a reproducing kernel Hilbert space. In addition, we establish
upper bounds on the estimation error and the excess risk. The obtained
estimation error bound is sharper than existing results, and the derived
excess risk bound has an explicit form that vanishes as the sample sizes
increase. Finally, we conduct extensive numerical experiments using both
synthetic and real datasets, demonstrating the improved accuracy, scalability,
and robustness of the proposed algorithm.

Comment: 32 pages; Accepted for ACML 201
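A closed-form kernel PU classifier of the general flavor this abstract describes can be sketched with kernel mean embeddings: the negative-class embedding is recovered from the unlabeled sample via the mixture identity mu_u = prior * mu_p + (1 - prior) * mu_n, and the score is an MMD-witness-style difference. This is a toy sketch under assumed names (`rbf`, `pu_witness_score`) and does not reproduce the paper's weighted integral probability metric or its optimal weights.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of x and rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def pu_witness_score(x, X_p, X_u, prior, gamma=1.0):
    """Closed-form PU score from empirical kernel mean embeddings:
    a positive score predicts +1. Toy sketch only; not the paper's
    weighted-IPM classifier."""
    k_p = rbf(x, X_p, gamma).mean(axis=1)      # <k(x,.), mu_p_hat>
    k_u = rbf(x, X_u, gamma).mean(axis=1)      # <k(x,.), mu_u_hat>
    k_n = (k_u - prior * k_p) / (1.0 - prior)  # implied negative embedding
    return k_p - k_n                           # witness-style score
```

Because the score is a fixed linear combination of kernel evaluations, no iterative optimization is needed, which is the computational point the abstract makes about closed-form classifiers in an RKHS ball.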
