12,039 research outputs found
Analysis of Spectrum Occupancy Using Machine Learning Algorithms
In this paper, we analyze spectrum occupancy using different machine
learning techniques. Both supervised techniques (naive Bayesian classifier
(NBC), decision trees (DT), support vector machine (SVM), linear regression
(LR)) and an unsupervised algorithm (hidden Markov model (HMM)) are studied to
find the technique with the highest classification accuracy (CA). A
detailed comparison of the supervised and unsupervised algorithms in terms of
computational time and classification accuracy is performed. The classified
occupancy status is further used to evaluate the probability of secondary-user
outage in future time slots, which system designers can use to
define spectrum allocation and spectrum sharing policies. Numerical results
show that SVM is the best algorithm among all the supervised and unsupervised
classifiers. Based on this, we propose a new SVM algorithm combined with
the firefly algorithm (FFA), which is shown to outperform all other
algorithms.
Comment: 21 pages, 6 figures
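As an illustration of the kind of supervised occupancy classification the paper compares, the following sketch trains an SVM on synthetic per-slot energy measurements. The energy feature, signal offset, and data sizes are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: SVM classification of spectrum occupancy per time slot.
# The single energy feature and the 3-sigma signal offset are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
occupied = rng.integers(0, 2, n)                 # ground-truth occupancy per slot
energy = rng.normal(0.0, 1.0, n) + 3.0 * occupied  # occupied slots carry extra power
X = energy.reshape(-1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, occupied, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"classification accuracy: {acc:.2f}")
```

The classified labels could then feed the kind of outage-probability evaluation the abstract describes; the FFA-tuned SVM variant would additionally search over the SVM hyperparameters.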
Machine learning techniques applied to multiband spectrum sensing in cognitive radios
This research received funding from the Mexican National Council of Science and Technology (CONACYT), Grant no. 490180. This work was also supported by the Program for Professional Teacher Development (PRODEP).
In this work, three machine learning techniques (neural networks, expectation maximization and k-means) are applied to a multiband spectrum sensing technique for cognitive radios. All of them are used as classifiers on the approximation coefficients from a multiresolution analysis in order to detect the presence of one or multiple primary users in a wideband spectrum. The methods were tested on simulated and real signals, showing good performance. The results show that these three methods are effective options for detecting primary-user transmissions in the multiband spectrum. These methodologies work in 99% of cases for simulated signals with SNR higher than 0 dB and are feasible in the case of real signals.
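A minimal sketch of the k-means variant of this idea follows: level-1 Haar approximation coefficients of a wideband power spectrum are clustered into two groups, and the higher-energy cluster is flagged as occupied. The spectrum shape, sub-band positions, and signal levels are illustrative assumptions, not the authors' data.

```python
# Illustrative sketch (not the authors' exact pipeline): k-means on Haar
# approximation coefficients of a wideband power spectrum to flag occupied sub-bands.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_bins = 256
psd = rng.normal(0.0, 1.0, n_bins) ** 2          # noise floor (assumed)
psd[64:96] += 8.0                                # primary user in one sub-band
psd[160:192] += 8.0                              # second primary user

# Level-1 Haar approximation coefficients: scaled pairwise averages.
approx = (psd[0::2] + psd[1::2]) / np.sqrt(2.0)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(approx.reshape(-1, 1))
# Treat the higher-mean cluster as "occupied".
occupied_cluster = int(np.argmax([approx[labels == k].mean() for k in (0, 1)]))
occupied = labels == occupied_cluster
print("occupied coefficient indices:", np.flatnonzero(occupied))
```

The same coefficients could instead be fed to the neural-network or expectation-maximization classifiers the abstract mentions.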
Deep Learning Meets Cognitive Radio: Predicting Future Steps
Learning channel occupancy patterns in order to reuse
underutilised spectrum frequencies without interfering with
the incumbent is a promising approach to overcoming spectrum
limitations. In this work we propose a Deep Learning (DL)
approach to learn the channel occupancy model and predict channel
availability in the next time slots. Our results show that the
proposed DL approach outperforms existing works by 5%. We
also show that our proposed DL approach accurately predicts the
availability of channels for more than one time slot.
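To make the prediction task concrete, here is a deliberately simplified stand-in for the paper's model: the authors use deep learning, whereas this sketch estimates a first-order Markov transition model from a simulated bursty channel trace and predicts the next slot. The persistence probability and trace length are assumptions for illustration.

```python
# Simplified stand-in for the paper's DL predictor: a first-order Markov
# estimate of next-slot channel occupancy on a simulated bursty trace.
import numpy as np

rng = np.random.default_rng(2)
p_stay = 0.9                     # assumed persistence of occupancy across slots
slots = [0]
for _ in range(5000):
    s = slots[-1]
    slots.append(s if rng.random() < p_stay else 1 - s)
slots = np.array(slots)

# Estimate transition probabilities from the observed history.
trans = np.zeros((2, 2))
for a, b in zip(slots[:-1], slots[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)

# Predict each next slot as the most likely transition; score on the trace.
pred = trans[slots[:-1]].argmax(axis=1)
acc = (pred == slots[1:]).mean()
print(f"one-step prediction accuracy: {acc:.2f}")
```

A DL model as in the paper can improve on this baseline by capturing longer-range occupancy patterns and by rolling predictions forward over several future slots.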
Ultrafast processing of pixel detector data with machine learning frameworks
Modern photon science performed at high repetition rate free-electron laser
(FEL) facilities and beyond relies on 2D pixel detectors operating at
increasing frequencies (towards 100 kHz at LCLS-II) and producing rapidly
increasing amounts of data (towards TB/s). This data must be rapidly stored for
offline analysis and summarized in real time. While at LCLS all raw data has
been stored, at LCLS-II this would lead to a prohibitive cost; instead,
enabling real time processing of pixel detector raw data allows reducing the
size and cost of online processing, offline processing and storage by orders of
magnitude while preserving full photon information, by taking advantage of the
compressibility of sparse data typical for LCLS-II applications. We
investigated if recent developments in machine learning are useful in data
processing for high speed pixel detectors and found that typical deep learning
models and autoencoder architectures failed to yield useful noise reduction
while preserving full photon information, presumably because of the very
different statistics and feature sets between computer vision and radiation
imaging. However, we redesigned in TensorFlow mathematically equivalent
versions of the state-of-the-art, "classical" algorithms used at LCLS. The
novel TensorFlow models resulted in elegant, compact and hardware-agnostic
code, gaining 1 to 2 orders of magnitude faster processing on an inexpensive
consumer GPU, reducing by 3 orders of magnitude the projected cost of online
analysis at LCLS-II. Computer vision a decade ago was dominated by hand-crafted
filters; their structure inspired the deep learning revolution resulting in
modern deep convolutional networks; similarly, our novel TensorFlow filters
provide inspiration for designing future deep learning architectures for
ultrafast and efficient processing and classification of pixel detector images
at FEL facilities.
Comment: 9 pages, 9 figures
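The paper's key move is expressing classical detector corrections as batched, branch-free tensor math. The following NumPy sketch (standing in for the paper's TensorFlow implementation) shows that style on an assumed toy pipeline of pedestal subtraction, per-row common-mode correction, and photon thresholding; the frame sizes, offsets, and threshold are all illustrative.

```python
# Illustrative only: the paper ports LCLS's classical corrections to TensorFlow;
# this NumPy sketch shows the same style of batched, branch-free tensor math.
import numpy as np

rng = np.random.default_rng(3)
frames = rng.normal(10.0, 1.0, (8, 64, 64))      # batch of raw detector frames
pedestal = np.full((64, 64), 10.0)               # per-pixel dark offset (assumed)
frames[:, 20, 30] += 50.0                        # one bright "photon" per frame

corrected = frames - pedestal                    # pedestal subtraction
common_mode = np.median(corrected, axis=2, keepdims=True)
corrected -= common_mode                         # per-row common-mode correction
photons = corrected > 25.0                       # simple photon threshold (assumed)

print("photon hits per frame:", photons.sum(axis=(1, 2)))
```

Because every step is an elementwise or reduction op over the whole batch, the same code translates directly to TensorFlow tensors and runs unchanged on a GPU, which is the source of the speedups the abstract reports.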