Location-Verification and Network Planning via Machine Learning Approaches
In-region location verification (IRLV) in wireless networks is the problem of
deciding if user equipment (UE) is transmitting from inside or outside a
specific physical region (e.g., a safe room). The decision process exploits the
features of the channel between the UE and a set of network access points
(APs). We propose a solution based on machine learning (ML) implemented by a
neural network (NN) trained with the channel features (in particular, noisy
attenuation values) collected by the APs for various positions both inside and
outside the specific region. The output is a decision on the UE position
(inside or outside the region). By framing IRLV as a hypothesis testing
problem, we address the optimal positioning of the APs to minimize either
the area under the curve (AUC) of the receiver operating characteristic (ROC)
or the cross entropy (CE) between the NN output and ground truth (available
during the training). To solve the minimization problem, we propose a
two-stage particle swarm optimization (PSO) algorithm. We show that, with
sufficiently long training and an NN with enough neurons, the proposed solution
achieves the performance promised by the Neyman-Pearson (N-P) lemma.
Comment: Accepted for the Workshop on Machine Learning for Communications, June 07, 2019, Avignon, France
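As a toy illustration of the scheme described above, the following sketch trains a small one-hidden-layer network on noisy attenuation features to decide inside/outside. The AP positions, path-loss model, region size, and all numeric constants are invented for this example; the single hidden layer stands in for the paper's NN, and the PSO placement stage is omitted.

```python
import math
import random

random.seed(1)

# Toy setup: four hypothetical APs around a 10 m x 10 m "safe" region.
APS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def in_region(p):
    return 0.0 <= p[0] <= 10.0 and 0.0 <= p[1] <= 10.0

def attenuation(p, ap, sigma=1.0):
    """Invented log-distance path loss (dB) plus Gaussian shadowing noise."""
    d = max(math.dist(p, ap), 1.0)
    return 40.0 + 20.0 * math.log10(d) + random.gauss(0.0, sigma)

def random_point():
    """Draw positions half inside, half outside the region (balanced classes)."""
    if random.random() < 0.5:
        return (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
    while True:
        p = (random.uniform(-10.0, 20.0), random.uniform(-10.0, 20.0))
        if not in_region(p):
            return p

def sample(n):
    """Labeled examples: scaled attenuation vector -> 1 (inside) / 0 (outside)."""
    data = []
    for _ in range(n):
        p = random_point()
        x = [(attenuation(p, ap) - 55.0) / 10.0 for ap in APS]  # crude scaling
        data.append((x, 1.0 if in_region(p) else 0.0))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

# One hidden layer of 8 tanh neurons, one sigmoid output neuron.
H = 8
W1 = [[random.gauss(0.0, 0.5) for _ in APS] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0.0, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j]) for j in range(H)]
    return h, sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)

train = sample(600)
lr = 0.05
for _ in range(60):                       # SGD on the cross-entropy loss
    random.shuffle(train)
    for x, y in train:
        h, p = forward(x)
        g = p - y                         # d(cross entropy)/d(output logit)
        b2 -= lr * g
        for j in range(H):
            gh = g * W2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            W2[j] -= lr * g * h[j]
            b1[j] -= lr * gh
            for i, xi in enumerate(x):
                W1[j][i] -= lr * gh * xi

test = sample(300)
accuracy = sum((forward(x)[1] > 0.5) == (y > 0.5) for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Sweeping the 0.5 decision threshold would trace the detector's ROC curve, which is the quantity the paper's AP-placement stage optimizes.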
Machine Learning For In-Region Location Verification In Wireless Networks
In-region location verification (IRLV) aims at verifying whether a user is
inside a region of interest (ROI). In wireless networks, IRLV can exploit the
features of the channel between the user and a set of trusted access points. In
practice, the channel feature statistics are not available, so we resort to
machine learning (ML) solutions for IRLV. We first show that solutions based on
either neural networks (NNs) or support vector machines (SVMs) and typical loss
functions are Neyman-Pearson (N-P)-optimal at learning convergence for
sufficiently complex learning machines and large training datasets. Moreover,
for finite training datasets, ML solutions are more accurate than the N-P test based on
estimated channel statistics. Then, as estimating channel features outside the
ROI may be difficult, we consider one-class classifiers, namely auto-encoder
NNs and one-class SVMs, which, however, are not equivalent to the generalized
likelihood ratio test (GLRT), typically replacing the N-P test in the one-class
problem. Numerical results confirm these findings in realistic wireless
networks, with channel models including path loss, shadowing, and fading.
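To illustrate the one-class setting, in which only in-ROI training samples are available, here is a minimal sketch that flags out-of-region positions by the reconstruction error of a tiny linear 4-2-4 autoencoder. The linear network is a deliberately simplified stand-in for the paper's auto-encoder NNs and one-class SVMs, and every geometric and numeric constant is invented.

```python
import math
import random

random.seed(2)

# Toy geometry: four invented APs around a 10 m x 10 m region of interest (ROI).
APS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def attenuation(p, ap, sigma=1.0):
    """Invented log-distance path loss (dB) plus Gaussian shadowing noise."""
    d = max(math.dist(p, ap), 1.0)
    return 40.0 + 20.0 * math.log10(d) + random.gauss(0.0, sigma)

def features(p):
    return [(attenuation(p, ap) - 55.0) / 10.0 for ap in APS]  # crude scaling

# One-class training set: positions drawn from inside the ROI only.
train = [features((random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)))
         for _ in range(400)]

# Tiny linear autoencoder (4 -> 2 -> 4) trained on squared reconstruction error.
IN, CODE = 4, 2
We = [[random.gauss(0.0, 0.3) for _ in range(IN)] for _ in range(CODE)]
Wd = [[random.gauss(0.0, 0.3) for _ in range(CODE)] for _ in range(IN)]

def reconstruct(x):
    z = [sum(w * xi for w, xi in zip(We[j], x)) for j in range(CODE)]
    return [sum(Wd[i][j] * z[j] for j in range(CODE)) for i in range(IN)], z

lr = 0.02
for _ in range(80):
    for x in train:
        xhat, z = reconstruct(x)
        err = [a - b for a, b in zip(xhat, x)]                 # d(loss)/d(xhat)
        gz = [sum(err[i] * Wd[i][j] for i in range(IN)) for j in range(CODE)]
        for i in range(IN):
            for j in range(CODE):
                Wd[i][j] -= lr * err[i] * z[j]
        for j in range(CODE):
            for k in range(IN):
                We[j][k] -= lr * gz[j] * x[k]

def score(x):
    xhat, _ = reconstruct(x)
    return sum((a - b) ** 2 for a, b in zip(xhat, x))

# Decision threshold: 95th percentile of the training reconstruction errors.
errs = sorted(score(x) for x in train)
thr = errs[int(0.95 * len(errs))]

inside = [score(features((random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))))
          for _ in range(200)]
outside = [score(features((random.uniform(25.0, 40.0), random.uniform(25.0, 40.0))))
           for _ in range(200)]
fpr = sum(s > thr for s in inside) / len(inside)    # false alarms inside the ROI
det = sum(s > thr for s in outside) / len(outside)  # detections far outside
print(f"false alarm rate: {fpr:.2f}, detection rate: {det:.2f}")
```

Positions the autoencoder never saw during training reconstruct poorly, so a simple error threshold separates in-ROI from far-outside traffic without any out-of-region training data.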
Cooperative Authentication in Underwater Acoustic Sensor Networks
With the growing use of underwater acoustic communications (UWAC) for both
industrial and military operations, there is a need to ensure communication
security. A particular challenge is represented by underwater acoustic networks
(UWANs), which are often left unattended over long periods of time. Currently,
due to physical and performance limitations, UWAC packets rarely include
encryption, leaving the UWAN exposed to external attacks faking legitimate
messages. In this paper, we propose a new algorithm for message authentication
in a UWAN setting. We begin by observing that, due to the strong spatial
dependency of the underwater acoustic channel, an attacker can attempt to mimic
the channel associated with the legitimate transmitter only for a small set of
receivers, typically just for a single one. Taking this into account, our
scheme relies on trusted nodes that independently help a sink node in the
authentication process. For each incoming packet, the sink fuses beliefs
evaluated by the trusted nodes to reach an authentication decision. These
beliefs are based on estimated statistical channel parameters, chosen to be the
most sensitive to the transmitter-receiver displacement. Our simulation results
show accurate identification of an attacker's packet. We also report results
from a sea experiment demonstrating the effectiveness of our approach.
Comment: Author version of a paper accepted for publication in the IEEE Transactions on Wireless Communications
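A minimal sketch of the fusion step described above, assuming each trusted node models one estimated channel parameter as Gaussian and the sink sums per-node log-likelihoods before thresholding. The parameter values, the Gaussian model, and the threshold are all invented for illustration; the actual scheme's belief computation may differ.

```python
import math
import random

random.seed(3)

# Hypothetical per-node statistics (mean, std) of one estimated channel
# parameter for the legitimate transmitter, learned from previously
# authenticated packets; all numbers are invented for this example.
NODES = [(5.0, 0.4), (7.5, 0.5), (6.2, 0.3)]

def log_belief(measured, mean, std):
    """Log-likelihood of a node's measurement under the legitimate-channel model."""
    return -0.5 * ((measured - mean) / std) ** 2 - math.log(std * math.sqrt(2.0 * math.pi))

def authenticate(measurements, threshold=-10.0):
    """The sink fuses the trusted nodes' beliefs by summing log-likelihoods."""
    score = sum(log_belief(m, mu, sd) for m, (mu, sd) in zip(measurements, NODES))
    return score > threshold

# Legitimate packet: every node measures a value close to its learned statistics.
legit = [random.gauss(mu, sd) for mu, sd in NODES]

# Attack: the attacker mimics the channel toward the first node only; the
# spatially distinct channels to the remaining nodes reveal the spoofing.
attack = [NODES[0][0]] + [random.gauss(mu + 6.0 * sd, sd) for mu, sd in NODES[1:]]

print(authenticate(legit), authenticate(attack))
```

Because the attacker can match the channel toward at most one receiver, the remaining nodes' strongly negative log-beliefs dominate the fused score and the packet is rejected.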
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range
of compelling applications in both military and civilian fields, where users
enjoy high-rate, low-latency, low-cost, and reliable information services.
Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M)
networks, and so on. This article aims to help readers clarify the motivation
and methodology of the various ML algorithms, and thereby to invoke them for
hitherto unexplored services and scenarios in future wireless networks.
Comment: 46 pages, 22 figures