Location-Verification and Network Planning via Machine Learning Approaches
In-region location verification (IRLV) in wireless networks is the problem of
deciding if user equipment (UE) is transmitting from inside or outside a
specific physical region (e.g., a safe room). The decision process exploits the
features of the channel between the UE and a set of network access points
(APs). We propose a solution based on machine learning (ML) implemented by a
neural network (NN) trained with the channel features (in particular, noisy
attenuation values) collected by the APs for various positions both inside and
outside the specific region. The output is a decision on the UE position
(inside or outside the region). By viewing IRLV as a hypothesis testing
problem, we address the optimal positioning of the APs for minimizing either
the area under the curve (AUC) of the receiver operating characteristic (ROC)
or the cross entropy (CE) between the NN output and ground truth (available
during the training). In order to solve the minimization problem we propose a
two-stage particle swarm optimization (PSO) algorithm. We show that, for
sufficiently long training and an NN with enough neurons, the proposed solution
achieves the performance of the Neyman-Pearson (N-P) lemma.

Comment: Accepted for the Workshop on Machine Learning for Communications,
June 07, 2019, Avignon, France
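The AP-placement objective above (minimizing AUC or CE) is attacked with PSO; the paper proposes a two-stage variant, but the core idea can be illustrated with a minimal single-stage numpy sketch. Everything below is illustrative: the toy quadratic objective stands in for the AUC/CE cost, and the swarm size, inertia, and "ideal AP position" are invented constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle tracks its own
    best position; the swarm shares a global best."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()             # global best position
    g_f = pbest_f.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        if f.min() < g_f:
            g_f = f.min()
            g = x[np.argmin(f)].copy()
    return g, g_f

# Toy stand-in for the AUC/CE cost: squared distance of a candidate AP
# position from a hypothetical ideal placement.
ideal = np.array([1.0, 2.0])
best, best_f = pso(lambda p: np.sum((p - ideal) ** 2))
```

In the actual problem the objective would be the AUC (or CE) obtained by retraining/evaluating the NN for each candidate AP layout, which is far more expensive per evaluation than this toy cost.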
Machine Learning and Location Verification in Vehicular Networks
Location information will play a very important role in emerging wireless
networks such as Intelligent Transportation Systems, 5G, and the Internet of
Things. However, wrong location information can result in poor network
outcomes. It is therefore critical to verify all location information before
further utilization in any network operation. In recent years, a number of
information-theoretic Location Verification Systems (LVSs) have been formulated
in attempts to optimally verify the location information supplied by network
users. Such LVSs, however, are somewhat limited since they rely on knowledge of
a number of channel parameters for their operation. To overcome such
limitations, in this work we introduce a Machine Learning based LVS (ML-LVS).
This new form of LVS can adapt itself to changing environments without knowing
the channel parameters. Here, for the first time, we use real-world data to
show how our ML-LVS can outperform information-theoretic LVSs. We demonstrate
this improved performance within the context of vehicular networks using
Received Signal Strength (RSS) measurements at multiple verifying base
stations. We also demonstrate the validity of the ML-LVS even in scenarios
where a sophisticated adversary optimizes her attack location.

Comment: 5 pages, 3 figures
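As a rough illustration of the idea (not the paper's actual ML-LVS or its real-world dataset), a location verifier can be posed as a binary classifier over the RSS vectors observed at the verifying base stations. In this sketch the RSS means, spreads, and adversary offset are invented placeholders, and a plain logistic regression trained by gradient descent stands in for the learned verifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic RSS vectors (dBm) at 3 verifying base stations. Legitimate
# vehicles transmit near the claimed location; the adversary transmits
# from elsewhere, shifting the mean RSS profile seen at each station.
n = 500
legit = rng.normal(loc=[-60.0, -70.0, -80.0], scale=4.0, size=(n, 3))
spoof = rng.normal(loc=[-75.0, -62.0, -72.0], scale=4.0, size=(n, 3))
X = np.vstack([legit, spoof])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = claim rejected

# Standardize features, then fit logistic regression by gradient descent.
X = (X - X.mean(0)) / X.std(0)
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * (p - y).mean()

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
acc = (pred == y).mean()
```

A key point of the abstract is that such a learned verifier needs no explicit channel parameters (path-loss exponent, shadowing variance, etc.): it adapts to whatever statistics the training RSS data exhibit.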
Machine Learning For In-Region Location Verification In Wireless Networks
In-region location verification (IRLV) aims at verifying whether a user is
inside a region of interest (ROI). In wireless networks, IRLV can exploit the
features of the channel between the user and a set of trusted access points. In
practice, the channel feature statistics are not available, and we resort to
machine learning (ML) solutions for IRLV. We first show that solutions based on
either neural networks (NNs) or support vector machines (SVMs) and typical loss
functions are Neyman-Pearson (N-P)-optimal at learning convergence for
sufficiently complex learning machines and large training datasets. Indeed,
for finite training, ML solutions are more accurate than the N-P test based on
estimated channel statistics. Then, as estimating channel features outside the
ROI may be difficult, we consider one-class classifiers, namely auto-encoder
NNs and one-class SVMs, which, however, are not equivalent to the generalized
likelihood ratio test (GLRT), typically replacing the N-P test in the one-class
problem. Numerical results confirm the analysis in realistic wireless networks,
with channel models including path loss, shadowing, and fading.
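To illustrate the one-class setting, where only in-ROI samples are available at training time, here is a minimal stand-in detector: a Gaussian fit to the in-ROI attenuation features with a Mahalanobis-distance threshold. This is not the auto-encoder NN or one-class SVM studied in the paper, and all feature values below are synthetic; the point is only the decision structure (threshold calibrated on in-ROI data alone, fixing the false-alarm rate).

```python
import numpy as np

rng = np.random.default_rng(2)

# Attenuation features (dB) toward 4 trusted APs. Only in-ROI samples
# are available at training time; these values are placeholders.
train_in = rng.normal(loc=[40.0, 55.0, 60.0, 50.0], scale=3.0, size=(1000, 4))

mu = train_in.mean(0)
cov_inv = np.linalg.inv(np.cov(train_in.T))

def score(x):
    """Squared Mahalanobis distance to the in-ROI training distribution."""
    d = x - mu
    return d @ cov_inv @ d

# Threshold at the 99th percentile of in-ROI scores, i.e. roughly a 1%
# false-alarm rate on legitimate in-ROI traffic.
thr = np.percentile([score(x) for x in train_in], 99)

test_in = rng.normal(loc=[40.0, 55.0, 60.0, 50.0], scale=3.0, size=(500, 4))
test_out = rng.normal(loc=[55.0, 45.0, 48.0, 65.0], scale=3.0, size=(500, 4))
fa = np.mean([score(x) > thr for x in test_in])     # false-alarm rate
md = np.mean([score(x) <= thr for x in test_out])   # missed-detection rate
```

The abstract's caveat applies here too: a one-class rule like this is calibrated without any out-of-ROI data, so unlike the two-class N-P test it cannot trade false alarms against a known alternative distribution.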
Certified Reinforcement Learning with Logic Guidance
This paper proposes the first model-free Reinforcement Learning (RL)
framework to synthesise policies for unknown, continuous-state Markov
Decision Processes (MDPs), such that a given linear temporal property is
satisfied. We convert the given property into a Limit Deterministic Büchi
Automaton (LDBA), namely a finite-state machine expressing the property.
Exploiting the structure of the LDBA, we shape a synchronous reward function
on-the-fly, so that an RL algorithm can synthesise a policy resulting in traces
that probabilistically satisfy the linear temporal property. This probability
(certificate) is also calculated in parallel with policy learning when the
state space of the MDP is finite: as such, the RL algorithm produces a policy
that is certified with respect to the property. Under the assumption of finite
state space, theoretical guarantees are provided on the convergence of the RL
algorithm to an optimal policy, maximising the above probability. We also show
that our method produces "best available" control policies when the logical
property cannot be satisfied. In the general case of a continuous state space,
we propose a neural network architecture for RL and we empirically show that
the algorithm finds satisfying policies, if there exist such policies. The
performance of the proposed framework is evaluated via a set of numerical
examples and benchmarks, where we observe an improvement of one order of
magnitude in the number of iterations required for the policy synthesis,
compared to existing approaches whenever available.

Comment: This article draws from arXiv:1801.08099, arXiv:1809.0782