395 research outputs found
On the Design of a Novel Joint Network-Channel Coding Scheme for the Multiple Access Relay Channel
This paper proposes a novel joint non-binary network-channel code for the
Time-Division Decode-and-Forward Multiple Access Relay Channel (TD-DF-MARC),
where the relay linearly combines -- over a non-binary finite field -- the
coded sequences from the source nodes. A method based on an EXIT chart analysis
is derived for selecting the best coefficients of the linear combination.
Moreover, it is shown that different system setups call for different
coefficients in order to improve performance. This conclusion contrasts with
previous works, where a random selection was considered. Monte Carlo
simulations show that the proposed scheme outperforms, in terms of its gap to
the outage probabilities, previously published joint network-channel coding
approaches. In addition, this gain is achieved by using very short codewords,
which makes the scheme particularly attractive for low-latency applications.
Comment: 28 pages, 9 figures; Submitted to IEEE Journal on Selected Areas in Communications - Special Issue on Theories and Methods for Advanced Wireless Relays, 201
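To make the relay operation concrete, the following minimal Python sketch illustrates a relay linearly combining two coded sequences over a non-binary finite field. It is an illustration only: it assumes the prime field GF(5) and arbitrary coefficients, whereas the paper selects the field and the combination coefficients through an EXIT chart analysis.

```python
import numpy as np

# Toy sketch (not the paper's code): the relay forms a symbol-wise linear
# combination of the source codewords over a non-binary finite field.
# For a prime field GF(q), field arithmetic is simply arithmetic modulo q.
Q = 5  # field order, chosen here only for illustration

def relay_combine(c1, c2, a1, a2, q=Q):
    """Element-wise combination a1*c1 + a2*c2 over GF(q), q prime."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    return (a1 * c1 + a2 * c2) % q

# Example: two length-8 coded sequences received from the source nodes.
c1 = np.array([0, 1, 2, 3, 4, 0, 1, 2])
c2 = np.array([4, 3, 2, 1, 0, 4, 3, 2])
c_relay = relay_combine(c1, c2, a1=2, a2=3)
print(c_relay)  # symbols forwarded by the relay in its TD-DF-MARC time slot
```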
Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples
The last decade has witnessed the proliferation of Deep Learning models in
many applications, achieving unrivaled levels of predictive performance.
Unfortunately, the black-box nature of Deep Learning models has posed
unanswered questions about what they learn from data. Certain application
scenarios have highlighted the importance of assessing the bounds under which
Deep Learning models operate, a problem addressed by using assorted approaches
aimed at audiences from different domains. However, as the focus of the
application is placed more on non-expert users, it becomes essential to
provide the means for those users to trust the model, just as a human gets
familiar with a system or process: by understanding the hypothetical
circumstances under which it fails. This is indeed the cornerstone of this
research work: to undertake an adversarial analysis of a Deep Learning model.
The proposed framework constructs counterfactual examples by ensuring their
plausibility, i.e. there is a reasonable probability that a human could
generate them without resorting to a computer program. Therefore, this work
must be regarded as a valuable auditing exercise of the usable bounds a
certain model is constrained within, thereby allowing for a much greater
understanding of the capabilities and pitfalls of a model used in a real
application. To this end, a Generative Adversarial Network (GAN) and
multi-objective heuristics are used to furnish a plausible attack on the
audited model, efficiently trading off among the confusion of this model and
the intensity and plausibility of the generated counterfactual. Its utility
is showcased within a human face classification task, unveiling the enormous
potential of the proposed framework.
Comment: 7 pages, 5 figures. Accepted for presentation at WCCI 202
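The abstract names three competing objectives. The sketch below shows one plausible way to express them for a multi-objective search over GAN latent codes; the callables (generator, classifier, discriminator) are hypothetical stand-ins, not the paper's API.

```python
import numpy as np

def counterfactual_objectives(z, z0, generator, classifier, discriminator, target_class):
    """Return the three objectives a multi-objective heuristic would trade off.

    z, z0: candidate and original GAN latent codes (numpy arrays).
    """
    x = generator(z)                     # candidate counterfactual image
    probs = classifier(x)                # class probabilities of the audited model
    confusion = probs[target_class]      # maximize: confidence in the (wrong) target class
    intensity = np.linalg.norm(z - z0)   # minimize: how far the counterfactual strays
    plausibility = discriminator(x)      # maximize: realism score from the GAN critic
    return confusion, intensity, plausibility
```

A multi-objective heuristic (for instance an NSGA-II-style search over z) would then look for Pareto-optimal trade-offs among these three quantities; the abstract does not specify the exact solver, so this is only a sketch of the objective structure.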
Beamwidth Optimization in Millimeter Wave Small Cell Networks with Relay Nodes: A Swarm Intelligence Approach
Millimeter wave (mmWave) communications have been postulated as one of the
most disruptive technologies for future 5G systems. Among mmWave bands, the
60-GHz radio technology is especially suited for ultradense small cells and
mobile data offloading scenarios. Many challenges remain to be addressed in
mmWave communications; among them, deafness (the misalignment between
transmitter and receiver beams) and interference management are the most
prominent. In recent years, scenarios assuming negligible interference in
mmWave resource allocation have been rather common in the literature.
Nevertheless, many open issues still need to be addressed, such as the
applicability of the noise-limited regime to mmWave. Furthermore, in mmWave
the beam-steering mechanism imposes a forced silence period, during which no
data can be conveyed, that should not be neglected in throughput/delay
calculations. This paper introduces mmWave-enabled Small Cell Networks (SCNs)
with relaying capabilities, in which overall system throughput is maximized
through a coordinated, meta-heuristically optimized beamwidth/alignment-delay
approach. Simulations have been carried out for three transmitter densities
under TDMA and naive 'all-on' scheduling, yielding average per-node
throughput increments of up to 248%. The paper further elaborates on the
counterbalancing impact of alignment delay and time-multiplexing strategies
by illustrating how the regime transition foreseen when increasing the number
of transmitters in a fixed-node-size SCN operating in downlink fades out
under a poor choice of scheduling strategy.
Comment: 6 pages, 4 figures, European Wireless 2016 Conference
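The beamwidth/alignment-delay trade-off mentioned above can be sketched with a commonly used mmWave abstraction (not necessarily the paper's exact model): narrower beams yield higher directivity gain but require a longer exhaustive beam search, which eats into the data phase. All constants below are assumptions for illustration.

```python
import numpy as np

def effective_throughput(theta_tx, theta_rx, slot_s=0.01, pilot_s=10e-6,
                         sector=np.pi / 2, snr_ref=1.0):
    """Per-slot spectral efficiency after paying the alignment overhead (a sketch)."""
    # Exhaustive sweep over transmit/receive beam pairs within the sector.
    t_align = (sector / theta_tx) * (sector / theta_rx) * pilot_s
    if t_align >= slot_s:
        return 0.0                       # the whole slot is spent aligning
    # Idealized directivity gain inversely proportional to the beamwidths.
    gain = (2 * np.pi / theta_tx) * (2 * np.pi / theta_rx)
    rate = np.log2(1.0 + snr_ref * gain)
    return (1.0 - t_align / slot_s) * rate

# A swarm/meta-heuristic would search (theta_tx, theta_rx) per link; here a
# simple grid scan is enough to expose the non-monotonic behaviour.
widths = np.radians([5, 10, 20, 30, 45])
best = max((effective_throughput(t, r), t, r) for t in widths for r in widths)
print(best)
```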
On the Transferability of Knowledge among Vehicle Routing Problems by using Cellular Evolutionary Multitasking
Multitasking optimization is a recently introduced paradigm, focused on the
simultaneous solving of multiple optimization problem instances (tasks). The
goal of multitasking environments is to dynamically exploit existing
complementarities and synergies among tasks, helping each other through the
transfer of genetic material. More concretely, Evolutionary Multitasking (EM)
refers to the resolution of multitasking scenarios using concepts inherited
from Evolutionary Computation. EM approaches such as the well-known
Multifactorial Evolutionary Algorithm (MFEA) have lately gained notable
research momentum for tackling multiple optimization problems. This work is
focused on the application of the recently proposed Multifactorial Cellular
Genetic Algorithm (MFCGA) to the well-known Capacitated Vehicle Routing
Problem (CVRP). In total, 11 different multitasking setups have been built
using 12 datasets. The contribution of this research is twofold. On the one
hand, it is the first application of the MFCGA to the Vehicle Routing Problem
family of problems. On the other hand, equally interesting is the second
contribution, which focuses on the quantitative analysis of the positive
genetic transferability among the problem instances. To do that, we provide
an empirical demonstration of the synergies arising among the different
optimization tasks.
Comment: 8 pages, 1 figure, paper accepted for presentation at the 23rd IEEE International Conference on Intelligent Transportation Systems 2020 (IEEE ITSC 2020)
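The two ingredients named in the abstract, a cellular GA and the multifactorial notion of a skill factor, can be sketched as follows. This is an illustration only, not the authors' code, and the grid size and cost-function interface are assumptions.

```python
GRID_W, GRID_H = 10, 10          # toroidal population grid (assumed size)

def neighbors(i, j):
    """Von Neumann neighborhood on a toroidal grid (the cellular interaction model)."""
    return [((i - 1) % GRID_H, j), ((i + 1) % GRID_H, j),
            (i, (j - 1) % GRID_W), (i, (j + 1) % GRID_W)]

def skill_factor(route, task_costs):
    """Index of the CVRP instance for which this route is cheapest.

    task_costs is a list of cost functions, one per instance; in the
    multitasking setting every individual encodes a route evaluable on all tasks.
    """
    return min(range(len(task_costs)), key=lambda k: task_costs[k](route))

# During reproduction, the parent at cell (i, j) recombines with one of its
# neighbors; when their skill factors differ, the offspring carries genetic
# material across tasks, which is the knowledge transfer the paper quantifies.
```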
Stream Learning in Energy IoT Systems: A Case Study in Combined Cycle Power Plants
The prediction of electrical power produced in combined cycle power plants is a key challenge in the electrical power and energy systems field. This power production can vary depending on environmental variables, such as temperature, pressure, and humidity. Thus, the business problem is how to predict the power production as a function of these environmental conditions in order to maximize the profit. The research community has solved this problem by applying Machine Learning techniques, managing to reduce the computational and time costs in comparison with the traditional thermodynamical analysis. Until now, this challenge has been tackled from a batch learning perspective, in which data is assumed to be at rest and models do not continuously integrate new information once constructed. We present an approach closer to the Big Data and Internet of Things paradigms, in which data are continuously arriving and models learn incrementally, achieving significant enhancements in terms of data processing (time, memory and computational costs) while obtaining competitive performances. This work compares and examines the hourly electrical power prediction of several streaming regressors, and discusses the best technique, in terms of processing time and predictive performance, to be applied in this streaming scenario.
This work has been partially supported by the EU project iDev40. This project has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 783163. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Germany, Belgium, Italy, Spain, Romania. It has also been supported by the Basque Government (Spain) through the project VIRTUAL (KK-2018/00096), and by Ministerio de Economía y Competitividad of Spain (Grant Ref. TIN2017-85887-C2-2-P).
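A minimal sketch of the stream-learning setup described above is given below. It is not the paper's pipeline: it uses scikit-learn's partial_fit interface with synthetic readings standing in for the plant's ambient temperature, pressure, humidity and exhaust vacuum, and evaluates prequentially (test-then-train).

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scaler = StandardScaler()
model = SGDRegressor(learning_rate="constant", eta0=0.01)

abs_errors = []
for t in range(1000):                       # simulated stream of hourly readings
    x = rng.normal(size=(1, 4))             # stand-in for [AT, AP, RH, V]
    y = np.array([3.0 * x[0, 0] - 2.0 * x[0, 1] + rng.normal(0.1)])
    scaler.partial_fit(x)                   # feature scaling is also incremental
    xs = scaler.transform(x)
    if t > 0:                               # predict before updating the model
        abs_errors.append(abs(model.predict(xs)[0] - y[0]))
    model.partial_fit(xs, y)

print("prequential MAE:", np.mean(abs_errors))
```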
A Comparison of Modelling Approaches for the Long-term Estimation of Origin Destination Matrices in Bike Sharing Systems
Micro-mobility services have gained popularity in recent years, becoming a relevant part of the transportation
network in a plethora of cities. This has given rise to a fruitful research area, ranging from the impact of these
transportation modes and their relationship with preexisting ones to the different ways of estimating the demand for
such services in order to guarantee the quality of service. Within this domain, docked bike sharing systems constitute
an interesting surrogate for understanding the mobility of the whole city, as origin-destination matrices can be obtained
straightforwardly from the information available at the docking stations. This work elaborates on the characterization of such
origin-destination matrices, providing an essential set of insights on how to estimate their behavior in the long term. To do so, the
main non-mobility features that affect mobility are studied and used to train different machine learning algorithms to produce
viable mobility patterns. The case study performed over real data captured by the bike sharing system of Bilbao (Spain)
reveals that, by virtue of a properly selected set of features and the adoption of specialized modeling algorithms, reliable
long-term estimations of such origin-destination matrices can be effectively achieved.
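The modelling idea outlined above can be sketched as a regression over (day, origin, destination) tuples. The feature set, data shapes and choice of regressor below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

N_STATIONS, N_DAYS = 30, 365
rng = np.random.default_rng(42)

# Hypothetical non-mobility features: day of week, month, holiday flag,
# temperature, rainfall, plus origin and destination station identifiers.
n_samples = N_DAYS * N_STATIONS * N_STATIONS // 50   # subsample for the sketch
X = np.column_stack([
    rng.integers(0, 7, n_samples),           # day of week
    rng.integers(1, 13, n_samples),          # month
    rng.integers(0, 2, n_samples),           # holiday flag
    rng.normal(15, 8, n_samples),            # temperature (Celsius)
    rng.exponential(1.0, n_samples),         # rainfall (mm)
    rng.integers(0, N_STATIONS, n_samples),  # origin station id
    rng.integers(0, N_STATIONS, n_samples),  # destination station id
])
y = rng.poisson(3, n_samples)                # synthetic trip counts (OD entries)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Predicting every (origin, destination) pair for a given day's conditions
# reconstructs a long-term estimate of that day's full OD matrix.
```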
- …