Distributed Hybrid Simulation of the Internet of Things and Smart Territories
This paper deals with the use of hybrid simulation to build and compose heterogeneous simulation scenarios that can be effectively exploited to model and represent the Internet of Things (IoT). Hybrid simulation is a methodology
that combines multiple modalities of modeling/simulation. Complex scenarios are
decomposed into simpler ones, each one being simulated through a specific
simulation strategy. All these simulation building blocks are then synchronized
and coordinated. This methodology is well suited to representing IoT setups, which are typically demanding due to the heterogeneity of scenarios arising from the massive deployment of an enormous number of sensors and devices. We present a use case concerned with the distributed simulation of
smart territories, a novel view of decentralized geographical spaces that,
thanks to the use of IoT, builds ICT services to manage resources in a way that
is sustainable and not harmful to the environment. Three different simulation models are combined: an adaptive agent-based parallel and distributed simulator, an OMNeT++-based discrete event simulator, and a script-language simulator based on MATLAB. Results from a performance analysis confirm the viability of using hybrid simulation to model complex IoT scenarios.
Comment: arXiv admin note: substantial text overlap with arXiv:1605.0487
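To make the coordination idea concrete, here is a minimal sketch (Python; all component and function names are hypothetical, not taken from the paper) of a conservative lockstep coordinator that advances heterogeneous simulation building blocks and exchanges their events at fixed synchronization barriers.

```python
# Minimal hybrid-simulation coordinator sketch (hypothetical, not the
# authors' code): components advance in lockstep and exchange events
# at each synchronization barrier.

class Component:
    """A simulation building block that advances in fixed time steps."""
    def __init__(self, name, step_fn):
        self.name = name
        self.step_fn = step_fn   # advances local state to a target time
        self.local_time = 0.0

    def advance_to(self, t, inbox):
        # step_fn returns a list of (destination_name, event) tuples.
        events = self.step_fn(self.local_time, t, inbox)
        self.local_time = t
        return events

def run_hybrid(components, horizon, dt):
    """Conservative coordinator: all components advance together,
    then events produced in one step are delivered before the next."""
    inbox = {c.name: [] for c in components}
    t = 0.0
    while t < horizon:
        t = min(t + dt, horizon)
        outgoing = []
        for c in components:
            outgoing += c.advance_to(t, inbox[c.name])
            inbox[c.name] = []
        for dst, event in outgoing:   # route events for the next step
            inbox[dst].append(event)
```

In the paper's setup, the three building blocks could correspond to the agent-based PADS simulator, the OMNeT++ network model, and the MATLAB script model, each wrapped behind the same stepping interface.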
Modeling the Internet of Things: a simulation perspective
This paper deals with the problem of properly simulating the Internet of Things (IoT). Simulating the IoT allows one to evaluate strategies for deploying smart services over different kinds of territories. However, the heterogeneity of scenarios seriously complicates this task and calls for sophisticated modeling and simulation techniques. We discuss novel approaches for the provision of scalable simulation scenarios that enable the
real-time execution of massively populated IoT environments. Attention is given
to novel hybrid and multi-level simulation techniques that, when combined with
agent-based, adaptive Parallel and Distributed Simulation (PADS) approaches,
can provide the means to perform highly detailed simulations on demand. To support
this claim, we detail a use case concerned with the simulation of vehicular
transportation systems.
Comment: Proceedings of the IEEE 2017 International Conference on High Performance Computing and Simulation (HPCS 2017)
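As a rough sketch of the multi-level idea applied to vehicular transportation (Python; the models, names, and thresholds are hypothetical, not the authors' simulator), a vehicle can be stepped with a cheap coarse model by default and promoted to a detailed model only inside a region of interest:

```python
# Minimal multi-level simulation sketch (hypothetical, not the authors'
# code): vehicles use a cheap coarse model by default and a detailed
# car-following model only inside a region of interest (ROI).

from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float   # 1-D position along a road, in meters
    speed: float      # meters per second

ROI = (1000.0, 1500.0)   # run the detailed model only on this stretch

def coarse_step(v, dt):
    # Cheap model: constant speed.
    v.position += v.speed * dt

def detailed_step(v, dt, leader=None):
    # Costlier model: decelerate when too close to the vehicle ahead.
    if leader is not None and leader.position - v.position < 50.0:
        v.speed = max(0.0, v.speed - 1.5 * dt)
    v.position += v.speed * dt

def multilevel_step(vehicles, dt):
    # Frontmost vehicle first, so ordered[i - 1] is the leader of i.
    ordered = sorted(vehicles, key=lambda v: v.position, reverse=True)
    for i, v in enumerate(ordered):
        if ROI[0] <= v.position <= ROI[1]:
            leader = ordered[i - 1] if i > 0 else None
            detailed_step(v, dt, leader)
        else:
            coarse_step(v, dt)
```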
RWTH ASR Systems for LibriSpeech: Hybrid vs Attention -- w/o Data Augmentation
We present state-of-the-art automatic speech recognition (ASR) systems employing a standard hybrid DNN/HMM architecture, compared to an attention-based encoder-decoder design, for the LibriSpeech task. Detailed descriptions of the system development, including model design, pretraining schemes, training schedules, and optimization approaches, are provided for both system
architectures. Both hybrid DNN/HMM and attention-based systems employ
bi-directional LSTMs for acoustic modeling/encoding. For language modeling, we
employ both LSTM- and Transformer-based architectures. All our systems are built using RWTH's open-source toolkits RASR and RETURNN. To the best of the authors' knowledge, the results obtained when training on the full LibriSpeech training set are currently the best published, both for the hybrid DNN/HMM and the attention-based systems. Our single hybrid system even outperforms previous
results obtained from combining eight single systems. Our comparison shows that
on the LibriSpeech 960h task, the hybrid DNN/HMM system outperforms the
attention-based system by 15% relative on the clean and 40% relative on the
other test sets in terms of word error rate. Moreover, experiments on a reduced 100h subset of the LibriSpeech training corpus show an even more pronounced margin between the hybrid DNN/HMM and attention-based architectures.
Comment: Proceedings of INTERSPEECH 201
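For readers unfamiliar with the shared encoder choice (bi-directional LSTMs for acoustic modeling/encoding), the following PyTorch sketch shows a minimal bi-LSTM acoustic encoder; the layer sizes and output dimension are illustrative assumptions, not the paper's RASR/RETURNN configuration.

```python
# Minimal bi-directional LSTM acoustic encoder (illustrative sizes only;
# not the configuration from the paper).
import torch
import torch.nn as nn

class BLSTMEncoder(nn.Module):
    def __init__(self, feat_dim=40, hidden=512, layers=4, targets=5000):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                             bidirectional=True, batch_first=True)
        # Per-frame output scores: e.g. over tied HMM states in a hybrid
        # system, or serving as the encoder feeding an attention decoder.
        self.out = nn.Linear(2 * hidden, targets)

    def forward(self, feats):           # feats: (batch, time, feat_dim)
        enc, _ = self.blstm(feats)      # (batch, time, 2 * hidden)
        return self.out(enc)            # per-frame logits

# Example: a batch of 8 utterances, 300 frames of 40-dim features each.
logits = BLSTMEncoder()(torch.randn(8, 300, 40))
```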
A Survey on Compiler Autotuning using Machine Learning
Since the mid-1990s, researchers have been trying to use machine-learning-based approaches to solve a number of different compiler optimization problems.
These techniques primarily enhance the quality of the obtained results and,
more importantly, make it feasible to tackle two main compiler optimization
problems: optimization selection (choosing which optimizations to apply) and
phase-ordering (choosing the order of applying optimizations). The compiler
optimization space continues to grow due to the advancement of applications,
increasing number of compiler optimizations, and new target architectures.
Generic optimization passes in compilers cannot fully leverage newly introduced
optimizations and, therefore, cannot keep up with the pace of increasing
options. This survey summarizes and classifies the recent advances in using
machine learning for the compiler optimization field, particularly on the two
major problems of (1) selecting the best optimizations and (2) the
phase-ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, a fine-grained classification of the different approaches, and finally the influential papers of the field.
Comment: version 5.0 (updated September 2018). Preprint version for our accepted journal paper at ACM CSUR 2018 (42 pages). This survey will be updated quarterly here (send me your newly published papers to be added in a subsequent version). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018.
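To illustrate the optimization-selection problem in code, here is a minimal sketch (Python with scikit-learn; the program features, flag sequences, and measurements are hypothetical, not taken from the survey) that learns to predict the best optimization sequence from simple program features.

```python
# Minimal optimization-selection sketch (hypothetical features, flags,
# and measurements; not code from the survey): learn a mapping from
# program features to the best-performing optimization sequence.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy program features: [num_loops, avg_loop_depth, num_branches, num_calls]
X = np.array([
    [12, 3, 40,  5],
    [ 2, 1, 10, 30],
    [ 8, 2, 25, 12],
    [ 1, 1,  5, 50],
])
# Label = index of the sequence that ran fastest for each program in an
# offline autotuning phase (toy measurements for illustration).
SEQUENCES = ["-O2", "-O3", "-O3 -funroll-loops"]
y = np.array([2, 0, 2, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_program = np.array([[10, 3, 35, 8]])         # features of an unseen program
print(SEQUENCES[model.predict(new_program)[0]])  # predicted best sequence
```

A phase-ordering variant of the same idea would predict a permutation of passes rather than a single label, which is what makes that second problem substantially harder.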