Enabling stream processing for people-centric IoT based on the fog computing paradigm
The world of machine-to-machine (M2M) communication is gradually moving from vertical, single-purpose solutions to multi-purpose, collaborative applications interacting across industry verticals, organizations, and people: a world of the Internet of Things (IoT). The dominant approach for delivering IoT applications relies on cloud-based IoT platforms that collect all the data generated by the sensing elements and centrally process the information to create real business value. In this paper, we present a system that follows the Fog Computing paradigm, in which the sensor resources, as well as the intermediate layers between embedded devices and cloud computing datacenters, participate by providing computational, storage, and control capabilities. We discuss the design aspects of our system and present a pilot deployment for evaluating its performance in a real-world environment. Our findings indicate that Fog Computing can address the ever-increasing amount of data inherent in an IoT world through effective communication among all elements of the architecture.
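To make the fog idea above concrete, here is a minimal sketch (not the paper's system; all names are hypothetical) of an edge node that aggregates raw sensor readings locally and forwards only compact summaries toward the cloud, reducing upstream data volume:

```python
# Hypothetical sketch: a fog/edge node summarizes windows of raw sensor
# readings into (min, mean, max) triples, so the cloud receives a few
# summaries instead of every raw sample.
from statistics import mean

def edge_aggregate(readings, window=10):
    """Summarize each window of raw readings into (min, mean, max)."""
    summaries = []
    for i in range(0, len(readings), window):
        w = readings[i:i + window]
        summaries.append((min(w), mean(w), max(w)))
    return summaries

raw = [20.0 + 0.1 * i for i in range(30)]   # 30 raw samples
print(edge_aggregate(raw))                  # 3 summaries instead of 30 values
```

The point of the sketch is only the data-reduction pattern: computation happens where the data is produced, and only distilled information crosses the network.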
Information Centric Networking in the IoT: Experiments with NDN in the Wild
This paper explores the feasibility, advantages, and challenges of an
ICN-based approach in the Internet of Things. We report on the first NDN
experiments in a life-size IoT deployment, spread over tens of rooms on several
floors of a building. Based on the insights gained with these experiments, the
paper analyses the shortcomings of CCN applied to IoT. Several interoperable
CCN enhancements are then proposed and evaluated. These significantly decrease
control traffic (i.e., Interest messages) and leverage the data path and caching to
match IoT requirements in terms of energy and bandwidth constraints. Our
optimizations increase content availability for IoT nodes with
intermittent activity. This paper also provides the first experimental
comparison of CCN with the common IoT standards 6LoWPAN/RPL/UDP.
Comment: 10 pages, 10 figures and tables, ACM ICN-2014 conference
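The caching mechanism that abstract credits for cutting Interest traffic can be illustrated with a toy sketch (hypothetical names, not the paper's implementation): an NDN-style node answers repeated Interests for the same named content from its content store instead of forwarding them upstream.

```python
# Toy sketch of an NDN content store: repeated requests for the same
# content name are served from the local cache, so only the first
# Interest is forwarded upstream.
class ContentStore:
    def __init__(self):
        self.store = {}
        self.forwarded = 0  # Interests sent upstream

    def request(self, name, fetch_upstream):
        if name not in self.store:       # cache miss: forward the Interest
            self.forwarded += 1
            self.store[name] = fetch_upstream(name)
        return self.store[name]          # cache hit: answer locally

cs = ContentStore()
for _ in range(5):
    cs.request("/building/room1/temp", lambda name: b"21.5C")
print(cs.forwarded)  # 1 upstream Interest instead of 5
```

In a constrained IoT deployment, each suppressed upstream Interest saves radio transmissions, which is exactly the energy/bandwidth benefit the abstract describes.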
Remote Cell Growth Sensing Using Self-Sustained Bio-Oscillations
A smart sensor system for real-time supervision of cell cultures is proposed, allowing for a significant reduction in the human effort applied to this type of assay. The approach converts the cell culture under test into a suitable “biological” oscillator. The system enables the remote acquisition and management of the “biological” oscillation signals through a secure web interface. The indirectly observed biological properties are cell growth and cell number, which are straightforwardly related to the measured bio-oscillation signal parameters, i.e., frequency and amplitude. The sensor extracts this information without complex acquisition and measurement circuitry, taking advantage of the microcontroller features. A discrete prototype for sensing and remote monitoring is presented along with the experimental results obtained from the performed measurements, achieving the expected performance and outcomes.
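Since the abstract says cell growth is read off the oscillation's frequency and amplitude, a minimal sketch of the signal-processing side (not the authors' circuit; the zero-crossing method and all names are assumptions for illustration) might estimate frequency from a sampled bio-oscillation:

```python
# Illustrative sketch: estimate the frequency of a sampled oscillation
# signal by counting rising zero crossings over the capture window.
import math

def oscillation_frequency(samples, sample_rate_hz):
    """Estimate frequency (Hz) from the number of rising zero crossings."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0.0 <= b
    )
    duration_s = len(samples) / sample_rate_hz
    return crossings / duration_s

t = [i / 1000.0 for i in range(1000)]                       # 1 s at 1 kHz
sig = [math.sin(2 * math.pi * 5.0 * x + 0.5) for x in t]    # 5 Hz test tone
print(oscillation_frequency(sig, 1000.0))                   # estimates 5.0
```

A simple counting scheme like this is cheap enough to run on a microcontroller, consistent with the abstract's claim that no complex measurement circuitry is needed.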
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation, and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning, and deep
learning. Furthermore, we investigate their employment in compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M)
networks, and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
Comment: 46 pages, 22 figures
Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability
Internet-of-Things (IoT) envisions an intelligent infrastructure of networked
smart devices offering task-specific monitoring and control services. The
unique features of IoT include extreme heterogeneity, massive number of
devices, and unpredictable dynamics partially due to human interaction. These
call for foundational innovations in network design and management. Ideally, it
should allow efficient adaptation to changing environments, and low-cost
implementation scalable to massive number of devices, subject to stringent
latency constraints. To this end, the overarching goal of this paper is to
outline a unified framework for online learning and management policies in IoT
through joint advances in communication, networking, learning, and
optimization. From the network architecture vantage point, the unified
framework leverages a promising fog architecture that enables smart devices to
have proximity access to cloud functionalities at the network edge, along the
cloud-to-things continuum. From the algorithmic perspective, key innovations
target online approaches adaptive to different degrees of nonstationarity in
IoT dynamics, and their scalable model-free implementation under limited
feedback that motivates blind or bandit approaches. The proposed framework
aspires to offer a stepping stone that leads to systematic designs and analysis
of task-specific learning and management schemes for IoT, along with a host of
new research directions to build on.
Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on
Adaptive and Scalable Communication Networks
A resource management scheme for multi-user GFDM with adaptive modulation in frequency selective fading channels
The topic is "Low-latency communication for machine-type communication in LTE-A" and need to be specified in more detail.This final project focus on designing and evaluating a resource management scheme for a multi-user generalized frequency division multiplexing (GFDM) system, when a frequency selective fading channel and adaptive modulation is used. GFDM with adaptive subcarrier, sub-symbol and power allocation are considered. Assuming that the transmitter has a perfect knowledge of the instantaneous channel gains for all users, I propose a multi-user GFDM subcarrier, sub-symbol and power allocation algorithm to minimize the total transmit power. This work analyzes the performance of using a specific set of parameters for aligning GFDM with long term evolution (LTE) grid. The results show that the performance of the proposed algorithm using GFDM is closer to the performance of using OFDM and outperforms multiuser GFDM systems with static frequency division multiple access (FDMA) techniques which employ fixed subcarrier allocation schemes. The advantage between GFDM and OFDM is that the latency of the system can be reduced by a factor of 15 if independent demodulation is considered.El objetivo de este proyecto final es el de diseñar y evaluar un esquema para administrar los recursos de un sistema multi-usuario donde se utiliza generalized frequency division multiplexing (GFDM), cuando el canal es de frequencia de desvanecimiento selectivo y se utiliza modulación adaptiva. Consideramos un sistema GFDM con subportadora, sub-símbolo i asignación de potencia adaptiva. Asumiendo que el transmisor conoce perfectamente el estado del canal para todos los usuarios, propongo un algoritmo que asigna los recursos de forma que la potencia total de transmisión es mínima. Este trabajo analiza la eficiencia de utilizar un grupo de parámetros concretos para alinear el sistema GFDM con el sistema de LTE. 
Los resultados muestran que el comportamiento del algoritmo en GFDM es muy similar al de OFDM, pero mucho mayor que cuando se compara con sistemas de asignación de recursos estáticos.L’objectiu d’aquest projecte final es dissenyar i avaluar un esquema per administrar els recursos per a un sistema multi-usuari fent servir generalized frequency division multiplexing (GFDM), quan el canal es de freqüència esvaniment selectiu i es fa servir modulació adaptativa. Considerem un sistema GFDM amb subportadora, sub-símbol i assignació de potencia adaptativa. Assumint que el transmissor coneix perfectament l’estat del canal per tots els usuaris, proposo un algoritme que assigna els recursos de forma que la potencia total de transmissió es la mínima. Aquest treball analitza l’eficiència de fer servir un grup de paràmetres concrets per tal d’alinear el sistema GFDM amb el sistema de LTE. Els resultats mostren que el comportament de l’algoritme en GFDM es molt similar al de OFDM i que millora bastant els resultats quan el comparem amb sistemes d’assignament de recursos estàtics
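The power-minimizing allocation described in that abstract can be sketched with a simple greedy rule (an illustrative simplification, not the thesis' exact algorithm; it ignores sub-symbols and per-user rate constraints): assign each subcarrier to the user with the best instantaneous channel gain, assuming the power needed to reach a target receive quality scales as 1/gain.

```python
# Greedy sketch of channel-aware subcarrier allocation: each subcarrier
# goes to the user whose channel gain on it is largest, which minimizes
# the per-subcarrier transmit power under the assumed 1/gain power model.
def allocate_subcarriers(channel_gains, target=1.0):
    """channel_gains[u][k]: gain of user u on subcarrier k.
    Returns (per-subcarrier user assignment, total transmit power)."""
    n_users = len(channel_gains)
    n_subcarriers = len(channel_gains[0])
    assignment, total_power = [], 0.0
    for k in range(n_subcarriers):
        best_user = max(range(n_users),
                        key=lambda u: channel_gains[u][k])
        assignment.append(best_user)
        total_power += target / channel_gains[best_user][k]
    return assignment, total_power

gains = [[0.9, 0.2, 0.5],   # user 0's gains on subcarriers 0..2
         [0.3, 0.8, 0.4]]   # user 1's gains
print(allocate_subcarriers(gains))  # assignment [0, 1, 0]
```

A static FDMA scheme with fixed subcarrier ownership cannot exploit these per-subcarrier gain differences, which is the intuition behind the abstract's reported advantage of adaptive allocation.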