Reliability for Emergency Applications in Internet of Things
This paper addresses the Internet of Things (IoT) paradigm, which is gaining substantial ground in modern wireless telecommunications. The IoT describes a vision where heterogeneous objects such as computers, sensors, Radio-Frequency IDentification (RFID) tags, and mobile phones are able to communicate and cooperate efficiently to achieve common goals thanks to a common IP addressing scheme. This paper focuses on the reliability of emergency applications under IoT technology. The success of these applications is contingent upon the delivery of high-priority events from many scattered objects to one or more objects without packet loss. Thus, the network has to be self-adaptive and resilient to errors by providing efficient mechanisms for information distribution, especially in the multi-hop scenario. As a future perspective, we propose a lightweight and energy-efficient joint mechanism, called AJIA (Adaptive Joint protocol based on Implicit ACK), for packet loss recovery and route quality evaluation in the IoT. In this protocol, we use the overhearing feature characterizing wireless channels as an implicit ACK mechanism. In addition, the protocol allows for an adaptive selection of the routing path based on the link quality.
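The implicit-ACK idea described in this abstract can be sketched as follows: a sender treats overhearing the next hop forwarding its packet as confirmation of delivery, retransmitting only when no forwarding is overheard, and folds each observation into a per-neighbour link-quality estimate used for path selection. This is a minimal illustrative sketch, not the authors' AJIA implementation; the node names, the smoothing factor, and the `link_quality` bookkeeping are assumptions.

```python
class Node:
    """AJIA-style sender sketch: overhearing the next hop's forward
    of our packet acts as an implicit ACK (no explicit ACK frame)."""

    def __init__(self, name):
        self.name = name
        self.link_quality = {}   # next-hop -> smoothed delivery ratio

    def send(self, packet, next_hop, overheard_forward, alpha=0.3):
        # overheard_forward: True if we overheard next_hop retransmitting our packet
        q = self.link_quality.get(next_hop, 1.0)
        # exponential moving average of observed delivery success
        self.link_quality[next_hop] = (1 - alpha) * q + alpha * (1.0 if overheard_forward else 0.0)
        if overheard_forward:
            return "delivered"    # implicit ACK received via overhearing
        return "retransmit"       # no forward overheard: assume loss, recover

    def best_next_hop(self):
        # adaptive path selection: prefer the neighbour with the best observed quality
        return max(self.link_quality, key=self.link_quality.get)

n = Node("sensor-1")
n.send("pkt", "relay-A", overheard_forward=True)
n.send("pkt", "relay-B", overheard_forward=False)
assert n.best_next_hop() == "relay-A"
```

The same overheard frame thus serves double duty: loss recovery (retransmit decision) and route quality evaluation (link-quality update), which is what makes the mechanism lightweight for energy-constrained objects.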
Tokens Shuffling Approach for Privacy, Security, and Reliability in IoHT under a Pandemic
Privacy and security are unavoidable challenges in the future of smart health services and
systems. Several approaches for preserving privacy have been provided in the Internet of Health
Things (IoHT) applications. However, with the emergence of COVID-19, the healthcare centers
needed to track, collect, and share more critical data such as the location of those infected and
monitor social distancing. Unfortunately, the traditional privacy-preserving approaches failed to
deal effectively with emergency circumstances. In the proposed research, we introduce a Tokens
Shuffling Approach (TSA) to preserve collected data’s privacy, security, and reliability during the
pandemic without the need to trust a third party or service providers. TSA relies on a smartphone
application and the proposed protocol to collect and share data reliably and safely, and on a
proposed algorithm that temporarily swaps identities between cooperating users and then hides
those identities by employing fog nodes. A fog node manages the cooperation process between
users in a specific area to improve the system's performance. Finally, TSA uses blockchain to
preserve data reliability, ensure data integrity, and facilitate access. The results show that TSA
outperforms traditional approaches in terms of data privacy and performance, and that it adapts
better during emergency circumstances. Moreover, TSA does not affect the accuracy of the collected
data or its related statistics, nor does it degrade the quality of primary healthcare services.
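The core shuffling step can be illustrated with a toy sketch: a fog node issues pseudonymous tokens to users in its area and swaps tokens between cooperating pairs, so data leaves the area under a shuffled token rather than a stable identity. This is a conceptual sketch of the idea only, not the TSA protocol itself; the class names, token format, and pairing logic are assumptions.

```python
import secrets

class FogNode:
    """TSA-style identity shuffling sketch: the fog node pairs cooperating
    users and swaps their pseudonymous tokens before data is published."""

    def __init__(self):
        self.tokens = {}            # user -> current pseudonymous token

    def register(self, user):
        self.tokens[user] = secrets.token_hex(8)

    def shuffle_pair(self, u1, u2):
        # temporary identity swap between two cooperating users
        self.tokens[u1], self.tokens[u2] = self.tokens[u2], self.tokens[u1]

    def publish(self, user, reading):
        # readings leave the area under the (shuffled) token, never the real identity
        return {"token": self.tokens[user], "reading": reading}

fog = FogNode()
fog.register("alice")
fog.register("bob")
before = dict(fog.tokens)
fog.shuffle_pair("alice", "bob")
assert fog.tokens["alice"] == before["bob"]   # identities swapped
```

In the full scheme the swapped tokens would be rotated repeatedly and anchored on the blockchain for integrity; the sketch only shows why an observer of published readings cannot link a token to a fixed user.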
Context-Awareness Enhances 5G Multi-Access Edge Computing Reliability
The fifth generation (5G) mobile telecommunication network is expected to
support Multi-Access Edge Computing (MEC), which intends to distribute
computation tasks and services from the central cloud to the edge clouds.
Towards ultra-responsive, ultra-reliable and ultra-low-latency MEC services,
the current mobile network security architecture should enable a more
decentralized approach for authentication and authorization processes. This
paper proposes a novel decentralized authentication architecture that supports
flexible and low-cost local authentication with the awareness of context
information of network elements such as user equipment and virtual network
functions. Based on a Markov model for backhaul link quality, as well as a
random walk mobility model with mixed mobility classes and traffic scenarios,
numerical simulations have demonstrated that the proposed approach is able to
achieve a flexible balance between the network operating cost and the MEC
reliability.
Comment: Accepted by IEEE Access on Feb. 02, 201
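The evaluation setup mentioned in the abstract can be sketched in miniature: a two-state Markov (Gilbert-style) chain models backhaul availability, and the context-aware policy falls back to cheaper local authentication at the edge whenever the backhaul to the core is down. This is an illustrative sketch under assumed transition probabilities, not the paper's model or parameters.

```python
import random

def backhaul_trace(n, p_fail=0.1, p_recover=0.5, seed=42):
    """Two-state Markov chain for backhaul link availability.
    p_fail: P(up -> down) per step; p_recover: P(down -> up) per step."""
    rng = random.Random(seed)
    state, states = "up", []
    for _ in range(n):
        if state == "up" and rng.random() < p_fail:
            state = "down"
        elif state == "down" and rng.random() < p_recover:
            state = "up"
        states.append(state)
    return states

def authenticate(ue_id, backhaul_state):
    # context-aware decision: use the centralized procedure when the core is
    # reachable, otherwise authenticate locally at the edge (lower cost, lower
    # assurance) to keep MEC services responsive
    return "central-auth" if backhaul_state == "up" else "local-auth"

trace = backhaul_trace(1000)
decisions = [authenticate("ue-1", s) for s in trace]
assert set(decisions) <= {"central-auth", "local-auth"}
```

Sweeping `p_fail` and `p_recover` is one way to explore the cost/reliability balance the paper reports: the more often the chain sits in the down state, the more authentications are served locally.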
Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems
The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the last referring to capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as a smart scenario, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, as well as supported by the unstoppable technology evolution. As edge devices become richer in functionality and smarter, embedding capacities such as storage or processing, as well as new functionalities, such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, as well as the arising open and research challenges, making the case for the real need for their coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, as well as conventional clouds.
Internet of Things Cloud: Architecture and Implementation
The Internet of Things (IoT), which enables common objects to be intelligent
and interactive, is considered the next evolution of the Internet. Its
pervasiveness and abilities to collect and analyze data which can be converted
into information have motivated a plethora of IoT applications. For the
successful deployment and management of these applications, cloud computing
techniques are indispensable since they provide high computational capabilities
as well as large storage capacity. This paper aims at providing insights about
the architecture, implementation and performance of the IoT cloud. Several
potential application scenarios of IoT cloud are studied, and an architecture
is discussed regarding the functionality of each component. Moreover, the
implementation details of the IoT cloud are presented along with the services
that it offers. The main contributions of this paper lie in the combination of
the Hypertext Transfer Protocol (HTTP) and Message Queuing Telemetry Transport
(MQTT) servers to offer IoT services in the architecture of the IoT cloud with
various techniques to guarantee high performance. Finally, experimental results
are given in order to demonstrate the service capabilities of the IoT cloud
under certain conditions.
Comment: 19 pages, 4 figures, IEEE Communications Magazine
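The HTTP + MQTT combination described here can be illustrated without any real network stack: devices push telemetry through an MQTT-style publish path, while applications read the latest values through an HTTP-style request path, both backed by one store. This is a conceptual sketch of the pattern, not the paper's implementation; the topic names, status codes, and method names are assumptions (a real deployment would use an MQTT broker such as one driven by a client library, plus an HTTP server).

```python
class IoTCloudStore:
    """Sketch of combining MQTT (device-facing push) with HTTP
    (application-facing pull) over one shared telemetry store."""

    def __init__(self):
        self.latest = {}    # topic -> most recent payload

    def on_mqtt_publish(self, topic, payload):
        # broker-side callback: retain the most recent payload per topic,
        # analogous to an MQTT retained message
        self.latest[topic] = payload

    def http_get(self, topic):
        # what a GET /telemetry/<topic> handler would return
        if topic not in self.latest:
            return 404, None
        return 200, self.latest[topic]

store = IoTCloudStore()
store.on_mqtt_publish("sensors/room1/temp", "21.5")
assert store.http_get("sensors/room1/temp") == (200, "21.5")
assert store.http_get("sensors/room2/temp")[0] == 404
```

The division of labor reflects the two protocols' strengths: MQTT's lightweight publish/subscribe suits constrained, intermittently connected devices, while HTTP's request/response suits web and mobile clients querying the cloud.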
Security for the Industrial IoT: The Case for Information-Centric Networking
Industrial production plants traditionally include sensors for monitoring or
documenting processes, and actuators for enabling corrective actions in cases
of misconfigurations, failures, or dangerous events. With the advent of the
IoT, embedded controllers link these 'things' to local networks that are often
of a low-power wireless kind, and are interconnected via gateways to some cloud
from the global Internet. Inter-networked sensors and actuators in the
industrial IoT form a critical subsystem while frequently operating under harsh
conditions. It is currently under debate how to approach inter-networking of
critical industrial components in a safe and secure manner.
In this paper, we analyze the potentials of ICN for providing a secure and
robust networking solution for constrained controllers in industrial safety
systems. We showcase hazardous gas sensing in widespread industrial
environments, such as refineries, and compare with IP-based approaches such as
CoAP and MQTT. Our findings indicate that the content-centric security model,
as well as enhanced DoS resistance are important arguments for deploying
Information Centric Networking in a safety-critical industrial IoT. Evaluation
of the crypto efforts on the RIOT operating system for content security reveals
its feasibility for common deployment scenarios.
Comment: To be published at IEEE WF-IoT 201
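The content-centric security model contrasted here with channel-based IP security can be sketched simply: each named data object carries its own authenticator, so a consumer can verify integrity and origin regardless of which cache or path delivered the object. The sketch below uses an HMAC over the name and payload as a stand-in for the per-object signatures used in ICN; the shared key, names, and payloads are illustrative assumptions.

```python
import hashlib
import hmac

# Stand-in for a producer key; real ICN deployments would use asymmetric
# signatures so any consumer can verify without sharing a secret.
KEY = b"demo-producer-key"

def make_content_object(name, payload):
    """Bind the authenticator to the named content itself, not to a channel."""
    tag = hmac.new(KEY, name.encode() + payload, hashlib.sha256).hexdigest()
    return {"name": name, "payload": payload, "tag": tag}

def verify(obj):
    """Verification succeeds wherever the object came from (cache, replica)."""
    expect = hmac.new(KEY, obj["name"].encode() + obj["payload"],
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, obj["tag"])

obj = make_content_object("/refinery/unit3/gas/co2", b"412ppm")
assert verify(obj)            # a cached copy verifies without trusting the path
obj["payload"] = b"0ppm"
assert not verify(obj)        # tampering anywhere along the path is detected
```

Because trust travels with the data, constrained sensors need not maintain per-connection security state, which is part of the argument for ICN on devices like the RIOT-class controllers evaluated in the paper.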
A Priority-based Fair Queuing (PFQ) Model for Wireless Healthcare System
Healthcare is a very active research area, primarily due to the growth of the elderly population, which leads to an increasing number of emergency situations requiring urgent action. In recent years, some wireless networked medical devices have been equipped with different sensors to measure and report on a patient's vital signs remotely. The most important sensors are heart-rate (ECG), pressure, and glucose sensors. However, the strict requirements and real-time nature of medical applications dictate the extreme importance of, and need for, appropriate Quality of Service (QoS) and the fast, accurate delivery of a patient's measurements in a reliable e-Health ecosystem.
As the older adult population (65 years and above) increases due to advances in medicine and medical care over the last two decades, a high-QoS and reliable e-health ecosystem has become a major challenge in healthcare, especially for patients who require continuous monitoring and attention. Predictions indicate that the elderly population will reach approximately 2 billion in developing countries by 2050, where the availability of medical staff will be unable to cope with this growth and with emergency cases that need immediate intervention. On the other side, limitations in communication network capacity, congestion, and the enormous increase in devices, applications, and IoT traffic on the available networks add an extra layer of challenges to the e-health ecosystem, such as time constraints and the quality of measurements and signals reaching healthcare centres.
Hence, this research has tackled the delay and jitter parameters in e-health M2M wireless communication and succeeded in reducing them in comparison to currently available models. The novelty of this research lies in developing a new priority queuing model, Priority-Based Fair Queuing (PFQ), in which a new priority level and the concept of a Patient's Health Record (PHR) have been developed and integrated with the Priority Parameter (PP) values of each sensor to add a second level of priority. The results and data analysis performed on the PFQ model under different scenarios simulating a real M2M e-health environment reveal that PFQ outperforms the widely used current models such as First In First Out (FIFO) and Weighted Fair Queuing (WFQ).
The PFQ model improved transmission of ECG sensor data by decreasing delay and jitter in emergency cases by 83.32% and 75.88% respectively in comparison to FIFO, and by 46.65% and 60.13% with respect to the WFQ model. Similarly, for the pressure sensor the improvements in delay and jitter were 82.41% and 71.5% versus FIFO, and 68.43% and 73.36% versus WFQ. Data transmission was also improved for the glucose sensor, by 80.85% and 64.7% versus FIFO and by 92.1% and 83.17% versus WFQ. However, data transmission for non-emergency cases using the PFQ model was negatively impacted, scoring higher delay and jitter than FIFO and WFQ, since PFQ gives higher priority to emergency cases.
Thus, a derivative of the PFQ model has been developed, a new version named Priority-Based Fair Queuing with Tolerated Delay (PFQ-TD), to balance data transmission between emergency and non-emergency cases by taking a tolerated delay for emergency cases into account. PFQ-TD succeeds in balancing this trade-off fairly, reducing the total average delay and jitter of emergency and non-emergency cases across all sensors and keeping them within the acceptable standards. PFQ-TD improved the overall average delay and jitter in emergency and non-emergency cases across all sensors by 41% and 84% respectively in comparison to the PFQ model.
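The two-level priority described in this abstract can be sketched with a standard heap: each packet's ordering key combines a patient-level priority derived from the PHR with the sensor's PP value, with arrival order as the tie-break. This is a minimal sketch of the idea only, not the thesis's PFQ scheduler; the numeric weights and class names are assumptions.

```python
import heapq
import itertools

# Assumed weights for illustration; lower value = more urgent.
PP = {"ecg": 1, "pressure": 2, "glucose": 3}      # per-sensor Priority Parameter
PHR = {"emergency": 0, "chronic": 1, "routine": 2}  # patient-level PHR priority

class PFQ:
    """Two-level priority queue sketch: PHR class first, sensor PP second,
    arrival order (FIFO) as the final tie-break."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def enqueue(self, sensor, phr_class, packet):
        key = (PHR[phr_class], PP[sensor], next(self._seq))
        heapq.heappush(self._heap, (key, packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[1]

q = PFQ()
q.enqueue("glucose", "routine", "g1")
q.enqueue("ecg", "emergency", "e1")
q.enqueue("pressure", "emergency", "p1")
assert q.dequeue() == "e1"   # emergency ECG served first despite arriving later
```

The starvation of routine traffic visible in this ordering is exactly what motivates the PFQ-TD variant, which would additionally bound how long emergency packets may be tolerated to wait so non-emergency packets are not postponed indefinitely.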