
    Study and application of machine learning techniques to the deployment of services on 5G optical networks

    The vision of future 5G is a network that is highly heterogeneous at several levels, and the growing number of service requests imposes significant technical challenges. In this context, several machine-learning-based approaches, usually referred to as cognitive network management, have recently proven useful for simplifying network management, since unexpected events may prevent services from being satisfied at the moment they are requested. A large number of parameters affect each layer of a 5G network, so the virtualization and abstraction of services are crucial for satisfactory service deployment, with monitoring and control of the different planes being the two keys of cognitive network management. This project addresses the implementation of a simulated data collector as well as the study of several machine-learning-based approaches. In this way, possible future performance can be predicted, giving the system the ability to adjust its initial parameters and adapt the network to future demands
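The core loop the abstract describes is: monitor the network, predict future demand, then adapt parameters. A minimal sketch of the prediction step, using an ordinary least-squares trend fit over a hypothetical monitored load series (not the project's actual model or data):

```python
# Illustrative sketch of demand prediction for cognitive network management.
# The load history, units, and model are hypothetical assumptions.

def fit_linear_trend(samples):
    """Ordinary least-squares fit of load = a*t + b over time steps 0..n-1."""
    n = len(samples)
    t_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(samples))
    den = sum((t - t_mean) ** 2 for t in range(n))
    a = num / den
    b = y_mean - a * t_mean
    return a, b

def predict_load(samples, horizon=1):
    """Extrapolate the fitted trend `horizon` steps past the last sample."""
    a, b = fit_linear_trend(samples)
    return a * (len(samples) - 1 + horizon) + b

# Hypothetical service-request rate collected by the simulated data collector
history = [10.0, 12.0, 14.0, 16.0]
forecast = predict_load(history)  # next-step prediction: 18.0
```

In a real deployment this forecast would feed a control loop that re-dimensions virtualized resources before demand arrives, rather than reacting after services fail.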

    A Scalable Cluster-based Infrastructure for Edge-computing Services

    In this paper we present SEcS (short for “Scalable Edge-computing Services”), a scalable and dynamic intermediary infrastructure for developing and deploying advanced Edge-computing services using a cluster of heterogeneous machines. Our goal is to address the challenges of next-generation Internet services: scalability, high availability, fault tolerance, and robustness, as well as programmability and quick prototyping. The system is written in Java and is based on IBM’s Web Based Intermediaries (WBI) [71], developed at the IBM Almaden Research Center

    The 6G Architecture Landscape: European Perspective


    Optimization and Management of Large-scale Scientific Workflows in Heterogeneous Network Environments: From Theory to Practice

    Next-generation computation-intensive scientific applications feature large-scale computing workflows of various structures, which can be modeled as simply as linear pipelines or as complexly as Directed Acyclic Graphs (DAGs). Supporting such computing workflows and optimizing their end-to-end network performance are crucial to the success of scientific collaborations that require fast system response, smooth data flow, and reliable distributed operation. We construct analytical cost models and formulate a class of workflow mapping problems with different mapping objectives and network constraints. The difficulty of these mapping problems essentially arises from the topological matching nature of the spatial domain, which is further compounded by the complexity of resource sharing in the temporal dimension. We provide detailed computational complexity analysis and design optimal or heuristic algorithms with rigorous correctness proofs or performance analysis. We decentralize the proposed mapping algorithms and also investigate these optimization problems in unreliable network environments for fault tolerance. To examine and evaluate the performance of the workflow mapping algorithms before actual deployment and implementation, we implement a simulation program that simulates the execution dynamics of distributed computing workflows. We also develop a scientific workflow automation and management platform based on an existing workflow engine for experiments in real environments. The performance superiority of the proposed mapping solutions is illustrated by extensive simulation-based comparisons with existing algorithms and further verified by large-scale experiments on real-life scientific workflow applications through effective system implementation and deployment in real networks
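To make the flavor of the mapping problem concrete, here is a toy sketch: topologically order a DAG workflow, then greedily place each task on the currently least-loaded node. The thesis's actual objectives (end-to-end latency, throughput, fault tolerance) and algorithms are far richer; task names, costs, and nodes below are hypothetical:

```python
# Toy DAG workflow mapping: topological order + greedy least-loaded placement.
# Illustrative only; not the thesis's optimal or heuristic algorithms.
from collections import deque

def topo_order(tasks, edges):
    """Kahn's algorithm: return tasks in a dependency-respecting order."""
    indeg = {t: 0 for t in tasks}
    succ = {t: [] for t in tasks}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    if len(order) != len(tasks):
        raise ValueError("workflow graph contains a cycle")
    return order

def greedy_map(tasks, edges, cost, nodes):
    """Assign each task, in topological order, to the least-loaded node."""
    load = {n: 0.0 for n in nodes}
    placement = {}
    for t in topo_order(tasks, edges):
        n = min(load, key=load.get)
        placement[t] = n
        load[n] += cost[t]
    return placement

# A linear pipeline mapped onto two hypothetical compute nodes
tasks = ["ingest", "filter", "analyze", "publish"]
edges = [("ingest", "filter"), ("filter", "analyze"), ("analyze", "publish")]
cost = {"ingest": 2.0, "filter": 1.0, "analyze": 4.0, "publish": 1.0}
plan = greedy_map(tasks, edges, cost, ["nodeA", "nodeB"])
```

Even this toy version shows why the problem is hard: balancing node load ignores data-transfer costs along edges, which is exactly the spatial topological-matching aspect the mapping formulations must capture.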

    Smartphone as an Edge for Context-Aware Real-Time Processing for Personal e-Health

    The medical domain is facing an ongoing challenge: how can patients share their health information and timeline with healthcare providers? This involves secure sharing and the diverse data types and formats reported by healthcare-related devices. A multilayer framework can address these challenges in the context of the Internet of Medical Things (IoMT). This framework utilizes smartphone sensors, external services, and medical devices that measure vital signs and communicate such real-time data to smartphones. The smartphone serves as an “edge device” to visualize, analyze, store, and report context-aware data to the cloud layer. Focusing on medical device connectivity, mobile security, data collection, and interoperability for frictionless data processing allows for building context-aware personal medical records (PMRs). These PMRs are then securely transmitted through a communication protocol, Message Queuing Telemetry Transport (MQTT), to be utilized by authorized medical staff and healthcare institutions. MQTT is a lightweight, intuitive, and easy-to-use messaging protocol suitable for IoMT systems. These PMRs are then further processed in a cloud computing platform, Amazon Web Services (AWS). Through AWS and its services, architecting a customized data pipeline from the mobile user to the cloud allows the display of useful analytics to healthcare stakeholders, secure storage, and SMS notifications. Our results demonstrate that this framework preserves the patient’s health-related timeline and shares this information with professionals. Through a serverless business-intelligence interactive dashboard generated from AWS QuickSight, further querying and data-filtering techniques are applied to the PMRs, which identify key metrics and trends
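The edge-to-cloud hop the framework describes amounts to serializing a PMR sample and publishing it on an MQTT topic. A minimal sketch of the packaging step, with a hypothetical topic layout and field schema (the paper's exact schema is not reproduced here):

```python
# Sketch of packaging a context-aware personal medical record (PMR) for
# MQTT transport. Topic layout and field names are hypothetical assumptions.
import json
import time

def build_pmr_message(patient_id, device, vitals):
    """Serialize one PMR sample; a per-patient/per-device topic keeps streams separate."""
    topic = f"pmr/{patient_id}/{device}"
    payload = json.dumps({
        "patient_id": patient_id,
        "device": device,
        "timestamp": int(time.time()),
        "vitals": vitals,
    })
    return topic, payload

topic, payload = build_pmr_message("p-001", "pulse-oximeter",
                                   {"spo2": 97, "heart_rate": 72})
# With a broker available, an MQTT client (e.g. Eclipse Paho's
# client.publish(topic, payload, qos=1)) would hand this to the cloud pipeline.
```

Publishing with QoS 1 (at-least-once delivery) is a common choice for vital-sign telemetry, trading occasional duplicates for resilience on lossy mobile links.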

    Mobile Oriented Future Internet (MOFI)

    This Special Issue consists of seven papers that discuss how to enhance mobility management and its associated performance in the mobile-oriented future Internet (MOFI) environment. The first two papers deal with the architectural design and experimentation of mobility management schemes, in which new schemes are proposed and real-world testbed experiments are performed. The subsequent three papers focus on the use of software-defined networking (SDN) for effective service provisioning in the MOFI environment, together with real-world practices and testbed experiments. The remaining two papers discuss network engineering issues in newly emerging mobile networks, such as flying ad-hoc networks (FANETs) and connected vehicular networks

    Distributed detection, localization, and estimation in time-critical wireless sensor networks

    In this thesis the problem of distributed detection, localization, and estimation (DDLE) of a stationary target in a fusion center (FC) based wireless sensor network (WSN) is considered. The communication process is subject to time-critical operation and restricted power and bandwidth (BW) resources, operating over a shared communication channel suffering from Rayleigh fading and phase noise. A novel algorithm is proposed to solve the DDLE problem consisting of two dependent stages: distributed detection and distributed estimation. The WSN performs distributed detection first, and based on the global detection decision the distributed estimation stage is performed. The communication between the SNs and the FC occurs over a shared channel via a slotted Aloha MAC protocol to conserve BW. In distributed detection, hard decision fusion is adopted, using the counting rule (CR), together with sensor censoring in order to save power and BW. The effect of Rayleigh fading on distributed detection is also considered and accounted for by using distributed diversity combining techniques, where the diversity combining is performed among the sensor nodes (SNs) in lieu of having the processing done at the FC. Two distributed techniques are proposed: distributed maximum ratio combining (dMRC) and distributed equal gain combining (dEGC). Both techniques show superior detection performance when compared to conventional diversity combining procedures that take place at the FC. In distributed estimation, the segmented distributed localization and estimation (SDLE) framework is proposed. The SDLE enables power- and BW-efficient processing. The SDLE hinges on the idea of introducing intermediate parameters that are estimated locally by the SNs and transmitted to the FC instead of the actual measurements. This concept decouples the main problem into a simpler set of local estimation problems solved at the SNs and a global estimation problem solved at the FC.
Two algorithms are proposed for solving the local problem: a nonlinear least squares (NLS) algorithm using the variable projection (VP) method and a simpler grid search (GS) method. Also, four algorithms are proposed to solve the global problem: NLS, GS, the hyperspherical intersection method (HSI), and the robust hyperspherical intersection (RHSI) method. Thus, the SDLE can be solved through combinations of local and global algorithms. Five combinations are tried: NLS2 (NLS-NLS), NLS-HSI, NLS-RHSI, GS2, and GS-NLS. It turns out that the last algorithm combination delivers the best localization and estimation performance. In fact, the target can be localized with less than one meter of error. The SNs send their local estimates to the FC over a shared channel using the slotted Aloha MAC protocol, which suits WSNs since it requires only one channel. However, Aloha is known for its relatively high medium access or contention delay if the medium access probability is poorly chosen. This fact significantly hinders the time-critical operation of the system. Hence, multi-packet reception (MPR) is used with the slotted Aloha protocol, in which several channels are used for contention. The contention delay is analyzed for slotted Aloha with and without MPR. More specifically, the mean and variance have been analytically computed and the contention delay distribution is approximated. Having theoretical expressions for the contention delay statistics enables optimizing both the medium access probability and the number of MPR channels in order to strike a trade-off between delay performance and complexity
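The counting rule (CR) used for hard decision fusion is simple enough to state in a few lines: each sensor transmits a 1-bit local decision (censoring sensors stay silent unless informative), and the FC declares a detection when the count of received ones meets a threshold. A minimal sketch, with an arbitrary threshold rather than one derived from the thesis's false-alarm analysis:

```python
# Illustrative sketch of hard-decision fusion with the counting rule (CR).
# The threshold k is an arbitrary example; a real design would pick k from
# the desired global false-alarm probability.

def counting_rule(local_decisions, k):
    """Return 1 (target present) iff at least k sensors reported a detection."""
    return 1 if sum(local_decisions) >= k else 0

# Eight sensors; with censoring, only informative nodes would transmit a 1
reports = [1, 0, 1, 1, 0, 0, 1, 0]
decision = counting_rule(reports, k=3)  # 4 detections >= 3 -> global decision 1
```

The appeal of the CR in this setting is that each SN sends one bit over the shared slotted-Aloha channel, which directly serves the power and BW constraints the thesis targets.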

    The Acceptance of Using Information Technology for Disaster Risk Management: A Systematic Review

    The number of natural disaster events continues to affect humans and the world economy. To cope with disasters, several sectors have developed frameworks, systems, technologies, and so on. However, little research has focused on the usage behavior of Information Technology (IT) for disaster risk management (DRM). Therefore, this study investigates the factors affecting the intention to use IT for mitigating disasters’ impacts. This study conducted a systematic review of academic research published during 2011–2018. Two important factors from the Technology Acceptance Model (TAM), among others, are used to describe individual behavior. In order to investigate the potential factors, the technology platforms are divided into nine types. According to the findings, computer software such as GIS applications is frequently used for simulation and spatial data analysis. Social media is among the first choices during disaster events for communicating about situations and damages. Finally, we found five major potential factors: Perceived Usefulness (PU), Perceived Ease of Use (PEOU), information accessibility, social influence, and disaster knowledge. Among them, the most essential factor for using IT for disaster management is PU, while PEOU and information accessibility are more important on web platforms