Internet of things
Manual of Digital Earth / Editors: Huadong Guo, Michael F. Goodchild, Alessandro Annoni .- Springer, 2020 .- ISBN: 978-981-32-9915-3

Digital Earth was born with the aim of replicating the real world within the digital world. Many efforts have been made to observe and sense the Earth, both from space (remote sensing) and by using in situ sensors. Focusing on the latter, advances in Digital Earth have established vital bridges to exploit these sensors and their networks by taking location as a key element. The current era of connectivity envisions that everything is connected to everything. The concept of the Internet of Things (IoT) emerged as a holistic proposal to enable an ecosystem of varied, heterogeneous networked objects and devices to speak to and interact with each other. To make the IoT ecosystem a reality, it is necessary to understand the electronic components, communication protocols, real-time analysis techniques, and the location of the objects and devices. The IoT ecosystem and the Digital Earth (DE) jointly form interrelated infrastructures for addressing today’s pressing issues and complex challenges. In this chapter, we explore the synergies and frictions in establishing an efficient and permanent collaboration between the two infrastructures, in order to adequately address multidisciplinary and increasingly complex real-world problems. Although there are still some pending issues, the identified synergies generate optimism for a true collaboration between the Internet of Things and the Digital Earth.
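The chapter's central point, that location is the key element binding in-situ IoT sensors to the Digital Earth, can be illustrated with a minimal sketch. The record layout, identifiers, and bounding-box filter below are illustrative assumptions, not an API from the chapter:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """An in-situ IoT observation; its location links it to the Digital Earth."""
    sensor_id: str
    lat: float             # WGS84 latitude
    lon: float             # WGS84 longitude
    observed_property: str  # e.g. "air_temperature"
    value: float
    unit: str

def within_bbox(o: Observation, min_lat: float, min_lon: float,
                max_lat: float, max_lon: float) -> bool:
    """Spatial filter: select observations inside a geographic bounding box."""
    return min_lat <= o.lat <= max_lat and min_lon <= o.lon <= max_lon

# A hypothetical station reporting temperature near Barcelona.
obs = Observation("st-042", 41.39, 2.17, "air_temperature", 21.4, "degC")
print(within_bbox(obs, 41.0, 2.0, 42.0, 3.0))  # True
```

Because every observation carries coordinates, a Digital Earth platform can fuse such in-situ streams with remote-sensing layers purely through spatial queries like this one.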
Secured and Cooperative Publish/Subscribe Scheme in Autonomous Vehicular Networks
To save computing power while enhancing safety, autonomous vehicles (AVs) are expected to drive collaboratively in the future, sharing sensory data and computing results among neighbors. However, intense collaborative computing and data transmission among unknown parties will inevitably introduce severe security concerns. To address these concerns in future AVs, in this paper we develop SPAD, a secure framework that forbids free-riders and promotes trustworthy data dissemination in collaborative autonomous driving. Specifically, we first introduce a publish/subscribe framework for inter-vehicle data transmission. To defend against free-riding attacks, we formulate the interactions between publisher AVs and subscriber AVs as a vehicular publish/subscribe game, and incentivize AVs to deliver high-quality data by analyzing the Stackelberg equilibrium of the game. We also design a reputation evaluation mechanism in the game to identify malicious AVs that disseminate fake information. Furthermore, given the lack of sufficient knowledge of the parameters of the network model and the user cost model in dynamic game scenarios, a two-tier reinforcement-learning-based algorithm with hotbooting is developed to obtain the optimal strategies of subscriber AVs and publisher AVs with free-rider prevention. Extensive simulations are conducted, and the results validate that SPAD can effectively prevent free-riders and enhance the dependability of disseminated contents compared with conventional schemes.
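The Stackelberg structure of the publish/subscribe game can be sketched by backward induction: the leader anticipates the follower's best response before committing. The quadratic cost and logarithmic valuation below are illustrative assumptions, not SPAD's actual utility functions:

```python
import numpy as np

# Illustrative Stackelberg pricing game: a subscriber AV (leader) posts a
# payment p; a publisher AV (follower) responds with data quality q that
# maximizes its profit p*q - c*q**2.
c = 0.5   # publisher's quality cost coefficient (assumed)
a = 4.0   # subscriber's valuation of quality (assumed)

def follower_best_response(p: float) -> float:
    # argmax_q of p*q - c*q^2  =>  q*(p) = p / (2c)
    return p / (2 * c)

def leader_utility(p: float) -> float:
    q = follower_best_response(p)
    return a * np.log1p(q) - p  # diminishing returns in quality, minus payment

# Backward induction: the leader searches over payments, anticipating q*(p).
payments = np.linspace(0.01, 10, 1000)
p_star = payments[np.argmax([leader_utility(p) for p in payments])]
q_star = follower_best_response(p_star)
print(f"equilibrium payment ≈ {p_star:.2f}, quality ≈ {q_star:.2f}")
```

With these toy parameters the first-order condition a/(2c + p) = 1 gives p* = a - 2c = 3, which the grid search recovers; the paper's two-tier RL algorithm plays this role when the utility parameters are unknown.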
Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures
One of the significant shifts in next-generation computing technologies will certainly be in
the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD
landmark, has evolved into a widely deployed BD operating system. Its new features include
a federation structure and many associated frameworks, which give Hadoop 3.x the
maturity to serve different markets. This dissertation addresses two leading issues involved in
exploiting the BD and large-scale data analytics realm on the Hadoop platform:
(i) scalability, which directly affects system performance and overall throughput, addressed using
portable Docker containers; and (ii) security, which spreads the adoption of data protection practices
among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME),
an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker
(BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for
data streaming to the cloud are the main contributions of this thesis.
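The access-broker idea, a thin policy layer mediating user requests to federated namespaces, can be sketched as follows. The roles, paths, and policy table are illustrative assumptions, not BDFAB's actual policy model:

```python
# Toy access broker: before a request reaches a federated HDFS namespace,
# the broker checks it against a role-based policy table.
POLICIES = {
    "analyst":  {("read", "/federation/ns1/sales")},
    "engineer": {("read", "/federation/ns1/sales"),
                 ("write", "/federation/ns2/staging")},
}

def broker_check(role: str, action: str, path: str) -> bool:
    """Return True iff the role is allowed to perform the action on the path."""
    return (action, path) in POLICIES.get(role, set())

print(broker_check("engineer", "write", "/federation/ns2/staging"))  # True
print(broker_check("analyst", "write", "/federation/ns2/staging"))   # False
```

Centralizing the check in a broker means individual namespaces need not agree on an access-control implementation, which is the practical appeal of the broker pattern in a federation.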
Load Balancing in Distributed Cloud Computing: A Reinforcement Learning Algorithms in Heterogeneous Environment
Load balancing in the cloud is an important aspect that plays a vital role in sharing load among different types of resources, such as virtual machines hosted on servers, storage in the form of hard drives, and the servers themselves. Reinforcement learning (RL) approaches can be adopted in cloud computing to achieve quality-of-service goals such as minimized cost and response time, increased throughput, fault tolerance, and utilization of all available resources in the network, thus increasing system performance. RL-based approaches achieve effective resource utilization by selecting the most suitable processor for task execution with minimum makespan. Earlier related work on load sharing includes only a limited number of reinforcement-learning-based approaches; this paper therefore focuses on the importance of RL-based approaches for achieving balanced load in distributed cloud computing. A reinforcement learning framework is proposed and implemented for task execution in heterogeneous environments, covering in particular Least Load Balancing (LLB) and Booster Reinforcement Controller (BRC) load balancing. With the help of reinforcement learning, an optimal result is achieved for load sharing and task allocation. In this RL-based framework, processor workload is taken as the input. In this paper, the results of the proposed RL-based approaches are evaluated for cost and makespan and compared with existing load balancing techniques for task execution and resource utilization.
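The core loop of RL-based load balancing, observe workload, pick a processor, receive a reward tied to completion time, can be sketched with single-state Q-learning. The processor speeds, arrival pattern, and drain rate are illustrative assumptions, not the paper's LLB or BRC algorithms:

```python
import random

# Q-learning sketch: assign each incoming task to one of several heterogeneous
# processors; the reward penalizes the task's completion time, so the agent
# learns to prefer faster, less-loaded processors.
random.seed(0)
speeds = [1.0, 2.0, 4.0]        # processing speed of each processor (assumed)
loads = [0.0] * len(speeds)     # pending work queued at each processor
Q = [0.0] * len(speeds)         # action values for a single aggregate state
alpha, eps = 0.1, 0.2           # learning rate and exploration probability

for task in range(500):
    size = random.uniform(1, 5)           # work units for this task
    if random.random() < eps:             # epsilon-greedy action selection
        a = random.randrange(len(speeds))
    else:
        a = max(range(len(speeds)), key=lambda i: Q[i])
    finish = (loads[a] + size) / speeds[a]  # time until this task completes
    loads[a] += size
    Q[a] += alpha * (-finish - Q[a])        # reward = negative completion time
    for i in range(len(loads)):             # queues drain between arrivals
        loads[i] = max(0.0, loads[i] - speeds[i] * 0.8)

print("preferred processor:", max(range(len(speeds)), key=lambda i: Q[i]))
```

After training, the highest Q-value sits on the fastest processor, mirroring the paper's goal of minimum-makespan task placement learned purely from observed workloads.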
A survey of multi-access edge computing in 5G and beyond : fundamentals, technology integration, and state-of-the-art
Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained users has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large volumes of data before sending them to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works, and discuss challenges and potential future directions for MEC research.
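The offloading trade-off at the heart of MEC, run a task locally or ship it to an edge server, reduces to comparing two latencies. The CPU frequencies, task size, and link rate below are illustrative assumptions for a back-of-envelope check:

```python
# Offload a task when uplink transmission plus edge execution beats local
# execution. All parameter values are illustrative.
def local_time(cycles: float, local_cps: float) -> float:
    """Latency of executing the task on the handset."""
    return cycles / local_cps

def offload_time(data_bits: float, uplink_bps: float,
                 cycles: float, edge_cps: float) -> float:
    """Latency of sending the input over the RAN and executing at the edge."""
    return data_bits / uplink_bps + cycles / edge_cps

cycles = 2e9        # CPU cycles the task requires
data = 8e6          # input size in bits (1 MB)
local_cps = 1e9     # 1 GHz handset CPU
edge_cps = 20e9     # edge server share: 20 GHz equivalent
uplink = 50e6       # 50 Mbit/s radio uplink

t_local = local_time(cycles, local_cps)                # 2.0 s
t_edge = offload_time(data, uplink, cycles, edge_cps)  # 0.16 s + 0.1 s = 0.26 s
print("offload" if t_edge < t_local else "compute locally")
```

Even this crude model shows why MEC suits the real-time applications the survey lists: the edge wins whenever its compute advantage outweighs the transmission cost, which fails for a distant cloud with a longer network path.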
Fast decision algorithms for efficient access point assignment in SDN-controlled wireless access networks
Global optimization of access point (AP) assignment to user terminals requires efficient monitoring of user behavior, fast decision algorithms, efficient control signaling, and fast AP reassignment mechanisms. In this scenario, software-defined networking (SDN) technology may be suitable for network monitoring, signaling, and control. We recently proposed embedding virtual switches in user terminals for direct management by an SDN controller, further contributing to SDN-oriented access network optimization. However, since users may restrict terminal-side traffic monitoring for privacy reasons (a common assumption by previous authors), we infer user traffic classes at the APs. On the other hand, since handovers will be more frequent in dense small-cell networks (e.g., mmWave-based 5G deployments will require dense network topologies with inter-site distances of ~150-200 m), the delay to take assignment decisions should be minimal. To this end, we propose taking fast decisions based exclusively on extremely simple network-side application flow-type predictions based on past user behavior. Using real data, we show that a centralized allocation algorithm based on those predictions achieves network utilization levels that approximate those of optimal allocations. We also test a distributed version of this algorithm. Finally, we quantify the elapsed time from the moment a user traffic event takes place until its terminal is assigned an AP, when needed.

Agencia Estatal de Investigación | Ref. TEC2016-76465-C2-2-R; Agencia Estatal de Investigación | Ref. RTC-2016-4898-7; Xunta de Galicia | Ref. GRC2018/53; Fundación La Caixa
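A centralized allocation driven by per-user load predictions can be sketched as a greedy heuristic. The user loads, AP names, and heaviest-first ordering are illustrative assumptions, not the paper's actual algorithm:

```python
# Greedy AP assignment: each user has a predicted flow load (learned from past
# behavior at the network side) and a set of reachable APs; the controller
# places heavy users first on the currently least-loaded reachable AP.
def assign(users: dict, reachable: dict) -> dict:
    """users: {id: predicted_load}; reachable: {id: [ap, ...]} -> {id: ap}."""
    ap_load: dict = {}
    assignment: dict = {}
    for u in sorted(users, key=users.get, reverse=True):  # heaviest first
        ap = min(reachable[u], key=lambda a: ap_load.get(a, 0.0))
        assignment[u] = ap
        ap_load[ap] = ap_load.get(ap, 0.0) + users[u]
    return assignment

users = {"u1": 5.0, "u2": 3.0, "u3": 1.0}
reachable = {"u1": ["ap1", "ap2"], "u2": ["ap1", "ap2"], "u3": ["ap2"]}
print(assign(users, reachable))  # {'u1': 'ap1', 'u2': 'ap2', 'u3': 'ap2'}
```

Because the only inputs are precomputed flow-type predictions, the per-decision cost is a handful of comparisons, which is what makes such schemes viable at the handover rates of dense small-cell deployments.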
BeeFlow: Behavior Tree-based Serverless Workflow Modeling and Scheduling for Resource-Constrained Edge Clusters
Serverless computing has gained popularity in edge computing due to its
flexible features, including the pay-per-use pricing model, auto-scaling
capabilities, and multi-tenancy support. Complex Serverless-based applications
typically rely on Serverless workflows (also known as Serverless function
orchestration) to express task execution logic, and numerous application- and
system-level optimization techniques have been developed for Serverless
workflow scheduling. However, there has been limited exploration of optimizing
Serverless workflow scheduling in edge computing systems, particularly in
high-density, resource-constrained environments such as system-on-chip clusters
and single-board-computer clusters. In this work, we discover that existing
Serverless workflow scheduling techniques typically assume models with limited
expressiveness and cause significant resource contention. To address these
issues, we propose modeling Serverless workflows using behavior trees, a novel
and fundamentally different approach from existing directed-acyclic-graph- and
state machine-based models. Behavior tree-based modeling allows for easy
analysis without compromising workflow expressiveness. We further present
observations derived from the inherent tree structure of behavior trees for
contention-free function collections and awareness of exact and empirical
concurrent function invocations. Based on these observations, we introduce
BeeFlow, a behavior tree-based Serverless workflow system tailored for
resource-constrained edge clusters. Experimental results demonstrate that
BeeFlow achieves up to 3.2X speedup in a high-density, resource-constrained
edge testbed and 2.5X speedup in a high-profile cloud testbed, compared with
the state-of-the-art.

Comment: Accepted by Journal of Systems Architecture
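The behavior-tree modeling idea can be sketched with two composite node types: a Sequence orders dependent steps, while a Parallel groups a collection whose members may run together. The node classes and the example workflow are illustrative assumptions, not BeeFlow's implementation:

```python
# Minimal behavior tree for a Serverless workflow: leaves wrap functions;
# composites define execution order. The tree structure itself makes
# contention-free collections explicit (the children of a Parallel node).
class Leaf:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Run children in order; stop at the first one that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        return all(c.tick() for c in self.children)

class Parallel:
    """A contention-free collection; a scheduler may dispatch these
    concurrently, though this sketch ticks them in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        return all(c.tick() for c in self.children)

log = []
wf = Sequence(
    Leaf("fetch", lambda: log.append("fetch") or True),
    Parallel(Leaf("resize", lambda: log.append("resize") or True),
             Leaf("tag",    lambda: log.append("tag") or True)),
    Leaf("store", lambda: log.append("store") or True),
)
result = wf.tick()
print(result, log)
```

Unlike a flat DAG edge list, the nesting is directly analyzable: any two functions under the same Parallel node are known not to contend, which is the structural property the paper exploits for scheduling.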