280 research outputs found

    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill-set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality and industrial process automation. Several papers already cover many of the Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.

    Towards Autonomous Computer Networks in Support of Critical Systems

    The abstract is in the attachment.

    Modern computing: Vision and challenges

    Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society with landmark developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have been continuously evolving and adapting to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, edge computing, and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments, such as serverless computing, quantum computing, and on-device AI at the edge. Trends emerge when one traces the technological trajectory, including the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    Context-awareness for mobile sensing: a survey and future directions

    The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications for recognizing user-related social and cognitive activities in any situation and at any location. Context-awareness provides the capability of being conscious of the physical environments or situations around mobile device users, allowing network services to respond proactively and intelligently based on such awareness. The key idea behind context-aware applications is to encourage users to collect, analyze and share local sensory knowledge for large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and also to assist individuals. However, many open challenges remain, most of which arise because the middleware services provided on mobile devices have limited resources in terms of power, memory and bandwidth. Thus, it becomes critically important to study how these drawbacks can be addressed and resolved, and at the same time to better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from the emerging concepts to applications of context-awareness in mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and addresses them by proposing possible solutions.
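
    As a rough illustration of the idea behind such context-aware adaptation, the sketch below (purely hypothetical, not taken from the survey) shows a policy that adjusts how often a device samples and shares sensor data based on its context; the context attributes and thresholds are illustrative assumptions.

        # Minimal, hypothetical sketch: a context-aware policy that adapts how often
        # a mobile device samples and shares sensor data, based on its current context.
        # The context attributes and thresholds below are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class DeviceContext:
            battery_level: float   # 0.0 .. 1.0
            network: str           # "wifi", "cellular" or "offline"
            user_moving: bool      # e.g. inferred from the accelerometer

        def sensing_policy(ctx: DeviceContext) -> dict:
            """Return a sampling interval (seconds) and whether to share data now."""
            # Sample more often when the user is moving, since the context changes faster.
            interval = 10 if ctx.user_moving else 60
            # Back off aggressively when the battery is low, to preserve device resources.
            if ctx.battery_level < 0.2:
                interval *= 4
            # Only share with the community service over Wi-Fi and with enough battery.
            share_now = ctx.network == "wifi" and ctx.battery_level >= 0.2
            return {"sampling_interval_s": interval, "share_now": share_now}

        if __name__ == "__main__":
            ctx = DeviceContext(battery_level=0.15, network="cellular", user_moving=True)
            print(sensing_policy(ctx))  # {'sampling_interval_s': 40, 'share_now': False}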

    Advances in the Convergence of Blockchain and Artificial Intelligence

    Blockchain (BC) and artificial intelligence (AI) are currently two of the hottest computer science topics, and their future seems bright. However, their convergence is not straightforward, and more research is needed in both fields. Thus, this book presents some of the latest advances in the convergence of BC and AI and gives useful guidelines for future researchers on how BC can help AI and how AI can become smarter thanks to the use of BC. This book specifically analyzes the past of BC through the history of Bitcoin and then looks into the future: from massive Internet-of-Things (IoT) deployments, to the so-called metaverse, and to the next generation of AI-powered, BC-based cyber-secured applications.

    Task-oriented cross-system design for Metaverse in 6G era

    As an emerging concept, the Metaverse has the potential to revolutionize social interaction in the post-pandemic era by establishing a digital world for online education, remote healthcare, immersive business, intelligent transportation, and advanced manufacturing. The goal is ambitious, yet the methodologies and technologies to achieve the full vision of the Metaverse remain unclear. In this thesis, we first introduce the three pillars of infrastructure that lay the foundation of the Metaverse, i.e., Human-Computer Interfaces (HCIs), sensing and communication systems, and network architectures. Then, we depict the roadmap towards the Metaverse, which consists of four stages with different applications. As one of the essential building blocks for the Metaverse, we also review the state of the art in Computer Vision for the Metaverse as well as its future scope. To support diverse applications in the Metaverse, we put forward a novel design methodology, task-oriented cross-system design, and further review the potential solutions and future challenges. Specifically, we establish a task-oriented cross-system design for a simple case, where sampling, communications, and prediction modules are jointly optimized for the synchronization of real-world devices and their digital models in the Metaverse. We use domain knowledge to design a deep reinforcement learning (DRL) algorithm to minimize the communication load subject to an average tracking error constraint. We validate our framework on a prototype composed of a real-world robotic arm and its digital model. The results show that our framework achieves a better trade-off between the average tracking error and the average communication load compared to a communication system without sampling and prediction. For example, the average communication load can be reduced to 87% when the average tracking error constraint is 0.002°. In addition, our policy outperforms the benchmark with the static sampling rate and prediction horizon optimized by exhaustive search, in terms of the tail probability of the tracking error. Moreover, with the assistance of expert knowledge, the proposed algorithm achieves better convergence time, stability, communication load, and average tracking error. Furthermore, we establish a task-oriented cross-system design framework for a general case, where the goal is to minimize the required packet rate for timely and accurate modeling of a real-world robotic arm in the Metaverse. Specifically, different modules including sensing, communications, prediction, control, and rendering are considered. To optimize the scheduling policy and prediction horizons, we design a Constraint Proximal Policy Optimization (CPPO) algorithm by integrating domain knowledge from the relevant systems into the advanced reinforcement learning algorithm Proximal Policy Optimization (PPO). Specifically, the Jacobian matrix for analyzing the motion of the robotic arm is included in the state of the CPPO algorithm, and the Conditional Value-at-Risk (CVaR) of the state-value function characterizing the long-term modeling error is adopted in the constraint. In addition, the policy is represented by a two-branch neural network determining the scheduling policy and the prediction horizons, respectively. To evaluate our algorithm, we build a prototype including a real-world robotic arm and its digital model in the Metaverse. The experimental results indicate that domain knowledge helps to reduce the convergence time and the required packet rate by up to 50%, and that the cross-system design framework outperforms a baseline framework in terms of the required packet rate and the tail distribution of the modeling error.
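
    As a rough illustration of the two-branch policy mentioned in this abstract, the sketch below shows one way such a network could be structured; the framework (PyTorch), layer sizes, state layout and action spaces are assumptions made for illustration rather than the thesis's actual design.

        # Illustrative sketch of a two-branch policy head of the kind described above:
        # one branch for the scheduling decision and one for the prediction horizon.
        # Layer sizes, the state layout (which would include Jacobian-derived features)
        # and the action spaces are assumptions, not the thesis's actual architecture.
        import torch
        import torch.nn as nn

        class TwoBranchPolicy(nn.Module):
            def __init__(self, state_dim: int, n_links: int, max_horizon: int):
                super().__init__()
                # Shared trunk over the state observation.
                self.trunk = nn.Sequential(
                    nn.Linear(state_dim, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                )
                # Branch 1: logits over which link/device to schedule in this slot.
                self.schedule_head = nn.Linear(128, n_links)
                # Branch 2: logits over a discrete prediction horizon (1 .. max_horizon).
                self.horizon_head = nn.Linear(128, max_horizon)

            def forward(self, state: torch.Tensor):
                h = self.trunk(state)
                return self.schedule_head(h), self.horizon_head(h)

        if __name__ == "__main__":
            policy = TwoBranchPolicy(state_dim=32, n_links=4, max_horizon=10)
            sched_logits, horizon_logits = policy(torch.randn(1, 32))
            print(sched_logits.shape, horizon_logits.shape)  # (1, 4) and (1, 10)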

    QUALITY-OF-SERVICE PROVISIONING FOR SMART CITY APPLICATIONS USING SOFTWARE-DEFINED NETWORKING

    In the current world, most cities have WiFi Access Points (APs) in every nook and corner. Hence, elevating these cities to the status of smart cities is more easily achievable than ever before. Internet-of-Things (IoT) connections primarily use WiFi standards to form the veins of a smart city. Unfortunately, this vast potential of WiFi technology in the genesis of smart cities is somewhat compromised by its failure to meet the unique Quality-of-Service (QoS) demands of smart city applications. Of the main QoS factors, namely transmission link bandwidth, packet transmission delay, jitter, and packet loss rate, not all applications call for all of them at the same time. Since a smart city is a pool of drastically unrelated services, this variable demand can actually be exploited to optimize network performance. This thesis work is an attempt to satisfy one of those QoS demands, namely packet delivery latency. Three algorithms are developed to alleviate traffic load imbalance at APs so as to reduce packet forwarding delay. Software-Defined Networking (SDN) is making its way into the networking world as a technology of great use and practicality, and the algorithms make use of SDN features to control the connections to APs in order to achieve the delay requirements of smart city services. Real hardware devices are used to imitate a real-life scenario of citywide coverage, consisting of WiFi devices and APs that are currently available on the market, with neither requiring any additional capabilities such as support for a specific roaming protocol, running a software agent, or sending probe packets. Extensive hardware experimentation demonstrates the efficacy of the proposed algorithms.
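
    To illustrate the flavour of such SDN-driven load balancing, the sketch below steers a newly arriving client to the least-loaded access point; it is a generic heuristic with assumed attributes and does not reproduce the thesis's three algorithms.

        # A generic sketch (not one of the thesis's three algorithms) of an SDN-style
        # association decision: steer a newly arriving client to the access point with
        # the most spare capacity, as a heuristic for reducing packet forwarding delay.
        # The load metric and AP attributes are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class AccessPoint:
            name: str
            associated_clients: int
            offered_load_mbps: float
            capacity_mbps: float

        def pick_least_loaded_ap(aps: list[AccessPoint]) -> AccessPoint:
            """Choose the AP with the lowest load ratio, breaking ties by client count."""
            return min(aps, key=lambda ap: (ap.offered_load_mbps / ap.capacity_mbps,
                                            ap.associated_clients))

        if __name__ == "__main__":
            aps = [AccessPoint("AP-1", 12, 40.0, 54.0),
                   AccessPoint("AP-2", 3, 8.0, 54.0),
                   AccessPoint("AP-3", 7, 30.0, 54.0)]
            print("Steer the new client to", pick_least_loaded_ap(aps).name)  # AP-2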

    Multi-criteria decision support for energy-efficient IoT edge computing offloading

    Computation offloading is one of the primary technological enablers of the Internet of Things (IoT), as it helps address individual devices' resource restrictions (e.g. processing and memory). In the past, offloading would always utilise remote cloud infrastructures, but the increasing size of IoT data traffic and the real-time response requirements of modern and future IoT applications have led to the adoption of the edge computing paradigm, where the data is processed at the edge of the network, closer to the IoT devices. The decision as to whether cloud or edge resources will be utilised is typically taken at the design stage, based on the type of the IoT device. Yet, the conditions that determine the optimality of this decision, such as the arrival rate, nature and sizes of the tasks, and crucially the real-time conditions of the networks involved, keep changing. At the same time, the energy consumption of IoT devices is usually a key requirement, which is affected primarily by the time it takes to complete tasks, whether for the actual computation or for offloading them through the network. This thesis presents a dynamic computation offloading mechanism, which improves the performance (i.e. response time) and energy consumption of IoT devices in a decentralised and autonomous manner. We initially propose the Multi-critEria DecIsion support meChanism for IoT offloading (MEDICI), which runs independently on an IoT device, enabling it to make offloading decisions dynamically, based on multiple criteria, such as the state of the IoT, edge or cloud devices and the conditions of the network connecting them. It provides mathematical models of the expected time and energy costs for the different options of handling a task (i.e. offloading it to the edge or the cloud, or executing it on the IoT device itself). To evaluate its effectiveness, we provide simulation results, obtained by extending the EdgeCloudSim simulator, comparing it against previous families of approaches used in the literature. Our simulations on four different types of IoT applications show that allowing customisation and dynamic offloading decision support can drastically improve the response time of time-critical and small-size applications, such as IoT cyber intrusion detection, and the energy consumption not only of the individual IoT devices but also of the system as a whole. Furthermore, we present an enhancement of our MEDICI mechanism, the ProbeLess Multi-critEria DecIsion support meChanism for IoT offloading (PL-MEDICI), which enables MEDICI to operate in real IoT environments without the need for probing or having pre-defined parameters in order to estimate or model the network conditions or the computation capabilities of the different devices involved. This is the first probeless, dynamic and decentralised offloading decision support mechanism for IoT environments. The probeless property is achieved by combining lightweight statistical techniques with the concept of age of knowledge (AoK), which allows us to maintain information that is accurate enough for our estimations. We provide experimental results from a real IoT testbed with three real IoT applications, showcasing that PL-MEDICI outperforms existing techniques in terms of both response time and energy consumption. Finally, in order to further evaluate our PL-MEDICI mechanism, we formulate a mixed-integer linear programming optimisation problem that provides the theoretical optimal centralised solution to our problem. This is used to compare PL-MEDICI against the theoretical optimum, given the same estimated input. Our results show that our offloading mechanism is close to the obtained optimal solution in terms of both response time and energy consumption.
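
    To give a concrete feel for this kind of multi-criteria decision, the sketch below compares assumed time and energy cost estimates for local, edge and cloud execution and selects the cheapest weighted option; the cost model and numbers are illustrative assumptions rather than MEDICI's actual models.

        # Hedged sketch of a multi-criteria offloading decision: estimate the expected
        # completion time and the IoT device's energy for each execution option (local,
        # edge, cloud) and pick the cheapest weighted combination. The cost model and
        # all parameter values below are illustrative assumptions, not MEDICI's equations.
        from dataclasses import dataclass

        @dataclass
        class Option:
            name: str
            uplink_mbps: float        # 0 for local execution (nothing to transmit)
            cycles_per_second: float  # processing speed of the executing device
            tx_power_w: float         # device radio power while transmitting
            active_power_w: float     # device power while computing locally
            idle_power_w: float       # device power while waiting for a remote result

        def expected_cost(opt: Option, task_bits: float, task_cycles: float,
                          w_time: float = 0.5, w_energy: float = 0.5) -> float:
            tx_time = task_bits / (opt.uplink_mbps * 1e6) if opt.uplink_mbps > 0 else 0.0
            compute_time = task_cycles / opt.cycles_per_second
            if opt.uplink_mbps > 0:   # offloaded: transmit, then idle while waiting
                energy = opt.tx_power_w * tx_time + opt.idle_power_w * compute_time
            else:                     # local: pay for the computation on the device
                energy = opt.active_power_w * compute_time
            return w_time * (tx_time + compute_time) + w_energy * energy

        if __name__ == "__main__":
            task_bits, task_cycles = 2e6, 5e8   # a 250 kB task needing 5e8 CPU cycles
            options = [Option("local", 0.0, 1e9, 0.0, 2.0, 0.0),
                       Option("edge", 50.0, 5e9, 1.0, 0.0, 0.1),
                       Option("cloud", 10.0, 2e10, 1.0, 0.0, 0.1)]
            best = min(options, key=lambda o: expected_cost(o, task_bits, task_cycles))
            print("Offload decision:", best.name)  # edge, under these assumed numbers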