
    Consensus Based Networking of Distributed Virtual Environments

    Distributed Virtual Environments (DVEs) are challenging to create, as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree-of-freedom at the same time. Force-reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continuous authority. Over time the exchanges average out local differences, performing a distributed average that converges on a consistent, shared state. CBN aims to build simulations that are highly responsive, yet consistent enough for use cases such as the piano-movers problem. CBN's support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is at an early stage; however, we demonstrate many successes, including L3 collaboration in room-scale VR, thousands of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.
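    As a rough illustration of the distributed-averaging idea described in this abstract, the sketch below blends a locally simulated object state with states received from peers using a continuous authority weight; the function name, the fixed weight, and the example values are illustrative assumptions rather than the paper's actual CBN algorithm.

```python
import numpy as np

def blend_states(local_state, remote_states, authority=0.6):
    """Blend a locally simulated object state with states reported by peers.

    local_state   : np.ndarray, e.g. an object's position after the local physics step
    remote_states : list of np.ndarray, the same object's state as seen by other nodes
    authority     : continuous weight kept by the local simulation (not all-or-nothing)
    """
    if not remote_states:
        return local_state
    remote_avg = np.mean(remote_states, axis=0)
    # Each exchange pulls the local copy toward the group average, so repeated
    # ticks perform a distributed average of the shared state.
    return authority * local_state + (1.0 - authority) * remote_avg

# Three peers disagree slightly about one object's position.
local = np.array([1.00, 0.00, 0.50])
remote = [np.array([1.02, 0.01, 0.48]), np.array([0.98, -0.01, 0.52])]
print(blend_states(local, remote))
```

    Repeating such exchanges every tick keeps each node responsive to its own inputs while the shared state drifts toward consensus.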

    Prediction techniques on FPGA for latency reduction on tactile internet

    Tactile Internet (TI) is a new internet paradigm that enables sending touch interaction information and other stimuli, which will lead to new human-to-machine applications. However, TI applications require very low latency between devices, and the system's latency can result from the communication channel, the processing power of local devices, and the complexity of the data processing techniques, among other factors. Therefore, this work proposes using dedicated hardware-based reconfigurable computing to reduce the latency of prediction techniques applied to TI. We demonstrate that prediction techniques implemented on a field-programmable gate array (FPGA) can minimize the impact caused by delays and loss of information. To validate our proposal, we present a comparison between software and hardware implementations and analyze synthesis results regarding hardware area occupation, throughput, and power consumption. Furthermore, comparisons with state-of-the-art works are presented, showing a significant reduction in power consumption of ≈1300× and reaching speedup rates of up to ≈52×.
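    To make the role of prediction concrete, the sketch below shows one simple predictor of the kind such systems can use: a linear extrapolation of recent haptic samples that stands in for a delayed or lost update. It is a software illustration only; the paper's FPGA-based predictors and their exact formulation are not reproduced here.

```python
def predict_next(samples, dt):
    """Predict the next haptic sample by linearly extrapolating the last two readings.

    samples : recent measurements (e.g. positions), oldest first
    dt      : sampling interval in seconds
    """
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    velocity = (samples[-1] - samples[-2]) / dt
    return samples[-1] + velocity * dt

# When a remote update is delayed or lost, the local side substitutes the prediction
# until the real sample arrives.
history = [0.10, 0.12, 0.15]
print(predict_next(history, dt=0.001))
```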

    A two-stage approach for robust HEVC coding and streaming

    The increased compression ratios achieved by the High Efficiency Video Coding (HEVC) standard lead to reduced robustness of coded streams, with increased susceptibility to network errors and consequent video quality degradation. This paper proposes a two-stage approach to improve the error robustness of HEVC streaming by reducing temporal error propagation in case of frame loss. The prediction mismatch that occurs at the decoder after frame loss is reduced through the following two stages: (i) at the encoding stage, the reference pictures are dynamically selected based on constraining conditions and Lagrangian optimisation, which distributes the use of reference pictures by reducing the number of prediction units (PUs) that depend on a single reference; (ii) at the streaming stage, a motion vector (MV) prioritisation algorithm, based on spatial dependencies, selects an optimal sub-set of MVs to be transmitted redundantly as side information, to reduce mismatched MV predictions at the decoder. The simulation results show that the proposed method significantly reduces the effect of temporal error propagation. Compared to the reference HEVC, the proposed reference picture selection method improves the video quality at low packet loss rates (e.g., 1%) using the same bitrate, achieving quality gains of up to 2.3 dB at a 10% packet loss ratio. It is also shown that the redundant MVs are able to boost the performance, achieving quality gains of 3 dB when compared to the reference HEVC, at the cost of a 4% increase in total bitrate.
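    The encoding-stage selection can be pictured as a constrained Lagrangian choice, roughly as in the sketch below: each candidate reference picture has a rate-distortion cost J = D + λR, and references that already serve too large a share of prediction units are skipped so that predictions are spread out. The cap, field names, and example numbers are assumptions for illustration, not the paper's exact optimisation.

```python
def select_reference(candidates, lmbda, usage_share, max_share):
    """Pick a reference picture by Lagrangian cost J = D + lambda * R, skipping
    references whose share of predictions already exceeds a cap.

    candidates  : list of dicts {'ref': id, 'distortion': D, 'rate': R}
    usage_share : dict mapping ref id -> fraction of PUs already predicted from it
    max_share   : constraining condition that spreads predictions across references
    """
    best_ref, best_cost = None, float('inf')
    for c in candidates:
        if usage_share.get(c['ref'], 0.0) >= max_share:
            continue  # this reference is already carrying too many predictions
        cost = c['distortion'] + lmbda * c['rate']
        if cost < best_cost:
            best_ref, best_cost = c['ref'], cost
    return best_ref

cands = [{'ref': 0, 'distortion': 12.0, 'rate': 40},
         {'ref': 1, 'distortion': 13.5, 'rate': 35}]
print(select_reference(cands, lmbda=0.1, usage_share={0: 0.65}, max_share=0.6))
```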

    Cybersecurity and the Digital Health: An Investigation on the State of the Art and the Position of the Actors

    Cybercrime is increasingly exposing the health domain to growing risk. The push towards a strong connection of citizens to health services through digitalization has undisputed advantages. Digital health allows remote care, the use of medical devices with a high mechatronic and IT content and strong automation, and a large interconnection of hospital networks with an increasingly effective exchange of data. However, all this requires a great cybersecurity commitment, one that must start with scholars in research and then reach the stakeholders. New devices and technological solutions are increasingly breaking into healthcare and are able to change the processes of interaction in the health domain. This requires cybersecurity to become a vital part of patient safety through changes in human behaviour, technology, and processes, as part of a complete solution. All professionals involved in cybersecurity in the health domain were invited to contribute with their experiences. This book contains contributions from various experts and different fields. Aspects of cybersecurity in healthcare relating to technological advances and emerging risks are addressed. The new boundaries of this field and the impact of COVID-19 on some sectors, such as mHealth, are also addressed. We dedicate the book to all those, in their different roles, involved in cybersecurity in the health domain.

    A Prospective Look: Key Enabling Technologies, Applications and Open Research Topics in 6G Networks

    The fifth generation (5G) mobile networks are envisaged to enable a plethora of breakthrough advancements in wireless technologies, providing support for a diverse set of services over a single platform. While the deployment of 5G systems is scaling up globally, it is time to look ahead to beyond-5G systems. This is driven by emerging societal trends, calling for fully automated systems and intelligent services supported by extended reality and haptic communications. To accommodate the stringent requirements of their prospective applications, which are data-driven and defined by extremely low-latency, ultra-reliable, fast and seamless wireless connectivity, research initiatives are currently focusing on a progressive roadmap towards the sixth generation (6G) networks. In this article, we shed light on some of the major enabling technologies for 6G, which are expected to revolutionize the fundamental architectures of cellular networks and provide multiple homogeneous artificial intelligence-empowered services, including distributed communications, control, computing, sensing, and energy, from its core to its end nodes. In particular, this paper aims to answer several 6G framework-related questions: What are the driving forces for the development of 6G? How will the enabling technologies of 6G differ from those in 5G? What kind of applications and interactions will they support which would not be supported by 5G? We address these questions by presenting a profound study of the 6G vision and outlining five of its disruptive technologies, i.e., terahertz communications, programmable metasurfaces, drone-based communications, backscatter communications and tactile internet, as well as their potential applications. Then, by leveraging the state-of-the-art literature surveyed for each technology, we discuss their requirements, key challenges, and open research problems.

    Internet of Things 2.0: Concepts, Applications, and Future Directions

    Applications and technologies of the Internet of Things are in high demand with the increase in networked devices. With the development of technologies such as 5G, machine learning, edge computing, and Industry 4.0, the Internet of Things has evolved. This survey article discusses the evolution of the Internet of Things and presents the vision for Internet of Things 2.0. The development of Internet of Things 2.0 is discussed across seven major fields: machine learning intelligence, mission-critical communication, scalability, energy harvesting-based energy sustainability, interoperability, user-friendly IoT, and security. Beyond these major fields, the architectural development of the Internet of Things and the major types of applications are also reviewed. Finally, this article ends with the vision and current limitations of the Internet of Things in future network environments.

    Addressing training data sparsity and interpretability challenges in AI based cellular networks

    To meet the diverse and stringent communication requirements of emerging network use cases, zero-touch artificial intelligence (AI) based deep automation in cellular networks is envisioned. However, the full potential of AI in cellular networks remains hindered by two key challenges: (i) training data is not as freely available in cellular networks as in other fields where AI has made a profound impact, and (ii) current AI models tend to behave as black boxes, making operators reluctant to entrust the operation of multi-billion-dollar, mission-critical networks to an AI engine that offers little insight into the relationships between the configuration and optimization parameters and key performance indicators. This dissertation systematically addresses these two problems and proposes solutions to them.

    A framework for addressing the training data sparsity challenge in cellular networks is developed that can assist network operators and researchers in choosing the optimal data enrichment technique for different network scenarios, based on the available information. The framework encompasses classical interpolation techniques, such as inverse distance weighting and kriging; more advanced ML-based methods, such as transfer learning and generative adversarial networks; several newer techniques, such as matrix completion theory and leveraging different types of network geometries; and simulators and testbeds, among others. The proposed framework will lead to more accurate ML models that rely on a sufficient amount of representative training data. Moreover, solutions are proposed to address the data sparsity challenge specifically in Minimization of Drive Tests (MDT) based automation approaches. MDT allows coverage to be estimated at the base station by exploiting measurement reports gathered by user equipment, without the need for drive tests. Thus, MDT is a key enabling feature for data- and artificial intelligence-driven autonomous operation and optimization in current and emerging cellular networks. However, to date, the utility of the MDT feature remains thwarted by issues such as sparsity of user reports and user positioning inaccuracy. For the first time, this dissertation reveals the existence of an optimal bin width for coverage estimation in the presence of inaccurate user positioning, scarcity of user reports, and quantization error. The presented framework enables network operators to configure, for a given positioning accuracy and user density, the bin size that results in the most accurate MDT-based coverage estimation.

    The lack of interpretability in AI-enabled networks is addressed by proposing a novel, first-of-its-kind neural network architecture that leverages analytical modeling, domain knowledge, big data and machine learning to turn black-box machine learning models into more interpretable models. The proposed approach combines analytical modeling and domain knowledge to custom-design machine learning models, aiming at interpretable models that not only require less training time but can also deal with issues such as sparsity of training data and determination of model hyperparameters. The approach is tested using both simulated and real data, and the results show that it outperforms existing mathematical models while remaining interpretable compared with black-box ML models. Thus, the proposed approach can be used to derive better mathematical models of complex systems. The findings of this dissertation can help solve the challenges faced by emerging AI-based cellular networks and thus aid in their design, operation and optimization.
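    As an example of the classical interpolation techniques named in the data-enrichment framework, the sketch below fills an empty coverage bin by inverse distance weighting (IDW) of nearby MDT reports; the coordinates and RSRP values are made-up illustrations, and the more advanced options the dissertation covers (kriging, transfer learning, GANs, matrix completion) are not shown.

```python
import numpy as np

def idw_estimate(report_points, report_rsrp, query_point, power=2.0):
    """Inverse distance weighted estimate of coverage at an unmeasured location.

    report_points : (N, 2) array of locations where user reports were collected
    report_rsrp   : (N,) array of measured signal levels (e.g. RSRP in dBm)
    query_point   : (2,) location of an empty bin to be filled
    """
    d = np.linalg.norm(report_points - query_point, axis=1)
    if np.any(d == 0):
        return float(report_rsrp[np.argmin(d)])   # an exact report exists at this spot
    w = 1.0 / d**power                            # closer reports get larger weight
    return float(np.sum(w * report_rsrp) / np.sum(w))

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
rsrp = np.array([-80.0, -95.0, -90.0])
print(idw_estimate(pts, rsrp, np.array([50.0, 50.0])))
```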

    A prospective look: key enabling technologies, applications and open research topics in 6G networks

    The fifth generation (5G) mobile networks are envisaged to enable a plethora of breakthrough advancements in wireless technologies, providing support for a diverse set of services over a single platform. While the deployment of 5G systems is scaling up globally, it is time to look ahead to beyond-5G systems. This is mainly driven by emerging societal trends, calling for fully automated systems and intelligent services supported by extended reality and haptic communications. To accommodate the stringent requirements of their prospective applications, which are data-driven and defined by extremely low-latency, ultra-reliable, fast and seamless wireless connectivity, research initiatives are currently focusing on a progressive roadmap towards the sixth generation (6G) networks, which are expected to bring transformative changes to this premise. In this article, we shed light on some of the major enabling technologies for 6G, which are expected to revolutionize the fundamental architectures of cellular networks and provide multiple homogeneous artificial intelligence-empowered services, including distributed communications, control, computing, sensing, and energy, from its core to its end nodes. In particular, the present paper aims to answer several 6G framework-related questions: What are the driving forces for the development of 6G? How will the enabling technologies of 6G differ from those in 5G? What kind of applications and interactions will they support which would not be supported by 5G? We address these questions by presenting a comprehensive study of the 6G vision and outlining seven of its disruptive technologies, i.e., mmWave communications, terahertz communications, optical wireless communications, programmable metasurfaces, drone-based communications, backscatter communications and tactile internet, as well as their potential applications. Then, by leveraging the state-of-the-art literature surveyed for each technology, we discuss the associated requirements, key challenges, and open research problems. These discussions are thereafter used to open up the horizon for future research directions.

    Error resilience and concealment techniques for high-efficiency video coding

    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness revealed that HEVC offers weak protection against network losses, with a significant impact on video quality degradation. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, improving the decoded video quality without compromising the compression efficiency. The second contribution of this thesis is a two-stage approach for reducing the mismatch of temporal predictions in case of video streams received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the streaming stage, a prioritization algorithm, based on spatial dependencies, selects a reduced set of motion vectors to be transmitted, as side information, to reduce mismatched motion predictions at the decoder. The problem of error concealment-aware video coding is also investigated to enhance the overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, where the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder to be used in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains in comparison with other existing methods as well as those implemented in the HEVC reference software. Furthermore, the trade-off between coding efficiency and error robustness is also better in the proposed methods.
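    The saliency-weighted selection of concealment modes can be sketched as follows: at encoding time the loss of a block is simulated, each candidate concealment mode produces a reconstruction, and the mode with the lowest saliency-weighted distortion is chosen and signalled to the decoder. The mode names, the use of a simple sum-of-squared-errors distortion, and the example values are illustrative assumptions, not the thesis's exact procedure.

```python
def select_concealment_mode(original, candidates, saliency):
    """Pick the concealment mode whose reconstruction minimises saliency-weighted SSE.

    original   : pixel values of the block whose loss is being simulated
    candidates : dict mapping mode name -> pixels reconstructed by that mode
                 (e.g. temporal copy vs. spatial interpolation)
    saliency   : visual importance weight of the block
    """
    best_mode, best_cost = None, float('inf')
    for name, recon in candidates.items():
        sse = sum((a - b) ** 2 for a, b in zip(original, recon))
        cost = saliency * sse
        if cost < best_cost:
            best_mode, best_cost = name, cost
    return best_mode

block = [100, 102, 98, 101]
recons = {'temporal_copy': [100, 100, 100, 100], 'spatial_interp': [99, 103, 97, 102]}
print(select_concealment_mode(block, recons, saliency=0.8))
```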