
    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio, and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill-set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality, and industrial process automation. Several papers already cover many Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms, and none provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience, or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary. This book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions within, the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area and also approach the challenges of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Task-oriented joint design of communication and computing for Internet of Skills

    The Internet is now taking a revolutionary step forward, known as the Internet of Skills. The Internet of Skills is a concept that refers to a network of sensors, actuators, and machines that enables the delivery of knowledge, skills, and expertise between people and machines, regardless of their geographical locations. This concept allows immersive remote operation and access to expertise through virtual and augmented reality, haptic communications, robotics, and other cutting-edge technologies, with various applications including remote surgery and diagnosis in healthcare, remote laboratories and training in education, remote driving in transportation, and advanced manufacturing in Industry 4.0. In this thesis, we investigate three fundamental communication requirements of Internet of Skills applications: ultra-low latency, ultra-high reliability, and efficient wireless resource utilization. Although 5G communications provide cutting-edge solutions for achieving ultra-low latency and ultra-high reliability with good resource utilization efficiency, meeting these requirements is difficult, particularly in long-distance communications where the source and destination are more than 300 km apart, given delays and reliability issues in networking components as well as the physical limit of the speed of light. Furthermore, resource utilization efficiency must be improved further to accommodate the rapidly increasing number of mobile devices. Therefore, new design techniques that consider both communication and computing systems with a task-oriented approach are urgently needed to satisfy conflicting latency and reliability requirements while improving resource utilization efficiency. First, we design and implement a 5G-based teleoperation prototype for Internet of Skills applications and present two emerging use cases in healthcare and education. We conducted extensive experiments evaluating local and long-distance communication latency and reliability to gain insights into current capabilities and limitations. In our local experiments, in a laboratory environment with the operator and robot in the same room, we observed a communication latency of around 15 ms with a 99.9% packet reception rate (communication reliability). However, communication latency increases up to 2 seconds in long-distance scenarios (between the UK and China), while it is around 50-300 ms within the UK. In addition, our observations revealed that communication reliability and overall system performance do not exhibit a direct correlation; instead, the number of consecutive packet drops emerged as the decisive factor influencing overall system performance and user quality of experience. In light of these findings, we proposed a two-way timeout approach that discards stale packets to mitigate waiting times and, in turn, reduce latency. Nevertheless, the proposed approach reduces latency at the expense of reliability, verifying the challenge of the conflicting latency and reliability requirements. Next, we propose a task-oriented prediction and communication co-design framework to meet the conflicting latency and reliability requirements. The framework demonstrates the task-oriented joint design of communication and computing systems, considering packet losses in communications and prediction errors in prediction algorithms to derive an upper bound on overall system reliability. We reveal the tradeoff between overall system reliability and resource utilization efficiency, taking 5G NR as an example communication system. The framework is evaluated with both real data samples and generated synthetic data samples. The results show that the proposed framework achieves a better latency-reliability tradeoff with a 77.80% resource utilization efficiency improvement compared to a task-agnostic benchmark. In addition, we demonstrate that deploying a predictor at the receiver side achieves better overall reliability than a system with the predictor at the transmitter. Finally, we propose an intelligent mode-switching framework to address the resource utilization challenge. We jointly design the communication, user intention recognition, and mode-switching systems to reduce communication load subject to a joint task completion probability. We reveal the tradeoff between task prediction accuracy and task observation length, showing that higher prediction accuracy can be achieved as the observation length increases. The proposed framework achieves more than 90% task prediction accuracy with a 60% observation length. We train a DRL agent with real-world data from our teleoperation prototype for mode-switching between teleoperation and autonomous modes. Our results show that the proposed framework achieves up to 50% communication load reduction with a similar task completion probability compared to conventional teleoperation.
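The two-way timeout idea above lends itself to a short sketch: packets older than a latency budget are discarded rather than queued, which lowers waiting time at the cost of some reliability. The function name, packet fields, and the 50 ms budget below are illustrative assumptions, not the thesis's actual implementation.

```python
def drop_stale(packets, now, budget_s=0.05):
    """Return only packets whose age is within the latency budget.

    `packets` is an iterable of dicts carrying a `t_sent` timestamp in
    seconds. Stale packets are dropped instead of processed, trading
    reliability for lower end-to-end latency. Applying the same filter on
    both the operator->robot and robot->operator paths is what makes the
    approach "two-way".
    """
    return [p for p in packets if now - p["t_sent"] <= budget_s]
```

A receiver would call this on each read of its input queue, acting only on the packets that survive the filter.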

    Enhancing Security of Automated Teller Machines Using Biometric Authentication: A Case of a Sub-Saharan University

    A wide variety of systems need reliable personal recognition schemes to either authorize or determine the identity of an individual demanding their services. The goal of such systems is to warrant that the rendered services are accessed only by a genuine user and no one else. In the absence of robust personal recognition schemes, these systems are vulnerable to the deceits of an impostor. The ATM has suffered for years from PIN theft and other associated ATM frauds. This research proposes a fingerprint- and PIN-based authentication arrangement to enhance the security and safety of the ATM and its users. The proposed system follows a three-tier design. The first tier is the verification module, which covers the enrollment, enhancement, feature extraction, and matching phases for fingerprints. The second tier is the database end, which acts as a storehouse for the fingerprints of all ATM users, preregistered as templates. The last tier provides a system platform for banking transactions such as balance enquiries, mini statements, and withdrawals. The system is developed to run on Microsoft Windows XP or higher and on all systems with the .NET Framework, employing the C# programming language, Microsoft Visual Studio 2010, and SQL Server 2008. The simulated results showed 96% accuracy; the simulation overlooked the absence of a cash tray. The findings of this research will be meaningful to banks and other financial institutions. Keywords: SQL Server, ATM, Fraud, .NET framework, financial institutions. DOI: 10.7176/IKM/9-7-02. Publication date: August 31st 201
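The fingerprint-and-PIN arrangement amounts to a two-factor check: a salted hash comparison for the PIN plus a threshold on the fingerprint matcher's similarity score against the enrolled template. The sketch below is a minimal Python illustration, not the paper's C#/.NET implementation; all names and the 0.8 threshold are assumptions, and the fingerprint matcher itself is out of scope.

```python
import hashlib


def verify_pin(entered_pin, stored_hash, salt):
    """First factor: salted SHA-256 hash of the entered PIN must match
    the hash stored in the database tier."""
    digest = hashlib.sha256((salt + entered_pin).encode()).hexdigest()
    return digest == stored_hash


def verify_fingerprint(match_score, threshold=0.8):
    """Second factor: the matcher's similarity score against the enrolled
    template must reach a threshold (illustrative value)."""
    return match_score >= threshold


def authenticate(entered_pin, stored_hash, salt, match_score):
    """Grant access to the transaction tier only when both factors pass."""
    return verify_pin(entered_pin, stored_hash, salt) and verify_fingerprint(match_score)
```

Only when both checks succeed would the third tier (balance enquiry, mini statement, withdrawal) be reachable.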

    An Efficient CNN-Based Deep Learning Model to Detect Malware Attacks (CNN-DMA) in 5G-IoT Healthcare Applications

    The role of 5G-IoT has become indispensable in smart applications, and it plays a crucial part in e-health applications. E-health applications require intelligent schemes and architectures to overcome the security threats against the sensitive data of patients. The information in e-healthcare applications is stored in the cloud, which is vulnerable to security attacks; with deep learning techniques, however, these attacks can be detected, which calls for hybrid models. In this article, a new deep learning model (CNN-DMA) is proposed to detect malware attacks using a Convolutional Neural Network (CNN) classifier. The model uses three layers, i.e., Dense, Dropout, and Flatten. A batch size of 64, 20 epochs, and 25 classes are used to train the network. An input image of 32 × 32 × 1 is used for the initial convolutional layer. Results are reported on the Malimg dataset, where 25 families of malware are fed as input; for example, our model detected the Alueron.gen!J malware. The proposed CNN-DMA model is 99% accurate and is validated against state-of-the-art techniques.
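The Malimg dataset represents each malware binary as a grayscale image, which is how a fixed 32 × 32 × 1 input for the CNN's first layer can be produced. A minimal sketch of such a byte-to-image conversion follows; the function name and the zero-padding choice are illustrative assumptions, not the paper's exact preprocessing.

```python
def bytes_to_image(data, side=32):
    """Map a binary's raw bytes to a side x side grayscale image.

    Each byte value (0-255) becomes one pixel intensity; the byte stream
    is truncated or zero-padded so every sample has the same fixed shape
    expected by the network's input layer.
    """
    pixels = list(data[: side * side])
    pixels += [0] * (side * side - len(pixels))          # pad short samples
    return [pixels[r * side:(r + 1) * side] for r in range(side)]
```

The resulting 2-D list (with a trailing channel dimension added) is what a framework such as Keras would consume as the 32 × 32 × 1 input.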

    Human-Robot Collaborations in Industrial Automation

    Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.

    Selected Papers from the 5th International Electronic Conference on Sensors and Applications

    This Special Issue comprises selected papers from the proceedings of the 5th International Electronic Conference on Sensors and Applications, held on 15–30 November 2018 on sciforum.net, an online platform for hosting scholarly e-conferences and discussion groups. In this 5th edition of the electronic conference, contributors were invited to provide papers and presentations from the field of sensors and applications at large, resulting in a wide variety of excellent submissions and topic areas. Papers that attracted the most interest on the web or that provided a particularly innovative contribution were selected for publication in this collection. These peer-reviewed papers are published with the aim of rapid and wide dissemination of research results, developments, and applications. We hope this conference series will grow rapidly in the future and become recognized as a new way and venue by which to (electronically) present new developments related to the field of sensors and their applications.

    one6G white paper, 6G technology overview: Second Edition, November 2022

    6G is expected to address the demands for consumption of mobile networking services in 2030 and beyond. These are characterized by a variety of diverse, often conflicting requirements, from technical ones such as extremely high data rates, an unprecedented scale of communicating devices, high coverage, low communication latency, and flexibility of extension, to non-technical ones such as enabling sustainable growth of society as a whole, e.g., through the energy efficiency of deployed networks. On the one hand, 6G is expected to fulfil all these individual requirements, thus extending the limits set by the previous generations of mobile networks (e.g., ten times lower latencies, or a hundred times higher data rates than in 5G). On the other hand, 6G should also enable use cases characterized by combinations of these requirements never seen before (e.g., both extremely high data rates and extremely low communication latency). In this white paper, we give an overview of the key enabling technologies that constitute the pillars for the evolution towards 6G. They include: terahertz frequencies (Section 1), 6G radio access (Section 2), next-generation MIMO (Section 3), integrated sensing and communication (Section 4), distributed and federated artificial intelligence (Section 5), the intelligent user plane (Section 6), and flexible programmable infrastructures (Section 7). For each enabling technology, we first give the background on how and why the technology is relevant to 6G, backed up by a number of relevant use cases. After that, we describe the technology in detail, outline the key problems and difficulties, and give a comprehensive overview of the state of the art in that technology. 6G is, however, not limited to these seven technologies; they merely represent our current understanding of the technological environment in which 6G is being born. Future versions of this white paper may include other relevant technologies, as well as discuss how these technologies can be glued together in a coherent system.

    Error resilience and concealment techniques for high-efficiency video coding

    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness revealed that HEVC has weak protection against network losses, with a significant impact on video quality degradation. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, improving the decoded video quality without compromising the compression efficiency. The second contribution is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the streaming stage, a prioritization algorithm based on spatial dependencies selects a reduced set of motion vectors to be transmitted, as side information, to reduce mismatched motion predictions at the decoder. The problem of error-concealment-aware video coding is also investigated to enhance the overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, where the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder to be used in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream, and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains in comparison with other existing methods, including those implemented in the HEVC reference software. Furthermore, the trade-off between coding efficiency and error robustness is also better in the proposed methods.
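The Lagrangian rate-distortion optimization mentioned above reduces, per coding decision, to choosing the candidate that minimizes the cost J = D + λ·R, where D is distortion, R is rate in bits, and λ trades one against the other. A minimal sketch under that standard formulation (the dictionary layout is an illustrative assumption):

```python
def select_mode(candidates, lam):
    """Pick the candidate minimizing the Lagrangian cost J = D + lam * R.

    `candidates` is a list of dicts with distortion "D" and rate "R";
    a small lam favors low distortion, a large lam favors low rate.
    """
    return min(candidates, key=lambda m: m["D"] + lam * m["R"])
```

With a small λ the optimizer spends bits to cut distortion; as λ grows, cheaper (lower-rate) modes win even at higher distortion.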

    Towards Autonomous Computer Networks in Support of Critical Systems

    The abstract is in the attachment.