
    Exploring the use of data compression for accelerating machine learning in the edge with remote virtual graphics processing units

    Internet of Things (IoT) devices are usually low-performance nodes connected by low-bandwidth networks. To improve performance in such scenarios, some computations could be done at the edge of the network. However, edge devices may not have enough computing power to accelerate applications such as the popular machine learning ones. Using remote virtual graphics processing units (GPUs) can address this concern by accelerating applications leveraging a GPU installed in a remote device. However, this requires exchanging data with the remote GPU across the slow network. To address the problem with the slow network, the data to be exchanged with the remote GPU could be compressed. In this article, we explore the suitability of using data compression in the context of remote GPU virtualization frameworks in edge scenarios executing machine learning applications. We use popular machine learning applications to carry out such exploration. After characterizing the GPU data transfers of these applications, we analyze the usage of existing compression libraries for compressing those data transfers to/from the remote GPU. Our exploration shows that transferring compressed data becomes more beneficial as networks get slower, reducing transfer time by up to 10 times. Our analysis also reveals that efficient integration of compression into remote GPU virtualization frameworks is strongly required.
    Funding: European Union's Horizon 2020 Research and Innovation Programme, Grant/Award Numbers: 101016577, 101017861.
    Peñaranda-Cebrián, C.; Reaño, C.; Silla, F. (2022). Exploring the use of data compression for accelerating machine learning in the edge with remote virtual graphics processing units. Concurrency and Computation: Practice and Experience. 35(20):1-19. https://doi.org/10.1002/cpe.7328
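    The core trade-off described in this abstract is that, on a slow edge link, spending CPU time compressing a buffer before shipping it to the remote GPU can pay for itself. Below is a minimal, self-contained sketch of that trade-off; the link speed, payload, and the use of zlib are illustrative assumptions, not the specific compression libraries or the remote GPU virtualization framework evaluated in the article.

```python
# Sketch: is it worth compressing a host-to-remote-GPU buffer before it
# crosses a slow edge network? All numbers below are illustrative assumptions.
import time
import zlib
import numpy as np

LINK_MBPS = 100  # assumed slow edge link, in megabits per second

def transfer_seconds(num_bytes: int, mbps: float = LINK_MBPS) -> float:
    """Idealized time to push num_bytes over an mbps link (no protocol overhead)."""
    return num_bytes * 8 / (mbps * 1e6)

# Example payload: a highly compressible tensor standing in for a GPU transfer.
payload = np.zeros((1024, 1024), dtype=np.float32)
raw = payload.tobytes()

t0 = time.perf_counter()
compressed = zlib.compress(raw, 1)  # fast compression level to limit CPU cost at the edge
compress_time = time.perf_counter() - t0

print(f"raw: {len(raw)} B, compressed: {len(compressed)} B")
print(f"send raw:        {transfer_seconds(len(raw)):.3f} s")
print(f"compress + send: {compress_time + transfer_seconds(len(compressed)):.3f} s")
```

    A real framework would additionally have to overlap compression with the transfers themselves, which is the kind of integration effort the abstract points to.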

    On Studying Distributed Machine Learning

    The Internet of Things (IoT) is utilizing Deep Learning (DL) for applications such as voice or image recognition. Processing data for DL directly on IoT edge devices reduces latency and increases privacy. To overcome the resource constraints of IoT edge devices, the computation for DL inference is distributed between a cluster of several devices. This paper explores DL, IoT networks, and a novel framework for distributed processing of DL in IoT clusters. The aim is to facilitate and simplify deployment, testing, and study of a distributed DL system, even without physical devices. The contributions of this paper are a deployment of the framework to an Ubuntu virtual machine testbed and a repackaging of the framework as a Docker image for portability and fast future deployment.
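    As a rough illustration of what distributing DL inference across a cluster of devices means at the tensor level, here is a hedged sketch that splits one forward pass of a toy network across two simulated nodes. The layer split, shapes, and in-process "nodes" are assumptions made for illustration; the paper's actual framework, its Ubuntu virtual machine testbed, and its Docker packaging are not reproduced here.

```python
# Sketch: partitioning one DL inference pass across two simulated edge nodes.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A toy two-layer network whose layers live on different devices.
W1 = rng.standard_normal((784, 128)).astype(np.float32)
W2 = rng.standard_normal((128, 10)).astype(np.float32)

def node_a(x):
    """First cluster node: runs the early layer and ships the activation onward."""
    return relu(x @ W1)

def node_b(h):
    """Second cluster node: finishes the forward pass on the received activation."""
    return h @ W2

x = rng.standard_normal((1, 784)).astype(np.float32)  # e.g. a flattened 28x28 image
activation = node_a(x)   # in a real cluster this would traverse the IoT network
logits = node_b(activation)
print("predicted class:", int(logits.argmax()))
```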

    Metaverse: A Vision, Architectural Elements, and Future Directions for Scalable and Realtime Virtual Worlds

    With the emergence of Cloud computing, Internet of Things-enabled Human-Computer Interfaces, Generative Artificial Intelligence, and highly accurate Machine and Deep Learning recognition and predictive models, along with the post-COVID-19 proliferation of social networking and remote communications, the Metaverse has gained a lot of popularity. The Metaverse has the potential to extend the physical world using virtual and augmented reality so that users can interact seamlessly with the real and virtual worlds using avatars and holograms. It can change the way people interact on social media, collaborate in their work, perform marketing and business, teach, learn, and even access personalized healthcare. Several works in the literature examine the Metaverse in terms of wearable hardware devices and virtual reality gaming applications. However, the requirements for realizing the Metaverse in real time and at large scale have yet to be examined for the technology to become usable. To address this limitation, this paper presents the temporal evolution of Metaverse definitions and captures its evolving requirements, from which we provide insights into Metaverse requirements. In addition to enabling technologies, we lay out architectural elements for scalable, reliable, and efficient Metaverse systems and a classification of existing Metaverse applications, and we propose required future research directions.

    Compact optimized deep learning model for edge: a review

    Most real-time computer vision applications, such as pedestrian detection, augmented reality, and virtual reality, rely heavily on convolutional neural networks (CNNs) for real-time decision support. In addition, edge intelligence is becoming necessary for low-latency real-time applications so that data can be processed at the source device. Processing massive amounts of data therefore impacts memory footprint, prediction time, and energy consumption, which are essential performance metrics in machine learning-based Internet of Things (IoT) edge clusters. However, deploying deeper, dense, and heavyweight CNN models on resource-constrained embedded systems with limited edge computing resources, such as memory and battery, poses significant challenges for developing compact optimized models. Reducing energy consumption in edge IoT networks is possible by reducing the computation and data transmission between IoT devices and gateway devices. Hence there is high demand for energy-efficient deep learning models that can be deployed on edge devices. Furthermore, recent studies show that smaller compressed models achieve competitive performance compared to larger deep learning models. This review article focuses on state-of-the-art techniques for edge intelligence, and we propose a new research framework for designing a compact optimized deep learning (DL) model for deployment on edge devices.
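    One concrete instance of the "smaller compressed models" this review surveys is post-training quantization. The sketch below applies symmetric int8 quantization to a single weight matrix to show where the roughly 4x memory saving comes from; the tensor shape, per-tensor scaling, and error metric are illustrative assumptions rather than any specific technique from the review.

```python
# Sketch: symmetric per-tensor int8 quantization of a weight matrix,
# a common way to shrink a DL model's memory footprint for edge deployment.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal((256, 256)).astype(np.float32)

# Map the range [-max|w|, +max|w|] onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to estimate the accuracy cost of the ~4x size reduction.
dequant = q_weights.astype(np.float32) * scale
print(f"size: {weights.nbytes} B -> {q_weights.nbytes} B")
print(f"mean abs error: {np.abs(weights - dequant).mean():.5f}")
```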

    Internet of robotic things: converging sensing/actuating, hypoconnectivity, artificial intelligence and IoT Platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C), and the Internet of Robotic Things (IoRT), all of which are progressing/advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing, naming in IoT), dynamic things discoverability, and many others. The IoRT brings new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, along with their coordination, configuration, exchange of information, security, safety, and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent "devices", collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration, and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter provides an overview of the IoRT concept, technologies, architectures, and applications, and offers comprehensive coverage of future challenges, developments, and applications.

    ChatGPT: Vision and Challenges

    Artificial intelligence (AI) and machine learning have changed the nature of scientific inquiry in recent years. Among these advances, the development of virtual assistants has accelerated greatly in the past few years, with ChatGPT becoming a prominent AI language model. In this study, we examine the foundations, vision, and research challenges of ChatGPT. This article investigates the background and development of the technology behind it, as well as its popular applications. Moreover, we discuss the advantages of bringing everything together through ChatGPT and the Internet of Things (IoT). Further, we speculate on the future of ChatGPT by considering various possibilities for study and development, such as energy efficiency, cybersecurity, enhancing its applicability to additional technologies (robotics and computer vision), strengthening human-AI communication, and bridging the technological gap. Finally, we discuss the important ethical considerations and current trends surrounding ChatGPT.