
    Intelligence Beyond the Edge: Inference on Intermittent Embedded Systems

    Energy-harvesting technology provides a promising platform for future IoT applications. However, since communication is very expensive in these devices, applications will require inference "beyond the edge" to avoid wasting precious energy on pointless communication. We show that application performance is highly sensitive to inference accuracy. Unfortunately, accurate inference requires large amounts of computation and memory, and energy-harvesting systems are severely resource-constrained. Moreover, energy-harvesting systems operate intermittently, suffering frequent power failures that corrupt results and impede forward progress. This paper overcomes these challenges to present the first full-scale demonstration of DNN inference on an energy-harvesting system. We design and implement SONIC, an intermittence-aware software system with specialized support for DNN inference. SONIC introduces loop continuation, a new technique that dramatically reduces the cost of guaranteeing correct intermittent execution for loop-heavy code like DNN inference. To build a complete system, we further present GENESIS, a tool that automatically compresses networks to optimally balance inference accuracy and energy, and TAILS, which exploits SIMD hardware available in some microcontrollers to improve energy efficiency. Both SONIC & TAILS guarantee correct intermittent execution without any hand-tuning or performance loss across different power systems. Across three neural networks on a commercially available microcontroller, SONIC & TAILS reduce inference energy by 6.9x and 12.2x, respectively, over the state-of-the-art.
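
    A minimal sketch of the loop-continuation idea may help. SONIC targets severely resource-constrained microcontrollers with non-volatile memory; the Python below only mimics the concept in simulation, and every name in it (nonvolatile, PowerFailure, dot_product_intermittent) is illustrative rather than taken from the paper. The loop index and partial result live in non-volatile memory, so a power failure costs at most the interrupted iteration instead of the whole task.

        import random

        # Stand-in for FRAM: values written here survive a simulated power failure.
        nonvolatile = {"i": 0, "acc": 0.0}

        class PowerFailure(Exception):
            """Simulated loss of harvested energy mid-computation."""

        def dot_product_intermittent(w, x, failure_prob=0.3):
            """Resume the loop at the persisted index instead of restarting the task."""
            while nonvolatile["i"] < len(w):
                i = nonvolatile["i"]
                if random.random() < failure_prob:
                    raise PowerFailure  # volatile state is lost; nonvolatile state survives
                acc = nonvolatile["acc"] + w[i] * x[i]
                # Persist the partial result and the next index together; in SONIC this
                # commit must be made atomic with respect to power failures.
                nonvolatile["acc"], nonvolatile["i"] = acc, i + 1
            return nonvolatile["acc"]

        w, x = [0.5, -1.0, 2.0, 0.25], [1.0, 2.0, 3.0, 4.0]
        while True:
            try:
                print("dot product:", dot_product_intermittent(w, x))
                break
            except PowerFailure:
                pass  # "reboot" and re-enter the loop; earlier progress is preserved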

    Data Analysis with Bayesian Networks: A Bootstrap Approach

    In recent years there has been significant progress in algorithms and methods for inducing Bayesian networks from data. However, in complex data analysis problems, we need to go beyond being satisfied with inducing networks with high scores. We need to provide confidence measures on features of these networks: Is the existence of an edge between two nodes warranted? Is the Markov blanket of a given node robust? Can we say something about the ordering of the variables? We should be able to address these questions, even when the amount of data is not enough to induce a high scoring network. In this paper we propose Efron's Bootstrap as a computationally efficient approach for answering these questions. In addition, we propose to use these confidence measures to induce better structures from the data, and to detect the presence of latent variables. Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI 1999).
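
    As a rough illustration of the bootstrap approach described above (a sketch under assumed names, not the authors' code), one can resample the data with replacement, induce a network on each replicate, and count how often a feature of interest appears; the same counting applies to edges, Markov blankets, or variable orderings. Here learn_structure is a hypothetical stand-in for any structure-learning routine.

        from collections import Counter
        import random

        def bootstrap_edge_confidence(data, learn_structure, n_boot=200, seed=0):
            """data: list of records; learn_structure: records -> set of (parent, child) edges."""
            rng = random.Random(seed)
            counts = Counter()
            for _ in range(n_boot):
                # Bootstrap replicate: sample the records with replacement.
                replicate = [rng.choice(data) for _ in range(len(data))]
                for edge in learn_structure(replicate):
                    counts[edge] += 1
            # Confidence in a feature = fraction of bootstrap networks that contain it.
            return {edge: c / n_boot for edge, c in counts.items()}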

    Machine learning & artificial intelligence in the quantum domain

    Quantum information technologies and intelligent learning systems are both emergent technologies that will likely have a transformative impact on our society. The respective underlying fields of research -- quantum information (QI) versus machine learning (ML) and artificial intelligence (AI) -- have their own specific challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question of to what extent these fields can learn and benefit from each other. The emerging field of quantum machine learning (QML) explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently, we have witnessed breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups in ML, critical in our "big data" world. Conversely, ML already permeates cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been demonstrated for interactive learning, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement, researchers have also broached the fundamental issue of quantum generalizations of ML/AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is described by quantum mechanics. In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain. Comment: Review paper, 106 pages, 16 figures.

    Machine Intelligence Techniques for Next-Generation Context-Aware Wireless Networks

    The next-generation wireless networks (i.e. 5G and beyond), which would be extremely dynamic and complex due to the ultra-dense deployment of heterogeneous networks (HetNets), pose many critical challenges for network planning, operation, management and troubleshooting. At the same time, the generation and consumption of wireless data are becoming increasingly distributed, with an ongoing paradigm shift from people-centric to machine-oriented communications, making the operation of future wireless networks even more complex. In mitigating the complexity of future network operation, new approaches to intelligently utilizing distributed computational resources with improved context-awareness become extremely important. In this regard, the emerging fog (edge) computing architecture, which aims to distribute computing, storage, control, communication, and networking functions closer to end users, has great potential for enabling efficient operation of future wireless networks. These promising architectures make the adoption of artificial intelligence (AI) principles, which incorporate learning, reasoning and decision-making mechanisms, a natural choice for designing a tightly integrated network. Towards this end, this article provides a comprehensive survey on the utilization of AI, integrating machine learning, data analytics and natural language processing (NLP) techniques, for enhancing the efficiency of wireless network operation. In particular, we provide a comprehensive discussion of the utilization of these techniques for efficient data acquisition, knowledge discovery, network planning, operation and management of next-generation wireless networks. A brief case study utilizing the AI techniques for this network has also been provided. Comment: ITU Special Issue N.1, The impact of Artificial Intelligence (AI) on communication networks and services (To appear).

    Experimental Quantum-enhanced Cryptographic Remote Control

    The Internet of Things (IoT), as a cutting-edge integrated cross-technology, promises to informationize people's daily lives, while being threatened by continuous challenges of eavesdropping and tampering. The emerging field of quantum cryptography, harnessing the random nature of quantum mechanics, may also enable an unconditionally secure control network, beyond its applications in secure communications. Here, we present a quantum-enhanced cryptographic remote control scheme that combines quantum randomness and the one-time pad algorithm for delivering commands remotely. We experimentally demonstrate this on an unmanned aerial vehicle (UAV) control system. We precharge quantum random numbers (QRNs) into the controller and controlee before launching the UAV, instead of distributing QRNs during flight as in standard quantum communication. We statistically verify the randomness of both the quantum keys and the converted ciphertexts to check the security capability. All commands in the air are found to be completely chaotic after encryption, and only the matched keys on the UAV can decipher those commands precisely. In addition, the controlee does not respond to commands that are not or incorrectly encrypted, showing immunity against interference and decoys. Our work adds true randomness and quantum enhancement to the realm of secure control algorithms in a straightforward and practical fashion, providing an improved solution for the security of artificial intelligence and the IoT. Comment: 7 pages, 5 figures, 2 tables.
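
    The classical core of the scheme, as described in the abstract, is a one-time pad keyed by precharged quantum random numbers. The sketch below is illustrative only (the class and variable names are not the authors'); it shows both sides consuming the same preloaded key pool in lockstep and XORing each command with fresh key bytes.

        import os

        class OneTimePadLink:
            def __init__(self, precharged_qrn: bytes):
                self.key = bytearray(precharged_qrn)  # shared QRN pool loaded before flight
                self.pos = 0

            def _next_key(self, n: int) -> bytes:
                if self.pos + n > len(self.key):
                    raise RuntimeError("QRN pool exhausted; one-time pad keys must never be reused")
                chunk = bytes(self.key[self.pos:self.pos + n])
                self.pos += n
                return chunk

            def encrypt(self, command: bytes) -> bytes:
                return bytes(c ^ k for c, k in zip(command, self._next_key(len(command))))

            decrypt = encrypt  # XOR with the same key bytes inverts the operation

        pool = os.urandom(64)  # stand-in for true quantum random numbers
        controller, uav = OneTimePadLink(pool), OneTimePadLink(pool)
        cipher = controller.encrypt(b"ASCEND 10M")
        assert uav.decrypt(cipher) == b"ASCEND 10M"  # only the matched key pool deciphers it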

    6G: The Next Frontier

    The current development of 5G networks represents a breakthrough in the design of communication networks, for its ability to provide a single platform enabling a variety of different services, ranging from enhanced mobile broadband communications to automated driving and the Internet-of-Things with its huge number of connected devices. Nevertheless, looking at the current development of technologies and new services, it is already possible to envision the need to move beyond 5G with a new architecture incorporating new services and technologies. The goal of this paper is to motivate the need to move to a sixth generation (6G) of mobile communication networks, starting from a gap analysis of 5G, and predicting a new synthesis of near-future services, such as hologram interfaces and ambient sensing intelligence, a pervasive introduction of artificial intelligence, and the incorporation of new technologies, such as TeraHertz (THz) or Visible Light Communications (VLC) and 3-dimensional coverage. Comment: This paper was submitted to IEEE Vehicular Technologies Magazine on the 7th of January 201

    Mobile Edge Computing and Artificial Intelligence: A Mutually-Beneficial Relationship

    This article provides an overview of mobile edge computing (MEC) and artificial intelligence (AI) and discusses the mutually-beneficial relationship between them. AI provides revolutionary solutions in nearly every important aspect of the MEC offloading process, such as resource management and scheduling. On the other hand, MEC servers are utilized to enable a distributed and parallelized learning framework, namely mobile edge learning. Comment: 6 pages, 2 figures, IEEE ComSoc Technical Committees Newsletter.

    A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems

    The ongoing deployment of 5G cellular systems is continuously exposing the inherent limitations of this system, compared to its original premise as an enabler for Internet of Everything applications. These 5G drawbacks are currently spurring worldwide activities focused on defining the next-generation 6G wireless system that can truly integrate far-reaching applications ranging from autonomous systems to extended reality and haptics. Despite recent 6G initiatives, the fundamental architectural and performance components of the system remain largely undefined. In this paper, we present a holistic, forward-looking vision that defines the tenets of a 6G system. We opine that 6G will not be a mere exploration of more spectrum at high-frequency bands, but it will rather be a convergence of upcoming technological trends driven by exciting, underlying services. In this regard, we first identify the primary drivers of 6G systems, in terms of applications and accompanying technological trends. Then, we propose a new set of service classes and expose their target 6G performance requirements. We then identify the enabling technologies for the introduced 6G services and outline a comprehensive research agenda that leverages those technologies. We conclude by providing concrete recommendations for the roadmap toward 6G. Ultimately, the intent of this article is to serve as a basis for stimulating more out-of-the-box research around 6G. Comment: This paper has been accepted by IEEE Network.

    Adapted and Oversegmenting Graphs: Application to Geometric Deep Learning

    We propose a novel iterative method to adapt a graph to d-dimensional image data. The method drives the nodes of the graph towards image features. The adaptation process naturally lends itself to a measure of feature saliency which can then be used to retain meaningful nodes and edges in the graph. From the adapted graph, we also propose the computation of a dual graph, which inherits the saliency measure from the adapted graph, and whose edges run along image features, hence producing an oversegmenting graph. The proposed method is computationally efficient and fully parallelisable. We propose two distance measures to find image saliency along graph edges, and evaluate the performance on synthetic images and on natural images from publicly available databases. In both cases, the most salient nodes of the graph achieve an average boundary recall of over 90%. We also apply our method to image classification on the MNIST hand-written digit dataset, using a recently proposed Deep Geometric Learning architecture, and achieve state-of-the-art classification accuracy, for a graph-based method, of 97.86%. Comment: Submitted to CVI
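
    The general flavour of the node-adaptation step can be sketched as follows; this is a hedged illustration, not the authors' update rule, and adapt_nodes and its parameters are assumptions. Nodes of an initial grid graph are pulled, a fraction of the way per iteration, toward the strongest image feature (here, gradient magnitude) found in a small window around each node.

        import numpy as np

        def adapt_nodes(image, nodes, n_iter=10, radius=3, step=0.5):
            """image: 2-D array; nodes: (N, 2) float array of (row, col) positions."""
            gy, gx = np.gradient(image.astype(float))
            feat = np.hypot(gx, gy)                      # edge-strength (feature) map
            h, w = feat.shape
            nodes = nodes.astype(float).copy()
            for _ in range(n_iter):
                for k, (r, c) in enumerate(nodes):
                    r0, r1 = int(max(r - radius, 0)), int(min(r + radius + 1, h))
                    c0, c1 = int(max(c - radius, 0)), int(min(c + radius + 1, w))
                    window = feat[r0:r1, c0:c1]
                    dr, dc = np.unravel_index(np.argmax(window), window.shape)
                    if window[dr, dc] <= 1e-12:
                        continue                         # no feature nearby; leave the node in place
                    target = np.array([r0 + dr, c0 + dc], dtype=float)
                    nodes[k] += step * (target - nodes[k])  # move part-way toward the feature
            return nodes

    A node's final feature value could then serve as a crude stand-in for the saliency measure that the paper uses to retain meaningful nodes and edges.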

    Physarum-inspired Network Optimization: A Review

    The popular Physarum-inspired Algorithms (PAs) have the potential to solve challenging network optimization problems. However, the existing research on PAs is still immature and far from being fully recognized. A major reason is that this research has not been well organized so far. In this paper, we aim to address this issue. First, we introduce Physarum and its intelligence from the biological perspective. Then, we summarize and group four types of Physarum-inspired networking models. After that, we analyze the network optimization problems and applications that have been tackled with PAs based on these models. Ultimately, we discuss the existing research on PAs and identify two fundamental questions: 1) What are the characteristics of Physarum networks? 2) Why can Physarum solve some network optimization problems? Answering these two questions is essential to the future development of Physarum-inspired network optimization. Comment: Physarum polycephalum; nature-inspired algorithm; data analytics.
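
    For readers unfamiliar with the networking models the review groups, the sketch below gives the canonical flux-reinforcement dynamics (in the spirit of the widely used Tero-style Physarum solver) on which many PAs build; it is background assumed from the broader literature, not code from this paper. Tube conductivities are reinforced in proportion to the flux they carry and decay otherwise, so flow concentrates on short paths.

        import numpy as np

        def physarum_shortest_path(n, edges, source, sink, n_iter=100, dt=0.2):
            """edges: list of (u, v, length). Returns final conductivity per edge; edges on the
            shortest source-sink path keep high conductivity while the others decay."""
            D = np.ones(len(edges))                      # initial tube conductivities
            for _ in range(n_iter):
                # Kirchhoff's law: build the weighted Laplacian and solve for node pressures.
                L = np.zeros((n, n))
                for k, (u, v, length) in enumerate(edges):
                    w = D[k] / length
                    L[u, u] += w; L[v, v] += w
                    L[u, v] -= w; L[v, u] -= w
                b = np.zeros(n); b[source], b[sink] = 1.0, -1.0
                keep = [i for i in range(n) if i != sink]  # ground the sink node (pressure 0)
                p = np.zeros(n)
                p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])
                # Flux through each tube; reinforce used tubes, let unused ones decay.
                for k, (u, v, length) in enumerate(edges):
                    Q = D[k] * (p[u] - p[v]) / length
                    D[k] += dt * (abs(Q) - D[k])
            return D

        # Toy example: a square with a diagonal; the direct edge 0-3 (length 1.5) is the
        # shortest route from node 0 to node 3 and should end up dominating.
        edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 1.0), (0, 3, 1.5)]
        print(np.round(physarum_shortest_path(4, edges, 0, 3), 3))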