73,658 research outputs found

    A Generalized Data Representation and Training-Performance Analysis for Deep Learning-Based Communications Systems

    Full text link
    The deep learning (DL)-based autoencoder is a promising architecture for implementing end-to-end communication systems. In this letter, we first give a brief introduction to the autoencoder-represented communication system. Then, we propose a novel generalized data representation (GDR) aiming to improve the data rate of DL-based communication systems. Finally, simulation results show that the proposed GDR scheme achieves lower training complexity, comparable block error rate performance, and higher channel capacity than the conventional one-hot vector scheme. Furthermore, we investigate the effect of the signal-to-noise ratio (SNR) in DL-based communication systems and prove that training at a high SNR yields good training performance for the autoencoder.
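
    As a rough illustration of the autoencoder view described above, here is a minimal sketch of an end-to-end system trained over an AWGN channel with the conventional one-hot representation (layer sizes, SNR, and training setup are illustrative assumptions, not the letter's exact GDR configuration):

```python
# Minimal sketch: an autoencoder as an end-to-end communication system.
# One-hot messages -> encoder -> power-normalized channel symbols ->
# AWGN channel -> decoder logits. All sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

M, n = 16, 7          # M possible messages, n real channel uses per block
snr_db = 7.0          # training SNR; the letter studies its effect
noise_std = (10 ** (-snr_db / 10)) ** 0.5

encoder = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
decoder = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for step in range(2000):
    msgs = torch.randint(0, M, (256,))
    x = F.one_hot(msgs, M).float()               # one-hot data representation
    tx = encoder(x)
    tx = tx / tx.norm(dim=1, keepdim=True)       # unit-power constraint per block
    rx = tx + noise_std * torch.randn_like(tx)   # AWGN channel
    loss = F.cross_entropy(decoder(rx), msgs)    # block-error surrogate objective
    opt.zero_grad(); loss.backward(); opt.step()
```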

    Deep Potential: a general representation of a many-body potential energy surface

    Full text link
    We present a simple, yet general, end-to-end deep neural network representation of the potential energy surface for atomic and molecular systems. This methodology, which we call Deep Potential, is "first-principle" based, in the sense that no ad hoc approximations or empirical fitting functions are required. The neural network structure naturally respects the underlying symmetries of the systems. When tested on a wide variety of examples, Deep Potential is able to reproduce the original model, whether empirical or quantum mechanics based, within chemical accuracy. The computational cost of this new model is not substantially larger than that of empirical force fields. In addition, the method has promising scalability properties. This brings us one step closer to being able to carry out molecular simulations with accuracy comparable to that of quantum mechanics models and computational cost comparable to that of empirical potentials.
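
    As a toy illustration of a symmetry-respecting neural potential (a deliberate simplification; Deep Potential's actual descriptor and network differ): per-atom energies are computed from sorted inverse neighbor distances, which are invariant to translation, rotation, and atom permutation, and summed into a total energy, so forces follow by automatic differentiation.

```python
# Toy sketch of a symmetry-respecting neural potential: the total energy
# is a sum of per-atom terms computed from each atom's k nearest inverse
# distances (translation-, rotation-, and permutation-invariant features).
# This is an illustration only, not Deep Potential's actual descriptor.
import torch
import torch.nn as nn

class ToyPotential(nn.Module):
    def __init__(self, k_neighbors=4):
        super().__init__()
        self.k = k_neighbors
        self.atom_net = nn.Sequential(
            nn.Linear(k_neighbors, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, pos):                       # pos: (N, 3) coordinates
        d = torch.cdist(pos, pos)                 # pairwise distances
        d = d + torch.eye(len(pos)) * 1e6         # mask self-distances
        feat, _ = torch.sort(d, dim=1)            # sorting => permutation-invariant
        feat = 1.0 / feat[:, :self.k]             # k nearest inverse distances
        return self.atom_net(feat).sum()          # total energy E(pos)

pot = ToyPotential()
pos = torch.randn(8, 3, requires_grad=True)
energy = pot(pos)
forces = -torch.autograd.grad(energy, pos)[0]     # forces via autodiff
```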

    Machine Learning for Wireless Communications in the Internet of Things: A Comprehensive Survey

    Full text link
    The Internet of Things (IoT) is expected to require more effective and efficient wireless communications than ever before. For this reason, techniques such as spectrum sharing, dynamic spectrum access, extraction of signal intelligence, and optimized routing will soon become essential components of the IoT wireless communication paradigm. Given that the majority of the IoT will be composed of tiny, mobile, and energy-constrained devices, traditional techniques based on a priori network optimization may not be suitable, since (i) an accurate model of the environment may not be readily available in practical scenarios; and (ii) the computational requirements of traditional optimization techniques may prove prohibitive for IoT devices. To address the above challenges, much research has been devoted to exploring the use of machine learning to address problems in the IoT wireless communications domain. This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to key problems in IoT wireless communications, with an emphasis on its ad hoc networking aspect. First, we present extensive background notions of machine learning techniques. Then, by adopting a bottom-up approach, we examine existing work on machine learning for the IoT at the physical, data-link, and network layers of the protocol stack. Thereafter, we discuss directions taken by the community towards hardware implementation to ensure the feasibility of these techniques. Additionally, before concluding, we also provide a brief discussion of the application of machine learning in the IoT beyond wireless communication. Finally, each of these discussions is accompanied by a detailed analysis of the related open problems and challenges. Comment: Ad Hoc Networks Journal.

    Machine Learning for Heterogeneous Ultra-Dense Networks with Graphical Representations

    Full text link
    The heterogeneous ultra-dense network (H-UDN) is envisioned as a promising solution to sustain the explosive mobile traffic demand through network densification. By placing access points, processors, and storage units as close as possible to mobile users, H-UDNs bring forth a number of advantages, including high spectral efficiency, high energy efficiency, and low latency. Nonetheless, the high density and diversity of network entities in H-UDNs introduce formidable design challenges in collaborative signal processing and resource management. This article illustrates the great potential of machine learning techniques in solving these challenges. In particular, we show how to utilize graphical representations of H-UDNs to design efficient machine learning algorithms.

    Design of Communication Systems using Deep Learning: A Variational Inference Perspective

    Full text link
    Recent research in the design of end-to-end communication systems using deep learning has produced models that can outperform traditional communication schemes. Most of these architectures leverage autoencoders to design the encoder at the transmitter and the decoder at the receiver, training them jointly by modeling the transmit symbols as latent codes from the encoder. However, in communication systems, the receiver has to work with noise-corrupted versions of the transmit symbols. Traditional autoencoders are not designed to work with latent codes corrupted by noise. In this work, we provide a framework for designing end-to-end communication systems that accounts for the existence of noise-corrupted transmit symbols. The proposed method uses a deep neural architecture, and an objective function for optimizing these models is derived based on the concepts of variational inference. Further, domain knowledge such as the channel type can be systematically integrated into the objective. Through numerical simulations, the proposed method is shown to consistently produce models with better packing density, and to achieve it faster, across multiple popular channel models compared to previous works leveraging deep learning models.
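
    One way to picture the variational framing (a sketch under our own assumptions, not the paper's exact objective): treat the noisy received symbol as a latent code sampled via the reparameterization trick, and optimize a reconstruction term plus a KL regularizer toward a Gaussian prior, which for a fixed-variance AWGN channel reduces to a transmit-power penalty.

```python
# Sketch of a variational-inference reading of end-to-end design: the
# channel output is a latent sample from q(z|s) = N(f(s), sigma^2 I),
# with sigma fixed by the channel, and the loss is reconstruction plus
# a KL term toward N(0, I). Sizes, prior, and weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

M, n, sigma = 16, 7, 0.3
enc = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
dec = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

for step in range(2000):
    s = torch.randint(0, M, (256,))
    mu = enc(F.one_hot(s, M).float())            # transmit symbols = latent mean
    z = mu + sigma * torch.randn_like(mu)        # reparameterized channel sample
    recon = F.cross_entropy(dec(z), s)           # E_q[-log p(s|z)]
    # KL(N(mu, sigma^2 I) || N(0, I)), keeping only terms that depend on mu:
    kl = 0.5 * mu.pow(2).sum(dim=1).mean()
    loss = recon + 0.01 * kl                     # weighted ELBO-style objective
    opt.zero_grad(); loss.backward(); opt.step()
```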

    Online unsupervised deep unfolding for massive MIMO channel estimation

    Full text link
    Massive MIMO communication systems have huge potential in terms of both data rate and energy efficiency, although channel estimation becomes challenging for a large number of antennas. Using a physical model eases the problem by injecting a priori information based on the physics of propagation. However, such a model rests on simplifying assumptions and requires precise knowledge of the system configuration, which is unrealistic in practice. In this letter, we propose to perform online learning for channel estimation in a massive MIMO context, adding flexibility to physical channel models by unfolding a channel estimation algorithm (matching pursuit) as a neural network. This leads to a computationally efficient neural network structure that can be trained online when initialized with an imperfect model. The method allows a base station to automatically correct its channel estimation algorithm based on incoming data, without the need for a separate offline training phase. Applied to realistic millimeter wave channels, it shows great performance, achieving a channel estimation error almost as low as one would get with a perfectly calibrated system.
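
    A minimal sketch of the unfolding idea (dimensions, initialization, and data are placeholders): each network "layer" is one matching pursuit iteration, selecting the dictionary atom best correlated with the residual and subtracting its projection, while the dictionary itself becomes a trainable parameter that online gradient steps can correct.

```python
# Sketch of matching pursuit unfolded as a neural network for channel
# estimation: the dictionary D (e.g., steering vectors from an imperfect
# physical model) is trainable, so gradients from incoming data can
# correct the model online. Sizes and data are placeholder assumptions.
import torch
import torch.nn as nn

class UnfoldedMP(nn.Module):
    def __init__(self, n_ant=64, n_atoms=128, n_layers=8):
        super().__init__()
        D = torch.randn(n_ant, n_atoms)           # imperfect initial dictionary
        self.D = nn.Parameter(D / D.norm(dim=0))  # unit-norm atoms
        self.n_layers = n_layers

    def forward(self, y):                         # y: (batch, n_ant) observation
        h, r = torch.zeros_like(y), y.clone()
        for _ in range(self.n_layers):            # one layer = one MP iteration
            corr = r @ self.D                     # correlate residual with atoms
            idx = corr.abs().argmax(dim=1)        # hard atom selection
            atoms = self.D[:, idx].t()            # (batch, n_ant) chosen atoms
            coef = (r * atoms).sum(1, keepdim=True) / atoms.pow(2).sum(1, keepdim=True)
            h, r = h + coef * atoms, r - coef * atoms
        return h                                  # channel estimate

net = UnfoldedMP()
y = torch.randn(32, 64)                           # noisy received pilots
h_hat = net(y)                                    # gradients flow into net.D
```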

    Model-free, Model-based, and General Intelligence

    Full text link
    During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out of this work, but programs written by hand were not robust or general. After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers have close parallels with Systems 1 and 2 in current theories of the human mind: the first a fast, opaque, and inflexible intuitive mind; the second a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general.

    A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering

    Full text link
    The growing demand for industrial, automotive, and service robots presents a challenge to the centralized Cloud Robotics model in terms of privacy, security, latency, bandwidth, and reliability. In this paper, we present a 'Fog Robotics' approach to deep robot learning that distributes compute, storage, and networking resources between the Cloud and the Edge in a federated manner. Deep models are trained on non-private (public) synthetic images in the Cloud; the models are adapted to the private real images of the environment at the Edge within a trusted network and subsequently deployed as a service for low-latency and secure inference/prediction for other robots in the network. We apply this approach to surface decluttering, where a mobile robot picks and sorts objects from a cluttered floor by learning a deep object recognition and grasp planning model. Experiments suggest that Fog Robotics can improve performance through sim-to-real domain adaptation in comparison to exclusively using Cloud or Edge resources, while reducing the inference cycle time by 4× to successfully declutter 86% of objects over 213 attempts. Comment: IEEE International Conference on Robotics and Automation, ICRA, 2019.

    A Journey from Improper Gaussian Signaling to Asymmetric Signaling

    Full text link
    The deviation of continuous and discrete complex random variables from the traditional proper and symmetric assumptions to a generalized improper and asymmetric characterization, respectively (accounting for the correlation between a random entity and its complex conjugate), introduces new design freedom and various potential merits. As such, the theory of impropriety has vast applications in medicine, geology, acoustics, optics, image and pattern recognition, computer vision, and numerous other research fields, with our main focus here on communication systems. The journey begins with the design of improper Gaussian signaling in interference-limited communications and leads to a more elaborate and practically feasible asymmetric discrete modulation design. Such asymmetric shaping bridges the gap between theoretically and practically achievable limits with sophisticated transceiver and detection schemes in both coded and uncoded wireless/optical communication systems. Interestingly, introducing asymmetry and adjusting the transmission parameters according to some design criterion render optimal performance without affecting the bandwidth or power requirements of the systems. This dual-flavored article initially presents tutorial content covering the interplay of reality/complexity, propriety/impropriety, and circularity/noncircularity, and then surveys the majority of the contributions in this enormous journey. Comment: IEEE COMST (Early Access).
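
    For concreteness, the notions the tutorial part revolves around can be stated compactly (standard definitions, written here in our own notation rather than the article's):

```latex
% For a zero-mean complex random variable z:
\sigma_z^2 = \mathbb{E}\big[|z|^2\big] \quad\text{(variance)}, \qquad
\tilde{\sigma}_z^2 = \mathbb{E}\big[z^2\big]
  \quad\text{(pseudo-variance: correlation of } z \text{ with } z^{*}\text{)}.
% z is proper   iff  \tilde{\sigma}_z^2 = 0;
% z is circular iff  z and e^{j\theta}z are identically distributed
%                    for every rotation \theta.
% Circularity implies propriety, but not conversely; improper/asymmetric
% signaling exploits \tilde{\sigma}_z^2 \neq 0 as an extra design freedom.
```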

    Short-term Road Traffic Prediction based on Deep Cluster at Large-scale Networks

    Full text link
    Short-term road traffic prediction (STTP) is one of the most important modules in Intelligent Transportation Systems (ITS). However, network-level STTP remains challenging due to the difficulties both in modeling diverse traffic patterns and in tackling high-dimensional time series with low latency. Therefore, a framework combined with a deep clustering (DeepCluster) module is developed for STTP at large-scale networks in this paper. The DeepCluster module is proposed to supervise the representation learning in a visualized way from the large unlabeled dataset. More specifically, to fully exploit the traffic periodicity, the raw series is first split into a number of sub-series for triplet generation. Convolutional neural networks (CNNs) with a triplet loss are utilized to extract the shape features by transferring the series into visual images. The shape-based representations are then used for road segment clustering. Thereafter, motivated by the fact that road segments in a group have similar patterns, a model-sharing strategy is further proposed to build recurrent NN (RNN)-based predictions through a group-based model (GM), instead of an individual-based model (IM) in which one model is built exclusively for each road. Our framework can not only significantly reduce the number of models and the cost, but also increase the amount of training data and the diversity of samples. In the end, we evaluate the proposed framework on the network of Liuli Bridge in Beijing. Experimental results show that DeepCluster can effectively cluster the road segments and that the GM achieves comparable performance to the IM with far fewer models. Comment: 12 pages, 15 figures, journal.
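
    A sketch of the triplet-trained feature extractor at the core of the DeepCluster module (an approximation: the paper transfers sub-series into visual images before the CNN, whereas this sketch applies a 1-D CNN directly to the raw sub-series; shapes and triplet construction are assumptions):

```python
# Sketch of the DeepCluster idea: a CNN embeds traffic sub-series and is
# trained with a triplet loss so that sub-series from the same road
# segment (anchor/positive) embed closer than those from a different
# segment (negative); embeddings are then clustered to group segments.
import torch
import torch.nn as nn

embed = nn.Sequential(                      # 1-D CNN feature extractor
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten(),
    nn.Linear(16 * 8, 32))
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)
triplet = nn.TripletMarginLoss(margin=1.0)

for step in range(1000):
    # placeholder triplets (anchor, positive, negative), each a batch of
    # sub-series of shape (batch, 1, series_len)
    a, p, n = (torch.randn(64, 1, 96) for _ in range(3))
    loss = triplet(embed(a), embed(p), embed(n))
    opt.zero_grad(); loss.backward(); opt.step()

# afterwards: cluster the embeddings (e.g., with k-means), then train one
# shared RNN predictor per cluster (the group-based model, GM)
```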