
    Federated Meta Learning Enhanced Acoustic Radio Cooperative Framework for Ocean of Things Underwater Acoustic Communications

    Sixth-generation wireless communication (6G) will be an integrated architecture of "space, air, ground and sea". One of the most difficult parts of this architecture is underwater information acquisition, which requires transmitting information across the water-air interface. In this scenario, the Ocean of Things (OoT) will play an important role, because it can serve as a hub connecting the Internet of Things (IoT) and the Internet of Underwater Things (IoUT). OoT devices can not only collect data through underwater methods but also use radio frequency over the air. For underwater communications, underwater acoustic communications (UWA COMMs) is the most effective way for OoT devices to exchange information, but it is plagued by Doppler shift and synchronization errors. In this paper, to cope with these harsh UWA conditions, a deep neural network based receiver for underwater acoustic chirp communication, called C-DNN, is proposed. Moreover, to improve the performance of the DL model and address model generalization, we also propose a novel federated meta learning (FML) enhanced acoustic radio cooperative (ARC) framework, dubbed ARC/FML, to perform transfer learning. In particular, tractable expressions are derived for the convergence rate of FML in a wireless setting, accounting for the effects of the scheduling ratio, the number of local epochs, and the amount of data on a single node. Our analysis and simulation results show that the proposed C-DNN provides better BER performance and lower complexity than the classical matched filter (MF) in underwater acoustic communication scenarios, and that the ARC/FML framework converges better under a variety of channels than federated learning (FL). In summary, the proposed ARC/FML for OoT is a promising scheme for information exchange across water and air.
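    The abstract gives no implementation details; purely as an illustration, the sketch below shows one plausible federated meta-learning round in the spirit of ARC/FML, where scheduled OoT nodes adapt a shared receiver model on local acoustic data and the server aggregates the adapted weights Reptile-style. The function names, the scheduling policy, the loss, and all hyperparameters (schedule_ratio, local_epochs, meta_lr) are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of one federated meta-learning (FML) round: each
# scheduled node adapts a shared model on its local data, and the server
# moves the global weights toward the average of the adapted weights
# (Reptile-style interpolation). Names and hyperparameters are illustrative.
import copy
import torch

def local_adapt(model, loader, local_epochs=2, lr=1e-3):
    """Run a few epochs of local adaptation on one OoT node."""
    node_model = copy.deepcopy(model)
    opt = torch.optim.SGD(node_model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # assumed symbol-classification loss
    for _ in range(local_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(node_model(x), y).backward()
            opt.step()
    return node_model.state_dict()

def fml_round(global_model, node_loaders, schedule_ratio=0.5, meta_lr=0.5):
    """One communication round: schedule a subset of nodes, adapt locally,
    then interpolate the global weights toward the averaged adapted weights."""
    n_sched = max(1, int(schedule_ratio * len(node_loaders)))
    scheduled = node_loaders[:n_sched]  # in practice, chosen by a scheduler
    adapted = [local_adapt(global_model, dl) for dl in scheduled]
    global_sd = global_model.state_dict()
    for k in global_sd:
        avg = torch.stack([sd[k].float() for sd in adapted]).mean(dim=0)
        global_sd[k] = global_sd[k] + meta_lr * (avg - global_sd[k])
    global_model.load_state_dict(global_sd)
```

    The scheduling ratio and local epoch count are exposed as explicit knobs here because the abstract names them as the quantities driving the convergence analysis.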

    Distributed learning and inference in deep models

    In recent years, the size of deep learning problems has increased significantly, both in terms of the number of available training samples and the number of parameters and complexity of the model. In this thesis, we considered the challenges encountered in training and inference of large deep models, especially on nodes with limited computational power and capacity. We studied two classes of related problems: 1) distributed training of deep models, and 2) compression and restructuring of deep models for efficient distributed and parallel execution to reduce inference times. In particular, we considered the communication bottleneck in distributed training and inference of deep models.

    Data compression is a viable tool to mitigate the communication bottleneck in distributed deep learning. However, existing methods suffer from drawbacks such as increased variance of the stochastic gradients (SG), slower convergence rates, or added bias to the SG. In this research, we addressed these challenges from three different perspectives: 1) Information Theory and the CEO Problem, 2) Indirect SG Compression via Matrix Factorization, and 3) Quantized Compressive Sampling. We showed, both theoretically and via simulations, that our proposed methods achieve smaller MSE than other unbiased compression methods at lower communication bit-rates, resulting in superior convergence rates. Next, we considered federated learning over wireless multiple access channels (MAC). Efficient communication requires the communication algorithm to satisfy the constraints imposed by the nodes in the network and by the communication medium. To satisfy these constraints and take advantage of the over-the-air computation inherent in MAC, we proposed a framework based on random linear coding and developed efficient power management and channel usage techniques to manage the trade-offs between power consumption and communication bit-rate.

    In the second part of this thesis, we considered the distributed parallel implementation of an already-trained deep model on multiple workers. Since latency due to synchronization and data transfer among workers adversely affects the performance of the parallel implementation, it is desirable to have minimal interdependency among the parallel sub-models on the workers. To achieve this goal, we developed and analyzed RePurpose, an efficient algorithm that rearranges the neurons in the neural network and partitions them (without changing the general topology of the network) such that the interdependency among sub-models is minimized under the computation and communication constraints of the workers.
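    Of the three perspectives the thesis names, indirect SG compression via matrix factorization is the easiest to illustrate. The sketch below shows the general idea only: a gradient matrix is communicated as two thin low-rank factors, in the style of subspace-iteration methods such as PowerSGD. The function names, rank, and iteration count are assumptions, not the thesis's actual algorithm.

```python
# Illustrative sketch of gradient compression via low-rank matrix
# factorization: a gradient matrix G (m x n) is sent as two thin factors
# P (m x r) and Q (n x r) with r << min(m, n), cutting the per-round
# communication from m*n to roughly r*(m + n) values.
import numpy as np

def compress(grad, rank=4, iters=1):
    """Approximate grad (m x n) as P @ Q.T via subspace iteration."""
    m, n = grad.shape
    q = np.random.randn(n, rank)
    for _ in range(iters):
        p = grad @ q                # m x r
        p, _ = np.linalg.qr(p)      # orthonormalize the column space
        q = grad.T @ p              # n x r
    return p, q                     # the worker sends P and Q, not G

def decompress(p, q):
    """Reconstruct the rank-r approximation of the gradient."""
    return p @ q.T

# Toy check: a nearly low-rank gradient compresses with small error.
g = np.random.randn(256, 4) @ np.random.randn(4, 128)
g += 0.01 * np.random.randn(256, 128)
p, q = compress(g, rank=4)
err = np.linalg.norm(g - decompress(p, q)) / np.linalg.norm(g)
print(f"relative reconstruction error: {err:.4f}")
```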

    Guided Deep Reinforcement Learning for Swarm Systems

    In this paper, we investigate how to learn to control a group of cooperative agents with limited sensing capabilities, such as robot swarms. The agents have only very basic sensor capabilities, yet as a group they can accomplish sophisticated tasks, such as distributed assembly or search and rescue. Learning a policy for a group of agents is difficult due to distributed partial observability of the state. Here, we follow a guided approach in which a critic has central access to the global state during learning, which simplifies the policy evaluation problem from a reinforcement learning point of view. For example, we can obtain the positions of all robots of the swarm from a camera image of the scene; this image is available only to the critic, not to the control policies of the robots. We follow an actor-critic approach, where the actors base their decisions only on locally sensed information, while the critic is learned from the true global state. Our algorithm uses deep reinforcement learning to approximate both the Q-function and the policy. The performance of the algorithm is evaluated on two tasks with simple simulated 2D agents: 1) finding and maintaining a certain distance to each other, and 2) locating a target.

    Comment: 15 pages, 8 figures, accepted at the AAMAS 2017 Autonomous Robots and Multirobot Systems (ARMS) Workshop.
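    The central architectural idea, a critic with privileged access to the global state guiding actors that see only local observations, can be sketched in a few lines. The networks below are a minimal, generic illustration of that asymmetry; the layer sizes, names, and dimensions are assumptions, not the paper's architecture.

```python
# Illustrative centralized-critic / decentralized-actor split: the critic
# consumes the true global state (e.g. all robot positions recovered from
# a camera image), while each actor sees only its own local observation.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy conditioned only on an agent's locally sensed observation."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim))

    def forward(self, local_obs):
        return torch.tanh(self.net(local_obs))  # bounded continuous action

class CentralCritic(nn.Module):
    """Q-function with privileged access to the global state during learning."""
    def __init__(self, state_dim, act_dim, n_agents):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_agents * act_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, global_state, joint_action):
        return self.net(torch.cat([global_state, joint_action], dim=-1))
```

    In this guided setup, the critic's Q-estimates (computed from the global state) provide the learning signal during training, while at execution time only the actors run, so each robot needs nothing beyond its own sensors.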