Leveraging Communication Topologies Between Learning Agents in Deep Reinforcement Learning
A common technique to improve learning performance in deep reinforcement
learning (DRL) and many other machine learning algorithms is to run multiple
learning agents in parallel. A neglected component in the development of these
algorithms has been how best to arrange the learning agents involved to improve
distributed search. Here we draw upon results from the networked optimization
literature suggesting that arranging learning agents in communication networks
other than fully connected topologies (the implicit arrangement in common
practice) can improve learning. We explore the relative performance of four
popular families of graphs and observe that one such family (Erdos-Renyi random
graphs) empirically outperforms the de facto fully-connected communication
topology across several DRL benchmark tasks. Additionally, we observe that 1000
learning agents arranged in an Erdos-Renyi graph can perform as well as 3000
agents arranged in the standard fully-connected topology, showing the large
learning improvement possible when carefully designing the topology over which
agents communicate. We complement these empirical results with a theoretical
investigation of why our alternate topologies perform better. Overall, our work
suggests that distributed machine learning algorithms could be made more
effective if the communication topology between learning agents were optimized.
Comment: arXiv admin note: substantial text overlap with arXiv:1811.1255
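The abstract does not include code, but the core idea (sampling a sparse Erdos-Renyi communication topology instead of defaulting to a fully connected one) can be sketched as follows. This is a minimal illustration assuming agents are indexed 0..n-1 and a topology is a set of undirected edges; the function names are hypothetical, not from the paper.

```python
import random

def erdos_renyi_topology(n_agents, p, seed=None):
    """Sample a G(n, p) topology: each pair of agents communicates
    independently with probability p (illustrative sketch)."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            if rng.random() < p:
                edges.add((i, j))
    return edges

def fully_connected_topology(n_agents):
    """The de facto default: every agent communicates with every other."""
    return {(i, j) for i in range(n_agents) for j in range(i + 1, n_agents)}

# A sparse ER graph needs far fewer communication links than the
# fully connected default over the same number of agents.
full = fully_connected_topology(100)            # n*(n-1)/2 = 4950 links
sparse = erdos_renyi_topology(100, 0.1, seed=0)
print(len(full), len(sparse))
```

With p = 1 the sampled graph coincides with the fully connected topology, so the default is just one extreme of the G(n, p) family the paper compares against.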
Social Learning and the Accuracy-Risk Trade-off in the Wisdom of the Crowd
How do we design and deploy crowdsourced prediction platforms for real-world
applications where risk is an important dimension of prediction performance? To
answer this question, we conducted a large online Wisdom of the Crowd study
where participants predicted the prices of real financial assets (e.g. S&P
500). We observe a Pareto frontier between prediction accuracy and risk, and
find that this trade-off is mediated by social learning: as social learning is
increasingly leveraged, accuracy decreases but so does risk. We
also observe that social learning leads to superior accuracy during one of our
rounds that occurred during the high market uncertainty of the Brexit vote. Our
results have implications for the design of crowdsourced prediction platforms:
for example, they suggest that the performance of the crowd should be more
comprehensively characterized by using both accuracy and risk (as is standard
in financial and statistical forecasting), in contrast to prior work, where the
risk of prediction has been overlooked.
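The two-dimensional characterization the abstract argues for can be sketched with illustrative metrics: accuracy as the error of the crowd's mean prediction, and risk as the dispersion of individual predictions. These particular metric choices are my assumption for illustration, not necessarily the study's own measures.

```python
import statistics

def crowd_accuracy_and_risk(predictions, true_value):
    """Illustrative: error of the crowd mean (lower = more accurate)
    and population std dev of predictions (a simple risk proxy)."""
    crowd_mean = statistics.fmean(predictions)
    error = abs(crowd_mean - true_value)
    risk = statistics.pstdev(predictions)
    return error, risk

# Hypothetical crowds: an independent crowd (little social learning)
# vs. a herded crowd (heavy social learning) predicting a price of 100.
independent = [98.0, 105.0, 101.0, 96.0, 104.0]
herded = [103.0, 102.8, 103.1, 102.9, 103.2]

print(crowd_accuracy_and_risk(independent, 100.0))  # lower error, higher risk
print(crowd_accuracy_and_risk(herded, 100.0))       # higher error, lower risk
```

The toy numbers are chosen to mirror the reported trade-off: the herded crowd clusters tightly (low risk) around a biased value (lower accuracy), so neither crowd dominates on both dimensions.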
Communication-Efficient Edge AI: Algorithms and Systems
Artificial intelligence (AI) has achieved remarkable breakthroughs in a wide
range of fields, ranging from speech processing, image classification to drug
discovery. This is driven by the explosive growth of data, advances in machine
learning (especially deep learning), and easy access to vastly powerful
computing resources. In particular, the wide-scale deployment of edge devices
(e.g., IoT devices) generates an unprecedented scale of data, which provides
the opportunity to derive accurate models and develop various intelligent
applications at the network edge. However, such enormous data cannot all be
sent from end devices to the cloud for processing, due to the varying channel
quality, traffic congestion and/or privacy concerns. By pushing inference and
training processes of AI models to edge nodes, edge AI has emerged as a
promising alternative. AI at the edge requires close cooperation among edge
devices, such as smartphones and smart vehicles, and edge servers at the
wireless access points and base stations, which, however, results in heavy
communication overhead. In this paper, we present a comprehensive survey of
the recent developments in various techniques for overcoming these
communication challenges. Specifically, we first identify key communication
challenges in edge AI systems. We then introduce communication-efficient
techniques, from both algorithmic and system perspectives for training and
inference tasks at the network edge. Potential future research directions are
also highlighted.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
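One widely used communication-efficient training technique in the class such surveys cover is top-k gradient sparsification: each device transmits only the k largest-magnitude gradient entries as (index, value) pairs instead of the dense vector. The sketch below is a generic illustration of that idea, not the survey's specific method; real systems typically add an error-feedback accumulator that this omits.

```python
import heapq

def top_k_sparsify(gradient, k):
    """Keep only the k largest-magnitude entries of a gradient vector,
    so the sender transmits k (index, value) pairs instead of a dense
    vector of len(gradient) floats."""
    top = heapq.nlargest(k, enumerate(gradient), key=lambda iv: abs(iv[1]))
    return sorted(top)

def densify(pairs, length):
    """Receiver side: rebuild a dense vector, zeros elsewhere."""
    dense = [0.0] * length
    for i, v in pairs:
        dense[i] = v
    return dense

grad = [0.02, -1.5, 0.4, 3.1, -0.03, 0.9]
pairs = top_k_sparsify(grad, 2)     # 2 pairs sent instead of 6 floats
print(pairs)                        # [(1, -1.5), (3, 3.1)]
print(densify(pairs, len(grad)))    # [0.0, -1.5, 0.0, 3.1, 0.0, 0.0]
```

The compression ratio here is k/len(gradient) in payload values (plus index overhead), which is the kind of algorithmic lever the survey discusses for reducing uplink traffic between edge devices and servers.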