Searching Toward Pareto-Optimal Device-Aware Neural Architectures
Recent breakthroughs in Neural Architecture Search (NAS) have achieved
state-of-the-art performance in many tasks such as image classification and
language understanding. However, most existing works optimize only for model
accuracy and largely ignore other important factors imposed by the underlying
hardware and devices, such as inference latency and energy consumption. In
this paper, we first introduce the problem of NAS and provide a survey of
recent work. We then take a deep dive into two recent advancements that extend NAS
into multiple-objective frameworks: MONAS and DPP-Net. Both MONAS and DPP-Net
are capable of optimizing accuracy and other objectives imposed by devices,
searching for neural architectures that can be best deployed on a wide spectrum
of devices: from embedded systems and mobile devices to workstations.
Experimental results show that the architectures found by MONAS and
DPP-Net achieve Pareto optimality with respect to the given objectives for
various devices.
Comment: ICCAD'18 Invited Paper
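The Pareto-optimality criterion the abstract refers to can be illustrated with a minimal sketch. This is not the MONAS or DPP-Net search procedure itself; it only shows how a Pareto front over two device-aware objectives (accuracy to maximize, latency to minimize) is identified. The candidate architectures and their values below are invented for the example:

```python
def pareto_front(candidates):
    """Return the candidates not dominated by any other.

    Each candidate is (name, accuracy, latency); higher accuracy
    and lower latency are better. A candidate is dominated if some
    other candidate is at least as good in both objectives and
    strictly better in at least one.
    """
    front = []
    for name, acc, lat in candidates:
        dominated = any(
            (a >= acc and l <= lat) and (a > acc or l < lat)
            for _, a, l in candidates
        )
        if not dominated:
            front.append((name, acc, lat))
    return front


# Hypothetical (accuracy, latency-in-ms) measurements for four architectures.
archs = [
    ("net-A", 0.92, 120.0),  # most accurate, but slow
    ("net-B", 0.90, 45.0),   # strong trade-off
    ("net-C", 0.88, 60.0),   # dominated by net-B (worse in both objectives)
    ("net-D", 0.85, 20.0),   # fastest, least accurate
]
print(pareto_front(archs))  # net-C is excluded; A, B, D form the front
```

A multi-objective NAS framework returns such a front rather than a single model, letting the deployer pick the point matching their device's latency or energy budget.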
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to help readers understand the
motivation and methodology of the various ML algorithms, so that they can
invoke them for hitherto unexplored services and scenarios in future wireless
networks.
Comment: 46 pages, 22 figures
Toward Edge-Efficient Dense Predictions with Synergistic Multi-Task Neural Architecture Search
In this work, we propose a novel and scalable solution to address the
challenges of developing efficient dense predictions on edge platforms. Our
first key insight is that Multi-Task Learning (MTL) and hardware-aware Neural
Architecture Search (NAS) can work in synergy to greatly benefit on-device
Dense Predictions (DP). Empirical results reveal that the joint learning of the
two paradigms is surprisingly effective at improving DP accuracy, achieving
superior performance over both the transfer learning of single-task NAS and
prior state-of-the-art approaches in MTL, all with just 1/10th of the
computation. To the best of our knowledge, our framework, named EDNAS, is the
first to successfully leverage the synergistic relationship of NAS and MTL for
DP. Our second key insight is that standard depth training for multi-task
DP can introduce significant instability and noise into MTL evaluation. To
address this, we propose JAReD, an improved, easy-to-adopt Joint
Absolute-Relative Depth loss that reduces up to 88% of the undesired noise
while simultaneously boosting accuracy. We conduct extensive evaluations on
accuracy. We conduct extensive evaluations on standard datasets, benchmark
against strong baselines and state-of-the-art approaches, as well as provide an
analysis of the discovered optimal architectures.
Comment: WACV 2023. 14 pages, 5 figures
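The abstract names JAReD as a Joint Absolute-Relative Depth loss but does not give its formula. The sketch below is therefore an assumption, not the paper's actual loss: it combines a standard absolute (L1) depth error with a log-ratio relative term, weighted by a hypothetical coefficient `alpha`, to show the general shape such a joint loss can take:

```python
import math


def joint_depth_loss(pred, target, alpha=0.5, eps=1e-6):
    """Illustrative joint absolute-relative depth loss.

    pred, target: lists of positive depth values (same length).
    alpha: hypothetical weight on the relative term.
    eps:   guards against log(0) or division by zero.
    """
    n = len(pred)
    # Absolute term: mean L1 error in depth units.
    absolute = sum(abs(p - t) for p, t in zip(pred, target)) / n
    # Relative term: mean absolute log-ratio, insensitive to global scale.
    relative = sum(
        abs(math.log((p + eps) / (t + eps))) for p, t in zip(pred, target)
    ) / n
    return absolute + alpha * relative


print(joint_depth_loss([1.0, 2.0], [1.0, 2.0]))  # perfect prediction -> 0.0
```

The intuition behind mixing the two terms is that the absolute part anchors predictions to metric depth, while the relative part penalizes scale errors evenly across near and far regions; how JAReD actually balances them is detailed in the paper, not here.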