The current opportunities and challenges of Web 3.0
With recent advancements in AI and 5G technologies, as well as the nascent
concepts of blockchain and the metaverse, a new revolution of the Internet,
known as Web 3.0, is emerging. Given its significant potential impact on the
Internet landscape and various professional sectors, Web 3.0 has captured
considerable attention from both academic and industry circles. This article presents an
exploratory analysis of the opportunities and challenges associated with Web
3.0. Firstly, the study evaluates the technical differences between Web 1.0,
Web 2.0, and Web 3.0, while also delving into the unique technical architecture
of Web 3.0. Secondly, by reviewing current literature, the article highlights
the current state of development surrounding Web 3.0 from both economic and
technological perspectives. Thirdly, the study identifies numerous research and
regulatory obstacles that presently confront Web 3.0 initiatives. Finally, the
article concludes by providing a forward-looking perspective on the potential
future growth and progress of Web 3.0 technology.
MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
Some recent works revealed that deep neural networks (DNNs) are vulnerable to
so-called adversarial attacks where input examples are intentionally perturbed
to fool DNNs. In this work, we revisit the DNN training process that includes
adversarial examples into the training dataset so as to improve DNN's
resilience to adversarial attacks, namely, adversarial training. Our
experiments show that different adversarial strengths, i.e., perturbation
levels of adversarial examples, have different working zones to resist the
attack. Based on the observation, we propose a multi-strength adversarial
training method (MAT) that combines the adversarial training examples with
different adversarial strengths to defend against adversarial attacks. Two training
structures - mixed MAT and parallel MAT - are developed to facilitate the
tradeoffs between training time and memory occupation. Our results show that
MAT can substantially reduce the accuracy degradation of deep learning
systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN.
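The core idea above, training on adversarial examples generated at several perturbation strengths at once, can be illustrated with a minimal sketch. This is a toy logistic-regression version using FGSM-style perturbations; the epsilon values, model, and data are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable binary-classification data (stand-in for MNIST etc.)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.1
epsilons = [0.05, 0.1, 0.2]  # multiple adversarial strengths ("mixed MAT" flavor)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    p = sigmoid(X @ w + b)
    # Gradient of the logistic loss w.r.t. the inputs
    grad_x = np.outer(p - y, w)
    # FGSM-style adversarial copies of the batch at each strength
    X_adv = np.concatenate([X + eps * np.sign(grad_x) for eps in epsilons])
    y_adv = np.tile(y, len(epsilons))
    # Adversarial training: optimize over the union of clean and perturbed data
    X_all = np.concatenate([X, X_adv])
    y_all = np.concatenate([y, y_adv])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

# Robust accuracy at the strongest perturbation level
p = sigmoid(X @ w + b)
grad_x = np.outer(p - y, w)
X_test = X + 0.2 * np.sign(grad_x)
acc = np.mean((sigmoid(X_test @ w + b) > 0.5) == y)
```

The "parallel MAT" variant described in the abstract would instead train separate copies at each strength, trading memory for training time.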
Multi-Fidelity Local Surrogate Model for Computationally Efficient Microwave Component Design Optimization
In order to minimize the number of evaluations of the high-fidelity (fine) model in the optimization process, to increase the optimization speed, and to improve optimal-solution accuracy, a robust and computationally efficient multi-fidelity local surrogate-model optimization method is proposed. Based on the principle of response-surface approximation, the proposed method exploits multi-fidelity coarse models and polynomial interpolation to construct a series of local surrogate models. In the optimization process, local-region modeling and optimization are performed iteratively. A judgment factor is introduced to provide information for updating the local region size. The last local surrogate model is refined by space-mapping techniques to obtain the optimal design with high accuracy. The operation and efficiency of the approach are demonstrated through the design of a bandpass filter and a compact ultra-wide-band (UWB) multiple-input multiple-output (MIMO) antenna. The response of the optimized design of the fine model meets the design specifications. The proposed method not only has better convergence compared to an existing local surrogate method, but also reduces the computational cost substantially.
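The iterative local-region scheme above can be sketched in one dimension: fit a polynomial surrogate to cheap coarse-model samples in a trust region, minimize the surrogate, and shrink the region when the step stays interior (a simplistic stand-in for the paper's judgment factor). The model functions and region-update rule are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-ins: the "fine" model is expensive, the "coarse" model cheap.
def fine_model(x):
    return (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)

def coarse_model(x):
    return (x - 1.25) ** 2  # cheap, slightly misaligned approximation

x_center, radius = 0.0, 1.0
for it in range(8):
    # Sample the coarse model inside the current local region
    xs = np.linspace(x_center - radius, x_center + radius, 5)
    ys = coarse_model(xs)
    # Local surrogate: quadratic polynomial fit (coefficients highest degree first)
    c2, c1, c0 = np.polyfit(xs, ys, 2)
    x_new = -c1 / (2 * c2) if c2 > 0 else xs[np.argmin(ys)]
    x_new = np.clip(x_new, x_center - radius, x_center + radius)
    # Crude "judgment factor": shrink the region when the step stayed interior
    if abs(x_new - x_center) < 0.5 * radius:
        radius *= 0.5
    x_center = x_new

# A single fine-model evaluation validates the surrogate optimum
best = fine_model(x_center)
```

The fine model is evaluated only once here; the paper's space-mapping refinement step, which aligns the final surrogate with the fine model, is omitted for brevity.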
AutoShrink: A Topology-aware NAS for Discovering Efficient Neural Architecture
Resource is an important constraint when deploying Deep Neural Networks
(DNNs) on mobile and edge devices. Existing works commonly adopt the cell-based
search approach, which limits the flexibility of network patterns in learned
cell structures. Moreover, due to the topology-agnostic nature of existing
works, including both cell-based and node-based approaches, the search process
is time-consuming and the performance of the found architecture may be
sub-optimal. To address these problems, we propose AutoShrink, a topology-aware
Neural Architecture Search (NAS) method for discovering efficient building blocks of neural
architectures. Our method is node-based and thus can learn flexible network
patterns in cell structures within a topological search space. Directed Acyclic
Graphs (DAGs) are used to abstract DNN architectures and progressively optimize
the cell structure through edge shrinking. As the search space intrinsically
reduces as the edges are progressively shrunk, AutoShrink explores a more
flexible search space in even less search time. We evaluate AutoShrink on
image classification and language tasks by crafting ShrinkCNN and ShrinkRNN
models. ShrinkCNN is able to achieve up to 48% parameter reduction and save 34%
Multiply-Accumulates (MACs) on ImageNet-1K with comparable accuracy of
state-of-the-art (SOTA) models. Specifically, both ShrinkCNN and ShrinkRNN are
crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting
time of SOTA CNN and RNN models, respectively.
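The edge-shrinking idea, progressively removing edges from a DAG cell while keeping it functional, can be sketched as follows. The edge scores here are random placeholders; in the actual method they would be derived from training signal, and the stopping criterion is likewise an illustrative assumption.

```python
import random

# Hypothetical DAG cell: nodes 0..4, node 0 = input, node 4 = output.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}

def reachable(edge_set, src=0, dst=4):
    # BFS over the directed edges: is dst still reachable from src?
    frontier, seen = [src], {src}
    while frontier:
        u = frontier.pop()
        for (a, b) in edge_set:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return dst in seen

random.seed(0)
# Placeholder importance score per edge (assumption: the real method scores
# edges from training signal, not random numbers).
score = {e: random.random() for e in edges}

# Progressively shrink: drop the lowest-scoring edge whose removal keeps the
# cell connected from input to output.
while len(edges) > 3:
    for e in sorted(edges, key=score.get):
        if reachable(edges - {e}):
            edges = edges - {e}
            break
    else:
        break  # every remaining edge is needed for connectivity
```

Because each removal strictly shrinks the candidate set, later iterations search a smaller space, which is the intuition behind the reduced search time claimed above.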
LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning
Distributed learning systems have enabled training large-scale models over
large amounts of data in significantly shorter time. In this paper, we focus on
decentralized distributed deep learning systems and aim to achieve differential
privacy with good convergence rate and low communication cost. To achieve this
goal, we propose a new learning algorithm LEASGD (Leader-Follower Elastic
Averaging Stochastic Gradient Descent), which is driven by a novel
Leader-Follower topology and a differential privacy model. We provide a
theoretical analysis of the convergence rate and of the trade-off between
performance and privacy in the private setting. The experimental results show
that LEASGD outperforms the state-of-the-art decentralized learning algorithm
DPSGD by achieving steadily lower loss within the same number of iterations and
by reducing the communication cost by 30%. In addition, LEASGD spends less
differential-privacy budget and achieves higher final accuracy than DPSGD in
the private setting.
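A minimal single-process sketch of the leader-follower idea: each worker takes a noisy gradient step (the noise standing in for the differential-privacy mechanism), and followers are elastically pulled toward the current leader, the worker with the lowest loss. The objective, hyper-parameters, and noise scale are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy shared objective: squared distance to a common target.
target = np.array([2.0, -1.0])

def loss(w):
    return 0.5 * np.sum((w - target) ** 2)

def grad(w):
    return w - target

# Hypothetical hyper-parameters (illustrative only)
n_workers, lr, rho, sigma = 4, 0.1, 0.3, 0.01
workers = [rng.normal(size=2) for _ in range(n_workers)]

for step in range(50):
    # Leader = worker with the lowest current loss
    leader = min(range(n_workers), key=lambda i: loss(workers[i]))
    for i in range(n_workers):
        # Gradient step with Gaussian noise standing in for the DP mechanism
        g = grad(workers[i]) + rng.normal(scale=sigma, size=2)
        workers[i] = workers[i] - lr * g
        if i != leader:
            # Elastic-averaging pull of followers toward the leader
            workers[i] -= rho * (workers[i] - workers[leader])

avg = np.mean(workers, axis=0)
```

Only followers communicate with the leader here, which hints at why a leader-follower topology can cut communication relative to all-to-all averaging schemes.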
Positive association between Interleukin-8 -251A > T polymorphism and susceptibility to gastric carcinogenesis: a meta-analysis