115 research outputs found

    Self-Ensembling for 3D Point Cloud Domain Adaptation

    Recently, 3D point cloud learning has been a hot topic in computer vision and autonomous driving. Because it is difficult to manually annotate a high-quality large-scale 3D point cloud dataset, unsupervised domain adaptation (UDA), which aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain, is popular in 3D point cloud learning. However, the generalization and reconstruction errors caused by domain shift in a naively trained model are inevitable and substantially hinder the model from learning good representations. To address these issues, we propose an end-to-end self-ensembling network (SEN) for 3D point cloud domain adaptation tasks. Our SEN combines the advantages of Mean Teacher and semi-supervised learning, and introduces a soft classification loss and a consistency loss to achieve consistent generalization and accurate reconstruction. In SEN, a student network is trained collaboratively with supervised and self-supervised learning, while a teacher network enforces temporal consistency to learn useful representations and ensure the quality of point cloud reconstruction. Extensive experiments on several 3D point cloud UDA benchmarks show that our SEN outperforms state-of-the-art methods on both classification and segmentation tasks. Moreover, further analysis demonstrates that our SEN also achieves better reconstruction results.
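The Mean Teacher idea behind SEN can be sketched in a few lines: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency loss pushes the student's predictions toward the teacher's. The sketch below is a minimal, framework-agnostic illustration of those two components, not the paper's implementation; the weight arrays, `alpha`, and the MSE-over-probabilities form of the consistency loss are illustrative assumptions.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean Teacher: teacher weights are an exponential moving average
    of the student's, giving a temporally smoothed ensemble."""
    return alpha * teacher_w + (1 - alpha) * student_w

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(student_logits, teacher_logits):
    """Mean squared difference between class probabilities; the student
    is trained to match the teacher's more stable predictions."""
    return np.mean((softmax(student_logits) - softmax(teacher_logits)) ** 2)
```

After each student optimization step, `ema_update` is applied parameter-wise; the consistency loss is added to the supervised (soft classification) loss during training.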

    Adversarial Samples on Android Malware Detection Systems for IoT Devices

    Many IoT (Internet of Things) devices run Android or Android-like systems. With the continuous development of machine learning algorithms, learning-based Android malware detection systems for IoT devices have gradually increased. However, these learning-based detection models are often vulnerable to adversarial samples, so an automated testing framework is needed to help such systems undergo security analysis. Current methods for generating adversarial samples mostly require the models' training parameters, and most of them target image data. To solve this problem, we propose a testing framework for learning-based Android malware detection systems (TLAMD) for IoT devices. The key challenge is how to construct a suitable fitness function that generates an effective adversarial sample without affecting the features of the application. By introducing genetic algorithms and some technical improvements, our testing framework can generate adversarial samples for IoT Android applications with a success rate of nearly 100% and can perform black-box testing on the system.
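The core loop of such a black-box genetic attack can be illustrated concisely: candidates are binary feature vectors (e.g., requested permissions or API calls), mutation only adds features so the app's behaviour is preserved, and the fitness function trades off the detector's malware score against the size of the perturbation. This is a simplified sketch under those assumptions, not TLAMD itself; `detect_malware`, the penalty weight, and the 0.5 decision threshold are hypothetical.

```python
import random

def fitness(candidate, original, detect_malware):
    """Lower is better: reward fooling the detector, lightly penalize
    the number of perturbed features (keeps the app close to the original)."""
    perturbation = sum(c != o for c, o in zip(candidate, original))
    return detect_malware(candidate) + 0.01 * perturbation

def genetic_attack(original, detect_malware, pop_size=20, generations=50, mut_rate=0.05):
    """Black-box evasion: evolve a feature vector (only adding features,
    never removing them) until the detector's score drops below 0.5."""
    population = [original[:] for _ in range(pop_size)]
    for _ in range(generations):
        # mutation: flip 0 -> 1 (add a benign feature) with small probability
        for ind in population:
            for i in range(len(ind)):
                if ind[i] == 0 and random.random() < mut_rate:
                    ind[i] = 1
        population.sort(key=lambda ind: fitness(ind, original, detect_malware))
        best = population[0]
        if detect_malware(best) < 0.5:
            return best
        # selection: keep the fittest half, refill with copies of survivors
        survivors = population[:pop_size // 2]
        population = survivors + [s[:] for s in survivors]
    return population[0]
```

Because only the detector's output score is queried, no model gradients or training parameters are needed, which is what makes the test black-box.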

    Preparation of imine alkaloids from norditerpenoid alkaloids


    Resource allocation in information-centric wireless networking with D2D-enabled MEC: A deep reinforcement learning approach

    Recently, information-centric wireless networks (ICWNs) have become a promising next-generation Internet architecture, which allows network nodes to have computing and caching capabilities and to adapt to the growing mobile data traffic in 5G high-speed communication networks. However, the design of ICWNs still faces various challenges with respect to capacity and traffic. Therefore, mobile edge computing (MEC) and device-to-device (D2D) communications can be employed to help offload the core networks. This paper investigates the optimal policy for resource allocation in ICWNs by maximizing the spectrum efficiency and system capacity of the overall network. Due to the unknown and stochastic properties of the wireless channel environment, this problem is modeled as a Markov decision process. With continuous-valued state and action variables, the policy gradient approach is employed to learn the optimal policy through interactions with the environment. We first recognize the communication mode according to the location of the cached content, i.e., whether it is D2D mode or cellular mode. Then, we adopt a Gaussian distribution as the parameterization strategy to generate continuous stochastic actions for power selection. In addition, we use softmax to output the channel selection, maximizing system capacity and spectrum efficiency while avoiding interference to cellular users. Numerical experiments show that our learning method performs well in a D2D-enabled MEC system.
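The hybrid action space described above (continuous power via a Gaussian policy, discrete channel via softmax) can be sketched as two small policy heads. This is an illustrative toy under linear-parameterization assumptions, not the paper's network; the weight vectors `w_mu`, `w_sigma`, `w_ch`, and the power cap `p_max` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_power_action(state, w_mu, w_sigma, p_max=1.0):
    """Continuous power control: the policy outputs the mean and std of a
    Gaussian, samples a power level, and clips it to the feasible range."""
    mu = float(state @ w_mu)
    sigma = float(np.exp(state @ w_sigma))  # exp keeps the std positive
    power = rng.normal(mu, sigma)
    return float(np.clip(power, 0.0, p_max))

def softmax_channel_action(state, w_ch):
    """Discrete channel selection: softmax over channel scores gives a
    stochastic policy, so busy channels can be assigned low probability."""
    scores = state @ w_ch
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs)), probs
```

In a policy-gradient setup, the log-probabilities of both sampled actions would be weighted by the observed return to update the parameters.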

    A graph convolutional network-based deep reinforcement learning approach for resource allocation in a cognitive radio network

    Cognitive radio (CR) is a critical technique for resolving the conflict between the explosive growth of traffic and severe spectrum scarcity. Reasonable radio resource allocation with CR can effectively achieve spectrum sharing and co-channel interference (CCI) mitigation. In this paper, we propose a joint channel selection and power adaptation scheme for the underlay cognitive radio network (CRN), maximizing the data rate of all secondary users (SUs) while guaranteeing the quality of service (QoS) of primary users (PUs). To exploit the underlying topology of CRNs, we model the communication network as a dynamic graph, with random walks used to imitate the users' movements. Considering the lack of accurate channel state information (CSI), we use the user distance distribution contained in the graph to estimate CSI. Moreover, a graph convolutional network (GCN) is employed to extract the crucial interference features. Further, an end-to-end learning model is designed to carry out the resource allocation task directly, avoiding a mismatch between the extracted features and the task. Finally, the deep reinforcement learning (DRL) framework is adopted for model learning to explore the optimal resource allocation strategy. The simulation results verify the feasibility and convergence of the proposed scheme and show that its performance is significantly improved.
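The GCN feature-extraction step can be illustrated with a single standard graph-convolution layer: the adjacency matrix (users as nodes, interference links as edges) is symmetrically normalized with self-loops, then used to aggregate neighbourhood features before a linear map and ReLU. This is a generic GCN layer sketch, not the paper's architecture; the adjacency, feature, and weight matrices are assumptions for illustration.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: D^{-1/2} (A + I) D^{-1/2} aggregates
    each node's neighbourhood, then a linear map + ReLU extracts features."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # symmetric normalization
    return np.maximum(0.0, norm @ features @ weight)  # ReLU activation
```

Stacking such layers lets each user's representation incorporate multi-hop interference context, which the DRL agent then maps to channel and power decisions.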