156 research outputs found

    An Extensive Game-Based Resource Allocation for Securing D2D Underlay Communications

    Device-to-device (D2D) communication has become increasingly attractive due to its great potential to improve cellular communication performance. While resource allocation for improving spectrum efficiency has attracted much attention in D2D-related work, communication security, a key issue in system design, has not yet been well investigated. Recently, a few studies have shown that D2D users can serve as friendly jammers to enhance the security of cellular communication against eavesdropping attacks. However, few studies have considered the security of the D2D communications themselves. In this paper, we consider the secure resource allocation problem, in particular how to assign resources to cellular and D2D users so as to maximize system security. To solve this problem, we propose an extensive game-based algorithm aimed at strengthening the security of both cellular and D2D communications via system resource allocation. Finally, simulation results show that the proposed method efficiently improves overall system security compared with existing approaches.
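
    To make the game-theoretic idea concrete, the sketch below shows backward induction in a toy two-stage extensive-form game over channel assignments. The channel set, gains, and secrecy-rate payoff are hypothetical stand-ins for illustration, not the paper's model.

```python
# Minimal sketch of backward induction in a two-stage extensive-form
# resource-allocation game (toy model, not the paper's formulation).
import math

CHANNELS = [0, 1, 2]                     # hypothetical resource blocks
CELL_GAIN = {0: 0.9, 1: 0.6, 2: 0.8}     # toy channel gains for the cellular user
D2D_GAIN = {0: 0.5, 1: 0.7, 2: 0.4}      # toy channel gains for the D2D pair
EVE_GAIN = {0: 0.3, 1: 0.2, 2: 0.5}      # toy channel gains for the eavesdropper

def secrecy_rate(gain_legit, gain_eve, interference=0.0, noise=0.1):
    """Toy secrecy rate: legitimate capacity minus eavesdropper capacity."""
    c_legit = math.log2(1 + gain_legit / (noise + interference))
    c_eve = math.log2(1 + gain_eve / noise)
    return max(c_legit - c_eve, 0.0)

def payoff(cell_ch, d2d_ch):
    """System security utility for a joint channel assignment."""
    interference = 0.2 if cell_ch == d2d_ch else 0.0   # co-channel interference
    return (secrecy_rate(CELL_GAIN[cell_ch], EVE_GAIN[cell_ch], interference)
            + secrecy_rate(D2D_GAIN[d2d_ch], EVE_GAIN[d2d_ch], interference))

def solve():
    """Backward induction: the cellular user (leader) anticipates the D2D
    pair's (follower's) best response on each branch of the game tree."""
    best = None
    for cell_ch in CHANNELS:
        d2d_best = max(CHANNELS, key=lambda ch: payoff(cell_ch, ch))
        value = payoff(cell_ch, d2d_best)
        if best is None or value > best[0]:
            best = (value, cell_ch, d2d_best)
    return best

if __name__ == "__main__":
    value, cell_ch, d2d_ch = solve()
    print(f"assignment: cellular->ch{cell_ch}, D2D->ch{d2d_ch}, utility={value:.3f}")
```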

    GPT-NAS: Neural Architecture Search with the Generative Pre-Trained Model

    Neural Architecture Search (NAS) has emerged as an effective method for automatically designing optimal neural network architectures. Although neural architectures have achieved human-level performance in several tasks, few of them are obtained through NAS. The main reason is the huge search space of neural architectures, which makes NAS algorithms inefficient. This work presents a novel architecture search algorithm, called GPT-NAS, that optimizes neural architectures with a Generative Pre-Trained (GPT) model. In GPT-NAS, we assume that a generative model pre-trained on a large-scale corpus can learn the fundamental rules of building neural architectures. GPT-NAS therefore leverages the GPT model to propose reasonable architecture components given a basic architecture, which largely reduces the search space by introducing prior knowledge into the search process. Extensive experimental results show that our GPT-NAS method significantly outperforms seven manually designed neural architectures and thirteen architectures provided by competing NAS methods. In addition, our ablation study indicates that the proposed algorithm improves the performance of finely tuned neural architectures by up to about 12% compared to those without GPT, further demonstrating its effectiveness in searching neural architectures.
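
    As an illustration of how a generative proposal model can prune a NAS search space, the following sketch runs a small beam search in which a stub function stands in for the pre-trained GPT model; the operation set, scoring function, and beam width are assumptions for illustration only.

```python
# Illustrative sketch of a generator-guided architecture search loop (not the
# authors' code): a proposal function suggests components for each slot of a
# skeleton, shrinking the search space versus enumerating every combination.
import random

COMPONENTS = ["conv3x3", "conv5x5", "depthwise", "maxpool", "skip"]  # toy op set

def gpt_propose(prefix, k=2):
    """Stand-in for the pre-trained generative model: returns k plausible next
    components given the architecture built so far (here just a random stub)."""
    return random.sample(COMPONENTS, k)

def evaluate(architecture):
    """Stand-in for training/validating a candidate; returns a toy score."""
    return sum(len(op) for op in architecture) + random.random()

def search(num_slots=4, k=2, beam=4):
    candidates = [[]]
    for _ in range(num_slots):
        expanded = []
        for arch in candidates:
            for op in gpt_propose(arch, k):      # prior-guided proposals only
                expanded.append(arch + [op])
        expanded.sort(key=evaluate, reverse=True)
        candidates = expanded[:beam]             # keep the best few candidates
    return max(candidates, key=evaluate)

if __name__ == "__main__":
    print("best architecture found:", search())
```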

    CoF-CoT: Enhancing Large Language Models with Coarse-to-Fine Chain-of-Thought Prompting for Multi-domain NLU Tasks

    While Chain-of-Thought prompting is popular in reasoning tasks, its application to Large Language Models (LLMs) in Natural Language Understanding (NLU) is under-explored. Motivated by the multi-step reasoning of LLMs, we propose a Coarse-to-Fine Chain-of-Thought (CoF-CoT) approach that breaks NLU tasks down into multiple reasoning steps in which LLMs can learn to acquire and leverage essential concepts to solve tasks at different granularities. Moreover, we propose leveraging semantic-based Abstract Meaning Representation (AMR) structured knowledge as an intermediate step to capture the nuances and diverse structures of utterances, and to understand the connections between their varying levels of granularity. Our proposed approach is demonstrated to be effective in helping LLMs adapt to multi-grained NLU tasks under both zero-shot and few-shot multi-domain settings.
    Comment: Accepted at EMNLP 2023 (Main Conference).
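
    A minimal sketch of a coarse-to-fine prompting pipeline for an intent-and-slot NLU task is shown below; the prompt wording, the AMR intermediate step, and the llm() stub are illustrative assumptions, not the authors' prompts.

```python
# Hedged sketch of coarse-to-fine chain-of-thought prompting for NLU.
def llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. an API client); here it just echoes
    the prompt's last line so the pipeline runs end to end without a model."""
    return prompt.strip().splitlines()[-1]

def cof_cot(utterance: str) -> dict:
    # Step 1 (coarse): identify the overall intent of the utterance.
    intent = llm(f"Utterance: {utterance}\nStep 1: State the coarse intent.")
    # Step 2 (intermediate): produce an AMR-style structured sketch of the
    # utterance to expose its predicate-argument structure.
    amr = llm(f"Utterance: {utterance}\nIntent: {intent}\n"
              f"Step 2: Give an AMR-like graph of the utterance.")
    # Step 3 (fine): extract slot values conditioned on the coarse intent and
    # the intermediate structure.
    slots = llm(f"Utterance: {utterance}\nIntent: {intent}\nAMR: {amr}\n"
                f"Step 3: List the slot-value pairs.")
    return {"intent": intent, "amr": amr, "slots": slots}

if __name__ == "__main__":
    print(cof_cot("book a table for two in Prague tomorrow evening"))
```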

    Aiming in Harsh Environments: A New Framework for Flexible and Adaptive Resource Management

    Harsh environments impose a unique set of challenges on networking strategies. In such circumstances, the impact of the environment on network resources and the need for long-term unattended maintenance have not yet been well investigated. To address these challenges, we propose a flexible and adaptive resource management framework that incorporates environment-awareness functionality. In particular, we propose a new network architecture and introduce new functionalities beyond the traditional network components. The novelties of the proposed architecture include a deep-learning-based environment resource prediction module and a self-organized service management module. Specifically, the available network resources under various environmental conditions are predicted using the prediction module; based on this prediction, an environment-oriented resource allocation method is developed to optimize the system utility. To demonstrate the effectiveness and efficiency of the proposed functionalities, we evaluate the method through a case-study experiment. Finally, we introduce several promising directions for resource management in harsh environments that extend this work.
    Comment: 8 pages, 4 figures, to appear in IEEE Network Magazine, 202
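
    The sketch below illustrates the two-stage idea described above, with a stand-in linear predictor in place of the deep-learning prediction module and a weighted log-utility allocation whose closed-form optimum is a proportional split; all coefficients, units, and service weights are hypothetical.

```python
# Illustrative two-stage sketch: (1) predict usable capacity from environment
# readings, (2) allocate it across services by maximizing a weighted log-utility.
def predict_capacity(env_features):
    """Stand-in for the deep-learning prediction module: a fixed linear map
    from environment readings (e.g. temperature, humidity) to usable capacity."""
    weights = [-0.4, -0.2]           # hypothetical sensitivity coefficients
    base_capacity = 100.0            # Mbps under nominal conditions (assumed)
    return max(base_capacity + sum(w * f for w, f in zip(weights, env_features)), 0.0)

def allocate(capacity, service_weights):
    """Maximize sum_i w_i * log(x_i) subject to sum_i x_i = capacity.
    The optimum is the proportional split x_i = capacity * w_i / sum(w)."""
    total = sum(service_weights)
    return [capacity * w / total for w in service_weights]

if __name__ == "__main__":
    env = [35.0, 80.0]                           # toy readings: 35 C, 80% humidity
    cap = predict_capacity(env)
    shares = allocate(cap, [3.0, 2.0, 1.0])      # three services, priority weights
    print(f"predicted capacity: {cap:.1f} Mbps, allocation: "
          + ", ".join(f"{s:.1f}" for s in shares))
```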

    An Entropy-Awareness Meta-Learning Method for SAR Open-Set ATR

    Existing synthetic aperture radar automatic target recognition (SAR ATR) methods have been effective for the classification of seen target classes. However, it is more meaningful and challenging to distinguish unseen target classes, i.e., the open-set recognition (OSR) problem, which is an urgent issue for practical SAR ATR. The key to OSR is to effectively establish the exclusiveness of the feature distribution of the known classes. In this letter, we propose an entropy-awareness meta-learning method that improves this exclusiveness, making our method effective not only for classifying the seen classes but also for rejecting unseen ones. Through meta-learning tasks, the proposed method learns to construct a feature space for the dynamically assigned known classes; the tasks require this feature space to reject all samples that do not belong to the known classes. At the same time, the proposed entropy-awareness loss helps the model enhance the feature space with effective and robust discrimination between the known and unknown classes. Therefore, our method constructs a dynamic feature space that discriminates between known and unknown classes, simultaneously classifying the dynamically assigned known classes and rejecting the unknown classes. Experiments conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset show the effectiveness of our method for SAR OSR.
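
    The following sketch shows one plausible form of an entropy-aware open-set objective and rejection rule in PyTorch; the exact loss and episode construction in the paper may differ, so treat the weighting, the penalty term, and the rejection threshold as assumptions.

```python
# Hedged sketch of an entropy-aware open-set objective and rejection rule.
import torch
import torch.nn.functional as F

def entropy(logits):
    """Shannon entropy of the softmax distribution, per sample."""
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum(dim=-1)

def episode_loss(known_logits, known_labels, unknown_logits, lam=0.5):
    """Cross-entropy on the episode's known classes plus a term that pushes
    predictions on 'other' samples toward high entropy (near-uniform)."""
    ce = F.cross_entropy(known_logits, known_labels)
    num_classes = known_logits.size(-1)
    max_entropy = torch.log(torch.tensor(float(num_classes)))
    ent_penalty = (max_entropy - entropy(unknown_logits)).mean()
    return ce + lam * ent_penalty

def reject_unknown(logits, threshold):
    """Open-set decision rule: label a sample -1 ('unknown') if its predictive
    entropy exceeds the threshold, otherwise take the argmax class."""
    ent = entropy(logits)
    preds = logits.argmax(dim=-1)
    return torch.where(ent > threshold, torch.full_like(preds, -1), preds)
```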

    Semi-Supervised SAR ATR Framework with Transductive Auxiliary Segmentation

    Convolutional neural networks (CNNs) have achieved high performance in synthetic aperture radar (SAR) automatic target recognition (ATR). However, the performance of CNNs depends heavily on a large amount of training data; insufficient labeled training SAR images limit the recognition performance and even invalidate some ATR methods. Furthermore, with only a few labeled training samples, many existing CNNs become ineffective. To address these challenges, we propose a Semi-supervised SAR ATR Framework with transductive Auxiliary Segmentation (SFAS). The proposed framework focuses on exploiting transductive generalization on the available unlabeled samples, with an auxiliary loss serving as a regularizer. Through auxiliary segmentation of unlabeled SAR samples and an information residue loss (IRL) during training, the framework employs the proposed training loop to gradually exploit the combined information of recognition and segmentation, constructing a helpful inductive bias and achieving high performance. Experiments conducted on the MSTAR dataset show the effectiveness of the proposed SFAS for few-shot learning. A recognition accuracy of 94.18% is achieved with 20 training samples per class, together with accurate segmentation results. Under extended operating condition (EOC) variants, the recognition ratios exceed 88.00% with 10 training samples per class.
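
    The sketch below illustrates the shared-backbone, two-head training structure described above in PyTorch; the information residue loss is not reproduced, and a plain auxiliary segmentation loss with a hypothetical weight stands in for it.

```python
# Illustrative multi-task skeleton: shared encoder, recognition head on
# labeled SAR chips, auxiliary segmentation head on unlabeled chips
# (a stand-in regularizer, not the paper's IRL).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBackboneATR(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(            # toy CNN encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, num_classes))
        self.seg_head = nn.Conv2d(32, 2, 1)      # target / background mask

    def forward(self, x):
        feats = self.encoder(x)
        return self.cls_head(feats), self.seg_head(feats)

def training_step(model, labeled, labels, unlabeled, pseudo_masks, alpha=0.3):
    """One step: supervised recognition loss on labeled chips plus an auxiliary
    segmentation loss on unlabeled chips acting as a transductive regularizer."""
    logits, _ = model(labeled)
    cls_loss = F.cross_entropy(logits, labels)
    _, seg_logits = model(unlabeled)
    seg_loss = F.cross_entropy(seg_logits, pseudo_masks)
    return cls_loss + alpha * seg_loss
```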