
    Role Playing Learning for Socially Concomitant Mobile Robot Navigation

    In this paper, we present the Role Playing Learning (RPL) scheme for a mobile robot to navigate socially with its human companion in populated environments. Neural networks (NN) are constructed to parameterize a stochastic policy that directly maps sensory data collected by the robot to its velocity outputs, while respecting a set of social norms. An efficient simulated learning environment is built with maps and pedestrian trajectories collected from a number of real-world crowd data sets. In each learning iteration, a robot equipped with the NN policy is created virtually in the learning environment to play the role of an accompanying pedestrian and navigate towards a goal in a socially concomitant manner. We therefore call this process Role Playing Learning, and formulate it under a reinforcement learning (RL) framework. The NN policy is optimized end-to-end using Trust Region Policy Optimization (TRPO), taking into account the imperfection of the robot's sensor measurements. Simulation and experimental results are provided to demonstrate the efficacy and superiority of our method.
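
    To make the setup concrete, here is a minimal Python/NumPy sketch of a stochastic Gaussian policy that maps a flattened sensor observation to (linear, angular) velocity commands, the object that TRPO then optimizes under a trust-region constraint. This is not the authors' code; the observation layout, network size, and action bounds are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    OBS_DIM, HID, ACT_DIM = 64, 32, 2          # assumed observation/hidden/action sizes
    W1 = rng.normal(0.0, 0.1, (OBS_DIM, HID))  # policy parameters (theta)
    b1 = np.zeros(HID)
    W2 = rng.normal(0.0, 0.1, (HID, ACT_DIM))
    b2 = np.zeros(ACT_DIM)
    log_std = np.full(ACT_DIM, -0.5)           # state-independent exploration noise

    def policy(obs):
        """Mean velocity command (bounded in [-1, 1]) and log-std of a Gaussian policy."""
        h = np.tanh(obs @ W1 + b1)
        return np.tanh(h @ W2 + b2), log_std

    def sample_action(obs):
        """Sample (v, omega) and its log-probability, as needed by the TRPO surrogate."""
        mean, ls = policy(obs)
        std = np.exp(ls)
        a = mean + std * rng.standard_normal(ACT_DIM)
        logp = -0.5 * np.sum(((a - mean) / std) ** 2 + 2 * ls + np.log(2 * np.pi))
        return a, logp

    obs = rng.normal(size=OBS_DIM)             # stand-in for lidar/pedestrian features
    action, logp = sample_action(obs)
    print(action, logp)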

    Channel Covariance Matrix Estimation via Dimension Reduction for Hybrid MIMO MmWave Communication Systems

    Hybrid massive MIMO structures with lower hardware complexity and power consumption have been considered as a potential candidate for millimeter wave (mmWave) communications. Channel covariance information can be used for designing transmitter precoders, receiver combiners, channel estimators, etc. However, hybrid structures allow only a lower-dimensional signal to be observed, which complicates channel covariance matrix estimation. In this paper, we formulate channel covariance estimation as a structured low-rank matrix sensing problem via Kronecker product expansion and use a low-complexity algorithm to solve this problem. Numerical results with uniform linear arrays (ULA) and uniform square planar arrays (USPA) are provided to demonstrate the effectiveness of our proposed method.
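
    As a rough illustration of the underlying idea (this is not the paper's Kronecker-expansion algorithm; the antenna count, rank, combiners, and step size are all assumptions), the Python sketch below recovers a low-rank covariance from a few dimension-reduced observation covariances R_t = W_t^H R W_t with a projected-gradient loop.

    import numpy as np

    rng = np.random.default_rng(1)
    N, M, r, T = 32, 8, 3, 12                   # antennas, RF chains, rank, training blocks

    # Ground-truth low-rank covariance (for the demo only).
    A = rng.normal(size=(N, r)) + 1j * rng.normal(size=(N, r))
    R_true = A @ A.conj().T

    # One random phase-shifter combiner per training block; in practice each
    # low-dimensional covariance would be estimated from received samples.
    Ws = [np.exp(2j * np.pi * rng.random((N, M))) / np.sqrt(N) for _ in range(T)]
    Rys = [W.conj().T @ R_true @ W for W in Ws]

    def project_psd_rank(X, rank):
        """Project onto Hermitian PSD matrices with at most the given rank."""
        X = (X + X.conj().T) / 2
        w, V = np.linalg.eigh(X)
        w = np.clip(w, 0.0, None)
        idx = np.argsort(w)[::-1][:rank]
        return (V[:, idx] * w[idx]) @ V[:, idx].conj().T

    # Projected gradient on the least-squares fit of all reduced covariances.
    R = np.zeros((N, N), dtype=complex)
    for _ in range(2000):
        grad = sum(W @ (W.conj().T @ R @ W - Ry) @ W.conj().T for W, Ry in zip(Ws, Rys)) / T
        R = project_psd_rank(R - 0.1 * grad, r)

    print("relative error:", np.linalg.norm(R - R_true) / np.linalg.norm(R_true))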

    Matrix Completion-Based Channel Estimation for MmWave Communication Systems With Array-Inherent Impairments

    Hybrid massive MIMO structures with reduced hardware complexity and power consumption have been widely studied as a potential candidate for millimeter wave (mmWave) communications. Channel estimators that require knowledge of the array response, such as those using compressive sensing (CS) methods, may suffer from performance degradation when array-inherent impairments bring unknown phase errors and gain errors to the antenna elements. In this paper, we design matrix completion (MC)-based channel estimation schemes which are robust against array-inherent impairments. We first design an open-loop training scheme that samples entries from the effective channel matrix at random and is compatible with the phase shifter-based hybrid system. Leveraging the low-rank property of the effective channel matrix, we then design a channel estimator based on the generalized conditional gradient (GCG) framework and the alternating minimization (AltMin) approach. The resulting estimator is immune to array-inherent impairments and can be applied to systems with arbitrary array shapes, since it does not depend on the array response. In addition, we extend our design to sample a transformed channel matrix following the concept of inductive matrix completion (IMC), which can be solved efficiently using our proposed estimator and achieves similar performance while requiring a lower dynamic range of the transmission power per antenna. Numerical results demonstrate the advantages of our proposed MC-based channel estimators in terms of estimation performance, computational complexity and robustness against array-inherent impairments over the orthogonal matching pursuit (OMP)-based CS channel estimator.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
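
    For intuition about the completion step (this is not the paper's GCG/AltMin estimator or its open-loop training design; the matrix sizes, rank, and sampling ratio are assumptions), a plain alternating-least-squares completion of a low-rank channel from randomly sampled entries looks like this in Python:

    import numpy as np

    rng = np.random.default_rng(2)
    Nr, Nt, r = 32, 16, 3                       # receive/transmit antennas, channel rank

    # Low-rank ground-truth channel and a random 40% entry-sampling mask.
    H_true = (rng.normal(size=(Nr, r)) + 1j * rng.normal(size=(Nr, r))) @ \
             (rng.normal(size=(r, Nt)) + 1j * rng.normal(size=(r, Nt)))
    mask = rng.random((Nr, Nt)) < 0.4
    Y = np.where(mask, H_true, 0)

    # Alternating minimization on the factorization H ~ U @ V.T:
    # fix V and solve a small least-squares problem per row of U, then swap roles.
    U = rng.normal(size=(Nr, r)) + 1j * rng.normal(size=(Nr, r))
    V = rng.normal(size=(Nt, r)) + 1j * rng.normal(size=(Nt, r))
    for _ in range(50):
        for i in range(Nr):
            obs = mask[i]
            U[i] = np.linalg.lstsq(V[obs], Y[i, obs], rcond=None)[0]
        for j in range(Nt):
            obs = mask[:, j]
            V[j] = np.linalg.lstsq(U[obs], Y[obs, j], rcond=None)[0]

    H_hat = U @ V.T
    print("relative error:", np.linalg.norm(H_hat - H_true) / np.linalg.norm(H_true))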

    Quantum calculation of axion-photon transition in electromagnetodynamics for cavity haloscope

    The Witten effect implies that magnetic monopoles carry electric charge and suggests a possible relationship between the axion and the dyon. The axion-dyon dynamics can be reliably built on quantum electromagnetodynamics (QEMD), which was developed by Schwinger and Zwanziger in the 1960s. A generic low-energy axion-photon effective field theory can also be realized in the language of "generalized symmetries" with higher-form symmetries and background gauge fields. In this work, we carry out the quantum calculation of the axion to single-photon transition rate inside a homogeneous electromagnetic field in terms of the new axion interaction Hamiltonian in QEMD. This quantum calculation clearly shows the enhancement of the conversion rate provided by a resonant cavity in axion haloscope experiments. We also show the promising potential of cavity searches for the new axion-photon couplings in QEMD.
    Comment: 15 pages, 2 figures
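
    For orientation, the resonant enhancement referred to here is familiar from the conventional (QED) cavity haloscope, where the expected signal power takes the standard form quoted below; the paper's point is to generalize this picture to the QEMD couplings, so the expression is background rather than a result of the work:

    P_{\rm sig} \simeq g_{a\gamma\gamma}^{2}\,\frac{\rho_a}{m_a}\,B_0^{2}\,V\,C_{mnl}\,\min(Q_L, Q_a),

    where \rho_a is the local axion density, B_0 the static cavity magnetic field, V the cavity volume, C_{mnl} the mode form factor, and Q_L (Q_a) the loaded cavity (axion) quality factor.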

    Axion-like particle from primordial black hole evaporation and its detection in neutrino experiments

    Primordial black holes (PBHs) act as a novel source that radiates light elementary particles with energies in the region of a few hundred MeV. We explore the possibility that axion-like particles (ALPs) with mass below 1 MeV are produced from PBH evaporation. The absorption of these light ALPs in underground detector targets then induces energetic photoelectron signatures in current and future neutrino experiments. Using the PBH ALP event rate, we place general exclusion limits on the axion couplings at Super-K and Hyper-K. We also translate these limits into an upper bound on the fraction of dark matter composed of PBHs, f_{\rm PBH}.
    Comment: 16 pages, 5 figures
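
    As standard Hawking-evaporation background (not a result of the paper), the temperature that sets the typical energy of the emitted particles is inversely proportional to the PBH mass (in natural units),

    T_{\rm PBH} = \frac{1}{8\pi G\, M_{\rm PBH}} \simeq 1.06~{\rm GeV}\left(\frac{10^{13}\,{\rm g}}{M_{\rm PBH}}\right),

    so PBHs in the roughly 10^{13}-10^{15} g range have Hawking temperatures between the GeV and the ten-MeV scale, consistent with emission energies of a few hundred MeV as invoked above.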

    Boosting Commit Classification with Contrastive Learning

    Commit Classification (CC) is an important task in software maintenance, which helps software developers classify code changes into different types according to their nature and purpose. It allows developers to understand better how their development efforts are progressing, identify areas where they need improvement, and make informed decisions about when and how to release new software versions. However, existing models need large amounts of manually labeled data for fine-tuning and ignore sentence-level semantic information, which is often essential for discovering the differences between diverse commits. Therefore, solving CC in few-shot scenarios remains challenging. To solve the above problems, we propose a contrastive learning-based commit classification framework. Firstly, we generate K sentences and pseudo-labels according to the labels of the dataset, which aims to enhance the dataset. Secondly, we randomly group the augmented data N times to compare their similarity with the positive T_p^{|C|} and negative T_n^{|C|} samples. We utilize individual pre-trained sentence transformers (STs) to efficiently obtain the sentence-level embeddings from the different features respectively. Finally, we adopt the cosine similarity function to constrain the distribution of the vectors, so that similar vectors lie closer together. The lightly fine-tuned model is then applied to the label prediction of incoming commits. Extensive experiments on two openly available datasets demonstrate that our framework can solve the CC problem simply but effectively in few-shot scenarios, while achieving state-of-the-art (SOTA) performance and improving the adaptability of the model without requiring a large number of training samples for fine-tuning. The code, data, and trained models are available at https://github.com/AppleMax1992/CommitFit.
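
    As a toy illustration of the similarity-based prediction step (not the released CommitFit code; the label set, embedding dimension, and prototype vectors are random stand-ins for sentence-transformer outputs), the final cosine-similarity assignment can be sketched in Python as:

    import numpy as np

    rng = np.random.default_rng(3)
    EMB_DIM = 384                                # typical sentence-transformer size
    CLASSES = ["fix", "feature", "refactor"]     # assumed label set

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Positive prototype embedding per class; in the paper these come from
    # augmented, pseudo-labeled sentences encoded by pre-trained sentence transformers.
    prototypes = {c: rng.normal(size=EMB_DIM) for c in CLASSES}

    def classify(commit_embedding):
        """Assign the class whose positive prototype is most similar to the commit."""
        scores = {c: cosine(commit_embedding, p) for c, p in prototypes.items()}
        return max(scores, key=scores.get), scores

    commit_vec = rng.normal(size=EMB_DIM)        # stand-in for an encoded commit message
    label, scores = classify(commit_vec)
    print(label, {c: round(s, 3) for c, s in scores.items()})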

    Incorporating Prompt Tuning for Commit Classification with Prior Knowledge

    Commit Classification (CC) is an important task in software maintenance, since it helps software developers classify code changes into different types according to their nature and purpose. This allows them to better understand how their development efforts are progressing and identify areas where they need improvement. However, existing methods are all discriminative models, usually with complex architectures that require additional output layers to produce class label probabilities. Moreover, they require a large amount of labeled data for fine-tuning, and it is difficult to learn effective classification boundaries in the case of limited labeled data. To solve the above problems, we propose a generative framework that incorporates prompt tuning for commit classification with prior knowledge (IPCK, https://github.com/AppleMax1992/IPCK), which simplifies the model structure and learns features across different tasks. It can still reach SOTA performance with only limited samples. Firstly, we propose a generative framework based on T5. This encoder-decoder construction unifies different CC tasks into a text-to-text problem, which simplifies the structure of the model by not requiring an extra output layer. Secondly, instead of fine-tuning, we design a prompt-tuning solution which can be adopted in few-shot scenarios with only limited samples. Furthermore, we incorporate prior knowledge via an external knowledge graph to map the probabilities of words onto the final labels in the verbalizer step, which improves performance in few-shot scenarios. Extensive experiments on two openly available datasets show that our framework can solve the CC problem simply but effectively in few-shot and zero-shot scenarios, while improving the adaptability of the model without requiring a large amount of training samples for fine-tuning.
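
    A minimal sketch of the text-to-text framing, assuming the Hugging Face transformers API with an off-the-shelf t5-small checkpoint and an invented prompt template and label set (IPCK additionally applies prompt-tuning and a knowledge-graph-based verbalizer, which are omitted here):

    from transformers import AutoTokenizer, T5ForConditionalGeneration

    tok = AutoTokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    LABEL_WORDS = ["corrective", "perfective", "adaptive"]   # assumed label set

    def classify_commit(message: str) -> str:
        """Wrap the commit message in a prompt and let the encoder-decoder generate the label word."""
        prompt = f"classify commit: {message} The type of this change is"
        inputs = tok(prompt, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=4)
        text = tok.decode(out[0], skip_special_tokens=True).lower()
        # Map the generated text onto the closest label word; IPCK refines this
        # step with prompt-tuning and an external knowledge graph.
        for label in LABEL_WORDS:
            if label in text:
                return label
        return text  # fall back to the raw generation

    print(classify_commit("fix null pointer dereference in parser"))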