    Single-valued Wave Function of Aharonov-Bohm Problem

    Quantization of Even-Dimensional Actions of Chern-Simons Form with Infinite Reducibility

    We investigate the quantization of even-dimensional topological actions of Chern-Simons form which were proposed previously. We quantize the actions by Lagrangian and Hamiltonian formulations à la Batalin, Fradkin and Vilkovisky. The models turn out to be infinitely reducible, and thus we need an infinite number of ghosts and antighosts. The minimal actions of the Lagrangian formulation which satisfy the master equation of Batalin and Vilkovisky have the same Chern-Simons form as the starting classical actions. In the Hamiltonian formulation we have used the formulation of cohomological perturbation and explicitly shown that the gauge-fixed actions of both formulations coincide even though the classical action breaks Dirac's regularity condition. We find an interesting relation: the BRST charge of the Hamiltonian formulation is the odd-dimensional fermionic counterpart of the topological action of Chern-Simons form. Although the quantization of two-dimensional models which include both bosonic and fermionic gauge fields is investigated in detail, it is straightforward to extend the quantization to arbitrary even dimensions. This completes the quantization of the previously proposed topological gravities in two and four dimensions. Comment: 50 pages, LaTeX, no figures
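
    For reference, the master equation mentioned above is the classical Batalin-Vilkovisky condition on the minimal action S. Stated generically in terms of the antibracket over fields \phi^A and antifields \phi^*_A (the paper's specific field content is not reproduced here), it reads

        (S, S) \equiv 2\,\frac{\partial_r S}{\partial \phi^A}\,\frac{\partial_l S}{\partial \phi^*_A} = 0.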

    Generalized Gauge Theories and Weinberg-Salam Model with Dirac-Kähler Fermions

    We extend the previously proposed generalized gauge theory formulation of Chern-Simons type and topological Yang-Mills type actions to Yang-Mills type actions. We formulate gauge fields and Dirac-Kähler matter fermions in terms of differential forms of all degrees. The simplest version of the model, which includes only zero- and one-form gauge fields accommodated with the graded Lie algebra of the SU(2|1) supergroup, leads to the Weinberg-Salam model. Thus the Weinberg-Salam model formulated by noncommutative geometry is a particular example of the present formulation. Comment: 33 pages, LaTeX
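
    Schematically, a generalized gauge theory of this type packages differential forms of all degrees into single generalized fields; the following display is an illustrative sketch of that structure (generic conventions, not the paper's exact graded algebra):

        \mathcal{A} = \sum_{p \ge 0} A^{(p)}, \qquad
        \mathcal{F} = d\mathcal{A} + \mathcal{A}^2, \qquad
        \delta \mathcal{A} = d\mathcal{V} + [\mathcal{A}, \mathcal{V}],

    where \mathcal{V} is a generalized gauge parameter likewise built from forms of all degrees.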

    Conversion of T cells to B cells by inactivation of polycomb-mediated epigenetic suppression of the B-lineage program

    In general, cell fate is determined primarily by transcription factors, with epigenetic mechanisms subsequently fixing the status. While the importance of transcription factors in controlling cell fate has been well characterized, epigenetic regulation of cell fate maintenance remains to be elucidated. Here we provide a clear case of fate conversion, in which the inactivation of polycomb-mediated epigenetic regulation results in conversion of T-lineage progenitors to the B-cell fate. In T-cell-specific Ring1A/B-deficient mice, T-cell development was severely blocked at an immature stage. We found that these developmentally arrested T-cell precursors gave rise to functional B cells upon transfer to immunodeficient mice. We further demonstrated that the arrest was almost completely canceled by additional deletion of Pax5. These results indicate that the maintenance of T-cell fate critically requires epigenetic suppression of the B-lineage gene program. This work was supported in part by grants from the Japan Society for the Promotion of Science (24689042 to T.I.), the Japan Science and Technology Agency (T.I.), the RIKEN Center for Integrative Medical Sciences (IMS) Young Chief Investigator program (T.I.), and the Kanae Foundation for the Promotion of Medical Science (T.I.). Peer reviewed

    Game-Theoretic Understanding of Misclassification

    This paper analyzes various types of image misclassification from a game-theoretic view. In particular, we consider the misclassification of clean, adversarial, and corrupted images and characterize it through the distribution of multi-order interactions. We discover that the distribution of multi-order interactions varies across the types of misclassification. For example, misclassified adversarial images have a higher strength of high-order interactions than correctly classified clean images, which indicates that adversarial perturbations create spurious features that arise from complex cooperation between pixels. By contrast, misclassified corrupted images have a lower strength of low-order interactions than correctly classified clean images, which indicates that corruptions break the local cooperation between pixels. We also provide the first analysis of Vision Transformers using interactions. We find that Vision Transformers show a different tendency in the distribution of interactions from that of CNNs, which implies that they exploit features that CNNs do not use for prediction. Our study demonstrates that the recent game-theoretic analysis of deep learning models can be broadened to analyze various malfunctions of deep learning models, including Vision Transformers, by using the distribution, order, and sign of interactions. Comment: 15 pages, 8 figures
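
    For concreteness, the order-m interaction between two inputs i and j is commonly estimated as their expected marginal cooperation over random contexts S with |S| = m. Below is a minimal Monte Carlo sketch; the wrapper f and its masking scheme are assumptions for illustration, not the paper's code.

        import random

        def multi_order_interaction(f, n_players, i, j, order, n_samples=100):
            # Monte Carlo estimate of I^(m)(i, j) =
            # E_{|S|=m} [ f(S + {i,j}) - f(S + {i}) - f(S + {j}) + f(S) ].
            # f maps a frozenset of kept inputs to a scalar model output,
            # e.g. a logit computed with all other inputs baseline-masked.
            others = [k for k in range(n_players) if k != i and k != j]
            total = 0.0
            for _ in range(n_samples):
                s = frozenset(random.sample(others, order))
                total += f(s | {i, j}) - f(s | {i}) - f(s | {j}) + f(s)
            return total / n_samples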

    Adversarial joint attacks on legged robots

    We address adversarial attacks on the actuators at the joints of legged robots trained by deep reinforcement learning. Vulnerability to joint attacks can significantly impact the safety and robustness of legged robots. In this study, we demonstrate that adversarial perturbations to the torque control signals of the actuators can significantly reduce the rewards and cause walking instability in robots. To find the adversarial torque perturbations, we develop black-box adversarial attacks, in which the adversary cannot access the neural networks trained by deep reinforcement learning. The black-box attacks can be applied to legged robots regardless of the architecture and algorithms of deep reinforcement learning. We employ three search methods for the black-box adversarial attacks: random search, differential evolution, and numerical gradient descent. In experiments with the quadruped robot Ant-v2 and the bipedal robot Humanoid-v2 in OpenAI Gym environments, we find that differential evolution efficiently finds the strongest torque perturbations among the three methods. In addition, we find that the quadruped robot Ant-v2 is vulnerable to the adversarial perturbations, whereas the bipedal robot Humanoid-v2 is robust to them. Consequently, joint attacks can be used for proactive diagnosis of robot walking instability. Comment: 6 pages, 8 figures
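
    To illustrate the simplest of the three search methods, the sketch below uses random search over a constant torque offset in an L-infinity ball and keeps the perturbation that most reduces the episode return. The classic Gym step/reset API and the trained `policy` callable are assumptions, not the paper's code.

        import gym
        import numpy as np

        def random_search_attack(policy, env_name="Ant-v2", eps=0.1,
                                 n_trials=50, horizon=1000, seed=0):
            # Black-box attack: no access to the policy's network internals,
            # only to episode returns obtained by running the environment.
            rng = np.random.default_rng(seed)
            env = gym.make(env_name)
            act_dim = env.action_space.shape[0]
            best_delta, best_return = None, np.inf
            for _ in range(n_trials):
                delta = rng.uniform(-eps, eps, size=act_dim)  # candidate torque offset
                obs, ep_return = env.reset(), 0.0
                for _ in range(horizon):
                    action = np.clip(policy(obs) + delta,
                                     env.action_space.low, env.action_space.high)
                    obs, reward, done, _ = env.step(action)
                    ep_return += reward
                    if done:
                        break
                if ep_return < best_return:  # a stronger attack yields a lower return
                    best_delta, best_return = delta, ep_return
            return best_delta, best_return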

    Adversarially Trained Object Detector for Unsupervised Domain Adaptation

    Unsupervised domain adaptation, which involves transferring knowledge from a label-rich source domain to an unlabeled target domain, can be used to substantially reduce annotation costs in the field of object detection. In this study, we demonstrate that adversarial training in the source domain can be employed as a new approach to unsupervised domain adaptation. Specifically, we establish that adversarially trained detectors achieve improved detection performance in target domains that are significantly shifted from source domains. This phenomenon is attributed to the fact that adversarially trained detectors extract robust features that are aligned with human perception and worth transferring across domains, while discarding domain-specific non-robust features. In addition, we propose a method that combines adversarial training and feature alignment to ensure improved alignment of the robust features with the target domain. We conduct experiments on four benchmark datasets and confirm the effectiveness of our proposed approach on large domain shifts from real to artistic images. Compared to the baseline models, the adversarially trained detectors improve the mean average precision by up to 7.7%, and further by up to 11.8% when feature alignment is incorporated. Although our method degrades performance for small domain shifts, quantification of the domain shift based on the Fréchet distance allows us to determine whether adversarial training should be conducted. Comment: 10 pages, 6 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
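
    The abstract does not spell out the training recipe, but a standard way to adversarially train a detector is an L-infinity PGD inner loop on the detection loss. A minimal PyTorch sketch, with `detector` and `loss_fn` as placeholder names and images assumed in [0, 1]:

        import torch

        def pgd_adv_train_step(detector, loss_fn, images, targets, optimizer,
                               eps=8 / 255, alpha=2 / 255, steps=3):
            # Inner loop: craft an L-infinity perturbation maximizing the loss.
            delta = torch.zeros_like(images).uniform_(-eps, eps).requires_grad_(True)
            for _ in range(steps):
                loss = loss_fn(detector(images + delta), targets)
                grad, = torch.autograd.grad(loss, delta)
                delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
                delta = (images + delta).clamp(0, 1) - images  # stay in image range
                delta = delta.detach().requires_grad_(True)
            # Outer step: update the detector on the perturbed images.
            optimizer.zero_grad()
            loss_fn(detector(images + delta.detach()), targets).backward()
            optimizer.step()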

    Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition

    Using Fourier analysis, we explore the robustness and vulnerability of graph convolutional neural networks (GCNs) for skeleton-based action recognition. We adopt a joint Fourier transform (JFT), a combination of the graph Fourier transform (GFT) and the discrete Fourier transform (DFT), to examine the robustness of adversarially trained GCNs against adversarial attacks and common corruptions. Experimental results with the NTU RGB+D dataset reveal that adversarial training does not introduce a robustness trade-off between adversarial attacks and low-frequency perturbations, a trade-off that typically occurs in image classification with convolutional neural networks. This finding indicates that adversarial training is a practical approach to enhancing robustness against adversarial attacks and common corruptions in skeleton-based action recognition. Furthermore, we find that the Fourier approach cannot explain the vulnerability to skeletal part occlusion corruption, which highlights its limitations. These findings extend our understanding of the robustness of GCNs, potentially guiding the development of more robust learning methods for skeleton-based action recognition. Comment: 17 pages, 13 figures
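
    A minimal single-channel version of the JFT can be written in a few lines: project the joint axis onto the eigenvectors of the graph Laplacian (GFT) and apply a DFT along time. The paper's normalization and multi-channel handling may differ; this sketch only fixes the idea.

        import numpy as np

        def joint_fourier_transform(x, adjacency):
            # x: (T, V) skeleton sequence, T frames by V joints (one channel).
            # adjacency: (V, V) symmetric adjacency matrix of the skeleton graph.
            degree = np.diag(adjacency.sum(axis=1))
            laplacian = degree - adjacency
            _, eigvecs = np.linalg.eigh(laplacian)  # GFT basis in the columns
            x_gft = x @ eigvecs                     # graph-frequency domain
            return np.fft.fft(x_gft, axis=0)        # temporal DFT completes the JFT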

    Improving Accuracy of Zero-Shot Action Recognition with Handcrafted Features

    With the development of machine learning, datasets for models are becoming increasingly large. This increases data annotation costs and training time, which hinders the development of machine learning. To address this problem, zero-shot learning is gaining considerable attention: it allows objects to be recognized or classified even when they have not been seen before. Nevertheless, the accuracy of this approach is still low, limiting its practical application. To address this, we propose a video-text matching model that can learn from handcrafted features. Our model can be used alone to predict action classes and can also be added to any other model to improve its accuracy. Moreover, our model can be continuously optimized to improve its accuracy. We only need to manually annotate some features, which incurs some labor cost; in many situations, the cost is worthwhile. Results on UCF101 and HMDB51 show that our model achieves the best accuracy and also improves the accuracy of other models. Comment: 15 pages, 7 figures
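
    The matching step of such a model typically reduces to scoring each (possibly unseen) class by the similarity between a video embedding and a class embedding built from text or handcrafted features. A generic cosine-similarity sketch, not the paper's exact architecture:

        import numpy as np

        def zero_shot_predict(video_emb, class_embs):
            # video_emb: (D,) embedding of the input clip.
            # class_embs: (C, D) one embedding per candidate class, including
            # classes never seen during training.
            v = video_emb / np.linalg.norm(video_emb)
            c = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
            scores = c @ v  # cosine similarity per class
            return int(np.argmax(scores)), scores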