
    Singular equivalences induced by bimodules and quadratic monomial algebras

    We investigate when the tensor functor given by a bimodule yields a singular equivalence. It turns out that this problem is equivalent to asking when the Hom functor given by the same bimodule induces a triangle equivalence between the homotopy categories of acyclic complexes of injective modules. We give conditions for when a bimodule appears in a pair of bimodules that defines a singular equivalence with level. We construct an explicit bimodule which yields a singular equivalence between a quadratic monomial algebra and its associated algebra with radical square zero. Under certain conditions, which include the Gorenstein cases, this bimodule does appear in a pair of bimodules defining a singular equivalence with level.
    Comment: 20 pages, all comments are welcome

    Application of Monte Carlo Simulation in Optical Tweezers


    Manual-scanning optical coherence tomography probe based on position tracking

    A method based on position tracking to reconstruct images for a manual-scanning optical coherence tomography (OCT) probe is proposed and implemented. The method employs several feature points on a hand-held probe and a camera to track the device's pose. The continuous pose tracking and the collected OCT depth scans can then be combined to render OCT images. The tracking accuracy of the system was characterized to be about 6 μm along two axes and 19 μm along the third. A phantom target was used to validate the method. In addition, we report OCT images of a 54-stage Xenopus laevis tadpole acquired by manual scanning.
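    The reconstruction step described above can be sketched as follows: each depth scan (A-scan) is tagged with a tracked lateral position and binned into an image column, with scans that land in the same column averaged. This is a minimal illustration, not the paper's implementation; the function name, bin count, and the reduction of the full 3D pose to a single lateral coordinate are assumptions made for clarity.

    ```python
    import numpy as np

    def render_oct_image(poses_x, a_scans, n_columns=256, width_mm=2.0):
        """Place pose-tagged depth scans (A-scans) into lateral bins to form an image.

        poses_x : lateral probe positions (mm) from camera-based tracking, one per A-scan
        a_scans : array of shape (n_scans, depth) with the corresponding depth profiles
        """
        a_scans = np.asarray(a_scans, dtype=float)
        image = np.zeros((n_columns, a_scans.shape[1]))
        counts = np.zeros(n_columns)
        # Map each tracked lateral position to a column index of the output image
        cols = np.clip((np.asarray(poses_x) / width_mm * n_columns).astype(int),
                       0, n_columns - 1)
        for c, scan in zip(cols, a_scans):
            image[c] += scan      # accumulate scans falling into the same column
            counts[c] += 1
        nonempty = counts > 0
        image[nonempty] /= counts[nonempty, None]  # average overlapping scans
        return image.T  # depth along rows, lateral position along columns
    ```

    In practice the tracked pose is a full 6-DOF transform, so the binning would use the projected beam position rather than a raw x coordinate.
    
    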

    SMIL: Multimodal Learning with Severely Missing Modality

    A common assumption in multimodal learning is the completeness of training data, i.e., full modalities are available in all training examples. Although there exist research endeavors developing novel methods to tackle the incompleteness of testing data, e.g., modalities partially missing in testing examples, few of them can handle incomplete training modalities. The problem becomes even more challenging in the severely missing case, e.g., when 90% of training examples have incomplete modalities. For the first time in the literature, this paper formally studies multimodal learning with missing modality in terms of flexibility (missing modalities in training, testing, or both) and efficiency (most training data have incomplete modality). Technically, we propose a new method named SMIL that leverages Bayesian meta-learning to achieve both objectives uniformly. To validate our idea, we conduct a series of experiments on three popular benchmarks: MM-IMDb, CMU-MOSI, and avMNIST. The results prove the state-of-the-art performance of SMIL over existing methods and generative baselines, including autoencoders and generative adversarial networks. Our code is available at https://github.com/mengmenm/SMIL.
    Comment: In AAAI 2021 (9 pages)

    Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval

    Recent multilingual pre-trained models have shown better performance on various multilingual tasks. However, these models perform poorly on multilingual retrieval tasks due to a lack of multilingual training data. In this paper, we propose to mine and generate self-supervised training data from a large-scale unlabeled corpus. We carefully design a mining method that combines sparse and dense models to mine the relevance of unlabeled queries and passages, and we introduce a query generator to generate more queries in target languages for unlabeled passages. Through extensive experiments on the Mr. TyDi dataset and an industrial dataset from a commercial search engine, we demonstrate that our method performs better than baselines based on various pre-trained multilingual models. Our method even achieves on-par performance with the supervised method on the latter dataset.
    Comment: EMNLP 2022 Findings
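    The sparse-plus-dense mining idea above can be sketched as a hybrid relevance score: a BM25-style lexical score interpolated with an embedding inner product. This is a generic sketch of sparse/dense hybrid scoring, not the paper's actual mining pipeline; the function names, the mixing weight `alpha`, and the fixed average document length are illustrative assumptions.

    ```python
    import math
    from collections import Counter

    def sparse_score(query_tokens, passage_tokens, corpus_df, n_docs,
                     k1=1.2, b=0.75, avgdl=50.0):
        """BM25-style sparse (lexical) relevance between a query and a passage."""
        tf = Counter(passage_tokens)
        score = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            df = corpus_df.get(t, 0)
            idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
            denom = tf[t] + k1 * (1 - b + b * len(passage_tokens) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        return score

    def dense_score(q_vec, p_vec):
        """Dense relevance as the inner product of encoder-produced embeddings."""
        return sum(a * b for a, b in zip(q_vec, p_vec))

    def hybrid_score(q_tokens, p_tokens, q_vec, p_vec, corpus_df, n_docs, alpha=0.5):
        # Interpolate the two signals; alpha is a hypothetical mixing weight
        return (alpha * sparse_score(q_tokens, p_tokens, corpus_df, n_docs)
                + (1 - alpha) * dense_score(q_vec, p_vec))
    ```

    High-scoring query-passage pairs under such a combined signal could then serve as pseudo-labels for self-supervised retriever training.
    
    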