
    A Novel Framework based on Unknown Estimation via Principal Sub-space for Universal Domain Adaptation

    Universal domain adaptation (UniDA) aims to transfer the knowledge of common classes from the source domain to the target domain without any prior knowledge of the label set, which requires distinguishing unknown samples from known ones in the target domain. As in the traditional unsupervised domain adaptation problem, misalignment between the two domains arises from biased and less-discriminative embeddings. Recent methods propose to correct the domain misalignment by clustering target samples with their nearest neighbors or with prototypes. However, this is risky, since we have no prior knowledge of the distribution of unknown samples, which can magnify the misalignment, especially when the unknown set is large. Meanwhile, other existing classifier-based methods can easily produce overconfident predictions on unknown samples, because the supervised objective in the source domain biases the whole model toward the common classes in the target domain. We therefore propose a novel non-parametric unknown-sample detection method that maps samples from the original feature space into a reliable linear sub-space, where data points are sparser, to reduce the misalignment between unknown and source samples. Moreover, unlike recent methods that add extra parameters to improve the classification of unknown samples, this paper balances the confidence values of both known and unknown samples through an unknown-adaptive margin loss, which controls the gradient updates of the classifier learned on supervised source samples according to the confidence of the unknown samples detected at the current step. Finally, experiments on four public datasets demonstrate that our method significantly outperforms existing state-of-the-art methods. Comment: 13 pages. arXiv admin note: text overlap with arXiv:2207.0928
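    The unknown-adaptive margin idea described above can be illustrated with a minimal sketch. This is not the paper's actual formulation; the function name, the linear margin scaling, and the way the margin is applied to the true-class logit are assumptions chosen for illustration, showing only the general mechanism of tempering source-supervised gradients when unknown detection is confident.

    ```python
    import numpy as np

    def softmax(z):
        # numerically stable row-wise softmax
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def unknown_adaptive_margin_loss(logits, labels, unknown_conf, base_margin=1.0):
        """Cross-entropy on labeled source samples with the true-class logit
        reduced by a margin scaled by the confidence of currently detected
        unknown samples (hypothetical formulation). A larger margin weakens the
        gradient that pushes the classifier toward overconfident known-class
        predictions."""
        margin = base_margin * unknown_conf          # scalar in [0, base_margin]
        adjusted = logits.copy()
        adjusted[np.arange(len(labels)), labels] -= margin
        p = softmax(adjusted)
        return -np.log(p[np.arange(len(labels)), labels]).mean()
    ```

    With `unknown_conf = 0` this reduces to plain cross-entropy; as the detected unknowns become more confident, the loss demands a larger logit gap and the classifier is penalized for overconfidence on the common classes.
    
    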

    Evidence for non-self-similarity of microearthquakes recorded at a Taiwan borehole seismometer array

    We investigate the relationship between seismic moment M0 and source duration tw of microearthquakes by using high-quality seismic data recorded with a vertical borehole array installed in central Taiwan. We apply a waveform cross-correlation method to the three-component records and identify several event clusters with high waveform similarity, with event magnitudes ranging from 0.3 to 2.0. Three clusters (A, B and C) contain 11, 8 and 6 events with similar waveforms, respectively. To determine how M0 scales with tw, we remove path effects by using a path-averaged Q. The results indicate a nearly constant tw for events within each cluster, regardless of M0, with mean values of tw being 0.058, 0.056 and 0.034 s for Clusters A, B and C, respectively. A constant tw, independent of M0, violates the commonly used scaling relation tw ∝ M0^(1/3). This constant duration may arise either because all events in a cluster are hosted on the same isolated seismogenic patch, or because the events are driven by external factors of constant duration, such as fluid injections into the fault zone. It may also be related to the earthquake nucleation size.
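    To see how strongly a constant duration violates self-similar scaling, one can compute the duration ratio that tw ∝ M0^(1/3) would predict across the reported magnitude range (0.3 to 2.0), using the standard Hanks-Kanamori moment-magnitude relation:

    ```python
    def moment_from_mw(mw):
        """Seismic moment in N*m from moment magnitude (Hanks-Kanamori relation)."""
        return 10 ** (1.5 * mw + 9.1)

    # Self-similar scaling (constant stress drop) predicts tw ∝ M0^(1/3),
    # so the duration ratio across the magnitude range is the cube root
    # of the moment ratio.
    m_small = moment_from_mw(0.3)
    m_large = moment_from_mw(2.0)
    predicted_duration_ratio = (m_large / m_small) ** (1 / 3)
    # ≈ 7.1: self-similarity would require durations spanning a factor of
    # about seven, whereas the observed tw is nearly constant in each cluster.
    ```
    
    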

    Fusion characterization of biomass ash

    Ash fusion characteristics are important parameters for the thermochemical utilization of biomass. In this research, a method for measuring the fusion characteristics of biomass ash with a thermomechanical analyzer (TMA) is described. The typical TMA shrinking-ratio curve can be divided into two stages, which are closely related to ash melting behavior. Several characteristic temperatures based on the TMA curves are used to assess the ash fusion characteristics. A new characteristic temperature, Tm, is proposed to represent the severe melting temperature of biomass ash. The fusion characteristics of six types of biomass ash have been measured by TMA. Compared with the standard ash fusibility temperature (AFT) test, TMA is more suitable for measuring the fusion characteristics of biomass ash. The glassy molten areas of the ash samples are sticky and consist mainly of K-Ca silicates.

    OFAR: A Multimodal Evidence Retrieval Framework for Illegal Live-streaming Identification

    Illegal live-streaming identification, which aims to help live-streaming platforms immediately recognize illegal behaviors in a live stream, such as selling precious and endangered animals, plays a crucial role in purifying the network environment. Traditionally, a live-streaming platform needs to employ professionals to manually identify potentially illegal live-streaming. Specifically, a professional must search a large-scale knowledge database for related evidence to evaluate whether a given live-streaming clip contains illegal behavior, which is time-consuming and laborious. To address this issue, we propose a multimodal evidence retrieval system, named OFAR, to facilitate illegal live-streaming identification. OFAR consists of three modules: Query Encoder, Document Encoder, and MaxSim-based Contrastive Late Interaction. Both the query encoder and the document encoder are implemented with the advanced OFA encoder, which is pretrained on a large-scale multimodal dataset. In the last module, we introduce contrastive learning on top of the MaxSim-based late interaction to enhance the model's ability at query-document matching. The proposed framework achieves significant improvement on our industrial dataset TaoLive, demonstrating the advantage of our scheme.
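    The MaxSim late-interaction operator the abstract refers to is well known from ColBERT-style retrieval. The sketch below shows the standard operator plus a generic contrastive loss over it; OFAR's exact formulation may differ, and the function names, the cosine normalization, and the temperature value here are assumptions for illustration.

    ```python
    import numpy as np

    def maxsim_score(query_emb, doc_emb):
        """ColBERT-style MaxSim late interaction: for each query token embedding,
        take the maximum cosine similarity over all document token embeddings,
        then sum over query tokens."""
        q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
        d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
        sim = q @ d.T                      # (num_query_tokens, num_doc_tokens)
        return sim.max(axis=1).sum()

    def contrastive_loss(query_emb, pos_doc, neg_docs, temperature=0.05):
        """Generic InfoNCE over MaxSim scores: the positive document's score is
        contrasted against negatives (a stand-in for the abstract's contrastive
        learning on top of late interaction)."""
        scores = np.array([maxsim_score(query_emb, d)
                           for d in [pos_doc] + list(neg_docs)]) / temperature
        scores -= scores.max()             # numerical stability
        p = np.exp(scores) / np.exp(scores).sum()
        return -np.log(p[0])
    ```

    Because similarities are aggregated per query token, document token embeddings can be precomputed and indexed, which is what makes late interaction practical for large evidence databases.
    
    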

    Learning Point-Language Hierarchical Alignment for 3D Visual Grounding

    This paper presents a novel hierarchical alignment model (HAM) that learns multi-granularity visual and linguistic representations in an end-to-end manner. We extract key points and proposal points to model 3D contexts and instances, and propose a point-language alignment with context modulation (PLACM) mechanism, which learns to gradually align word-level and sentence-level linguistic embeddings with visual representations, while modulation with the visual context captures latent informative relationships. To further capture both global and local relationships, we propose a spatially multi-granular modeling scheme that applies PLACM to both global and local fields. Experimental results demonstrate the superiority of HAM, with visualized results showing that it can dynamically model fine-grained visual and linguistic representations. HAM outperforms existing methods by a significant margin, achieves state-of-the-art performance on two publicly available datasets, and won the championship in the ECCV 2022 ScanRefer challenge. Code is available at https://github.com/PPjmchen/HAM. Comment: Champion of the ECCV 2022 ScanRefer Challenge

    KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion

    Knowledge graph completion (KGC) aims to alleviate the inherent incompleteness of knowledge graphs (KGs), a critical task for various applications such as recommendations on the web. Although knowledge graph embedding (KGE) models have demonstrated superior predictive performance on KGC tasks, they infer missing links in a black-box manner that lacks transparency and accountability, preventing researchers from developing accountable models. Existing KGE-based explanation methods focus on exploring key paths or isolated edges as explanations, which carry too little information to justify a target prediction. Additionally, the absence of ground-truth explanations makes these methods hard to evaluate quantitatively. To overcome these limitations, we propose KGExplainer, a model-agnostic method that identifies connected-subgraph explanations and distills an evaluator to assess them quantitatively. KGExplainer employs a perturbation-based greedy search algorithm to find key connected subgraphs as explanations within the local structure of target predictions. To evaluate the quality of the explored explanations, KGExplainer distills an evaluator from the target KGE model; by forwarding the explanations to the evaluator, our method can examine their fidelity. Extensive experiments on benchmark datasets demonstrate that KGExplainer yields promising improvements and achieves an optimal ratio of 83.3% in human evaluation. Comment: 13 pages, 7 figures, 11 tables. Under Review
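    The perturbation-based greedy search described above can be sketched schematically. This is not KGExplainer's actual algorithm: edges are reduced to node pairs, `score_fn` is a hypothetical stand-in for re-scoring the target prediction with only the selected edges kept, and the connectivity rule and budget `k` are illustrative assumptions.

    ```python
    def greedy_subgraph_explanation(candidate_edges, score_fn, k=3):
        """Greedy perturbation search (schematic): grow an explanation subgraph
        by repeatedly adding the connected edge that most increases a fidelity
        score, keeping the explanation a single connected subgraph."""
        selected = []
        nodes = set()
        remaining = list(candidate_edges)
        while remaining and len(selected) < k:
            # only edges touching the current subgraph are eligible
            # (any edge may start the subgraph)
            frontier = [e for e in remaining if not nodes or nodes & {e[0], e[1]}]
            if not frontier:
                break
            best = max(frontier, key=lambda e: score_fn(selected + [e]))
            selected.append(best)
            nodes |= {best[0], best[1]}
            remaining.remove(best)
        return selected
    ```

    The connectivity constraint is what distinguishes this search from simply ranking edges by importance: a high-scoring but disconnected edge is excluded, so the explanation remains a coherent subgraph around the target prediction.
    
    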