
    Higher-order multi-scale deep Ritz method for multi-scale problems of authentic composite materials

    The direct deep learning simulation of multi-scale problems remains challenging. In this work, a novel higher-order multi-scale deep Ritz method (HOMS-DRM) is developed for the thermal transfer equation of authentic composite materials with highly oscillatory and discontinuous coefficients. In this HOMS-DRM, higher-order multi-scale analysis and modeling are first employed to overcome the prohibitive computational cost and the Frequency Principle limitation of direct deep learning simulation. Then, an improved deep Ritz method is designed for high-accuracy, mesh-free simulation of the macroscopic homogenized equation, which has no multi-scale property, and of the microscopic lower-order and higher-order cell problems with highly discontinuous coefficients. Moreover, the theoretical convergence of the proposed HOMS-DRM is rigorously demonstrated under appropriate assumptions. Finally, extensive numerical experiments demonstrate the computational accuracy of the proposed HOMS-DRM. This study offers a robust, high-accuracy multi-scale deep learning framework for the effective simulation and analysis of multi-scale problems in authentic composite materials.
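    The deep Ritz component can be made concrete with a small sketch. Below is a minimal, plain (not higher-order) deep Ritz loop in PyTorch for -div(a(x) grad u) = f on the unit square with zero Dirichlet data, assuming a toy oscillatory coefficient; the network size, penalty weight, and sample counts are illustrative assumptions, and none of the HOMS-DRM homogenization machinery is reproduced.

```python
# Minimal deep Ritz sketch: minimize the Ritz energy of -div(a grad u) = f
# on (0,1)^2 with u = 0 on the boundary, enforced by a soft penalty.
# All hyperparameters and the coefficient a(x) are illustrative assumptions.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(          # u_theta : R^2 -> R
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def a(x):   # oscillatory scalar coefficient (stand-in for a composite)
    return 1.5 + torch.sin(8 * torch.pi * x[:, :1]) * torch.sin(8 * torch.pi * x[:, 1:])

def f(x):   # source term (assumed constant for the sketch)
    return torch.ones(x.shape[0], 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(1024, 2, requires_grad=True)       # interior Monte Carlo points
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Ritz energy density: 0.5 * a |grad u|^2 - f u, averaged over samples
    energy = (0.5 * a(x) * (grad_u ** 2).sum(1, keepdim=True) - f(x) * u).mean()

    xb = torch.rand(256, 2)                           # boundary samples
    xb[:128, 0] = torch.randint(0, 2, (128,)).float() # left/right faces
    xb[128:, 1] = torch.randint(0, 2, (128,)).float() # bottom/top faces
    penalty = net(xb).pow(2).mean()                   # soft Dirichlet condition

    loss = energy + 500.0 * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```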

    Hyponormal Toeplitz Operators on the Dirichlet Spaces

    We completely characterize the hyponormality of bounded Toeplitz operators with Sobolev symbols on the Dirichlet space and the harmonic Dirichlet space.
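    For context, hyponormality is the standard operator-theoretic notion recalled below (not specific to this paper): a bounded operator T on a Hilbert space H is hyponormal when its self-commutator is positive semi-definite.

```latex
% Hyponormality of a bounded operator T on a Hilbert space H:
[T^{*}, T] \;=\; T^{*}T - TT^{*} \;\geq\; 0
\quad\Longleftrightarrow\quad
\|T^{*}x\| \leq \|Tx\| \quad \text{for all } x \in H.
```

    The equivalence follows from \(\langle (T^{*}T - TT^{*})x, x\rangle = \|Tx\|^{2} - \|T^{*}x\|^{2}\); the paper's question is for which symbols the Toeplitz operator satisfies this inequality on the (harmonic) Dirichlet space.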

    In vitro corrosion of Mg–1.21Li–1.12Ca–1Y alloy

    The influence of microstructure on the mechanical properties and corrosion behavior of the Mg–1.21Li–1.12Ca–1Y alloy was investigated using OM, SEM, XRD, EPMA, EDS, tensile tests, and corrosion measurements. The results demonstrated that the microstructure of the Mg–1.21Li–1.12Ca–1Y alloy consisted of an α-Mg matrix and the intermetallic compounds Mg2Ca and Mg24Y5. Most of the fine Mg2Ca particles in the as-cast alloy were distributed along the grain boundaries, while those in the as-extruded alloy were aligned along the extrusion direction. The Mg24Y5 particles, larger than the Mg2Ca particles, were located inside the grains. The mechanical properties of the Mg–1.21Li–1.12Ca–1Y alloy were improved by grain refinement and dispersion strengthening. Corrosion pits initiated at the α-Mg matrix adjacent to the Mg2Ca particles, and the alloy subsequently exhibited general corrosion and filiform corrosion as the corrosion product layer of Mg(OH)2 and MgCO3 became compact and thick.

    A Graph-based Relevance Matching Model for Ad-hoc Retrieval

    To retrieve more relevant, appropriate, and useful documents for a query, finding clues about that query throughout the text is crucial. Recent deep learning models treat the task as a term-level matching problem, seeking exact or similar query patterns in the document. However, we argue that they are inherently based on local interactions and do not generalise to ubiquitous, non-consecutive contextual relationships. In this work, we propose a novel relevance matching model based on graph neural networks that leverages document-level word relationships for ad-hoc retrieval. In addition to the local interactions, we explicitly incorporate all contexts of a term through the graph-of-word text format. Matching patterns can then be revealed to provide a more accurate relevance score. Our approach significantly outperforms strong baselines on two ad-hoc benchmarks. We also experimentally compare our model with BERT and show our advantages on long documents. (Comment: To appear at AAAI 202)
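    As background on the graph-of-word format the model builds on, a minimal construction is sketched below: nodes are the unique terms of a document and edges connect terms that co-occur within a sliding window. The window size and helper name are illustrative assumptions; the paper's GNN matching layers are not reproduced here.

```python
# Minimal graph-of-word construction (sliding co-occurrence window of 3,
# an assumed setting for illustration only).
from collections import defaultdict

def graph_of_word(tokens, window=3):
    """Return {term: set of neighbor terms} for co-occurrences within `window`."""
    adj = defaultdict(set)
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + window]:
            if v != w:
                adj[w].add(v)
                adj[v].add(w)          # undirected edge
    return adj

doc = "deep learning models regard the task as a term level matching problem".split()
adj = graph_of_word(doc)
print(sorted(adj["matching"]))         # -> ['level', 'problem', 'term']
```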

    EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters

    Scaling up contrastive language-image pretraining (CLIP) is critical for empowering both vision and multimodal models. We present EVA-CLIP-18B, the largest and most powerful open-source CLIP model to date, with 18 billion parameters. With only 6 billion training samples seen, EVA-CLIP-18B achieves an exceptional 80.7% zero-shot top-1 accuracy averaged across 27 widely recognized image classification benchmarks, outperforming its forerunner EVA-CLIP (5 billion parameters) and other open-source CLIP models by a large margin. Remarkably, we observe a consistent performance improvement as the model size of EVA-CLIP scales, despite keeping the training dataset constant at 2 billion image-text pairs from LAION-2B and COYO-700M. This dataset is openly available and much smaller than the in-house datasets (e.g., DFN-5B, WebLI-10B) employed in other state-of-the-art CLIP models. EVA-CLIP-18B demonstrates the potential of EVA-style weak-to-strong visual model scaling. With our model weights made publicly available, we hope to facilitate future research in vision and multimodal foundation models.
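    For readers unfamiliar with how such zero-shot top-1 numbers are computed, here is a minimal sketch of CLIP-style zero-shot classification: class names become text prompts, both modalities are embedded and L2-normalized, and the highest cosine similarity wins. The features below are random stand-ins, not EVA-CLIP-18B outputs, and the function name is ours.

```python
# CLIP-style zero-shot top-1 accuracy, in outline. Real use would replace
# the random tensors with image features and per-class prompt embeddings.
import torch

def zero_shot_top1(image_emb, text_emb, labels):
    """image_emb: (N, d) image features; text_emb: (C, d), one row per class."""
    image_emb = torch.nn.functional.normalize(image_emb, dim=-1)
    text_emb = torch.nn.functional.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t()      # cosine similarities, shape (N, C)
    pred = logits.argmax(dim=-1)           # best-matching class prompt
    return (pred == labels).float().mean().item()

# Stand-in features: 8 images, a 27-way task, 512-dim embeddings.
acc = zero_shot_top1(torch.randn(8, 512), torch.randn(27, 512),
                     torch.randint(0, 27, (8,)))
```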

    Evaluating Modules in Graph Contrastive Learning

    The recent emergence of contrastive learning approaches has facilitated research on graph representation learning (GRL), introducing graph contrastive learning (GCL) into the literature. These methods contrast semantically similar and dissimilar sample pairs to encode the semantics into node or graph embeddings. However, most existing works have only performed model-level evaluation and have not explored the combination space of modules for more comprehensive and systematic study. For effective module-level evaluation, we propose a framework that decomposes GCL models into four modules: (1) a sampler to generate anchor, positive, and negative data samples (nodes or graphs); (2) an encoder and a readout function to produce sample embeddings; (3) a discriminator to score each sample pair (anchor-positive and anchor-negative); and (4) an estimator to define the loss function. Based on this framework, we conduct controlled experiments over a wide range of architectural designs and hyperparameter settings on node and graph classification tasks. Specifically, we quantify the impact of individual modules, investigate the interactions between modules, and compare the overall performance with current model architectures. Our key findings include a set of module-level guidelines for GCL, e.g., simple samplers from LINE and DeepWalk are strong and robust, and an MLP encoder with Sum readout can achieve competitive performance on graph classification. Finally, we release our implementations and results as OpenGCL, a modularized toolkit that allows convenient reproduction, standard model and module evaluation, and easy extension.
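    To make the four-module decomposition concrete, here is a minimal skeleton in PyTorch; the interfaces and the logistic estimator are illustrative assumptions, not the OpenGCL API.

```python
# Skeleton of the sampler / encoder+readout / discriminator / estimator
# decomposition. Concrete modules would be swapped in for each callable.
import torch

class GCLModel(torch.nn.Module):
    def __init__(self, sampler, encoder, readout, discriminator, estimator):
        super().__init__()
        self.sampler = sampler              # (1) anchor/positive/negative samples
        self.encoder = encoder              # (2a) node or graph encoder
        self.readout = readout              # (2b) pooling to sample embeddings
        self.discriminator = discriminator  # (3) scores each sample pair
        self.estimator = estimator          # (4) turns scores into a loss

    def loss(self, graph):
        anchor, pos, neg = self.sampler(graph)
        za = self.readout(self.encoder(anchor))
        zp = self.readout(self.encoder(pos))
        zn = self.readout(self.encoder(neg))
        s_pos = self.discriminator(za, zp)  # anchor-positive scores
        s_neg = self.discriminator(za, zn)  # anchor-negative scores
        return self.estimator(s_pos, s_neg)

# Example estimator: a simple logistic (JSD-style) contrastive objective.
def logistic_estimator(s_pos, s_neg):
    return (torch.nn.functional.softplus(-s_pos)
            + torch.nn.functional.softplus(s_neg)).mean()
```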