316 research outputs found

    Superconductivity in pressurized CeRhGe3 and related non-centrosymmetric compounds

    We report the discovery of superconductivity in pressurized CeRhGe3, until now the only remaining non-superconducting member of the isostructural family of non-centrosymmetric heavy-fermion compounds CeTX3 (T = Co, Rh, Ir and X = Si, Ge). Superconductivity appears in CeRhGe3 at a pressure of 19.6 GPa, and the transition temperature Tc reaches a maximum value of 1.3 K at 21.5 GPa. This finding provides an opportunity to establish systematic correlations between superconductivity and materials properties within this family. Though ambient-pressure unit-cell volumes and critical pressures for superconductivity vary substantially across the series, all family members reach their maximum transition temperature Tcmax at a common critical cell volume Vcrit, and Tcmax at Vcrit increases with increasing spin-orbit coupling strength of the d-electrons. These correlations show that substantial Kondo hybridization and spin-orbit coupling favor superconductivity in this family, the latter reflecting the role of broken centrosymmetry.
    Comment: 15 pages and 4 figures

    On the Mathematics of RNA Velocity II: Algorithmic Aspects

    In a previous paper [CSIAM Trans. Appl. Math. 2 (2021), 1-55], the authors proposed a theoretical framework for the analysis of RNA velocity, a promising concept in scRNA-seq data analysis for revealing the cell state-transition dynamics underlying snapshot data. The current paper is devoted to the algorithmic study of some key components of the RNA velocity workflow. Four important points are addressed: (1) We construct a rational time-scale fixation method that determines the global gene-shared latent time for cells. (2) We present an uncertainty quantification strategy for the parameters inferred through the EM algorithm. (3) We establish the optimal criterion for the choice of velocity kernel bandwidth with respect to the sample size in the downstream analysis and discuss its implications. (4) We propose an approach for estimating the temporal distance between two cell clusters along the cellular development path. Illustrative numerical tests are carried out to verify our analysis. These results are intended to provide tools and insights for the further development of RNA velocity-type methods.
    Comment: 32 pages, 5 figures
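    The velocity-kernel step analyzed in point (3) can be illustrated with a minimal sketch (an illustrative simplification under assumed conventions, not the authors' implementation): transition probabilities from one cell to the others come from the cosine similarity between the cell's velocity vector and the displacement toward each other cell, softened by a kernel bandwidth.

```python
import numpy as np

def velocity_transition_probs(X, V, i, bandwidth=1.0):
    """Transition probabilities from cell i to all other cells, based on
    cosine similarity between the velocity vector V[i] and the displacement
    X[j] - X[i], softened by an exponential kernel with a bandwidth."""
    disp = X - X[i]                                # displacement to every cell
    disp_norm = np.linalg.norm(disp, axis=1)
    v_norm = np.linalg.norm(V[i])
    cos = np.zeros(len(X))
    mask = (disp_norm > 0) & (v_norm > 0)          # avoid division by zero
    cos[mask] = disp[mask] @ V[i] / (disp_norm[mask] * v_norm)
    logits = cos / bandwidth
    logits[i] = -np.inf                            # no self-transition
    p = np.exp(logits - logits.max())              # stable softmax
    return p / p.sum()
```

    A smaller bandwidth sharpens the distribution toward cells lying along the velocity direction; the paper's contribution is the principled choice of this bandwidth as a function of sample size.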

    TetCNN: Convolutional Neural Networks on Tetrahedral Meshes

    Convolutional neural networks (CNNs) have been broadly studied on images, videos, graphs, and triangular meshes, but have seldom been studied on tetrahedral meshes. Given the merits of volumetric meshes in applications like brain image analysis, we introduce a novel interpretable graph CNN framework for the tetrahedral mesh structure. Inspired by ChebyNet, our model exploits the volumetric Laplace-Beltrami operator (LBO) to define filters in place of the commonly used graph Laplacian, which lacks the Riemannian metric information of 3D manifolds. For pooling adaptation, we introduce new objective functions for localized minimum cuts in the Graclus algorithm based on the LBO. We employ a piecewise-constant approximation scheme that uses the clustering assignment matrix to estimate the LBO on the sampled meshes after each pooling. Finally, adapting the Gradient-weighted Class Activation Mapping algorithm to tetrahedral meshes, we use the obtained heatmaps to visualize discovered regions of interest as biomarkers. We demonstrate the effectiveness of our model on cortical tetrahedral meshes from patients with Alzheimer's disease, as there is scientific evidence that cortical thickness correlates with neurodegenerative disease progression. Our results show the superiority of our LBO-based convolution layer and adapted pooling over the conventionally used unitary cortical thickness, graph Laplacian, and point cloud representations.
    Comment: Accepted as a conference paper to the Information Processing in Medical Imaging (IPMI 2023) conference
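    The ChebyNet-style filtering this framework builds on can be sketched as follows. This is a generic Chebyshev spectral filter over a small dense Laplacian, shown only to make the recurrence concrete; the paper substitutes the volumetric LBO for the plain graph Laplacian used here.

```python
import numpy as np

def cheb_filter(L, x, theta):
    """ChebyNet-style spectral filter: y = sum_k theta[k] * T_k(L_hat) @ x,
    where L_hat = 2L/lmax - I rescales the spectrum into [-1, 1] and T_k
    are Chebyshev polynomials computed by their three-term recurrence."""
    n = L.shape[0]
    lmax = np.linalg.eigvalsh(L).max()     # dense eigensolve, fine for a toy example
    L_hat = 2.0 * L / lmax - np.eye(n)
    Tx_prev, Tx = x, L_hat @ x             # T_0 x and T_1 x
    y = theta[0] * Tx_prev
    if len(theta) > 1:
        y = y + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_prev, Tx = Tx, 2.0 * L_hat @ Tx - Tx_prev   # T_k = 2 L_hat T_{k-1} - T_{k-2}
        y = y + theta[k] * Tx
    return y
```

    Because T_k(L_hat) is a degree-k polynomial in the operator, the filter is k-hop localized on the mesh, which is what makes the Chebyshev parameterization attractive for mesh convolutions.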

    Common Sense Enhanced Knowledge-based Recommendation with Large Language Model

    Knowledge-based recommendation models effectively alleviate the data sparsity issue by leveraging the side information in a knowledge graph, and have achieved considerable performance. Nevertheless, the knowledge graphs used in previous work, namely metadata-based knowledge graphs, are usually constructed from item attributes and co-occurrence relations (e.g., also-buy), in which the former provide limited information and the latter rely on sufficient interaction data and still suffer from the cold-start issue. Common sense, as a form of knowledge with generality and universality, can supplement the metadata-based knowledge graph and provides a new perspective for modeling users' preferences. Recently, benefiting from the emergent world knowledge of large language models, efficient acquisition of common sense has become possible. In this paper, we propose CSRec, a novel knowledge-based recommendation framework incorporating common sense, which can be flexibly coupled with existing knowledge-based methods. To address the knowledge gap between the common-sense-based knowledge graph and the metadata-based knowledge graph, we propose a knowledge fusion approach based on mutual information maximization theory. Experimental results on public datasets demonstrate that our approach significantly improves the performance of existing knowledge-based recommendation models.
    Comment: Accepted by DASFAA 202
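    To make the mutual-information-maximization idea concrete, here is a minimal InfoNCE-style lower bound between paired item embeddings from two knowledge graphs; the function name, the cosine scoring, and the temperature are illustrative assumptions rather than the paper's actual objective.

```python
import numpy as np

def infonce_lower_bound(za, zb, temperature=0.1):
    """InfoNCE estimate, a lower bound (up to log N) on the mutual
    information between paired embeddings za[i] <-> zb[i]: maximizing it
    pushes matched pairs to score higher than mismatched ones."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)   # cosine-normalize
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / temperature                         # pairwise similarity logits
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(np.mean(np.diag(log_probs)))             # mean log p(correct match)
```

    During fusion, one would maximize this quantity so that an item's common-sense-graph embedding and its metadata-graph embedding become mutually predictive.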

    Research on prevention and control methods of land subsidence induced by groundwater overexploitation based on a three-dimensional fluid-solid coupling model: a case study of Guangrao County

    Land subsidence is an environmental geological phenomenon of slowly decreasing ground elevation. The North China Plain is one of the areas with the most serious land subsidence in China, and Guangrao County is one of its subsidence centers. Based on the hydrogeological and engineering geological data of Guangrao County, long-term groundwater monitoring data, and land subsidence monitoring data, this paper systematically analyzes the dynamic characteristics of groundwater, the distribution and evolution of land subsidence, and the correlation between groundwater exploitation and land subsidence development in different layers of the area. Based on Biot's porous-medium consolidation theory, it establishes a three-dimensional fluid-solid coupled numerical model of land subsidence in Guangrao County, reconstructs the development process of land subsidence, predicts the subsidence evolution under different groundwater exploitation schemes, and proposes targeted prevention and control measures. The results show that the shallow groundwater forms a cone of depression centered on the Guangbei Salt Field, while the deep groundwater forms an elliptical regional cone of depression centered on the urban area. Two small settlement areas have gradually formed, centered respectively on the urban area of Guangrao County and on the Guangbei Salt Field, and they show a trend of merging as they expand. The three-dimensional fluid-solid coupled model accurately reproduced the development of land subsidence in the study area and predicted that, under current groundwater exploitation conditions, the settlement at the Guangrao urban settlement center will increase to 1,350 mm by 2040, forming a large regional funnel centered on the urban area that gradually develops and expands outward. Prohibiting groundwater exploitation in the main funnel area is a more reasonable and effective plan to prevent the development of land subsidence.

    Sequential Recommendation with Latent Relations based on Large Language Model

    Sequential recommender systems predict items that may interest users by modeling their preferences from historical interactions. Traditional sequential recommendation methods rely on capturing implicit collaborative-filtering signals among items. Recent relation-aware sequential recommendation models have achieved promising performance by explicitly incorporating item relations, mostly extracted from knowledge graphs, into the modeling of user historical sequences. However, existing methods rely on manually predefined relations and suffer from sparsity, limiting their generalization to diverse scenarios with varied item relations. In this paper, we propose a novel relation-aware sequential recommendation framework with Latent Relation Discovery (LRD). Unlike previous relation-aware models that rely on predefined rules, we propose to leverage a large language model (LLM) to provide new types of relations and connections between items. The motivation is that LLMs contain abundant world knowledge, which can be adopted to mine latent item relations for recommendation. Specifically, inspired by the observation that humans can describe relations between items in natural language, LRD harnesses the LLM's demonstrated human-like knowledge to obtain language knowledge representations of items. These representations are fed into a latent relation discovery module based on the discrete-state variational autoencoder (DVAE). The self-supervised relation discovery tasks and recommendation tasks are then jointly optimized. Experimental results on multiple public datasets demonstrate that our proposed latent relation discovery method can be incorporated into existing relation-aware sequential recommendation models and significantly improves their performance. Further analysis confirms the effectiveness and reliability of the discovered latent relations.
    Comment: Accepted by SIGIR 202
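    A discrete-state VAE needs a differentiable way to sample categorical latents (here, relation types). A standard choice for this is the Gumbel-softmax relaxation, sketched below; this is a common technique in discrete VAEs and is assumed for illustration, not taken from the paper.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=0.5, rng=None):
    """Relaxed sample from a categorical distribution: perturb the logits
    with Gumbel noise, then soften the argmax with a temperature tau.
    As tau -> 0 the output approaches a one-hot relation assignment."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(1e-10, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    y = (np.asarray(logits) + g) / tau
    y = y - y.max()                    # numerical stability before exp
    e = np.exp(y)
    return e / e.sum()
```

    In a latent-relation module, the logits would come from the encoder over the LLM-derived item representations, and the relaxed sample would select which latent relation links a pair of items.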

    Equiaxed Ti-based Composites With High Strength And Large Plasticity Prepared By Sintering And Crystallizing Amorphous Powder

    High-performance titanium alloys with an equiaxed composite microstructure were achieved by sintering and crystallizing amorphous powder. By introducing a second phase into a β-Ti matrix, a series of optimized Ti-Nb-Fe-Co-Al and Ti-Nb-Cu-Ni-Al composites, whose microstructure consists of ultrafine-grained, equiaxed CoTi2 or (Cu,Ni)Ti2 precipitate phases surrounded by a ductile β-Ti matrix, were fabricated by sintering and crystallizing mechanically alloyed amorphous powder. The as-fabricated composites exhibit an ultra-high ultimate compressive strength of 2585 MPa and an extremely large compressive plastic strain of around 40%, exceeding the corresponding values for most titanium alloys. In contrast, the alloy fabricated by sintering and crystallizing Ti-Zr-Cu-Ni-Al amorphous powder, which possesses significantly higher glass-forming ability than the Ti-Nb-Fe-Co-Al and Ti-Nb-Cu-Ni-Al alloy systems, exhibits a complex microstructure with several intermetallic compounds and a typically brittle fracture behavior. The deformation behavior and fracture mechanism indicate that the ultrahigh compressive strength and large plasticity of the as-fabricated equiaxed composites are induced, respectively, by the dislocation-pinning effect of the CoTi2 or (Cu,Ni)Ti2 second phases and by the interaction and multiplication of shear bands generated in the ductile β-Ti matrix. These results provide basic guidelines for designing and fabricating titanium alloys with excellent mechanical properties by powder metallurgy.

    Double Correction Framework for Denoising Recommendation

    Owing to its availability and generality in online services, implicit feedback is commonly used in recommender systems. However, implicit feedback usually contains noisy samples in real-world recommendation scenarios (such as misclicks or non-preferential behaviors), which hinder precise learning of user preferences. A popular solution to this problem is to drop noisy samples during model training, following the observation that noisy samples have higher training losses than clean samples. Despite its effectiveness, we argue that this solution still has limitations. (1) High training losses can result from model optimization instability or hard samples, not just noisy samples. (2) Completely dropping noisy samples aggravates data sparsity and prevents full exploitation of the data. To tackle these limitations, we propose a Double Correction Framework for Denoising Recommendation (DCF), which contains two correction components, aimed respectively at more precise sample dropping and at avoiding sparser data. In the sample-dropping correction component, we use a sample's loss values over time to determine whether it is noisy, increasing dropping stability. Instead of averaging the losses directly, we apply a damping function to reduce the biasing effect of outliers. Furthermore, since hard samples exhibit higher variance, we derive a lower bound on the loss via a concentration inequality to identify and reuse hard samples. In the progressive label correction component, we iteratively re-label highly deterministic noisy samples and retrain on them to further improve performance. Finally, extensive experimental results on three datasets and four backbones demonstrate the effectiveness and generalization of our proposed framework.
    Comment: Accepted by KDD 202
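    The dropping-correction idea (scoring each sample by its loss trajectory rather than a single loss value) can be sketched as follows; the exponential damping weights and the fixed threshold rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def damped_mean_loss(loss_history, damping=0.5):
    """Damping-weighted average of one sample's loss over training epochs.
    Recent epochs get the largest weights, so a transient early spike
    (e.g. from optimization instability) is discounted."""
    loss_history = np.asarray(loss_history, dtype=float)
    T = len(loss_history)
    w = np.exp(-damping * np.arange(T)[::-1])   # w_t = exp(-damping * (T-1-t))
    return float(w @ loss_history / w.sum())

def drop_mask(loss_histories, threshold):
    """True for samples whose damped mean loss exceeds the threshold,
    i.e. candidates to be treated as noisy and dropped."""
    scores = np.array([damped_mean_loss(h) for h in loss_histories])
    return scores > threshold
```

    A sample with one early loss spike but consistently low later losses scores below its plain average, so it survives the drop, which is exactly the stability the trajectory view buys over single-epoch dropping.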

    OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing

    Non-mydriatic retinal color fundus photography (CFP) is widely available because it does not require pupillary dilation, but it is prone to poor quality due to operator error, systemic imperfections, or patient-related causes. Optimal retinal image quality is required for accurate medical diagnoses and automated analyses. Herein, we leveraged optimal transport (OT) theory to propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts. Furthermore, to improve the flexibility, robustness, and applicability of our image enhancement pipeline in clinical practice, we generalized a state-of-the-art model-based image reconstruction method, regularization by denoising, by plugging in priors learned by our OT-guided image-to-image translation network; we named the result regularization by enhancing (RE). We validated the integrated framework, OTRE, on three publicly available retinal image datasets by assessing the post-enhancement image quality and the performance on various downstream tasks, including diabetic retinopathy grading, vessel segmentation, and diabetic lesion segmentation. The experimental results demonstrated the superiority of our proposed framework over several state-of-the-art unsupervised competitors and a state-of-the-art supervised method.
    Comment: Accepted as a conference paper to the 28th biennial international conference on Information Processing in Medical Imaging (IPMI 2023)
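    The regularization-by-enhancing scheme inherits the gradient structure of regularization by denoising (RED). A minimal 1-D sketch is shown below, with a simple moving-average smoother standing in for the learned enhancer; the paper instead plugs in its OT-guided translation network, and the RED gradient form assumes the usual conditions on the enhancer.

```python
import numpy as np

def re_reconstruct(y, enhancer, lam=0.5, step=0.2, iters=50):
    """Gradient-descent sketch of regularization by denoising/enhancing:
    minimize 0.5*||x - y||^2 + (lam/2) * x^T (x - E(x)), whose gradient
    (under the standard RED assumptions on E) is (x - y) + lam*(x - E(x))."""
    x = y.copy()
    for _ in range(iters):
        grad = (x - y) + lam * (x - enhancer(x))
        x = x - step * grad
    return x

def smooth(x):
    """Stand-in 'enhancer': a 3-tap moving-average smoother."""
    return np.convolve(x, np.ones(3) / 3.0, mode="same")
```

    Replacing `smooth` with a learned enhancer is the whole point of the plug-in formulation: the data-fidelity term and the iteration stay unchanged while the prior improves.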