
    Research of influence and mechanism of combining exercise with diet control on a model of lipid metabolism rat induced by high fat diet

    OBJECTIVE: To investigate the influence and mechanism of combining exercise with diet control in a rat model of lipid metabolism disorder induced by a high-fat diet. METHODS: Twenty-four male Wistar rats were randomly divided into three groups of eight: normal, model and intervention. The model and intervention groups were fed a high-fat diet, while the normal group received basal feed. From day 1, the intervention group additionally received swimming exercise and dietary restriction; the intervention lasted 28 days. At the end of the experiment, body weight and liver weight were measured; serum levels of total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) and hepatic triglyceride (TG) content were determined by biochemical assay; and serum levels of gastrin (GAS) and motilin (MTL) were measured by enzyme-linked immunosorbent assay (ELISA). RESULTS: Compared with the normal group, body weight and liver weight in the model group were significantly increased (P<0.05 or P<0.01). Serum TC and LDL-C and hepatic TG in the model group were significantly higher than in the normal group (P<0.05 or P<0.01), while GAS, MTL and HDL-C were significantly reduced (P<0.05 or P<0.01). Compared with the model group, body weight, liver weight, serum TC and LDL-C, and hepatic TG content in the intervention group decreased significantly (P<0.05 or P<0.01), and serum GAS, MTL and HDL-C were significantly increased. CONCLUSION: The effect of combining exercise with diet control on lipid metabolism disorder may be related to the regulation of GAS, MTL and other gastrointestinal hormones.

    The impact of programme management on the speed of AI development in semiconductor industries

    The rapid advancement of AI has swept through the technology industry. This dissertation explores the relationship between AI projects and programme management, analyzing which programme management parameters account for differences in AI project development speed among high-tech IC companies. The literature on AI project management indicates that rapidly evolving AI tools are gradually being integrated into various dimensions of programme management, and that individual programme parameters also have some impact on AI development; however, there is less emphasis on the interrelationships between these parameters or on which are truly the most critical in an AI programme. The research followed a mixed-methods design. Qualitative data were gathered through interviews with team members at different IC companies, covering diverse experience, locations, backgrounds, and roles. Data extraction and analysis were carried out with deep neural network models to identify the main elements of high-tech programme management. Overall, the research finds that technical skills and leadership are the critical parameters, while execution, culture, and organization have less impact on AI programmes than on traditional ones. "Technology and leadership" should remain the primary focus of AI programme management in the IC industry.

    The study of sediment dynamics in a shallow estuary using integrated numerical modeling and satellite remote sensing

    The primary objective of this study is to develop an effective tool to investigate the resuspension, deposition and transport of mixed cohesive and non-cohesive sediments in an estuary. The research integrates 1) statistical analyses of the primary forcing, 2) numerical models for hydrodynamics, surface waves and sediment transport, and 3) MODIS (Moderate Resolution Imaging Spectroradiometer) remotely sensed imagery to investigate hydrodynamics and sediment dynamics in Mobile Bay, Alabama. First, based on long-term meteorological and tidal observations, statistical analyses were conducted to predict extreme values of winds and water levels at different return periods in the estuary. The application of the predicted extreme winds and surges is illustrated through the development of a storm wave atlas and the estimation of erosion potential in the estuary. Second, three open-source community models for estuarine circulation, wind-wave prediction, and sediment transport were coupled and carefully tested against available field measurements. In particular, the sediment transport model was improved by implementing a continuous deposition scheme, the general solution to the wave-current bottom boundary layer model (Grant and Madsen, 1979), and a formula for flocculation-influenced settling velocity (Whitehouse et al., 2000). Idealized test cases were designed to evaluate the performance of the integrated model system, in addition to model calibration and verification using field observations. Third, a new algorithm was developed for Mobile Bay based on suspended sediment concentrations measured from field water samples and the corrected MODIS red-channel reflectance. The algorithm was applied not only to winter cold-front and post-hurricane conditions, to reveal the impact of different forcing agents on sediment dynamics in Mobile Bay, but also to normal weather conditions, providing guidance for calibrating the sediment transport model. The integration of these three approaches has enabled us to understand how land-based particulates are transported, deposited and resuspended in the estuary, and to reveal the dynamic changes of suspended sediment concentration under normal and extreme forcing. The methodology and tools developed in this study can be applied to other coastal areas.
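    As a hedged illustration of the return-period analysis described above, the sketch below fits a Gumbel (extreme value type I) distribution to a set of annual maximum wind speeds and evaluates return levels; the data values and the choice of distribution are assumptions for illustration, not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum wind speeds (m/s) standing in for the
# long-term meteorological record analyzed in the study.
annual_max_wind = np.array([18.2, 21.5, 19.8, 24.1, 22.7, 20.3,
                            25.6, 19.1, 23.4, 26.8, 21.9, 22.2])

# Fit a Gumbel (extreme value type I) distribution to the annual maxima.
loc, scale = stats.gumbel_r.fit(annual_max_wind)

# The T-year return level is the value exceeded with probability 1/T in any
# given year, i.e. the (1 - 1/T) quantile of the fitted distribution.
for T in (10, 50, 100):
    return_level = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year return wind speed: {return_level:.1f} m/s")
```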

    Expression and clinical significance of RHCG in endometrial cancer

    Endometrial cancer (EC) is the most common gynecological cancer. Rhesus family C glycoprotein (RHCG) has been shown to be involved in the occurrence and development of various tumors. This study aimed to investigate the expression and clinical significance of RHCG in EC. Bioinformatics analysis was based on RNA-seq count data from the TCGA database, and prognosis was analyzed using the Kaplan-Meier method; four endometrioid adenocarcinoma samples and four normal proliferative endometrium samples were collected for qPCR and western blot; immunohistochemistry was employed to assess the expression of RHCG in a tissue microarray; and the correlation between RHCG and clinicopathological factors was analyzed with the Mann-Whitney U test. A lentiviral RHCG-interference vector (shRHCG) was also constructed. The results demonstrated that RHCG was highly expressed in EC tissues and was an independent factor affecting the overall survival of patients. Additionally, RHCG expression was related to FIGO stage and tumor infiltration. After knockdown with shRHCG, the proliferative activity of EC cells decreased, cell migration decreased, apoptosis increased, and tumor growth was arrested. In summary, RHCG promotes the malignant proliferation and migration of EC and confers resistance to apoptosis. Our study provides a theoretical basis for RHCG as a potential therapeutic target for EC in the future.
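    A minimal sketch of the Kaplan-Meier prognosis analysis mentioned above, assuming a hypothetical patient table with follow-up time, event status, and an RHCG high/low grouping; it is illustrative only and does not reproduce the study's TCGA processing.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical patient-level data: follow-up time (months), death event
# indicator, and RHCG expression group (e.g. split at the cohort median).
df = pd.DataFrame({
    "time":      [12, 34, 56, 7, 89, 23, 45, 67, 15, 38],
    "event":     [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "rhcg_high": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
})
high, low = df[df["rhcg_high"] == 1], df[df["rhcg_high"] == 0]

# Kaplan-Meier survival estimate for the RHCG-high group.
kmf = KaplanMeierFitter()
kmf.fit(high["time"], event_observed=high["event"], label="RHCG high")
print("median survival (RHCG high):", kmf.median_survival_time_)

# Log-rank test for a survival difference between the two expression groups.
result = logrank_test(high["time"], low["time"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
print(f"log-rank p-value: {result.p_value:.3f}")
```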

    Robust model of fresh jujube soluble solids content with near-infrared (NIR) spectroscopy

    A robust partial least squares (PLS) calibration model with high accuracy and stability was established for measuring the soluble solids content (SSC) of fresh jujube using near-infrared (NIR) spectroscopy. Fresh jujube samples were collected in different areas of Taigu and Taiyuan, central China, in 2008 and 2009. A PLS calibration model was first established based on the NIR spectra of 70 fresh jujube samples collected in 2008; a good calibration result was obtained, with a correlation coefficient (Rc) of 0.9530 and a root mean square error of calibration (RMSEC) of 0.3951 °Brix. Another PLS calibration model was established based on the NIR spectra of 180 samples collected in 2009, giving an Rc of 0.8536 and an RMSEC of 1.1410 °Brix. The accuracy of the established PLS models thus differed when samples harvested in different years were used for calibration. To improve the accuracy and robustness of the model, different numbers (5, 10, 15, 20, 30 and 40) of samples harvested in 2008 were added to the calibration sample set of the 2009 model. The resulting PLS models gave Rc values ranging from 0.8846 to 0.8893 and RMSEC values ranging from 1.0248 to 0.9645 °Brix, better than the model established only with samples harvested in 2009; moreover, the models established with different numbers of added samples gave similar results. It was therefore concluded that adding samples from another harvest year can improve the accuracy and robustness of the model for SSC prediction of fresh jujube, and that considering samples from different harvest places and years is useful for establishing an accurate and robust spectral model.
    Keywords: Near-infrared (NIR) spectroscopy, Huping jujube, soluble solids content (SSC), partial least squares (PLS), accuracy, stability
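    A minimal sketch of a PLS calibration of SSC on NIR spectra, of the kind described above, using scikit-learn; the synthetic spectra, the number of latent variables, and the calibration/validation split are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-ins: 250 "spectra" with 600 wavelength points, and an SSC
# value (°Brix) loosely tied to a few spectral bands plus noise.
X = rng.normal(size=(250, 600))
y = 20.0 + 1.5 * X[:, 100] + 0.8 * X[:, 400] + rng.normal(scale=0.3, size=250)

# Split into a calibration set and a validation set.
X_cal, X_val = X[:180], X[180:]
y_cal, y_val = y[:180], y[180:]

# Fit the PLS model; the number of latent variables is illustrative.
pls = PLSRegression(n_components=8)
pls.fit(X_cal, y_cal)

# Evaluate on the validation set (correlation coefficient and RMSEP).
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(mean_squared_error(y_val, y_pred))
r = np.corrcoef(y_val, y_pred)[0, 1]
print(f"R = {r:.3f}, RMSEP = {rmsep:.3f} °Brix")
```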

    All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining

    Large Language Models (LLMs) have revolutionized the fields of computer vision (CV) and natural language processing (NLP). One of the most notable advancements of LLMs is that a single model is trained on vast and diverse datasets spanning multiple domains -- a paradigm we term 'All in One'. This methodology empowers LLMs with strong generalization capabilities, facilitating an encompassing comprehension of varied data distributions. Leveraging these capabilities, a single LLM demonstrates remarkable versatility across a variety of domains -- a paradigm we term 'One for All'. However, applying this idea to the graph field remains a formidable challenge, with cross-domain pretraining often resulting in negative transfer. This issue is particularly important in few-shot learning scenarios, where the paucity of training data necessitates the incorporation of external knowledge sources. In response to this challenge, we propose a novel approach called Graph COordinators for PrEtraining (GCOPE), which harnesses the underlying commonalities across diverse graph datasets to enhance few-shot learning. Our methodology involves a unification framework that amalgamates disparate graph datasets during the pretraining phase to distill and transfer meaningful knowledge to target tasks. Extensive experiments across multiple graph datasets demonstrate the superior efficacy of our approach. By successfully leveraging the synergistic potential of multiple graph datasets for pretraining, our work stands as a pioneering contribution to the realm of graph foundation models.
    Comment: Accepted to KDD '24, August 25-29, 2024, Barcelona, Spain
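    As a rough, hedged sketch of cross-domain graph pretraining in general (not the GCOPE method itself), the code below pretrains one shared GNN encoder over several graph datasets with heterogeneous feature spaces, using per-dataset projectors and a simple feature-reconstruction objective; the class names, the objective, and all parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

# Illustrative only: a shared GNN encoder pretrained jointly on graphs from
# several domains. Per-dataset linear projectors map heterogeneous node
# features into a common hidden space, and per-dataset decoders reconstruct
# the raw features as a simple self-supervised objective.
class SharedGraphEncoder(nn.Module):
    def __init__(self, feat_dims, hidden=64):
        super().__init__()
        self.projectors = nn.ModuleList([nn.Linear(d, hidden) for d in feat_dims])
        self.decoders = nn.ModuleList([nn.Linear(hidden, d) for d in feat_dims])
        self.conv1 = GCNConv(hidden, hidden)
        self.conv2 = GCNConv(hidden, hidden)

    def encode(self, x, edge_index, dataset_id):
        h = torch.relu(self.conv1(self.projectors[dataset_id](x), edge_index))
        return self.conv2(h, edge_index)

def pretrain(model, loaders, epochs=10, lr=1e-3):
    """loaders: one torch_geometric DataLoader per source dataset."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for dataset_id, loader in enumerate(loaders):
            for batch in loader:
                opt.zero_grad()
                z = model.encode(batch.x, batch.edge_index, dataset_id)
                recon = model.decoders[dataset_id](z)
                loss = ((recon - batch.x) ** 2).mean()  # feature reconstruction
                loss.backward()
                opt.step()
```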

    Numerical Analysis and Optimization for Hydrodynamic Lubrication in Journal Bearings of Rotary Compressor

    Based on the average Reynolds equation of hydrodynamic lubrication, the axis locus and the minimum oil film thickness of the journal bearings under dynamic load were solved numerically in this paper, and the movement and bearing characteristics of the compressor bearings at different rotation speeds were analyzed to evaluate the influence of the friction loss of the journal bearings on the performance of the rotary compressor. The simulation results indicated that the minimum oil film thickness of the sub-bearing is smaller than the critical oil film thickness at low rotation speed (n ≤ 1800 rpm), so the friction power increases significantly as the rotation speed decreases. The effects of the width-to-diameter ratio, the clearance and the viscosity of the lubricating oil on improving the minimum oil film thickness were further analyzed. By optimizing the width-to-diameter ratio of the bearing, the load-carrying capacity of the oil film is improved and the friction power of the sub-bearing is reduced by more than 80%. The experimental results showed that the performance of the rotary compressor can be improved by more than 1% at low rotation speed.
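    A hedged numerical sketch of the kind of Reynolds-equation solution described above: the steady, isoviscous Reynolds equation is solved over the unwrapped oil film of a journal bearing by iterative relaxation, with negative pressures clamped to zero as a crude cavitation condition. The geometry, speed, viscosity and eccentricity values are illustrative assumptions, not the compressor's parameters, and the paper's averaged (flow-factor) form and dynamic loading are not reproduced.

```python
import numpy as np

# Illustrative journal-bearing parameters (not from the study).
R, L, c = 0.02, 0.02, 30e-6          # journal radius, bearing length, radial clearance (m)
eps, mu = 0.6, 0.008                 # eccentricity ratio, oil viscosity (Pa*s)
N = 3000.0 / 60.0                    # shaft speed (rev/s)
U = 2.0 * np.pi * R * N              # journal surface speed (m/s)

ntheta, nz = 72, 24
theta = np.linspace(0.0, 2.0 * np.pi, ntheta)
zgrid = np.linspace(0.0, L, nz)
dx, dz = R * (theta[1] - theta[0]), zgrid[1] - zgrid[0]

h = c * (1.0 + eps * np.cos(theta))          # film thickness around the circumference
dhdx = -c * eps * np.sin(theta) / R          # dh/dx with x = R*theta

# Solve d/dx(h^3 dp/dx) + d/dz(h^3 dp/dz) = 6*mu*U*dh/dx with p = 0 on the
# boundaries, sweeping the grid and relaxing each interior point.
p = np.zeros((ntheta, nz))
for _ in range(3000):
    for i in range(1, ntheta - 1):
        hE = ((h[i] + h[i + 1]) / 2.0) ** 3  # h^3 at the east/west cell faces
        hW = ((h[i] + h[i - 1]) / 2.0) ** 3
        hC = h[i] ** 3
        num = (hE * p[i + 1, 1:-1] + hW * p[i - 1, 1:-1]) / dx**2 \
              + hC * (p[i, 2:] + p[i, :-2]) / dz**2 \
              - 6.0 * mu * U * dhdx[i]
        p[i, 1:-1] = num / ((hE + hW) / dx**2 + 2.0 * hC / dz**2)
    np.maximum(p, 0.0, out=p)                # half-Sommerfeld cavitation condition

print(f"peak film pressure ~ {p.max() / 1e6:.2f} MPa, minimum film thickness = {h.min() * 1e6:.1f} um")
```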

    ProG: A Graph Prompt Learning Benchmark

    Artificial general intelligence on graphs has shown significant advancements across various applications, yet the traditional 'Pre-train & Fine-tune' paradigm faces inefficiencies and negative transfer issues, particularly in complex and few-shot settings. Graph prompt learning emerges as a promising alternative, leveraging lightweight prompts to manipulate data and fill the task gap by reformulating downstream tasks to the pretext. However, several critical challenges remain: how to unify diverse graph prompt models, how to evaluate the quality of graph prompts, and how to improve their usability for practical comparison and selection. In response to these challenges, we introduce the first comprehensive benchmark for graph prompt learning. Our benchmark integrates SIX pre-training methods and FIVE state-of-the-art graph prompt techniques, evaluated across FIFTEEN diverse datasets to assess performance, flexibility, and efficiency. We also present 'ProG', an easy-to-use open-source library that streamlines the execution of various graph prompt models, facilitating objective evaluations. Additionally, we propose a unified framework that categorizes existing graph prompt methods into two main approaches: prompts as graphs and prompts as tokens. This framework enhances the applicability and comparison of graph prompt techniques. The code is available at: https://github.com/sheldonresearch/ProG
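    To make the "prompts as tokens" category concrete, here is a minimal, hedged sketch in plain PyTorch: a small set of learnable prompt vectors is mixed into the node features fed to a frozen, pretrained GNN encoder, so only the prompt (and a light task head) is tuned for the downstream task. This is a generic illustration of the idea and deliberately does not use or reproduce the ProG library's API.

```python
import torch
import torch.nn as nn

# "Prompts as tokens", generically: learnable vectors attached to the node
# features of a frozen pretrained encoder. Only the prompt parameters train.
class TokenPrompt(nn.Module):
    def __init__(self, feat_dim, num_tokens=4):
        super().__init__()
        self.tokens = nn.Parameter(0.01 * torch.randn(num_tokens, feat_dim))

    def forward(self, x):
        # Weight each prompt token by its similarity to the node feature,
        # then add the weighted mixture to the input features.
        weights = torch.softmax(x @ self.tokens.t(), dim=-1)  # [num_nodes, num_tokens]
        return x + weights @ self.tokens                      # [num_nodes, feat_dim]

# Hypothetical usage with some pretrained encoder whose weights stay frozen:
#   prompt = TokenPrompt(feat_dim=128)
#   z = frozen_encoder(prompt(x), edge_index)   # only `prompt` receives gradients
```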

    SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases

    Graph Neural Networks (GNNs) with equivariant properties have emerged as powerful tools for modeling complex dynamics of multi-object physical systems. However, their generalization ability is limited by the inadequate consideration of physical inductive biases: (1) Existing studies overlook the continuity of transitions among system states, opting to employ several discrete transformation layers to learn the direct mapping between two adjacent states; (2) Most models only account for first-order velocity information, despite the fact that many physical systems are governed by second-order motion laws. To incorporate these inductive biases, we propose the Second-order Equivariant Graph Neural Ordinary Differential Equation (SEGNO). Specifically, we show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property. Furthermore, we offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states, which is crucial for model generalization. Additionally, we prove that the discrepancy between this learned trajectory of SEGNO and the true trajectory is bounded. Extensive experiments on complex dynamical systems including molecular dynamics and motion capture demonstrate that our model yields a significant improvement over the state-of-the-art baselines.
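    A hedged sketch of the second-order idea described above: rather than mapping one state directly to the next with a stack of discrete layers, a learned model predicts accelerations and the state is advanced by integrating a second-order ODE in small substeps. The pairwise acceleration model below is a plain placeholder MLP, not the equivariant SEGNO architecture, and all names and parameters are illustrative.

```python
import torch
import torch.nn as nn

# Placeholder acceleration model: messages along pairwise displacement vectors,
# scaled by an MLP of the relative geometry (a crude stand-in for a GNN layer).
class PairwiseAcceleration(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, hidden), nn.SiLU(), nn.Linear(hidden, 1))

    def forward(self, pos):                                       # pos: [n, 3]
        rel = pos[:, None, :] - pos[None, :, :]                   # [n, n, 3]
        dist = rel.norm(dim=-1, keepdim=True).clamp(min=1e-6)     # [n, n, 1]
        scale = self.mlp(torch.cat([rel, dist], dim=-1))          # [n, n, 1]
        return (scale * rel / dist).sum(dim=1)                    # [n, 3] accelerations

def advance(model, pos, vel, dt=0.1, substeps=10):
    """Advance (position, velocity) toward the next observed state by
    integrating predicted accelerations with semi-implicit Euler substeps,
    reflecting second-order (acceleration-based) dynamics."""
    h = dt / substeps
    for _ in range(substeps):
        acc = model(pos)
        vel = vel + h * acc
        pos = pos + h * vel
    return pos, vel

# Example: five particles in 3-D, starting at rest.
model = PairwiseAcceleration()
pos, vel = torch.randn(5, 3), torch.zeros(5, 3)
next_pos, next_vel = advance(model, pos, vel)
```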