
    Estimation of a Composite Food Demand System for the United States—A Revisit

    We revisit the composite food demand system for the United States covering the period from 1953 to 2008. Although the same demand system specification was employed, we were unable to reproduce elasticity measures close to those reported in Huang and Haidacher (1983). Our results based on the more recent 1982-2008 data set show that most of the own-price elasticities are negative and statistically significant, varying from -.1861 (poultry) to -.9476 (nonfood). In general, the estimated own-price elasticities appear to be smaller in magnitude, or more inelastic, than previously reported. In contrast, the significant income elasticities we estimate for food vary from .5172 (dairy) to 4.6687 (fish), while Huang and Haidacher (1983) report income elasticities ranging from -.6343 (fruits) to .5748 (fats & oils). The estimated income elasticities for the nonfood group appear to remain fairly constant across studies, ranging from 1.0407 to 1.2035.
    Keywords: differential-form demand system, iterative seemingly unrelated regression, Engel aggregation, homogeneity, symmetry, uncompensated and compensated elasticities, Demand and Price Analysis, Food Consumption/Nutrition/Food Safety
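    The system described above is estimated by iterative seemingly unrelated regression (SUR). Purely as an illustration, and not the authors' code (the design matrices, stacking order, and convergence rule are assumptions), an iterated SUR / feasible-GLS loop for a system of linear demand equations can be sketched in Python:

        import numpy as np

        def iterative_sur(X_list, y_list, tol=1e-8, max_iter=100):
            """Iterated SUR for m linear equations observed over the same n periods.

            X_list: list of (n, k_i) design matrices, one per equation (hypothetical)
            y_list: list of (n,) dependent variables, one per equation
            Returns the stacked coefficient vector and the residual covariance."""
            n, m = len(y_list[0]), len(y_list)
            k_total = sum(Xi.shape[1] for Xi in X_list)
            # Stack the system: block-diagonal design matrix, concatenated responses
            X = np.zeros((n * m, k_total))
            col = 0
            for i, Xi in enumerate(X_list):
                X[i * n:(i + 1) * n, col:col + Xi.shape[1]] = Xi
                col += Xi.shape[1]
            y = np.concatenate(y_list)

            beta = np.linalg.lstsq(X, y, rcond=None)[0]   # equation-by-equation OLS start
            for _ in range(max_iter):
                resid = (y - X @ beta).reshape(m, n)
                sigma = resid @ resid.T / n               # cross-equation error covariance
                omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
                beta_new = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
                if np.max(np.abs(beta_new - beta)) < tol:
                    return beta_new, sigma
                beta = beta_new
            return beta, sigma

    Homogeneity and symmetry restrictions of the differential-form system would in practice be imposed as linear constraints on the stacked coefficients; they are omitted from this sketch for brevity.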

    Lane change decision prediction: an efficient BO-XGB modelling approach with SHAP analysis

    The lane-change decision (LCD) is a critical aspect of driving behaviour. This study proposes an LCD model based on a Bayesian optimization (BO) framework and extreme gradient boosting (XGBoost) to predict whether a vehicle should change lanes. First, an LCD point extraction method is proposed to refine the exact LCD points in the highD dataset and increase model learning accuracy. Subsequently, an efficient XGBoost with BO (BO-XGB) is used to learn the LCD principles. The prediction accuracy on the highD dataset was 99.14% with a computation time of 66.837 s; the accuracy on the CQSkyEyeX dataset was 99.45%. A model explanation based on the Shapley additive explanations (SHAP) method was developed to analyse the mechanism behind BO-XGB's LCD predictions, covering both global and sample-level explanations. The former indicates each feature's contribution to the model prediction over the entire dataset; the latter denotes each feature's contribution to a single sample.
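    As a rough sketch of the kind of pipeline described above (not the released code; the feature matrix, search ranges, and the use of Optuna's TPE sampler as the Bayesian optimizer are all illustrative assumptions), BO-tuned XGBoost with SHAP analysis might look like:

        import optuna
        import shap
        import xgboost as xgb
        from sklearn.model_selection import cross_val_score

        def tune_and_explain(X_train, y_train, n_trials=50):
            """Tune an XGBoost lane-change classifier with Bayesian-style
            optimization, then compute SHAP values for the fitted model."""
            def objective(trial):
                params = {
                    "max_depth": trial.suggest_int("max_depth", 3, 10),
                    "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
                    "n_estimators": trial.suggest_int("n_estimators", 100, 600),
                    "subsample": trial.suggest_float("subsample", 0.5, 1.0),
                }
                model = xgb.XGBClassifier(**params, eval_metric="logloss")
                return cross_val_score(model, X_train, y_train, cv=5, scoring="accuracy").mean()

            study = optuna.create_study(direction="maximize")
            study.optimize(objective, n_trials=n_trials)

            best = xgb.XGBClassifier(**study.best_params, eval_metric="logloss")
            best.fit(X_train, y_train)

            # Global explanation: SHAP values over the whole training set;
            # a single row of shap_values gives the per-sample explanation.
            explainer = shap.TreeExplainer(best)
            shap_values = explainer.shap_values(X_train)
            return best, shap_values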

    SPA: A Graph Spectral Alignment Perspective for Domain Adaptation

    Unsupervised domain adaptation (UDA) is a pivotal problem in machine learning: extending an in-domain model to distinctive target domains where the data distributions differ. Most prior works focus on capturing inter-domain transferability but largely overlook rich intra-domain structures, which empirically results in even worse discriminability. In this work, we introduce a novel graph SPectral Alignment (SPA) framework to tackle this tradeoff. The core of our method is briefly condensed as follows: (i) by casting the DA problem to graph primitives, SPA composes a coarse graph alignment mechanism with a novel spectral regularizer towards aligning the domain graphs in eigenspaces; (ii) we further develop a fine-grained message propagation module, built upon a novel neighbor-aware self-training mechanism, to enhance discriminability in the target domain. On standardized benchmarks, extensive experiments demonstrate that SPA surpasses existing cutting-edge DA methods. Coupled with dense model analysis, we conclude that our approach indeed possesses superior efficacy, robustness, discriminability, and transferability. Code and data are available at: https://github.com/CrownX/SPA
    Comment: NeurIPS 2023 camera ready
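    The official implementation is at the repository linked above. Purely as an illustration of the general idea of aligning domain graphs in eigenspaces (a generic sketch, not the SPA method itself; the k-NN graph construction and the squared spectral gap are assumptions), one could compare the Laplacian spectra of source and target feature graphs:

        import numpy as np

        def laplacian_spectrum(features, k_neighbors=10, n_eigs=16):
            """Smallest eigenvalues of the normalized Laplacian of a cosine
            k-NN graph built over a batch of features with shape (n, d)."""
            f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
            sim = f @ f.T
            np.fill_diagonal(sim, -np.inf)                 # exclude self-edges
            idx = np.argsort(-sim, axis=1)[:, :k_neighbors]
            adj = np.zeros_like(sim)
            rows = np.repeat(np.arange(sim.shape[0]), k_neighbors)
            adj[rows, idx.ravel()] = 1.0
            adj = np.maximum(adj, adj.T)                   # symmetrize
            d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
            lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
            return np.linalg.eigvalsh(lap)[:n_eigs]

        def spectral_alignment_penalty(src_feats, tgt_feats):
            """Gap between source and target graph spectra (smaller = better aligned)."""
            gap = laplacian_spectrum(src_feats) - laplacian_spectrum(tgt_feats)
            return float(np.mean(gap ** 2))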

    Trace Amounts of Triple-Functional Additives Enable Reversible Aqueous Zinc-Ion Batteries from a Comprehensive Perspective

    Despite their cost-effectiveness and intrinsic safety, aqueous zinc-ion batteries suffer from notorious side reactions, including the hydrogen evolution reaction (HER), Zn corrosion and passivation, and Zn dendrite formation on the anode. Although numerous strategies to alleviate these side reactions have been demonstrated, they provide only limited performance improvement from a single aspect. Herein, a trace amount of a triple-functional additive, ammonium hydroxide, is demonstrated to comprehensively protect zinc anodes. The results show that the shift of electrolyte pH from 4.1 to 5.2 lowers the HER potential and encourages the in situ formation of a uniform ZHS-based solid electrolyte interphase on Zn anodes. Moreover, cationic NH4+ can preferentially adsorb on the Zn anode surface to shield the "tip effect" and homogenize the electric field. Benefitting from this comprehensive protection, dendrite-free Zn deposition and highly reversible Zn plating/stripping behaviour were realized. In addition, improved electrochemical performance is also achieved in Zn//MnO2 full cells by taking advantage of this triple-functional additive. This work provides a new strategy for stabilizing Zn anodes from a comprehensive perspective.
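    For intuition on the pH effect claimed above: assuming Nernstian behaviour at 25 °C, the equilibrium HER potential versus the standard hydrogen electrode is roughly E_HER = -0.0592 V × pH, so raising the electrolyte pH from 4.1 to 5.2 shifts E_HER from about -0.24 V to about -0.31 V, making hydrogen evolution thermodynamically less favourable at the Zn anode (a back-of-the-envelope estimate, not a value reported in the work above).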

    Impact of chest pain center quality control indicators on mortality risk in ST-segment elevation myocardial infarction patients: a study based on Killip classification

    Background: Despite the crucial role of chest pain centers (CPCs) in acute myocardial infarction (AMI) management, China's mortality rate for ST-segment elevation myocardial infarction (STEMI) has remained stagnant. This study evaluates the influence of CPC quality control indicators on mortality risk in STEMI patients receiving primary percutaneous coronary intervention (PPCI) during the COVID-19 pandemic. Methods: A cohort of 664 consecutive STEMI patients undergoing PPCI from 2020 to 2022 was analyzed using Cox proportional hazards regression models. The cohort was stratified by Killip classification at admission (Class 1: n = 402, Class ≥2: n = 262). Results: At a median follow-up of 17 months, 35 deaths were recorded. In Class ≥2, longer door-to-balloon (D-to-B) time, PCI informed consent time, catheterization laboratory activation time, and diagnosis-to-loading-dose dual antiplatelet therapy (DAPT) time were associated with increased mortality risk. In Class 1, a consultation time (notice to arrival) under 10 min reduced death risk. In Class ≥2, a PCI informed consent time under 20 min decreased mortality risk. Conclusion: CPC quality control metrics affect STEMI mortality in a manner that depends on Killip class. Key factors include time indicators and the standardization of CPC management. The study provides guidance for quality care during COVID-19.
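    As a hedged sketch of the stratified survival analysis described above (hypothetical column names and layout; not the study's code or data), a Cox proportional hazards fit per Killip stratum might look like:

        import pandas as pd
        from lifelines import CoxPHFitter

        def fit_killip_stratum(cohort: pd.DataFrame, killip_mask) -> CoxPHFitter:
            """Fit a Cox model on one Killip stratum of the cohort.

            Assumed columns (illustrative only): follow_up_months, died,
            d2b_minutes, consent_minutes, cathlab_activation_minutes, dapt_minutes."""
            covariates = ["d2b_minutes", "consent_minutes",
                          "cathlab_activation_minutes", "dapt_minutes"]
            cph = CoxPHFitter()
            cph.fit(cohort.loc[killip_mask, covariates + ["follow_up_months", "died"]],
                    duration_col="follow_up_months", event_col="died")
            return cph

        # Usage (assuming a `cohort` DataFrame with the columns above):
        # model_class1 = fit_killip_stratum(cohort, cohort["killip"] == 1)
        # model_class2plus = fit_killip_stratum(cohort, cohort["killip"] >= 2)
        # model_class2plus.print_summary()   # hazard ratios for each time indicator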

    Training semantic long-term memory retrieval transfers to executive function and reading fluency

    The retrieval of information from long-term memory is a fundamental cognitive ability, crucial for most aspects of successful human functioning. Whether and how long-term memory retrieval (LTMR) can be improved with training has clear societal importance, but also theoretical value for furthering our understanding of the underlying mechanisms. Here, we provide electrophysiological evidence for the plasticity of semantic LTMR. Thirty-five university students were randomly assigned to adaptive semantic LTMR training (using a Posner task) or to a non-adaptive version of the training. Before and after training, they were assessed on measures of semantic LTMR, working memory, central executive function (interference control, switching), reading fluency, and fluid intelligence. Adaptive LTMR training (relative to non-adaptive training) led to significant improvements in semantic LTMR. The intervention group (in contrast to the control group) also showed a significant reduction in the mean amplitude of the N400 ERP component and of the 700–1000 ms window measured during a semantic LTMR task, suggesting that changes occurred both at an early/automatic stage of retrieval and at a later stage of semantic retrieval processing. Moreover, transfer effects were observed for switching, working memory, and reading fluency, but not for interference control or fluid intelligence. These results point to the plasticity of semantic LTMR and suggest that improvement in this ability can transfer to other domains for which LTMR is key.
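    For concreteness, the windowed mean-amplitude measures mentioned above (the N400 and the 700–1000 ms window) can be computed from epoched, baseline-corrected EEG in a few lines; this is a generic illustration with assumed array shapes, sampling rate, and window bounds, not the authors' analysis code:

        import numpy as np

        def windowed_mean_amplitude(epochs, sfreq, t0, window):
            """Mean ERP amplitude per channel in a latency window.

            epochs: array (n_trials, n_channels, n_times) of baseline-corrected EEG
            sfreq:  sampling rate in Hz
            t0:     time of the first sample relative to stimulus onset, in seconds
            window: (start, stop) latency window in seconds"""
            start = int(round((window[0] - t0) * sfreq))
            stop = int(round((window[1] - t0) * sfreq))
            erp = epochs.mean(axis=0)               # average over trials -> (channels, times)
            return erp[:, start:stop].mean(axis=1)  # average over the window

        # e.g. an assumed N400 window of 0.3-0.5 s and the later 0.7-1.0 s window:
        # n400 = windowed_mean_amplitude(epochs, 500.0, -0.2, (0.3, 0.5))
        # late = windowed_mean_amplitude(epochs, 500.0, -0.2, (0.7, 1.0))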

    TableGPT: Towards Unifying Tables, Natural Language and Commands into One GPT

    Tables are prevalent in real-world databases, requiring significant time and effort for humans to analyze and manipulate. The advancements in large language models (LLMs) have made it possible to interact with tables using natural language input, bringing this capability closer to reality. In this paper, we present TableGPT, a unified fine-tuned framework that enables LLMs to understand and operate on tables using external functional commands. It introduces the capability to seamlessly interact with tables, enabling a wide range of functionalities such as question answering, data manipulation (e.g., insert, delete, query, and modify operations), data visualization, analysis report generation, and automated prediction. TableGPT aims to provide convenience and accessibility to users by empowering them to effortlessly leverage tabular data. At the core of TableGPT lies the novel concept of global tabular representations, which empowers LLMs to gain a comprehensive understanding of the entire table beyond its meta-information. By jointly training LLMs on both table and text modalities, TableGPT achieves a deep understanding of tabular data and the ability to perform complex operations on tables through chain-of-command instructions. Importantly, TableGPT offers the advantage of being a self-contained system rather than relying on external API interfaces. Moreover, it supports an efficient data processing flow, query rejection (when appropriate), and private deployment, enabling faster domain data fine-tuning and ensuring data privacy, which enhances the framework's adaptability to specific use cases.
    Comment: Technical Report
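    Purely as an illustration of the "external functional command" idea described above (the command schema and executor are invented for this sketch; this is not the TableGPT implementation), the LLM could be assumed to emit a small structured command that a thin executor applies to a pandas DataFrame:

        import json
        import pandas as pd

        def execute_command(df: pd.DataFrame, command_json: str) -> pd.DataFrame:
            """Apply a structured command (assumed to be produced by the LLM) to a table."""
            cmd = json.loads(command_json)
            op = cmd["op"]
            if op == "query":       # e.g. {"op": "query", "expr": "sales > 100"}
                return df.query(cmd["expr"])
            if op == "insert":      # e.g. {"op": "insert", "row": {"year": 2024, "sales": 7}}
                return pd.concat([df, pd.DataFrame([cmd["row"]])], ignore_index=True)
            if op == "delete":      # e.g. {"op": "delete", "expr": "year < 2000"}
                return df.drop(df.query(cmd["expr"]).index)
            raise ValueError(f"unsupported op: {op}")

        # Usage: df = execute_command(df, '{"op": "query", "expr": "year >= 2020"}')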