
    Exploring Large Language Models for Human Mobility Prediction under Public Events

    Public events, such as concerts and sports games, can be major attractors of large crowds, leading to irregular surges in travel demand. Accurate human mobility prediction for public events is thus crucial for event planning as well as traffic and crowd management. While rich textual descriptions of public events are commonly available from online sources, it is challenging to encode such information in statistical or machine learning models. Existing methods are generally limited in incorporating textual information, handling data sparsity, or providing rationales for their predictions. To address these challenges, we introduce a framework for human mobility prediction under public events (LLM-MPE) based on Large Language Models (LLMs), leveraging their unprecedented ability to process textual data, learn from minimal examples, and generate human-readable explanations. Specifically, LLM-MPE first transforms raw, unstructured event descriptions from online sources into a standardized format, and then segments historical mobility data into regular and event-related components. A prompting strategy is designed to direct LLMs to make and rationalize demand predictions based on historical mobility and event features. A case study is conducted for Barclays Center in New York City, based on publicly available event information and taxi trip data. Results show that LLM-MPE surpasses traditional models, particularly on event days, with textual data significantly enhancing its accuracy. Furthermore, LLM-MPE offers interpretable insights into its predictions. Despite the great potential of LLMs, we also identify key challenges, including misinformation and high costs, that remain barriers to their broader adoption in large-scale human mobility analysis.
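    The decomposition and prompting steps described above lend themselves to a compact illustration. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation; all function and field names are made up.

```python
# Hypothetical sketch of the decomposition + prompting idea (not the LLM-MPE code):
# the regular component is the average demand on non-event days per (weekday, hour)
# slot, and the event-related component is the residual observed on event days.
from statistics import mean

def decompose(demand, event_days):
    """demand: {(date, weekday, hour): trips}; event_days: set of dates with events."""
    slots = {}
    for (date, wd, hr), trips in demand.items():
        if date not in event_days:
            slots.setdefault((wd, hr), []).append(trips)
    regular = {slot: mean(v) for slot, v in slots.items()}
    event_part = {
        (date, hr): trips - regular.get((wd, hr), 0.0)
        for (date, wd, hr), trips in demand.items()
        if date in event_days
    }
    return regular, event_part

def build_prompt(regular, past_event_examples, event_text):
    # Assemble a plain-text prompt asking the LLM to predict and justify demand.
    return (
        f"Baseline hourly demand on non-event days: {regular}\n"
        f"Past event days and their extra demand: {past_event_examples}\n"
        f"Upcoming event: {event_text}\n"
        "Predict the extra hourly taxi demand for the event day and justify each value."
    )
```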

    Chiral Brønsted acid-catalyzed enantioselective dehydrative Nazarov-type electrocyclization of aryl and 2-thienyl vinyl alcohols

    An efficient chiral Brønsted acid-catalyzed enantioselective dehydrative Nazarov-type electrocyclization (DNE) of electron-rich aryl- and 2-thienyl-β-amino-2-en-1-ols is described. The 4π conrotatory electrocyclization affords access to a wide variety of the corresponding 1H-indenes and 4H-cyclopenta[b]thiophenes in excellent yields of up to 99% and enantiomeric excess (ee) values of up to 99%. Experimental and computational studies, based on a proposed intimate contact ion-pair species that is further assisted by hydrogen bonding between the amino group of the substrate cation and the chiral catalyst anion, provide insight into the observed product enantioselectivities.
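    For reference, the enantiomeric excess values quoted above follow the standard definition, where [R] and [S] denote the amounts of the two enantiomers:

```latex
% Standard definition of enantiomeric excess (ee).
\[
  \mathrm{ee} \;=\; \frac{\lvert [R] - [S] \rvert}{[R] + [S]} \times 100\%
\]
```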

    A Unified Framework for Multi-Domain CTR Prediction via Large Language Models

    Click-Through Rate (CTR) prediction is a crucial task in online recommendation platforms, as it involves estimating the probability that a user engages with an advertisement or item by clicking on it. Given the availability of various services such as online shopping, ride-sharing, food delivery, and professional services on commercial platforms, recommendation systems on these platforms need to make CTR predictions across multiple domains rather than a single domain. However, multi-domain click-through rate (MDCTR) prediction remains a challenging task in online recommendation due to the complex mutual influence between domains. Traditional MDCTR models typically encode domains as discrete identifiers, ignoring the rich semantic information underlying them; consequently, they can hardly generalize to new domains. In addition, existing models can easily be dominated by a few specific domains, which results in significant performance drops in the other domains (i.e., the "seesaw phenomenon"). In this paper, we propose Uni-CTR, a novel solution to the above challenges. Uni-CTR leverages a backbone Large Language Model (LLM) to learn layer-wise semantic representations that capture commonalities between domains, and uses several domain-specific networks to capture the characteristics of each domain. We design a masked loss strategy so that these domain-specific networks are decoupled from the backbone LLM. This allows the domain-specific networks to remain unchanged when domains are added or removed, thereby significantly enhancing the flexibility and scalability of the system. Experimental results on three public datasets show that Uni-CTR significantly outperforms state-of-the-art (SOTA) MDCTR models. Furthermore, Uni-CTR demonstrates remarkable effectiveness in zero-shot prediction. We have applied Uni-CTR in industrial scenarios, confirming its efficiency. Comment: submitted to TOI
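    The masked-loss decoupling described above can be illustrated with a short, hypothetical PyTorch sketch. This is not the authors' Uni-CTR implementation; the small shared MLP merely stands in for the LLM backbone, and all shapes and names are illustrative.

```python
# Hypothetical sketch of a masked multi-domain CTR loss: each sample's loss is
# routed only to the tower of its own domain, so adding or removing a tower does
# not disturb the others.
import torch
import torch.nn as nn

class MultiDomainCTR(nn.Module):
    def __init__(self, feat_dim, hidden, num_domains):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.towers = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(num_domains))

    def forward(self, x):
        h = self.backbone(x)                                   # shared representation
        return torch.stack([t(h).squeeze(-1) for t in self.towers], dim=1)  # (B, D)

def masked_bce(logits, labels, domain_ids):
    """logits: (B, D); labels: (B,) in {0,1}; domain_ids: (B,) long tensor."""
    per_domain = nn.functional.binary_cross_entropy_with_logits(
        logits, labels.float().unsqueeze(1).expand_as(logits), reduction="none"
    )
    mask = nn.functional.one_hot(domain_ids, logits.size(1)).float()
    return (per_domain * mask).sum() / mask.sum()  # average over the "own-domain" entries
```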

    Proton pump inhibitor-induced fungal dysbiosis in patients with gastroesophageal reflux disease

    The gut mycobiota inhabits the human gastrointestinal lumen and plays a role in human health and disease. We investigated the influence of proton pump inhibitors (PPIs) on the gastric mucosal and fecal mycobiota in patients with gastroesophageal reflux disease (GERD) using Internal Transcribed Spacer 1 sequencing. A total of 65 participants were included, comprising a healthy control (HC) group, GERD patients who did not use PPIs (nt-GERD), and GERD patients who used PPIs, with the latter further divided into short-term (s-PPI) and long-term (l-PPI) groups based on the duration of PPI use. The alpha and beta diversity of the gastric mucosal mycobiota in GERD patients using PPIs differed significantly from HCs, but there were no differences between the s-PPI and l-PPI groups. LEfSe analysis identified Candida at the genus level as a biomarker for the s-PPI group compared with the nt-GERD group, while Candida, Nothojafnea, Rhizodermea, Ambispora, and Saccharicola were more abundant in the l-PPI group than in the nt-GERD group. Furthermore, colonization of Candida in the gastric mucosa increased significantly after PPI treatment. However, there was no significant difference in Candida colonization between patients with endoscopic esophageal mucosal breaks and those without. There were significant differences in fecal mycobiota composition between HCs and GERD patients regardless of whether they used PPIs. Compared with nt-GERD patient samples, Alternaria, Aspergillus, Mycenella, Exserohilum, and Clitopilus were highly abundant in the s-PPI group. In addition, Alternaria, Aspergillus, Podospora, Phallus, and Monographella were significantly more abundant in the l-PPI group than in nt-GERD patients. In conclusion, our study indicates that dysbiosis was present in both the gastric mucosal and fecal mycobiota of GERD patients, and that PPI treatment may increase the colonization of Candida in the gastric mucosa of GERD patients.
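    As background for the alpha-diversity comparison mentioned above, one commonly used index is the Shannon index. The minimal sketch below (not the study's pipeline) computes it from per-sample genus counts; the example numbers are invented.

```python
# Minimal illustration of one common alpha-diversity measure, the Shannon index.
import math

def shannon_index(counts):
    """counts: iterable of read counts per taxon in one sample."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# A sample dominated by one genus (e.g., Candida) has lower diversity than an even one.
print(shannon_index([90, 5, 3, 2]))     # ~0.43
print(shannon_index([25, 25, 25, 25]))  # ~1.39
```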

    Synergy between CSST galaxy survey and gravitational-wave observation: Inferring the Hubble constant from dark standard sirens

    Gravitational waves (GWs) from compact binary coalescences encode the absolute luminosity distances of GW sources. Once the redshifts of GW sources are known, one can use the distance-redshift relation to constrain cosmological parameters. One way to obtain the redshifts is to localize GW sources with GW observations and then use galaxy catalogs to determine redshifts from a statistical analysis of the potential host galaxies, commonly referred to as the dark siren method. The third-generation (3G) GW detectors are planned to begin operating in the 2030s and will observe numerous compact binary coalescences. Using these GW events as dark sirens requires high-quality galaxy catalogs from future sky survey projects. The China Space Station Telescope (CSST) will be launched in 2024 and will observe billions of galaxies within a 17,500 deg^2 survey area out to redshift z ~ 4, providing photometric and spectroscopic galaxy catalogs. In this work, we simulate the CSST galaxy catalogs and 5-year GW data from the 3G GW detectors and combine them to infer the Hubble constant (H_0). Our results show that the measurement precision of H_0 could reach the sub-percent level, meeting the standard of precision cosmology. We conclude that the synergy between CSST and the 3G GW detectors is of great significance for measuring the Hubble constant. Comment: 13 pages, 5 figures
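    The dark siren logic sketched above can be illustrated with a toy calculation: compare the GW-measured luminosity distance with the distances implied by candidate host redshifts over a grid of trial H_0 values. The code below is purely illustrative; the flat-ΛCDM parameters, Gaussian distance likelihood, equal host weighting, and all numbers are simplifying assumptions, not the paper's analysis.

```python
# Toy dark-siren H0 inference: weight each trial H0 by how well it reconciles a
# GW luminosity distance with the redshifts of candidate host galaxies.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458
OMEGA_M = 0.3

def lum_dist(z, h0):
    """Luminosity distance in Mpc for a flat LambdaCDM cosmology."""
    integ, _ = quad(lambda zp: 1.0 / np.sqrt(OMEGA_M * (1 + zp) ** 3 + 1 - OMEGA_M), 0, z)
    return (1 + z) * C_KM_S / h0 * integ

def h0_posterior(dl_obs, dl_sigma, host_redshifts, h0_grid):
    post = []
    for h0 in h0_grid:
        # Marginalize over potential hosts (equal weights for simplicity).
        like = np.mean([
            np.exp(-0.5 * ((lum_dist(z, h0) - dl_obs) / dl_sigma) ** 2)
            for z in host_redshifts
        ])
        post.append(like)
    post = np.array(post)
    return post / np.trapz(post, h0_grid)

grid = np.linspace(50, 90, 81)
p = h0_posterior(dl_obs=430.0, dl_sigma=40.0, host_redshifts=[0.09, 0.10, 0.11], h0_grid=grid)
print("most probable H0:", grid[np.argmax(p)])
```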

    Cosmology with fast radio bursts in the era of SKA

    We present a forecast of cosmological parameter estimation using fast radio bursts (FRBs) from the upcoming Square Kilometre Array (SKA), focusing on dark energy, the Hubble constant, and the baryon density. We simulate 10^5 and 10^6 localized FRBs from a 10-year SKA observation and find that: (i) 10^6 FRBs alone can constrain the dark-energy equation-of-state parameters more tightly than CMB+BAO+SN, providing a single cosmological probe for exploring dark energy; (ii) combining the FRB data with gravitational-wave standard siren data from a 10-year observation with the Einstein Telescope constrains the Hubble constant to the sub-percent level, serving as a powerful low-redshift probe; (iii) 10^6 FRBs can constrain the baryon density Ω_b h to a precision of ~0.1%. Our results indicate that SKA-era FRBs will provide precise cosmological measurements that shed light on both dark energy and the missing baryon problem, and help resolve the Hubble tension. Comment: 16 pages, 6 figures
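    The Ω_b h constraint quoted above rests on the mean dispersion measure-redshift relation of the intergalactic medium; one common form (the Macquart relation), with f_IGM the baryon fraction in the IGM and χ(z) the ionized electron fraction, is:

```latex
% Mean IGM dispersion measure as a function of redshift (Macquart relation).
\[
  \langle \mathrm{DM_{IGM}}(z) \rangle
  = \frac{3 c\, \Omega_{\rm b} H_0 f_{\rm IGM}}{8 \pi G m_p}
    \int_0^z \frac{\chi(z')\,(1+z')}{E(z')}\, \mathrm{d}z',
  \qquad
  E(z) = \sqrt{\Omega_{\rm m}(1+z)^3 + \Omega_\Lambda}.
\]
```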

    Fast radio burst energy function in the presence of DM_host variation

    Fast radio bursts (FRBs) have been detected in great numbers, but the physical mechanism of these sources is still a mystery. The redshift evolution of the FRB energy distribution function and of the volumetric rate sheds light on the origin of FRBs. However, such estimates rely on the dispersion measure (DM)-redshift (z) relation. A few recently detected FRBs show a large excess DM beyond the expected cosmological and Milky Way contributions, which indicates a large spread in the DM contributed by their host galaxies. In this work, we adopt a lognormally distributed DM_host model and estimate the energy function using the non-repeating FRBs selected from the Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB Catalog 1. Comparing the lognormal DM_host model with the constant DM_host model, the FRB energy function results are consistent within the measurement uncertainty. We also estimate the volumetric rate of the non-repeating FRBs in three redshift bins. The volumetric rate shows a trend consistent with the redshift evolution of the stellar-mass density. However, since the lognormal DM_host model increases the measurement errors, the inference that FRBs track the stellar-mass density is weakened. Comment: 8 pages, 5 figures
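    For concreteness, the standard DM budget and the lognormal host term assumed in such analyses can be written as follows, where μ and σ are the log-space location and scale parameters of the host contribution:

```latex
% DM budget of an observed FRB and the lognormal PDF of the host-galaxy term.
\[
  \mathrm{DM_{obs}} = \mathrm{DM_{MW}} + \mathrm{DM_{IGM}}(z) + \frac{\mathrm{DM_{host}}}{1+z},
  \qquad
  p(\mathrm{DM_{host}})
  = \frac{1}{\mathrm{DM_{host}}\,\sigma\sqrt{2\pi}}
    \exp\!\left[-\frac{(\ln \mathrm{DM_{host}} - \mu)^2}{2\sigma^2}\right].
\]
```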

    Diffusion Models for Reinforcement Learning: A Survey

    Diffusion models surpass previous generative models in sample quality and training stability. Recent works have shown the advantages of diffusion models in improving reinforcement learning (RL) solutions. This survey aims to provide an overview of this emerging field and hopes to inspire new avenues of research. First, we examine several challenges encountered by current RL algorithms. Then, we present a taxonomy of existing methods based on the roles diffusion models play in RL and discuss how these challenges are addressed. We further outline successful applications of diffusion models in various RL-related tasks. Finally, we conclude the survey and offer insights into future research directions. We are actively maintaining a GitHub repository of papers and other resources on diffusion models in RL: https://github.com/apexrl/Diff4RLSurvey
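    As a generic illustration of one usage pattern covered by such surveys, a diffusion model can generate an action sequence by iterative denoising. The sketch below is a bare DDPM-style sampling loop with a dummy noise model; it is not code from any specific surveyed method, and all shapes and schedule values are assumptions.

```python
# Generic DDPM-style reverse sampling of an action sequence, as used by several
# diffusion-based planners/policies. `eps_model` stands in for a trained
# noise-prediction network.
import torch

def sample_actions(eps_model, shape, n_steps=50, beta_start=1e-4, beta_end=0.02):
    betas = torch.linspace(beta_start, beta_end, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(n_steps)):
        eps = eps_model(x, torch.tensor([t]))  # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # denoised sample, e.g. a (horizon, action_dim) action sequence

# Usage with a dummy noise model that predicts zero noise:
dummy = lambda x, t: torch.zeros_like(x)
actions = sample_actions(dummy, shape=(16, 4))
```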