26 research outputs found

    Generating Personalized Recommendations via Large Language Models (LLMs)

    Personalized recommendations in many applications and websites are generated using techniques such as collaborative filtering, content-based filtering, and reinforcement learning. These are task-specific approaches. Large language models (LLMs) can generate predictions when primed with specific input, without any task-specific model tuning. However, LLMs have not been applied to personalized recommendations because their maximum input size is smaller than the typical size of the user histories used to personalize recommendations. This disclosure describes techniques to obtain personalized recommendations from LLMs by automatically augmenting a user command or query with relevant text phrases about the user. A set of relevant phrases that fits within the input limit of the LLM is extracted from a collection of phrases, drawn from relevant historical and contextual information sources, using embeddings generated from the user command or query. Implementation of the techniques can improve the relevance and utility of personalized recommendations and can lead to increased user engagement with the recommended content.
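
    A minimal sketch of the augmentation idea described above, under stated assumptions: candidate phrases about the user are ranked by embedding similarity to the query and greedily packed into the LLM's input budget. The `embed` and `count_tokens` helpers are hypothetical stand-ins, not APIs from the disclosure.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding; a real system would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def count_tokens(text: str) -> int:
    # Rough proxy; a real system would use the LLM's own tokenizer.
    return len(text.split())

def build_personalized_prompt(query: str, history_phrases: list[str],
                              max_input_tokens: int = 512) -> str:
    # Rank phrases by cosine similarity to the query embedding.
    q = embed(query)
    ranked = sorted(history_phrases,
                    key=lambda p: float(np.dot(embed(p), q)), reverse=True)
    # Greedily add the most relevant phrases until the input budget is used up.
    budget = max_input_tokens - count_tokens(query)
    selected = []
    for phrase in ranked:
        cost = count_tokens(phrase)
        if cost <= budget:
            selected.append(phrase)
            budget -= cost
    context = "; ".join(selected)
    return f"User context: {context}\nRequest: {query}"

print(build_personalized_prompt(
    "Recommend a podcast for my commute",
    ["listens to history podcasts", "commutes 45 minutes by train",
     "recently searched for Roman history books"]))
```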

    MEGA: Multilingual Evaluation of Generative AI

    Generative AI models have shown impressive performance on many Natural Language Processing (NLP) tasks such as language understanding, reasoning, and language generation. An important question being asked by the AI community today concerns the capabilities and limits of these models, and it is clear that evaluating generative AI is very challenging. Most studies of generative LLMs have been restricted to English, and it is unclear how capable these models are at understanding and generating text in other languages. We present MEGA, the first comprehensive benchmarking of generative LLMs, which evaluates models on standard NLP benchmarks covering 16 NLP datasets across 70 typologically diverse languages. We compare the performance of generative LLMs, including ChatGPT and GPT-4, to state-of-the-art (SOTA) non-autoregressive models on these tasks to determine how well generative models perform relative to the previous generation of LLMs. We present a thorough analysis of model performance across languages and tasks and discuss challenges in improving the performance of generative LLMs on low-resource languages. We create a framework for evaluating generative LLMs in the multilingual setting and provide directions for future progress in the field. (Comment: EMNLP 2023)
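
    A hedged sketch of a multilingual evaluation loop in the spirit of the benchmark described above: prompt a generative model on each language's test set and report per-language accuracy. The `generate` stub, the prompt template, and the dataset layout are assumptions for illustration, not the paper's actual harness.

```python
def generate(prompt: str) -> str:
    # Stand-in for a call to a generative LLM (e.g. ChatGPT or GPT-4 via an API).
    return "positive"

def evaluate(dataset_by_lang: dict[str, list[tuple[str, str]]]) -> dict[str, float]:
    scores = {}
    for lang, examples in dataset_by_lang.items():
        correct = 0
        for text, gold_label in examples:
            prompt = (f"Classify the sentiment of the following {lang} text "
                      f"as positive or negative.\nText: {text}\nLabel:")
            pred = generate(prompt).strip().lower()
            correct += (pred == gold_label)
        # Per-language accuracy, to surface gaps on low-resource languages.
        scores[lang] = correct / len(examples)
    return scores

print(evaluate({"sw": [("Filamu hii ni nzuri sana.", "positive")],
                "hi": [("यह फिल्म बहुत अच्छी है।", "positive")]}))
```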

    Leveraging Large Language Models in Conversational Recommender Systems

    A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an unprecedented ability to converse naturally and to incorporate world knowledge and common-sense reasoning into language understanding, unlocking the potential of this paradigm. However, effectively leveraging LLMs within a CRS introduces new technical challenges, including properly understanding and controlling a complex conversation and retrieving from external sources of information. These issues are exacerbated by a large, evolving item corpus and a lack of conversational data for training. In this paper, we provide a roadmap for building an end-to-end large-scale CRS using LLMs. In particular, we propose new implementations for user preference understanding, flexible dialogue management, and explainable recommendations as part of an integrated architecture powered by LLMs. For improved personalization, we describe how an LLM can consume interpretable natural-language user profiles and use them to modulate session-level context. To overcome conversational data limitations in the absence of an existing production CRS, we propose techniques for building a controllable LLM-based user simulator to generate synthetic conversations. As a proof of concept, we introduce RecLLM, a large-scale CRS for YouTube videos built on LaMDA, and demonstrate its fluency and diverse functionality through illustrative example conversations.
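
    An illustrative sketch, not the RecLLM implementation, of one idea from the abstract: feeding an interpretable natural-language user profile plus the session's dialogue history and candidate items into an LLM to produce an explainable recommendation. The `call_llm` stub and the prompt format are assumptions.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a call to an LLM such as LaMDA.
    return "Try 'Intro to Baking' because you said you enjoy hands-on tutorials."

def recommend(user_profile: str, dialogue: list[str], candidates: list[str]) -> str:
    # The natural-language profile modulates the session-level context.
    prompt = (
        "You are a conversational recommender for videos.\n"
        f"User profile: {user_profile}\n"
        "Conversation so far:\n" + "\n".join(dialogue) + "\n"
        "Candidate videos: " + ", ".join(candidates) + "\n"
        "Recommend one candidate and explain why, referencing the profile."
    )
    return call_llm(prompt)

print(recommend(
    "Enjoys hands-on tutorials; prefers short videos under 10 minutes.",
    ["User: I'm bored, what should I watch?"],
    ["Intro to Baking", "History of Jazz", "Marathon Training Guide"]))
```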

    Role of whole-brain computed tomography perfusion in head injury patients to predict outcome

    Purpose: To evaluate the utility, pattern, and extent of perfusion abnormalities in traumatic brain injury using whole-brain computed tomography perfusion (CTP) and to assess the clinical correlation of CTP data with the Glasgow Outcome Score (GOS). Materials and Methods: A prospective analytic evaluation of traumatic head injury patients who were immediately taken up for CTP was performed. Patients' demographic, clinical, and radiological findings were tabulated and analyzed. GOS was measured 3 months after trauma by a neurosurgeon who was blinded to the CTP results. Results: Of the 78 patients included in this study, 28 had GOS 5, 19 had GOS 4, 27 had GOS 3, and 4 had GOS 2. Higher mean cerebral blood flow (CBF) and cerebral blood volume (CBV) values were observed in patients with a better GOS (4 or 5), whereas those with GOS ≤3 had lower mean CBF and CBV values. Conclusion: A statistically significant positive correlation was found between the cerebral perfusion parameters and GOS. CBF of the frontal area showed a better correlation with GOS, and CBF was the most important predictor among all the perfusion parameters.

    Salt-assisted growth of monolayer MoS2 for high-performance hysteresis-free field-effect transistor

    Atomically thin layered materials such as MoS2 have versatile future applications in low-power electronics. Here, we demonstrate salt-assisted growth of large-scale, high-quality monolayer MoS2 toward the realization of a high-performance hysteresis-free field-effect transistor (FET). Density functional theory calculations are implemented to examine the effects of the Schottky barrier and metal-induced gap states between our metal electrodes and MoS2 for achieving high carrier transport. The roles of adsorbed molecules and oxide traps in the hysteresis are studied in detail. For the first time, hysteresis-free intrinsic transistor behavior is obtained by an amplitude-sweep pulsed I-V measurement with varying pulse widths. Under this condition, a significant enhancement of the field-effect mobility, up to 30 cm² V⁻¹ s⁻¹, is achieved. Moreover, to correlate these results, a single-pulse time-domain drain-current analysis is carried out to reveal the fast and slow transient charge-trapping phenomena. Our findings of a hysteresis-free transfer characteristic and high intrinsic field-effect mobility in salt-assisted monolayer MoS2 FETs will be beneficial for future device applications in complex memory, logic, and sensor systems.
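
    As a hedged illustration of how a field-effect mobility figure like the one quoted above is commonly extracted, the sketch below applies the standard linear-regime formula mu_FE = (L / (W * C_ox * V_DS)) * dI_D/dV_G to invented transfer-curve numbers; the device dimensions, capacitance, and currents are assumptions, not values from the paper.

```python
import numpy as np

L_ch = 5e-6    # channel length (m), assumed
W_ch = 10e-6   # channel width (m), assumed
C_ox = 1.2e-4  # gate capacitance per area (F/m^2), assumed (~300 nm SiO2)
V_ds = 0.1     # drain-source bias (V), assumed linear regime

# Hypothetical transfer-curve samples (V_G in V, I_D in A)
v_g = np.array([10.0, 20.0, 30.0, 40.0])
i_d = np.array([0.2e-6, 0.9e-6, 1.7e-6, 2.4e-6])

g_m = np.gradient(i_d, v_g)                  # transconductance dI_D/dV_G
mu_fe = (L_ch / (W_ch * C_ox * V_ds)) * g_m  # mobility in m^2 V^-1 s^-1
print("peak mu_FE = %.1f cm^2/Vs" % (mu_fe.max() * 1e4))
```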