    From Kepler to Newton: Explainable AI for Science Discovery

    The Observation-Hypothesis-Prediction-Experimentation loop has been practiced by researchers for years as the paradigm of scientific discovery. However, with the data explosion in both mega-scale and milli-scale scientific research, it has sometimes become very difficult to manually analyze the data and propose new hypotheses to drive the cycle of scientific discovery. In this paper, we discuss the role of Explainable AI in the scientific discovery process by demonstrating an Explainable-AI-based paradigm for science discovery. The key is to use Explainable AI to help derive data or model interpretations, hypotheses, and scientific discoveries or insights. We show how computational and data-intensive methodology can be seamlessly integrated with experimental and theoretical methodology for scientific research. To demonstrate the AI-based science discovery process, and to pay our respects to some of the greatest minds in human history, we show how Kepler's laws of planetary motion and Newton's law of universal gravitation can be rediscovered by (Explainable) AI from Tycho Brahe's astronomical observation data; the work of these scientists led the scientific revolution of the 16th and 17th centuries. This work also highlights the important role of Explainable AI (as compared to black-box AI) in science discovery: it can help humans prevent, or better prepare for, a possible future technological singularity, since science is about not only the know-how but also the know-why.
    Comment: Presented at ICML-AI4Science 2022
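    The rediscovery claim is easy to appreciate with a toy example. The sketch below (not the paper's code) recovers the exponent of Kepler's third law, T^2 = k a^3, by a log-log fit over orbital data; the planetary periods and semi-major axes are standard textbook values, not Tycho Brahe's raw observations, and the linear fit is only a stand-in for the paper's Explainable-AI machinery.

```python
# Minimal sketch (not the paper's code): recover the exponent in
# Kepler's third law, T^2 = k * a^3, from orbital data via a log-log fit.
# Periods T in years, semi-major axes a in AU (textbook values).
import numpy as np

planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
    "Saturn":  (29.46, 9.537),
}
T = np.array([t for t, _ in planets.values()])
a = np.array([s for _, s in planets.values()])

# Fit log T = p * log a + c; Kepler's third law predicts p = 3/2.
p, c = np.polyfit(np.log(a), np.log(T), deg=1)
print(f"fitted exponent p = {p:.3f} (Kepler's law predicts 1.5)")
```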

    OpenAGI: When LLM Meets Domain Experts

    Human intelligence has the remarkable ability to assemble basic skills into complex ones for solving complex tasks. This ability is equally important for Artificial Intelligence (AI); thus, we assert that in addition to developing large, comprehensive intelligent models, it is equally crucial to equip such models with the capability to harness various domain-specific expert models for complex task-solving in the pursuit of Artificial General Intelligence (AGI). Recent developments in Large Language Models (LLMs) have demonstrated remarkable learning and reasoning abilities, making them promising as controllers that select, synthesize, and execute external models to solve complex tasks. In this project, we develop OpenAGI, an open-source AGI research platform specifically designed to offer complex, multi-step tasks, accompanied by task-specific datasets, evaluation metrics, and a diverse range of extensible models. OpenAGI formulates complex tasks as natural language queries, which serve as input to the LLM. The LLM then selects, synthesizes, and executes models provided by OpenAGI to address the task. Furthermore, we propose a Reinforcement Learning from Task Feedback (RLTF) mechanism that uses the task-solving result as feedback to improve the LLM's task-solving ability. Thus, the LLM is responsible for synthesizing various external models to solve complex tasks, while RLTF provides feedback to improve its task-solving ability, enabling a feedback loop for self-improving AI. We believe that the paradigm of LLMs operating various expert models for complex task-solving is a promising approach toward AGI. To facilitate the community's long-term improvement and evaluation of AGI's ability, we open-source the code, benchmark, and evaluation methods of the OpenAGI project at https://github.com/agiresearch/OpenAGI.
    Comment: 18 pages, 6 figures, 7 tables
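    The controller pattern described above can be illustrated with a short, hypothetical sketch (the registry, model names, and planner stub below are placeholders, not OpenAGI's actual API): an LLM maps a natural-language task to an ordered list of expert models, which are then executed as a pipeline.

```python
# Hypothetical sketch of the LLM-as-controller pattern (not OpenAGI's API).
from typing import Callable, Dict, List

# Toy "expert model" registry; a real system would register vision/NLP models.
EXPERTS: Dict[str, Callable[[str], str]] = {
    "translate_de_en": lambda x: f"[English translation of: {x}]",
    "summarize":       lambda x: f"[summary of: {x}]",
}

def plan_with_llm(task: str) -> List[str]:
    """Stand-in for the planner: OpenAGI prompts an LLM with the task and
    the available models; here one plausible plan is hard-coded."""
    return ["translate_de_en", "summarize"]

def execute(task: str, data: str) -> str:
    output = data
    for name in plan_with_llm(task):
        output = EXPERTS[name](output)  # chain expert models in order
    return output

print(execute("Summarize this German report in English.", "Der Bericht ..."))
```

    In OpenAGI's actual loop, the quality of the final output would then be scored and fed back to the LLM planner via RLTF.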

    User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations

    Modern recommender systems utilize users' historical behaviors to generate personalized recommendations. However, these systems often lack user controllability, which diminishes user satisfaction and trust. Building on recent advances in explainable recommender systems that enhance users' understanding of recommendation mechanisms, we propose leveraging these advances to improve user controllability. In this paper, we present a user-controllable recommender system that seamlessly integrates explainability and controllability within a unified framework. By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system by interacting with these explanations. Furthermore, we introduce and assess two attributes of controllability in recommendation systems: the complexity of controllability and the accuracy of controllability. Experimental evaluations on the MovieLens and Yelp datasets substantiate the effectiveness of the proposed framework. Our experiments additionally demonstrate that offering users control options can potentially enhance the accuracy of future recommendations. Source code and data are available at https://github.com/chrisjtan/ucr.
    Comment: Accepted for presentation at the 26th European Conference on Artificial Intelligence (ECAI 2023)
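    The retrospective, counterfactual side of this idea can be sketched with a toy scorer and brute-force search (not the paper's method): find the smallest subset of a user's past interactions whose removal would change the top recommendation, which directly yields an "if you had not interacted with X, we would recommend Y" explanation.

```python
# Toy sketch of a counterfactual retrospective explanation (not the
# paper's algorithm): the smallest set of past interactions whose
# removal flips a toy recommender's top-1 item.
from itertools import combinations

history = ["action_movie_1", "action_movie_2", "comedy_1"]
candidates = ["action_movie_3", "comedy_2"]

def score(item, hist):
    # Toy scorer: count history items sharing the candidate's genre prefix.
    genre = item.split("_")[0]
    return sum(h.startswith(genre) for h in hist)

def top1(hist):
    return max(candidates, key=lambda item: score(item, hist))

def minimal_counterfactual():
    original = top1(history)
    for k in range(1, len(history) + 1):          # smallest edits first
        for removed in combinations(history, k):
            reduced = [h for h in history if h not in removed]
            if top1(reduced) != original:
                return removed, top1(reduced)
    return None, top1(history)

removed, new_rec = minimal_counterfactual()
print(f"Had you not interacted with {list(removed)}, "
      f"we would recommend {new_rec} instead of {top1(history)}.")
```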

    GenRec: Large Language Model for Generative Recommendation

    In recent years, large language models (LLMs) have emerged as powerful tools for diverse natural language processing tasks. However, their potential for recommender systems under the generative recommendation paradigm remains relatively unexplored. This paper presents GenRec, a novel LLM for generative recommendation that utilizes the expressive power of the LLM to directly generate the target item to recommend, rather than calculating a ranking score for each candidate item one by one as in traditional discriminative recommendation. GenRec uses the LLM's understanding ability to interpret context, learn user preferences, and generate relevant recommendations, leveraging the vast knowledge encoded in large language models to accomplish recommendation tasks. We first formulate specialized prompts to enhance the ability of the LLM to comprehend recommendation tasks. Subsequently, we use these prompts to fine-tune a LLaMA backbone on a dataset of user-item interactions, represented as textual data, to capture user preferences and item characteristics. Our research underscores the potential of LLM-based generative recommendation in revolutionizing the domain of recommender systems and offers a foundational framework for future explorations in this field. We conduct extensive experiments on benchmark datasets, and the results show that GenRec performs significantly better on large datasets.
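    The core recipe, formatting interaction histories as prompts whose target is the next item, can be sketched as a data-preparation step; the prompt template and item names below are illustrative assumptions, not GenRec's actual format.

```python
# Hypothetical sketch of the prompt-construction step (the template is
# illustrative, not GenRec's actual format): turn user-item interaction
# histories into (prompt, target) pairs on which a LLaMA-style model can
# be fine-tuned to generate the next item directly.

interactions = {
    "user_17": ["The Matrix", "Blade Runner", "Dune", "Arrival"],
    "user_42": ["Toy Story", "Up", "Coco", "Soul"],
}

def to_training_pair(user, items):
    history, target = items[:-1], items[-1]
    prompt = (f"{user} has watched: {', '.join(history)}. "
              "Recommend the next movie:")
    return {"prompt": prompt, "target": target}

pairs = [to_training_pair(u, items) for u, items in interactions.items()]
for pair in pairs:
    print(pair)

# At inference time the fine-tuned model decodes the recommended item
# name token by token, instead of scoring every candidate separately.
```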

    Efficient DS-UWB MUD Algorithm Using Code Mapping and RVM

    A hybrid multiuser detection (MUD) scheme for direct-sequence ultra-wideband (DS-UWB) systems, combining code mapping with wrong-code recognition based on the relevance vector machine (RVM), is developed to suppress multiple access interference (MAI) while remaining computationally efficient. The MAI suppression mechanism works in two steps. First, a code mapping, constructed as an optimal decision function, maps the candidate codes output by the matched filter into a feature space; simulation results show that, in this feature space, the error codes caused by MAI and the single-user mapped codes can be separated by a threshold related to the SNR of the receiver. Second, on the basis of the code mapping, an RVM is used to distinguish the wrong codes from the right ones, and the wrong codes are then corrected. Compared with traditional MUD approaches, the proposed method considerably improves the bit error ratio (BER) performance thanks to this MAI suppression mechanism. Simulation results also show that the proposed method approximately achieves the BER performance of optimal multiuser detection (OMD) while its computational complexity approximately equals that of the matched filter. Moreover, the proposed method is less sensitive to the number of users.
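    The two-stage pipeline can be caricatured under a deliberately simplified signal model (one strong interferer, toy parameters). In the sketch below, an SVM stands in for the RVM, since scikit-learn ships no relevance vector machine; the point is only the mechanism: error codes separate from correct codes in the mapped feature space, a kernel classifier flags them, and the flagged codes are flipped.

```python
# Schematic toy sketch (not the paper's implementation): matched-filter
# outputs are mapped to a feature where MAI-induced error codes separate
# from correct codes; a kernel classifier flags wrong codes to be flipped.
# An SVM stands in for the RVM (none ships with scikit-learn).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 2000
codes = rng.choice([-1.0, 1.0], size=n)        # desired user's BPSK codes
interferer = rng.choice([-1.0, 1.0], size=n)   # one strong interfering user
y = codes + 1.5 * interferer + rng.normal(scale=0.2, size=n)  # MF output

decisions = np.sign(y)                         # matched-filter hard decision
wrong = (decisions != codes).astype(int)       # ground-truth error labels

# "Code mapping": in this toy model, error codes cluster at small |y|
# (interferer opposed the code) and correct codes at large |y|.
features = np.abs(y).reshape(-1, 1)

clf = SVC(kernel="rbf").fit(features[:1000], wrong[:1000])

flagged = clf.predict(features[1000:])         # recognize wrong codes ...
corrected = np.where(flagged == 1, -decisions[1000:], decisions[1000:])

print("BER before correction:", np.mean(decisions[1000:] != codes[1000:]))
print("BER after correction: ", np.mean(corrected != codes[1000:]))
```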

    Counterfactual Collaborative Reasoning

    Causal reasoning and logical reasoning are two important types of reasoning ability for human intelligence. However, their relationship has not been extensively explored in the context of machine intelligence. In this paper, we explore how the two reasoning abilities can be jointly modeled to enhance both the accuracy and the explainability of machine learning models. More specifically, by integrating counterfactual reasoning and (neural) logical reasoning, we propose Counterfactual Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to improve performance. In particular, we use recommender systems as an example to show how CCR alleviates data scarcity, improves accuracy, and enhances transparency. Technically, we leverage counterfactual reasoning to generate "difficult" counterfactual training examples for data augmentation, which, together with the original training examples, enhance model performance. Since the augmented data are model-agnostic, they can be used to enhance any model, giving the technique wide applicability. In addition, while most existing data augmentation methods focus on "implicit data augmentation" over users' implicit feedback, our framework conducts "explicit data augmentation" over users' explicit feedback based on counterfactual logic reasoning. Experiments on three real-world datasets show that CCR achieves better performance than non-augmented and implicitly augmented models, and also improves model transparency by generating counterfactual explanations.
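    The "explicit data augmentation" idea can be sketched with a toy preference model (not CCR's neural logic network): minimally edit a user's history so that the model's prediction flips, then keep the edited example with the flipped label as a "difficult" counterfactual training point.

```python
# Minimal sketch (not CCR's model) of counterfactual data augmentation
# over explicit feedback: one-item edits to a history that flip a toy
# model's prediction become hard counterfactual training examples.

history = ["sci_fi_A", "sci_fi_B", "romance_A"]  # items the user rated highly
target = "sci_fi_C"                              # observed positive feedback

def predicts_like(hist, item):
    """Toy model: 'like' if most of the history shares the item's genre
    (the prefix before the final underscore)."""
    genre = item.rsplit("_", 1)[0]
    return sum(h.startswith(genre) for h in hist) > len(hist) / 2

augmented = []
for i in range(len(history)):
    edited = history[:i] + history[i + 1:]       # one-item counterfactual edit
    if predicts_like(edited, target) != predicts_like(history, target):
        # The minimal edit flips the prediction: keep it, with the
        # flipped label, as a hard counterfactual training example.
        augmented.append((edited, target, not predicts_like(history, target)))

print("original:      ", (history, target, predicts_like(history, target)))
for example in augmented:
    print("counterfactual:", example)
```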

    Life history traits of low-toxicity alternative bisphenol S on Daphnia magna with short breeding cycles: A multigenerational study

    Due to its relatively lower toxicity, bisphenol S (BPS) has become an alternative to the previously used bisphenol A. Nevertheless, the occurrence of BPS and its ecological impact have recently attracted increasing attention, because the toxicological effects of life-cycle or multigenerational BPS exposure on aquatic organisms remain poorly understood. Herein, Daphnia magna (D. magna) multigenerational bioassays spanning four generations (F0-F3), together with single-generation recovery in clean water (F1 and F3), were used to investigate the ecotoxicology of variable chronic BPS exposure. For both assays, four kinds of life-history traits (survival, reproduction, growth, and ecological behavior) were examined for each generation. After an 18-day exposure at a concentration of 200 μg/L, the survival rate of D. magna was less than 15 % for the F2 generation, whereas all individuals of the F3 generation died. With continuous exposure over four generations at an environmentally relevant BPS concentration (2 μg/L), inhibition of growth and development, prolonged sexual maturity, decreased offspring production, and decreased swimming activity were observed in the F3 generation. In particular, it was difficult for D. magna to return to normal levels of reproductive function, ecological behavior, and population health through a single-generation recovery in clean water. Hence, multigenerational exposure to low concentrations of BPS can adversely affect the population health of aquatic organisms with short breeding cycles, highlighting the need to assess the ecotoxicology of chronic BPS exposure for public health.