92 research outputs found

    Session-based Recommendation with Graph Neural Networks

    Full text link
    The problem of session-based recommendation aims to predict user actions based on anonymous sessions. Previous methods model a session as a sequence and estimate user representations alongside item representations to make recommendations. Though they achieved promising results, they are insufficient to obtain accurate user vectors in sessions and neglect complex transitions of items. To obtain accurate item embeddings and take complex transitions of items into account, we propose a novel method, Session-based Recommendation with Graph Neural Networks (SR-GNN for brevity). In the proposed method, session sequences are modeled as graph-structured data. Based on the session graph, GNNs can capture complex transitions of items, which are difficult to reveal with previous conventional sequential methods. Each session is then represented as the composition of the global preference and the current interest of that session using an attention network. Extensive experiments conducted on two real datasets show that SR-GNN consistently and evidently outperforms state-of-the-art session-based recommendation methods. Comment: 9 pages, 4 figures, accepted by the AAAI Conference on Artificial Intelligence (AAAI-19)
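A minimal sketch of the session-graph construction the abstract describes (illustrative only, not the authors' released code): a clicked-item sequence becomes a small directed graph over its unique items, summarized as degree-normalized outgoing and incoming adjacency matrices of the kind gated GNNs consume.

```python
import numpy as np

def session_graph(session):
    """Return (unique_items, A_out, A_in) for one clicked-item sequence."""
    items = sorted(set(session))
    idx = {item: i for i, item in enumerate(items)}
    n = len(items)
    a = np.zeros((n, n))
    for u, v in zip(session, session[1:]):   # each observed transition u -> v
        a[idx[u], idx[v]] = 1.0
    out_deg = a.sum(axis=1, keepdims=True)   # normalize by out-degree
    in_deg = a.sum(axis=0, keepdims=True)    # normalize by in-degree
    a_out = np.divide(a, out_deg, out=np.zeros_like(a), where=out_deg > 0)
    a_in = np.divide(a, in_deg, out=np.zeros_like(a), where=in_deg > 0)
    return items, a_out, a_in

# Item 5 is clicked twice, so its two outgoing transitions each get weight 0.5.
items, a_out, a_in = session_graph([5, 2, 5, 7])
```

Repeated items collapse into one node, which is exactly how a session graph can encode transitions that a flat sequence model would only see as separate steps.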

    A Review of Researches on Blockchain

    Get PDF
    This paper analyzes 242 articles on blockchain published in China and abroad from 2014 to 2016 and, drawing on literature sources, research subjects, research methods, and comparison with western countries, puts forward a basic framework for classifying blockchain research. It summarizes current blockchain technology progress, research limitations, and future development trends. The review shows that domestic research on blockchain is decentralized and non-systematic, has not reached sufficient research depth, and lacks quantitative analysis. Digital currency, Internet finance, and the risks of blockchain technology will be the focus of future research.

    TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation

    Full text link
    Session-based recommendation nowadays plays a vital role in many websites, aiming to predict users' actions based on anonymous sessions. Many studies have emerged that model a session as a sequence or a graph by investigating temporal transitions of items in a session. However, these methods compress a session into one fixed representation vector without considering the target items to be predicted. The fixed vector restricts the representation ability of the recommender model, given the diversity of target items and users' interests. In this paper, we propose a novel target attentive graph neural network (TAGNN) model for session-based recommendation. In TAGNN, target-aware attention adaptively activates different user interests with respect to varied target items. The learned interest representation vector varies with different target items, greatly improving the expressiveness of the model. Moreover, TAGNN harnesses the power of graph neural networks to capture rich item transitions in sessions. Comprehensive experiments conducted on real-world datasets demonstrate its superiority over state-of-the-art methods. Comment: 5 pages, accepted to SIGIR 2020, authors' version
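The target-aware attention idea can be sketched as follows (a hedged illustration in the spirit of TAGNN, not the authors' code): instead of one fixed session vector, each candidate target item induces its own attention weights over the session's item embeddings, yielding a target-specific interest vector.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def target_attentive_interest(session_emb, target_emb):
    """session_emb: (n_items, d); target_emb: (n_targets, d).
    Returns one interest vector per candidate target, shape (n_targets, d)."""
    scores = target_emb @ session_emb.T          # (n_targets, n_items)
    alpha = softmax(scores, axis=1)              # attention weights per target
    return alpha @ session_emb                   # target-specific interests

rng = np.random.default_rng(0)
s = rng.normal(size=(4, 8))     # 4 session items, embedding dim 8
t = rng.normal(size=(3, 8))     # 3 candidate target items
interest = target_attentive_interest(s, t)      # shape (3, 8)
```

Because the interest vector varies with the target, diverse candidate items no longer compete for a single compressed session representation.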

    Improving Molecular Pretraining with Complementary Featurizations

    Full text link
    Molecular pretraining, which learns molecular representations over massive unlabeled data, has become a prominent paradigm for solving a variety of tasks in computational chemistry and drug discovery. Recently, prosperous progress has been made in molecular pretraining with different molecular featurizations, including 1D SMILES strings, 2D graphs, and 3D geometries. However, the role of molecular featurizations, with their corresponding neural architectures, in molecular pretraining remains largely unexamined. In this paper, through two case studies -- chirality classification and aromatic ring counting -- we first demonstrate that different featurization techniques convey chemical information differently. In light of this observation, we propose a simple and effective MOlecular pretraining framework with COmplementary featurizations (MOCO). MOCO comprehensively leverages multiple featurizations that complement each other and outperforms existing state-of-the-art models that rely solely on one or two featurizations on a wide range of molecular property prediction tasks. Comment: 24 pages, work in progress
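A toy, hypothetical sketch of the "complementary featurizations" idea (names and the fusion rule are illustrative assumptions, not MOCO's actual API): per-view molecule representations, e.g. from separate SMILES, 2D-graph, and 3D encoders, are fused with learned mixing weights.

```python
import numpy as np

def fuse_views(view_reprs, mixing_logits):
    """view_reprs: list of (d,) vectors, one per featurization.
    mixing_logits: (n_views,) learnable scores; softmax gives the mixture."""
    w = np.exp(mixing_logits - np.max(mixing_logits))
    w = w / w.sum()
    # Weighted sum of the views: complementary information is pooled,
    # and training can up-weight whichever view carries the signal.
    return sum(wi * v for wi, v in zip(w, view_reprs))

# Equal logits -> equal weights -> elementwise mean of the two views.
z = fuse_views([np.ones(4), np.zeros(4)], np.array([0.0, 0.0]))
```

The point of such a fusion is that a task like chirality classification can lean on the 3D view while ring counting leans on the 2D graph, without committing the whole model to one featurization.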

    SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models

    Full text link
    Recent advances in large language models (LLMs) have demonstrated notable progress on many mathematical benchmarks. However, most of these benchmarks only feature problems grounded in junior and senior high school subjects, contain only multiple-choice questions, and are confined to a limited scope of elementary arithmetic operations. To address these issues, this paper introduces an expansive benchmark suite, SciBench, that aims to systematically examine the reasoning capabilities required for complex scientific problem solving. SciBench contains two carefully curated datasets: an open set featuring a range of collegiate-level scientific problems drawn from mathematics, chemistry, and physics textbooks, and a closed set comprising problems from undergraduate-level exams in computer science and mathematics. Based on the two datasets, we conduct an in-depth benchmark study of two representative LLMs with various prompting strategies. The results reveal that current LLMs fall short of delivering satisfactory performance, with an overall score of merely 35.80%. Furthermore, through a detailed user study, we categorize the errors made by LLMs into ten problem-solving abilities. Our analysis indicates that no single prompting strategy significantly outperforms the others, and some strategies that demonstrate improvements in certain problem-solving skills result in declines in other skills. We envision that SciBench will catalyze further developments in the reasoning abilities of LLMs, thereby ultimately contributing to scientific research and discovery. Comment: Work in progress, 18 pages
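Grading open-ended scientific answers of the kind SciBench contains typically means comparing a model's free-form numeric answer to the reference within a tolerance. The sketch below is an assumed scoring scheme for illustration (the tolerance and handling of zero references are my choices, not necessarily SciBench's released protocol):

```python
def is_correct(predicted, reference, rel_tol=0.05):
    """Accept a numeric answer within a relative tolerance of the reference."""
    if reference == 0:
        return abs(predicted) <= rel_tol  # fall back to absolute tolerance
    return abs(predicted - reference) / abs(reference) <= rel_tol

def overall_score(pairs, rel_tol=0.05):
    """pairs: list of (predicted, reference); returns accuracy in percent."""
    hits = sum(is_correct(p, r, rel_tol) for p, r in pairs)
    return 100.0 * hits / len(pairs)

score = overall_score([(9.9, 10.0), (3.0, 10.0), (0.0, 0.0)])
```

Relative-tolerance grading is what lets a benchmark accept answers computed by different but valid solution paths, rather than exact string matching as in multiple-choice suites.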

    D-efficient or deficient? A robustness analysis of stated choice experimental designs

    Get PDF
    This paper is motivated by the increasing popularity of efficient designs for stated choice experiments. The objective in efficient designs is to create a stated choice experiment that minimizes the standard errors of the estimated parameters. In order to do so, such designs require specifying prior values for the parameters to be estimated. While there is significant literature demonstrating the efficiency improvements (and cost savings) of employing efficient designs, the bulk of the literature tests conditions where the priors used to generate the efficient design are assumed to be accurate. However, there is substantially less literature comparing how different design types perform under varying degrees of error in the prior. The literature that does exist assumes small fractions are used (e.g., under 20 unique choice tasks generated), which is in contrast to computer-aided surveys that readily allow for large fractions. Further, the results in the literature are abstract in that there is no reference point (i.e., meaningful units) to provide clear insight into the magnitude of any issue. Our objective is to analyze the robustness of different designs within a typical stated choice experiment context of a trade-off between price and quality. We use as an example transportation mode choice, where the key parameter to estimate is the value of time (VOT). Within this context, we test many designs to examine how robust efficient designs are against a misspecification of the prior parameters. The simple mode choice setting allows for insightful visualizations of the designs themselves and also an interpretable reference point (VOT) for the range in which each design is robust.
Not surprisingly, the D-efficient design is most efficient in the region where the true population VOT is near the prior used to generate the design: the prior is $20/h and the efficient range is $10–$30/h. However, the D-efficient design quickly becomes the most inefficient outside of this range (under $5/h and above $40/h), and the estimation significantly degrades above $50/h. The orthogonal and random designs are robust for a much larger range of VOT. The robustness of Bayesian efficient designs varies depending on the variance that the prior assumes. Implementing two-stage designs that first use a small sample to estimate priors is also not robust relative to uninformative designs. Arguably, the random design (which is the easiest to generate) performs as well as any design, and it (as well as any design) will perform even better if data cleaning is done to remove choice tasks where one alternative dominates the other. Keywords: Stated choice experiments, Robustness, Mode choice model, Value-of-time, Experimental design, D-efficient
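The D-error criterion behind "efficient" designs can be sketched with the standard binary-logit formula (a textbook illustration, not the paper's code; the attribute values and prior below are hypothetical): lower D-error means smaller expected standard errors under the assumed prior parameters, which is exactly why a wrong prior can make the design perform badly.

```python
import numpy as np

def d_error(design_diffs, beta_prior):
    """design_diffs: (T, K) attribute differences between the two alternatives
    for each of T choice tasks; beta_prior: (K,) assumed prior parameters.
    Returns det(I^-1)^(1/K), the D-error of the design at that prior."""
    p = 1.0 / (1.0 + np.exp(-design_diffs @ beta_prior))   # P(choose alt 1)
    w = p * (1.0 - p)                                      # logit weights
    info = (design_diffs * w[:, None]).T @ design_diffs    # Fisher information
    k = len(beta_prior)
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / k)

# Example: tasks trade off cost ($) and time (min). A prior VOT of $20/h ties
# the two coefficients together: beta_time = beta_cost * 20/60 (both negative).
diffs = np.array([[2.0, -10.0], [1.0, -5.0], [3.0, -20.0]])
err = d_error(diffs, np.array([-0.1, -0.1 * 20.0 / 60.0]))
```

Because the weights w depend on the prior, the same set of choice tasks yields a different Fisher information, and hence a different D-error, at every candidate VOT, which is what the robustness analysis in the paper varies.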