
    Design issues in the production of hyper-books and visual-books

    This paper describes an ongoing research project in the area of electronic books. After a brief overview of the state of the art in this field, two new forms of electronic book are presented: hyper-books and visual-books. A flexible environment allows them to be produced in a semi-automatic way starting from different sources: electronic texts (as input for hyper-books) and paper books (as input for visual-books). The translation process is driven by the philosophy of preserving the book metaphor in order to guarantee that electronic information is presented in a familiar way. Another important feature of our research is that hyper-books and visual-books are conceived not as isolated objects but as entities within an electronic library, which inherits most of the features of a paper-based library but introduces a number of new properties resulting from its non-physical nature.

    Hierarchical Attention Network for Visually-aware Food Recommendation

    Food recommender systems play an important role in assisting users to identify the desired food to eat. Deciding what food to eat is a complex and multi-faceted process, which is influenced by many factors such as the ingredients, the appearance of the recipe, the user's personal preference for food, and various contexts like what has been eaten in past meals. In this work, we formulate the food recommendation problem as predicting user preference for recipes based on three key factors that determine a user's choice of food, namely, 1) the user's (and other users') history; 2) the ingredients of a recipe; and 3) the descriptive image of a recipe. To address this challenging problem, we develop a dedicated neural network-based solution, Hierarchical Attention based Food Recommendation (HAFR), which is capable of: 1) capturing the collaborative filtering effect, i.e., what similar users tend to eat; 2) inferring a user's preference at the ingredient level; and 3) learning user preference from the recipe's visual images. To evaluate our proposed method, we construct a large-scale dataset consisting of millions of ratings from AllRecipes.com. Extensive experiments show that our method outperforms several competing recommender solutions, such as Factorization Machine and Visual Bayesian Personalized Ranking, with an average improvement of 12%, offering promising results in predicting user preference for food. Code and dataset will be released upon acceptance.
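    The abstract does not spell out the model, but the core ingredient-level mechanism it names, attention-weighted pooling of item features against a user representation, can be sketched minimally. The embeddings, dimensions, and the two-level pooling below are illustrative assumptions, not the HAFR architecture itself:

    ```python
    import numpy as np

    def attention_pool(query, items):
        """Soft-attention pooling: weight item vectors by their
        compatibility with a query vector, then sum.

        query: (d,) user representation; items: (n, d) feature vectors.
        """
        scores = items @ query                    # (n,) compatibility scores
        weights = np.exp(scores - scores.max())   # numerically stable softmax
        weights /= weights.sum()
        return weights @ items                    # (d,) pooled representation

    rng = np.random.default_rng(0)
    user = rng.normal(size=8)                # hypothetical user embedding
    ingredients = rng.normal(size=(5, 8))    # hypothetical ingredient embeddings
    image = rng.normal(size=8)               # hypothetical recipe-image feature

    # Level 1: pool ingredients per recipe; level 2: fuse with the image.
    ingredient_repr = attention_pool(user, ingredients)
    recipe_repr = attention_pool(user, np.stack([ingredient_repr, image]))
    score = float(user @ recipe_repr)        # predicted preference score
    ```

    The hierarchy here is the two stacked pooling calls: attention first selects which ingredients matter to this user, then weighs the pooled ingredient signal against the visual signal.
    
    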

    Econometrics meets sentiment : an overview of methodology and applications

    The advent of massive amounts of textual, audio, and visual data has spurred the development of econometric methodology to transform qualitative sentiment data into quantitative sentiment variables, and to use those variables in an econometric analysis of the relationships between sentiment and other variables. We survey this emerging research field and refer to it as sentometrics, which is a portmanteau of sentiment and econometrics. We provide a synthesis of the relevant methodological approaches, illustrate with empirical results, and discuss useful software.
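    The two-step pipeline the abstract describes, first quantify sentiment from texts, then use the resulting variable in a regression, can be illustrated with a toy example. The lexicon, documents, and outcome values below are invented for illustration; real sentometrics work uses far richer sentiment computation and time-series aggregation:

    ```python
    import numpy as np

    # Tiny illustrative polarity lexicon (an assumption, not a real resource).
    POSITIVE = {"gain", "growth", "optimistic", "strong"}
    NEGATIVE = {"loss", "decline", "pessimistic", "weak"}

    def sentiment_score(text):
        """Step 1: qualitative text -> quantitative sentiment variable.

        Net polarity: (positive hits - negative hits) / token count.
        """
        tokens = text.lower().split()
        pos = sum(t in POSITIVE for t in tokens)
        neg = sum(t in NEGATIVE for t in tokens)
        return (pos - neg) / max(len(tokens), 1)

    docs = ["strong growth and optimistic outlook",
            "weak demand and a sharp decline",
            "growth offset by loss elsewhere"]
    s = np.array([sentiment_score(d) for d in docs])  # sentiment variable

    # Step 2: relate the sentiment variable to an outcome via OLS.
    y = np.array([0.8, -0.5, 0.1])                    # illustrative outcomes
    X = np.column_stack([np.ones_like(s), s])         # intercept + sentiment
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # [intercept, slope]
    ```

    The slope coefficient `beta[1]` is the kind of sentiment-to-outcome relationship the surveyed methodology estimates, at much larger scale and with proper inference.
    
    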

    Effects of captioning on video comprehension and incidental vocabulary learning

    This study examines how three types of captioning (i.e., on-screen text in the same language as the video) can assist L2 learners in the incidental acquisition of target vocabulary words and in the comprehension of L2 video. A sample of 133 Flemish undergraduate students watched three French clips twice. The control group (n = 32) watched the clips without captioning; the second group (n = 30) watched fully captioned clips; the third group (n = 34) watched keyword-captioned clips; and the fourth group (n = 37) watched fully captioned clips with highlighted keywords. Prior to the learning session, participants completed a vocabulary size test. During the learning session, they completed three comprehension tests; four vocabulary tests measuring (a) form recognition, (b) meaning recognition, (c) meaning recall, and (d) clip association, which assessed whether participants associated words with the corresponding clip; and a final questionnaire. Our findings reveal that the captioning groups scored equally well on form recognition and clip association and significantly outperformed the control group. Only the keyword captioning and full captioning with highlighted keywords groups outperformed the control group on meaning recognition. Captioning affected neither comprehension nor meaning recall. Participants' vocabulary size correlated significantly with their comprehension scores as well as with their vocabulary test scores.

    Conference Reports
