
    Knowledge Graph based Question and Answer System for Cosmetic Domain

    With the development of e-commerce, customers' requirements for products have become more detailed, and the workload of customer service consultants has increased massively. However, manufacturers are not obliged to list specific product ingredients on their websites. Therefore, it is necessary to construct a knowledge-base question answering (KBQA) system to relieve the pressure on online customer service and effectively help customers find suitable skincare products. In the cosmetic field, basic cosmetics may have different effects depending on their ingredients. In this paper, we use the CosDNA website and online cosmetic websites to construct a cosmetic product knowledge graph that links cosmetics, ingredients, skin types, and effects. We then build a question answering system on top of this knowledge graph, allowing users to understand product details directly and make purchase decisions quickly
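
    As a rough illustration only (the paper's actual schema and QA pipeline are not reproduced here), the sketch below stores hypothetical (subject, relation, object) triples about cosmetics and answers two template questions over them; every product, ingredient, and relation name is invented.

```python
# Minimal sketch of a triple-based cosmetic knowledge graph with template QA.
# All entities, relations, and products below are hypothetical examples.
from collections import defaultdict

class CosmeticKG:
    def __init__(self):
        # Index triples as: subject -> relation -> set of objects.
        self.triples = defaultdict(lambda: defaultdict(set))

    def add(self, subject, relation, obj):
        self.triples[subject][relation].add(obj)

    def query(self, subject, relation):
        return sorted(self.triples[subject][relation])

kg = CosmeticKG()
kg.add("Hydra Cream", "has_ingredient", "glycerin")
kg.add("Hydra Cream", "has_ingredient", "niacinamide")
kg.add("Hydra Cream", "suits_skin_type", "dry")
kg.add("niacinamide", "has_effect", "brightening")

def answer(question):
    # Tiny question templates standing in for a real intent classifier.
    if question.startswith("What ingredients are in "):
        product = question[len("What ingredients are in "):].rstrip("?")
        return kg.query(product, "has_ingredient")
    if question.startswith("What does ") and question.endswith(" do?"):
        entity = question[len("What does "):-len(" do?")]
        return kg.query(entity, "has_effect")
    return []

print(answer("What ingredients are in Hydra Cream?"))  # ['glycerin', 'niacinamide']
print(answer("What does niacinamide do?"))              # ['brightening']
```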

    Review of Intent Diversity in Information Retrieval: Approaches, Models and Trends

    The fast-increasing volume of information in databases makes it difficult for users to find the information they need, so it is important for researchers to find the best methods for addressing this problem. User intention detection can be used to increase the relevance of the information delivered by an information retrieval system. This research used a systematic mapping process to identify the areas, approaches, and models most often used to detect user intention in information retrieval over the past four years. The results show that the item-based approach is still the one most studied by researchers to identify intent diversity in information retrieval, and its use kept increasing from 2015 to 2017. Topic models were used in 34% of the papers, meaning that they remain the models most frequently explored by researchers in this area

    Entity Recommendation for Everyday Digital Tasks

    Recommender systems can support everyday digital tasks by retrieving and recommending useful information contextually. This is becoming increasingly relevant in services and operating systems. Previous research often focuses on specific recommendation tasks with data captured from interactions with an individual application. The quality of recommendations is also often evaluated using only computational measures of accuracy, without investigating the usefulness of recommendations in realistic tasks. The aim of this work is to synthesize the research in this area through a novel approach by (1) demonstrating comprehensive digital activity monitoring, (2) introducing entity-based computing and interaction, and (3) investigating the previously overlooked usefulness of entity recommendations and their actual impact on user behavior in real tasks. The methodology exploits context from screen frames recorded every 2 seconds to recommend information entities related to the current task. We embodied this methodology in an interactive system and investigated the relevance and influence of the recommended entities in a study with participants resuming their real-world tasks after a 14-day monitoring phase. Results show that the recommendations allowed participants to find more relevant entities than in a control condition without the system. In addition, the recommended entities were also used in the actual tasks. In the discussion, we reflect on a research agenda for entity recommendation in context, revisiting comprehensive monitoring to include the physical world, considering entities as actionable recommendations, capturing drifting intent and routines, and considering explainability and transparency of recommendations, ethics, and ownership of data
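
    To make the idea of recommending entities from screen context concrete, here is a minimal sketch that ranks previously seen entities by textual similarity to the text of the current screen; the TF-IDF/cosine scoring and all entity names are assumptions for illustration, not the models used in the study.

```python
# Minimal sketch of context-based entity recommendation: rank previously seen
# entities by textual similarity to the text extracted from the current screen.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical entities harvested from earlier screen frames, each with the
# text that surrounded it when it was seen.
entities = {
    "budget_q3.xlsx": "quarterly budget spreadsheet finance forecast",
    "Alice Smith": "email thread about the project kickoff meeting notes",
    "flight AY123": "travel booking helsinki conference itinerary",
}

def recommend(current_screen_text, top_k=2):
    names = list(entities)
    corpus = [entities[name] for name in names] + [current_screen_text]
    vectors = TfidfVectorizer().fit_transform(corpus)
    query_vec, entity_vecs = vectors[len(names)], vectors[:len(names)]
    scores = cosine_similarity(query_vec, entity_vecs).ravel()
    return sorted(zip(names, scores), key=lambda p: p[1], reverse=True)[:top_k]

print(recommend("writing the conference trip report and expense forecast"))
```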

    Understanding User Intent Modeling for Conversational Recommender Systems: A Systematic Literature Review

    Context: User intent modeling is a crucial process in Natural Language Processing that aims to identify the underlying purpose behind a user's request, enabling personalized responses. With a vast array of approaches introduced in the literature (over 13,000 papers in the last decade), understanding the related concepts and commonly used models in AI-based systems is essential. Method: We conducted a systematic literature review to gather data on models typically employed in designing conversational recommender systems. From the collected data, we developed a decision model to assist researchers in selecting the most suitable models for their systems. Additionally, we performed two case studies to evaluate the effectiveness of our proposed decision model. Results: Our study analyzed 59 distinct models and identified 74 commonly used features. We provided insights into potential model combinations, trends in model selection, quality concerns, evaluation measures, and frequently used datasets for training and evaluating these models. Contribution: Our study contributes practical insights and a comprehensive understanding of user intent modeling, empowering the development of more effective and personalized conversational recommender systems. With the Conversational Recommender System, researchers can perform a more systematic and efficient assessment of fitting intent modeling frameworks
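
    As a hedged illustration of what such a decision model can look like, the sketch below scores hypothetical candidate intent-modeling approaches against weighted project requirements; the candidates, features, and weights are invented and do not come from the review.

```python
# Minimal sketch of a decision model for shortlisting intent-modeling
# approaches by scoring them against weighted project requirements.
candidates = {
    "rule_based":  {"training_data_needed": 0.1, "explainability": 0.9, "accuracy": 0.5},
    "svm_intent":  {"training_data_needed": 0.5, "explainability": 0.6, "accuracy": 0.7},
    "bert_intent": {"training_data_needed": 0.9, "explainability": 0.3, "accuracy": 0.9},
}

# A project with little labelled data that values explainability: needing
# training data is penalized, explainability and accuracy are rewarded.
weights = {"training_data_needed": -0.5, "explainability": 0.3, "accuracy": 0.6}

def rank(candidates, weights):
    score = lambda feats: sum(weights[k] * feats[k] for k in weights)
    return sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)

print(rank(candidates, weights))  # ['rule_based', 'svm_intent', 'bert_intent']
```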

    Leveraging Large Models for Crafting Narrative Visualization: A Survey

    Narrative visualization effectively transforms data into engaging stories, making complex information accessible to a broad audience. Large models, essential for narrative visualization, inherently facilitate this process through their superior ability to handle natural language queries and answers, generate cohesive narratives, and enhance visual communication. Inspired by previous work in narrative visualization and recent advances in large models, we synthesized potential tasks and opportunities for large models at various stages of narrative visualization. In our study, we surveyed 79 papers to explore the role of large models in automating narrative visualization creation. We propose a comprehensive pipeline that leverages large models for crafting narrative visualization, categorizing the reviewed literature into four essential phases: Data, Narration, Visualization, and Presentation. Additionally, we identify nine specific tasks where large models are applied across these stages. This study maps out the landscape of challenges and opportunities in the LM4NV process, providing insightful directions for future research and valuable guidance for scholars in the field

    Evaluating Information Retrieval and Access Tasks

    This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today’s smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. They show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students—anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one

    Slang feature extraction by analysing topic change on social media

    Recently, the authors often see words such as youth slang, neologisms and Internet slang on social networking sites (SNSs) that are not registered in dictionaries. Since documents posted to SNSs include a lot of fresh information, they are thought to be useful for collecting information. It is important to analyse these words (hereinafter referred to as ‘slang’) and capture their features to improve the accuracy of automatic information collection. This study aims to analyse what features can be observed in slang by focusing on topics. The authors construct topic models with latent Dirichlet allocation from groups of Twitter documents that include the target slang. With these models, they chronologically analyse the change of topics over a certain period of time to find differences in features between slang and general words. They then propose a slang classification method based on these changing features
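
    A minimal sketch of the underlying technique, assuming scikit-learn's LDA implementation and an invented slang term with invented tweets: topics are fitted separately for two time windows so their top words can be compared for drift.

```python
# Minimal sketch of tracking topic change for a slang term with LDA.
# The tweets, the slang term, and the two time windows are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_words_per_topic(docs, n_topics=2, n_words=3):
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in comp.argsort()[::-1][:n_words]]
            for comp in lda.components_]

# Tweets mentioning the (hypothetical) slang term in two periods.
period_1 = ["that concert was totally lit last night",
            "the new game release is lit, everyone is playing"]
period_2 = ["exam week again, nothing lit about my schedule",
            "coffee is the only lit thing keeping me awake at work"]

print("early topics:", top_words_per_topic(period_1))
print("later topics:", top_words_per_topic(period_2))
# Comparing the top words per period hints at how the contexts around
# the slang term drift over time.
```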

    A Decision Support System For The Intelligence Satellite Analyst

    The study developed a decision support system known as the Visual Analytic Cognitive Model (VACOM) to support the Intelligence Analyst (IA) in satellite information processing tasks within a Geospatial Intelligence (GEOINT) domain. As a visual analytics tool, VACOM contains image processing algorithms, a cognitive network of the IA's mental model, and a Bayesian belief model for satellite information processing. A cognitive analysis helped to identify eight knowledge levels in satellite information processing: spatial, prototypical, contextual, temporal, semantic, pragmatic, intentional, and inferential. A cognitive network was developed for each knowledge level with data input from the subjective questionnaires that probed the analysts’ mental model. The VACOM interface was designed to give the analysts a transparent view of the processes, including the visualization model and the signal processing model applied to the images, the geospatial data representation, and the cognitive network of expert beliefs. The interface allows the user to select a satellite image of interest, select each of the image analysis methods for visualization, and compare ‘ground-truth’ information against the recommendation of VACOM. It was designed to enhance perception, cognition, and comprehension of the multiple, complex image analyses performed by the analysts. A usability analysis of VACOM showed many advantages for the human analysts: a reduction in cognitive workload as a result of less information searching, the ability for the IA to conduct interactive experiments on his or her belief space and guesses, and the selection of the best image processing algorithms to apply in a given image context
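
    To illustrate the kind of Bayesian belief update such a model can perform, the following sketch combines an analyst's prior with evidence from two image-processing cues; the hypothesis, the detectors, and all probabilities are invented and are not taken from the study.

```python
# Minimal sketch of a Bayesian belief update of the kind a decision support
# system could use to combine an analyst's prior with algorithmic evidence.

def update_belief(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) by Bayes' rule."""
    evidence = prior * p_evidence_given_h + (1 - prior) * p_evidence_given_not_h
    return prior * p_evidence_given_h / evidence

# Prior belief that a candidate object in the image is a vehicle (invented).
belief = 0.2
# A first, edge-based cue fires; assumed hit rate and false-alarm rate.
belief = update_belief(belief, p_evidence_given_h=0.9, p_evidence_given_not_h=0.3)
print(f"belief after detector 1: {belief:.2f}")  # ~0.43
# A second, texture-based cue also fires.
belief = update_belief(belief, p_evidence_given_h=0.8, p_evidence_given_not_h=0.25)
print(f"belief after detector 2: {belief:.2f}")  # ~0.71
```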

    Evaluating and Comparing Generative-based Chatbots Based on Process Requirements

    Business processes refer to the sequences of tasks and information flows needed to achieve a specific goal. Such processes are used in multiple sectors, such as healthcare, manufacturing, and banking, among others. They can be represented using diverse notations such as Event-driven Process Chain (EPC) and Yet Another Workflow Language (YAWL). A popular standard notation for modelling business processes is the Business Process Model and Notation (BPMN), due to its breadth of features and intuitiveness. Despite organizations increasingly turning to automated processes to enhance efficiency and precision, business processes might still contain human-dependent tasks, which present several challenges, such as complex task orders, task dependencies, and the need for contextual adaptability. These challenges often make it difficult for process participants to understand process execution, and it is common for them to get lost in the process they are trying to execute. Since the advent of process modelling languages, there have been multiple strategies to help users execute business processes. Most recently, chatbots, programs that allow users to interact with a machine using natural language, have been increasingly used for process execution support. A recent category of chatbots worth mentioning is generative-based chatbots, powered by Large Language Models (LLMs), which have billions of parameters and support conversational intelligence. However, several challenges remain unaddressed: (i) how to incorporate process knowledge into generative-based chatbots; (ii) how to evaluate if a generative-based chatbot is meeting the requirements of BPMN constructs for process execution support; (iii) how different generative-based chatbot models compare in terms of meeting the requirements of BPMN constructs. To address these challenges, this thesis presents an exploratory approach to evaluate and compare generative-based chatbots in terms of meeting the requirements of BPMN constructs. This research comprises three distinct phases. The first phase presents a literature review that examines how chatbots are used in conjunction with business processes. In the second phase, the focus shifts to retrieval-based chatbots and their potential for process execution support when maintaining direct communication with a process engine system, in an attempt to achieve a gold standard of process-aware chatbots. Finally, the third phase delves into the realm of using generative-based models as process-aware chatbots to enhance business process execution support, including an evaluation of how two distinct generative models (GPT and PaLM) perform when helping in the execution of two business processes (Trip Planning and Wedding Planning). This thesis makes several contributions. First, it offers a comprehensive exploration of the challenges and gaps in the existing literature regarding the development of process-aware chatbots. Second, it provides an exploratory study about the use of generative-based chatbots for process execution support that meets the requirements of BPMN constructs. Finally, through comparative qualitative and quantitative evaluations, it sheds light on the performance of prominent generative models, GPT and PaLM, in the context of process execution support, contributing valuable insights for future research and development in this evolving field
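
    As a small, hedged illustration of how a generative chatbot might be probed against a single BPMN construct requirement, the sketch below checks whether a model's reply resolves an exclusive gateway before recommending a branch-specific task; the process text, the `llm_complete` placeholder, and the check are illustrative only and are not the evaluation protocol of the thesis.

```python
# Minimal sketch of probing whether a generative chatbot respects a BPMN
# exclusive gateway when guiding process execution.

PROCESS = """Trip Planning process (textual BPMN):
1. Task: Choose destination.
2. Exclusive gateway: Is the trip international?
   - yes -> Task: Check passport validity.
   - no  -> Task: Book domestic transport.
3. Task: Reserve accommodation."""

def llm_complete(prompt: str) -> str:
    # Placeholder for a real call to GPT, PaLM, or another model; returns a
    # canned reply so the sketch runs without any API access.
    return "Is the trip international? If yes, the next task is to check passport validity."

def probe_exclusive_gateway():
    prompt = (PROCESS + "\n\nThe user has chosen a destination. "
              "As a process assistant, what should happen next?")
    reply = llm_complete(prompt)
    # A chatbot that meets the gateway requirement should resolve the branching
    # condition before recommending a branch-specific task.
    meets_requirement = "international" in reply.lower()
    return {"reply": reply, "meets_gateway_requirement": meets_requirement}

print(probe_exclusive_gateway()["meets_gateway_requirement"])  # True
```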

    AmI-DEU: A Semantic Framework for Adaptable Applications in Smart Environments

    This thesis aims at expanding the use of the Internet of Things (IoT) by facilitating the development of applications by people who are not experts in software development. It proposes a new approach to augment the semantics of IoT applications and the involvement of domain experts in context-aware application development. Our approach makes it possible to manage the changing context of the environment and to generate applications that run in multiple smart environments to provide the required actions in diverse settings. The approach is implemented in a framework (AmI-DEU) that includes the components for IoT application development. AmI-DEU integrates environment services, promotes end-user interaction, and provides the means to represent the application domain, the end-user profile, and the end-user's intentions. The framework enables the definition of IoT applications with a self-described activity intention that contains the knowledge required to achieve the activity. The framework then generates Intention as a Context (IaaC), which combines a self-described activity intention with compiled knowledge to be assessed for augmented adaptations in smart environments. The semantics of AmI-DEU is based on ContextAA (Context-Aware Agents), a platform that provides context awareness in multiple environments. The framework performs knowledge compilation by rules and semantic matching to produce autonomic IoT applications that run in ContextAA. AmI-DEU also includes a visual tool for quick application development and deployment to ContextAA; its GUI adopts the flow metaphor with visual aids to simplify application development by allowing step-by-step rule definitions. As part of the experimentation, AmI-DEU includes a testbed for IoT application development. Experimental results show a potential semantic optimization of resources for dynamic IoT applications in smart homes and smart cities. Our approach promotes technology adoption to improve people's well-being and quality of life. The thesis concludes with research directions that the AmI-DEU framework opens up for achieving pervasive smart environments that provide suitable adaptations to support people's intentions
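
    A minimal sketch of the "Intention as a Context" idea under stated assumptions: an activity intention carries the conditions needed to achieve it, and a simple matcher selects environments whose context already satisfies them; the intent fields, services, and matching rule are invented and are not the AmI-DEU or ContextAA APIs.

```python
# Minimal sketch: match a self-described activity intention against the
# context reported by (hypothetical) environment services.

intention = {
    "activity": "watch_movie",
    "requires": {"lighting": "dim", "display": "on", "audio": "surround"},
}

# Context reported by services in two invented environments.
environments = {
    "living_room": {"lighting": "dim", "display": "on", "audio": "surround"},
    "kitchen": {"lighting": "bright", "display": "off", "audio": "mono"},
}

def matching_environments(intent, envs):
    """Return the environments whose context already satisfies the intention."""
    needed = intent["requires"].items()
    return [name for name, ctx in envs.items()
            if all(ctx.get(key) == value for key, value in needed)]

print(matching_environments(intention, environments))  # ['living_room']
```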