
    Recommending messages to users in participatory media environments: a Bayesian credibility approach

    In this thesis, we address the challenge of information overload in online participatory messaging environments using an artificial intelligence approach drawn from research in multiagent systems trust modeling. In particular, we reason about which messages to show to users based on modeling both credibility and similarity, motivated by the need to discriminate between popular (but false) messages and truly beneficial ones. Our work focuses on environments in which users' ratings on messages reveal their preferences and the trustworthiness of those ratings must itself be modeled in order to make effective recommendations. We first present one solution, CredTrust, and demonstrate its efficacy in comparison with LOAR, an established trust-based recommender system applicable to participatory media networks that does not model credibility. Validation for our framework is provided through the simulation of an environment where the ground truth of the benefit of a message to a user is known. We show that our approach performs well, successfully recommending messages with high predicted benefit and avoiding those with low predicted benefit. We then develop a new recommendation model grounded in Bayesian statistics that uses Partially Observable Markov Decision Processes (POMDPs). This model is an important next step: both CredTrust and LOAR encode particular functions of user features (viz., similarity and credibility) when making recommendations, whereas our new model, denoted POMDPTrust, learns the appropriate evaluation functions in order to make "correct" belief updates about the usefulness of messages. We validate the new approach in simulation, showing that it outperforms both LOAR and CredTrust in a variety of agent scenarios, and we demonstrate that POMDPTrust performs well on real-world data sets from Reddit.com and Epinions.com. In all, we offer a novel trust model which is shown, through simulation and real-world experimentation, to be an effective agent-based solution to the problem of managing the messages posted by users in participatory media networks.
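    The abstract above does not spell out the model's equations, so the following is only a minimal sketch, assuming a Beta belief over each message's usefulness that is updated with ratings weighted by each rater's estimated credibility; the class names, the weighting scheme, and the recommendation threshold are illustrative assumptions, not the actual CredTrust/POMDPTrust formulation.

```python
# Illustrative sketch (not the thesis's actual model): a Beta belief over a
# message's usefulness, updated with ratings weighted by rater credibility.

from dataclasses import dataclass

@dataclass
class MessageBelief:
    alpha: float = 1.0  # pseudo-count of evidence that the message is useful
    beta: float = 1.0   # pseudo-count of evidence that it is not

    def update(self, rating_positive: bool, rater_credibility: float) -> None:
        """Weight each rating by the rater's credibility in [0, 1]."""
        if rating_positive:
            self.alpha += rater_credibility
        else:
            self.beta += rater_credibility

    def expected_usefulness(self) -> float:
        return self.alpha / (self.alpha + self.beta)

def recommend(beliefs: dict, threshold: float = 0.6) -> list:
    """Recommend messages whose expected usefulness exceeds a threshold."""
    return [m for m, b in beliefs.items() if b.expected_usefulness() > threshold]

# Example: two raters with different credibilities rate the same message.
belief = MessageBelief()
belief.update(rating_positive=True, rater_credibility=0.9)   # credible endorsement
belief.update(rating_positive=False, rater_credibility=0.2)  # low-credibility downvote
print(round(belief.expected_usefulness(), 3))  # ~0.613
```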

    Framework for Human Computer Interaction for Learning Dialogue Strategies using Controlled Natural Language in Information Systems

    Spoken language systems are going to have a tremendous impact on real-world applications, be it a healthcare enquiry, public transportation, or airline booking system, while maintaining linguistic diversity for interaction among users across the globe. These systems are capable of interacting with the user in the different languages that the system supports. Normally, when a person interacts with another person, there are many non-verbal cues which guide the dialogue, and all the utterances have a contextual relationship that manages the dialogue as it is jointly constructed by the two speakers. Human-Computer Interaction has a wide impact on the design of applications and has become one of the emerging areas of interest for researchers. We are all witness to an explosive electronic revolution in which gadgets and gizmos surround us, advanced not only in power, design, and applications, but also in ease of access: user-friendly interfaces are designed so that we can easily use and control all the functionality of the devices. Speech is one of the most intuitive forms of interaction that humans use; it provides potential benefits such as hands-free access to machines, ergonomics, and greater efficiency of interaction. Yet speech-based interface design has long been an expert job. Much research has been done on building real spoken dialogue systems which can interact with humans using voice and help in performing various tasks as humans do. The last two decades have seen advanced research in automatic speech recognition, dialogue management, text-to-speech synthesis, and Natural Language Processing for various applications, with positive results. This dissertation proposes to apply machine learning (ML) techniques to the problem of optimizing dialogue management strategy selection in the design of Spoken Dialogue System prototypes. Although automatic speech recognition and system-initiated dialogues, where the system expects an answer in the form of 'yes' or 'no', have already been applied to Spoken Dialogue Systems (SDS), no real attempt has been made to use those techniques to design a new system from scratch. In this dissertation, we propose some novel ideas in order to ease the design of Spoken Dialogue Systems and allow novices to have access to voice technologies. A framework for simulating and evaluating dialogues and learning optimal dialogue strategies in a controlled Natural Language is proposed. The simulation process is based on a probabilistic description of a dialogue and on the stochastic modelling of both the artificial NLP modules composing an SDS and the user. This probabilistic model is based on a set of parameters that can be tuned from prior knowledge of the discourse or learned from data. The evaluation is part of the simulation process and is based on objective measures provided by each module. Finally, the simulation environment is connected to a learning agent that uses the supplied evaluation metrics as an objective function in order to generate an optimal behaviour for the SDS.
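    As a rough illustration of the general idea of learning a dialogue strategy against a stochastic simulated user (not the framework proposed in the dissertation), the toy sketch below uses tabular Q-learning to choose between asking a confirmation question and acting immediately, given a discretized speech-recognition confidence; the states, actions, rewards, and simulated user/recognizer are invented for illustration.

```python
# Toy sketch of learning a dialogue strategy against a stochastic simulated user.
# States, actions, rewards and the user/ASR model below are illustrative assumptions.

import random
from collections import defaultdict

random.seed(0)
ACTIONS = ["confirm", "proceed"]     # ask a yes/no confirmation vs. act on the hypothesis
STATES = ["low_conf", "high_conf"]   # discretized speech-recognition confidence

def simulate_turn(state, action):
    """Stochastic simulated user + recognizer: returns the reward for one exchange."""
    correct = 0.55 if state == "low_conf" else 0.98   # chance the hypothesis was right
    if action == "confirm":
        # Confirming costs an extra turn (-2) but almost always ends in the right outcome.
        return -2 + 10 if random.random() < 0.98 else -2 - 10
    # Proceeding is fast but fails whenever the recognition hypothesis was wrong.
    return 10 if random.random() < correct else -10

Q = defaultdict(float)
alpha, epsilon = 0.1, 0.2

for episode in range(20000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS) if random.random() < epsilon else \
        max(ACTIONS, key=lambda act: Q[(s, act)])
    r = simulate_turn(s, a)
    Q[(s, a)] += alpha * (r - Q[(s, a)])   # one-exchange episodes, so no bootstrap term

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda act: Q[(s, act)]))
    # typically learns: low_conf -> confirm, high_conf -> proceed
```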

    Emotion-Aware and Human-Like Autonomous Agents

    In human-computer interaction (HCI), one of the technological goals is to build human-like artificial agents that can think, decide and behave like humans during the interaction. A prime example is a dialogue system, where the agent should converse fluently and coherently with a user and connect with them emotionally. Humanness and emotion-awareness of interactive artificial agents have been shown to improve user experience and help attain application-specific goals more quickly. However, achieving human-likeness in HCI systems is contingent on addressing several philosophical and scientific challenges. In this thesis, I address two such challenges: replicating the human ability to 1) correctly perceive and adopt emotions, and 2) communicate effectively through language. Several research studies in neuroscience, economics, psychology and sociology show that both language and emotional reasoning are essential to the human cognitive deliberation process. These studies establish that any human-like AI should necessarily be equipped with adequate emotional and linguistic cognizance. To this end, I explore the following research directions.
    - I study how agents can reason emotionally in various human-interactive settings for decision-making. I use Bayesian Affect Control Theory, a probabilistic model of human-human affective interactions, to build a decision-theoretic reasoning algorithm about affect. This approach is validated on several applications: two-person social dilemma games, an assistive healthcare device, and robot navigation.
    - I develop several techniques to understand and generate emotions/affect in language. The proposed methods include affect-based feature augmentation of neural conversational models, training regularization using affective objectives, and affectively diverse sequential inference (a toy illustration of feature augmentation follows this list).
    - I devise an active learning technique that elicits user feedback during a conversation. This enables the agent to learn in real time, and to produce natural and coherent language during the interaction.
    - I explore incremental domain adaptation in language classification and generation models. The proposed method seeks to replicate the human ability to continually learn from new environments without forgetting old experiences.
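    As a simple illustration of what affect-based feature augmentation of a conversational model can look like (the thesis's actual architecture is not described here), the sketch below appends a per-token valence-arousal-dominance vector from a small lexicon to ordinary word embeddings before they would be fed to an encoder; the toy lexicon, embedding dimensions, and vocabulary are assumptions.

```python
# Illustrative sketch of affect-based feature augmentation: append a per-token
# affect vector (valence/arousal/dominance from a lexicon) to word embeddings.
# The tiny lexicon and dimensions here are assumptions, not the thesis's setup.

import numpy as np

EMBED_DIM = 8
rng = np.random.default_rng(0)
word_embeddings = {w: rng.normal(size=EMBED_DIM) for w in ["i", "love", "hate", "this"]}

# Toy VAD lexicon: (valence, arousal, dominance) in [0, 1]; neutral default is 0.5.
vad_lexicon = {"love": (0.9, 0.7, 0.6), "hate": (0.1, 0.8, 0.4)}

def affect_augmented_inputs(tokens):
    """Return a (len(tokens), EMBED_DIM + 3) matrix of augmented token features."""
    rows = []
    for tok in tokens:
        emb = word_embeddings.get(tok, np.zeros(EMBED_DIM))
        vad = np.array(vad_lexicon.get(tok, (0.5, 0.5, 0.5)))
        rows.append(np.concatenate([emb, vad]))
    return np.stack(rows)

features = affect_augmented_inputs(["i", "love", "this"])
print(features.shape)  # (3, 11) -- these rows would feed the encoder of a seq2seq model
```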

    EDM 2011: 4th international conference on educational data mining : Eindhoven, July 6-8, 2011 : proceedings


    Decision-Theoretic Planning for User-Adaptive Systems: Dealing With Multiple Goals and Resource Limitations

    While a number of user-adaptive systems exist that use decision-theoretic methods to make individual decisions, decision-theoretic planning has hardly been exploited in the context of user-adaptive systems so far. This thesis focuses on the application of decision-theoretic planning in user-adaptive systems and demonstrates how competing goals and resource limitations of the user can be considered in such an approach. The approach is illustrated with examples from the following domains: user-adaptive assistance for operating a technical device, user-adaptive navigation recommendations in an airport scenario, and finally user-adaptive and location-aware shopping assistance. With the shopping assistant, we analyzed usability issues of a system based on decision-theoretic planning in two user studies. We describe how hard time constraints, as they are induced, for example, by the boarding of the passenger in an airport navigation scenario, can be considered in a decision-theoretic approach. Moreover, we propose a hierarchical decision-theoretic planning approach based on goal prioritization, which keeps the complexity of dealing with realistic problems tractable. Furthermore, we specify the general workflow for the development and application of Markov decision processes in user-adaptive systems, and we describe possibilities for enhancing a user-adaptive system based on decision-theoretic planning with an explanation component.
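    As a minimal, deterministic sketch of how a hard deadline can be folded into decision-theoretic planning (not the thesis's actual airport model), the toy example below includes the remaining time in the state and solves the problem by backward induction, so the recommended action switches from shopping to heading for the gate as boarding time approaches; all states, actions, and rewards are invented.

```python
# Toy sketch: folding a hard deadline into decision-theoretic planning.
# The remaining time is part of the state, so the recommended action changes as
# the deadline approaches. The actions and rewards below are illustrative assumptions.

from functools import lru_cache

BOARDING_REWARD = 20   # value of catching the flight
SHOP_REWARD = 2        # value of one more time step spent shopping
WALK_TIME = 3          # time steps needed to walk from the shops to the gate

@lru_cache(maxsize=None)
def value(time_left):
    """Optimal value computed by backward induction over the remaining time."""
    go_now = BOARDING_REWARD if time_left >= WALK_TIME else 0
    if time_left == 0:
        return go_now
    keep_shopping = SHOP_REWARD + value(time_left - 1)
    return max(go_now, keep_shopping)

def best_action(time_left):
    go_now = BOARDING_REWARD if time_left >= WALK_TIME else 0
    keep_shopping = SHOP_REWARD + value(time_left - 1)
    return "head_to_gate" if go_now >= keep_shopping else "keep_shopping"

print(best_action(time_left=10))  # keep_shopping: plenty of slack before boarding
print(best_action(time_left=3))   # head_to_gate: the deadline now dominates
```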

    Policy Explanation and Model Refinement in Decision-Theoretic Planning

    Decision-theoretic systems, such as Markov Decision Processes (MDPs), are used for sequential decision-making under uncertainty. MDPs provide a generic framework that can be applied in various domains to compute optimal policies. This thesis presents techniques that offer explanations of optimal policies for MDPs and then refine decision-theoretic models (Bayesian networks and MDPs) based on feedback from experts. Explaining policies for sequential decision-making problems is difficult due to the presence of stochastic effects, multiple possibly competing objectives, and long-range effects of actions. However, explanations are needed to assist experts in validating that the policy is correct and to help users develop trust in the choices recommended by the policy. A set of domain-independent templates to justify a policy recommendation is presented, along with a process to identify the minimum number of templates that need to be populated to completely justify the policy. The rejection of an explanation by a domain expert indicates a deficiency in the model that led to the generation of the rejected policy. This thesis presents techniques to refine the model parameters such that the optimal policy calculated from the refined parameters conforms with the expert feedback. The expert feedback is translated into constraints on the model parameters that are used during refinement. These constraints are non-convex for both Bayesian networks and MDPs. For Bayesian networks, the refinement approach is based on Gibbs sampling and stochastic hill climbing, and it learns a model that obeys the expert constraints. For MDPs, the parameter space is partitioned such that alternating linear optimization can be applied to learn model parameters that lead to a policy in accordance with the expert feedback. In practice, the state space of an MDP can be very large, which is an issue for real-world problems. Factored MDPs are often used to deal with this issue: state variables represent the state space and dynamic Bayesian networks model the transition functions, which avoids the exponential growth in the state space associated with large and complex problems. The approaches for explanation and refinement presented in this thesis are also extended to the factored case to demonstrate their use in real-world applications. The domains of course advising for undergraduate students, assisted hand-washing for people with dementia, and diagnostics for manufacturing are used for empirical evaluation.
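    The thesis's refinement algorithms (Gibbs sampling with stochastic hill climbing, and alternating linear optimization) are not reproduced here; as a loose illustration of the underlying idea that expert feedback constrains the model parameters, the sketch below searches for the smallest change to a single transition probability that makes the computed optimal action agree with the expert's preferred action, using an invented two-action diagnostics example.

```python
# Illustrative sketch of model refinement from expert feedback (not the thesis's
# alternating-optimization algorithm): adjust a transition probability as little as
# possible so that the optimal policy agrees with the expert's preferred action.
# The tiny diagnostics example and its numbers are invented for illustration.

REPAIR_COST, REPLACE_COST = -10.0, -50.0
WORKING_VALUE = 100.0   # value of ending up with a working machine

def optimal_action(p_repair_fixes):
    """One-step lookahead in state 'machine_faulty' for a given repair success probability."""
    q_repair = REPAIR_COST + p_repair_fixes * WORKING_VALUE
    q_replace = REPLACE_COST + WORKING_VALUE        # replacement always works
    return "repair" if q_repair > q_replace else "replace"

def refine(p_current, expert_state_action, step=0.01):
    """Search outward from the current estimate for the smallest change that
    makes the optimal policy conform with the expert's feedback."""
    _, expert_action = expert_state_action
    for k in range(int(1 / step) + 1):
        for candidate in (p_current - k * step, p_current + k * step):
            if 0.0 <= candidate <= 1.0 and optimal_action(candidate) == expert_action:
                return candidate
    return None   # infeasible: the feedback cannot be satisfied by this parameter alone

# The current model says repair succeeds 80% of the time, making 'repair' optimal,
# but the expert insists that 'replace' is the right choice for a faulty machine.
p_refined = refine(0.8, ("machine_faulty", "replace"))
print(round(p_refined, 3), optimal_action(p_refined))   # ~0.6 replace
```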

    Reinforcement Learning

    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly putting forward and performing actions, and learning is a very important aspect of it. This book is on reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. The remaining 11 chapters show that there is already wide usage in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.