
    Modelling the Interaction Levels in HCI Using an Intelligent Hybrid System with Interactive Agents: A Case Study of an Interactive Museum Exhibition Module in Mexico

    Technology has become a necessity in our everyday lives and is essential for completing activities we typically take for granted; technologies can assist us by completing set tasks or achieving desired goals with optimal effect and in the most efficient way, thereby improving our interactive experiences. This paper presents research that explores the representation of user interaction levels using an intelligent hybrid system approach with agents. We evaluate interaction levels in Human-Computer Interaction (HCI) with the aim of enhancing user experiences. We describe interaction levels using an intelligent hybrid system that provides the decision-making mechanism for an agent evaluating interaction levels as visitors use the interactive modules of a museum exhibition. The agents represent a high-level abstraction of the system, where communication takes place between the user, the exhibition and the environment. In this paper, we provide a means to measure the interaction levels and natural behaviour of users, based on museum user-exhibition interaction. We consider that, by analysing user interaction in a museum, we can help to design better ways of interacting with exhibition modules according to the properties and behaviour of the users. An interaction-evaluator agent is proposed to achieve the most suitable representation of the interaction levels, with the aim of improving user interactions by offering the most appropriate directions, services, content and information, thereby improving the quality of interaction experienced between the user-agent and the exhibition-agent.
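    As an illustration only, the sketch below shows one way an interaction-evaluator agent might map observed visitor behaviour to a coarse interaction level; the input signals, thresholds, and level labels are hypothetical and stand in for the paper's intelligent hybrid decision-making system rather than reproducing it.

```python
from dataclasses import dataclass

@dataclass
class InteractionObservation:
    """Hypothetical signals an exhibition module could log for one visitor."""
    dwell_time_s: float      # time spent at the module
    touch_events: int        # direct manipulations of the interface
    prompts_followed: int    # module suggestions the visitor acted on

def evaluate_interaction_level(obs: InteractionObservation) -> str:
    """Toy decision rule standing in for the hybrid system: maps raw
    behaviour to a coarse interaction level so the exhibition-agent can
    adapt directions, services and content. Thresholds are illustrative."""
    score = 0
    score += obs.dwell_time_s > 30        # sustained attention
    score += obs.touch_events > 5         # active manipulation
    score += obs.prompts_followed > 2     # dialogue with the module
    return ["passive", "exploratory", "engaged", "immersed"][score]

# Example: a visitor who lingers and touches often but ignores most prompts.
print(evaluate_interaction_level(InteractionObservation(45.0, 8, 1)))  # engaged
```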

    A collective intelligence approach for building student's trustworthiness profile in online learning

    Information and communication technologies have been widely adopted in most educational institutions to support e-Learning through different learning methodologies, such as computer-supported collaborative learning, which has become one of the most influential learning paradigms. In this context, e-Learning stakeholders are increasingly demanding new requirements; among them, information security is considered a critical factor in on-line collaborative processes. Information security determines the accurate development of learning activities, especially when a group of students carries out an on-line assessment that leads to grades or certificates; in these cases, information security is an essential issue that has to be considered. To date, even the most advanced security technologies have drawbacks that impede the development of overall e-Learning security frameworks. For this reason, this paper suggests enhancing technological security models with functional approaches: we propose a functional security model based on trustworthiness and collective intelligence, both of which are closely related to on-line collaborative learning and on-line assessment models. The main goal of this paper is therefore to discover how security can be enhanced with trustworthiness in an on-line collaborative learning scenario through the study of the collective intelligence processes that occur in on-line assessment activities. To this end, a peer-to-peer public student profile model based on trustworthiness is proposed, and the main collective intelligence processes involved in collaborative on-line assessment activities are presented.
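    As a rough illustration of the kind of peer-to-peer public profile the paper proposes, the sketch below aggregates peer ratings gathered during on-line assessment activities into a single trust score; the rating scale, the per-activity averaging, and all field names are assumptions made for this example, not the paper's model.

```python
from collections import defaultdict
from statistics import mean

class TrustworthinessProfile:
    """Hypothetical public student profile: peers rate a student (0.0-1.0)
    after each collaborative assessment activity, and only the aggregated
    trust score is exposed publicly."""

    def __init__(self, student_id: str):
        self.student_id = student_id
        self._ratings = defaultdict(list)   # activity type -> peer ratings

    def add_peer_rating(self, activity: str, rating: float) -> None:
        self._ratings[activity].append(min(max(rating, 0.0), 1.0))

    def trust_score(self) -> float:
        """Mean of per-activity means, so one heavily rated activity
        does not dominate the overall trustworthiness estimate."""
        per_activity = [mean(r) for r in self._ratings.values()]
        return mean(per_activity) if per_activity else 0.5   # neutral prior

profile = TrustworthinessProfile("student-42")
profile.add_peer_rating("peer-review", 0.9)
profile.add_peer_rating("group-assessment", 0.7)
print(round(profile.trust_score(), 2))   # 0.8
```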

    Robot Mindreading and the Problem of Trust

    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are, and current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.

    The display of electronic commerce within virtual environments

    In today’s competitive business environment, the majority of companies are expected to be represented on the Internet in the form of an electronic commerce site. In an effort to keep up with current business trends, certain aspects of interface design, such as those related to navigation and perception, may be overlooked. For instance, the manner in which visitors to the site perceive the information displayed, or the ease with which they navigate through the site, may not be taken into consideration. This paper reports on the evaluation of the electronic commerce sites of three different companies, focusing specifically on human factors issues such as perception and navigation. Heuristic evaluation, the most popular method for investigating user interface design, is the technique employed to assess each of these sites. In light of the results from the analysis of the evaluation data, virtual environments are suggested as a way of addressing the navigation and perception constraints of the display.
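    For readers unfamiliar with the method, the sketch below shows how heuristic-evaluation findings for the three sites might be recorded and tallied by severity; the heuristic names and 0-4 severity scale follow common usability practice and are not taken from the paper.

```python
from collections import Counter

# Each finding: (site, heuristic violated, severity 0=cosmetic .. 4=catastrophic)
findings = [
    ("site-A", "visibility of system status", 3),
    ("site-A", "recognition rather than recall", 2),
    ("site-B", "user control and freedom", 4),
    ("site-C", "visibility of system status", 1),
]

def severity_by_site(findings):
    """Sum severity ratings per site so evaluators can compare how strongly
    navigation and perception problems affect each e-commerce site."""
    totals = Counter()
    for site, _heuristic, severity in findings:
        totals[site] += severity
    return totals

print(severity_by_site(findings))  # Counter({'site-A': 5, 'site-B': 4, 'site-C': 1})
```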

    PIWeCS: enhancing human/machine agency in an interactive composition system

    This paper focuses on the infrastructure and aesthetic approach used in PIWeCS: a Public Space Interactive Web-based Composition System. The concern was to increase the sense of dialogue between human and machine agency in an interactive work by adapting Paine's (2002) notion of a conversational model of interaction as a ‘complex system’. The machine side of PIWeCS is implemented by integrating intelligent agent programming with MAX/MSP, while human input arrives through a web infrastructure. The conversation is initiated and continued by participants through arrangements and compositions based on short performed samples of traditional New Zealand Maori instruments. The system allows a composition to be extended through the electroacoustic manipulation of the source material.
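    The paper does not detail how its agent layer talks to MAX/MSP, but a common pattern is to forward a web participant's choices to a MAX/MSP patch as OSC messages. The sketch below assumes the python-osc library and a patch listening on UDP port 7400; the /piwecs/sample address, the sample name, and the parameter values are all hypothetical.

```python
# pip install python-osc
from pythonosc import udp_client

# MAX/MSP patch assumed to receive OSC messages, e.g. via [udpreceive 7400].
client = udp_client.SimpleUDPClient("127.0.0.1", 7400)

def trigger_sample(sample_id: str, transpose: float, gain: float) -> None:
    """Ask the patch to play one of the recorded Maori instrument samples
    with simple electroacoustic parameters chosen by the web participant."""
    client.send_message("/piwecs/sample", [sample_id, transpose, gain])

trigger_sample("sample_01", -2.0, 0.8)  # hypothetical sample name and values
```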

    CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents

    In this paper, we focus on inferring whether a given user command is clear, ambiguous, or infeasible in the context of interactive robotic agents that utilize large language models (LLMs). To tackle this problem, we first present an uncertainty estimation method for LLMs to classify whether the command is certain (i.e., clear) or not (i.e., ambiguous or infeasible). Once a command is classified as uncertain, we further distinguish between ambiguous and infeasible commands by leveraging LLMs with situation-aware context in a zero-shot manner. For ambiguous commands, we disambiguate the command by interacting with users via question generation with LLMs. We believe that proper recognition of the given commands can reduce malfunctions and undesired actions of the robot, enhancing the reliability of interactive robot agents. We present a dataset for robotic situational awareness consisting of pairs of high-level commands, scene descriptions, and labels of command type (i.e., clear, ambiguous, or infeasible). We validate the proposed method on the collected dataset and in a pick-and-place tabletop simulation. Finally, we demonstrate the proposed approach in real-world human-robot interaction experiments, i.e., handover scenarios.
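    The sketch below illustrates the kind of pipeline the abstract describes: sample the LLM several times and treat disagreement as uncertainty, classify uncertain commands as ambiguous or infeasible with a zero-shot, situation-aware prompt, and generate a clarifying question for ambiguous ones. The complete() function is a placeholder for whatever LLM API is used, and the prompts and threshold are assumptions, not the authors' actual method.

```python
from collections import Counter

def complete(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder for an LLM call (any chat/completion API could be used)."""
    raise NotImplementedError

def classify_command(command: str, scene: str,
                     n_samples: int = 5, threshold: float = 0.8) -> str:
    """Return 'clear', 'ambiguous', or 'infeasible' for a user command."""
    # 1. Uncertainty estimation: sample interpretations and measure agreement.
    prompt = f"Scene: {scene}\nCommand: {command}\nState the single intended goal:"
    samples = [complete(prompt, temperature=1.0) for _ in range(n_samples)]
    _, count = Counter(samples).most_common(1)[0]
    if count / n_samples >= threshold:
        return "clear"              # consistent answers: treat command as certain

    # 2. Zero-shot, situation-aware classification of the uncertain command.
    verdict = complete(
        f"Scene: {scene}\nCommand: {command}\n"
        "Is this command 'ambiguous' (needs clarification) or 'infeasible' "
        "(cannot be done in this scene)? Answer with one word:",
        temperature=0.0,
    )
    return "ambiguous" if "ambiguous" in verdict.lower() else "infeasible"

def clarifying_question(command: str, scene: str) -> str:
    """3. For ambiguous commands, generate one question that resolves them."""
    return complete(
        f"Scene: {scene}\nCommand: {command}\n"
        "Write one short question to the user that resolves the ambiguity:",
        temperature=0.0,
    )
```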

    Understanding consumer needs and preferences in new product development: the case of functional food innovations

    As the majority of new products fail, it is important to focus on the needs and preferences of consumers in new product development. Consumers are increasingly recognised as important co-developers of innovations, often developing new functions for technologies, solving unforeseen problems and demanding innovative solutions. The central research question of the paper is: how can consumer needs and preferences be understood in the context of new product development so as to improve the success of emerging innovations, such as functional foods? Important variables appear to be domestication, trust and distance, intermediate agents, user representations, and consumer- and product-specific characteristics. Using survey and focus group data, we find that consumers need and prefer easy-to-use new products, a transparent and accessible information supply from the producer, independent control of efficacy and safety, and the introduction of a quality symbol for functional foods. Intermediate agents are not important in information diffusion. Producers should concentrate on consumers with specific needs, such as athletes, women, obese persons, and stressed people. This will support developing products in line with the needs and mode of living of the users.
    Keywords: consumer needs, preferences, new product development, functional foods

    Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot

    We explore new aspects of assistive living through smart human-robot interaction (HRI) that involves automatic recognition and online validation of speech and gestures in a natural interface, providing social features for HRI. We introduce a complete framework and resources for a real-life scenario in which elderly subjects are supported by an assistive bathing robot, addressing health and hygiene care issues. We contribute a new dataset, a suite of tools used for data acquisition, and a state-of-the-art pipeline for multimodal learning within the framework of the I-Support bathing robot, with emphasis on audio and RGB-D visual streams. We consider privacy issues by evaluating the depth visual stream along with the RGB stream, using Kinect sensors. The audio-gestural recognition task on this new dataset yields up to 84.5%, while the online validation of the I-Support system on elderly users achieves up to 84% when the two modalities are fused together. The results are promising enough to support further research in the area of multimodal recognition for assistive social HRI, considering the difficulties of the specific task. Upon acceptance of the paper, part of the data will be made publicly available.
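    As an illustration of how the two streams might be combined, the sketch below performs simple score-level (late) fusion of per-command scores from a speech recogniser and an RGB-D gesture recogniser; the command set, fusion weight, and score values are hypothetical and do not describe the actual I-Support pipeline.

```python
import numpy as np

COMMANDS = ["wash_back", "wash_legs", "stop"]   # hypothetical command set

def fuse_scores(audio_probs: np.ndarray, gesture_probs: np.ndarray,
                audio_weight: float = 0.6) -> str:
    """Weighted late fusion of per-command scores from the audio and
    gesture recognisers; returns the fused command decision."""
    fused = audio_weight * audio_probs + (1.0 - audio_weight) * gesture_probs
    return COMMANDS[int(np.argmax(fused))]

# Example: speech mildly favours "wash_legs", the gesture strongly agrees.
audio = np.array([0.30, 0.45, 0.25])
gesture = np.array([0.10, 0.80, 0.10])
print(fuse_scores(audio, gesture))   # wash_legs
```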