10,339 research outputs found

    Computational Emotion Model for Virtual Characters


    Cognitive Maps


    A virtual diary companion

    Chatbots and embodied conversational agents show turn-based conversation behaviour. In current research we almost always assume that each utterance of a human conversational partner should be followed by an intelligent and/or empathetic reaction from the chatbot or embodied agent. They are assumed to be alert, trying to please the user. Other applications, which have not yet received much attention, require a more patient or relaxed attitude, waiting for the right moment to provide feedback to the human partner. Being able and willing to listen is one of the conditions for success. In this paper we present some observations on listening-behaviour research and introduce one of our applications, the virtual diary companion.

    A Model Of Visual Recognition Implemented Using Neural Networks

    The ability to recognise and classify objects in the environment is an important property of biological vision, and it is highly desirable that artificial vision systems have this ability too. This thesis documents research into the use of artificial neural networks to implement a prototype model of visual object recognition. The prototype model, describing a computational architecture, is derived from relevant physiological and psychological data, and attempts to reconcile the use of structural decomposition and invariant feature detection. To validate the research, a partial implementation of the model has been constructed using multiple neural networks. A linear feed-forward network performs pre-processing after being trained to approximate a conventional statistical data-compression algorithm. The output of this pre-processing forms a feature vector that is categorised using an Adaptive Resonance Theory network capable of recognising arbitrary analog patterns. The implementation has been applied to the task of recognising static images of human faces. Experimental results show that the implementation achieves a 100% recognition rate with performance that degrades gracefully. The implementation is robust against facial changes and minor occlusions, and it is flexible enough to categorise data from any domain.
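
    The pipeline summarised above, a linear feed-forward compression stage producing a feature vector that an Adaptive Resonance Theory network then categorises, can be sketched in Python. This is a minimal illustration, not the thesis's implementation: PCA stands in for the trained compression network, a toy cosine-vigilance categoriser stands in for a full ART network, and the names `pca_compress` and `SimpleART` are ours.

```python
import numpy as np

def pca_compress(X, k):
    """Linear compression: project data onto the top-k principal
    components (a stand-in for the trained feed-forward network)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data yields the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, mu, Vt[:k]

class SimpleART:
    """Toy resonance-style categoriser: a feature vector joins an
    existing category only if its cosine match with that category's
    prototype passes the vigilance test; otherwise a new category
    is recruited."""

    def __init__(self, vigilance=0.9):
        self.vigilance = vigilance
        self.prototypes = []

    def categorise(self, v):
        v = v / (np.linalg.norm(v) + 1e-12)
        for i, p in enumerate(self.prototypes):
            if float(v @ p) >= self.vigilance:
                # Resonance: blend the prototype towards the input.
                merged = p + v
                self.prototypes[i] = merged / np.linalg.norm(merged)
                return i
        self.prototypes.append(v)  # mismatch everywhere: new category
        return len(self.prototypes) - 1
```

    Feeding each compressed face vector through `categorise` assigns it to a cluster; raising the vigilance yields finer-grained categories, mirroring how ART trades generalisation against specificity.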

    Neural Signals of Video Advertisement Liking: Insights into Psychological Processes and their Temporal Dynamics

    What drives the liking of video advertisements? The authors analyzed neural signals during ad exposure from three functional magnetic resonance imaging (fMRI) data sets (113 participants from two countries watching 85 video ads) with automated meta-analytic decoding (Neurosynth). These brain-based measures of psychological processes—including perception and language (information processing), executive function and memory (cognitive functions), and social cognition and emotion (social-affective response)—predicted subsequent self-report ad liking, with emotion and memory being the earliest predictors after the first three seconds. Over the span of ad exposure, while the predictiveness of emotion peaked early and fell, that of social cognition had a peak-and-stable pattern, followed by a late peak of predictiveness in perception and executive function. At the aggregate level, neural signals—especially those associated with social-affective response—improved the prediction of out-of-sample ad liking compared with traditional anatomically based neuroimaging analysis and self-report liking. Finally, early-onset social-affective response predicted population ad liking in a behavioral replication. Overall, this study helps delineate the psychological mechanisms underlying ad processing and ad liking and proposes a novel neuroscience-based approach for generating psychological insights and improving out-of-sample predictions.
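
    The time-resolved question here, which psychological process predicts liking at which point in the ad, can be illustrated with a small sliding-window regression on synthetic data. This is a hypothetical sketch, not the authors' pipeline: closed-form ridge regression stands in for their modeling, and `windowed_predictiveness` is our name.

```python
import numpy as np

def ridge_fit_predict(X, y, lam=1.0):
    """Closed-form ridge regression, w = (X'X + lam*I)^-1 X'y,
    returning in-sample predictions."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ w

def windowed_predictiveness(signals, liking, window=3):
    """For each time window, regress ad liking on the window-averaged
    process scores and record the correlation between the fitted
    values and liking.
    signals: (n_ads, n_seconds, n_processes); liking: (n_ads,)."""
    n_ads, n_sec, _ = signals.shape
    scores = []
    for t in range(n_sec - window + 1):
        X = signals[:, t:t + window, :].mean(axis=1)  # average over window
        pred = ridge_fit_predict(X, liking)
        scores.append(np.corrcoef(pred, liking)[0, 1])
    return np.array(scores)
```

    A rising-then-falling curve in `scores` for one process would correspond to the early peak the authors report for emotion.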

    Data Cube Approximation and Mining using Probabilistic Modeling

    On-line Analytical Processing (OLAP) techniques commonly used in data warehouses allow the exploration of data cubes according to different analysis axes (dimensions) and under different abstraction levels in a dimension hierarchy. However, such techniques are not aimed at mining multidimensional data. Since data cubes are nothing but multi-way tables, we propose to analyze the potential of two probabilistic modeling techniques, namely non-negative multi-way array factorization and log-linear modeling, with the ultimate objective of compressing and mining aggregate and multidimensional values. With the first technique, we compute the set of components that best fit the initial data set and whose superposition coincides with the original data; with the second technique we identify a parsimonious model (i.e., one with a reduced set of parameters), highlight strong associations among dimensions and discover possible outliers in data cells. A real-life example will be used to (i) discuss the potential benefits of the modeling output on cube exploration and mining, (ii) show how OLAP queries can be answered in an approximate way, and (iii) illustrate the strengths and limitations of these modeling approaches.
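
    Of the two techniques, the log-linear route is the easier to sketch. A parsimonious independence model fits expected cell counts from the marginal totals of a two-dimensional cube, and cells with large standardised residuals are the candidate outliers. The function below is our illustrative sketch, not the paper's implementation.

```python
import numpy as np

def independence_model(table):
    """Parsimonious log-linear model for a two-dimensional data cube:
    under independence, expected count e_ij = row_i * col_j / N.
    Large Pearson residuals flag cells that deviate from the model,
    i.e. candidate outliers and strong associations."""
    table = np.asarray(table, dtype=float)
    N = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / N
    residuals = (table - expected) / np.sqrt(expected)
    return expected, residuals
```

    Approximate OLAP answers can then be read from `expected` instead of the raw cube, while cells with |residual| above roughly 2 mark associations worth drilling into.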

    EmoCo: Visual analysis of emotion coherence in presentation videos

    Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming, and there is a lack of tool support for efficient, in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison at the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.

    Comment: 11 pages, 8 figures. Accepted by IEEE VAST 201
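
    EmoCo itself is an interactive system, but the underlying notion of cross-modal coherence can be illustrated with a toy score: for each sentence, the fraction of modalities that agree with the majority emotion label. The function below is our simplification, not part of EmoCo.

```python
from collections import Counter

def emotion_coherence(face, text, audio):
    """Per-sentence coherence: the share of the three modalities
    (facial, text, audio) that agree with the majority emotion
    label for that sentence. 1.0 means full agreement."""
    scores = []
    for labels in zip(face, text, audio):
        # most_common(1) gives the majority label and its count.
        _, count = Counter(labels).most_common(1)[0]
        scores.append(count / len(labels))
    return scores
```

    Plotting these scores over the course of a talk gives a crude analogue of the system's channel coherence view.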

    Goal-based analytic composition for on- and off-line execution at scale

    Crafting scalable analytics in order to extract actionable business intelligence is a challenging endeavour, requiring multiple layers of expertise and experience. Often, this expertise is irreconcilably split between an organisation’s engineers and subject matter or domain experts. Previous approaches to this problem have relied on technically adept users with tool-specific training. These approaches have generally not targeted the levels of performance and scalability required to harness the sheer volume and velocity of large-scale data analytics. In this paper, we present a novel approach to the automated planning of scalable analytics using a semantically rich type system, the use of which requires little programming expertise from the user. This approach is the first of its kind to permit domain experts with little or no technical expertise to assemble complex and scalable analytics, for execution both on- and off-line, with no lower-level engineering support. We describe in detail (i) an abstract model of analytic assembly and execution; (ii) goal-based planning and (iii) code generation using this model for both on- and off-line analytics. Our implementation of this model, MENDELEEV, is used to (iv) demonstrate the applicability of our approach through a series of case studies, in which a single interface is used to create analytics that can be run in real-time (on-line) and batch (off-line) environments. We (v) analyse the performance of the planner, and (vi) show that the performance of MENDELEEV’s generated code is comparable with that of hand-written analytics.
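
    The core idea, assembling an analytic by chaining components whose input and output types match, can be sketched as a small backward-chaining planner. This is our illustrative toy, not MENDELEEV's planner, and the component names in the example are invented.

```python
def plan(goal_type, source_type, components, _seen=frozenset()):
    """Backward-chain from the requested output type to the raw
    source type through typed components, returning the ordered
    component names, or None if no chain exists.
    components: list of (name, input_type, output_type) triples."""
    if goal_type == source_type:
        return []
    if goal_type in _seen:  # guard against cycles in the type graph
        return None
    for name, in_type, out_type in components:
        if out_type == goal_type:
            prefix = plan(in_type, source_type, components,
                          _seen | {goal_type})
            if prefix is not None:
                return prefix + [name]
    return None
```

    For example, with hypothetical components `[("ingest", "raw", "records"), ("enrich", "records", "profiles"), ("score", "profiles", "alerts")]`, asking for `alerts` from `raw` yields the pipeline `["ingest", "enrich", "score"]`, which could then be handed to either an on-line or an off-line code generator.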