    Robust Modeling of Epistemic Mental States

    This work identifies and advances several research challenges in relating facial features and their temporal dynamics to epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features show a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their non-linear relation scores as input and predicts the different epistemic states in videos. The prediction of epistemic states is boosted when a classification of emotion-change regions (rising, falling, or steady-state) is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: correlation coefficients (CoERR) of 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.
    Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies
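The emotion-change region classification mentioned in the abstract (rising, falling, or steady-state) can be sketched in a few lines. This is a minimal illustration, not the paper's method: the intensity values and the threshold below are made up.

```python
# Sketch: labeling "emotion change regions" (rising / falling / steady)
# in a facial-feature intensity series via first differences.
# The series and the threshold are illustrative, not from the paper.

def change_regions(intensity, threshold=0.05):
    """Label each step of an intensity series as 'rising', 'falling',
    or 'steady' based on the magnitude of the first difference."""
    labels = []
    for prev, curr in zip(intensity, intensity[1:]):
        delta = curr - prev
        if delta > threshold:
            labels.append("rising")
        elif delta < -threshold:
            labels.append("falling")
        else:
            labels.append("steady")
    return labels

# A series that rises, holds, then falls
print(change_regions([0.1, 0.3, 0.32, 0.2]))
```

In practice such per-step labels would be derived from tracked facial-feature intensities and fed, alongside the temporal features, into the predictive model.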

    Machine Analysis of Facial Expressions

    No abstract

    Information Processing in Decisions under Risk: Evidence for Compensatory Strategies based on Automatic Processes

    Many everyday decisions have to be made under risk and can be interpreted as choices between gambles whose outcomes are realized with specific probabilities. The underlying cognitive processes were investigated by testing six sets of hypotheses concerning choices, decision times, and information search, derived from cumulative prospect theory, decision field theory, the priority heuristic, and parallel constraint satisfaction models. Our participants completed forty decision tasks, each a choice between two gambles with two non-negative outcomes. Information search was recorded using eye-tracking technology. Results for all dependent measures conflict with the predictions of the non-compensatory priority heuristic and indicate that individuals use compensatory strategies. Choice proportions are well predicted by cumulative prospect theory. Process measures, however, indicate that individuals do not rely on deliberate calculations of weighted sums. Information-integration processes seem to be better explained by models that partially rely on automatic processes, such as decision field theory or parallel constraint satisfaction models.
    Keywords: Risky Decisions, Cumulative Prospect Theory, Decision Field Theory, Priority Heuristic, Parallel Constraint Satisfaction, Eye Tracking, Intuition
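The cumulative prospect theory valuation the abstract refers to can be illustrated for the simplest case studied here, a gamble with two non-negative outcomes. The functional forms below are the standard Tversky–Kahneman (1992) ones with their published parameter estimates; the gamble itself is made up and nothing here is taken from this particular study.

```python
# Sketch: CPT value of a two-outcome, non-negative gamble using the
# Tversky-Kahneman (1992) value and probability-weighting functions.
# alpha and gamma are the published 1992 estimates; the gamble is made up.

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(x_high, p_high, x_low, alpha=0.88, gamma=0.61):
    """CPT value of the gamble (x_high with prob p_high, else x_low),
    assuming x_high >= x_low >= 0 (rank-dependent cumulative weights)."""
    v = lambda x: x**alpha           # power value function for gains
    w_high = weight(p_high, gamma)   # decision weight of the best outcome
    return w_high * v(x_high) + (1 - w_high) * v(x_low)

# A 50/50 gamble over 100 or 0: under-weighting of p=0.5 plus the
# concave value function pull the CPT value well below the expected value.
print(round(cpt_value(100, 0.5, 0), 2))
```

Comparing such values for two gambles predicts the choice proportions; the eye-tracking process measures are what show participants do not literally compute these weighted sums.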

    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between the Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction-time properties, during both correct and error trials at different levels of input ambiguity, in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random-dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
    Funding: National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
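The core dynamic the abstract describes, competitive accumulation of noisy motion evidence until a choice unit wins, can be sketched as a toy two-unit leaky competing accumulator. This is a generic illustration of the technique, not the paper's full Retina/LGN–V1–MT–MST–LIP model; all parameters are arbitrary.

```python
# Sketch: a two-unit recurrent competitive network accumulating noisy
# motion evidence until one unit crosses a decision threshold.
# A generic leaky-competing-accumulator toy; parameters are illustrative.
import random

def decide(coherence, leak=0.1, inhibition=0.2, threshold=1.0,
           noise=0.1, dt=0.01, seed=0):
    """Return (winning unit, reaction time) for one simulated trial.
    Evidence favors unit 'a' in proportion to motion coherence."""
    rng = random.Random(seed)
    a = b = 0.0                 # activities of the two choice units
    steps = 0
    while max(a, b) < threshold:
        steps += 1
        da = (0.5 + coherence) - leak * a - inhibition * b + rng.gauss(0, noise)
        db = (0.5 - coherence) - leak * b - inhibition * a + rng.gauss(0, noise)
        a = max(0.0, a + dt * da)   # rectified: firing rates stay non-negative
        b = max(0.0, b + dt * db)
    return ("a" if a > b else "b"), steps * dt

choice, rt = decide(coherence=0.3)
print(choice, rt)
```

Running such trials at several coherence levels reproduces the qualitative pattern in the data: higher coherence gives faster, more accurate decisions, with errors arising from the noise term on low-coherence trials.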

    Towards a theory of heuristic and optimal planning for sequential information search

    No full text

    Why study movement variability in autism?

    Autism has been defined as a disorder of social cognition, interaction and communication where ritualistic, repetitive behaviors are commonly observed. But how should we understand the behavioral and cognitive differences that have been the main focus of so much autism research? Can high-level cognitive processes and behaviors be identified as the core issues people with autism face, or do these characteristics perhaps often rather reflect individual attempts to cope with underlying physiological issues? Much research presented in this volume will point to the latter possibility, i.e., that people on the autism spectrum cope with issues at much lower physiological levels pertaining not only to Central Nervous System (CNS) function, but also to peripheral and autonomic systems (PNS, ANS) (Torres, Brincker, et al. 2013). The question we pursue in this chapter is what might be fruitful ways of gaining objective measures of the large-scale systemic and heterogeneous effects of early atypical neurodevelopment, how to track their evolution over time, and how to identify critical changes along the continuum of human development and aging. We suggest that the study of movement variability—very broadly conceived as including all minute fluctuations in bodily rhythms and their rates of change over time (coined micro-movements (Figure 1A-B) (Torres, Brincker, et al. 2013))—offers a uniquely valuable and entirely objectively quantifiable lens to better assess, understand and track not only autism but cognitive development and degeneration in general. This chapter presents the rationale, first, for this focus on micro-movements and, second, for the choice of specific kinds of data collection and statistical metrics as tools of analysis (Figure 1C).
    In brief, the proposal is that the micro-movements (defined in Part I – Chapter 1), obtained using various time scales applied to different physiological data-types (Figure 1), contain information about layered influences and temporal adaptations, transformations and integrations across anatomically semi-independent subsystems that crosstalk and interact. Further, the notion of sensorimotor re-afference is used to highlight the fact that these layered micro-motions are sensed and that this sensory feedback plays a crucial role in the generation and control of movements in the first place. In other words, the measurements of various motoric and rhythmic variations provide an access point not only to the "motor systems" but also to much broader central and peripheral sensorimotor and regulatory systems. Lastly, we posit that this new lens can also be used to capture influences from systems with multiple entry points or collaborative control and regulation, such as those that emerge during dyadic social interactions.
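The micro-movements idea of retaining minute fluctuations rather than averaging them away can be illustrated with a toy extraction of normalized velocity peaks. The normalization and the sample series below are illustrative only; the exact metric defined by Torres, Brincker, et al. (2013) may differ.

```python
# Sketch: turning a speed time series into unitless "spike" values by
# normalizing each local peak against overall activity, loosely in the
# spirit of the micro-movements approach. The normalization and sample
# data are illustrative, not the published metric.

def normalized_peaks(speed):
    """Return each local maximum of the series scaled into (0, 1)
    via peak / (peak + mean of the full series)."""
    mean = sum(speed) / len(speed)
    peaks = [s for prev, s, nxt in zip(speed, speed[1:], speed[2:])
             if s > prev and s > nxt]
    return [p / (p + mean) for p in peaks]

print(normalized_peaks([0.0, 0.4, 0.1, 0.9, 0.2, 0.5, 0.0]))
```

Because each peak is scaled by surrounding activity, the resulting values are unitless, which is what allows fluctuation statistics to be compared across data types, body parts, and individuals.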

    Comparison of LSTM, Transformers, and MLP-mixer neural networks for gaze based human intention prediction

    Collaborative robots have gained popularity in industry, providing flexibility and increased productivity for complex tasks. However, their ability to interact with humans and adapt to their behavior is still limited. Prediction of human movement intentions is one way to improve the robots' adaptation. This paper investigates the performance of Transformer- and MLP-Mixer-based neural networks in predicting the intended human arm-movement direction from gaze data obtained in a virtual reality environment, and compares the results to an LSTM network. The comparison evaluates the networks on accuracy across several metrics, time ahead of movement completion, and execution time. The paper shows that several network configurations and architectures achieve comparable accuracy scores. The best-performing Transformer encoder presented in this paper achieved an accuracy of 82.74%, for predictions with high certainty, on continuous data, and correctly classifies 80.06% of the movements at least once. In 99% of the cases, the movements are correctly predicted the first time, before the hand reaches the target, and in 75% of the cases more than 19% ahead of movement completion. The results show that there are multiple ways to utilize neural networks for gaze-based arm-movement intention prediction, a promising step toward enabling efficient human-robot collaboration.
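The evaluation style reported above, accuracy for high-certainty predictions, the fraction of movements classified correctly at least once, and lead time ahead of completion, can be sketched on made-up data. The data layout, field names, and confidence cutoff here are assumptions, not taken from the paper.

```python
# Sketch: early-prediction metrics of the kind the paper reports, on
# made-up data. Each movement is a per-frame stream of
# (predicted_target, confidence) pairs plus the true target; only
# high-certainty frames count. The layout and cutoff are assumptions.

def early_prediction_stats(movements, min_confidence=0.8):
    """Return (fraction of movements correctly predicted at least once,
    list of lead fractions: how much of each movement remained when the
    first confident correct prediction occurred)."""
    hits, lead_fractions = 0, []
    for frames, true_target in movements:
        n = len(frames)
        for i, (pred, conf) in enumerate(frames):
            if conf >= min_confidence and pred == true_target:
                hits += 1
                lead_fractions.append(1 - i / n)  # fraction of movement left
                break
    return hits / len(movements), lead_fractions

movements = [
    ([("B", 0.5), ("A", 0.9), ("A", 0.95)], "A"),  # confident hit at frame 1
    ([("C", 0.9), ("C", 0.9), ("C", 0.9)], "A"),   # never correct
]
rate, leads = early_prediction_stats(movements)
print(rate, leads)
```

Reporting lead time as a fraction of movement duration is what makes statements like "more than 19% ahead of completion" comparable across movements of different lengths.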

    Gaze Based Human Intention Analysis

    The ability to determine an upcoming action, or what decision a human is about to take, can be useful in multiple areas: for example, during human-robot collaboration in manufacturing, knowing the intent of the operator could provide the robot with important information to help it navigate more safely. Another field that could benefit from a system that provides information about human intentions is psychological testing, where such a system could serve as a platform for new research or as one way to provide information in the diagnostic process. The work presented in this thesis investigates the potential use of virtual reality as a safe, measurable, and customizable environment for collecting gaze and movement data; eye tracking as the non-invasive system input that gives insight into the human mind; and deep machine learning as one tool for analyzing the data. The thesis defines an experimental procedure that can be used to construct a virtual-reality-based testing system that gathers gaze and movement data, to carry out a test study gathering data from human participants, and to implement artificial neural networks to analyze human behaviour. This is followed by two studies that give evidence for the decisions made in the experimental procedure and show the potential uses of such a system.

    Datadriven Human Intention Analysis: Supported by Virtual Reality and Eye Tracking

    The ability to determine an upcoming action, or what decision a human is about to take, can be useful in multiple areas: for example, in manufacturing, where humans work with collaborative robots and knowing the intent of the operator could provide the robot with important information to help it navigate more safely. Another field that could benefit from a system that provides information about human intentions is psychological testing, where such a system could serve as a platform for new research or as one way to provide information in the diagnostic process. The work presented in this thesis investigates the potential use of virtual reality as a safe, customizable environment for collecting gaze and movement data; eye tracking as the non-invasive system input that gives insight into the human mind; and deep machine learning as the tool that analyzes the data. The thesis defines an experimental procedure that can be used to construct a virtual-reality-based testing system that gathers gaze and movement data, to carry out a test study gathering data from human participants, and to implement an artificial neural network to analyze human behaviour. This is followed by four studies that give evidence for the decisions made in the experimental procedure and show the potential uses of such a system.