3,840 research outputs found

    Behavior-Based Early Language Development on a Humanoid Robot

    We are exploring the idea that early language acquisition could be better modelled on an artificial creature by considering the pragmatic aspects of natural language and of its development in human infants. We have implemented a system of vocal behaviors on Kismet in which "words" or concepts are behaviors in a competitive hierarchy. This paper reports on the framework, the vocal system's architecture and algorithms, and some preliminary results from vocal label learning and concept formation.
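
    The abstract describes "words" as behaviors competing in a hierarchy. Below is a minimal, hypothetical sketch of such a competitive vocal-behavior hierarchy; the class names, activation rule, and threshold are illustrative assumptions, not the actual Kismet implementation.

```python
# Minimal sketch of a competitive hierarchy of vocal behaviors.
# All names, the activation rule and the threshold are illustrative
# assumptions, not the actual Kismet implementation.
from dataclasses import dataclass


@dataclass
class VocalBehavior:
    label: str               # the "word" or concept this behavior vocalizes
    activation: float = 0.0  # current activation level

    def update(self, relevance: float, decay: float = 0.9) -> None:
        # Activation decays over time and rises with perceptual relevance.
        self.activation = self.activation * decay + relevance


class CompetitiveHierarchy:
    def __init__(self, behaviors, threshold: float = 1.0):
        self.behaviors = behaviors
        self.threshold = threshold

    def step(self, relevances: dict) -> str | None:
        # Update every behavior, then let the most active one vocalize,
        # provided its activation clears the threshold.
        for b in self.behaviors:
            b.update(relevances.get(b.label, 0.0))
        winner = max(self.behaviors, key=lambda b: b.activation)
        return winner.label if winner.activation >= self.threshold else None


# Example: "ball" wins the competition once its relevance has accumulated.
hierarchy = CompetitiveHierarchy([VocalBehavior("ball"), VocalBehavior("block")])
for _ in range(3):
    spoken = hierarchy.step({"ball": 0.6, "block": 0.1})
print(spoken)  # -> ball
```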

    Automatic Recognition Systems and Human Computer Interaction in Face Matching

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
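
    To make the spatial manipulation concrete, here is a small hypothetical sketch of how the ±1 degree shift along imaginary spokes could be computed; the item count matches the abstract, but the base eccentricity is an assumption.

```python
# Hypothetical sketch of the radial-shift manipulation: each of the eight
# rectangles is displaced +/-1 degree of visual angle along the imaginary
# spoke joining it to central fixation. The 6-degree base eccentricity is
# an assumption; only the +/-1 degree shift comes from the abstract.
import math
import random


def shifted_positions(n_items: int = 8, eccentricity_deg: float = 6.0,
                      shift_deg: float = 1.0):
    positions = []
    for i in range(n_items):
        angle = 2 * math.pi * i / n_items                      # spoke direction
        r = eccentricity_deg + random.choice([-shift_deg, shift_deg])
        positions.append((r * math.cos(angle), r * math.sin(angle)))
    return positions


print(shifted_positions())  # eight (x, y) positions in degrees of visual angle
```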

    Can a Deep Neural Network Predict Political Orientation from Facial Images of Finnish Left- and Right-Wing Politicians?

    This master's thesis seeks to conceptually replicate psychologist Michal Kosinski's study, published in 2021 in Nature Scientific Reports, in which he trained a cross-validated logistic regression model to predict political orientation from facial images. Kosinski reported that his model achieved an accuracy of 72%, significantly higher than the 55% accuracy measured in humans on the same task. Kosinski's research attracted a huge amount of attention as well as accusations of pseudoscience. Whereas Kosinski trained his model on facial features containing information about, for example, head position and emotions, in this thesis I use a deep convolutional neural network for the same task. I also train my model on Finnish data, consisting of facial photographs of Finnish left- and right-wing candidates gathered from the 2021 municipal elections. I investigate whether a convolutional neural network can learn to predict from candidates' faces whether they belong to the right-wing Coalition Party or the left-wing Left Alliance with better than 55% accuracy, and what role color information may play in the model's classification accuracy. On this basis, I also consider the wider ethical issues surrounding these types of models and the technological advances they bring. There has been a recent ethical debate on the widespread use of facial recognition technology in relation to issues such as human autonomy, privacy, and civil liberties. In the context of previous scientific findings, there has also been debate about the potential ability of facial recognition technologies to reveal information about our most personal traits, such as sexual orientation, personality, and emotional states. Facial recognition technologies are thus also closely tied to privacy issues. In his original article, Kosinski did not downplay the many problematic ethical issues that the use of facial recognition technology can raise. He did, however, underline the role of science in trying to determine the function, capability, and accuracy of these technologies. Only through research can we gain insights into these technologies, which can then potentially be used to inform societal decision-making. That research approach is also the aim of this master's thesis.
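
    As a rough illustration of the kind of model described, here is a minimal, hypothetical CNN binary classifier with a flag for dropping color information; the architecture, input size, and output convention are assumptions and do not reproduce the thesis's actual model.

```python
# Minimal sketch of a binary CNN classifier for face images (Coalition Party
# vs. Left Alliance), with a flag to drop color information. Architecture,
# input size and output convention are assumptions, not the thesis's model.
import torch
import torch.nn as nn


class FaceCNN(nn.Module):
    def __init__(self, use_color: bool = True):
        super().__init__()
        in_channels = 3 if use_color else 1  # RGB vs. grayscale input
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)   # single logit for the binary label

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


# Example forward pass on a batch of four grayscale 128x128 face crops.
model = FaceCNN(use_color=False)
logits = model(torch.randn(4, 1, 128, 128))
probs = torch.sigmoid(logits)  # predicted probability of the positive class
```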

    Learning and Evaluating Human Preferences for Conversational Head Generation

    A reliable and comprehensive evaluation metric that aligns with manual preference assessments is crucial for developing conversational head video synthesis methods. Existing quantitative evaluations often fail to capture the full complexity of human preference, as they only consider limited evaluation dimensions. Qualitative evaluations and user studies offer a solution but are time-consuming and labor-intensive. This limitation hinders the advancement of conversational head generation algorithms and systems. In this paper, we propose a novel learning-based evaluation metric, named Preference Score (PS), that fits human preference from quantitative evaluations across different dimensions. PS can serve as a quantitative evaluation without the need for human annotation. Experimental results validate the superiority of Preference Score in aligning with human perception and also demonstrate robustness and generalizability to unseen data, making it a valuable tool for advancing conversational head generation. We expect this metric to facilitate new advances in conversational head generation.
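
    The abstract does not specify the form of the learned metric, so the following is only a hedged sketch of the general idea: fit a regressor from existing per-dimension scores to human preference ratings, then rank new generations by the predicted score. The regressor choice, the three example dimensions, and the toy data are assumptions.

```python
# Sketch of a learned preference metric in the spirit of the Preference Score:
# fit a regressor from per-dimension quantitative scores to human preference
# ratings collected once, then score new generations without annotation.
# The regressor, the example dimensions and the toy data are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy training data: rows are generated videos, columns are existing metrics
# (e.g. lip sync, identity similarity, visual quality); y is a human rating.
X_train = rng.random((200, 3))
y_train = X_train @ np.array([0.5, 0.3, 0.2]) + 0.05 * rng.standard_normal(200)

ps_model = GradientBoostingRegressor().fit(X_train, y_train)

# At evaluation time, competing systems are ranked by predicted preference.
new_metrics = np.array([[0.9, 0.7, 0.8], [0.4, 0.5, 0.6]])
print(ps_model.predict(new_metrics))  # higher value = preferred by the model
```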

    Attribution Biases and Trust Development in Physical Human-Machine Coordination: Blaming Yourself, Your Partner or an Unexpected Event

    Reading partners' actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as self-serving and correspondence biases, lead people to misinterpret their partners' actions and falsely assign blame after an unexpected event. These biases thus further influence people's trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically with humans. However, these improvements may interfere with people's ability to accurately calibrate trust in machines and their capabilities, which requires an understanding of the effect of attribution biases on human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and people's assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and report the appropriate level of information about external conditions.
    Masters Thesis, Human Systems Engineering, 201