
    Perception Driven Texture Generation

    This paper investigates the novel task of generating texture images from perceptual descriptions. Previous work on texture generation has focused on either synthesis from examples or generation from procedural models; generating textures from perceptual attributes has not yet been well studied. Meanwhile, perceptual attributes such as directionality, regularity, and roughness are important factors for human observers when describing a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, requiring only random noise and user-defined perceptual attributes as input. In this model, a pre-trained convolutional neural network is integrated into the adversarial framework and drives the generated textures to possess the given perceptual attributes. An important property of the proposed model is that changing one of the input perceptual features changes the corresponding appearance of the generated textures. We design several experiments to validate the effectiveness of the proposed method. The results show that it produces high-quality texture images with the desired perceptual properties.

    Comment: 7 pages, 4 figures, icme201
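
    A minimal sketch of the kind of model this abstract describes: a generator conditioned on noise plus perceptual attributes, trained with an adversarial loss and a regression loss from a frozen, pre-trained attribute predictor. All shapes, layer choices, and the loss weighting below are illustrative assumptions, not the paper's actual architecture.

        # Illustrative sketch only: network shapes and hyperparameters are assumptions.
        import torch
        import torch.nn as nn

        NOISE_DIM, ATTR_DIM = 100, 3  # e.g. directionality, regularity, roughness

        class Generator(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(NOISE_DIM + ATTR_DIM, 128 * 8 * 8), nn.ReLU(),
                    nn.Unflatten(1, (128, 8, 8)),
                    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
                )

            def forward(self, z, attrs):
                # Condition generation on both random noise and target attributes.
                return self.net(torch.cat([z, attrs], dim=1))

        def generator_loss(d_out_fake, predicted_attrs, target_attrs, lam=1.0):
            # Adversarial term (fool the discriminator) plus a perceptual term:
            # a frozen, pre-trained CNN predicts attributes of the generated
            # texture, and regression to the targets drives its appearance.
            adv = nn.functional.binary_cross_entropy_with_logits(
                d_out_fake, torch.ones_like(d_out_fake))
            perc = nn.functional.mse_loss(predicted_attrs, target_attrs)
            return adv + lam * perc

    In a setup like this, changing one entry of the attribute vector at inference time is what would shift the generated texture's appearance along that perceptual dimension.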

    Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots

    Social robots are now part of human society, destined for schools, hospitals, and homes, where they will perform a variety of tasks. To engage their human users, social robots must be equipped with the essential social skill of facial expression communication. Yet even state-of-the-art social robots are limited in this ability because they often rely on a restricted set of facial expressions derived from theory, with well-known limitations such as a lack of naturalistic dynamics. With no agreed methodology to objectively engineer a broader variance of more psychologically impactful facial expressions into social robots' repertoires, human-robot interactions remain restricted. Here, we address this generic challenge with new methodologies that can reverse-engineer dynamic facial expressions into a social robot head. Our data-driven, user-centered approach, which combines human perception with psychophysical methods, produced highly recognizable and human-like dynamic facial expressions of the six classic emotions that generally outperformed state-of-the-art social robot facial expressions. Our data demonstrate the feasibility of applying our method to social robotics and highlight the benefits of a data-driven approach that puts human users at the center of deriving facial expressions for social robots. We also discuss future work to reverse-engineer a wider range of socially relevant facial expressions, including conversational messages (e.g., interest, confusion) and personality traits (e.g., trustworthiness, attractiveness). Together, our results highlight the key role that psychology must continue to play in the design of social robots.
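
    The abstract does not spell out the psychophysical procedure. One standard data-driven technique for deriving expression models from human perception is reverse correlation; the sketch below assumes that setup: random facial action-unit patterns animate the robot face, observers categorize each one, and per-emotion models are estimated from the categorized stimuli. The trial count, the 42-AU space, and the observer stand-in are all hypothetical.

        # Hedged sketch of reverse correlation; the paper's exact pipeline
        # is not given in the abstract, and all parameters are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        N_TRIALS, N_AUS = 2400, 42  # trials per observer, facial action units
        EMOTIONS = ["happy", "surprise", "fear", "disgust", "anger", "sad"]

        # On each trial, a random subset of action units animates the robot face.
        stimuli = rng.random((N_TRIALS, N_AUS)) < 0.3  # which AUs are active

        def collect_response(stimulus):
            # Stand-in for a human observer categorizing the animated face.
            return rng.choice(EMOTIONS + ["don't know"])

        responses = np.array([collect_response(s) for s in stimuli])

        # The expression model for each emotion is the average of the random
        # AU patterns that observers perceived as that emotion.
        models = {e: stimuli[responses == e].mean(axis=0) for e in EMOTIONS}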

    Emotional Qualities of VR Space

    The emotional response a person has to a living space is predominantly affected by light, color, and texture as space-making elements. To verify whether this phenomenon could be replicated in a simulated environment, we conducted a user study in a six-sided projected immersive display that used equivalent design attributes of brightness, color, and texture, in order to assess the extent to which the emotional response in a simulated environment is affected by the same parameters that affect real environments. Since emotional response depends on context, we evaluated the emotional responses of two groups of users: inactive (passive) and active (performing a typical daily activity). The results of the perceptual study generated data from which design principles for a virtual living space are articulated. Such a space, as an alternative to expensive built dwellings, could potentially support the new, minimalist lifestyles of occupants, defined as neo-nomads, whose work experience lies in the digital domain, through the generation of emotional experiences of spaces. Data from the experiments confirmed the hypothesis that perceivable emotional aspects of real-world spaces can be successfully generated by simulating design attributes in virtual space. The subjective response to the virtual space was consistent with corresponding responses from real-world color and brightness emotional perception. Our data could serve the virtual reality (VR) community in its attempts to conceive further applications of virtual spaces for well-defined activities.

    Comment: 12 figures

    3D Face Synthesis Driven by Personality Impression

    Synthesizing 3D faces that give certain personality impressions is commonly needed in computer games, animations, and virtual-world applications for producing realistic virtual characters. In this paper, we propose a novel approach to synthesizing 3D faces based on personality impressions for creating virtual characters. Our approach consists of two major steps. In the first step, we train classifiers, using deep convolutional neural networks on a dataset of images with personality impression annotations, that are capable of predicting the personality impression of a face. In the second step, given a 3D face and a desired personality impression type as user inputs, our approach optimizes the facial details against the trained classifiers so as to synthesize a face that gives the desired personality impression. We demonstrate our approach by synthesizing 3D faces that give desired personality impressions on a variety of 3D face models. Perceptual studies show that the perceived personality impressions of the synthesized faces agree with the target personality impressions specified for synthesis. Please refer to the supplementary materials for all results.

    Comment: 8 pages, 6 figures
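
    A hedged sketch of the second step as described: gradient-based optimization of facial detail parameters against the trained classifier. The flat parameter vector, the differentiable render function, and the classifier interface are assumptions introduced for illustration, not the paper's actual components.

        # Illustrative sketch only: renderer and classifier interfaces are assumed.
        import torch

        def optimize_face(face_params, classifier, render, target_class,
                          steps=200, lr=0.05):
            # Adjust facial detail parameters so the rendered face scores
            # highly for `target_class` under the trained classifier.
            params = face_params.clone().requires_grad_(True)
            opt = torch.optim.Adam([params], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                image = render(params)      # differentiable renderer assumed
                logits = classifier(image)  # assumed shape (1, num_classes)
                loss = torch.nn.functional.cross_entropy(
                    logits, torch.tensor([target_class]))
                loss.backward()
                opt.step()
            return params.detach()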