
    What your Facebook Profile Picture Reveals about your Personality

    People spend considerable effort managing the impressions they give others. Social psychologists have shown that people manage these impressions differently depending upon their personality. Facebook and other social media provide a new forum for this fundamental process; hence, understanding people's behaviour on social media could provide interesting insights into their personality. In this paper we investigate automatic personality recognition from Facebook profile pictures. We analyze the effectiveness of four families of visual features and discuss some human-interpretable patterns that explain the personality traits of the individuals. For example, extroverted and agreeable individuals tend to have warm-colored pictures and to exhibit many faces in their portraits, mirroring their inclination to socialize, while neurotic individuals show a prevalence of pictures of indoor places. We then propose a classification approach to automatically recognize personality traits from these visual features. Finally, we compare the performance of our classification approach to that obtained by human raters and show that computer-based classifications are significantly more accurate than averaged human-based classifications for Extraversion and Neuroticism.
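
    As a rough illustration of the kind of pipeline this abstract describes (visual features in, trait labels out), the sketch below trains a simple classifier on hypothetical per-picture features. The feature names, data and model choice are placeholders, not the authors' actual feature families or method.

    ```python
    # Minimal sketch of trait classification from profile-picture features.
    # All features, labels and the model choice are illustrative placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([
        rng.uniform(0, 1, n),   # hypothetical colour-warmth score of the picture
        rng.poisson(1.5, n),    # hypothetical number of detected faces
        rng.uniform(0, 1, n),   # hypothetical probability the scene is indoors
    ])
    y = rng.integers(0, 2, n)   # placeholder high/low Extraversion labels

    clf = LogisticRegression()
    print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 on this random data
    ```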

    Personalised aesthetics with residual adapters

    The use of computational methods to evaluate aesthetics in photography has gained interest in recent years due to the popularization of convolutional neural networks and the availability of new annotated datasets. Most studies in this area have focused on designing models that do not take individual preferences into account when predicting the aesthetic value of pictures. We propose a model based on residual learning that is capable of learning subjective, user-specific preferences over aesthetics in photography, while surpassing state-of-the-art methods and keeping a limited number of user-specific parameters in the model. Our model can also be used for picture enhancement, and it is suitable for content-based or hybrid recommender systems in which the amount of computational resources is limited. Comment: 12 pages, 4 figures. In Iberian Conference on Pattern Recognition and Image Analysis proceedings.
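
    A hedged sketch of the residual-adapter idea described above: a shared (here frozen) aesthetics backbone plus a small per-user module whose output is added residually before a scoring head. The layer sizes, names and toy backbone are assumptions, not the authors' architecture.

    ```python
    # Sketch of a personalised aesthetic scorer with a per-user residual adapter.
    import torch
    import torch.nn as nn

    class UserAdapter(nn.Module):
        def __init__(self, dim: int, bottleneck: int = 16):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            return h + self.up(torch.relu(self.down(h)))  # residual correction

    class PersonalisedScorer(nn.Module):
        def __init__(self, backbone: nn.Module, feat_dim: int):
            super().__init__()
            self.backbone = backbone              # shared, kept frozen here
            self.adapter = UserAdapter(feat_dim)  # few user-specific parameters
            self.head = nn.Linear(feat_dim, 1)    # aesthetic score

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            with torch.no_grad():
                h = self.backbone(images)
            return self.head(self.adapter(h)).squeeze(-1)

    # Toy backbone producing 512-d features from 32x32 RGB inputs.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
    model = PersonalisedScorer(backbone, feat_dim=512)
    print(model(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4])
    ```

    Only the adapter and head would be trained per user, which is what keeps the number of user-specific parameters small.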

    Preference Modeling in Data-Driven Product Design: Application in Visual Aesthetics

    Creating a form that is attractive to the intended market audience is one of the greatest challenges in product development, given the subjective nature of preference and heterogeneous market segments with potentially different product preferences. Accordingly, product designers use a variety of qualitative and quantitative research tools to assess product preferences across market segments, such as design theme clinics, focus groups, customer surveys, and design reviews; however, these tools are still limited by their dependence on subjective judgment and by being time and resource intensive. In this dissertation, we focus on a key research question: how can we understand and predict more reliably the preference for a future product in heterogeneous markets, so that this understanding can inform designers' decision-making? We present a number of data-driven approaches to model product preference. Instead of depending on subjective judgment from humans, the proposed preference models investigate the mathematical patterns behind users' choices and behavior. This allows a more objective translation of customers' perception and preference into analytical relations that can inform design decision-making. Moreover, these models are scalable in that they have the capacity to analyze large-scale data and model customer heterogeneity accurately across market segments. In particular, we use feature representation as an intermediate step in our preference model, so that we can not only increase the predictive accuracy of the model but also capture in-depth insight into customers' preference. We tested our data-driven approaches with applications in visual aesthetics preference. Our results show that the proposed approaches can obtain an objective measurement of aesthetic perception and preference for a given market segment. This measurement enables designers to reliably evaluate and predict the aesthetic appeal of their designs. We also quantify the relative importance of aesthetic attributes when customers consider both aesthetic and functional attributes. This quantification has great utility in helping product designers and executives in design reviews and in the selection of designs. Moreover, we visualize the possible factors affecting customers' perception of product aesthetics and how these factors differ across market segments. These visualizations are important to designers because they relate physical design details to psychological customer reactions. The main contribution of this dissertation is to present purely data-driven approaches that enable designers to quantify and interpret product preference more reliably. Methodological contributions include using modern probabilistic approaches and feature learning algorithms to quantitatively model the design process involving product aesthetics. These novel approaches can not only increase predictive accuracy but also capture insights to inform design decision-making. (PhD dissertation, Design Science, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145987/1/yanxinp_1.pd)
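
    As a toy illustration of the general strategy the abstract describes, using a feature representation as an intermediate step before preference modelling, one might fit a low-dimensional representation of designs and then a simple choice model on top of it. The dimensions, data and models below are placeholders, not the dissertation's actual approaches.

    ```python
    # Sketch: learned (here PCA) features as an intermediate step before a preference model.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    designs = rng.normal(size=(300, 50))   # hypothetical raw design attributes
    chosen = rng.integers(0, 2, 300)       # 1 = customer chose this design (placeholder)

    features = PCA(n_components=8).fit_transform(designs)    # intermediate representation
    pref_model = LogisticRegression().fit(features, chosen)  # preference over features
    print(pref_model.predict_proba(features[:3]))            # predicted choice probabilities
    ```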

    Drawing as a versatile cognitive tool

    Drawing is a cognitive tool that makes the invisible contents of mental life visible. Humans use this tool to produce a remarkable variety of pictures, from realistic portraits to schematic diagrams. Despite this variety and the prevalence of drawn images, the psychological mechanisms that enable drawings to be so versatile have yet to be fully explored. In this Review, we synthesize contemporary work in multiple areas of psychology, computer science and neuroscience that examines the cognitive processes involved in drawing production and comprehension. This body of findings suggests that the balance of contributions from perception, memory and social inference during drawing production varies depending on the situation, resulting in some drawings that are more realistic and others that are more abstract. We also consider the use of drawings as a research tool for investigating various aspects of cognition, as well as the role that drawing plays in facilitating learning and communication. Taken together, information about how drawings are used in different contexts illuminates the central role of visually grounded abstractions in human thought and behaviour.

    Deep Learning of Individual Aesthetics

    Accurate evaluation of human aesthetic preferences represents a major challenge for creative evolutionary and generative systems research. Prior work has tended to focus on feature measures of the artefact, such as symmetry, complexity and coherence. However, research models from psychology suggest that human aesthetic experiences encapsulate factors beyond the artefact, making accurate computational models very difficult to design. The interactive genetic algorithm (IGA) circumvents the problem through human-in-the-loop, subjective evaluation of aesthetics, but is limited by user fatigue and small population sizes. In this paper we look at how recent advances in deep learning can assist in automating personal aesthetic judgement. Using a leading artist's computer art dataset, we investigate the relationship between image measures, such as complexity, and human aesthetic evaluation. We use dimension reduction methods to visualise both genotype and phenotype space in order to support the exploration of new territory in a generative system. Convolutional neural networks trained on the artist's prior aesthetic evaluations are used to suggest new possibilities similar to, or between, known high-quality genotype-phenotype mappings. We integrate this classification and discovery system into a software tool for evolving complex generative art and design.
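
    A rough sketch, under assumed details, of the core loop this abstract describes: a small convolutional network fitted to an artist's past keep/discard judgements and then used to score newly generated phenotypes. The network, image size and labels are placeholders, not the paper's trained model.

    ```python
    # Sketch: CNN learned from past aesthetic judgements, used to score new candidates.
    import torch
    import torch.nn as nn

    class AestheticCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)  # probability the artist would keep the image

        def forward(self, x):
            return torch.sigmoid(self.head(self.features(x).flatten(1))).squeeze(-1)

    model = AestheticCNN()
    images = torch.randn(8, 3, 64, 64)          # phenotypes rendered as images (placeholder)
    labels = torch.randint(0, 2, (8,)).float()  # past keep/discard labels (placeholder)
    loss = nn.functional.binary_cross_entropy(model(images), labels)
    loss.backward()                             # gradients for one training step
    print(model(images).detach())               # scores for candidate phenotypes
    ```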

    Aesthetic preference for art emerges from a weighted integration over hierarchically structured visual features in the brain

    It is an open question whether preferences for visual art can be lawfully predicted from the basic constituent elements of a visual image. Moreover, little is known about how such preferences are actually constructed in the brain. Here we developed and tested a computational framework to gain an understanding of how the human brain constructs aesthetic value. We show that it is possible to explain human preferences for a piece of art based on an analysis of features present in the image. This was achieved by analyzing the visual properties of drawings and photographs by multiple means, ranging from image statistics extracted by computer vision tools and subjective human ratings of attributes to a deep convolutional neural network. Crucially, it is possible to predict subjective value ratings not only within but also across individuals, speaking to the possibility that much of the variance in human visual preference is shared across individuals. Neuroimaging data revealed that preference computations occur in the brain by means of a graded hierarchical representation of lower- and higher-level features in the visual system. These features are in turn integrated to compute an overall subjective preference in the parietal and prefrontal cortex. Our findings suggest that, rather than being idiosyncratic, human preferences for art can be explained at least in part as the product of a systematic neural integration over the underlying visual features of an image. This work not only advances our understanding of the brain-wide computations underlying value construction but also brings new mechanistic insights to the study of visual aesthetics and art appreciation.
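
    To make the "weighted integration over visual features" claim concrete, here is a toy, entirely synthetic version: ratings are modelled as a linear combination of image features, with weights fitted on one group of raters and evaluated on a held-out individual. It is an illustration of the idea, not the paper's analysis.

    ```python
    # Sketch: linear weighted integration of image features predicts ratings across individuals.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)
    n_images, n_features = 120, 10
    features = rng.normal(size=(n_images, n_features))  # low- and high-level image features
    true_w = rng.normal(size=n_features)                # shared integration weights (synthetic)

    ratings_group = features @ true_w + rng.normal(0, 0.5, n_images)    # training raters (averaged)
    ratings_heldout = features @ true_w + rng.normal(0, 0.5, n_images)  # held-out individual

    model = Ridge(alpha=1.0).fit(features, ratings_group)
    r = np.corrcoef(model.predict(features), ratings_heldout)[0, 1]
    print(f"cross-individual prediction r = {r:.2f}")
    ```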

    Generative Art: Between the Nodes of Neuron Networks

    This article uses the exhibition “Infinite Skulls”, which took place in Paris at the beginning of 2019, as a starting point to discuss art created by artificial intelligence and, by extension, unique pieces of art generated by algorithms. We detail the development of DCGAN, the deep learning neural network used in the show, from its origins in cybernetics. The show and its creation process are described, identifying elements of creativity and technique, as well as questions about the authorship of the works. The article then frames these works in the context of generative art, pointing out affinities and differences, and the issues of representing through procedures and abstractions. It describes the major breakthrough of neural networks for technical images as the ability to represent categories through an abstraction, rather than through the images themselves. Finally, it tries to understand neural networks more as a tool for artists than as an autonomous art creator.

    Learning the Designer's Preferences to Drive Evolution

    This paper presents the Designer Preference Model, a data-driven solution that seeks to learn from user-generated data in a Quality-Diversity Mixed-Initiative Co-Creativity (QD MI-CC) tool, with the aim of modelling the user's design style to better assess the tool's procedurally generated content with respect to that user's preferences. Through this approach, we aim to increase the user's agency over the generated content in a way that neither stalls the user-tool reciprocal stimuli loop nor fatigues the user with periodic suggestion handpicking. We describe the details of this novel solution, as well as its implementation in the MI-CC tool the Evolutionary Dungeon Designer. We present and discuss our findings from the initial tests carried out, identifying the open challenges for this combined line of research that integrates MI-CC with Procedural Content Generation through Machine Learning. Comment: 16 pages. Accepted and to appear in the proceedings of the 23rd European Conference on the Applications of Evolutionary and bio-inspired Computation, EvoApplications 2020.
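
    As a hedged illustration of the general idea (not the Evolutionary Dungeon Designer's actual model), a designer preference model can be read as: fit a classifier on which past suggestions the designer kept, then rank fresh procedurally generated candidates by predicted preference. Every descriptor, label and model choice below is a placeholder.

    ```python
    # Sketch: learn designer preferences from kept/discarded suggestions, rank new candidates.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    past_rooms = rng.uniform(size=(150, 6))  # hypothetical room descriptors (e.g. enemy count)
    kept = rng.integers(0, 2, 150)           # 1 = the designer adopted the suggestion

    pref = RandomForestClassifier(n_estimators=50).fit(past_rooms, kept)

    new_rooms = rng.uniform(size=(20, 6))    # fresh procedurally generated candidates
    ranking = np.argsort(-pref.predict_proba(new_rooms)[:, 1])
    print(ranking[:5])                       # top-5 suggestions to surface to the designer
    ```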