1,323 research outputs found

    COMPUTATIONAL MODELLING OF HUMAN AESTHETIC PREFERENCES IN THE VISUAL DOMAIN: A BRAIN-INSPIRED APPROACH

    Following the rise of neuroaesthetics as a research domain, computational aesthetics has also seen a resurgence in popularity over the past decade, with many works using novel computer vision and machine learning techniques to evaluate the aesthetic value of visual information. This thesis presents a new approach in which low-level features inspired by the human visual system are extracted from images to train a machine-learning-based system to classify visual information according to its aesthetics, regardless of the type of visual media. Extensive tests are developed to highlight the strengths and weaknesses of such low-level features while establishing good practices in the study of computational aesthetics. The aesthetic classification system is tested not only on the most widely used dataset of photographs, AVA, on which it is initially trained, but also on other photographic datasets to evaluate the robustness of the learnt aesthetic preferences across other rating communities. The system is then assessed on aesthetic classification of other types of visual media to investigate whether the learnt aesthetic preferences represent photography rules or more general aesthetic rules. The skill transfer from aesthetic classification of photos to videos achieves a satisfying correct classification rate on videos without any prior training on the test set created by Tzelepis et al. Moreover, the initial photograph classifier can also be applied to feature films to investigate its learnt visual preferences, since films provide a large number of easily labellable frames. The study on aesthetic classification of videos concludes with a case study of the work of an online content creator: the classifier recognised a significantly greater percentage of aesthetically high frames in videos filmed in studios than in videos filmed on the go.
The results obtained across datasets containing videos of diverse natures show the extent of the system’s aesthetic knowledge. Finally, the evolution of low-level visual features is studied in popular culture, namely in paintings and brand logos. The work attempts to link aesthetic preferences during contemplation tasks, such as aesthetic rating of photographs, with preferred low-level visual features in art creation. It asks whether the use of favoured visual features varies over the life of a painter, implicitly showing a relationship with artistic expertise. Findings show significant changes in the use of universally preferred features over influential abstract painters’ careers, such as an increase in cardinal lines and the colour blue; these changes were not observed in landscape painters. Regarding brand logos, only a few features evolved in a significant manner, most of them colour-related. Despite the incredible amount of data available online, phenomena that develop over an entire life remain difficult to study. These computational experiments show that simple approaches focusing on the fundamentals, instead of high-level measures, make it possible to analyse artists’ visual preferences, as well as to extract a community’s visual preferences from photos or videos while limiting the impact of cultural and personal experiences.
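The kind of low-level, vision-inspired features the thesis describes can be illustrated with a minimal sketch. The specific features below (per-channel colour means, a "blueness" ratio, and a cardinal-line proxy from gradient energy) are assumptions chosen to echo the abstract's mention of cardinal lines and the colour blue, not the thesis's actual feature set:

```python
import numpy as np

def lowlevel_features(img):
    """Toy low-level features from an RGB image (H x W x 3, floats in [0, 1]).
    Illustrative stand-ins for human-visual-system-inspired features."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Colour statistics: mean per channel, plus the fraction of pixels
    # where blue dominates both other channels.
    means = img.mean(axis=(0, 1))
    blueness = float((b > np.maximum(r, g)).mean())
    # Cardinal-line proxy: share of gradient energy along the dominant
    # (horizontal or vertical) axis of the luminance image.
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    gy, gx = np.gradient(lum)
    total = np.abs(gx).sum() + np.abs(gy).sum() + 1e-9
    cardinality = float(max(np.abs(gx).sum(), np.abs(gy).sum()) / total)
    return np.array([*means, blueness, cardinality])

# Toy usage: a blue-tinted image with vertical white stripes.
img = np.zeros((64, 64, 3))
img[..., 2] = 0.8          # strong blue channel
img[:, ::8, :] = 1.0       # vertical stripes -> horizontal gradients
features = lowlevel_features(img)  # 5-dimensional feature vector
```

Such fixed-length vectors would then feed any off-the-shelf classifier (e.g. an SVM) trained on the AVA labels.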

    An Information Theory Approach to Aesthetic Assessment of Visual Patterns.

    The question of beauty has inspired philosophers and scientists for centuries. Today, the study of aesthetics is an active research topic in fields as diverse as computer science, neuroscience, and psychology. Measuring the aesthetic appeal of images is beneficial for many applications. In this paper, we study the aesthetic assessment of simple visual patterns. The proposed approach suggests that aesthetically appealing patterns are more likely to deliver a higher amount of information over multiple levels, in comparison with less aesthetically appealing patterns, when the same amount of energy is used. The approach is evaluated using two datasets; the results show that it classifies aesthetically appealing patterns more accurately than some related approaches that use different complexity measures.
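One hypothetical reading of "information over multiple levels" is Shannon entropy measured at several coarse-graining scales of a binary pattern; comparisons are then made between patterns of equal "energy" (fraction of on cells). This is a sketch of that idea, not the paper's actual formula:

```python
import numpy as np

def multiscale_entropy(pattern, scales=(1, 2, 4)):
    """Sum of Shannon entropies of a binary pattern's on/off distribution
    across several coarse-graining levels (a toy multi-level measure)."""
    pattern = np.asarray(pattern, dtype=float)
    total = 0.0
    for s in scales:
        h, w = pattern.shape[0] // s, pattern.shape[1] // s
        # Coarse-grain: average s x s blocks, then threshold at 0.5.
        coarse = pattern[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))
        p = (coarse >= 0.5).mean()  # fraction of "on" cells at this level
        for q in (p, 1.0 - p):
            if q > 0:
                total -= q * np.log2(q)
    return total

# A checkerboard carries information only at the finest scale:
# coarser levels average out to a uniform field.
checker = np.indices((8, 8)).sum(axis=0) % 2
h_checker = multiscale_entropy(checker)
h_solid = multiscale_entropy(np.ones((8, 8)))  # no information at any level
```

Under this measure, a pattern retaining structure across scales scores higher than one that collapses to uniformity after the first coarse-graining step.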

    Representations and representation learning for image aesthetics prediction and image enhancement

    With the continual improvement in cell phone cameras and in the connectivity of mobile devices, we have seen an exponential increase in the images that are captured, stored and shared on social media. For example, as of July 1st 2017 Instagram had over 715 million registered users, who had posted just shy of 35 billion images. This represented approximately seven- and nine-fold increases in the numbers of users and photos on Instagram since 2012. Whether the images are stored on personal computers or reside on social networks (e.g. Instagram, Flickr), the sheer number of images calls for methods to determine various image properties, such as object presence or appeal, for the purpose of automatic image management and curation. One of the central problems in consumer photography centers on determining the aesthetic appeal of an image, and motivates us to explore questions related to understanding aesthetic preferences, image enhancement, and the possibility of using such models on devices with constrained resources. In this dissertation, we present our work on exploring representations and representation learning approaches for aesthetic inference, composition ranking, and its application to image enhancement. Firstly, we discuss early representations that mainly consisted of expert features, and their potential to enhance Convolutional Neural Networks (CNNs). Secondly, we discuss the ability of resource-constrained CNNs, and the different architecture choices (input size and layer depth), in solving various aesthetic inference tasks: binary classification, regression, and image cropping. We show that if trained to solve fine-grained aesthetics inference, such models can rival the cropping performance of other aesthetics-based croppers; however, they fall short in comparison to models trained for composition ranking.
Lastly, we discuss our work on exploring and identifying the design choices in training composition ranking functions, with the goal of using them for image composition enhancement.
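A composition-ranking function can drive cropping by scoring candidate windows and keeping the best one. In this minimal sketch the learned ranker is replaced by whatever callable you pass in; the placeholder contrast scorer and all parameter names are assumptions for illustration, not the dissertation's models:

```python
import numpy as np

def best_crop(img, score_fn, crop_frac=0.8, stride=8):
    """Slide a fixed-size window over the image and keep the crop that
    `score_fn` ranks highest. `score_fn` stands in for a learned
    composition-ranking model."""
    h, w = img.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    best, best_score = None, -np.inf
    for y in range(0, h - ch + 1, stride):
        for x in range(0, w - cw + 1, stride):
            crop = img[y:y + ch, x:x + cw]
            s = score_fn(crop)
            if s > best_score:
                best, best_score = (y, x, ch, cw), s
    return best, best_score

# Placeholder scorer: prefer high-contrast crops (luminance std. dev.).
img = np.zeros((64, 64))
img[40:60, 40:60] = 1.0          # bright subject in the lower-right
box, score = best_crop(img, lambda c: c.std())
```

With the contrast placeholder, the winning window is the one that captures the whole bright region; swapping in a trained ranker changes only `score_fn`.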

    Media aesthetics based multimedia storytelling.

    Since the earliest times, humans have been interested in recording their life experiences, for future reference and for storytelling purposes. This task of recording experiences --i.e., both image and video capture-- has never before in history been as easy as it is today. This is creating a digital information overload that is becoming a great concern for people trying to preserve their life experiences. As high-resolution digital still and video cameras become increasingly pervasive, unprecedented amounts of multimedia are being downloaded to personal hard drives, and also uploaded to online social networks, on a daily basis. The work presented in this dissertation is a contribution in the area of multimedia organization, as well as automatic selection of media for storytelling purposes, which eases the human task of summarizing a collection of images or videos to be shared with other people. As opposed to some prior art in this area, we take an approach in which neither user-generated tags nor comments --which describe the photographs, either in their local or online repositories-- are taken into account, and no user interaction with the algorithms is expected. We take an image analysis approach where both the context images --e.g. images from the online social networks to which the image stories are going to be uploaded-- and the collection images --i.e., the collection of images or videos that needs to be summarized into a story-- are analyzed using image processing algorithms. This allows us to extract relevant metadata that can be used in the summarization process. Multimedia storytellers usually follow three main steps when preparing their stories: first they choose the main story characters, then the main events to describe, and finally, from these media sub-groups, they choose the media based on their relevance to the story as well as on their aesthetic value.
Therefore, one of the main contributions of our work has been the design of computational models --both regression based and classification based-- that correlate well with human perception of the aesthetic value of images and videos. These computational aesthetics models have been integrated into automatic selection algorithms for multimedia storytelling, which are another important contribution of our work. A human-centric approach has been used in all experiments where it was feasible, and also to assess the final summarization results; i.e., humans are always the final judges of our algorithms, either by inspecting the aesthetic quality of the media, or by inspecting the final story generated by our algorithms. We are aware that a perfect automatically generated story summary is very hard to obtain, given the many subjective factors that play a role in such a creative process; rather, the presented approach should be seen as a first step in the storytelling creative process which removes some of the ground work that would be tedious and time consuming for the user. Overall, the main contributions of this work can be summarized in three points: (1) new media aesthetics models for both images and videos that correlate with human perception, (2) new scalable multimedia collection structures that ease the process of media summarization, and finally, (3) new media selection algorithms that are optimized for multimedia storytelling purposes.
    Postprint (published version)
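The final selection step --picking media by relevance to the story and aesthetic value-- can be sketched as a simple ranked selection. The weighted-sum rule, the weights, and the toy item records below are assumptions for illustration, not the dissertation's actual algorithm:

```python
def select_story_media(items, k=3, w_rel=0.6, w_aes=0.4):
    """Pick the top-k media items by a weighted mix of story relevance
    and aesthetic score (both assumed pre-computed in [0, 1])."""
    ranked = sorted(
        items,
        key=lambda m: w_rel * m["relevance"] + w_aes * m["aesthetics"],
        reverse=True,
    )
    return ranked[:k]

# Hypothetical candidate photos with pre-computed scores.
photos = [
    {"id": "beach",  "relevance": 0.9, "aesthetics": 0.4},
    {"id": "sunset", "relevance": 0.4, "aesthetics": 0.9},
    {"id": "blurry", "relevance": 0.8, "aesthetics": 0.1},
    {"id": "group",  "relevance": 0.7, "aesthetics": 0.6},
]
story = select_story_media(photos, k=2)
```

In practice the aesthetics score would come from the learned regression or classification models described above, and the relevance score from the character/event sub-grouping step.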

    What makes a good picture?

    Research work carried out at Cranfield University, School of Engineering. Integrated master’s dissertation in Informatics and Computing Engineering, Faculty of Engineering, University of Porto, 201