
    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will outstrip annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we attempt to describe the state of the art in the technology. We sample the progress in text, sound, and image processing, as well as in machine learning.

    COMPUTATIONAL MODELLING OF HUMAN AESTHETIC PREFERENCES IN THE VISUAL DOMAIN: A BRAIN-INSPIRED APPROACH

    Following the rise of neuroaesthetics as a research domain, computational aesthetics has also seen a resurgence in popularity over the past decade, with many works using novel computer vision and machine learning techniques to evaluate the aesthetic value of visual information. This thesis presents a new approach in which low-level features inspired by the human visual system are extracted from images to train a machine learning-based system that classifies visual information by its aesthetics, regardless of the type of visual media. Extensive tests are developed to highlight the strengths and weaknesses of such low-level features while establishing good practices in the study of computational aesthetics. The aesthetic classification system is tested not only on the most widely used dataset of photographs, AVA, on which it is initially trained, but also on other photographic datasets, to evaluate the robustness of the learnt aesthetic preferences across other rating communities. The system is then assessed on aesthetic classification of other types of visual media, to investigate whether the learnt aesthetic preferences represent photography rules or more general aesthetic rules. The skill transfer from aesthetic classification of photos to videos achieves a satisfactory correct classification rate on videos without any prior training on the test set created by Tzelepis et al. Moreover, the initial photograph classifier can also be applied to feature films to investigate its learnt visual preferences, since films provide a large number of easily labelled frames. The study on aesthetic classification of videos concludes with a case study on the work of an online content creator: the classifier recognised a significantly greater percentage of aesthetically high frames in videos filmed in studios than in videos filmed on the go.
The results obtained across datasets containing videos of diverse natures demonstrate the extent of the system's aesthetic knowledge. To conclude, the evolution of low-level visual features is studied in popular culture, such as in paintings and brand logos. The work attempts to link aesthetic preferences during contemplation tasks, such as aesthetic rating of photographs, with preferred low-level visual features in art creation. It asks whether the use of favoured visual features varies over a painter's life, implicitly showing a relationship with artistic expertise. Findings show significant changes in the use of universally preferred features over influential abstract painters' careers, such as an increase in cardinal lines and the colour blue; changes that were not observed in landscape painters. Regarding brand logos, only a few features evolved in a significant manner, most of them colour-related. Despite the vast amount of data available online, phenomena that develop over an entire life remain difficult to study. These computational experiments show that simple approaches focusing on the fundamentals, rather than on high-level measures, make it possible to analyse artists' visual preferences, as well as to extract a community's visual preferences from photos or videos, while limiting the impact of cultural and personal experiences.
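As a rough sketch of the kind of pipeline described above, the snippet below extracts a few illustrative low-level features (mean colour per channel, global luminance contrast, edge density) from an image. The feature set and the function name are illustrative stand-ins, not the thesis's actual visual-system-inspired feature bank.

```python
import numpy as np

def low_level_features(image):
    """Extract simple low-level features from an RGB image
    (H x W x 3, values in [0, 1]): mean colour per channel,
    global luminance contrast, and a rough edge-density estimate
    from luminance gradients."""
    mean_rgb = image.mean(axis=(0, 1))          # 3 colour means
    luminance = image.mean(axis=2)
    contrast = luminance.std()                  # global contrast
    gy, gx = np.gradient(luminance)
    edge_density = np.hypot(gx, gy).mean()      # mean gradient magnitude
    return np.concatenate([mean_rgb, [contrast, edge_density]])

# A flat grey image has zero contrast and no edges.
flat = np.full((32, 32, 3), 0.5)
features = low_level_features(flat)
```

Feature vectors like this one would then be fed to an ordinary supervised classifier (e.g. an SVM or a small neural network) trained on aesthetic labels.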

    A view on the iconic turn from a semiotic perspective

    Media are not only a means of communication. From a cognitive perspective, they may be viewed as components of an external, auxiliary memory system (Schönpflug 1997), and contemporary cognitive science “construes cognition as a complex system in which cognitive processes are ‘embodied, situated’ in environments, and ‘distributed’ across people and artifacts” (Nersessian 2007: 2). In man-machine communication, in man-man communication via digital machinery, and especially in the World Wide Web (Heintz 2006, Steels 2006), the “external” components of this system have taken on more and more of the characteristics of our individual, “internal”, living and active memory, with its richness of sensual and symbolic formats. The intellectual challenge in the drafts of the “masterminds” of hypertext (Eisenstein) and multimedia (Lintsbakh) was the detection of temporal/spatial, mathematical and linguistic correspondences between such different sensual and symbolic representations (Bulgakova 2007, Tsivian 2007). The so-called “iconic” or “pictorial turn” was pulled along by the digital turn, and it may in turn have stimulated and accelerated the digital turn.

    Component-based Attention for Large-scale Trademark Retrieval

    The demand for large-scale trademark retrieval (TR) systems has significantly increased to combat the rise in international trademark infringement. Unfortunately, the ranking accuracy of current approaches using either hand-crafted or pre-trained deep convolutional neural network (DCNN) features is inadequate for large-scale deployments. We show in this paper that the ranking accuracy of TR systems can be significantly improved by incorporating hard and soft attention mechanisms, which direct attention to critical information such as figurative elements and reduce the attention given to distracting and uninformative elements such as text and background. Our proposed approach achieves state-of-the-art results on a challenging large-scale trademark dataset.
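A minimal sketch of the soft-attention idea the abstract describes: a learned projection scores each spatial location of a DCNN feature map, a softmax turns the scores into an attention mask, and the mask re-weights the features before pooling, so informative regions dominate the final descriptor. The names and toy dimensions below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention_pool(feature_map, w):
    """Soft attention over a C x H x W feature map. The learned
    vector w (length C) scores each spatial location; the softmaxed
    scores form an attention mask that re-weights features before
    global pooling, yielding an attended C-dim descriptor."""
    C, H, W = feature_map.shape
    flat = feature_map.reshape(C, H * W)   # C x (H*W) locations
    scores = w @ flat                      # one score per location
    mask = softmax(scores)                 # attention weights, sum to 1
    return flat @ mask                     # attended C-dim descriptor

rng = np.random.default_rng(0)
fmap = rng.random((8, 4, 4))               # toy feature map
descriptor = soft_attention_pool(fmap, rng.random(8))
```

A hard-attention variant would instead zero out low-scoring locations entirely (e.g. a binary mask over text/background regions) before pooling.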

    Some Issues in the Art Image Database Systems

    In this paper we illustrate several aspects of art databases, such as: the spread of multimedia art images; the main characteristics of art images; the main art image search models; unique characteristics of art image retrieval; and the importance of the sensory and semantic gaps. In addition, we present several interesting features of an art image database, such as image indexing, feature extraction, analysis at various levels of precision, and style classification. We emphasise color features and their basis, painting analysis, and painting styles. We also study which MPEG-7 descriptors are best for the retrieval of fine painting images. An experimental system is developed to see how these descriptors work on 900 art images from several remarkable art periods. On the basis of our experiments, some suggestions for improving the search and analysis of fine art images are given.
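As a rough illustration of descriptor-based retrieval, the sketch below uses a joint RGB histogram compared by histogram intersection, a simple stand-in for the MPEG-7 colour descriptors (such as Scalable Color) evaluated in the paper; the function names are illustrative.

```python
import numpy as np

def colour_histogram(image, bins=4):
    """Quantise an RGB image (H x W x 3, values in [0, 1]) into a
    joint bins**3 colour histogram, normalised to sum to 1."""
    idx = np.clip((image * bins).astype(int), 0, bins - 1)
    codes = idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def retrieve(query, gallery):
    """Rank gallery images by histogram-intersection similarity
    to the query, most similar first."""
    q = colour_histogram(query)
    sims = [np.minimum(q, colour_histogram(g)).sum() for g in gallery]
    return np.argsort(sims)[::-1]

# Toy example: a red query ranks the red gallery image first.
red = np.zeros((8, 8, 3)); red[..., 0] = 0.9
blue = np.zeros((8, 8, 3)); blue[..., 2] = 0.9
ranking = retrieve(red, [blue, red.copy()])
```

Real MPEG-7 descriptors additionally quantise in perceptual colour spaces (HSV, HMMD) and compress the histogram, but the retrieval loop has the same shape.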

    Multi-Label Logo Classification using Convolutional Neural Networks

    The classification of logos is a particular case within computer vision, since logos have their own characteristics. They can contain only text, iconic images, or a combination of both, and they usually include figurative symbols designed by experts that vary substantially even when they share the same semantics. This work presents a method for multi-label classification and retrieval of logo images. For this, Convolutional Neural Networks (CNN) are trained to classify logos from the European Union TradeMark (EUTM) dataset according to their colors, shapes, sectors and figurative designs. An auto-encoder is also trained to learn representations of the input images. Once trained, the neural codes from the last convolutional layers of the CNN and the central layer of the auto-encoder can be used to perform similarity search through kNN, allowing us to obtain the most similar logos based on their color, shape, sector, figurative elements, overall features, or a weighted combination of them provided by the user. To the best of our knowledge, this is the first multi-label classification method for logos, and the only one that allows retrieving a ranking of images according to criteria provided by the user. This work is supported by the Spanish Ministry HISPAMUS project with code TIN2017-86576-R, partially funded by the EU.
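The user-weighted kNN retrieval step can be sketched as follows, assuming each facet (colour, shape, etc.) has already been encoded into a neural-code vector; the function name, facet names, and cosine similarity are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def weighted_knn(query_codes, gallery_codes, weights, k=3):
    """Rank gallery logos by a user-weighted combination of per-facet
    similarities. query_codes: dict facet -> 1-D neural code;
    gallery_codes: dict facet -> (N, d) array of gallery codes;
    weights: dict facet -> float. Facets are compared by cosine
    similarity; returns the indices of the k most similar logos."""
    def cosine(q, G):
        qn = q / np.linalg.norm(q)
        Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
        return Gn @ qn                            # (N,) similarities
    total = sum(w * cosine(query_codes[f], gallery_codes[f])
                for f, w in weights.items())
    return np.argsort(total)[::-1][:k]

# Toy example: retrieve by colour codes only.
query = {"colour": np.array([1.0, 0.0])}
gallery = {"colour": np.array([[0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])}
top = weighted_knn(query, gallery, {"colour": 1.0}, k=2)
```

Adding a second facet (e.g. `"shape"`) with its own weight blends the two rankings, which is how the user-provided weighted combination described above would work.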