06221 Abstracts Collection -- Computational Aesthetics in Graphics, Visualization and Imaging
From 28.05.06 to 02.06.06, the Dagstuhl Seminar 06221 ``Computational Aesthetics in Graphics, Visualization and Imaging'' was held
in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided where available.
Visual saliency guided high dynamic range image compression
Recent years have seen the emergence of visual saliency-based image and video compression for low dynamic range (LDR) visual content. High dynamic range (HDR) imaging has yet to adopt such an approach to compression, as state-of-the-art visual saliency detection models are mainly concerned with LDR content. Although a few HDR saliency detection models have been proposed in recent years, they lack comprehensive validation. Current HDR image compression schemes do not differentiate between salient and non-salient regions, an omission that has been shown to be redundant with respect to the human visual system. In this paper, we propose a novel visual saliency-guided layered compression scheme for HDR images. The proposed saliency detection model is robust and correlates highly with the ground-truth saliency maps obtained from an eye tracker. The results show a reduction in bit rate of up to 50% while retaining the same high visual quality in the salient regions, as measured by the HDR Visual Difference Predictor (HDR-VDP) and the visual saliency-induced index for perceptual image quality assessment (VSI).
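The core idea of the abstract above, allocating more bits to salient regions and fewer elsewhere, can be illustrated with a toy sketch. This is not the paper's actual scheme; the function name, the linear step interpolation, and the parameter values are all illustrative assumptions.

```python
import numpy as np

def saliency_weighted_quantize(image, saliency, base_step=0.1, max_step=0.5):
    """Quantize luminance with finer steps where saliency is high.

    image:    float array of luminance values.
    saliency: float array in [0, 1], same shape as image.
    Salient pixels (saliency near 1) get the fine base_step;
    non-salient pixels get coarser steps, up to max_step.
    (Illustrative only; not the compression scheme from the paper.)
    """
    step = max_step - (max_step - base_step) * saliency
    return np.round(image / step) * step

# A gradient whose left half is marked salient is preserved
# more faithfully there than in the coarsely quantized right half.
img = np.linspace(0.0, 1.0, 100)
sal = np.zeros(100)
sal[:50] = 1.0  # left half is "salient"
out = saliency_weighted_quantize(img, sal)
err_salient = np.abs(out[:50] - img[:50]).mean()
err_nonsalient = np.abs(out[50:] - img[50:]).mean()
```

In a real codec the saliency map would steer the quantization parameters of an encoder rather than quantize pixels directly, but the bit-allocation intuition is the same.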
Intelligent visual media processing: when graphics meets vision
The computer graphics and computer vision communities have been working closely together in recent
years, and a variety of algorithms and applications have been developed to analyze and manipulate the visual media
around us. There are three major driving forces behind this phenomenon: i) the availability of big data from the
Internet has created a demand for dealing with the ever increasing, vast amount of resources; ii) powerful processing
tools, such as deep neural networks, provide effective ways for learning how to deal with heterogeneous visual data;
iii) new data capture devices, such as the Kinect, bridge between algorithms for 2D image understanding and
3D model analysis. These driving forces have emerged only recently, and we believe that the computer graphics
and computer vision communities are still in the beginning of their honeymoon phase. In this work we survey
recent research on how computer vision techniques benefit computer graphics techniques and vice versa, and cover
research on analysis, manipulation, synthesis, and interaction. We also discuss existing problems and suggest
possible further research directions.
ClearPhoto - augmented photography
The widespread use of mobile devices has brought to the general public capabilities
that were previously confined to specialized devices. In particular, the smartphone
gives all users the ability to perform multiple tasks, among them taking photographs with the integrated cameras.
Although these devices continuously receive improved cameras, their manufacturers do not take advantage of their full potential, since the operating systems normally offer only simple APIs and applications for shooting. This mobile environment is therefore an ideal
scenario for developing applications that help the user obtain good results when shooting.
To provide techniques and tools better suited to this task, this
dissertation contributes a set of tools for mobile devices that provide
real-time information on the composition of the scene before an image is captured.
The proposed solution thus supports the user while capturing a scene with a
mobile device. The user receives multiple suggestions on the composition of the scene, based on rules of photography and other tools useful to photographers. The tools include horizon detection and graphical visualization of the color palette present in the scene being photographed. These tools were evaluated with regard to both their implementation on mobile devices and how users assess their usefulness.
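One of the tools mentioned above, visualizing the color palette of the scene, can be approximated by coarsely quantizing pixel colors and reporting the most frequent bins. This is a minimal sketch, not the dissertation's implementation; the function name, the bin count, and the bin-center convention are illustrative assumptions.

```python
import numpy as np

def dominant_colors(pixels, bins=4, top=3):
    """Approximate a scene's color palette.

    Coarsely quantizes RGB values into bins^3 buckets and returns the
    centers of the `top` most populated buckets as representative colors.
    pixels: (N, 3) uint8 array. (Toy sketch; a real tool might use
    k-means clustering in a perceptual color space instead.)
    """
    step = 256 // bins
    quantized = (pixels // step).astype(np.int64)
    # Flatten the (r, g, b) bin triple into a single bucket index.
    keys = quantized[:, 0] * bins * bins + quantized[:, 1] * bins + quantized[:, 2]
    counts = np.bincount(keys, minlength=bins ** 3)
    colors = []
    for k in np.argsort(counts)[::-1][:top]:
        if counts[k] == 0:
            break  # fewer than `top` distinct buckets were occupied
        r, g, b = k // (bins * bins), (k // bins) % bins, k % bins
        # Report the center of each bucket as the representative color.
        colors.append((r * step + step // 2,
                       g * step + step // 2,
                       b * step + step // 2))
    return colors

# A mostly red scene with a little blue yields red, then blue.
pixels = np.vstack([np.tile([255, 0, 0], (100, 1)),
                    np.tile([0, 0, 255], (10, 1))]).astype(np.uint8)
palette = dominant_colors(pixels)
```

On a phone this would run on a downsampled camera preview frame each refresh, so the coarse histogram approach keeps the per-frame cost trivially low.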
What’s in a Photograph? The Perspectives of Composition Experts on Factors Impacting Visual Scene Display Complexity for Augmentative and Alternative Communication and Strategies for Improving Visual Communication
Purpose: Visual scene displays (VSDs) can support augmentative and alternative communication (AAC) success for children and adults with complex communication needs. Static VSDs incorporate contextual photographs that include meaningful events, places, and people. Although the processing of VSDs has been studied, their power as a medium to effectively convey meaning may benefit from the perspective of individuals who regularly engage in visual storytelling. The aim of this study was to evaluate the perspectives of individuals with expertise in photographic and/or artistic composition regarding factors contributing to VSD complexity and how to limit the time and effort required to apply principles of photographic composition.
Method: Semistructured interviews were completed with 13 participants with expertise in photographic and/or artistic composition.
Results: Four main themes were noted, including (a) factors increasing photographic image complexity and decreasing cohesion, (b) how complexity impacts the viewer, (c) composition strategies to decrease photographic image complexity and increase cohesion, and (d) strategies to support the quick application of composition strategies in a just-in-time setting. Findings both support and extend existing research regarding best practice for VSD design.
Conclusions: Findings provide an initial framework for understanding photographic image complexity and how it differs from drawn AAC symbols. Furthermore, findings outline a toolbox of composition principles that may help limit VSD complexity, along with providing recommendations for AAC development to support the quick application of compositional principles to limit burdens associated with capturing photographic images.
Includes Supplemental Materials (2).