Recent advances in the user evaluation methods and studies of non-photorealistic visualisation and rendering techniques
Evaluation of Non-photorealistic 3D Urban Models for Mobile Device Navigation
Painterly rendering techniques: A state-of-the-art review of current approaches
In this publication we survey methods presented over the past few decades that attempt to recreate paintings digitally. While previous surveys cover the broader subject of non-photorealistic rendering, the focus of this paper is placed firmly on painterly rendering techniques. We compare methods used to produce different output painting styles, such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require only simple parameters from a user with little or no artistic experience. Many methods attempt to provide greater automation through the use of varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes and even 3D point clouds. The techniques presented here endeavour to provide tools and styles not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.
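A minimal sketch of the photograph-driven, stroke-based style of painterly rendering the survey covers may make the idea concrete: sample colours from a reference photo and lay down short strokes aligned with local edges. All function names and parameters here are illustrative assumptions, not an algorithm from the survey itself.

```python
# Stroke-based painterly rendering sketch (illustrative, not the survey's method).
import numpy as np
from PIL import Image, ImageDraw

def painterly(src_path, n_strokes=20000, stroke_len=8, stroke_width=3):
    src = Image.open(src_path).convert("RGB")
    w, h = src.size
    pix = np.asarray(src, dtype=np.float32)

    # Estimate local gradient direction so strokes can follow image edges.
    gray = pix.mean(axis=2)
    gy, gx = np.gradient(gray)

    canvas = Image.new("RGB", (w, h), "white")
    draw = ImageDraw.Draw(canvas)
    rng = np.random.default_rng(0)

    for _ in range(n_strokes):
        x = int(rng.integers(0, w))
        y = int(rng.integers(0, h))
        # Stroke runs perpendicular to the gradient, i.e. along the edge.
        angle = np.arctan2(gy[y, x], gx[y, x]) + np.pi / 2
        dx = np.cos(angle) * stroke_len / 2
        dy = np.sin(angle) * stroke_len / 2
        color = tuple(int(c) for c in pix[y, x])
        draw.line([(x - dx, y - dy), (x + dx, y + dy)],
                  fill=color, width=stroke_width)
    return canvas

# painterly("photo.jpg").save("painted.png")
```

More interactive methods in the survey expose stroke size, direction fields and layering to a skilled artist; a sketch like this sits at the fully automatic end of that spectrum.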
Towards Rapid Generation and Visualisation of Large 3D Urban Landscapes for Mobile Device Navigation
This paper presents a procedural 3D modelling solution for mobile devices, based on scripting algorithms that allow both the automatic and the semi-automatic creation of photorealistic-quality virtual urban content. The combination of aerial images, GIS data, 2D ground maps and terrestrial photographs as input data, coupled with a user-friendly customised interface, permits the automatic and interactive generation of large-scale, accurate, georeferenced and fully textured 3D virtual city content that can be optimised both for mobile devices and for navigational tasks. Furthermore, a user-centred mobile virtual reality (VR) visualisation and interaction tool for pedestrian navigation, operating on PDAs (Personal Digital Assistants), is also discussed. This engine supports the import and display of various navigational file formats (2D and 3D) and includes a comprehensive, user-friendly graphical front-end providing immersive virtual 3D navigation.
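A core step in procedural city generation of this kind is extruding 2D ground-plan polygons into building prisms using a height attribute from the GIS data. The sketch below shows that step under stated assumptions; the function name, data layout and the flat-roof simplification are illustrative, not the paper's actual scripting API.

```python
# Footprint-extrusion sketch (illustrative assumptions, not the paper's API).
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def extrude_footprint(footprint: List[Tuple[float, float]],
                      height: float) -> Tuple[List[Vec3], List[Tuple[int, ...]]]:
    """Turn a closed 2D footprint (counter-clockwise) into a simple prism mesh."""
    n = len(footprint)
    # Bottom ring of vertices, then top ring.
    verts: List[Vec3] = [(x, y, 0.0) for x, y in footprint]
    verts += [(x, y, height) for x, y in footprint]

    faces: List[Tuple[int, ...]] = []
    for i in range(n):
        j = (i + 1) % n
        # One quad wall per footprint edge: bottom i, bottom j, top j, top i.
        faces.append((i, j, n + j, n + i))
    # Flat roof as a single n-gon (assumes a convex or renderer-triangulated cap).
    faces.append(tuple(range(n, 2 * n)))
    return verts, faces

# Example: a 10 m x 6 m rectangular footprint extruded to 15 m.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 15.0)
```

In a full pipeline, the terrestrial photographs would then be mapped onto the wall quads as textures, and the mesh simplified for the mobile renderer.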
Play and Learn: Using Video Games to Train Computer Vision Models
Video games are a compelling source of annotated data, as they can readily provide fine-grained ground truth for diverse tasks. However, it is not clear whether synthetically generated data resembles real-world images closely enough to improve the performance of computer vision models in practice. We present experiments assessing the effectiveness on real-world data of systems trained on synthetic RGB images extracted from a video game. We collected over 60,000 synthetic samples from a modern video game under conditions similar to the real-world CamVid and Cityscapes datasets. We provide several experiments demonstrating that the synthetically generated RGB images can be used to improve the performance of deep neural networks on both image segmentation and depth estimation. These results show that a convolutional network trained on synthetic data achieves a test error similar to that of a network trained on real-world data for dense image classification. Furthermore, the synthetically generated RGB images can provide similar or better results than the real-world datasets if a simple domain adaptation technique is applied. Our results suggest that collaboration with game developers for an accessible interface to gather data is potentially a fruitful direction for future work in computer vision.

To appear in the British Machine Vision Conference (BMVC), September 2016.
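The simplest form of synthetic-to-real adaptation that an abstract like this alludes to is pretraining on abundant synthetic frames and then fine-tuning on a small labeled real set. The sketch below shows that two-stage recipe in PyTorch; the two-stage split, the dataset loaders and the hyperparameters are assumptions for illustration, and the paper's exact adaptation step may differ.

```python
# Pretrain-on-synthetic, fine-tune-on-real sketch (assumed recipe, not the paper's).
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss(ignore_index=255)  # 255 = unlabeled pixels
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)  # per-pixel class scores
            loss.backward()
            opt.step()

# Stage 1: learn general scene structure from the abundant synthetic (game) frames.
# train(model, synthetic_loader, epochs=30, lr=1e-2)
# Stage 2: adapt to the real domain with a small labeled real set (e.g. CamVid),
# using a lower learning rate so the synthetic pretraining is refined, not erased.
# train(model, real_loader, epochs=10, lr=1e-3)
```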
A Framework for the Development of Online, Location-Specific, Expressive 3D Social Worlds
Fidelity metrics for virtual environment simulations based on spatial memory awareness states
This paper describes a methodology, based on human judgments of memory awareness states, for assessing the simulation fidelity of a virtual environment (VE) in relation to its real-scene counterpart. To demonstrate the distinction between task-performance-based approaches and additional human evaluation of cognitive awareness states, a photorealistic VE was created. The resulting scenes, displayed on a head-mounted display (HMD) with or without head tracking and on a desktop monitor, were then compared to the real-world task situation they represented, investigating spatial memory after exposure. Participants described how they completed their spatial recollections by selecting one of four awareness states after retrieval, in an initial test and in a retention test a week after exposure to the environment. These states reflected the level of visual mental imagery involved during retrieval and the familiarity of the recollection, and also included guesses, even if informed. Experimental results revealed variations in the distribution of participants' awareness states across conditions, while in certain cases task performance failed to reveal any. Experimental conditions that incorporated head tracking were not associated with visually induced recollections. In general, simulating task performance does not necessarily simulate the awareness states involved in completing a memory task. The general premise of this research focuses on how tasks are achieved, rather than only on what is achieved. The extent to which judgments of human memory recall, memory awareness states, and presence in the physical environment and the VE are similar provides a fidelity metric for the simulation in question.
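The kind of analysis this methodology implies can be sketched as a chi-square test of independence between display condition and the four awareness-state responses. The counts below are hypothetical placeholders, not the study's data, and the condition and state labels are assumptions drawn from the abstract.

```python
# Chi-square comparison of awareness-state distributions (hypothetical counts).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: conditions (real scene, HMD + tracking, HMD static, desktop monitor).
# Columns: awareness states (visual imagery, familiarity, informed guess, guess).
counts = np.array([
    [14, 6, 3, 1],
    [ 7, 9, 5, 3],
    [10, 8, 4, 2],
    [ 6, 7, 8, 3],
])

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# A small p-value would indicate that awareness-state distributions differ
# across conditions even where raw task performance shows no difference,
# which is exactly the extra signal the fidelity metric is after.
```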