    Intelligent Generation of Graphical Game Assets: A Conceptual Framework and Systematic Review of the State of the Art

    Procedural content generation (PCG) can be applied to a wide variety of tasks in games, from narratives, levels and sounds, to trees and weapons. A large amount of game content consists of graphical assets, such as clouds, buildings or vegetation, that do not require gameplay function considerations. There is also a breadth of literature examining the procedural generation of such elements for purposes outside of games. Because this body of research focuses on specific methods for generating specific assets, it offers only a narrow view of the available possibilities: there is no clear overview of existing approaches, no guide to help interested parties discover methods suited to their needs, and no resource that walks them through applying a chosen technique. Therefore, a systematic literature review has been conducted, yielding 200 accepted papers. This paper explores state-of-the-art approaches to graphical asset generation, examining research from a wide range of applications, inside and outside of games. Informed by the literature, a conceptual framework has been derived to address the aforementioned gaps.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to estimate user emotion with a software-only method.
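
    To make the fuzzy-logic idea concrete, the sketch below shows a minimal rule evaluation of the kind FLAME's emotional component builds on: crisp in-game observations are fuzzified with triangular membership functions, a small rule base is evaluated with min/max operators, and the result is a single emotion intensity. The input variables, membership functions, and rules here are illustrative assumptions, not the configuration used in the paper.

        # Minimal fuzzy estimate of a single emotion intensity from in-game events.
        # The variables, membership functions, and rules are illustrative assumptions,
        # not the FLAME configuration used in the paper.

        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def estimate_fear(damage_taken, health):
            """Map two observations in [0, 1] to a fear intensity in [0, 1]."""
            # Fuzzification: degree to which each crisp input fits a linguistic term.
            damage_high = tri(damage_taken, 0.3, 1.0, 1.7)   # 'damage is high'
            health_low  = tri(health,      -0.7, 0.0, 0.7)   # 'health is low'

            # Rule evaluation: min models AND, a scaled single-antecedent rule models OR.
            strong_fear = min(damage_high, health_low)        # IF damage high AND health low
            mild_fear   = 0.5 * max(damage_high, health_low)  # IF damage high OR health low

            # Aggregation: take the strongest supported conclusion.
            return max(strong_fear, mild_fear)

        print(estimate_fear(damage_taken=0.8, health=0.2))    # prints ~0.71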

    Sound Aesthetic: A Form of Narrative

    This research explores a novel design methodology that combines architecture, multimedia, and interactive digital technologies to create an immersive experience encouraging a spatial and sensorial discourse between users and their built environment. This immersive design method creates a continuous narrative that allows multi-directional interaction between the two, producing a “sound” architectural aesthetic that changes the experience of space. The interaction between user and space targets the five human senses, resulting in an immersive aesthetic. To illustrate this aesthetic, five architectural prototypes were created using a design workflow that combines a parametric programming environment with an interactive prototyping platform; the resulting simulations take user interaction as input and produce formal geometries as output. Each prototype targets particular human senses to enhance the immersive aesthetic and is evaluated according to its ability to stimulate the user's senses. Finally, future research based on the outcomes of this work is suggested.

    Variations and Application Conditions of the Data Type »Image« - The Foundation of Computational Visualistics

    A few years ago, the department of computer science of the University of Magdeburg introduced a completely new diploma programme called 'computational visualistics', a curriculum dealing with all aspects of computational pictures. Only isolated aspects had been studied so far in computer science, particularly in the independent domains of computer graphics, image processing, information visualization, and computer vision. So is there indeed a coherent domain of research behind such a curriculum? The answer to that question depends crucially on a data structure that acts as a mediator between general visualistics and computer science: the data structure "image". The present text investigates that data structure, its components, and its application conditions, and thus elaborates the very foundations of computational visualistics as a unique and homogeneous field of research. Before concentrating on that data structure, the theory of pictures in general and the definition of pictures as perceptoid signs in particular are closely examined. This includes an act-theoretic consideration of resemblance as the crucial link between image and object, the communicative function of context building as the central concept for comparing pictures and language, and several modes of reflection underlying the relation between image and image user. In the main chapter, the data structure "image" is analyzed in detail from the perspectives of syntax, semantics, and pragmatics. While syntactic aspects mostly concern image processing, semantic questions form the core of computer graphics and computer vision. Pragmatic considerations are particularly relevant to interactive pictures, but also extend to the field of information visualization and even to computer art. Four case studies provide practical applications of various aspects of the analysis.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
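
    The sketch below shows one way such an MR streaming step can be organized, assuming a Hadoop-streaming-style setup in which the grid is stored one row per line as "row<TAB>cells" and rows are keyed by horizontal strips, so that only strip-boundary rows are duplicated to neighbouring strips; this captures the intuition behind strip partitioning but is not the algorithm evaluated in the paper.

        #!/usr/bin/env python3
        """Hadoop-streaming style mapper/reducer for one Game of Life generation.

        Assumed setup (for illustration only): the grid is stored one row per line as
        "row_index<TAB>0110...", rows are grouped into horizontal strips of STRIP_ROWS
        rows (strip id = row_index // STRIP_ROWS), and cells outside the grid are dead.
        Only strip-boundary rows are duplicated across strips, which is the intuition
        behind strip partitioning.
        """
        import sys

        STRIP_ROWS = 64  # strip height; a tuning knob, not a value taken from the paper

        def mapper():
            for line in sys.stdin:
                row, cells = line.rstrip("\n").split("\t")
                row = int(row)
                strip = row // STRIP_ROWS
                print(f"{strip}\t{row}\t{cells}")          # every row goes to its own strip
                # Boundary rows are also sent to the neighbouring strip as halo rows, so
                # each reducer can update its strip without any further communication.
                if row % STRIP_ROWS == 0 and strip > 0:
                    print(f"{strip - 1}\t{row}\t{cells}")
                if row % STRIP_ROWS == STRIP_ROWS - 1:
                    print(f"{strip + 1}\t{row}\t{cells}")

        def reducer():
            rows, current = {}, None                       # row_index -> list of 0/1 cells

            def flush(strip):
                if strip is None:
                    return
                lo, hi = strip * STRIP_ROWS, (strip + 1) * STRIP_ROWS
                for r in range(lo, hi):                    # own rows only, not halo rows
                    if r not in rows:
                        continue
                    width, out = len(rows[r]), []
                    for c in range(width):
                        live = sum(rows[r + dr][c + dc]
                                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                                   if (dr or dc) and 0 <= c + dc < width and (r + dr) in rows)
                        out.append(1 if live == 3 or (rows[r][c] and live == 2) else 0)
                    print(f"{r}\t{''.join(map(str, out))}")  # same format as the input
                rows.clear()

            for line in sys.stdin:                         # records arrive grouped by strip
                strip, row, cells = line.rstrip("\n").split("\t")
                if int(strip) != current:
                    flush(current)
                    current = int(strip)
                rows[int(row)] = [int(c) for c in cells]
            flush(current)

        if __name__ == "__main__":
            (mapper if sys.argv[1] == "map" else reducer)()  # run as: life_mr.py map|reduce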

    Muscle activation mapping of skeletal hand motion: an evolutionary approach.

    Creating controlled dynamic character animation consists of mathematical modelling of muscles and solving the activation dynamics that form the key to coordination. But biomechanical simulation and control is computationally expensive, involving complex differential equations, and is not suitable for real-time platforms like games. Performing such computations at every time-step reduces frame rate. Modern games use generic software packages called physics engines to perform a wide variety of in-game physical effects. The physics engines are optimized for gaming platforms. Therefore, a physics-engine-compatible model of anatomical muscles and an alternative control architecture are essential to create biomechanical characters in games. This thesis presents a system that generates muscle activations from captured motion by borrowing principles from biomechanics and neural control. A generic physics-engine-compliant muscle model primitive is also developed. The muscle model primitive forms the motion actuator and is an integral part of the physical model used in the simulation. This thesis investigates a stochastic solution to create a controller that mimics the neural control system employed in the human body. The control system uses evolutionary neural networks that evolve their weights using genetic algorithms. Examples and guidance often act as templates in muscle training during all stages of human life. Similarly, the neural controller attempts to learn muscle coordination through input motion samples. The thesis also explores the objective functions developed to aid the genetic evolution of the neural network. Character interaction with the game world is still a pre-animated behaviour in most current games. Physically-based procedural hand animation is a step towards autonomous interaction of game characters with the game world. The neural controller and the muscle primitive developed are used to animate a dynamic model of a human hand within a real-time physics engine environment.
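
    The sketch below illustrates the control idea in miniature: a fixed-topology feed-forward network maps joint state to muscle activations, and a genetic algorithm evolves the network weights so that the outputs match activation targets derived from motion samples. The network size, fitness function, and GA operators are illustrative assumptions; the thesis couples the controller to a muscle-actuated hand model inside a physics engine rather than to static targets.

        # Sketch assumptions: a tiny two-layer network, a synthetic stand-in for the
        # captured-motion targets, and textbook GA operators. Not the thesis code.
        import numpy as np

        rng = np.random.default_rng(0)

        N_IN, N_HID, N_OUT = 4, 8, 3            # joint-state inputs -> muscle activations
        N_W = N_IN * N_HID + N_HID * N_OUT      # total number of evolvable weights

        def forward(weights, x):
            """Feed-forward net; a sigmoid keeps activations in [0, 1] like muscle signals."""
            w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
            w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
            return 1.0 / (1.0 + np.exp(-(np.tanh(x @ w1) @ w2)))

        # Stand-in for captured motion: joint states and the activations that should
        # reproduce them (in the thesis these come from motion capture and simulation).
        X_motion = rng.normal(size=(50, N_IN))
        Y_target = 1.0 / (1.0 + np.exp(-X_motion @ rng.normal(size=(N_IN, N_OUT))))

        def fitness(weights):
            """Negative mean squared error against the target activations."""
            return -np.mean((forward(weights, X_motion) - Y_target) ** 2)

        def tournament(pop, scores, k=3):
            """Pick the fittest of k randomly chosen individuals."""
            idx = rng.integers(len(pop), size=k)
            return pop[idx[np.argmax(scores[idx])]]

        def evolve(pop_size=60, generations=200, mut_sigma=0.1):
            pop = rng.normal(size=(pop_size, N_W))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                children = []
                for _ in range(pop_size):
                    a, b = tournament(pop, scores), tournament(pop, scores)
                    cut = rng.integers(1, N_W)                        # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    child += rng.normal(scale=mut_sigma, size=N_W)    # Gaussian mutation
                    children.append(child)
                pop = np.array(children)
            return max(pop, key=fitness)

        best = evolve()
        print("final activation error:", -fitness(best))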

    Learning to Generate 3D Training Data

    Human-level visual 3D perception ability has long been pursued by researchers in computer vision, computer graphics, and robotics. Recent years have seen an emerging line of work using synthetic images to train deep networks for single-image 3D perception. Synthetic images rendered by graphics engines are a promising source for training deep neural networks because they come with perfect 3D ground truth for free. However, the 3D shapes and scenes to be rendered are still largely created manually. Moreover, it is challenging to ensure that synthetic images collected this way can help train a deep network to perform well on real images, because graphics generation pipelines require numerous design decisions, such as the selection of 3D shapes and the placement of the camera. In this dissertation, we propose automatic generation pipelines for synthetic data that aim to improve the task performance of a trained network. We explore both supervised and unsupervised directions for automatic optimization of these 3D decisions. For supervised learning, we demonstrate how to optimize 3D parameters such that a trained network can generalize well to real images. We first show that we can construct a pure synthetic 3D shape to achieve state-of-the-art performance on a shape-from-shading benchmark. We further parameterize the decisions as a vector and propose a hybrid gradient approach to efficiently optimize the vector towards usefulness. Our hybrid gradient is able to outperform classic black-box approaches on a wide selection of 3D perception tasks. For unsupervised learning, we propose a novelty metric for 3D parameter evolution based on deep autoregressive models. We show that without any extrinsic motivation, the novelty computed from autoregressive models alone is helpful. Our novelty metric can consistently encourage a random synthetic generator to produce more useful training data for downstream 3D perception tasks.
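
    As a rough illustration of a likelihood-based novelty score, the sketch below fits a simple autoregressive Gaussian model to parameter vectors a generator has already used and ranks new candidates by negative log-likelihood, treating the least likely candidate as the most novel. The linear-Gaussian model, parameter dimensionality, and selection rule are assumptions for illustration; the dissertation uses deep autoregressive models within an evolutionary search.

        # Sketch assumptions: 6 generation parameters, a linear-Gaussian autoregressive
        # model fitted by least squares, and NLL as the novelty score. The dissertation
        # uses deep autoregressive models; this only illustrates the scoring idea.
        import numpy as np

        rng = np.random.default_rng(1)
        D = 6                                    # e.g. shape scale, camera pose, lighting

        history = rng.normal(size=(500, D))      # parameter vectors already used

        def fit_ar(history):
            """For each dimension d, regress x_d on (1, x_0..x_{d-1}) and keep the residual std."""
            coefs, stds = [], []
            for d in range(D):
                X = np.hstack([np.ones((len(history), 1)), history[:, :d]])
                w, *_ = np.linalg.lstsq(X, history[:, d], rcond=None)
                coefs.append(w)
                stds.append((history[:, d] - X @ w).std() + 1e-6)
            return coefs, stds

        def novelty(x, coefs, stds):
            """Negative log-likelihood of x under the autoregressive Gaussian model."""
            nll = 0.0
            for d in range(D):
                pred = np.dot(np.concatenate([[1.0], x[:d]]), coefs[d])
                z = (x[d] - pred) / stds[d]
                nll += 0.5 * z**2 + np.log(stds[d])          # constants dropped
            return nll

        coefs, stds = fit_ar(history)
        candidates = rng.normal(size=(64, D))                # proposals from a random generator
        scores = np.array([novelty(c, coefs, stds) for c in candidates])
        print("most novel candidate:", candidates[scores.argmax()])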