3,509 research outputs found

    Sketching-out virtual humans: From 2D storyboarding to immediate 3D character animation

    Virtual beings play a remarkable role in today’s public entertainment, yet ordinary users remain mere audiences for lack of the appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface that enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and final animation synthesis through almost pure 2D sketching. A “creative model-based method” that emulates the human perception process is developed to generate 3D human bodies of varying sizes, shapes, and fat distributions. Our current system also supports sketch-based crowd animation and the storyboarding of multi-character intercommunication in 3D. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
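    Purely as an illustration of the staged pipeline this abstract describes, the Python sketch below mocks up a possible “stick figure → fleshing-out → skin mapping” flow. All names, data layouts, the flat depth prior, and the linear interpolation are assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a "stick figure -> fleshing-out -> skin mapping"
# pipeline; names and signatures are illustrative assumptions only.
from dataclasses import dataclass
from typing import List, Tuple

Joint2D = Tuple[float, float]
Joint3D = Tuple[float, float, float]

@dataclass
class StickFigure:
    joints: List[Joint2D]   # 2D joints extracted from the user's sketch strokes

def reconstruct_pose(fig: StickFigure, depth: float = 0.0) -> List[Joint3D]:
    """Lift sketched 2D joints to a 3D key pose; a real system would resolve
    depth ambiguity with skeletal constraints, here we use a flat prior."""
    return [(x, y, depth) for x, y in fig.joints]

def flesh_out(pose: List[Joint3D], girth: float = 0.1) -> List[Tuple[Joint3D, float]]:
    """Attach a body-volume radius to each joint (the 'fleshing-out' stage);
    a perception-based model would vary size, shape, and fat per body part."""
    return [(joint, girth) for joint in pose]

def synthesize(key_poses: List[List[Tuple[Joint3D, float]]], steps: int = 24):
    """Linearly interpolate fleshed-out key poses into animation frames."""
    frames = []
    for a, b in zip(key_poses, key_poses[1:]):
        for t in (i / steps for i in range(steps)):
            frames.append([
                (tuple((1 - t) * pa + t * pb for pa, pb in zip(ja, jb)),
                 (1 - t) * ra + t * rb)
                for (ja, ra), (jb, rb) in zip(a, b)
            ])
    return frames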

    The Effect of Color on Emotions in Animated Films

    Lighting color in animated films is usually chosen very carefully in order to portray a specific mood or emotion. Artists follow conventional color-choice techniques with the intention of eliciting a stronger emotional response in the viewer. This study examined the relationship between color variations in videos and emotional arousal as indicated by physiological response. Subjects wore a galvanic skin response (GSR) sensor and watched two different videos: one portraying love and one portraying sadness. The videos were watched multiple times, each time with a variation in lighting color. No significant effect on emotion was observed in the GSR data for either hue or saturation. It was concluded that the hue and saturation of lighting are unlikely to affect the strength of emotions portrayed in animated films to a degree measurable by electrodermal activity.

    Inter-color NPR lines: A comparison of rendering techniques

    Renders of 3D scenes can feature lines drawn automatically along sharp edges between colored areas on object textures, in order to imitate certain conventional styles of hand-drawn line art. However, such inter-color lines have been studied very little. Two algorithms for rendering these lines were compared in this study - a faster one utilizing lines baked into the textures themselves and a more complex one that dynamically generated the lines in image space on each frame - for the purpose of determining which of the two better imitated traditional, hand-drawn art styles and which was more visually appealing. Test subjects compared results of the two algorithms side by side in a real-time rendering program, which let them view a 3D scene both passively and interactively from a moving camera, and they noted the differences between each technique's relative line thicknesses - the key visual disparity - in order to reach final judgments as to which better adhered to artistic conventions and which was more appealing. Statistical analysis of the sample proportions that preferred each algorithm failed to prove that any significant difference existed between the two algorithms in terms of either of the above metrics. Thus the algorithm using baked lines appeared to be more recommendable overall, as it was known to be computationally faster, whereas the dynamic algorithm was not shown to be preferred by viewers in terms of conventionality or aesthetics.
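    For readers unfamiliar with the dynamic variant, the numpy sketch below shows the core idea of image-space inter-color line generation: darken pixels where neighboring texture colors differ sharply. The color-difference threshold and one-pixel line width are assumptions; the study's per-frame GPU implementation is not reproduced here.

```python
# Minimal image-space sketch of inter-color line drawing: paint edge pixels
# black wherever adjacent colors differ beyond a threshold (an assumption).
import numpy as np

def inter_color_lines(img: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """img: HxWx3 float array in [0, 1]. Returns a copy with black lines
    drawn along sharp color edges."""
    # Color difference to the right-hand and lower neighbor.
    dx = np.linalg.norm(img[:, 1:] - img[:, :-1], axis=-1)
    dy = np.linalg.norm(img[1:, :] - img[:-1, :], axis=-1)
    edges = np.zeros(img.shape[:2], dtype=bool)
    edges[:, 1:] |= dx > threshold
    edges[1:, :] |= dy > threshold
    out = img.copy()
    out[edges] = 0.0  # paint edge pixels black
    return out

# Example: a two-color texture yields a line along the color boundary.
tex = np.zeros((64, 64, 3))
tex[:, 32:] = [1.0, 0.8, 0.2]
lined = inter_color_lines(tex)
```

    A baked-line approach would instead run this kind of detection once, offline, and write the lines into the texture itself, trading per-frame cost for fixed line thickness in texture space.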

    Spartan Daily, May 9, 2017

    Volume 148, Issue 41

    Would You Like to Save Your Game?: Establishing a Legal Framework for Long-Term Digital Game Preservation


    Unconventional Methods for a Traditional Setting: The Use of Virtual Reality to Reduce Implicit Racial Bias in the Courtroom

    The presumption of innocence and the right to a fair trial lie at the core of the United States justice system. While existing rules and practices serve to uphold these principles, the administration of justice is significantly compromised by a covert but influential factor: implicit racial bias. These biases can lead to automatic associations between race and guilt, as well as affect the way in which judges and jurors interpret information throughout a trial. Despite the well-documented presence of implicit racial biases, few steps have been taken to ameliorate the problem in the courtroom setting. This Article discusses the potential of virtual reality to reduce these biases among judges and jurors. By analyzing the various ethical and legal considerations, this Article contends that implementing virtual reality training with judges and jurors would be justifiable and advisable should effective means become available. Given that implicit racial biases can seriously undermine the fairness of the justice system, this Article ultimately asserts that unconventional de-biasing methods warrant legitimate attention and consideration.

    Real-time cartoon-like stylization of AR video streams on the GPU

    The ultimate goal of many applications of augmented reality is to immerse the user into the augmented scene, which is enriched with virtual models. In order to achieve this immersion, it is necessary to create the visual impression that the graphical objects are a natural part of the user’s environment. Producing this effect with conventional computer graphics algorithms is a complex task. Various rendering artifacts in the three-dimensional graphics create a noticeable visual discrepancy between the real background image and the virtual objects. We have recently proposed a novel approach to generating an augmented video stream. With this new method, the output images are a non-photorealistic reproduction of the augmented environment. Special stylization methods are applied to both the background camera image and the virtual objects. This way the visual realism of both the graphical foreground and the real background image is reduced, so that they are less distinguishable from each other. Here, we present a new method for the cartoon-like stylization of augmented reality images, which uses a novel post-processing filter for cartoon-like color segmentation and high-contrast silhouettes. To make fast post-processing of rendered images possible, the programmability of modern graphics hardware is exploited. We describe an implementation of the algorithm using the OpenGL Shading Language. The system is capable of generating a stylized augmented video stream of high visual quality at real-time frame rates. As an example application, we demonstrate the visualization of dinosaur bone datasets in stylized augmented reality.
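    The CPU sketch below, in numpy, illustrates the two ingredients of the post-processing filter named above: cartoon-like color segmentation and high-contrast silhouettes. It is a simplified stand-in for the paper's GLSL shader; the uniform quantization scheme and the gradient threshold are assumptions, not the authors' filter.

```python
# Illustrative cartoon filter: quantize colors to a few bands, then overlay
# dark silhouette pixels where the luminance gradient is strong.
import numpy as np

def cartoonize(img: np.ndarray, levels: int = 4, edge_thresh: float = 0.2) -> np.ndarray:
    """img: HxWx3 float array in [0, 1]."""
    # 1. Cartoon-like color segmentation: quantize each channel to band centers.
    quant = np.floor(img * levels) / levels + 0.5 / levels
    # 2. High-contrast silhouettes: threshold the luminance gradient magnitude.
    lum = img @ np.array([0.299, 0.587, 0.114])
    gx = np.abs(np.diff(lum, axis=1, prepend=lum[:, :1]))
    gy = np.abs(np.diff(lum, axis=0, prepend=lum[:1, :]))
    silhouette = (gx + gy) > edge_thresh
    quant[silhouette] = 0.0  # draw dark contour pixels over the quantized image
    return quant
```

    On the GPU, the same two passes map naturally onto fragment shaders operating on the rendered frame as a texture, which is what makes real-time frame rates attainable.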

    Video-Based Stylized Rendering using Frame Difference

    In this paper, we propose video-based stylized rendering using frame differences. Stylized rendering applied frame by frame suffers from temporal artifacts caused by differences between the previous and current frames. To reduce these artifacts, we generate reference maps from temporal frame differences in the correction and rendering steps. A correction method using these reference maps reduces the flickering caused by frame-to-frame differences. We use a background map, an average map, and a quadtree-based summed-area table as reference maps. Among these, the method using the quadtree-based summed-area table completely removes flickering and popping. In addition, a post-blurring step using bilateral filtering yields smooth, stylized results by removing unnecessary noise. The proposed stylized rendering system can be used to generate stylized image content in fields such as visual art, advertising, games, and film.
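    As a rough illustration of the frame-difference idea, the sketch below re-stylizes only pixels whose difference from the previous frame exceeds a threshold and reuses the previous output elsewhere, which suppresses flicker. The stand-in quantization "stylization" and the threshold are assumptions; the paper's quadtree-based summed-area-table reference maps are not reproduced here.

```python
# Simplest frame-difference reference map: keep previously stylized pixels
# where the input barely changed, re-stylize only the changed pixels.
import numpy as np

def stylize(frame: np.ndarray, levels: int = 6) -> np.ndarray:
    """Stand-in stylization: simple per-channel color quantization."""
    return np.floor(frame * levels) / levels

def coherent_stylize(frame, prev_frame, prev_out, diff_thresh=0.05):
    """frame, prev_frame: HxWx3 floats in [0, 1]; prev_out: previous output."""
    diff = np.abs(frame - prev_frame).max(axis=-1)  # per-pixel frame difference
    changed = diff > diff_thresh                    # reference map of moving pixels
    out = prev_out.copy()
    out[changed] = stylize(frame)[changed]          # re-stylize only changed pixels
    return out
```

    A bilateral post-blur of `out` would then smooth residual noise while preserving the quantized color edges, in the spirit of the post-blurring step described above.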