11 research outputs found

    An Optimized Soft 3D Mobile Graphics Library Based on JIT Backend Compiler

    No full text

    Illuminating the past: state of the art

    No full text
    Virtual reconstruction and representation of historical environments and objects have been of research interest for nearly two decades. Physically based and historically accurate illumination allows archaeologists and historians to authentically visualise a past environment to deduce new knowledge. This report reviews the current state of illuminating cultural heritage sites and objects using computer graphics for scientific, preservation and research purposes. We present the most noteworthy and up-to-date examples of reconstructions employing appropriate illumination models in object and image space, and in the visual perception domain. Finally, we discuss the difficulties in rendering, documentation and validation, and identify probable research challenges for the future. The report is aimed at researchers new to cultural heritage reconstruction who wish to learn about methods to illuminate the past.

    Detection of Affective States From Text and Speech for Real-Time Human–Computer Interaction

    No full text
    Objective: The goal of this work is to develop and test an automated system methodology that can detect emotion from text and speech features. Background: Affective human-computer interaction will be critical for the success of new systems that will be prevalent in the 21st century. Such systems will need to properly deduce human emotional state before they can determine how to best interact with people. Method: Corpora and machine learning classification models are used to train and test a methodology for emotion detection. The methodology uses a stepwise approach to detect sentiment in sentences by first filtering out neutral sentences, then distinguishing among positive, negative, and five emotion classes. Results: Classification between emotion and neutral sentences achieved recall accuracies as high as 77% on the University of Illinois at Urbana-Champaign (UIUC) corpus and 61% on the Louisiana State University medical drama (LSU-MD) corpus for emotion samples. Once neutral sentences were filtered out, the methodology achieved accuracy scores for detecting negative sentences as high as 92.3%. Conclusion: Results of the feature analysis indicate that speech spectral features are better than speech prosodic features for emotion detection. Accumulated sentiment composition text features appear to be very important as well. This work contributes to the study of human communication by providing a better understanding of how language factors help to best convey human emotion and how to best automate this process. Application: Results of this study can be used to develop better automated assistive systems that interpret human language and respond to emotions through 3-D computer graphics. Copyright © 2012, Human Factors and Ergonomics Society.
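    The stepwise approach described in the abstract — filter out neutral sentences first, then classify sentiment among the remainder — can be sketched as a two-stage pipeline. This is a minimal illustration only: the keyword rules and word lists below are hypothetical placeholders, not the paper's trained corpus-based classifiers or its speech features.

    ```python
    # Hedged sketch of a stepwise sentiment pipeline: stage 1 filters
    # neutral sentences, stage 2 assigns polarity to the rest.
    # The keyword sets are illustrative stand-ins for trained models.

    NEGATIVE_WORDS = {"sad", "angry", "afraid", "terrible", "hate"}
    POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "joy"}

    def stage1_is_neutral(sentence: str) -> bool:
        """Stage 1: treat a sentence with no sentiment cues as neutral."""
        words = set(sentence.lower().split())
        return not (words & (NEGATIVE_WORDS | POSITIVE_WORDS))

    def stage2_polarity(sentence: str) -> str:
        """Stage 2: distinguish negative from positive among non-neutral sentences."""
        words = set(sentence.lower().split())
        neg = len(words & NEGATIVE_WORDS)
        pos = len(words & POSITIVE_WORDS)
        return "negative" if neg >= pos else "positive"

    def detect(sentence: str) -> str:
        """Run the full stepwise pipeline on one sentence."""
        if stage1_is_neutral(sentence):
            return "neutral"
        return stage2_polarity(sentence)
    ```

    In the paper's setting each stage would be a trained classifier over text and speech features; the staging itself is what allows the reported high accuracy on negative-sentence detection once neutral sentences are removed.
    
    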

    Rendering Soft Shadows using Multilayered Shadow Fins

    No full text
    Generating soft shadows in real time is difficult. Exact methods (such as ray tracing and multiple-light-source simulation) are too slow, while approximate methods often overestimate the umbra regions. In this paper, we introduce a new algorithm based on the shadow map method to render highly accurate soft shadows from a light source at speed. Our method builds inner and outer translucent fins on objects to represent the penumbra area inside and outside hard shadows, respectively. The fins are traced into multilayered light-space maps that store illuminance adjustments to shadows. The viewing-space illuminance buffer is then calculated from those maps. Finally, by blending illuminance and shading, a scene with highly accurate soft shadow effects is produced. Our method does not suffer from umbra overestimation. Physical relations between light, objects and shadows demonstrate the soundness of our approach.
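    The core idea — inner fins darkening the lit side of the hard-shadow silhouette and outer fins lightening the shadowed side — amounts to an illuminance ramp across the penumbra. The sketch below is not the paper's multilayered fin tracing; it only illustrates, under the assumption of a linear falloff, how a signed distance from the silhouette maps to a light-visibility factor.

    ```python
    def penumbra_weight(d: float, fin_width: float) -> float:
        """
        Illuminance adjustment across the penumbra as a linear ramp
        (illustrative assumption; the paper stores adjustments in
        multilayered light-space maps rather than evaluating a formula).

        d: signed distance from the hard-shadow silhouette
           (negative = inside the hard shadow, positive = outside).
        fin_width: half-width of the penumbra covered by the fins.
        Returns light visibility in [0, 1]: 0 fully shadowed, 1 fully lit.
        """
        # Map [-fin_width, +fin_width] onto [0, 1], clamped outside.
        t = (d / fin_width + 1.0) / 2.0
        return max(0.0, min(1.0, t))
    ```

    Blending this factor with the shaded colour at each pixel yields a smooth transition instead of a hard shadow edge, which is the effect the inner and outer fins approximate in object space.
    
    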