
    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    Get PDF
    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine the possible areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Some development content and installations were completed to prove and evaluate the described concepts convincingly. [...] To summarize, the resulting work involves not only artistic creativity, but also solving and combining technological hurdles in motion tracking, pattern recognition, force feedback control, etc., with the available documentary footage on film, video, or images, and text via a variety of devices [....] and programming and installing all the needed interfaces such that it all works in real time. Thus, the contribution to the advancement of knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance.
    Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms

    Platform Independent Real-Time X3D Shaders and their Applications in Bioinformatics Visualization

    Get PDF
    Since the introduction of programmable Graphics Processing Units (GPUs) and procedural shaders, hardware vendors have each developed their own real-time shading language standard. None of these shading languages is fully platform independent. Although this real-time programmable shader technology can be used to build 3D applications on a single system, its platform dependence keeps shader technology away from 3D Internet applications. The primary purpose of this dissertation is to design a framework for translating different shader formats into platform-independent shaders and embedding them into eXtensible 3D (X3D) scenes for 3D web applications. The framework includes a back-end core shader converter, which translates shaders among different shading languages through an XML middle layer, and a shader library containing a basic set of shaders that developers can load and extend. The framework is then applied to several applications in biomolecular visualization.
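    As a rough illustration only, the kind of pipeline such an XML middle layer implies can be sketched as follows; the XML schema, function names, and the simplified ComposedShader/ShaderPart output below are assumptions for this sketch, not the dissertation's actual converter or API.

        # Hypothetical sketch of a shader-conversion pipeline with an XML middle layer.
        # The schema and emitted X3D fragment are simplified illustrations only.
        import xml.etree.ElementTree as ET

        GLSL_VERTEX = """
        void main() {
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }
        """

        def to_intermediate(source: str, stage: str, language: str) -> ET.Element:
            """Wrap a platform-specific shader in a neutral XML description."""
            shader = ET.Element("Shader", {"stage": stage, "sourceLanguage": language})
            code = ET.SubElement(shader, "Code")
            code.text = source.strip()
            return shader

        def to_x3d(intermediate: ET.Element) -> str:
            """Emit a simplified X3D-style ComposedShader/ShaderPart fragment."""
            stage = intermediate.get("stage").upper()
            lang = intermediate.get("sourceLanguage")
            code = intermediate.find("Code").text
            return (f'<ComposedShader language="{lang}">\n'
                    f'  <ShaderPart type="{stage}">\n'
                    f'    <![CDATA[\n{code}\n    ]]>\n'
                    f'  </ShaderPart>\n'
                    f'</ComposedShader>')

        if __name__ == "__main__":
            xml_layer = to_intermediate(GLSL_VERTEX, "vertex", "GLSL")
            print(to_x3d(xml_layer))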

    A Mobile Authoring Tool for AR Content Generation Using Images as Annotations

    Get PDF
    Augmented Reality (AR) is a technology that allows the superimposition of virtual objects onto the real world environment. Various fields, such as education, medicine, and architecture, have started adapting AR technology. However, developing AR applications, along with their contents, requires a specific skillset, which limits the number of AR-based applications that can be developed. Various authoring tools are available for desktop systems to ease the development of AR applications and content, yet only few attempts have been made to develop these kinds of tools for mobile systems. This paper describes a mobile application that allows users to author content for AR viewing using 2D images. Furthermore, the tool allows users to produce and to edit AR content on the spot. After the application was developed, a usability test was conducted with eight teachers in order to assess the difficulty of using the application. The user testing showed that the application developed was generally easy to use, and that further addition of features can improve the application
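    Purely as an illustration of what "images as annotations" has to record, an authored annotation can be sketched as the target it is pinned to, the 2D image to overlay, and its placement; the field names and export format below are invented for this sketch and are not the paper's actual data format.

        # Hypothetical sketch of the data an image-as-annotation authoring tool captures.
        # Field names and the JSON export are illustrative assumptions only.
        from dataclasses import dataclass, asdict
        import json

        @dataclass
        class ImageAnnotation:
            target_id: str                     # tracked image or marker the content is pinned to
            overlay_image: str                 # path or URL of the 2D image authored by the user
            offset: tuple = (0.0, 0.0, 0.0)    # translation relative to the target, in metres
            rotation_deg: float = 0.0          # in-plane rotation of the overlay
            scale: float = 1.0

        def export_annotations(annotations):
            """Serialise authored annotations so a viewer app can reload and render them."""
            return json.dumps([asdict(a) for a in annotations], indent=2)

        authored = [ImageAnnotation("poster-01", "diagram.png", (0.0, 0.1, 0.0), 0.0, 0.5)]
        print(export_annotations(authored))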

    Identifying Complex Cultural Interactions in the Instructional Design Process: A Case Study of a Cross-Border, Cross-Sector Training for Innovation Program

    Get PDF
    The purpose of this research is to identify complex cultural dynamics in the instructional design process of a cross-sector, cross-border training environment by applying Young’s (2009) Culture-Based Model (CBM) as a theoretical framework and taxonomy for describing the instructional design process under the conditions of one case. The guiding question of this study is: How does culture, as defined by Young’s (2009) CBM framework, interact with the instructional design process in this case of a cross-sector, cross-border training program? This research uses the qualitative approach of case study and applies a cultural design framework to examine the process of instructional design by a team of designers-by-assignment in a NASA/university consortium program to train applied research and development teams for an education software company headquartered in India. Fifteen representative participants were chosen to reflect each role involved in the training program and instructional design process, including management, instructors, and students. In over two years of engagement with participants, data were gathered at a NASA space center and in Mumbai, India, through interviews, observation, and artifact analysis. Data were analyzed to identify where components of the design process, decisions of the design team, and perceptions of the stakeholders overlap with culture as defined by Young’s CBM framework. The findings indicate that at least twenty-three distinguishable elements of culture interact across the design process in: 1) the goals and funding decisions of the client; 2) the goals and design decisions of the design team; 3) the perceptions of the training program among all stakeholders; and 4) the observable outcomes of the training program. The findings also offer insight into what stakeholders do or do not consciously attribute to culture. By empirically illuminating the pervasive presence of cultural interactions across the instructional design process, this study advocates for culture to be recognized as a construct of importance in our field and demonstrates the powerful capabilities of using a comprehensive descriptive model as a lens for exploring cultural dynamics in the instructional design process.

    Virtual Blocks: a serious game for spatial ability improvement on mobile devices

    Full text link
    This paper presents a novel spatial instruction system for improving the spatial abilities of engineering students. A 3D mobile game application called Virtual Blocks has been designed to provide a 3D virtual environment for building models with cubes, helping students perform visualization tasks that promote the development of their spatial ability during a short remedial course. A validation study with 26 freshman engineering students at La Laguna University (Spain) concluded that the training had a measurable and positive impact on students’ spatial ability. In addition, the results obtained using a satisfaction questionnaire show that Virtual Blocks is considered an easy-to-use and stimulating application.
    This work has been partially supported by the (Spanish) National Program for Studies and Analysis project "Evaluation and development of competencies associated to the spatial ability in the new engineering undergraduate courses" (Ref. EA2009-0025) and the (Spanish) National Science Project "Enhancing Spatial REasoning and VIsual Cognition with advanced technological tools (ESREVIC)" (Ref. TIN2010-21296-C02-02).
    Martín Dorta, NN.; Sanchez Berriel, I.; Bravo, M.; Hernández, J.; Saorin, JL.; Contero, M. (2014). Virtual Blocks: a serious game for spatial ability improvement on mobile devices. Multimedia Tools and Applications 73(3):1575-1595. https://doi.org/10.1007/s11042-013-1652-0
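    As a purely illustrative aside (not code from the Virtual Blocks application), a cube-built model of the kind described can be represented as a set of grid positions, from which the classic spatial-visualization exercise of deriving top, front, and side views follows directly:

        # Hypothetical sketch: a cube model as a set of integer grid positions, plus the
        # orthographic views that spatial-ability exercises typically ask students to infer.

        def orthographic_views(cubes):
            """Project a set of (x, y, z) unit cubes onto the three principal planes."""
            top   = {(x, y) for x, y, z in cubes}   # looking down the z axis
            front = {(x, z) for x, y, z in cubes}   # looking along the y axis
            side  = {(y, z) for x, y, z in cubes}   # looking along the x axis
            return top, front, side

        # A small L-shaped model: three cubes on the ground, one stacked on top.
        model = {(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 0, 1)}
        top, front, side = orthographic_views(model)
        print(sorted(top), sorted(front), sorted(side))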

    Scalable ray tracing with multiple GPGPUs

    Get PDF
    Rapid development in the field of computer graphics over the last 40 years has brought forth different techniques to render scenes. Rasterization is today’s most widely used technique, which in its most basic form sequentially draws thousands of polygons and applies texture to them. Ray tracing is an alternative method that mimics light transport by using rays to sample a scene in memory and render the color found at each ray’s scene intersection point. Although mainstream hardware directly supports rasterization, ray tracing would be the preferred technique due to its ability to produce highly crisp and realistic graphics, if hardware were not a limitation. Making an immediate hardware transition from rasterization to ray tracing would have a severe impact on the computer graphics industry, since it would require redevelopment of existing software that employs 3D graphics, so any transition to ray tracing would be gradual. Previous efforts to perform ray tracing on mainstream rasterizing hardware platforms with a single processor have performed poorly. This thesis explores how a multiple-GPGPU system can be used to render scenes via ray tracing. A ray tracing engine and API groundwork were developed using NVIDIA’s CUDA (Compute Unified Device Architecture) GPGPU programming environment and used to evaluate performance scalability across a multi-GPGPU system. The engine supports triangle, sphere, disc, rectangle, and torus rendering. It also allows independent activation of graphics features including procedural texturing, Phong illumination, reflections, translucency, and shadows. Correctness of the rendered images validates the ray-traced results, and timing of rendered scenes benchmarks performance. The main test scene contains all object types, has a total of 32 objects, and applies all graphics features. Ray tracing this scene using two GPGPUs outperformed the single-GPGPU and single-CPU systems, yielding respective speedups of up to 1.8 and 31.25. The results demonstrate how much potential exists in treating a modern dual-GPU architecture as a dual-GPGPU system in order to facilitate a transition from rasterization to ray tracing.
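    A minimal CPU-side sketch of the sampling step the abstract describes (cast a ray, find the nearest sphere intersection, return that object's colour) is shown below; it is illustrative only and bears no relation to the thesis's CUDA engine, whose code is not reproduced here.

        # Minimal sketch of one ray-tracing sample: nearest sphere hit, then its colour.
        # Illustrative only; the engine described above is a CUDA multi-GPGPU system.
        import math

        def intersect_sphere(origin, direction, center, radius):
            """Distance t along the ray to the sphere, or None if the ray misses it."""
            oc = [o - c for o, c in zip(origin, center)]
            b = 2.0 * sum(d * o for d, o in zip(direction, oc))
            c = sum(o * o for o in oc) - radius * radius
            disc = b * b - 4.0 * c          # direction is assumed normalised, so a == 1
            if disc < 0.0:
                return None
            t = (-b - math.sqrt(disc)) / 2.0
            return t if t > 0.0 else None

        def trace(origin, direction, spheres, background=(0, 0, 0)):
            """Colour of the nearest hit, or the background colour if nothing is hit."""
            nearest = (float("inf"), background)
            for center, radius, colour in spheres:
                t = intersect_sphere(origin, direction, center, radius)
                if t is not None and t < nearest[0]:
                    nearest = (t, colour)
            return nearest[1]

        scene = [((0.0, 0.0, -5.0), 1.0, (255, 0, 0))]
        print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene))   # hits the red sphere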

    Conceptual model of mobile augmented reality for cultural heritage site towards enjoyable informal learning (Marchsteil)

    Get PDF
    Mobile augmented reality (AR) is one of the emerging technologies that can provide interactive content to tourists at cultural heritage sites. Past studies show that an enjoyable informal learning experience is highly needed for tourists to broaden their knowledge. Although many mobile AR applications have been developed to present cultural heritage site information, they still fall short of providing such an experience due to the lack of comprehensive models that take the elements of enjoyable informal learning into consideration during the development of such applications. Therefore, this study proposes a comprehensive conceptual model of mobile AR that considers the components of an enjoyable informal learning experience at cultural heritage sites. The study followed the design science research methodology. The proposed conceptual model was reviewed and validated through expert review and a focus group discussion; the reviews were analysed based on the frequency of responses to each component. As a proof of concept, a prototype (named AR@Melaka) was developed and then evaluated on its enjoyable informal learning aspects with 200 tourists at a renowned cultural heritage site. From the user perspective, AR@Melaka was found to provide enjoyable informal learning. In conclusion, these findings indicate that the conceptual model is useful for assisting tourists in learning at cultural heritage sites in an enjoyable way. This study contributes a conceptual model that serves as a guideline for developing mobile augmented reality applications that incorporate enjoyable informal learning components.

    Visualization and Animation of a Missile/Target Encounter

    Get PDF
    Existing missile/target encounter modeling and simulation systems focus on improving probability-of-kill models. Little research has been done to visualize these encounters. These systems can be made more useful to engineers by incorporating current computer graphics technology for visualizing and animating the encounter. Our research has been to develop a graphical simulation package for visualizing both endgame and full fly-out encounters. Endgame visualization shows the interaction of a missile, its fuze cone proximity sensors, and its target during the final fraction of a second of the missile/target encounter. Additionally, the system displays dynamic effects such as the warhead fragmentation pattern and the skewing of the fragment scattering due to missile yaw at the point of detonation. Fly-out visualization, on the other hand, involves full animation of a missile from launch to target. Animating the results of VisSim fly-out simulations provides engineers a more efficient means of analyzing their data. This research also involves investigating fly-out animation via the World Wide Web.
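    A simplified kinematic illustration of why the fragment pattern skews with missile motion is sketched below (here driven by the missile's velocity at detonation rather than yaw); it is an assumed toy model, not the simulation package described above.

        # Illustrative toy model: each fragment's world-frame velocity is its radial
        # ejection velocity in the missile frame plus the missile's own velocity at
        # detonation, so a moving missile produces a forward-skewed scatter pattern.
        import math

        def fragment_velocities(ejection_speed, missile_velocity, count=12):
            """Fragments ejected radially in a plane, then offset by the missile velocity."""
            fragments = []
            for i in range(count):
                angle = 2.0 * math.pi * i / count
                radial = (0.0, ejection_speed * math.cos(angle), ejection_speed * math.sin(angle))
                fragments.append(tuple(r + m for r, m in zip(radial, missile_velocity)))
            return fragments

        # Static detonation vs. detonation at 600 m/s along x: the second pattern skews forward.
        print(fragment_velocities(1800.0, (0.0, 0.0, 0.0))[:3])
        print(fragment_velocities(1800.0, (600.0, 0.0, 0.0))[:3])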

    Collaborative Augmented Reality

    Get PDF
    Over the past number of years, augmented reality (AR) has become increasingly pervasive as a consumer-level technology. The principal drivers of its recent development have been the evolution of mobile and handheld devices, in conjunction with algorithms and techniques from fields such as 3D computer vision. Various commercial platforms and SDKs are now available that allow developers to quickly build mobile AR apps with minimal understanding of the underlying technology. Much of the focus to date, both in research and in commercial environments, has been on single-user AR applications. Just as collaborative mobile applications have played a demonstrated role in the increasing popularity of mobile devices, we believe collaborative AR systems present a compelling use case for AR technology. The aim of this thesis is the development of a mobile collaborative augmented reality framework. We identify the elements required in the design and implementation stages of collaborative AR applications. Our solution enables developers to easily create multi-user mobile AR applications in which the users can cooperatively interact with the real environment in real time. It increases the sense of collaborative spatial interaction without requiring complex infrastructure. Assuming the given low-level communication and AR libraries have modular structures, the proposed approach is modular and flexible enough to adapt to their requirements without requiring any major changes.
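    As a hypothetical illustration of the state a multi-user AR session has to exchange (not the thesis framework's actual message format or API), one can sketch a shared-anchor update that one user broadcasts and peers apply to their local scene:

        # Hypothetical sketch of a shared-anchor update for a collaborative AR session.
        # The message schema and helper names are invented for illustration only.
        import json, time

        def make_anchor_update(anchor_id, position, rotation_quat, user_id):
            """Build a JSON message describing one user's update to a shared anchor."""
            return json.dumps({
                "type": "anchor_update",
                "anchor": anchor_id,
                "position": position,          # (x, y, z) in the shared world frame
                "rotation": rotation_quat,     # (x, y, z, w)
                "user": user_id,
                "timestamp": time.time(),
            })

        def apply_anchor_update(message, local_anchors):
            """Apply a peer's update to the local copy of the shared scene."""
            data = json.loads(message)
            if data["type"] == "anchor_update":
                local_anchors[data["anchor"]] = (tuple(data["position"]), tuple(data["rotation"]))
            return local_anchors

        msg = make_anchor_update("cube-1", (0.2, 0.0, -1.5), (0.0, 0.0, 0.0, 1.0), "user-a")
        print(apply_anchor_update(msg, {}))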