
    Model-Driven Development of Interactive Multimedia Applications

    The development of highly interactive multimedia applications is still a challenging and complex task. In addition to the application logic, multimedia applications typically provide a sophisticated user interface with integrated media objects. As a consequence, the development process involves different experts for software design, user interface design, and media design. There is still a lack of concepts for a systematic development process which integrates these aspects. This thesis provides a model-driven development approach that addresses this problem. To this end, it introduces the Multimedia Modeling Language (MML), a visual modeling language supporting a design phase in multimedia application development. The language is oriented toward well-established software engineering concepts, such as UML 2, and integrates concepts from the areas of multimedia development and model-based user interface development. MML allows the generation of code skeletons from the models. The core idea is to generate code skeletons which can be directly processed in multimedia authoring tools. In this way, the strengths of both are combined: authoring tools are used to perform the creative development tasks, while models are used to design the overall application structure and to enable a well-coordinated development process. This is demonstrated using the professional authoring tool Adobe Flash. MML is supported by modeling and code generation tools which have been used to validate the approach over several years in various student projects and teaching courses. Additional prototypes have been developed to demonstrate, for example, the ability to generate code for different target platforms. Finally, it is discussed how models can contribute in general to a better integration of well-structured software development and creative visual design.
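    The core idea of generating editable code skeletons from models can be illustrated with a minimal sketch. The model dictionary, template, and function below are invented for illustration only; they are not MML's actual metamodel or its generator, which targets authoring tools such as Adobe Flash.

    ```python
    # Hypothetical sketch: rendering a class skeleton from a tiny model
    # description. The creative content (media, visuals) is left as stubs
    # to be filled in within an authoring tool.

    def generate_skeleton(model: dict) -> str:
        """Render a class skeleton with empty operations from a model."""
        lines = [f"class {model['name']}:"]
        for attr in model.get("attributes", []):
            lines.append(f"    {attr} = None  # attribute from the model")
        for op in model.get("operations", []):
            lines.append(f"    def {op}(self):")
            lines.append("        pass  # creative logic added by the author")
        return "\n".join(lines)

    model = {
        "name": "MediaPlayer",
        "attributes": ["volume"],
        "operations": ["play", "stop"],
    }
    print(generate_skeleton(model))
    ```

    The point of the sketch is the division of labor: the model fixes the overall structure, while every generated body is deliberately empty so the authoring tool remains the place where creative work happens.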

    Utilising the grid for augmented reality

    Traditionally, registration and tracking within Augmented Reality (AR) applications have been built around specific markers which are added into the user's viewpoint, allowing their position to be tracked and their orientation to be estimated in real time. Attempts to implement AR without specific markers have increased the computational requirements, and some information about the environment is still needed in order to match the registration between the real world and the virtual artifacts. This thesis describes a novel method that not only provides a generic platform for AR but also seamlessly deploys High Performance Computing (HPC) resources to deal with the additional computational load, as part of the distributed High Performance Visualization (HPV) pipeline used to render the virtual artifacts. The developed AR framework is then applied to a real-world application: a marker-less AR interface for Transcranial Magnetic Stimulation (TMS), named BART (Bangor Augmented Reality for TMS). Three prototypes of BART are presented, along with a discussion of the limitations of each and the solutions they led to. First, by using a proprietary tracking system it is possible to achieve accurate tracking, but with the limitations of having to use bold markers and being unable to render the virtual artifacts in real time. Second, BART v2 implements a novel tracking system using computer vision techniques: repeatable feature points are extracted from the user's viewpoint to build a description of the object or plane that the virtual artifact is aligned with; then, as each frame is updated, the changing positions of the feature points are used to estimate how the object has moved. Third, the e-Viz framework is used to autonomously deploy HPV resources to ensure that the virtual objects are rendered in real time. e-Viz also enables the allocation of remote HPC resources to handle the computational requirements of the object tracking and pose estimation.
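    The core step of BART v2's tracking, estimating how a tracked plane moved from the changing positions of matched feature points, can be sketched as a least-squares fit. This is a simplified stand-in: the abstract does not specify the feature detector or motion model used, so the 2D affine fit below is an assumption for illustration only.

    ```python
    import numpy as np

    # Hypothetical sketch: given feature points matched between two frames,
    # fit a 2x3 affine transform mapping old positions onto new ones.
    # A real marker-less tracker would add robust outlier rejection
    # (e.g. RANSAC) and recover a full camera pose, not just a 2D transform.

    def estimate_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
        """Least-squares 2x3 affine transform from src (N,2) to dst (N,2)."""
        n = src.shape[0]
        A = np.zeros((2 * n, 6))
        b = dst.reshape(-1)
        A[0::2, 0:2] = src   # rows for x': [x, y, 1, 0, 0, 0]
        A[0::2, 2] = 1.0
        A[1::2, 3:5] = src   # rows for y': [0, 0, 0, x, y, 1]
        A[1::2, 5] = 1.0
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params.reshape(2, 3)

    # Feature points that moved by (5, -2) between frames:
    src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    dst = src + np.array([5.0, -2.0])
    M = estimate_affine(src, dst)
    print(M)
    ```

    For the pure translation above, the recovered transform has an identity linear part and a translation column of (5, -2); per-frame updates of this kind are what keep the virtual artifact aligned with the tracked object.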