
    Exploring 3D Chemical Plant Using VRML

    The research project focused on how virtual reality can create an immersive environment and improve the design of chemical plants. The main problem addressed is the difficulty of designing a chemical plant, since a 2D plant layout cannot provide a real walk-through. The aim of this project is to design and develop a 3D chemical plant that allows users to explore the virtual plant environment freely. The objectives are to design and develop the 3D chemical plant in a virtual environment; to enable users to walk through the chemical plant; and to evaluate the effectiveness of the implementation. The project framework is based on the waterfall model. This study also examines the structure and existing use of VRML (the international standard for 3D modelling on the Internet) in construction and architectural practice, as a means of investigating its role and potential for extensible construction information visualization in chemical plants. The phases of the framework are initiation, design specification, project development, integration and testing, and finally project implementation. The development tools used in the project are VRML and 3ds Max 6. From the evaluation conducted, a mean of 3.5 on the satisfaction ranking shows that most evaluators are satisfied with the project and feel that the realism of the 3D chemical plant and the suitability of its colours and textures will improve the design of chemical plants in a virtual environment. In conclusion, the research project shows that VR/VE is very useful and has a positive impact for chemical engineers designing a chemical plant.
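    The walkthrough capability the abstract describes maps directly onto standard VRML 2.0 nodes. The sketch below is illustrative only (the scene contents, node values, and Python wrapper are not from the project): it emits a minimal world with a WALK-mode NavigationInfo node, an entrance Viewpoint, and one placeholder vessel.

```python
def make_plant_world() -> str:
    """Return a minimal VRML 2.0 scene, as a string, that a VRML browser
    can navigate freely in WALK mode. Node names follow the VRML 2.0
    (ISO/IEC 14772) specification; the scene itself is a toy placeholder."""
    return """#VRML V2.0 utf8
NavigationInfo { type "WALK" speed 2.0 }
Viewpoint { position 0 1.6 10 description "Entrance" }
Transform {
  translation 0 2 0
  children Shape {
    appearance Appearance { material Material { diffuseColor 0.7 0.7 0.8 } }
    geometry Cylinder { height 4 radius 1 }
  }
}
"""

world = make_plant_world()
print(world.splitlines()[0])  # the mandatory VRML header line
```

    Saving the returned string to a `.wrl` file and opening it in a VRML browser yields a freely navigable scene; the project's actual plant geometry was authored with VRML and 3ds Max 6.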

    Sketching-out virtual humans: From 2D storyboarding to immediate 3D character animation

    Virtual beings are playing a remarkable role in today’s public entertainment, while ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface, which enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and final animation synthesis by almost pure 2D sketching. A “creative model-based method” is developed, which emulates a human perception process, to generate 3D human bodies of varying sizes, shapes, and fat distributions. Our current system also supports sketch-based crowd animation and the storyboarding of 3D multiple-character intercommunication. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
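    The paper's 3D pose reconstruction sits inside its model-based pipeline; as a hedged illustration of the general idea, the classic single-view heuristic below recovers a bone's relative depth from its foreshortened 2D projection and a known bone length. The function name and numbers are hypothetical, not from the paper.

```python
import math

def limb_depth(p2d_a, p2d_b, bone_length, scale=1.0):
    """Relative depth difference between two joints from their 2D projection,
    assuming scaled orthographic projection and a known bone length
    (a classic single-view heuristic; the depth-sign ambiguity is left
    to the caller, e.g. to be resolved by a body model)."""
    dx = (p2d_b[0] - p2d_a[0]) / scale
    dy = (p2d_b[1] - p2d_a[1]) / scale
    proj_sq = dx * dx + dy * dy
    if proj_sq > bone_length ** 2:
        return 0.0  # projection longer than the bone: clamp noisy input
    return math.sqrt(bone_length ** 2 - proj_sq)

# A fully foreshortened bone (both joints project to the same 2D point)
# recovers its entire length as depth:
print(limb_depth((0.0, 0.0), (0.0, 0.0), bone_length=30.0))  # 30.0
```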

    Rapid Prototyping for Virtual Environments

    Development of Virtual Environment (VE) applications is challenging: application developers are required to have expertise in the target VE technologies along with problem-domain expertise, and new VE technologies impose a significant learning curve on even the most experienced VE developer. The proposed solution relies on synthesis to automate the migration of a VE application to a new, unfamiliar VE platform or technology. To solve the problem, the Common Scene Definition Framework (CSDF) is developed, which serves as a superset/model representation of the target virtual world. Input modules are developed to populate the framework with the capabilities of virtual worlds imported from the VRML 2.0 and X3D formats. A synthesis capability is built into the framework to synthesize the virtual world into a subset of the VRML 2.0, VRML 1.0, X3D, Java3D, JavaFX, JavaME, and OpenGL technologies, which may reside on different platforms. Interfaces are designed to keep the framework extensible to different and new VE formats/technologies. The framework demonstrated the ability to quickly synthesize a working prototype of the input virtual environment in different VE formats.
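    The superset-representation idea can be sketched as a neutral scene node with one writer per target format. CSDF's real schema is far richer; the `Box` node and writer functions below are hypothetical stand-ins that only show the shape of the approach.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """One node of a (hypothetical) neutral scene model, in the spirit of
    CSDF: format-specific writers turn it into concrete scene languages."""
    size: tuple  # (x, y, z) extents

def to_vrml(node: Box) -> str:
    """Emit the node in VRML 2.0 syntax."""
    x, y, z = node.size
    return f"Shape {{ geometry Box {{ size {x} {y} {z} }} }}"

def to_x3d(node: Box) -> str:
    """Emit the same node in the X3D XML encoding."""
    x, y, z = node.size
    return f'<Shape><Box size="{x} {y} {z}"/></Shape>'

b = Box(size=(1, 2, 3))
print(to_vrml(b))  # Shape { geometry Box { size 1 2 3 } }
print(to_x3d(b))   # <Shape><Box size="1 2 3"/></Shape>
```

    One neutral model with pluggable writers is what keeps such a framework extensible: supporting a new target format means adding a writer, not re-importing every world.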

    Design of an Anatomy Information System

    Biology and medicine rely fundamentally on anatomy. Not only is anatomical knowledge needed to understand normal and abnormal function; anatomy also provides a framework for organizing other kinds of biomedical data. That is why medical and other health sciences students take anatomy as one of their first courses. The Digital Anatomist Project, undertaken by members of the University of Washington Structural Informatics Group, aims to “put anatomy on a computer” in such a way that anatomical information becomes as fundamental to biomedical information management as the study of anatomy is to medical students. To do this we need to develop methods for representing anatomical information, accessing it, and reusing it in multiple applications ranging from education to clinical practice. This development process engenders many of the core research areas in biological structural informatics, which we have defined as a subfield of medical informatics dealing with information about the physical organization of the body. By its nature, structural information is highly amenable to representation and visualization by computer graphics methods. In fact, computer graphics offers the first real breakthrough in anatomical knowledge representation since publication of the first scholarly anatomical treatise in 1546, in that it provides a means for capturing the 3D dynamic nature of the human body. In this article we explain the nature of anatomical information and discuss the design of a system to organize and access it. Example applications show the potential for reusing the same information in contexts ranging from education to clinical medicine, as well as the role of graphics in visualizing and interacting with anatomical representations.
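    One core piece of such a system is a symbolic representation of anatomy that many applications can reuse. The sketch below is an illustrative toy, not the Digital Anatomist's actual schema: it models part-of relations as a graph and answers a simple containment query that an education or clinical application could share.

```python
# Hypothetical part-of relations; a real anatomy ontology has tens of
# thousands of such links and several relation types (part-of, is-a, ...).
PART_OF = {
    "left ventricle": "heart",
    "right ventricle": "heart",
    "heart": "thorax",
    "thorax": "body",
}

def ancestors(structure: str) -> list:
    """All structures that `structure` is (transitively) part of,
    innermost first."""
    chain = []
    while structure in PART_OF:
        structure = PART_OF[structure]
        chain.append(structure)
    return chain

print(ancestors("left ventricle"))  # ['heart', 'thorax', 'body']
```

    Keeping the knowledge in one queryable structure, rather than baked into any single application, is what lets the same information serve teaching, visualization, and clinical use.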

    Real-time haptic modeling and simulation for prosthetic insertion

    In this work a surgical simulator is produced that enables a training otologist to conduct a virtual, real-time prosthetic insertion. The simulator provides the Ear, Nose and Throat surgeon with real-time visual and haptic responses during virtual cochlear implantation into a 3D model of the human Scala Tympani (ST). The parametric model is derived from measured data published in the literature and accounts for human morphological variance, such as differences in cochlear shape, enabling patient-specific pre-operative assessment. Haptic modeling techniques use real physical data and insertion-force measurements to develop a force model that mimics the physical behavior of an implant as it collides with the ST walls during an insertion. Output force profiles are acquired from the insertion studies conducted in this work to validate the haptic model. The simulator provides the user with real-time, quantitative insertion-force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this study may also be of use to implant manufacturers for design enhancements, as well as for training specialists in optimal force administration using the simulator. The paper reports on the methods for anatomical modeling and haptic algorithm development, with a focus on simulator design, development, optimization and validation. The techniques may be transferable to other medical applications that involve prosthetic device insertions where the user's vision is obstructed.
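    The paper's force model is fitted to measured insertion-force data; purely as a generic illustration of how a real-time haptic contact law of this kind is structured, the sketch below uses a penalty (spring-damper) term. The constants and function name are illustrative, not from the study.

```python
def contact_force(penetration_mm: float, velocity_mm_s: float,
                  k: float = 0.05, b: float = 0.002) -> float:
    """Restoring force (N) when the virtual electrode penetrates the wall:
    a spring term k*x plus a damping term b*v, zero when not in contact.
    Evaluated every haptic frame (typically ~1 kHz) so the device can
    render the force in real time. Gains k and b are illustrative."""
    if penetration_mm <= 0.0:
        return 0.0  # no contact, no force
    return k * penetration_mm + b * max(velocity_mm_s, 0.0)

print(contact_force(0.0, 5.0))   # no penetration -> 0.0
print(contact_force(2.0, 10.0))  # spring + damping contributions
```

    A measured-data model like the paper's replaces the hand-picked gains here with parameters fitted to recorded insertion-force profiles.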

    The design of 3D cyberspace as user interface: Advantages and limitations

    Virtual reality propagandists, technologists and the Internet community have long debated the usability of online three-dimensional (3D) environments. Much has been published about the benefits of 3D spaces for human-computer interaction and information visualisation due to their realism (Anders, Kalawsky, Crossley, Davies, McGrath & Rejman-Greene, 1998; Hamit, 1993; Heim, 1992; Aukstakalnis, Blatner & Roth, 1992). This topic also receives continuous industry support, including the standardisation of the Virtual Reality Modeling Language (VRML; VRML Consortium, 1997) and the more recent Macromedia and Intel alliance to bring web 3D to the mainstream (Intel Corporation, 2001). The actual implementation of this technology is, however, still challenging (McCarthy & Descartes, 1998) and minimal, because 3D is too new and is waiting for good design to be discovered (Nielsen, 1998). The practical aim of this project is to fill this niche by creating a functional 3D interface for access to two-dimensional (2D) information, such as text, using VRML. The theoretical aim is to contribute to further research into 3D usability by describing and analysing the design process in terms of possibilities, challenges and limitations.

    Comparative Study of Haptic and Visual Feedback for Kinesthetic Training Tasks

    Haptics is the science of simulating and applying the sense of human touch. Touch sensations are applied with haptic interface devices. The past few years have seen the development of several haptic interface devices, with a wide variety of technologies used in their design. This thesis introduces haptic technologies and includes a survey of haptic interface devices and technologies. An improvement in simulating and applying touch sensation when using the Quanser Haptic Wand with proSense is proposed in this work, using a novel five degree-of-freedom algorithm. This approach uses two additional torques to enhance the three degrees of freedom of force feedback currently available with these products. Modern surgical trainers for laparoscopic surgery incorporate haptic feedback in addition to visual feedback for training. This work presents a quantitative comparison of haptic versus visual training. One of the key results of the study is that haptic feedback is better than visual feedback for kinesthetic navigation tasks.
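    The additional torques can be understood through basic rigid-body mechanics: a contact force applied at an offset from the device handle induces a torque r × F about the handle. The sketch below illustrates only that relation; it is not the thesis's five degree-of-freedom algorithm, and the numbers are hypothetical.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def handle_torque(contact_offset, contact_force):
    """Torque (N*m) to render at the handle for a contact force (N)
    applied at an offset (m) from the handle centre: tau = r x F."""
    return cross(contact_offset, contact_force)

# A 2 N force along +z applied 0.1 m along +x from the handle centre
# produces a torque about the -y axis:
print(handle_torque((0.1, 0.0, 0.0), (0.0, 0.0, 2.0)))  # (0.0, -0.2, 0.0)
```

    Rendering this torque in addition to the force is what distinguishes a five degree-of-freedom display from a purely translational three degree-of-freedom one.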