
    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs surround users with virtual worlds by projecting 3D models onto multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, lower sensitivity to tracking errors, and richer communication among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, and a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
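    The abstract does not detail how the skeletal data from the multiple Kinect sensors are combined. A minimal sketch of one plausible scheme, fusing each joint as a confidence-weighted average of the sensors' readings (names and weighting are illustrative assumptions, not the paper's actual algorithm):

```python
def fuse_joint(readings):
    """readings: list of (position_xyz, confidence) tuples for one joint."""
    total = sum(c for _, c in readings)
    if total == 0:
        return None  # joint not tracked by any sensor
    # Confidence-weighted average of each coordinate.
    return tuple(
        sum(p[i] * c for p, c in readings) / total
        for i in range(3)
    )

def fuse_skeleton(skeletons):
    """skeletons: list of dicts mapping joint_name -> (xyz, confidence)."""
    joints = set().union(*(s.keys() for s in skeletons))
    return {j: fuse_joint([s[j] for s in skeletons if j in s])
            for j in joints}
```

    Weighting by confidence lets a sensor with a clear view of the user dominate joints that another sensor sees only at a poor angle.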

    SURGNET: An Integrated Surgical Data Transmission System for Telesurgery

    Remote surgery requires quick and reliable transmission of information between the surgeon and the patient site. However, the networks that interconnect the surgeon and patient sites are usually time-varying and lossy, which can cause packet loss and delay jitter. In this paper we propose SURGNET, a telesurgery system for which we developed the architecture and algorithms and implemented them on a testbed. The algorithms include adaptive packet prediction and buffer-time adjustment techniques, which reduce the negative effects caused by lossy and time-varying networks. To evaluate the proposed SURGNET system, at the therapist site we implemented a therapist panel which controls the force-feedback device movements and provides image analysis functionality. At the patient site we controlled a virtual reality applet built in Matlab. The varying network conditions were emulated using the NISTNet emulator. Our results show that, even under severe packet loss and variable delay jitter, the proposed integrated synchronization techniques significantly improve SURGNET's performance.
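    The buffer-time adjustment idea can be sketched with a standard adaptive playout controller: track smoothed network delay and jitter with exponentially weighted moving averages (in the style of RTP's jitter estimate, RFC 3550) and size the buffer a few jitter deviations above the mean delay. The constants and class below are illustrative assumptions, not SURGNET's actual algorithm:

```python
class AdaptiveJitterBuffer:
    def __init__(self, alpha=0.125, beta=0.25, k=4.0):
        self.alpha = alpha  # smoothing factor for mean delay
        self.beta = beta    # smoothing factor for jitter
        self.k = k          # how many jitter deviations of headroom
        self.mean_delay = None
        self.jitter = 0.0

    def observe(self, delay_ms):
        """Update estimates with one measured one-way delay; return new buffer time."""
        if self.mean_delay is None:
            self.mean_delay = delay_ms
        else:
            deviation = abs(delay_ms - self.mean_delay)
            self.jitter += self.beta * (deviation - self.jitter)
            self.mean_delay += self.alpha * (delay_ms - self.mean_delay)
        return self.buffer_time()

    def buffer_time(self):
        """Playout buffer target: mean delay plus k jitter deviations."""
        return self.mean_delay + self.k * self.jitter
```

    A larger `k` trades extra latency for fewer late packets; a telesurgery system must keep both small, which is why adaptive (rather than fixed) buffering matters on time-varying links.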

    Case study of virtual reality in CNC machine tool exhibition

    Exhibitions and demonstrations are commonly used in the promotion and sales support of manufactured products. However, transporting the real goods from the vendor's factory to the exposition venue is generally expensive for large and heavy machinery. With advances in computing, graphics, mobile apps, and mobile hardware, 3D visualization technology has become increasingly popular in visually assisted communication such as games. Virtual reality (VR) technology has therefore attracted great attention for emulating expensive small equipment and large, heavy equipment. Virtual reality can be characterized as a 3D extension offering immersion, interaction, and imagination. This paper focuses on the use of virtual reality to assist the demonstration and exhibition of CNC machine tools. A commercial CNC machine tool was used in this study to illustrate the effectiveness and usability of virtual reality for an exhibition. The adopted CNC machine tool is a large and heavy mill-turn machine, up to eleven meters wide and weighing about 35 tons. A head-mounted display (HMD) was attached to the developed VR CNC machine tool for immersive viewing. A user can look around the 3D scene of the large mill-turn machine, and the operation of the virtual CNC machine can be actuated by bare hand. Coolant was added to make the operation more realistic, and a collision-detection function was added to alert the operator. The developed VR demonstration system was presented at the 2017 Taipei International Machine Tool Show (TIMTOS 2017). This case study showed that young engineers and students are very impressed by the VR-based demonstration, while older visitors could not easily adapt to the VR scene because of eyesight issues. Nevertheless, virtual reality was successfully adopted and integrated with a CNC machine tool at an international show. A motion simulation of another machine tool, a laser-assisted milling machine, was also successfully conducted to show the expandability of the VR-based technology. One can conclude that VR will receive more and more attention in helping promote CNC machine tools. Further study could extend the approach to education and training systems, as well as to maintenance systems.

    Simulation of Visual Servoing in Grasping Objects Moving by Newtonian Dynamics

    Robot control systems and other manufacturing equipment are traditionally closed systems. This circumstance has hampered the system integration of manipulators, sensors, and other equipment, and such integration has often been made at an unsuitably high hierarchical level. With the aid of vision, visual feedback is used to guide the robot manipulator to the target. This hand-to-target task is fairly easy if the target is static in Cartesian space. However, if the target is in motion, a model of its dynamic behaviour is required for the robot to track and intercept it. The purpose of this project is to show, through simulation in a virtual environment, how to organise robot control systems with sensor integration. The simulation involves catching a thrown virtual ball using a six degree-of-freedom virtual robot and two virtual digital cameras. Tasks executed in this project include the placement of the virtual digital cameras, segmentation and tracking of the moving virtual ball, and model-based prediction of the virtual ball's trajectory. Consideration has to be given to the placement of the virtual digital cameras so that the whole trajectory of the ball can be captured by both cameras simultaneously. In order to track the trajectory of the virtual ball, the image of the ball captured by the cameras has to be segmented from its background. A model is then developed to predict the trajectory of the virtual ball so that the virtual robot can be controlled to align itself to grasp the moving ball.
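    The model-based prediction step amounts to extrapolating a ballistic (constant-gravity) trajectory from timed observations. A minimal sketch, assuming a z-up frame with metric units and an estimator built from just two position samples (the thesis's estimator may differ):

```python
G = 9.81  # m/s^2, gravity acting along -z

def estimate_velocity(p0, t0, p1, t1):
    """Initial velocity at t0 from two timed position samples (x, y, z)."""
    dt = t1 - t0
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    # z(t) = z0 + vz*dt - 0.5*G*dt^2  =>  vz = (dz + 0.5*G*dt^2) / dt
    vz = (p1[2] - p0[2] + 0.5 * G * dt * dt) / dt
    return (vx, vy, vz)

def predict(p0, v0, t):
    """Ball position t seconds after the state (p0, v0) under Newtonian dynamics."""
    return (p0[0] + v0[0] * t,
            p0[1] + v0[1] * t,
            p0[2] + v0[2] * t - 0.5 * G * t * t)
```

    In practice more than two noisy camera samples would be fitted (e.g. by least squares), but the extrapolation formula the robot aligns itself against is the same.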

    Real-time immersive human-computer interaction based on tracking and recognition of dynamic hand gestures

    With the fast development and ever-growing use of computer-based technologies, human-computer interaction (HCI) plays an increasingly pivotal role. In virtual reality (VR), HCI technologies provide not only a better understanding of three-dimensional shapes and spaces, but also sensory immersion and physical interaction. Hand-based HCI is a key modality for object manipulation and gesture-based communication, yet providing users with a natural, intuitive, effortless, precise, and real-time method for HCI based on dynamic hand gestures is challenging: hand postures are formed by multiple joints with high degrees of freedom, hand movements are fast with highly variable trajectories and rapid direction changes, and interaction between hands and objects in the virtual world demands precision. Presented in this thesis is the design and development of a novel real-time HCI system based on a unique combination of a pair of data gloves with fibre-optic curvature sensors to acquire finger joint angles, a hybrid inertial-ultrasonic tracking system to capture hand position and orientation, and a stereoscopic display system to provide immersive visual feedback. The potential and effectiveness of the proposed system are demonstrated through a number of applications, namely hand-gesture-based virtual object manipulation and visualisation, direct sign writing, and finger spelling. For virtual object manipulation and visualisation, the system is shown to allow a user to select, translate, rotate, scale, release and visualise virtual objects (presented using graphics and volume data) in three-dimensional space using natural hand gestures in real-time.
For direct sign writing, the system is shown to immediately display the SignWriting symbols signed by a user, using three different signing sequences and a range of complex hand gestures. These consist of various combinations of hand postures (with each finger open, half-bent, or closed, and in adduction or abduction), eight hand orientations in the horizontal/vertical planes, three palm-facing directions, and various hand movements (which can take eight directions in the horizontal/vertical planes and can be repetitive, straight/curved, or clockwise/anti-clockwise). The development includes a special visual interface that gives not only a stereoscopic view of hand gestures and movements, but also structured visual feedback for each stage of the signing sequence. An excellent basis is therefore formed for developing a full HCI based on all human gestures by integrating the proposed system with facial expression and body posture recognition methods. Furthermore, for finger spelling, the system is shown to recognise five vowels signed by two hands using British Sign Language in real-time.
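    The posture vocabulary above (each finger open, half-bent, or closed) suggests how glove readings might map to discrete states. A toy sketch, classifying one finger by its summed joint flexion; the thresholds and labels are hypothetical, not the thesis's trained recogniser:

```python
def finger_state(joint_angles_deg):
    """Classify one finger's posture from its joints' flexion angles (degrees).

    Thresholds are illustrative assumptions.
    """
    bend = sum(joint_angles_deg)
    if bend < 40:
        return "open"
    if bend < 120:
        return "half-bent"
    return "closed"
```

    A real recogniser would also use the tracker's position/orientation stream to distinguish movements and orientations, and would be calibrated per user rather than using fixed thresholds.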

    B.O.G.G.L.E.S.: Boundary Optical GeoGraphic Lidar Environment System

    The purpose of this paper is to describe a pseudo X-ray vision system that pairs a Lidar scanner with a visualization device. The system as a whole is referred to as B.O.G.G.L.E.S. There are several key factors that went into the development of this system, and the background information and design approach are thoroughly described. B.O.G.G.L.E.S.'s functionality is depicted through its design constraints and the analysis of test results. Additionally, several possible developments for B.O.G.G.L.E.S. are proposed, indicating various avenues of improvement that could be pursued in the future.

    Shader optimization and specialization

    In the field of real-time graphics for computer games, performance has a significant effect on the player’s enjoyment and immersion. Graphics processing units (GPUs) are hardware accelerators that run small parallelized shader programs to speed up computationally expensive rendering calculations. This thesis examines optimizing shader programs and explores ways in which data patterns on both the CPU and GPU can be analyzed to automatically speed up rendering in games. Initially, the effect of traditional compiler optimizations on shader source code was explored. Techniques such as loop unrolling or arithmetic reassociation provided speed-ups on several devices, but different GPU hardware responded differently to each set of optimizations. Analyzing execution traces from numerous popular PC games revealed that much of the data passed from CPU-based API calls to GPU-based shaders is either unused or remains constant. A system was developed to capture this constant data and fold it into the shaders’ source code. Re-running the games’ rendering code using these specialized shader variants resulted in performance improvements in several commercial games without impacting their visual quality.
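    The folding step can be illustrated at the source level: given uniform values captured from API traces, rewrite `uniform` declarations in GLSL text as `const` initializations so the compiler can constant-fold through them. The thesis's system presumably operates on real shader toolchains; this string rewrite is only a toy sketch of the idea:

```python
import re

def specialize(source, constants):
    """Fold captured uniform values into GLSL source.

    constants: dict mapping uniform name -> GLSL literal string.
    Uniforms not present in `constants` are left untouched.
    """
    def fold(match):
        _, type_, name = match.groups()
        if name in constants:
            # Replace the uniform with a compile-time constant.
            return f"const {type_} {name} = {constants[name]};"
        return match.group(0)

    return re.sub(r"(uniform)\s+(\w+)\s+(\w+)\s*;", fold, source)
```

    After specialization, branches and arithmetic that depend only on the folded value can be eliminated by the shader compiler, which is where the measured speed-ups would come from.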

    PolyVR - A Virtual Reality Authoring Framework for Engineering Applications

    Virtual reality is a fantastic place, free of constraints and full of possibilities. For engineers it is the perfect place to experience science and technology, yet the infrastructure to make virtual reality accessible, especially for engineering applications, is missing. This work describes the creation of a software environment that enables simpler development of virtual-reality applications and their deployment on immersive hardware setups. Virtual engineering, the use of virtual environments for design reviews during the product development process, is only very rarely employed, especially by small and medium-sized enterprises. The main reasons are no longer the high cost of professional virtual-reality hardware, but the lack of automated virtualization workflows and the high maintenance and software-development costs. An important aspect of automating virtualization is the integration of intelligence into artificial environments. Ontologies are the foundation of human understanding and intelligence. Categorizing our universe into concepts, properties, and rules is a fundamental step in processes such as observation, learning, and knowing. This work aims to take a step toward a broader use of virtual-reality applications in all areas of science and engineering. The approach is to build a virtual-reality authoring tool, a software package that simplifies the creation of virtual worlds and their deployment on advanced immersive hardware environments such as distributed visualization systems. A further goal of this work is to enable the intuitive authoring of semantic elements in virtual worlds. This should revolutionize the creation of virtual content and the possibilities for interaction.
Intelligent immersive environments are the key to fostering learning and training in virtual worlds, to planning and monitoring processes, and to paving the way for entirely new interaction paradigms.