
    Realization Of A Spatial Augmented Reality System - A Digital Whiteboard Using a Kinect Sensor and a PC Projector

    Recent rapid development of cost-effective, accurate digital imaging sensors, high-speed computational hardware, and tractable design software has given rise to the growing field of augmented reality in the computer vision realm. The design of a 'Digital Whiteboard' system is presented with the intention of realizing a practical, cost-effective and publicly available spatial augmented reality system. A Microsoft Kinect sensor and a PC projector coupled with a desktop computer form a type of spatial augmented reality system that creates a projection-based graphical user interface that can turn any wall or planar surface into a 'Digital Whiteboard'. The system supports two kinds of user input, based on depth and infra-red information. An infra-red collimated light source, such as a laser pointer pen, serves as a stylus for user input. The user points the infra-red stylus at the selected planar region, and the reflection of the infra-red light is registered by the system using the infra-red camera of the Kinect. Using the geometric transformation between the Kinect and the projector, obtained through system calibration, the projector displays contours corresponding to the movement of the stylus on the 'Digital Whiteboard' region, according to a smooth curve-fitting algorithm. The described projector-based spatial augmented reality system provides new and unique possibilities for user interaction with digital content.
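    The Kinect-to-projector mapping described in this abstract can be sketched as a planar homography fitted at calibration time and then applied to every detected stylus position. The sketch below is a minimal illustration, not the authors' implementation: the corner coordinates, the projector resolution, and the DLT-based fit are all assumptions made for the example.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT algorithm.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The smallest right singular vector solves A h = 0 in the least-squares sense.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def map_point(H, pt):
    """Apply homography H to a 2D point, including the perspective division."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical calibration: the four corners of the whiteboard region as seen
# by the Kinect IR camera, and the same corners in projector pixel coordinates.
ir_corners   = np.array([[102.0, 88.0], [530.0, 95.0], [540.0, 410.0], [95.0, 400.0]])
proj_corners = np.array([[0.0, 0.0], [1280.0, 0.0], [1280.0, 800.0], [0.0, 800.0]])

H = fit_homography(ir_corners, proj_corners)
stylus_ir = np.array([320.0, 250.0])    # a detected IR reflection
stylus_proj = map_point(H, stylus_ir)   # where the projector should draw
```

    With four or more corner correspondences the fit is exact up to noise, so the same `H` can be reused for every frame until the projector or sensor moves.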

    A software framework for the development of projection-based augmented reality systems

    Despite the large number of methods and applications of augmented reality, there is little homogenization of the software platforms that support them. An exception may be the low-level control software provided by some high-profile vendors such as Qualcomm and Metaio; however, these provide fine-grained modules for, e.g., element tracking. We are more concerned with the application framework, which includes the control of the devices working together for the development of the AR experience. In this paper we present a software framework that can be used for the development of AR applications based on camera-projector pairs, and that is suitable for both fixed and nomadic setups.

    Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Recent advances in imaging sensors and digital light projection technology have facilitated rapid progress in 3D optical sensing, enabling 3D surfaces of complex-shaped objects to be captured with improved resolution and accuracy. However, due to the large number of projection patterns required for phase recovery and disambiguation, the maximum frame rates of current 3D shape measurement techniques are still limited to the range of hundreds of frames per second (fps). Here, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can capture 3D surfaces of transient events at up to 10,000 fps based on our newly developed high-speed fringe projection system. Compared with existing techniques, μFTP has the prominent advantage of recovering an accurate, unambiguous, and dense 3D point cloud with only two projected patterns. Furthermore, the phase information is encoded within a single high-frequency fringe image, thereby allowing motion-artifact-free reconstruction of transient events with a temporal resolution of 50 microseconds. To show μFTP's broad utility, we use it to reconstruct 3D videos of four transient scenes: vibrating cantilevers, rotating fan blades, a bullet fired from a toy gun, and a balloon's explosion triggered by a flying dart, which were previously difficult or even impossible to capture with conventional approaches.
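    The single-fringe phase recovery at the heart of Fourier Transform Profilometry can be illustrated in one dimension: isolate the carrier lobe in the Fourier spectrum, shift it to baseband, and read the wrapped phase from the resulting analytic signal. This is a generic FTP sketch with invented signal parameters, not the μFTP pipeline itself.

```python
import numpy as np

def ftp_phase(fringe, carrier_freq, half_width):
    """Recover the wrapped phase of a 1D fringe signal by Fourier Transform
    Profilometry: band-pass the positive carrier lobe, invert the transform,
    and remove the carrier to leave only the phase modulation."""
    n = len(fringe)
    spec = np.fft.fft(fringe)
    # Keep only the bins around the positive carrier frequency.
    mask = np.zeros(n)
    mask[carrier_freq - half_width : carrier_freq + half_width + 1] = 1.0
    analytic = np.fft.ifft(spec * mask)
    # Dividing out the carrier leaves (B/2) * exp(i * phase); take the angle.
    x = np.arange(n)
    return np.angle(analytic * np.exp(-2j * np.pi * carrier_freq * x / n))

# Hypothetical test signal: a 16-cycle carrier modulated by a smooth phase.
n = 1024
x = np.arange(n)
true_phase = 1.2 * np.sin(2 * np.pi * x / n)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * 16 * x / n + true_phase)

wrapped = ftp_phase(fringe, carrier_freq=16, half_width=8)
```

    In a real profilometer the same filtering is done per image row, and the wrapped phase is then unwrapped and converted to height through the system calibration.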

    Developing Interaction 3D Models for E-Learning Applications

    Some issues concerning the development of interactive 3D models for e-learning applications are considered. Given that 3D data sets are normally large and interactive display demands high-performance computation, a natural solution is to place the computational burden on the client machine rather than on the server. Mozilla and Google opted for a combination of client-side technologies, JavaScript and OpenGL, to handle 3D graphics in a web browser (Mozilla 3D and O3D respectively). Based on the O3D model, core web technologies are considered, and an example of the full process, involving the generation of a 3D model and its interactive visualization in a web browser, is described. The challenging issue of creating realistic 3D models of objects in the real world is discussed, and a method based on line projection for fast 3D reconstruction is presented. The generated model is then visualized in a web browser. The experiments demonstrate that visualization of 3D data in a web browser can provide a quality user experience. Moreover, the development of web applications is facilitated by the O3D JavaScript extension, allowing web designers to focus on 3D content generation.
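    A line-projection reconstruction of the kind mentioned in this abstract typically triangulates each lit pixel by intersecting its viewing ray with the calibrated plane of projected light. The sketch below assumes a simple pinhole camera at the origin and an invented light-plane calibration; it illustrates the general principle rather than the paper's specific method.

```python
import numpy as np

def triangulate_on_plane(ray_dir, plane_n, plane_d):
    """Intersect a camera ray (through the origin, direction ray_dir) with the
    light plane {p : plane_n . p = plane_d}; returns the 3D intersection point."""
    t = plane_d / np.dot(plane_n, ray_dir)
    return t * np.asarray(ray_dir, dtype=float)

# Hypothetical setup: pinhole camera with focal length f (in pixels) and a
# laser/light plane calibrated in the camera frame.
f = 800.0
plane_n = np.array([0.5, 0.0, 1.0])   # plane normal (camera frame), assumed
plane_d = 900.0                        # plane offset so that n . p = d

def pixel_to_3d(u, v, cx=320.0, cy=240.0):
    """Back-project the pixel where the projected line is seen into 3D."""
    ray = np.array([u - cx, v - cy, f])   # viewing ray for pixel (u, v)
    return triangulate_on_plane(ray, plane_n, plane_d)

point = pixel_to_3d(400.0, 300.0)
```

    Sweeping the line across the object and repeating this intersection per frame yields the dense point set that is later meshed and textured for display in the browser.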

    Description and performance of the Langley differential maneuvering simulator

    The differential maneuvering simulator for simulating two aircraft or spacecraft operating in a differential mode is described. Tests made to verify that the system could provide the required simulated aircraft motions are reported. The mathematical model which converts computed aircraft motions into the required motions of the various projector gimbals is described.

    Intelligent composite layup by the application of low cost tracking and projection technologies

    Hand layup is still the dominant forming process for the creation of the widest range of complex-geometry and mixed-material composite parts. However, this process is still poorly understood and poorly informed, limiting productivity. This paper seeks to address this issue by proposing a novel, low-cost system that enables a laminator to be guided in real time, based on a predetermined instruction set, thus improving the standardisation of produced components. Within this paper the current methodologies are critiqued and future trends are predicted, prior to introducing the required inputs and outputs and developing the implemented system. As a demonstrator, a U-shaped component typical of the complex geometry found in many difficult-to-manufacture composite parts was chosen, and its drapeability was assessed using a kinematic drape simulation tool. An experienced laminator's knowledge base was then used to divide the tool into a finite number of features, with layup conducted by projecting and sequentially highlighting target features while tracking the laminator's hand movements across the ply. The system has been implemented with affordable hardware and demonstrates tangible benefits in comparison to currently employed laser-based systems. It has shown remarkable success to date, with rapid Technology Readiness Level advancement, and is a major stepping stone towards augmenting manual labour, with further benefits including more appropriate automation.

    A new protocol for texture mapping process and 2d representation of rupestrian architecture

    The development of survey techniques for architecture and archaeology requires a general review of the methods used for the representation of numerical data. The possibilities offered by data processing make it possible to find new paths for studying issues connected to the drawing discipline. The research project aimed at experimenting with different approaches to the representation of rupestrian architecture and to the texture mapping process. The nature of rupestrian architecture does not allow a traditional representation of sections and projections of edges and outlines. The paper presents a method, Equidistant Multiple Sections (EMS), inspired by cartography and based on the use of isohypses generated from different geometric planes. A specific section is dedicated to the texture mapping process for unstructured surface models. One of the main difficulties in image projection is the recognition of homologous points between the image and the point cloud, especially in the areas with the most deformation. With the aid of the “virtual scan” tool, a different procedure was developed to improve the correspondences of the image. The results show a considerable improvement of the entire process, above all for the architectural vaults. A detailed study concerned the unfolding of ruled surfaces: the barrel vault of the analyzed chapel was unfolded in order to observe the paintings in their real shapes, outside their morphological context.
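    The idea behind Equidistant Multiple Sections, taking section curves at evenly spaced parallel planes, can be approximated on a point cloud by collecting the points lying within a small tolerance of each plane, much as isohypses slice terrain at fixed elevations. The function, spacing, and synthetic cloud below are illustrative assumptions, not the EMS implementation.

```python
import numpy as np

def equidistant_sections(points, axis=2, spacing=0.25, tol=0.02):
    """Group the points of a cloud into thin slabs around equidistant planes
    perpendicular to `axis`. Returns a dict mapping each plane height to the
    (k, 3) array of points within `tol` of that plane."""
    coords = points[:, axis]
    lo, hi = coords.min(), coords.max()
    heights = np.arange(np.ceil(lo / spacing) * spacing, hi + 1e-9, spacing)
    return {h: points[np.abs(coords - h) <= tol] for h in heights}

# Hypothetical cloud: random points on a vertical cylinder, a crude stand-in
# for a scanned rupestrian surface.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 5000)
z = rng.uniform(0.0, 2.0, 5000)
cloud = np.column_stack([np.cos(theta), np.sin(theta), z])

sections = equidistant_sections(cloud, spacing=0.5, tol=0.01)
```

    Each slab can then be projected onto its plane and traced as a closed polyline, giving the stack of section curves that a traditional plan/section drawing cannot capture for free-form rock-cut spaces.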

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, lower sensitivity to tracking errors, and greater communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
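    Combining skeletal data from several Kinect sensors, as this abstract mentions, amounts to moving every sensor's joint estimates into a common frame using known extrinsics and merging them. The rigid transforms and joint positions below are invented for illustration; the paper's actual calibration and fusion logic is not reproduced here.

```python
import numpy as np

def to_world(points, extrinsic):
    """Transform (N, 3) sensor-frame points into the shared CAVE frame using a
    4x4 rigid-body extrinsic (assumed known from a prior calibration step)."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (extrinsic @ homo.T).T[:, :3]

def fuse_skeletons(skeletons, extrinsics):
    """Average the same skeleton as seen by several Kinects, after moving each
    sensor's joint positions into the common frame."""
    world = [to_world(s, E) for s, E in zip(skeletons, extrinsics)]
    return np.mean(world, axis=0)

# Hypothetical rig: two Kinects, the second rotated 90 degrees about the
# vertical axis and shifted along x.
E0 = np.eye(4)
E1 = np.array([[ 0.0, 0.0, 1.0, -3.0],
               [ 0.0, 1.0, 0.0,  0.0],
               [-1.0, 0.0, 0.0,  0.0],
               [ 0.0, 0.0, 0.0,  1.0]])

joints_world = np.array([[0.0, 1.6, 2.0], [0.0, 1.2, 2.0]])  # head, torso
# Each sensor observes the joints in its own frame (inverse of its extrinsic).
obs0 = to_world(joints_world, np.linalg.inv(E0))
obs1 = to_world(joints_world, np.linalg.inv(E1))

fused = fuse_skeletons([obs0, obs1], [E0, E1])
```

    A production system would additionally weight each sensor by tracking confidence and handle joints visible to only one Kinect, but the frame-alignment step is the core of any multi-sensor fusion.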