    Capture4VR: From VR Photography to VR Video

    Virtual reality (VR) enables the display of dynamic visual content with unparalleled realism and immersion. However, VR is still a relatively young medium that requires new ways to author content, particularly visual content captured from the real world. This course therefore provides a comprehensive overview of the latest progress in bringing photographs and video into VR. The techniques, approaches and systems we discuss aim to faithfully capture the visual appearance and dynamics of the real world and to bring them into virtual reality, providing freedom of head motion and motion parallax, a vital depth cue for the human visual system. In this half-day course, we take the audience on a journey from VR photography to VR video that began more than a century ago but has accelerated tremendously in the last five years. We discuss commercial state-of-the-art systems by Facebook, Google and Microsoft, as well as the latest research techniques and prototypes.

    Lightfield Analysis and Its Applications in Adaptive Optics and Surveillance Systems

    An image can only be as good as the optics of the camera or other imaging system allows it to be. An imaging system is a transformation that maps 3D world coordinates to a 2D image plane, and this transformation can be modeled with linear or non-linear transfer functions. Depending on the application at hand, some models of imaging systems are easier to use than others. The best-known models of optical systems are (1) the pinhole model, (2) the thin-lens model and (3) the thick-lens model. Using light-field analysis, the connections between these different models are described, and a novel figure of merit is presented for choosing one optical model over another for a given application. After analyzing these optical systems, their application in plenoptic cameras for adaptive optics is introduced. A new technique that uses a plenoptic camera to extract information about a localized, distorted planar wavefront is described. CODE V simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of extractable angles, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system that tracks a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image-processing algorithms, enabling real-time target tracking. As the target moves out of a region of interest in the master camera, the master camera is moved to force the target back into the region of interest. Once the master camera has moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras. The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets as they move through space with high accuracy.
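
    The pinhole model named above is the simplest of the three: projection reduces to a perspective division of camera-frame coordinates. A minimal sketch of that mapping, for illustration only (function name and values are ours, not code from the thesis):

        import numpy as np

        def pinhole_project(point_cam, focal_length):
            """Ideal pinhole model: a 3D point (X, Y, Z) in camera
            coordinates maps to the image plane as (f*X/Z, f*Y/Z)."""
            X, Y, Z = point_cam
            if Z <= 0:
                raise ValueError("point must lie in front of the camera")
            return np.array([focal_length * X / Z, focal_length * Y / Z])

        # A point 2 m ahead of a camera with a 50 mm focal length
        print(pinhole_project((0.1, 0.05, 2.0), 0.050))  # [0.0025  0.00125]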

    Icarus: aerial recording system

    The goal of Project ICARUS is to create an aerial videography system that is easy to set up, inexpensive, portable, and highly adaptable to any situation. This is accomplished using a balloon-mounted camera rig tethered to the ground by a number of winches. The system can reach higher altitudes than similar systems and is much more cost-effective, and it can be applied to a range of circumstances such as sporting events, disaster relief, wildlife videography, and aerial monitoring, to name a few. The ICARUS system allows the user to control both the position and the orientation of an aerial camera in three-dimensional space using minimal infrastructure. It features a control system that accepts inputs in Cartesian space and provides a live view from the aerial camera.
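
    The abstract does not specify the control law, but commanding a winch-tethered balloon in Cartesian space naturally reduces to computing the cable length each winch must pay out. A sketch under that assumption, ignoring cable sag and stretch, with an invented anchor layout:

        import numpy as np

        # Invented ground-anchor positions (metres) for three winches.
        WINCH_ANCHORS = np.array([[0.0, 0.0, 0.0],
                                  [30.0, 0.0, 0.0],
                                  [15.0, 26.0, 0.0]])

        def cable_lengths(camera_pos):
            """Map a Cartesian camera setpoint to the cable length each
            winch must pay out: the straight-line distance from its
            anchor to the desired camera position."""
            return np.linalg.norm(WINCH_ANCHORS - np.asarray(camera_pos, float), axis=1)

        # Place the camera at x = 15 m, y = 10 m, altitude 20 m
        print(cable_lengths([15.0, 10.0, 20.0]))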

    Modeling and Simulation in Engineering

    This book provides an open platform for scholars, scientists, and engineers from all over the world to establish and share knowledge about applications of modeling and simulation in the product design process across various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques are applied, along with some of the most accurate and sophisticated software for treating complex systems. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to choose the simplifying assumptions so as to reduce the complexity of the model (in order to permit real-time simulation) without compromising the precision of the results.

    LOW-RESOLUTION CUSTOMIZABLE UBIQUITOUS DISPLAYS

    In a conventional display, pixels are confined within the rectangular or circular boundary of the device. This thesis explores moving pixels from a screen into the surrounding environment to form ubiquitous displays. The surrounding environment can include a person, walls, ceiling, and floor. To achieve this goal, we explore the idea of customizable displays: displays whose shape, size, resolution, and location can be customized to fit into the existing infrastructure. Such displays require pixels that can easily be combined into different layouts and that offer installation flexibility. To build highly customizable displays, we need to design pixels with a higher level of independence in their operation. This thesis presents display designs whose pixels range from low to high independence. First, we explore integrating pixels into clothing, using battery-powered tethered LEDs to shine information through pockets. Second, to enable integrating pixels into architectural surroundings, we explore battery-powered untethered pixels that allow displays of different shapes and sizes to be built on a desired surface; the display can show images and animations on the custom configuration. Third, we explore the design of a solar-powered independent pixel that can be integrated into walls or construction materials to form a display; these pixels eliminate the need for explicit recharging. Lastly, we explore the design of a mechanical pixel element that can be embedded into construction material to form display panels; the information on these displays is updated manually when a user brushes over the pixels. Our work takes a step toward pixels with greater operational independence, envisioning a future of displays anywhere and everywhere.
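
    The abstract does not describe the rendering pipeline, but showing an image on a display of arbitrary shape requires resampling the source image at each physical pixel's location. One plausible sketch of that step (function name and layout are invented, not from the thesis):

        import numpy as np

        def sample_for_layout(image, pixel_coords):
            """Nearest-neighbour sampling of a source image at each
            physical pixel's normalized (x, y) location, so a display
            of arbitrary shape can show conventional image content."""
            h, w = image.shape[:2]
            samples = []
            for x, y in pixel_coords:
                col = min(int(x * w), w - 1)
                row = min(int(y * h), h - 1)
                samples.append(image[row, col])
            return np.array(samples)

        # An L-shaped arrangement of five untethered pixels (invented layout)
        layout = [(0.1, 0.1), (0.1, 0.5), (0.1, 0.9), (0.5, 0.9), (0.9, 0.9)]
        frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
        print(sample_for_layout(frame, layout).shape)  # (5, 3)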

    Design and Implementation of the Kinect Controlled Electro-Mechanical Skeleton (K.C.E.M.S)

    Mimicking real-time human motion with a low-cost solution was an extremely difficult task in the past, but the release of the Microsoft Kinect motion capture system has simplified the problem. This thesis discusses the feasibility and design of a simple robotic skeleton that uses the Kinect to mimic human movements in near real time. The goal of this project is to construct a 1/3-scale model of a robotically enhanced skeleton and demonstrate the abilities of the Kinect as a tool for human movement mimicry. The resulting robot was able to mimic many human movements but was mechanically limited in the shoulders, and its movements were slower than real time because the controller could not process motions in real time. This research was presented and published at the 2012 SoutheastCon. Alongside it, research papers on the Formula Hybrid accumulator design and the 2010 autonomous surface vehicle were presented and published.
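
    The abstract does not detail how skeleton data becomes servo commands; a common first step is computing each joint's bend angle from the 3D joint positions a skeletal tracker such as the Kinect reports. A minimal sketch of that computation (coordinates invented):

        import numpy as np

        def joint_angle_deg(parent, joint, child):
            """Bend angle at `joint` between the bones running to
            `parent` and `child` (e.g. shoulder-elbow-wrist gives the
            elbow bend), from 3D skeleton joint positions."""
            u = np.asarray(parent, float) - np.asarray(joint, float)
            v = np.asarray(child, float) - np.asarray(joint, float)
            cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

        # An elbow bent at roughly a right angle (coordinates invented)
        print(joint_angle_deg([0.0, 0.3, 2.0], [0.0, 0.0, 2.0], [0.25, 0.0, 2.0]))  # ~90.0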

    Efficient Distance Accuracy Estimation Of Real-World Environments In Virtual Reality Head-Mounted Displays

    Virtual reality (VR) is a very promising technology with many compelling industrial applications. While many recent advances allow VR technology to be deployed in virtual environments, these systems are still less mature when it comes to rendering real environments. Current VR system settings, developed for rendering virtual environments, fail to adequately address the challenges of capturing and displaying real-world virtual reality. Before such systems can be used in real-life settings, their performance needs to be investigated, specifically depth perception and how distances to objects in the rendered scenes are estimated. Perceived depth is influenced by head-mounted displays (HMDs), which inevitably reduce the depth perception of virtual content, and distances are consistently underestimated in virtual environments (VEs) compared to the real world; the reason behind this underestimation is still not understood.

    This thesis investigates a version of this kind of system that, to the best of the author's knowledge, has not been explored in previous research, which used computer-generated scenes. This work examines distance estimation in real environments rendered to head-mounted displays, where distance estimation remains among the most challenging and least understood issues. The thesis introduces a dual-camera video feed system displayed through a virtual reality head-mounted display, with two models, a video-based model and a static photo-based model, whose purpose is to explore whether the misjudgment of distances in HMDs could be due to a lack of realism. Distance judgment performance in the real world and in the two VE models was compared using protocols already proven to accurately measure real-world distance estimation. An improved model was then developed that enhances the field of view (FOV) of the displayed scenes to improve distance judgments when displaying real-world VR content on HMDs; it mitigates the limited FOV, one of the primary suspected causes of distance underestimation, especially the mismatch between the camera's and the HMD's fields of view. The proposed model uses a set of two cameras to generate the video, instead of the hundreds of input cameras, or tens of cameras mounted on a circular rig, used in previous works in the literature.

    Results from the first implementation of this system found that underestimation was smaller when the model was rendered as a static photo than with the live video feed: the video-based (real + HMD) model and the static photo-based (real + photo + HMD) model averaged 80.2% and 81.4% of the actual distance, respectively, compared to real-world estimations, which averaged 92.4%. The improved approach (real + HMD + FOV) was compared to these two models and showed an improvement of 11%, increasing estimation accuracy from 80% to 91% and reducing the estimation error from 1.29% to 0.56%. These results present strong evidence of the need for novel methods to improve distance estimation in real-world VR content systems and provide effective initial work toward this goal.
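
    The thesis's exact FOV-matching procedure is not given in the abstract. For a rectilinear camera image shown in an HMD, the standard tangent-ratio rule determines how much of the display's view the image should occupy for objects to subtend their real-world visual angle; a sketch with illustrative FOV values, not the author's method:

        import math

        def image_width_fraction(camera_hfov_deg, hmd_hfov_deg):
            """Fraction of the HMD's horizontal view a camera image
            should occupy so objects subtend the same visual angle as
            in the real world (rectilinear/pinhole approximation)."""
            return (math.tan(math.radians(camera_hfov_deg / 2))
                    / math.tan(math.radians(hmd_hfov_deg / 2)))

        # Illustrative values only: a 90-degree camera in a 110-degree HMD
        print(round(image_width_fraction(90.0, 110.0), 3))  # 0.7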