
    Pseudo-Dolly-In Video Generation Combining 3D Modeling and Image Reconstruction

    This paper proposes a pseudo-dolly-in video generation method that reproduces motion parallax by applying image reconstruction to multi-view videos. Because dolly-in video is shot by moving a camera forward, it reproduces motion parallax and conveys a sense of immersion. However, at a sporting event in a large-scale space, moving a camera is difficult. Our research generates dolly-in video from multi-view images captured by fixed cameras. Dolly-in video can be generated by applying Image-Based Modeling, but the video quality is often degraded by 3D estimation errors. Bullet-Time, on the other hand, achieves high-quality video observation, but moving the virtual viewpoint away from the capturing positions is difficult. To solve these problems, we propose a method that generates a pseudo-dolly-in image by incorporating 3D estimation and image reconstruction techniques into Bullet-Time, and we show its effectiveness by applying it to multi-view videos captured at an actual soccer stadium. In the experiment, we compared the proposed method with digital zoom images and with dolly-in video generated by an Image-Based Modeling and Rendering method.
    Published in: 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct). Date of Conference: 9-13 Oct. 2017. Conference Location: Nantes, France.
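
    The underlying operation, back-projecting each pixel with an estimated depth map and re-projecting it for a forward-shifted virtual camera, can be sketched in a few lines of Python. This is a minimal illustration under assumed inputs (an RGB image, a per-pixel depth map, pinhole intrinsics, and a forward translation dz), not the authors' pipeline:

    import numpy as np

    def pseudo_dolly_in(image, depth, fx, fy, cx, cy, dz):
        # Back-project every pixel to a 3D point in the camera frame.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        # Dolly the virtual camera forward by dz: scene depth decreases.
        z_new = np.maximum(depth - dz, 1e-6)
        # Re-project into the virtual view and splat the colours.
        u_new = np.round(fx * x / z_new + cx).astype(int)
        v_new = np.round(fy * y / z_new + cy).astype(int)
        out = np.zeros_like(image)
        valid = (u_new >= 0) & (u_new < w) & (v_new >= 0) & (v_new < h)
        out[v_new[valid], u_new[valid]] = image[v[valid], u[valid]]
        return out  # unfilled pixels are the disocclusions to reconstruct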

    A compressive light field projection system

    For about a century, researchers and experimentalists have strived to bring glasses-free 3D experiences to the big screen. Much progress has been made, and light field projection systems are now commercially available. Unfortunately, available display systems usually employ dozens of devices, making such setups costly, energy-inefficient, and bulky. We present a compressive approach to light field synthesis with projection devices. For this purpose, we propose a novel, passive screen design that is inspired by angle-expanding Keplerian telescopes. Combined with high-speed light field projection and nonnegative light field factorization, we demonstrate that compressive light field projection is possible with a single device. We build a prototype light field projector and angle-expanding screen from scratch, evaluate the system in simulation, present a variety of results, and demonstrate that the projector can alternatively achieve super-resolved and high-dynamic-range 2D image display when used with a conventional screen.
    Funding: MIT Media Lab Consortium; Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); National Science Foundation (U.S.) (NSF grant 0831281).
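
    The nonnegative light field factorization mentioned above is, at its core, a nonnegative matrix factorization. Below is a minimal sketch using Lee-and-Seung-style multiplicative updates, assuming the light field has been flattened into a matrix L of view directions by screen pixels; the paper's actual factorization is tailored to its projector and screen optics:

    import numpy as np

    def nonnegative_factorization(L, rank, iters=200, eps=1e-9):
        # Approximate L >= 0 by A @ B with A, B >= 0 (rank-limited).
        m, n = L.shape
        rng = np.random.default_rng(0)
        A = rng.random((m, rank))
        B = rng.random((rank, n))
        for _ in range(iters):
            # Multiplicative updates preserve nonnegativity by construction.
            B *= (A.T @ L) / (A.T @ A @ B + eps)
            A *= (L @ B.T) / (A @ B @ B.T + eps)
        return A, B

    In a compressive display, each of the rank components would correspond to one time-multiplexed pattern pair; shown in rapid succession, their sum is perceived as the target light field.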

    MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images

    We introduce a method to convert stereo 360° (omnidirectional stereo) imagery into a layered, multi-sphere image representation for six degree-of-freedom (6DoF) rendering. Stereo 360° imagery can be captured from multi-camera systems for virtual reality (VR), but it lacks motion parallax and correct-in-all-directions disparity cues. Together, these shortcomings can quickly lead to VR sickness when viewing content. One solution is to generate a format suitable for 6DoF rendering, such as by estimating depth; however, this raises the question of how to handle disoccluded regions in dynamic scenes. Our approach is to simultaneously learn depth and disocclusions via a multi-sphere image representation, which can be rendered with correct 6DoF disparity and motion parallax in VR. This significantly improves comfort for the viewer, and the representation can be inferred and rendered in real time on modern GPU hardware. Together, these advances move VR video towards being a more comfortable immersive medium.
    Comment: 25 pages, 13 figures. Published at the European Conference on Computer Vision (ECCV 2020). Project page: http://visual.cs.brown.edu/matryodshk
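
    Rendering a multi-sphere image reduces, for each viewing ray, to standard back-to-front "over" compositing of the layered RGB-alpha samples. A minimal sketch for a single ray follows, with hypothetical array shapes; the actual renderer intersects each ray with the concentric spheres to obtain these samples:

    import numpy as np

    def composite_ray(rgb, alpha):
        """rgb: (layers, 3) colours, alpha: (layers,), ordered far to near."""
        colour = np.zeros(3)
        for c, a in zip(rgb, alpha):
            colour = c * a + colour * (1.0 - a)  # "over" operator
        return colour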

    Beyond the limits of the screen: the integrated future of industrial design and interface innovation

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Bellas Artes, defended 27-11-2019.
    The goals of this thesis are to streamline the design process of CDDs for both their hardware and software, to simplify the process of their conception, creation, and production, and to motivate the design and interaction innovations for the next generation of CDDs. In the course of investigating the design history of CDDs, we noticed an increasing bi-directional influence between the graphical interface design and the industrial design of these products. We worked from the hypothesis: "A connection point between classical industrial design theories and modern innovations in the world of interface design can be found, and the future of the CDD requires a universal design system for both its hardware and software." In order to put this hypothesis into practice, it is important to clarify the generic and specific objectives...

    Guitars with Ambisonic Spatial Performance (GASP) An immersive guitar system

    The GASP project investigates the design and realisation of an Immersive Guitar System. It brings together a range of sound processing and spatialising technologies and applies them to a specific musical instrument – the Electric Guitar. GASP is an ongoing innovative audio project, fusing the musical with the technical and combining the processing of each string’s output (which we call timbralisation) with spatial sound. It is also an artistic musical project, where space becomes a performance parameter, providing new experimental immersive sound production techniques for the guitarist and music producer. Several ways of reimagining the electric guitar as an immersive sounding instrument have been considered, the primary method using Ambisonics. Additionally, some complementary performance and production techniques have emerged from the use of divided pickups, supporting both immersive live performance and studio post-production. GASP Live offers performers and audiences new real-time sonic-spatial perspectives: the guitarist or a Live GASP producer can have real-time control of timbral, spatial, and other performance features, such as timbral crossfading, switching of split-timbres across strings, spatial movement where Spatial Patterns may be selected and modulated, control of Spatial Tempo, and real-time performance re-tuning. For GASP recording and post-production, individual string note patterns may be visualised in the Reaper DAW, from which analyses and judgements can be made to inform post-production decisions for timbralisation and spatialisation. An appreciation of auditory grouping and perceptual streaming (Bregman, 1994) has informed GASP production ideas. For performance monitoring or recorded playback, the immersive audio would typically be heard over a circular array of loudspeakers, or over headphones with head-tracked binaural reproduction. This paper discusses the design of the system and its elements, investigates other applications of divided pickups, namely GASP’s Guitarpeggiator, and reflects on productions made so far.
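
    As one plausible building block of such a system (not GASP's published code), each divided-pickup string signal can be encoded into first-order Ambisonic B-format with the standard encoding equations, where azimuth and elevation give the string's virtual source direction:

    import numpy as np

    def encode_bformat(signal, azimuth, elevation):
        # First-order (FuMa) B-format encoding of a mono string signal.
        w = signal / np.sqrt(2.0)                         # omnidirectional
        x = signal * np.cos(azimuth) * np.cos(elevation)  # front-back
        y = signal * np.sin(azimuth) * np.cos(elevation)  # left-right
        z = signal * np.sin(elevation)                    # up-down
        return np.stack([w, x, y, z])

    Six strings would each be encoded with their own direction, summed in B-format, and then decoded to a loudspeaker array or to head-tracked binaural output.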

    Atomic detail visualization of photosynthetic membranes with GPU-accelerated ray tracing

    Get PDF
    The cellular process responsible for providing energy for most life on Earth, namely, photosynthetic light-harvesting, requires the cooperation of hundreds of proteins across an organelle, involving length and time scales spanning several orders of magnitude over quantum and classical regimes. Simulation and visualization of this fundamental energy conversion process pose many unique methodological and computational challenges. We present, in two accompanying movies, light-harvesting in the photosynthetic apparatus found in purple bacteria, the so-called chromatophore. The movies are the culmination of three decades of modeling efforts, featuring the collaboration of theoretical, experimental, and computational scientists. We describe the techniques that were used to build, simulate, analyze, and visualize the structures shown in the movies, and we highlight cases where scientific needs spurred the development of new parallel algorithms that efficiently harness GPU accelerators and petascale computers.
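
    The geometric primitive behind rendering atoms as spheres in such a ray tracer is the ray-sphere intersection test; a minimal sketch is given below. Production renderers like the one described run heavily optimized GPU versions of this test:

    import numpy as np

    def ray_sphere(origin, direction, centre, radius):
        """Nearest positive hit distance along a unit-length ray, or None."""
        oc = origin - centre
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - c              # discriminant of t^2 + 2bt + c = 0
        if disc < 0.0:
            return None               # ray misses the sphere
        t = -b - np.sqrt(disc)        # nearer of the two roots
        return t if t > 0.0 else None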

    Immersive Telerobotic Modular Framework using stereoscopic HMDs

    Telepresence is the term used to describe the set of technologies that enable people to feel or appear as if they were present in a location where they are not physically. Immersive telepresence is the next step: the objective is to make the operator feel immersed in a remote location, stimulating as many senses as possible and using new technologies such as stereoscopic vision, panoramic vision, 3D audio, and head-mounted displays (HMDs). Telerobotics is a subfield of telepresence that merges it with robotics, providing the operator with the ability to control a robot remotely. There is a gap in the current state-of-the-art solutions, since telerobotics has not, in general, benefited from the recent developments in control and human-computer interface technology. Besides the lack of studies investing in immersive solutions such as stereoscopic vision, immersive telerobotics can also include more intuitive control capabilities, such as haptic controls or movement- and gesture-based controls, which feel more natural and translate more naturally into the system. In this work we propose an alternative approach to common teleoperation methods, such as those found, for instance, in search and rescue (SAR) robots. Our main focus is to test the impact that immersive characteristics like stereoscopic vision and HMDs can bring to telepresence robots and telerobotics systems. Since this is a new and growing field, we also aim at a modular framework capable of being extended with different robots, in order to test different case studies and aid researchers with an extensible platform. We claim that with immersive solutions the operator of a telerobotics system gains a more intuitive perception of the remote environment and becomes less prone to errors induced by a wrong perception of, and interaction with, the teleoperation of the robot. We believe that the operator's depth perception and situational awareness are significantly improved when using immersive solutions, and that performance, both in task operation time and in the successful identification of objects of interest, is also enhanced. We have developed a low-cost immersive telerobotic modular platform that can be extended with hardware-based Android applications on the slave (robot) side. This solution makes it possible to use the same platform in any type of case study by simply extending it with different robots. In addition to the modular and extensible framework, the project features three main modules of interaction, namely:
    * A module that supports a head-mounted display with head tracking in the operator environment
    * Streaming of stereoscopic vision through Android with software synchronization
    * A module that enables the operator to control the robot with positional tracking
    On the hardware side, not only has the mobile area (e.g. smartphones, tablets, Arduino) expanded greatly in recent years, but we have also seen the rise of low-cost immersive technologies such as the Oculus Rift DK2, Google Cardboard, and Leap Motion. These cost-effective hardware solutions, combined with the advances in video and audio streaming provided by WebRTC technologies, driven mostly by Google, make the development of a real-time software solution possible. There is currently a lack of real-time software methods for stereoscopy, but the arrival of WebRTC technologies can be a game changer. We take advantage of this recent evolution in hardware and software to keep the platform economical and low-cost while raising the bar in terms of the performance and technical specifications of this kind of platform.
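
    As an illustration of the stereoscopic streaming module, the sketch below packs two camera feeds side by side with OpenCV, the kind of preprocessing a stereoscopic HMD stream needs before encoding and transport (e.g. over WebRTC). The camera indices are placeholders, both cameras are assumed to deliver same-sized frames, and the "software synchronization" is reduced to paired reads, a simplification of the actual module:

    import cv2
    import numpy as np

    left_cam = cv2.VideoCapture(0)    # placeholder device indices
    right_cam = cv2.VideoCapture(1)
    while True:
        ok_l, left = left_cam.read()  # paired reads as crude synchronization
        ok_r, right = right_cam.read()
        if not (ok_l and ok_r):
            break
        frame = np.hstack((left, right))  # side-by-side packing for the HMD
        cv2.imshow("stereo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    left_cam.release()
    right_cam.release()
    cv2.destroyAllWindows()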