
    Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting

    We introduce tensor displays: a family of compressive light field displays comprising all architectures employing a stack of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting (i.e., any low-resolution light field emitter). We show that the light field emitted by an N-layer, M-frame tensor display can be represented by an Nth-order, rank-M tensor. Using this representation we introduce a unified optimization framework, based on nonnegative tensor factorization (NTF), encompassing all tensor display architectures. This framework is the first to allow joint multilayer, multiframe light field decompositions, significantly reducing artifacts observed with prior multilayer-only and multiframe-only decompositions; it is also the first optimization method for designs combining multiple layers with directional backlighting. We verify the benefits and limitations of tensor displays by constructing a prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. Through simulations and experiments we show that tensor displays reveal practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.

    Funding: United States. Defense Advanced Research Projects Agency (DARPA SCENICC program); National Science Foundation (U.S.) (NSF Grant IIS-1116452); United States. Defense Advanced Research Projects Agency (DARPA MOSAIC program); United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award); Alfred P. Sloan Foundation (Fellowship)
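The layered factorization idea can be illustrated in its simplest, two-layer matrix form. The sketch below is not the paper's GPU NTF solver: it uses generic Lee-Seung multiplicative updates, with a toy matrix standing in for the light field, to factor a nonnegative target L into M nonnegative per-frame patterns for each of two layers. All names and data here are illustrative.

```python
import numpy as np

def nmf_multiplicative(L, M, iters=200, eps=1e-9):
    """Factor a nonnegative matrix L ~ A @ B with rank M (M display frames).

    In the two-layer reading, the columns of A and rows of B are the
    per-frame transmittance patterns of the front and rear layers.
    """
    rng = np.random.default_rng(0)
    A = rng.random((L.shape[0], M))
    B = rng.random((M, L.shape[1]))
    for _ in range(iters):
        # Multiplicative updates keep A and B nonnegative, matching the
        # physical constraint that attenuating layers cannot add light.
        A *= (L @ B.T) / (A @ B @ B.T + eps)
        B *= (A.T @ L) / (A.T @ A @ B + eps)
    return A, B

# Toy 4x4 nonnegative "light field" reconstructed from M = 2 frames.
L = np.abs(np.random.default_rng(1).random((4, 4)))
A, B = nmf_multiplicative(L, M=2)
err = np.linalg.norm(L - A @ B) / np.linalg.norm(L)
```

Time-multiplexing over the M frames is what buys the extra degrees of freedom: a single static layer pair would be limited to a rank-1 approximation.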

    Study of encapsulation and transport of 3DTV by satellite

    The project was developed at EADS ASTRIUM Toulouse in the framework of the MUSCADE project, using the latest technologies in 3DTV. Currently, most research in the satellite broadcasting field is focused on 3DTV transmission as the successor to HDTV. MUSCADE is a European project funded by the 7th Framework Programme whose objective is to demonstrate a complete multiview 3DTV live chain over wireline, wireless and satellite networks. This project aims to set up a satellite testbed to validate the 3D content format defined by MUSCADE in an emulated satellite environment. The document's first chapter describes the environment in which the internship took place and gives a brief overview of the EADS company. A short description of the whole MUSCADE project then follows in section 5, giving the reader a global view of all the technological concepts involved in the project, even though this internship is focused on satellite transmission. Section 6 describes the internship work. By way of conclusion, the new skills acquired, the knowledge applied and a professional and personal assessment can be found at the end of this report.

    Degree: Ingeniería de Telecomunicación (Telecommunications Engineering); Telekomunikazio Ingeniaritza

    Electronic Imaging & the Visual Arts. EVA 2015 Florence

    Information technologies of interest for cultural heritage are presented: multimedia systems, databases, data protection, access to digital content, and virtual galleries. Particular attention is given to digital images (Electronic Imaging & the Visual Arts) in the context of cultural institutions (museums, libraries, palaces and monuments, archaeological sites). The international conference includes the following sessions: Strategic Issues; New Technologies & Applications; New 2D-3D Technical Developments & Applications; Virtual Galleries – Museums and Related Initiatives; and Access to the Culture Information. Two workshops address International Cooperation and Innovation and Enterprise.

    Die Virtuelle Videokamera: ein System zur Blickpunktsynthese in beliebigen, dynamischen Szenen (The Virtual Video Camera: a system for viewpoint synthesis in arbitrary dynamic scenes)

    The Virtual Video Camera project strives to create free-viewpoint video from casually captured multi-view data. Multiple video streams of a dynamic scene are captured with off-the-shelf camcorders, and the user can re-render the scene from novel perspectives, including viewpoints not covered by any of the original cameras. In this thesis the algorithmic core of the Virtual Video Camera is presented. This includes the algorithm for image correspondence estimation as well as the image-based renderer. Furthermore, its application in the context of an actual video production is showcased, and the rendering and image processing pipeline is extended to incorporate depth information.
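The two algorithmic ingredients named above, correspondence estimation and image-based rendering, can be caricatured in one dimension. The sketch below is entirely illustrative (the thesis's renderer operates on real images with geometric warping): it blends two intensity "scanlines" along precomputed correspondences to synthesize an in-between view.

```python
import numpy as np

def render_inbetween(a, b, corr, alpha):
    """Cross-dissolve sketch of image-based novel-view synthesis.

    a, b: intensity scanlines from two cameras; corr[i] is the index in
    b corresponding to a[i] (the precomputed image correspondence).
    The virtual view at blending weight alpha samples both inputs along
    the correspondence and linearly blends them.
    """
    out = np.empty(len(a))
    for i, j in enumerate(corr):
        out[i] = (1 - alpha) * a[i] + alpha * b[j]
    return out

a = np.array([0.0, 1.0, 0.0, 0.0])   # feature at index 1
b = np.array([0.0, 0.0, 0.0, 1.0])   # same feature shifted to index 3
corr = [0, 3, 2, 2]                  # index 1 in a matches index 3 in b
mid = render_inbetween(a, b, corr, 0.5)   # feature blended: mid[1] == 1.0
```

Without the correspondence (i.e., blending a and b pixel-wise) the feature would ghost into two half-intensity copies; following corr keeps it intact, which is the essence of correspondence-based interpolation.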

    Neuromorphic stereo vision: A survey of bio-inspired sensors and algorithms

    Any visual sensor, whether artificial or biological, maps the 3D world onto a 2D representation. The missing dimension is depth, and most species use stereo vision to recover it. Stereo vision implies multiple perspectives and matching; it obtains depth from a pair of images. Algorithms for stereo vision are also used successfully in robotics. However, while biological systems seem to compute disparities effortlessly, artificial methods suffer from high energy demands and latency. The crucial part is the correspondence problem: finding the matching points of two images. The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint: time. Due to their asynchronous mode of operation, which captures the precise timing of spikes, spiking neural networks can take advantage of this constraint. In this work, we investigate sensors and algorithms for event-based stereo vision leading to more biologically plausible robots. We focus mainly on binocular stereo vision.
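The temporal constraint described above can be made concrete with a toy matcher. The sketch below is hypothetical (real event-based stereo systems, and the SNNs the survey covers, add polarity, full epipolar geometry, and cooperative constraints): it greedily pairs left/right events that fire on the same row within a short time window, the disparity being the horizontal offset.

```python
def match_events(left, right, dt=1e-3, max_disp=20):
    """Greedily pair (x, y, t) events from two event cameras.

    Two events are matched if they lie on the same row (a crude
    epipolar constraint) and occur within dt seconds of each other;
    the near-simultaneity of spikes is what disambiguates matches.
    """
    matches = []
    used = set()
    for xl, yl, tl in left:
        best, best_dt = None, dt
        for i, (xr, yr, tr) in enumerate(right):
            if i in used or yr != yl:
                continue  # already matched, or off the scanline
            d = xl - xr
            if 0 <= d <= max_disp and abs(tl - tr) <= best_dt:
                best, best_dt = i, abs(tl - tr)  # keep closest in time
        if best is not None:
            used.add(best)
            matches.append((xl, yl, xl - right[best][0]))
    return matches

left = [(12, 5, 0.0100), (30, 7, 0.0200)]
right = [(8, 5, 0.0101), (25, 7, 0.0199)]
matches = match_events(left, right)
# Both pairs coincide within 1 ms on the same row -> disparities 4 and 5.
```

A frame-based matcher would have to search over appearance; here the sub-millisecond coincidence alone resolves the correspondence, which is exactly the constraint event cameras expose.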

    3D multiple description coding for error resilience over wireless networks

    Mobile communications have gained growing interest from both customers and service providers alike over the last one to two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting and video surveillance. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one widely used technique in international video coding standards is error resilience. The motivation behind this research work is that existing schemes for 2D colour video compression such as MPEG, JPEG and H.263 cannot be applied directly to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth-demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion.
Given a maximum bit-rate budget to represent the 3D scene, bit-rate allocation between texture and depth information should be optimised so that rendering distortion and losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer a better quality of experience (QoE) to end users. This research work aims at enhancing the error resilience of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's quality of experience (QoE). Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing, rating people's perception of 3D video under error-free and error-prone conditions through a carefully designed bespoke questionnaire.

    EThOS - Electronic Theses Online Service; Petroleum Technology Development Fund (PTDF); GB; United Kingdom
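The MDC principle invoked above can be sketched in its simplest temporal form. This is not the thesis's 3D MDC scheme: odd/even frame splitting is one classic textbook variant, shown here with toy integer "frames". Each description is independently decodable, and a lost description is concealed from the surviving one instead of stalling until the next INTRA picture.

```python
def encode_mdc(frames):
    """Split a frame sequence into two temporal descriptions."""
    return frames[0::2], frames[1::2]

def decode_mdc(d0, d1):
    """Merge two descriptions back into a sequence.

    If one description is lost (None), its slots are concealed by
    repeating the co-located frames of the surviving description.
    """
    if d0 is None:
        d0 = d1
    if d1 is None:
        d1 = d0
    frames = []
    for a, b in zip(d0, d1):
        frames.extend([a, b])
    return frames

frames = [10, 11, 12, 13]          # toy "frames" (e.g. per-frame mean luma)
d0, d1 = encode_mdc(frames)        # d0 = [10, 12], d1 = [11, 13]
full = decode_mdc(d0, d1)          # both arrive: lossless [10, 11, 12, 13]
degraded = decode_mdc(d0, None)    # description 1 lost: [10, 10, 12, 12]
```

The design trade-off this illustrates is exactly the one the thesis studies: MDC buys graceful degradation under packet loss at the cost of coding efficiency, since each description must be decodable on its own.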

    Telethrone: a situated display using retro-reflection based multi-view toward remote collaboration in small dynamic groups

    This research identifies a gap in tele-communication technology. Several novel technology demonstrators are tested experimentally throughout the research. The presented final system allows a remote participant in a conversation to unambiguously address individual members of a group of 5 people using non-verbal cues. The capability to link less formal groups through technology is the primary contribution. Technology-mediated communication is first reviewed, with attention to the different supported styles of meetings. A gap is identified for small informal groups. Small dynamic groups which are convened on demand for the solution of specific problems may be called “ad-hoc”. In these meetings it is possible to ‘pull up a chair’. This is poorly supported by current tele-communication tools: it is difficult for one or more members to join such a meeting from a remote location, and it is also difficult for physically co-located parties to reorient themselves in the meeting as goals evolve. As the major contribution toward addressing this gap, the ‘Telethrone’ is introduced. Telethrone projects a remote user onto a chair, bringing them into the shared space. The chair seems to act as a situated display which can support multi-party head gaze, eye gaze, and body torque: each observer knows where the projected user is looking. It is simpler to implement and cheaper than current comparable systems. The underpinning approach is technology and systems development, with regard to HCI and psychology throughout. Prototypes, refinements, and novel engineered systems are presented. Two experiments to test these systems are peer-reviewed, and further design and experimentation are undertaken based on the positive results. The final paper is pending. An initial version of the new technology approach combined retro-reflective material with aligned pairs of cameras and projectors, connected by IP video. A counterbalanced repeated-measures experiment to analyse gaze interactions was undertaken.
Results suggest that the remote user is not excluded from triadic poker game-play. Analysis of the multi-view aspect of the system was inconclusive as to whether it shows an advantage over a set-up which does not support multi-view. User impressions from the questionnaires suggest that the current implementation still gives the impression of being a display despite its situated nature, although participants did feel the remote user was in the space with them. A refinement of the system, using models generated by visual hull reconstruction, can better connect eye gaze. An exploration is made of its ability to allow chairs to be moved around the meeting, and of what this might enable for the participants. The ability to move furniture was earlier identified as an aid to natural interaction, but may also affect highly correlated subgroups in an ad-hoc meeting; this is unsupported by current technologies. Repositioning of several onlooking chairs seems to support ‘fault lines’. Performance constraints of the current system are explored. An experiment tests whether it is possible to judge remote-participant eye gaze as the viewer changes location, addressing concerns raised by the first experiment, in which the physical offsets of the IP camera lenses from the projected eyes of the remote participants (in both directions) may have influenced perception of attention. A third experiment shows that five participants viewing a remote recording, presented through the Telethrone, can judge the attention of the remote participant accurately when the viewpoint is correctly rendered for their location in the room. This is compared to a control in which spatial discrimination is impossible. A figure for how many optically separate retro-reflected segments are possible is obtained through spatial analysis and testing. It is possible to render the optical maximum of 5 independent viewpoints, supporting an ‘ideal’ meeting of 6 people.
The tested system uses one computer at the meeting side of the exchange, making it potentially deployable from a small flight case. The thesis presents and tests the utility of elements toward a complete system, finding that remote users are perceived as present in the conversation, spatially segmented with a view for each onlooker; that eye gaze can be reconnected through the system using 3D video; and that performance supports scalability up to the theoretical maximum for the material and an ideal meeting size.