7,551 research outputs found

    Split-screen single-camera stereoscopic PIV application to a turbulent confined swirling layer with free surface

    An annular liquid wall jet, or vortex tube, generated by helical injection inside a tube is studied experimentally as a possible means of fusion-reactor shielding. The hollow confined vortex/swirling layer simultaneously exhibits all the complexities of swirling turbulence, a free surface, droplet formation, and bubble entrapment, all of which pose challenging diagnostic issues. The construction of the flow apparatus and the choice of working liquid and seeding particles provide unimpeded optical access to the flow field. A split-screen, single-camera stereoscopic particle image velocimetry (SPIV) scheme is employed for flow-field characterization. Image calibration and free-surface identification issues are discussed. The interference of laser-beam reflections at the interface with the measurements is identified and discussed. Selected velocity measurements and turbulence statistics are presented at Re_λ = 70 (Re = 3500 based on mean layer thickness).
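
    Stereoscopic PIV recovers all three velocity components by combining the two in-plane displacement fields seen by the two (here, split-screen) views. The snippet below is a minimal sketch of the textbook geometric reconstruction for two cameras in the x-z plane, not the calibrated scheme used in the paper; the viewing half-angles and field names are placeholder assumptions.

    import numpy as np

    # Assumed convention: camera 1 views at -alpha1 and camera 2 at +alpha2 from the
    # z axis, so the projected x displacements are dx1 = u - w*tan(alpha1) and
    # dx2 = u + w*tan(alpha2), while dy1 ~ dy2 ~ v.
    def reconstruct_stereo(dx1, dy1, dx2, dy2, alpha1_deg=30.0, alpha2_deg=30.0):
        t1, t2 = np.tan(np.radians([alpha1_deg, alpha2_deg]))
        u = (dx1 * t2 + dx2 * t1) / (t1 + t2)   # in-plane component along x
        w = (dx2 - dx1) / (t1 + t2)             # out-of-plane component
        v = 0.5 * (dy1 + dy2)                   # in-plane component along y
        return u, v, w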

    Acquisition, compression and rendering of depth and texture for multi-view video

    Three-dimensional (3D) video and imaging technologies are an emerging trend in the development of digital video systems, as we presently witness the appearance of 3D displays, coding systems, and 3D camera setups. Three-dimensional multi-view video is typically obtained from a set of synchronized cameras, which capture the same scene from different viewpoints. This technique notably enables applications such as free-viewpoint video and 3D-TV. Free-viewpoint video applications allow the user to interactively select and render a virtual viewpoint of the scene. A 3D experience, such as in 3D-TV, is obtained if the data representation and display enable the viewer to distinguish the relief of the scene, i.e., the depth within the scene. With 3D-TV, the depth of the scene can be perceived using a multi-view display that simultaneously renders several views of the same scene. To render these multiple views on a remote display, an efficient transmission, and thus compression, of the multi-view video is necessary. However, a major problem when dealing with multi-view video is the intrinsically large amount of data to be compressed, decompressed and rendered. We aim at an efficient and flexible multi-view video system, and explore three different aspects. First, we develop an algorithm for acquiring a depth signal from a multi-view setup. Second, we present efficient 3D rendering algorithms for a multi-view signal. Third, we propose coding techniques for 3D multi-view signals, based on the use of an explicit depth signal. Accordingly, the thesis is divided into three parts. The first part (Chapter 3) addresses the problem of 3D multi-view video acquisition. Multi-view video acquisition refers to the task of estimating and recording a 3D geometric description of the scene. A 3D description of the scene can be represented by a so-called depth image, which can be estimated by triangulation of the corresponding pixels in the multiple views. Initially, we focus on the problem of depth estimation using two views, and present the basic geometric model that enables the triangulation of corresponding pixels across the views. Next, we review two calculation/optimization strategies for determining corresponding pixels: a local and a one-dimensional optimization strategy. Second, to generalize from the two-view case, we introduce a simple geometric model for estimating the depth using multiple views simultaneously. Based on this geometric model, we propose a new multi-view depth-estimation technique, employing a one-dimensional optimization strategy that (1) reduces the noise level in the estimated depth images and (2) enforces consistent depth images across the views. The second part (Chapter 4) details the problem of multi-view image rendering. Multi-view image rendering refers to the process of generating synthetic images using multiple views. Two different rendering techniques are initially explored: a 3D image warping and a mesh-based rendering technique. Each of these methods has its limitations and suffers from either high computational complexity or low image rendering quality. As a consequence, we present two image-based rendering algorithms that improve the trade-off between these issues. First, we derive an alternative formulation of the relief-texture algorithm, which was extended to the geometry of multiple views.
The proposed technique features two advantages: it avoids rendering artifacts ("holes") in the synthetic image and it is suitable for execution on a standard Graphics Processing Unit (GPU). Second, we propose an inverse-mapping rendering technique that allows a simple and accurate re-sampling of synthetic pixels. Experimental comparisons with 3D image warping show an improvement in rendering quality of 3.8 dB for the relief-texture mapping and 3.0 dB for the inverse-mapping rendering technique. The third part concentrates on the compression problem of multi-view texture and depth video (Chapters 5–7). In Chapter 5, we extend the standard H.264/MPEG-4 AVC video compression algorithm to handle the compression of multi-view video. As opposed to the Multi-view Video Coding (MVC) standard, which encodes only the multi-view texture data, the proposed encoder performs the compression of both the texture and the depth multi-view sequences. The proposed extension is based on exploiting the correlation between the multiple camera views. To this end, two different approaches for predictive coding of views have been investigated: a block-based disparity-compensated prediction technique and a View Synthesis Prediction (VSP) scheme. Whereas VSP relies on an accurate depth image, the block-based disparity-compensated prediction scheme can be performed without any geometry information. Our encoder adaptively selects the most appropriate prediction scheme using a rate-distortion criterion for an optimal prediction-mode selection. We present experimental results for several texture and depth multi-view sequences, yielding a quality improvement of up to 0.6 dB for the texture and 3.2 dB for the depth, when compared to solely performing H.264/MPEG-4 AVC disparity-compensated prediction. Additionally, we discuss the trade-off between random access to a user-selected view and the coding efficiency. Experimental results illustrating and quantifying this trade-off are provided. In Chapter 6, we focus on the compression of a depth signal. We present a novel depth-image coding algorithm which concentrates on the special characteristics of depth images: smooth regions delineated by sharp edges. The algorithm models these smooth regions using parameterized piecewise-linear functions and sharp edges by a straight line, so that it is more efficient than a conventional transform-based encoder. To optimize the quality of the coding system for a given bit rate, a special global rate-distortion optimization balances the rate against the accuracy of the signal representation. For typical bit rates, i.e., between 0.01 and 0.25 bit/pixel, experiments have revealed that the coder outperforms a standard JPEG-2000 encoder by 0.6-3.0 dB. Preliminary results were published in the Proceedings of the 26th Symposium on Information Theory in the Benelux. In Chapter 7, we propose a novel joint depth-texture bit-allocation algorithm for the joint compression of texture and depth images. The described algorithm combines the depth and texture Rate-Distortion (R-D) curves to obtain a single R-D surface that allows the optimization of the joint bit allocation in relation to the obtained rendering quality. Experimental results show an estimated gain of 1 dB compared to a compression performed without joint bit-allocation optimization. Besides this, our joint R-D model can be readily integrated into a multi-view H.264/MPEG-4 AVC coder because it yields the optimal compression setting with a limited computational effort.
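
    As a rough illustration of the two-view depth estimation described in Chapter 3 (not the thesis's actual algorithm), the sketch below pairs a brute-force local block-matching search for corresponding pixels with the standard triangulation for rectified views, Z = f·B/d; the window size, disparity range, focal length and baseline are hypothetical.

    import numpy as np

    def local_disparity(left, right, max_disp=64, block=7):
        """Sum-of-absolute-differences block matching along scanlines (a 'local' strategy)."""
        left = left.astype(np.float32)
        right = right.astype(np.float32)
        h, w = left.shape
        half = block // 2
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                costs = [np.abs(ref - right[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = float(np.argmin(costs))   # best-matching shift = disparity
        return disp

    def disparity_to_depth(disp, focal_px, baseline_m):
        """Triangulation for rectified cameras: Z = f * B / d (zero where no match)."""
        with np.errstate(divide="ignore"):
            return np.where(disp > 0, focal_px * baseline_m / disp, 0.0)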

    Far-Infrared and Sub-Millimeter Observations and Physical Models of the Reflection Nebula Ced 201

    ISO [C II] 158 micron, [O I] 63 micron, and H_2 9 and 17 micron observations are presented of the reflection nebula Ced 201, which is a photon-dominated region illuminated by a B9.5 star with a color temperature of 10,000 K (a cool PDR). In combination with ground-based [C I] 609 micron, CO, 13CO, CS and HCO+ data, the carbon budget and physical structure of the reflection nebula are constrained. The obtained data set is the first one to contain all important cooling lines of a cool PDR, and allows a comparison to be made with classical PDRs. To this effect, one- and three-dimensional PDR models are presented which incorporate the physical characteristics of the source and are aimed at understanding the dominant heating processes of the cloud. The contribution of very small grains to the photo-electric heating rate is estimated from these models and used to constrain the total abundance of PAHs and small grains. Observations of the pure rotational H_2 lines with ISO, in particular the S(3) line, indicate the presence of a small amount of very warm, approximately 330 K, molecular gas. This gas cannot be accommodated by the presented models. Comment: 32 pages, 7 figures, in LaTeX. To be published in Ap

    The effect of boundary adaptivity on hexagonal ordering and bistability in circularly confined quasi hard discs

    The behaviour of materials under spatial confinement is sensitively dependent on the nature of the confining boundaries. In two dimensions, confinement within a hard circular boundary inhibits the hexagonal ordering observed in bulk systems at high density. Using colloidal experiments and Monte Carlo simulations, we investigate two model systems of quasi hard discs under circularly symmetric confinement. The first system employs an adaptive circular boundary, defined experimentally using holographic optical tweezers. We show that deformation of this boundary allows, and indeed is required for, hexagonal ordering in the confined system. The second system employs a circularly symmetric optical potential to confine particles without a physical boundary. We show that, in the absence of a curved wall, near-perfect hexagonal ordering is possible. We propose that the degree to which hexagonal ordering is suppressed by a curved boundary is determined by the `strictness' of that wall. Comment: 10 pages, 8 figure
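
    As a loose sketch of the kind of hard-disc Monte Carlo simulation mentioned above (not the authors' code), the snippet below accepts a random single-particle move only if the trial position neither overlaps another disc nor crosses a hard circular wall; the particle number, disc diameter, wall radius and step size are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)
    N, sigma, R_wall, step = 50, 1.0, 10.0, 0.1  # discs, diameter, wall radius, trial step

    def overlaps(others, trial):
        """True if a disc centred at `trial` overlaps any disc in `others` or the wall."""
        if np.linalg.norm(trial) > R_wall - sigma / 2:   # hard circular boundary
            return True
        return len(others) > 0 and bool(np.any(np.linalg.norm(others - trial, axis=1) < sigma))

    # non-overlapping initial configuration by random sequential insertion (dilute system)
    pos = np.zeros((0, 2))
    while len(pos) < N:
        trial = rng.uniform(-R_wall, R_wall, 2)
        if not overlaps(pos, trial):
            pos = np.vstack([pos, trial])

    # Metropolis sweeps for hard discs: every non-overlapping trial move is accepted
    for sweep in range(1000):
        for i in rng.permutation(N):
            trial = pos[i] + rng.uniform(-step, step, 2)
            if not overlaps(np.delete(pos, i, axis=0), trial):
                pos[i] = trial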

    Planar laser-induced fluorescence (PLIF) investigation of hypersonic flowfields in a Mach 10 wind tunnel

    Planar laser-induced fluorescence (PLIF) of nitric oxide (NO) was used to visualize four different hypersonic flowfields in the NASA Langley Research Center 31-Inch Mach 10 Air wind tunnel. The four configurations were: (1) the wake flowfield of a fuselage-only X-33 lifting body, (2) flow over a flat plate containing a rectangular cavity, (3) flow over a 70° blunted cone with a cylindrical afterbody, formerly studied by an AGARD working group, and (4) an Apollo-geometry entry capsule, relevant to the Crew Exploration Vehicle currently being developed by NASA. In all cases, NO was seeded into the flowfield through tubes inside or attached to the model sting and strut. PLIF was used to visualize the NO in the flowfield. In some cases pure NO was seeded into the flow, while in other cases a 5% NO, 95% N2 mix was injected. Several parameters were varied, including seeding method and location, seeding mass flow rate, model angle of attack, and tunnel stagnation pressure, which varies the unit Reynolds number. The location of the laser sheet was also varied to provide three-dimensional flow information. Virtual Diagnostics Interface (ViDI) technology developed at NASA Langley was used to visualize the data sets in post-processing. The measurements demonstrate some of the capabilities of the PLIF method for studying hypersonic flows.

    Depth from Defocus Technique: A Simple Calibration-Free Approach for Dispersion Size Measurement

    Dispersed particle size measurement is crucial in a variety of applications, be it the sizing of spray droplets, the tracking of particulate matter in multiphase flows, or the detection of target markers in machine vision systems. Beyond sizing, such systems are characterised by extracting quantitative information such as the spatial position and associated velocity of the dispersed-phase particles. In the present study, we propose an imaging-based volumetric measurement approach for estimating the size and position of spherical dispersed particles. The approach builds on the 'Depth from Defocus' (DFD) technique and uses a single camera. The simple optical configuration, consisting of a shadowgraph setup and a straightforward calibration procedure, makes this method readily deployable and accessible for broader applications.
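
    Depth-from-defocus rests on the thin-lens relation between an object's distance and the diameter of its blur circle on the sensor. The sketch below shows that textbook geometry with made-up numbers; it is not the calibration-free sizing procedure proposed in the paper, and the focal length, aperture and focus distance are purely illustrative.

    # Thin-lens blur-circle geometry: b = A * f * |s - s_f| / (s * (s_f - f)),
    # with aperture diameter A, focal length f, focus distance s_f, object distance s
    # (all lengths in metres).
    def blur_diameter(s, f=0.05, aperture=0.01, s_focus=1.0):
        """Blur-circle diameter on the sensor for an object at distance s."""
        return aperture * f * abs(s - s_focus) / (s * (s_focus - f))

    def depth_from_blur(b, f=0.05, aperture=0.01, s_focus=1.0, far_side=True):
        """Invert the relation, given whether the object lies beyond or before the focal plane."""
        k = b * (s_focus - f) / (aperture * f)        # equals |s - s_focus| / s
        return s_focus / (1.0 - k) if far_side else s_focus / (1.0 + k)

    for s in (0.6, 1.0, 1.5, 3.0):
        b = blur_diameter(s)
        print(f"object at {s:3.1f} m -> blur diameter {1e6 * b:6.1f} um")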

    Temporal Mapping of Surveillance Video for Indexing and Summarization

    This work converts surveillance video into a temporal-domain image, called a temporal profile, that is scrollable and scalable for quick searching of long surveillance videos by human operators. Such a profile is sampled along linear pixel lines located at critical locations in the video frames. It carries precise time stamps of targets passing through those locations in the field of view, shows target shapes for identification, and facilitates target search in long videos. In this paper, we first study the projection and shape properties of dynamic scenes in the temporal profile so as to set the sampling lines. Then, we design methods to capture target motion and preserve target shapes for target recognition in the temporal profile. The profile also provides uniform resolution of large crowds passing through, which makes it effective for target counting and flow measurement. We further align multiple sampling lines to visualize the spatial information missed in a single-line temporal profile. Finally, we achieve real-time adaptive background removal and robust target extraction to ensure long-term surveillance. Compared to the original video or a shortened video, the temporal profile reduces the data by one dimension while keeping the majority of the information for further video investigation. As an intermediate indexing image, the profile image can be transmitted over a network much faster than video for online video-search tasks by multiple operators. Because the temporal profile can abstract passing targets with efficient computation, an even more compact digest of the surveillance video can be created.
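
    The core operation described above, sampling a fixed pixel line in every frame and stacking the samples over time, can be sketched as follows. OpenCV is assumed only for decoding, and the file name and column position are arbitrary examples rather than values from the paper.

    import cv2
    import numpy as np

    def temporal_profile(video_path, column_x):
        """Stack one vertical pixel line per frame into a (height x n_frames x 3) image."""
        cap = cv2.VideoCapture(video_path)
        columns = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            columns.append(frame[:, column_x, :])   # the sampling line for this frame
        cap.release()
        # horizontal axis = frame index, so a target crossing the line leaves a shape
        # whose x position is the time stamp of the passing event
        return np.stack(columns, axis=1)

    profile = temporal_profile("gate_camera.mp4", column_x=640)
    cv2.imwrite("temporal_profile.png", profile)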

    Graphical Computing Solution for Industrial Plant Engineering

    When preparing an engineering operation on an industrial plant, reliable and up-to-date models of the plant must be available for correct decisions and planning. However, especially in the case of offshore oil and gas installations, it can be hazardous and expensive to send an engineering party to assess and update the model of the plant. To reduce the cost and risk of modelling the plant, there are methods for quickly generating a 3D representation, such as LiDAR and stereoscopic reconstruction. However, these methods generate large files with no inherent cohesion. To address this, we seek a solution that efficiently transforms point clouds from stereoscopic reconstruction into small mesh files that can be streamed or shared across teams. With that in mind, different techniques for treating point clouds and generating meshes were tested independently to measure their performance and effectiveness on an artifact-rich data set, such as the ones this work is aimed at. Afterwards, the techniques were combined into pipelines and compared with each other in terms of efficiency, output file size, and quality. With all results in place, the best of the tested solutions was identified and validated with large real-world data sets. Master's Thesis in Informatics, INF39
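
    One concrete way such a point-cloud-to-mesh pipeline can be assembled is sketched below with the open-source Open3D library; the particular filters, parameter values and file names are illustrative assumptions, not necessarily the combination evaluated in the thesis.

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("stereo_reconstruction.ply")

    # reduce point density and strip reconstruction artifacts / outliers
    pcd = pcd.voxel_down_sample(voxel_size=0.02)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # normals are required by the surface-reconstruction step
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

    # Poisson surface reconstruction turns the cleaned cloud into a mesh
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

    # decimate to a small mesh file that can be streamed or shared
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=100000)
    o3d.io.write_triangle_mesh("plant_section.ply", mesh)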

    Efficient image-based rendering

    Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images from the real world to generate realistic images in real time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis endeavors to address these issues by harnessing the power of deep learning and incorporating the fundamental principles of graphics and physically based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically based approach that facilitates accurate reproduction of the view-dependency effect resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in the reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content in virtual reality displays while retaining computational efficiency.