
    Pre-processing of integral images for 3-D displays

    This paper explores a method to accurately correct geometric distortions introduced during the capture of three-dimensional (3-D) integral images. These distortions are rotational and scaling errors which, if not corrected, cause banding and moiré effects on the replayed image. The method for calculating the angle of deviation in 3-D integral images is based on the Hough transform, which detects the angle needed to correct the rotational error. Experiments conducted on a number of 3-D integral image samples show that the proposed method achieves an accuracy of 0.05°.
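    The Hough-based angle detection described above can be sketched as follows; this is a toy, pure-NumPy illustration, not the paper's actual implementation, and the function name is ours (the 0.05° angular step mirrors the accuracy the paper reports):

```python
import numpy as np

def estimate_rotation_angle(edge_mask, angle_res_deg=0.05):
    """Accumulate Hough votes over (rho, theta) for the pixels in a binary
    edge mask and return the dominant line angle in degrees."""
    ys, xs = np.nonzero(edge_mask)
    thetas = np.deg2rad(np.arange(-90.0, 90.0, angle_res_deg))
    diag = int(np.ceil(np.hypot(*edge_mask.shape)))
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=np.int32)
    # rho = x*cos(theta) + y*sin(theta), shifted so indices are non-negative
    rhos = np.round(np.outer(xs, np.cos(thetas))
                    + np.outer(ys, np.sin(thetas))).astype(int) + diag
    for col in range(len(thetas)):
        np.add.at(acc[:, col], rhos[:, col], 1)
    _, best = np.unravel_index(acc.argmax(), acc.shape)
    return float(np.rad2deg(thetas[best]))

# A perfectly vertical line of pixels should yield an angle close to 0 deg.
mask = np.zeros((64, 64), dtype=bool)
mask[:, 32] = True
```

    In a real integral image the edge mask would come from the lenticular grid lines, and the detected angle would drive an inverse rotation before replay.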

    Low-damping epsilon-near-zero slabs: nonlinear and nonlocal optical properties

    We investigate second-harmonic generation, low-threshold multistability, all-optical switching, and inherently nonlocal effects due to free-electron gas pressure in an epsilon-near-zero (ENZ) metamaterial slab made of cylindrical plasmonic nanoshells illuminated by TM-polarized light. Damping compensation in the ENZ frequency region, achieved by using a gain medium inside the shells' dielectric cores, enhances the nonlinear properties. Reflection is inhibited and the electric-field component normal to the slab interface is enhanced near the effective pseudo-Brewster angle, where the effective ε-near-zero condition triggers a non-resonant impedance-matching phenomenon. We show that the slab displays a strong effective spatial nonlocality associated with leaky modes that are mediated by the compensation of damping. The presence of these leaky modes induces further spectral and angular conditions where the local fields are enhanced, opening new windows of opportunity for the enhancement of nonlinear optical processes.

    A joint motion & disparity motion estimation technique for 3D integral video compression using evolutionary strategy

    3D imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging, which can capture true 3D color images with only one camera, has been seen as the right technology to offer stress-free viewing to audiences of more than one person. Like any digital video, 3D video sequences must be compressed to make them suitable for consumer-domain applications. However, ordinary compression techniques found in state-of-the-art video coding standards such as H.264, MPEG-4 and MPEG-2 cannot produce enough compression while preserving the 3D cues. Fortunately, a huge amount of redundancy can be found in an integral video sequence in terms of motion and disparity. This paper discusses a novel approach that uses both motion and disparity information to compress 3D integral video sequences. We propose to decompose the integral video sequence into viewpoint video sequences and jointly exploit motion and disparity redundancies to maximize the compression. We further propose an optimization technique based on evolutionary strategies to minimize the computational complexity of the joint motion-disparity estimation. Experimental results demonstrate that joint motion and disparity estimation can achieve over 1 dB objective quality gain over normal motion estimation. Once combined with the evolutionary strategy, this can achieve up to 94% computational cost saving.
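    The core idea of predicting each block either from the previous frame (motion) or from a neighbouring viewpoint (disparity) can be sketched as below. This is a simplified illustration with names of our choosing: the search is exhaustive SAD-based block matching, standing in for the paper's evolutionary-strategy search:

```python
import numpy as np

def best_match(block, ref, y, x, search=4):
    """Return (min SAD, winning offset) over a small window in `ref`
    centred on the block's own position (y, x)."""
    h, w = block.shape
    best = (np.inf, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= ref.shape[0] - h and 0 <= xx <= ref.shape[1] - w:
                sad = np.abs(block - ref[yy:yy + h, xx:xx + w]).sum()
                if sad < best[0]:
                    best = (sad, (dy, dx))
    return best

def joint_motion_disparity(cur, prev_frame, neighbor_view, block=8):
    """Toy joint motion & disparity estimation: each block is predicted
    either from the previous frame of the same viewpoint (motion vector)
    or from a neighbouring viewpoint at the same time instant (disparity
    vector); the predictor with the lower SAD wins."""
    choices = []
    for y in range(0, cur.shape[0] - block + 1, block):
        for x in range(0, cur.shape[1] - block + 1, block):
            b = cur[y:y + block, x:x + block]
            sad_m, mv = best_match(b, prev_frame, y, x)
            sad_d, dv = best_match(b, neighbor_view, y, x)
            choices.append(("motion", mv) if sad_m <= sad_d else ("disparity", dv))
    return choices
```

    Replacing the exhaustive inner search with an evolutionary strategy, as the paper proposes, is what yields the reported computational savings.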

    Realization of precise depth perception with coarse integral volumetric imaging

    In this paper, the realization of precise depth perception using coarse integral volumetric imaging (CIVI) is discussed. CIVI is a 3D display technology that combines multiview and volumetric solutions by introducing a multilayered structure to integral imaging. Since CIVI generates real images optically, optical distortion can distort the 3D space to be presented. To present undistorted 3D space with CIVI, the authors simulate the optics of CIVI and propose an algorithm that compensates for the optical distortion in software. The authors also carry out psychophysical experiments to verify that the vergence-accommodation conflict is reduced and the viewer's depth perception is improved by combining multiview and volumetric technologies.

    Semi-Weakly Supervised Learning for Label-efficient Semantic Segmentation in Expert-driven Domains

    With the help of deep learning, semantic segmentation systems have achieved impressive results, but they rely on supervised learning, which is limited by the availability of costly, pixel-wise annotated images. When the performance of these segmentation systems is examined in contexts where annotations are scarce, they fall short of the high expectations raised by their performance in annotation-rich scenarios. This dilemma weighs particularly heavily when the annotations must be produced by extensively trained personnel, e.g. physicians, process experts, or scientists. New solutions are needed to bring well-performing segmentation models into these annotation-scarce, expert-driven domains. To this end, we first investigate how poorly current segmentation models cope with extremely annotation-scarce scenarios in expert-driven imaging domains. This directly raises the question of whether the costly pixel-wise annotation with which segmentation models are usually trained can be avoided entirely, or whether, conversely, it can serve as a cost-effective impulse to bootstrap segmentation when used sparingly. We then address the question of whether different kinds of annotations, weak and pixel-wise annotations with differing costs, can be used jointly to make the annotation process more flexible. Expert-driven domains often suffer not only from a lack of annotations but also exhibit entirely different image properties, for example volumetric image data. The transition from 2D to 3D semantic segmentation leads to voxel-wise annotation processes, multiplying the time required for annotation by the additional dimension. To arrive at a more manageable annotation process, we investigate training strategies for segmentation models that require only cheaper partial annotations or raw, unannotated volumes. This change in the type of training supervision makes the application of volume segmentation in expert-driven domains more realistic, since annotation costs are drastically reduced and annotators are freed from annotating whole volumes, which would naturally contain many visually redundant regions. Finally, we ask whether it is possible to free the annotation experts from the strict requirement of delivering a single, specific annotation type, and to develop a training strategy that works with a broad variety of semantic information. Such a method was developed, and our extensive experimental evaluation reveals interesting properties of different annotation-type mixes with respect to their segmentation performance. Our investigations have led to new research directions in semi-weakly supervised segmentation, to novel, more annotation-efficient methods and training strategies, and to experimental insights for improving annotation processes by making them annotation-efficient, expert-centered, and flexible.

    Visualization and Analysis of 3D Microscopic Images

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. We then discuss three key categories of image-analysis tasks, namely segmentation, registration, and annotation. Finally, we demonstrate how to pipeline these visualization and analysis modules using examples of profiling single-cell gene expression in C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

    Gigavoxels: ray-guided streaming for efficient and detailed voxel rendering

    Figure 1: Images show volume data consisting of billions of voxels rendered with our dynamic sparse octree approach. Our algorithm achieves real-time to interactive rates on volumes far exceeding GPU memory capacity, thanks to efficient streaming based on a ray-casting solution. Essentially, the volume is only used at the resolution needed to produce the final image. Besides the gains in memory and speed, our rendering is inherently anti-aliased.
    We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly from information extracted during rendering. Our data structure exploits the fact that in CG scenes details are often concentrated at the interface between free space and clusters of density, and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, such as the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), and of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps while respecting the limited GPU memory budget. This is the author's version of the paper; the final version has been published in the I3D 2009 conference proceedings.
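    The view-dependent resolution selection described above ("the volume is only used at the resolution that is needed to produce the final image") can be sketched with a simple level-of-detail rule; this is an illustrative formula of our own, not GigaVoxels' actual code:

```python
import math

def required_octree_depth(distance, pixel_fov_rad, volume_size, max_depth):
    """Descend the octree only until one voxel projects to about one pixel:
    pick the coarsest depth whose voxel edge (volume_size / 2**depth) is no
    larger than the footprint one pixel covers at this ray distance."""
    pixel_footprint = distance * pixel_fov_rad
    if pixel_footprint <= 0:
        return max_depth
    depth = math.ceil(math.log2(volume_size / pixel_footprint))
    return max(0, min(depth, max_depth))
```

    During ray casting, each sample along the ray would request the node at this depth; missing nodes trigger streaming from the host, which is the ray-guided data production the abstract refers to.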
