
    An Advanced, Three-Dimensional Plotting Library for Astronomy

    We present a new, three-dimensional (3D) plotting library with advanced features and support for standard and enhanced display devices. The library, S2PLOT, is written in C and can be used by C, C++ and FORTRAN programs on GNU/Linux and Apple/OSX systems. S2PLOT draws objects in a 3D (x,y,z) Cartesian space, and the user interactively controls how this space is rendered at run time. With a PGPLOT-inspired interface, S2PLOT provides astronomers with elegant techniques for displaying and exploring 3D data sets directly from their program code, and the potential to use stereoscopic and dome display devices. The S2PLOT architecture supports dynamic geometry and can be used to plot time-evolving data sets, such as might be produced by simulation codes. In this paper, we introduce S2PLOT to the astronomical community, describe its potential applications, and present some example uses of the library.
    Comment: 12 pages, 10 EPS figures (higher-resolution versions available from http://astronomy.swin.edu.au/s2plot/paperfigures). The S2PLOT library is available for download from http://astronomy.swin.edu.au/s2plo

    New visual coding exploration in MPEG: Super-MultiView and free navigation in free viewpoint TV

    ISO/IEC MPEG and ITU-T VCEG have recently jointly issued a new multiview video compression standard, called 3D-HEVC, which reaches unprecedented compression performance for linear, dense camera arrangements. In view of supporting future high-quality, auto-stereoscopic 3D displays and Free Navigation virtual/augmented reality applications with sparse, arbitrarily arranged camera setups, innovative depth estimation and virtual view synthesis techniques with global optimizations over all camera views should be developed. Preliminary studies in response to the MPEG-FTV (Free viewpoint TV) Call for Evidence suggest these targets are within reach, with at least 6% bitrate gains over 3D-HEVC technology.
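The virtual view synthesis the abstract refers to is commonly built on depth-image-based rendering: pixels of a reference camera view are shifted by a disparity derived from their depth to form a nearby virtual view. The sketch below is a deliberately minimal, hedged illustration of that idea (the function name, the horizontal-shift-only camera model, and the zero-filled hole handling are assumptions for illustration; real MPEG-FTV synthesis handles occlusion ordering, sub-pixel warping and inpainting).

```python
import numpy as np

def synthesize_view(ref, depth, baseline, focal):
    """Warp a reference view horizontally by a per-pixel disparity
    (disparity = baseline * focal / depth) to sketch a virtual view.

    Simplifications: integer disparities, no z-ordering on collisions,
    and disoccluded pixels are simply left as zeros (holes)."""
    h, w = ref.shape[:2]
    out = np.zeros_like(ref)
    disparity = np.round(baseline * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xs = x + disparity[y, x]
            if 0 <= xs < w:
                out[y, xs] = ref[y, x]  # copy pixel to its warped position
    return out
```

A production synthesizer would additionally composite candidate pixels by nearest depth and fill holes from a second camera view; this sketch only shows the geometric core of the warp.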

    Stereoscopic video quality assessment based on 3D convolutional neural networks

    Research on stereoscopic video quality assessment (SVQA) plays an important role in promoting the development of stereoscopic video systems. Existing SVQA metrics rely on hand-crafted features, which are inaccurate and time-consuming to design because of the diversity and complexity of stereoscopic video distortions. This paper introduces a 3D convolutional neural network (CNN) based SVQA framework that can model not only local spatio-temporal information but also global temporal information, using cubic difference video patches as input. First, instead of using hand-crafted features, we design a 3D CNN architecture to automatically and effectively capture local spatio-temporal features. Then we employ a quality-score fusion strategy that considers global temporal clues to obtain the final video-level predicted score. Extensive experiments conducted on two public stereoscopic video quality datasets show that the proposed method correlates highly with human perception and outperforms state-of-the-art methods by a large margin. We also show that our 3D CNN features have more desirable properties for SVQA than the hand-crafted features of previous methods, and that our 3D CNN features together with support vector regression (SVR) can further boost performance. In addition, with no complex preprocessing and with GPU acceleration, the proposed method is shown to be computationally efficient and easy to use.
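The "cubic difference video patches" used as network input can be pictured as temporal frame differences cut into small spatio-temporal cubes. The following sketch is an assumption-laden illustration of that preprocessing only (the function name, cube sizes and striding are invented for the example; the paper's exact patch dimensions and sampling are not given in the abstract), not the authors' implementation.

```python
import numpy as np

def cubic_difference_patches(frames, t=4, p=8, stride=8):
    """Build cubic difference patches from a (T, H, W) frame stack:
    temporal frame differences stacked into t x p x p cubes, the kind
    of spatio-temporal input a 3D CNN consumes (sizes illustrative)."""
    diffs = np.diff(frames.astype(np.float32), axis=0)  # (T-1, H, W)
    T, H, W = diffs.shape
    patches = []
    for t0 in range(0, T - t + 1, t):          # non-overlapping in time
        for y in range(0, H - p + 1, stride):  # spatial grid of cubes
            for x in range(0, W - p + 1, stride):
                patches.append(diffs[t0:t0 + t, y:y + p, x:x + p])
    return np.stack(patches)                   # (N, t, p, p)
```

Each returned cube would then be fed to the 3D convolutional layers; frame differencing emphasizes the temporal distortions that distinguish stereoscopic video degradation from still-image degradation.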

    HEVC based Mixed-resolution Stereo Video Coding for Low Bitrate Transmission

    This paper presents a mixed-resolution stereo video coding model for the High Efficiency Video Codec (HEVC). The challenging aspects of mixed-resolution video coding are enabling the codec to encode frames of different resolutions and to use decoded pictures of different resolutions for referencing. These challenges are magnified when implemented in HEVC, since the incoming video frames are subdivided into coding tree units. The ingenuity of the proposed codec's design is that the information in intermediate frames is down-sampled, yet the frames retain their original resolution. To enable random access to a full-resolution decoded frame in the decoded picture buffer as a reference frame, a down-sampled version of the decoded full-resolution frame is used. The test video sequences were coded using the proposed codec and standard MV-HEVC. Results show that the proposed codec gives significantly higher coding performance than the MV-HEVC codec.
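The referencing scheme described above can be sketched abstractly: the decoded picture buffer holds full-resolution frames, and a down-sampled copy is produced on demand when a reduced-resolution frame needs a reference. The class and method names below are hypothetical, and the box filter stands in for whatever resampling filter the codec actually uses; this is a conceptual sketch, not HEVC code.

```python
import numpy as np

def downsample(frame, factor=2):
    # Simple box-filter downsampling (a stand-in for the codec's
    # actual resampling filter).
    h, w = frame.shape
    return frame.reshape(h // factor, factor,
                         w // factor, factor).mean(axis=(1, 3))

class DecodedPictureBuffer:
    """Sketch of mixed-resolution referencing: store full-resolution
    decoded frames; hand out a down-sampled copy when the frame being
    predicted is coded at reduced resolution."""
    def __init__(self):
        self.frames = []

    def insert(self, frame):
        self.frames.append(frame)

    def reference(self, idx, low_res=False):
        f = self.frames[idx]
        return downsample(f) if low_res else f
```

The design point the sketch captures is that only one copy per frame is stored, so random access to the full-resolution picture is preserved while low-resolution frames can still be predicted from it.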

    Brain Analysis While Playing 2D and 3D Video Games of Nintendo 3DS Using Electroencephalogram (EEG)

    To gain knowledge of the human brain and study human perception of stimulated events, emotions and senses, scientists have been using a few main methods: electroencephalography (EEG), computerized axial tomography (CAT) scans, magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). To date, these technologies have helped scientists, researchers and doctors understand how the brain works and perform analyses upon it [1]. This project focuses on the use of EEG to analyse the human brain. The EEG shows the electrical impulses of the brain, which can be recorded in the form of waves. Recently, the emergence of the auto-stereoscopic 3D technology of the Nintendo 3DS has brought a new gaming experience, as players can see in 3D. The objective of this project is to use EEG equipment to analyse the activity of the human brain when playing the Nintendo 3DS console game in two-dimensional (2D) mode and three-dimensional (3D) mode. The purpose of this project is also to study and compare human brain perception of 2D and 3D gaming. Our brain perceives 2D and 3D moving images of video games differently, and we want to study how different they are. In the end, this project will explain and conclude how the human brain responds to 2D and 3D gaming on the Nintendo 3DS console and what difference they make in the visual system of the brain.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Immersive Neural Graphics Primitives

    Neural radiance fields (NeRF), in particular their extension by instant neural graphics primitives, are a novel rendering method for view synthesis that uses real-world images to build photo-realistic immersive virtual scenes. Despite its potential, research on the combination of NeRF and virtual reality (VR) remains sparse. Currently, there is no integration into typical VR systems available, and the performance and suitability of NeRF implementations for VR have not been evaluated, for instance, for different scene complexities or screen resolutions. In this paper, we present and evaluate a NeRF-based framework that is capable of rendering scenes in immersive VR, allowing users to freely move their heads to explore complex real-world scenes. We evaluate our framework by benchmarking three different NeRF scenes with respect to their rendering performance at different scene complexities and resolutions. Utilizing super-resolution, our approach can yield a frame rate of 30 frames per second at a resolution of 1280x720 pixels per eye. We discuss potential applications of our framework and provide an open-source implementation online.
    Comment: Submitted to IEEE VR, currently under review
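The super-resolution step the abstract mentions amounts to rendering at a reduced resolution and upscaling to the per-eye display resolution. The sketch below illustrates only that render-low, upscale-high idea with nearest-neighbour upscaling; the paper's actual super-resolution method is almost certainly a learned upscaler, and the function name and factor here are assumptions for the example.

```python
import numpy as np

def upscale_nearest(frame, factor=2):
    """Nearest-neighbour upscaling of a (H, W) frame: each pixel is
    repeated factor times along both axes, e.g. a 640x360 render
    becomes the 1280x720 per-eye image quoted in the abstract."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)
```

Rendering a NeRF at quarter the pixel count and upscaling is what makes the quoted 30 fps budget plausible, since ray-marching cost scales roughly linearly with the number of rendered pixels.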

    Mixed-Resolution HEVC based multiview video codec for low bitrate transmission
