28 research outputs found

    Autostereoscopy and Motion Parallax for Mobile Computer Games Using Commercially Available Hardware

    Get PDF
    In this paper we present a solution for the three-dimensional representation of mobile computer games which includes both motion parallax and an autostereoscopic display. The system was built on hardware available on the consumer market: an iPhone 3G with a Wazabee 3Dee Shell, an autostereoscopic extension for the iPhone. The motion sensor of the phone was used to implement the motion parallax effect as well as a tilt compensation for the autostereoscopic display. The system was evaluated in a limited user study on mobile 3D displays. Despite some obstacles that needed to be overcome and a few remaining shortcomings of the final system, an overall acceptable 3D experience could be reached. This leads to the conclusion that portable systems for the consumer market which include 3D displays are within reach.
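
    The paper's implementation is not reproduced in this listing; as a rough illustration of the idea of driving motion parallax from the phone's motion sensor, the sketch below maps accelerometer tilt to a virtual camera offset. All names and the gain constant are hypothetical, not taken from the paper.

```python
import math

# Tuning value: metres of virtual camera shift per radian of tilt (hypothetical).
PARALLAX_GAIN = 0.05

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (radians) from a gravity-dominated accelerometer sample (in g)."""
    pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, math.sqrt(ax * ax + az * az))
    return pitch, roll

def camera_offset(ax, ay, az):
    """Map device tilt to a lateral/vertical offset of the virtual camera (motion parallax)."""
    pitch, roll = tilt_angles(ax, ay, az)
    return PARALLAX_GAIN * pitch, PARALLAX_GAIN * roll

# Example: device lying roughly flat, tilted slightly about one axis.
dx, dy = camera_offset(0.17, 0.02, 0.98)
print(f"virtual camera shift: x = {dx:.4f} m, y = {dy:.4f} m")
```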

    Decoding working memory-related information from repeated psychophysiological EEG experiments using convolutional and contrastive neural networks

    Get PDF
    Objective. Extracting reliable information from electroencephalogram (EEG) is difficult because the low signal-to-noise ratio and significant intersubject variability seriously hinder statistical analyses. However, recent advances in explainable machine learning open a new strategy to address this problem. Approach. The current study evaluates this approach using results from the classification and decoding of electrical brain activity associated with information retention. We designed four neural network models differing in architecture, training strategies, and input representation to classify single experimental trials of a working memory task. Main results. Our best models achieved an accuracy (ACC) of 65.29 ± 0.76 and Matthews correlation coefficient of 0.288 ± 0.018, outperforming the reference model trained on the same data. The highest correlation between classification score and behavioral performance was 0.36 (p = 0.0007). Using analysis of input perturbation, we estimated the importance of EEG channels and frequency bands in the task at hand. The set of essential features identified for each network varies. We identified a subset of features common to all models that points to brain regions and frequency bands consistent with current neurophysiological knowledge of the processes critical to attention and working memory. Finally, we proposed sanity checks to further examine the robustness of each model's set of features. Significance. Our results indicate that explainable deep learning is a powerful tool for decoding information from EEG signals. It is crucial to train and analyze a range of models to identify stable and reliable features. Our results highlight the need for explainable modeling, as the model with the highest ACC appeared to use residual artifactual activity.
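
    The paper's models and perturbation scheme are not reproduced here; as a rough illustration of input-perturbation importance analysis, the sketch below replaces one EEG channel at a time with matched noise and records the drop in a classification metric. The dummy model, data, and metric are placeholders.

```python
import numpy as np

def channel_importance(model, X, y, metric, rng=None):
    """Per-channel importance: drop in `metric` after replacing each channel with matched noise.

    X has shape (trials, channels, time); y holds labels; model must expose .predict(X).
    """
    rng = rng or np.random.default_rng(0)
    baseline = metric(y, model.predict(X))
    drops = np.zeros(X.shape[1])
    for ch in range(X.shape[1]):
        Xp = X.copy()
        # Replace the channel with Gaussian noise of matching standard deviation.
        Xp[:, ch, :] = rng.normal(0.0, X[:, ch, :].std(), size=Xp[:, ch, :].shape)
        drops[ch] = baseline - metric(y, model.predict(Xp))
    return drops  # larger drop => channel carries more task-relevant information

class DummyModel:
    """Placeholder classifier: thresholds the mean of channel 0 (illustration only)."""
    def predict(self, X):
        return (X[:, 0, :].mean(axis=1) > 0).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8, 100))          # 64 fake trials, 8 channels, 100 samples
y = (X[:, 0, :].mean(axis=1) > 0).astype(int)
accuracy = lambda y_true, y_pred: float((y_true == y_pred).mean())
print(channel_importance(DummyModel(), X, y, accuracy, rng))  # channel 0 should dominate
```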

    Quasi-Static Voltage Scaling for Energy Minimization With Time Constraints

    Full text link

    High-Quality Real-Time Depth-Image-Based-Rendering

    No full text
    With depth sensors becoming more and more common, and applications with varying viewpoints (e.g. virtual reality) becoming more and more popular, there is a growing demand for real-time depth-image-based-rendering algorithms that reach a high quality. Starting from a depth-image-based renderer that is among the top performers in terms of quality, we develop a real-time version. While also reaching a high quality, the new OpenGL-based renderer decreases runtime by at least two orders of magnitude. This was made possible by discovering similarities between forward-based and mesh-based rendering, which enabled us to remove the common parallelization bottleneck of competing memory access, and was facilitated by the implementation of accurate yet fast algorithms for the different parts of the rendering pipeline. We evaluated the proposed renderer using a publicly available dataset with ground-truth depth and camera data, which contains both rapid camera movements and rotations as well as complex scenes and is therefore challenging to project accurately.
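
    The OpenGL renderer itself is not reproduced here; the CPU-side sketch below only illustrates the core depth-image-based-rendering step: back-projecting pixels of a source view using its depth map and re-projecting them into a target view under a simple pinhole model. The intrinsics and pose are made-up example values, and occlusion and hole handling are omitted.

```python
import numpy as np

def backproject(depth, K):
    """Back-project an HxW metric depth map into camera-space 3D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pixels @ np.linalg.inv(K).T        # viewing ray per pixel
    return rays * depth[..., None]            # scale each ray by its depth

def project(points, K, R, t):
    """Project camera-space points into a target view given relative rotation R and translation t."""
    cam = points @ R.T + t                    # source-camera -> target-camera transform
    pix = cam @ K.T
    return pix[..., :2] / pix[..., 2:3], cam[..., 2]   # pixel coordinates and target-view depth

# Made-up example: identity relative pose and a constant-depth plane at 2 m.
K = np.array([[500.0, 0.0, 160.0],
              [0.0, 500.0, 120.0],
              [0.0, 0.0, 1.0]])
depth = np.full((240, 320), 2.0)
uv, z = project(backproject(depth, K), K, np.eye(3), np.zeros(3))
print(uv[120, 160], z[120, 160])              # the centre pixel maps onto itself at depth 2.0
```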

    Cubic Spline Interpolation in Real-Time Applications using Three Control Points

    Get PDF
    Spline interpolation is widely used in many different applications like computer graphics, animations and robotics. Many of these applications run in real-time with constraints on computational complexity, thus fueling the need for computationally inexpensive, real-time, continuous and loop-free data interpolation techniques. Often Catmull-Rom splines are used, which use four control-points: the two points between which to interpolate, as well as the point directly before and the one directly after. If interpolating over time, this last point will lie in the future. However, in real-time applications future values may not be known in advance, meaning that Catmull-Rom splines are not applicable. In this paper we introduce another family of interpolation splines (dubbed Three-Point-Splines) which show the same characteristics as Catmull-Rom splines, but which use only three control-points, omitting the one “in the future”. Therefore they can generate smooth interpolation curves even in applications which do not have knowledge of future points, without the need for computationally more complex methods. The generated curves are more rigid than Catmull-Rom curves, and because of that the Three-Point-Splines will not generate self-intersections within an interpolated curve segment, a property that has to be introduced to Catmull-Rom splines by careful parameterization. Thus, the Three-Point-Splines allow for greater freedom in parameterization and can therefore be adapted to the application at hand, e.g. to a requested curvature or limitations on acceleration/deceleration. We will also show a method that allows the control-points to be changed during an ongoing interpolation, both with Three-Point-Splines and with Catmull-Rom splines.
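
    The Three-Point-Spline formulation itself is not reproduced in this listing; the sketch below shows only the standard uniform Catmull-Rom segment the abstract compares against, making explicit that evaluating the segment between p1 and p2 requires the later point p3, which is exactly the "future" point that is unavailable in real-time use.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the uniform Catmull-Rom segment running from p1 to p2 at parameter t in [0, 1]."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# The segment passes through p1 at t = 0 and p2 at t = 1, but its shape also depends
# on p3, the point "in the future" that a real-time application may not know yet.
print(catmull_rom([0, 0], [1, 0], [2, 1], [3, 3], 0.0))   # -> [1. 0.]
print(catmull_rom([0, 0], [1, 0], [2, 1], [3, 3], 1.0))   # -> [2. 1.]
print(catmull_rom([0, 0], [1, 0], [2, 1], [3, 3], 0.5))   # an intermediate point on the curve
```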
