
    A perceptual approach for stereoscopic rendering optimization

    The traditional approach to stereoscopic rendering requires rendering the scene for the left and right eyes separately, which doubles the rendering complexity. In this study, we propose a perceptually based approach for accelerating stereoscopic rendering. The optimization is based on Binocular Suppression Theory, which holds that the overall percept of a region in a stereo pair is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be exploited for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. Combining the results of this investigation with the principles of visual attention, we infer that the optimization is feasible if the high-quality view has more intensity contrast. We therefore performed a subjective experiment in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification applied to a single view is not perceptible if it decreases the intensity contrast, and can thus be used for stereoscopic rendering. (C) 2009 Elsevier Ltd. All rights reserved.
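    As a loose illustration of the selection rule above (not the paper's implementation), the view that keeps full rendering quality could be chosen by comparing a simple intensity-contrast measure such as RMS contrast; the function names and the flat-pixel-list representation are assumptions made for this sketch:

```python
import math

def rms_contrast(pixels):
    """RMS (root-mean-square) intensity contrast of a flat list of
    grayscale values in [0, 1]."""
    mean = sum(pixels) / len(pixels)
    return math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))

def pick_high_quality_view(left_pixels, right_pixels):
    """Choose which eye's view keeps full rendering quality. Per the
    hypothesis above, simplifications hide best in the lower-contrast
    view, so the higher-contrast view stays high quality."""
    if rms_contrast(left_pixels) >= rms_contrast(right_pixels):
        return "left"
    return "right"
```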

    A Smart Browsing System with Colour Image Enhancement for Surveillance Videos

    Surveillance cameras have been widely installed in large cities to monitor and record human activities for various applications. Since surveillance cameras typically record all events 24 hours a day, searching for specific targets requires an enormous amount of manual viewing, so a system that helps the user quickly locate targets of interest is in high demand. This paper proposes a smart surveillance-video browsing system with colour image enhancement. The basic idea is to collect the moving objects, which carry the most significant information in surveillance videos, and construct a corresponding compact video by adjusting the positions of these moving objects. The compact video rearranges the spatiotemporal coordinates of moving objects to improve compression while still preserving the temporal relationships among them, so the essential activities of the original surveillance video are retained. This paper presents the details of the browsing system and the approach to producing the compact video from a source surveillance video, yielding a compact video at high resolution. DOI: 10.17762/ijritcc2321-8169.15038
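    A minimal sketch of the temporal-rearrangement idea, assuming a simplified representation in which each moving object is a (start, duration) "tube" and ignoring the spatial-overlap tests a real video-synopsis system would perform; all names here are illustrative assumptions:

```python
def compact_schedule(tubes, min_gap=1):
    """Greedily assign new start times to moving-object 'tubes'
    (tracks given as (orig_start, duration) pairs), preserving their
    original temporal order while packing them close together so the
    output video is much shorter. Tubes may now overlap in time, which
    is the point of a synopsis; a real system would also check that
    overlapping tubes do not collide spatially."""
    order = sorted(range(len(tubes)), key=lambda i: tubes[i][0])
    new_start = {}
    t = 0
    for i in order:
        new_start[i] = t
        t += min_gap  # small stagger keeps the relative ordering visible
    return new_start
```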

    A Scalable GPU-Based Approach to Shading and Shadowing for Photo-Realistic Real-Time Augmented Reality

    Fast scalable visualization techniques for interactive billion-particle walkthrough

    This research develops a comprehensive framework for interactive walkthrough of one billion particles in an immersive virtual environment, enabling interrogative visualization of large atomistic simulation data. Mixing scientific and engineering approaches, the framework is based on four key techniques: adaptive data compression based on space-filling curves, octree-based visibility and occlusion culling, predictive caching based on machine learning, and scalable data reduction based on parallel and distributed processing. For parallel rendering, the system combines functional parallelism, data parallelism, and temporal parallelism to improve interactivity. The visualization framework is applicable not only to materials simulation but also to computational biology, applied mathematics, mechanical engineering, nanotechnology, and other fields.
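    The space-filling-curve ordering that underpins the adaptive compression step can be sketched with a Morton (Z-order) key; the bit width and function name are illustrative assumptions, not the framework's actual code:

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of 3-D integer coordinates into a Morton
    (Z-order) key. Particles that are adjacent along the curve tend to
    be adjacent in space, which is what makes space-filling-curve
    ordering a good basis for compressing and streaming particle data."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b)
        code |= ((y >> b) & 1) << (3 * b + 1)
        code |= ((z >> b) & 1) << (3 * b + 2)
    return code
```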

    Light-Based Sample Reduction Methods for Interactive Relighting of Scenes with Minute Geometric Scale

    Rendering production-quality cinematic scenes incurs high computational and temporal costs. From an artist's perspective, one must wait several hours for feedback on even minute changes to light positions and parameters. Previous work approximates scenes so that adjustments to lights can be carried out with interactive feedback, as long as geometry and materials remain constant. We build on these methods by proposing means by which objects with high geometric complexity at the subpixel level, such as hair and foliage, can be approximated for real-time cinematic relighting. Our methods make no assumptions about the geometry or shaders in a scene, and as such are fully generalized. We show that clustering techniques can greatly reduce multisampling while maintaining image fidelity, with error significantly lower than sparse sampling without clustering, provided that no shadows are computed. Scenes that produce noise-like shadow patterns when sparse shadow samples are taken suffer additional error from those shadows. We present a viable solution to scalable scene approximation for lower sampling resolutions, provided that a robust solution to shadow approximation for sub-pixel geometry becomes available in the future.
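    The sample-clustering idea can be illustrated with a plain k-means pass over sub-pixel sample positions, shading only one representative per cluster; this is a generic stand-in under assumed names, not the authors' clustering method:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over points given as coordinate tuples. In the
    sample-reduction setting, each cluster of dense sub-pixel samples
    would be replaced by a single representative shading evaluation."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest current center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            buckets[j].append(p)
        # recompute centers as bucket means; keep old center if empty
        centers = [
            tuple(sum(x) / len(b) for x in zip(*b)) if b else centers[j]
            for j, b in enumerate(buckets)
        ]
    return centers
```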

    Toward Guaranteed Illumination Models for Non-Convex Objects

    Illumination variation remains a central challenge in object detection and recognition. Existing analyses of illumination variation typically pertain to convex, Lambertian objects and guarantee quality of approximation only in an average-case sense. We show that it is possible to build V(vertex)-description convex cone models with worst-case performance guarantees for non-convex Lambertian objects. Namely, a natural verification test based on the angle to the constructed cone is guaranteed to accept any image that is sufficiently well approximated by an image of the object under some admissible lighting condition, and to reject any image that does not have a sufficiently good approximation. The cone models are generated by sampling point illuminations with sufficient density, which follows from a new perturbation bound for point images in the Lambertian model. As the number of point images required for guaranteed verification may be large, we introduce a new formulation for cone-preserving dimensionality reduction, which leverages tools from sparse and low-rank decomposition to reduce complexity while controlling the approximation error with respect to the original cone.
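    The accept/reject rule can be sketched as follows. Note that this toy version bounds the angle to the cone by the angle to the nearest single generator, whereas the paper's actual test measures the angle to the full convex cone; all names are assumptions for illustration:

```python
import math

def angle_to_cone(x, generators):
    """Crude upper bound on the angle between vector x and the convex
    cone spanned by `generators`: the smallest angle to any single
    generator. An exact test would project x onto the whole cone."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    def norm(a):
        return math.sqrt(dot(a, a))
    best = math.pi
    for g in generators:
        c = max(-1.0, min(1.0, dot(x, g) / (norm(x) * norm(g))))
        best = min(best, math.acos(c))
    return best

def verify(image_vec, generators, tau):
    """Accept iff the (approximate) angle to the cone is below tau."""
    return angle_to_cone(image_vec, generators) <= tau
```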

    Analysis of a Moon outpost for Mars enabling technologies through a Virtual Reality environment

    The Moon is now being considered as the starting point for human exploration of the Solar System beyond low-Earth orbit. Many national space agencies are actively advocating building up a lunar surface habitat capability starting from 2030 or earlier; according to the ESA Technology Roadmaps for Exploration, this should be the result of broad international cooperation. Taking an incremental approach to reduce the risks and costs of space missions, a lunar outpost can be considered a test bed for Mars, allowing the validation of enabling technologies such as water processing, waste management, power generation and storage, automation, robotics, and human factors. Our natural satellite is rich in resources that could be used to pursue such a goal through a necessary assessment of ISRU techniques. The aim of this research is the analysis of a Moon outpost dedicated to the validation of enabling technologies for human space exploration. The main building blocks of the outpost are identified and feasible evolutionary scenarios are depicted to highlight the incremental steps needed to build it up. The main aspects addressed include outpost location and architecture, as well as ISRU facilities, which in the far term can help reduce launch mass by producing hydrogen and oxygen for consumables, ECLSS, and propellant for Earth-Moon sorties and Mars journeys. A test outpost is implemented in a Virtual Reality (VR) environment as a first proof of concept, where the elements are computer-based mock-ups. The VR facility offers a first-person interactive perspective, allowing specific in-depth analyses of ergonomics and operations. The feedback from these analyses is crucial for highlighting requirements that might otherwise be overlooked, while the general outputs are fundamental for drafting procedures. Moreover, simulating astronauts' EVAs is useful for pre-flight training, but it can also serve as an additional tool for troubleshooting failures during flight controllers' nominal operations. Additionally, illumination maps have been obtained to study the lighting conditions, which are essential parameters for assessing the location of the base elements. This unique simulation environment may offer the largest suite of benefits during the design and development phase, as it makes it possible to design future systems that optimize operations, thus maximizing the mission's scientific return, and to enhance astronaut training while saving time and cost. The paper describes how a virtual environment could help design a Moon outpost for an incremental architecture strategy towards Mars missions.

    Planetary Rover Simulation for Lunar Exploration Missions

    When planning planetary rover missions, it is useful to develop intuition and driving skills in, quite literally, alien environments before incurring the cost of reaching those locales. Simulators make it possible to operate in environments that have the physical characteristics of target locations without the expense and overhead of extensive physical tests. To that end, NASA Ames and Open Robotics collaborated on a Lunar rover driving simulator based on the open-source Gazebo simulation platform and leveraging ROS (Robot Operating System) components. The simulator was integrated with research and mission software for rover driving, system monitoring, and science instrument simulation to constitute an end-to-end Lunar mission simulation capability. Although we expect our simulator to be applicable to arbitrary Lunar regions, we designed for a reference mission of prospecting in polar regions. The harsh lighting and low illumination angles at the Lunar poles combine with the unique reflectance properties of Lunar regolith to present a challenging visual environment for both human and computer perception. Our simulator therefore emphasizes high-fidelity visual simulation in order to produce synthetic imagery suitable for evaluating human rover drivers on navigation tasks, as well as providing test data for computer vision software development. In this paper, we describe the software used to construct the simulated Lunar environment and the components of the driving simulation. Our synthetic terrain generation software artificially increases the resolution of Lunar digital elevation maps by fractal synthesis and inserts craters and rocks based on Lunar size-frequency distribution models. We describe the enhancements necessary to import large-scale, high-resolution terrains into Gazebo, as well as our approach to modeling the visual environment of the Lunar surface. An overview of the mission software system is provided, along with how ROS was used to emulate flight software components that had not yet been developed. Finally, we discuss the effect of using the high-fidelity synthetic Lunar images for visual odometry. We also characterize the wheel slip model and note some inconsistencies in the produced wheel slip behaviour.
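    The fractal-synthesis step for raising DEM resolution can be illustrated in one dimension with midpoint displacement (a 2-D DEM version would use diamond-square steps over the height grid); the function name and parameters are assumptions for this sketch, not the mission software:

```python
import random

def fractal_upsample(profile, octaves, roughness=0.5, seed=0):
    """1-D midpoint-displacement sketch of fractal detail synthesis.
    Each pass roughly doubles the sample count of an elevation profile,
    inserting midpoints perturbed by random displacement whose amplitude
    halves per octave, so coarse DEM samples gain plausible fine detail."""
    rng = random.Random(seed)
    amp = roughness
    for _ in range(octaves):
        out = []
        for a, b in zip(profile, profile[1:]):
            out.append(a)
            out.append((a + b) / 2 + rng.uniform(-amp, amp))
        out.append(profile[-1])
        profile = out
        amp *= 0.5  # finer octaves contribute smaller displacements
    return profile
```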