
    The development of local solar irradiance for outdoor computer graphics rendering

    Atmospheric effects are approximated by solving the light transfer equation (LTE) along a given viewing path. The resulting accumulated spectral energy (its visible band) arriving at the observer's eyes defines the colour of the object currently on the line of sight. Because a single rendering equation can conveniently solve the LTE for both the daylight sky and distant objects (aerial perspective), recent methods have opted for a similar approach. However, the burden of real-time calculation has forced these methods to make simplifications that are not in line with real-world observation, and consequently their results are laden with visual errors. The two most common simplifications are: i) treating the atmosphere as a full-scattering medium only, and ii) assuming a single-density atmosphere profile. This research explored replacing the real-time calculation involved in solving the LTE with an analytical approach, so that the two simplifications made by previous real-time methods can be avoided. The model was implemented on top of a flight simulator prototype system, since the requirements of such a system match the objectives of this study. Results were verified against actual images of daylight skies. Comparisons were also made with the results of previous methods to showcase the proposed model's strengths and advantages over its peers.
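
    For orientation, the sketch below shows the kind of per-ray LTE evaluation this abstract refers to: numerically accumulating single-scattered sunlight along a viewing path through an exponential atmosphere. It is not the paper's analytical model (which replaces exactly this numerical integration); the constants and helper names are illustrative assumptions.

```python
# Minimal single-scattering sketch of the light transfer equation along a view ray.
# Illustrative only; constants and function names are assumptions, not the paper's model.
import numpy as np

BETA_RAYLEIGH = np.array([5.8e-6, 13.5e-6, 33.1e-6])  # per-metre RGB scattering coefficients
SCALE_HEIGHT = 8000.0                                  # metres, exponential density profile

def density(h):
    """Relative air density at altitude h (single exponential profile)."""
    return np.exp(-max(h, 0.0) / SCALE_HEIGHT)

def in_scattered_radiance(origin, direction, path_length, sun_radiance, steps=64):
    """Accumulate single-scattered sunlight along the view ray."""
    ds = path_length / steps
    optical_depth = np.zeros(3)
    radiance = np.zeros(3)
    for i in range(steps):
        p = origin + direction * (i + 0.5) * ds
        rho = density(p[2])                      # assume the z axis is altitude
        optical_depth += BETA_RAYLEIGH * rho * ds
        transmittance = np.exp(-optical_depth)   # attenuation back towards the eye
        # A full model would also march a ray towards the sun to attenuate sunlight
        # reaching p; here that factor is folded into sun_radiance for brevity.
        radiance += transmittance * BETA_RAYLEIGH * rho * sun_radiance * ds
    return radiance
```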

    Adaptive GPU-accelerated force calculation for interactive rigid molecular docking using haptics

    Molecular docking systems model and simulate in silico the interactions of intermolecular binding. Haptics-assisted docking enables the user to interact with the simulation via their sense of touch, but the sensitivity of the human haptic system imposes a stringent time constraint on the computation of forces: to deliver high-fidelity, smooth and stable feedback, the haptic feedback loop should run at rates of 500 Hz to 1 kHz. We present an adaptive force calculation approach that can be executed in parallel on a wide range of Graphics Processing Units (GPUs) for interactive haptics-assisted docking, with wider applicability to molecular simulations. Prior to the interactive session, either a regular grid or an octree is selected, according to the available GPU memory, to determine the set of interatomic interactions within a cutoff distance; the total force is then calculated from this set. The approach can achieve force updates in less than 2 ms for molecular structures comprising hundreds of thousands of atoms each, with performance improvements of up to 90 times the speed of current CPU-based force calculation approaches used in interactive docking. Furthermore, it overcomes several computational limitations of previous approaches, such as pre-computed force grids, and could potentially be used to model receptor flexibility at haptic refresh rates.
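
    To make the cutoff idea concrete, here is a CPU-side sketch of the regular-grid (cell-list) variant: atoms are binned into cells of cutoff width, so each ligand atom is tested only against atoms in neighbouring cells. The paper's GPU kernels, octree variant and actual force field are not reproduced; the Lennard-Jones stand-in and all names below are assumptions.

```python
# Cell-list sketch of cutoff-limited force calculation (illustrative, CPU only).
import numpy as np
from collections import defaultdict

def build_cell_list(positions, cutoff):
    """Map each grid cell (ix, iy, iz) to the indices of the atoms it contains."""
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        cells[tuple((p // cutoff).astype(int))].append(i)
    return cells

def lennard_jones_force(r_vec, epsilon=1.0, sigma=1.0):
    """Simple pairwise force used here as a stand-in for the docking force field."""
    r2 = np.dot(r_vec, r_vec)
    inv_r2 = sigma * sigma / r2
    inv_r6 = inv_r2 ** 3
    return 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) * r_vec / r2

def total_force(ligand_pos, receptor_pos, cutoff):
    """Sum forces on the ligand from receptor atoms within the cutoff distance."""
    cells = build_cell_list(receptor_pos, cutoff)
    force = np.zeros(3)
    for p in ligand_pos:
        base = (p // cutoff).astype(int)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((base[0] + dx, base[1] + dy, base[2] + dz), []):
                        r_vec = p - receptor_pos[j]
                        if np.dot(r_vec, r_vec) < cutoff * cutoff:
                            force += lennard_jones_force(r_vec)
    return force
```

    On a GPU, the inner loops over ligand atoms and neighbouring cells would be distributed across threads, which is what makes sub-2 ms updates plausible for very large structures.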

    Visualization for the Physical Sciences


    Ambient occlusion and shadows for molecular graphics

    Computer-based visualisations of molecules have been produced since as early as the 1950s to aid researchers in their understanding of biomolecular structures. An important consideration for molecular graphics software is the ability to visualise the 3D structure of the molecule clearly. Recent advancements in computer graphics have led to improved rendering capabilities in visualisation tools, and current shading languages allow the inclusion of advanced effects such as ambient occlusion and shadows that greatly improve comprehension of the 3D shapes of molecules. This thesis focuses on finding improved solutions for the real-time rendering of molecular graphics on modern computers. Methods of calculating ambient occlusion and both hard and soft shadows are examined and implemented to give the user a more complete experience when navigating large molecular structures.
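
    As a rough illustration of the ambient occlusion idea for sphere-based molecular models, the sketch below casts random hemisphere rays from a surface point and counts how many are blocked by neighbouring atom spheres. Real-time implementations do this on the GPU with far cheaper approximations; the sampling scheme and names here are assumptions.

```python
# Monte Carlo ambient occlusion over atom spheres (illustrative reference version).
import numpy as np

def ray_hits_sphere(origin, direction, centre, radius):
    """Return True if the ray (origin, unit direction) intersects the sphere."""
    oc = origin - centre
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    return disc >= 0.0 and (-b + np.sqrt(disc)) > 0.0

def ambient_occlusion(point, normal, centres, radii, samples=64,
                      rng=np.random.default_rng(0)):
    """Fraction of the hemisphere above `point` that is unoccluded (1 = fully open)."""
    unblocked = 0
    for _ in range(samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0.0:
            d = -d                        # keep the sample in the upper hemisphere
        origin = point + 1e-3 * normal    # offset to avoid self-intersection
        if not any(ray_hits_sphere(origin, d, c, r) for c, r in zip(centres, radii)):
            unblocked += 1
    return unblocked / samples
```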

    ReLiShaft: realistic real-time light shaft generation taking sky illumination into account

    Rendering atmospheric phenomena has its basis in the fields of atmospheric optics and meteorology and is increasingly used in games and movies. Although many researchers have focused on generating and enhancing realistic light shafts, there is still room for improvement in both qualitative and quantitative terms. In this paper, a new technique, called ReLiShaft, is presented to generate realistic light shafts for outdoor rendering. In the first step, a realistic light shaft is constructed in real time with respect to the sun position and sky colour at any specific location, date and time. Then, hemicube visibility-test radiosity is employed to reveal the effect of the generated sky colour on the environment. Two different methods are considered: ray marching based on epipolar sampling for indoor environments, and filtering on regular epipolar lines with z-partitioning for outdoor environments. Shadow maps and shadow volumes are integrated to keep computational costs in check. Through this technique, the light shaft colour is adjusted according to the sky colour at any specific location, date and time, and the results show different light shaft colours at different times of day in real time.
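
    The core ray-marching step behind such light shafts can be sketched as follows: march along the camera ray, test each sample's visibility towards the light (here a caller-supplied predicate standing in for a shadow-map or shadow-volume lookup), and accumulate in-scattered light attenuated by the medium. Epipolar sampling and the paper's sky-colour model are not reproduced; the parameters below are assumptions.

```python
# Single-ray light-shaft accumulation by ray marching (illustrative sketch).
import numpy as np

def light_shaft(ray_origin, ray_dir, ray_length, light_colour, visible_from_light,
                scattering=0.02, extinction=0.02, steps=128):
    """Accumulated in-scattered radiance along one camera ray."""
    ds = ray_length / steps
    transmittance = 1.0
    radiance = np.zeros(3)
    for i in range(steps):
        p = ray_origin + ray_dir * (i + 0.5) * ds
        if visible_from_light(p):        # shadow-map / shadow-volume test stands in here
            radiance += transmittance * scattering * light_colour * ds
        transmittance *= np.exp(-extinction * ds)
    return radiance
```

    In a ReLiShaft-style pipeline, `light_colour` would be driven by the sky model for the chosen location, date and time, which is what ties the shaft colour to the sky colour.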

    Auditory-visual interaction in computer graphics

    Generating high-fidelity images in real time at reasonable frame rates still remains one of the main challenges in computer graphics. Furthermore, visuals are only one of the multiple sensory cues that must be delivered simultaneously in a multi-sensory virtual environment. The most frequently used sense besides vision, in virtual environments and entertainment, is audio. While the rendering community focuses on solving the rendering equation more quickly using various algorithmic and hardware improvements, the exploitation of human limitations to assist in this process remains largely unexplored. Many findings in the research literature demonstrate physical and psychological limitations of humans, including attentional and perceptual limitations of the Human Sensory System (HSS). Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resultant difference in image quality. Furthermore, cross-modal effects, that is the influence of one sensory input on another, for example sound and visuals, have also recently been shown to have a substantial impact on viewer perception of a virtual environment. In this thesis, auditory-visual cross-modal interaction research findings have been investigated and adapted for graphics rendering purposes. The results from five psychophysical experiments, involving 233 participants, showed that, even in the realm of computer graphics, there is a strong relationship between vision and audition in both the spatial and temporal domains. The first experiment, investigating auditory-visual cross-modal interaction in the spatial domain, showed that unrelated sound effects reduce the perceived rendering quality threshold. In the following experiments, the effect of audio on temporal visual perception was investigated. The results obtained indicate that audio with certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. Furthermore, introducing the sound effect of footsteps to walking animations increased the perceived visual smoothness. These results suggest that, under certain conditions, the number of frames that need to be rendered each second can be reduced, saving valuable computation time, without the viewer being aware of this reduction. This is another step towards a comprehensive understanding of auditory-visual cross-modal interaction and its use in high-fidelity interactive multi-sensory virtual environments.

    Oriented tensor reconstruction: tracing neural pathways from diffusion tensor MRI

    In this paper we develop a new technique for tracing anatomical fibers from 3D tensor fields. The technique extracts salient tensor features using a local regularization technique that allows the algorithm to cross noisy regions and bridge gaps in the data. We applied the method to human brain DT-MRI data and recovered identifiable anatomical structures that correspond to the white-matter brain-fiber pathways. The images in this paper are derived from a dataset with 121 × 88 × 60 resolution. We were able to recover fibers below the voxel-size resolution by applying the regularization technique, i.e., by using a priori assumptions about fiber smoothness. The regularization is performed with a moving least squares filter incorporated directly into the tracing algorithm.
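
    The general shape of such a tracer can be sketched as follows: each step follows the principal eigenvector of a locally averaged tensor, with the averaging standing in for the regularization that bridges noisy voxels. Here the filter is a simple Gaussian-weighted (zeroth-order moving-least-squares) average rather than the paper's oriented filter, and `tensor_at` is an assumed trilinear lookup into the DT-MRI volume supplied by the caller.

```python
# Eigenvector-based fiber tracing with a simple local regularizer (illustrative sketch).
import numpy as np

def regularised_tensor(point, tensor_at, radius=1.5):
    """Gaussian-weighted average of neighbouring tensors to bridge noisy voxels."""
    acc, wsum = np.zeros((3, 3)), 0.0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                offset = np.array([dx, dy, dz], dtype=float)
                w = np.exp(-np.dot(offset, offset) / (radius * radius))
                acc += w * tensor_at(point + offset)
                wsum += w
    return acc / wsum

def trace_fiber(seed, tensor_at, step=0.5, n_steps=200):
    """Follow the principal diffusion direction from `seed`, keeping orientation consistent."""
    p = np.asarray(seed, dtype=float)
    prev_dir = None
    path = [p.copy()]
    for _ in range(n_steps):
        D = regularised_tensor(p, tensor_at)
        eigvals, eigvecs = np.linalg.eigh(D)
        direction = eigvecs[:, np.argmax(eigvals)]
        if prev_dir is not None and np.dot(direction, prev_dir) < 0.0:
            direction = -direction       # eigenvectors are sign-ambiguous; avoid flips
        p = p + step * direction
        path.append(p.copy())
        prev_dir = direction
    return np.array(path)
```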