79,533 research outputs found

    Overview of crowd simulation in computer graphics

    The use of high-powered computer graphics technology in education, entertainment, games, simulation, and virtual heritage applications has made it an important area of research. In simulation, according to Tecchia et al. (2002), it is important to create an interactive, complex, and realistic virtual world so that the user can have an immersive experience while navigating through it. As the size and complexity of virtual environments increase, it becomes ever more necessary to populate them with people, which is why rendering crowds in real time is crucial. Crowd simulation generally comprises three areas: behavioral realism (Thompson and Marchant 1995), high-quality visualization (Dobbyn et al. 2005), and the convergence of the two. Behavioral realism is mainly pursued in simple 2D visualizations, where most of the attention is devoted to simulating the behavior of the group. High-quality visualization is regularly used in film production and computer games; it concentrates on producing convincing visuals rather than realistic behavior. The convergence of both areas is mainly needed in applications such as training systems, where valid replication of behavior is combined with high-quality visualization to make the training effective.
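
    As a concrete illustration of the behavioral side, the short Python sketch below implements a minimal per-agent steering update (goal seeking plus separation from neighbours). It is a generic sketch, not code from any of the surveyed systems; all names and parameter values are hypothetical.

        import numpy as np

        def step_crowd(positions, goals, dt=0.05, max_speed=1.4,
                       sep_radius=0.6, sep_weight=1.2):
            """Advance every agent one time step: seek the goal, avoid neighbours."""
            n = positions.shape[0]
            velocities = np.zeros_like(positions)
            for i in range(n):
                # Attraction toward the agent's goal.
                to_goal = goals[i] - positions[i]
                dist = np.linalg.norm(to_goal)
                desired = to_goal / dist * max_speed if dist > 1e-6 else np.zeros(2)
                # Repulsion from nearby agents (separation term).
                push = np.zeros(2)
                for j in range(n):
                    if i == j:
                        continue
                    offset = positions[i] - positions[j]
                    d = np.linalg.norm(offset)
                    if 1e-6 < d < sep_radius:
                        push += offset / d * (sep_radius - d)
                velocities[i] = desired + sep_weight * push
            return positions + velocities * dt

    Production crowd systems replace the quadratic neighbour loop with a spatial grid or tree, which is what makes real-time simulation of large crowds feasible.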

    Path-tracing Monte Carlo Library for 3D Radiative Transfer in Highly Resolved Cloudy Atmospheres

    Interactions between clouds and radiation are at the root of many difficulties in numerically predicting future weather and climate and in retrieving the state of the atmosphere from remote sensing observations. The large range of issues related to these interactions, and in particular to three-dimensional interactions, has motivated the development of accurate radiative tools able to compute all types of radiative metrics, from monochromatic, local, and directional observables to integrated energetic quantities. Continuing this community effort, we propose here an open-source library for general use in Monte Carlo algorithms. The library is devoted to accelerating path tracing in complex data, typically high-resolution, large-domain grounds and clouds. The main algorithmic advances embedded in the library concern the construction and traversal of hierarchical grids that accelerate the tracing of paths through heterogeneous fields in null-collision (maximum cross-section) algorithms. We show that, with these hierarchical grids, the computing time is only weakly sensitive to the refinement of the volumetric data. The library is tested with a rendering algorithm that produces synthetic images of cloud radiances. Two further examples are given as illustrations: one analyses the transmission of solar radiation under a cloud, together with its sensitivity to an optical parameter, and the other assesses a parametrization of the 3D radiative effects of clouds.
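
    The null-collision idea the library accelerates can be summarised in a few lines. The Python sketch below shows 1D delta tracking against a single constant majorant; it is a minimal stand-in for illustration only (the names sigma_t and sigma_max are assumptions), and it does not reproduce the library's hierarchical-grid traversal, which tightens the majorant locally.

        import math
        import random

        def delta_track(sigma_t, sigma_max, x, direction, domain_len):
            """Sample a real collision along a 1D line of sight through a
            heterogeneous extinction field sigma_t(x), using null-collision
            (maximum cross-section) tracking against a majorant sigma_max.
            Returns the collision position, or None if the path escapes."""
            while True:
                # Tentative free path drawn against the majorant extinction.
                x += direction * (-math.log(1.0 - random.random()) / sigma_max)
                if not 0.0 <= x <= domain_len:
                    return None                      # escaped the medium
                # Accept a real collision with probability sigma_t(x)/sigma_max;
                # otherwise the collision is "null" and the walk continues.
                if random.random() < sigma_t(x) / sigma_max:
                    return x

    For example, delta_track(lambda x: 0.5 + 0.4 * math.sin(x), 0.9, 0.0, 1.0, 10.0) traces through a sinusoidal extinction field. The cost of such a walk is governed by how tight the majorant is, which is precisely what the hierarchical grids address.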

    Trends and concerns in digital cartography

    CISRG discussion paper.

    An Analysis of Publication Venues for Automatic Differentiation Research

    We present the results of our analysis of publication venues for papers on automatic differentiation (AD), covering academic journals and conference proceedings. Our data are collected from the AD publications database maintained by the autodiff.org community website. The database is purpose-built for the AD field and expands via submissions from AD researchers; it therefore provides a relatively noise-free list of publications relating to the field. It does, however, include noise in the form of variant spellings of journal and conference names. We handle this by manually correcting and merging these variants under the official names of the corresponding venues. We also share the raw data obtained after these corrections.
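
    Merging variant spellings under official venue names is essentially a normalisation-plus-lookup step. The Python sketch below shows one plausible way to automate part of it; the mapping entries and function names are hypothetical and are not taken from the paper, whose corrections were made manually.

        import re
        from collections import Counter

        # Hypothetical variant-to-official mapping; the paper's real list
        # was curated by hand against the autodiff.org database.
        CANONICAL = {
            "siam j. sci. comput": "SIAM Journal on Scientific Computing",
            "siam journal of scientific computing": "SIAM Journal on Scientific Computing",
            "acm trans. math. softw": "ACM Transactions on Mathematical Software",
        }

        def normalise(venue):
            """Collapse case, trailing dots, and runs of whitespace, then
            look up the official name; unknown venues pass through as-is."""
            key = re.sub(r"\s+", " ", venue.lower()).strip().rstrip(".")
            return CANONICAL.get(key, venue.strip())

        def venue_counts(records):
            """Count publications per merged venue name."""
            return Counter(normalise(r["venue"]) for r in records)

    With this in place, venue_counts([{"venue": "SIAM J. Sci. Comput."}, {"venue": "siam journal of scientific computing"}]) folds both variants into a single count.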

    Accelerating Monte Carlo simulations with an NVIDIA® graphics processor

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general-purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectories of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.
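
    The serial loop that a GPU implementation parallelises (one thread per photon) looks roughly like the Python sketch below: sample an exponential free path, attenuate the packet weight at each collision, and terminate low-weight packets with Russian roulette. It is a deliberately simplified, single-threaded stand-in with assumed parameter names, not the paper's CUDA kernel.

        import numpy as np

        rng = np.random.default_rng(0)

        def mean_path_length(mu_s=10.0, mu_a=1.0, n_photons=1000):
            """Serial Monte Carlo photon transport in an infinite turbid
            medium: each packet takes exponential free-path steps, loses
            weight to absorption at every collision, and is killed by
            Russian roulette once its weight becomes negligible."""
            mu_t = mu_s + mu_a                  # total interaction coefficient
            total = 0.0
            for _ in range(n_photons):
                weight, path = 1.0, 0.0
                while weight > 0.0:
                    path += -np.log(1.0 - rng.random()) / mu_t  # free path
                    weight *= mu_s / mu_t       # albedo: survive absorption
                    if weight < 1e-3:           # Russian roulette
                        if rng.random() < 0.1:
                            weight *= 10.0      # survivor carries extra weight
                        else:
                            weight = 0.0        # packet terminated
                total += path
            return total / n_photons

    Because every photon history is independent, the algorithm maps almost perfectly onto thousands of GPU threads, which is the source of the reported speed-up.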

    Secure and Robust Image Watermarking Scheme Using Homomorphic Transform, SVD and Arnold Transform in RDWT Domain

    The main objective of a watermarking technique is to attain imperceptibility, robustness, and security against the various malicious attacks mounted by illicit users. Fulfilling all of these basic requirements in a single scheme is a major concern. In this paper, a new image watermarking method is therefore proposed which combines the properties of the homomorphic transform, the Redundant Discrete Wavelet Transform (RDWT), the Arnold Transform (AT), and Singular Value Decomposition (SVD) to attain these required properties. RDWT is performed on the host image to obtain the LL subband. This LL subband is further decomposed into illumination and reflectance components by the homomorphic transform. To strengthen the security of the proposed scheme, AT is used to scramble the watermark. The scrambled watermark is embedded into the Singular Values (SVs) of the reflectance component, which are obtained by applying SVD to it. Since the reflectance component contains the important features of the image, embedding the watermark in this part provides excellent imperceptibility. The proposed scheme is comprehensively examined for robustness against different attacks such as scaling and shearing. A comparative study with other prevailing algorithms clearly shows the superiority of the proposed scheme in terms of robustness and imperceptibility.
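
    The scrambling and embedding steps can be sketched compactly. The Python below shows the Arnold cat map and a classic SVD-based embedding of the scrambled mark into the singular values of a square host band; it is a simplified stand-in that omits the RDWT and homomorphic decomposition stages, and the function names and strength parameter alpha are assumptions, not the paper's notation.

        import numpy as np

        def arnold(img, iterations=1):
            """Arnold cat-map scrambling of a square N x N watermark:
            pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
            n = img.shape[0]
            out = img.copy()
            for _ in range(iterations):
                scrambled = np.empty_like(out)
                for x in range(n):
                    for y in range(n):
                        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
                out = scrambled
            return out

        def embed(host_band, watermark, alpha=0.05):
            """Classic SVD embedding: perturb the singular values of the
            host band (standing in for the reflectance component) with the
            scrambled watermark, then rebuild the band. Both inputs must be
            square and of equal size here, for simplicity."""
            u, s, vt = np.linalg.svd(host_band)
            d = np.diag(s) + alpha * arnold(watermark)  # mark the SVs
            _, s_marked, _ = np.linalg.svd(d)
            return u @ np.diag(s_marked) @ vt           # watermarked band

    Embedding into singular values is what gives this family of schemes its robustness: SVs change little under common geometric and signal-processing attacks.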

    A progressive refinement approach for the visualisation of implicit surfaces

    Visualising implicit surfaces with the ray casting method is a slow procedure. The design cycle of a new implicit surface is, therefore, fraught with long latency times, as a user must wait for the surface to be rendered before deciding what changes to introduce in the next iteration. In this paper, we present an attempt at reducing the design cycle of an implicit surface modeler by introducing a progressive refinement rendering approach to the visualisation of implicit surfaces. This progressive refinement renderer provides a quick previewing facility: it first displays a low-quality estimate of what the final rendering is going to be and, as the computation progresses, increases the quality of this estimate at a steady rate. The progressive refinement algorithm is based on the adaptive subdivision of the viewing frustum into smaller cells. An estimate of the variation of the implicit function inside each cell is obtained with an affine arithmetic range estimation technique. Overall, we show that our progressive refinement approach not only provides the user with visual feedback as the rendering advances but is also capable of completing the image faster than a conventional implicit surface rendering algorithm based on ray casting.
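
    The cell-classification step at the heart of such a renderer can be illustrated with conservative range estimation. The Python sketch below uses plain interval arithmetic, a looser stand-in for the affine arithmetic the paper employs, on a sphere implicit, and subdivides only the cells whose range straddles zero; all names and the subdivision depth are hypothetical.

        def sphere_range(xlo, xhi, ylo, yhi, zlo, zhi, r=1.0):
            """Conservative bound of f = x^2 + y^2 + z^2 - r^2 over a cell."""
            def sq(lo, hi):
                # Range of t^2 over [lo, hi]: the minimum is 0 when the
                # interval straddles zero, otherwise it sits at an endpoint.
                lo2, hi2 = lo * lo, hi * hi
                return (0.0 if lo <= 0.0 <= hi else min(lo2, hi2)), max(lo2, hi2)
            los, his = zip(sq(xlo, xhi), sq(ylo, yhi), sq(zlo, zhi))
            return sum(los) - r * r, sum(his) - r * r

        def refine(cell, depth=0, max_depth=6):
            """Recursively subdivide only the cells whose conservative range
            straddles zero, i.e. the cells that may contain the surface."""
            lo, hi = sphere_range(*cell)
            if lo > 0.0 or hi < 0.0:
                return []                   # cell provably misses the surface
            if depth == max_depth:
                return [cell]               # leaf cell: refine no further
            xlo, xhi, ylo, yhi, zlo, zhi = cell
            xm, ym, zm = (xlo + xhi) / 2, (ylo + yhi) / 2, (zlo + zhi) / 2
            leaves = []
            for xa, xb in ((xlo, xm), (xm, xhi)):
                for ya, yb in ((ylo, ym), (ym, yhi)):
                    for za, zb in ((zlo, zm), (zm, zhi)):
                        leaves += refine((xa, xb, ya, yb, za, zb),
                                         depth + 1, max_depth)
            return leaves

    Affine arithmetic tracks correlations between the input intervals and so yields much tighter bounds than the interval version shown here, which is why fewer cells survive subdivision and the preview converges faster.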