
    Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. This paper demonstrates how one can take advantage of the graphics processing units (GPUs) available in today's typical desktop computer, together with a multiscale approximation method, to significantly speed up such computations. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is implemented on an ATI Radeon 4870 GPU in combination with the hierarchical charge partitioning (HCP) multiscale approximation. This implementation delivers a combined 1800-fold speedup for a 476,040-atom viral capsid.
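    The core idea of a multiscale approximation like HCP can be sketched as follows: distant charge groups are replaced by a single effective charge at their centroid, so far-field work scales with the number of groups rather than the number of atoms. This is a minimal illustrative sketch, assuming plain Coulomb potentials in place of the paper's ALPB screening model; the cutoff and data layout are assumptions, not the paper's implementation.

    ```python
    import math

    def group_summary(group):
        """Total charge and centroid of a list of (x, y, z, q) atoms."""
        q_tot = sum(a[3] for a in group)
        n = len(group)
        cx = sum(a[0] for a in group) / n
        cy = sum(a[1] for a in group) / n
        cz = sum(a[2] for a in group) / n
        return (cx, cy, cz, q_tot)

    def potential(point, groups, cutoff=10.0):
        """Electrostatic potential at a surface point (arbitrary units).

        Groups farther than `cutoff` contribute as one effective charge
        at their centroid; near groups get an exact per-atom sum.
        """
        v = 0.0
        for group in groups:
            cx, cy, cz, q_tot = group_summary(group)
            d_group = math.dist(point, (cx, cy, cz))
            if d_group > cutoff:
                # Far group: one effective charge at the centroid.
                v += q_tot / d_group
            else:
                # Near group: exact per-atom Coulomb sum.
                for x, y, z, q in group:
                    v += q / math.dist(point, (x, y, z))
        return v
    ```

    On a GPU, each surface point's loop over groups would run in its own thread, which is where the parallel speedup comes from.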

    Comparison between Famous Game Engines and Eminent Games

    Game engines are now indispensable for building 3D applications and games, because they substantially reduce the resources needed to implement essential but intricate functionality. This paper describes what a game engine is, surveys popular games developed with well-known engines, and examines the engines' principal components. It characterizes several kinds of contemporary engine-built games in terms of their features and development process, and compares their requirements.

    Real-time adaptive sensing of nuclear spins by a single-spin quantum sensor

    Quantum sensing is considered one of the most promising subfields of quantum information for delivering practical quantum advantages in real-world applications. However, its impressive capabilities, including high sensitivity, are often hindered by the limited quantum resources available. Here, we incorporate the expected information gain (EIG) and techniques such as accelerated computation into Bayesian experimental design (BED) in order to use quantum resources more efficiently. A simulated nitrogen-vacancy center in diamond is used to demonstrate real-time operation of the BED. Instead of heuristics, the EIG is used to choose optimal control parameters in real time. Moreover, by combining the BED with accelerated computation and asynchronous operations, we find that up to a tenfold speed-up in absolute time cost can be achieved in sensing multiple surrounding 13C nuclear spins. Our work explores the possibilities of applying the EIG to BED-based quantum-sensing tasks and provides techniques useful for integrating BED into more general quantum sensing systems.
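    The EIG-driven choice of control parameters can be sketched with a toy model: for each candidate setting, compute the expected reduction in entropy over a discrete prior, then pick the setting that maximizes it. The Ramsey-style outcome model p(1 | f, t) = (1 + cos(2πft)) / 2 and the candidate grids below are illustrative assumptions, not the paper's NV-center model.

    ```python
    import math

    def entropy(p):
        """Binary entropy in bits; H(0) = H(1) = 0."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def expected_info_gain(prior, freqs, t):
        """EIG of an experiment with evolution time t.

        prior: weights over candidate frequencies (sums to 1).
        EIG = H(marginal outcome) - E_prior[H(outcome | f)].
        """
        p1s = [(1 + math.cos(2 * math.pi * f * t)) / 2 for f in freqs]
        p1_marginal = sum(w * p for w, p in zip(prior, p1s))
        h_marginal = entropy(p1_marginal)
        h_conditional = sum(w * entropy(p) for w, p in zip(prior, p1s))
        return h_marginal - h_conditional

    def best_time(prior, freqs, times):
        """Pick the candidate evolution time with maximal EIG."""
        return max(times, key=lambda t: expected_info_gain(prior, freqs, t))
    ```

    In a real-time loop, each measurement outcome would update the prior by Bayes' rule before the next call to `best_time`; the paper's accelerated and asynchronous computation is what makes that loop fast enough to run between shots.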

    SOFI: A 3D simulator for the generation of underwater optical images

    We present an original simulator, called SOFI, for the synthetic generation of underwater optical images. The simulator architecture is flexible and relies on flow diagrams to allow the integration of various image-generation models based on underwater optical phenomena. To achieve real-time or near-real-time performance, it takes advantage of the latest technologies, such as GPGPU, and relies on GPU programming under CUDA. Two kinds of image-generation models are presented and are to be integrated in SOFI: (1) the OSOA model, based on radiative transfer theory, and (2) a global image model that describes how an image as a whole is degraded by the effects of sea water.
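    The "global image modeling" idea can be sketched with a standard simplified underwater image-formation model: each pixel's observed intensity combines the scene radiance, attenuated exponentially along the water path, with veiling backscatter light. The coefficients and API below are illustrative assumptions, not SOFI's actual models.

    ```python
    import math

    def degrade_pixel(radiance, distance, c=0.1, background=0.2):
        """Simplified per-pixel underwater image-formation model.

        I = J * exp(-c*d) + B * (1 - exp(-c*d))
        J: scene radiance in [0, 1], d: water path length,
        c: attenuation coefficient, B: backscatter (veiling) light.
        """
        t = math.exp(-c * distance)  # transmission along the path
        return radiance * t + background * (1 - t)

    def degrade_image(image, depth_map, c=0.1, background=0.2):
        """Apply the model to a 2-D list of radiances and a matching depth map."""
        return [[degrade_pixel(j, d, c, background)
                 for j, d in zip(row_j, row_d)]
                for row_j, row_d in zip(image, depth_map)]
    ```

    Because every pixel is independent, this is exactly the kind of computation that maps one-thread-per-pixel onto a CUDA kernel, which is why the simulator targets GPGPU for real-time performance.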

    Data Science and Ebola

    Data Science: Today, everybody and everything produces data. People produce large amounts of data in social networks and in commercial transactions. Medical, corporate, and government databases continue to grow. Sensors continue to get cheaper and are increasingly connected, creating an Internet of Things and generating even more data. In every discipline, large, diverse, and rich data sets are emerging: from astrophysics to the life sciences, the behavioral sciences, finance and commerce, the humanities, and the arts. In every discipline, people want to organize, analyze, optimize, and understand their data to answer questions and to deepen insights. The science that is transforming this ocean of data into a sea of knowledge is called data science. This lecture discusses how data science has changed the way one of the most visible challenges to public health was handled: the 2014 Ebola outbreak in West Africa. (Inaugural lecture, Leiden University.)

    Performance of distributed multiscale simulations

    Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the clear benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption.

    Markov chain Monte Carlo on the GPU

    Markov chains are a useful tool in statistics that allow us to sample from and model a large population of individuals. We can extend this idea to the challenge of sampling solutions to problems. Using Markov chain Monte Carlo (MCMC) techniques, we can also attempt to approximate the number of solutions, with a confidence that depends on the number of samples used to compute the estimate. Even though this approximation works very well for getting accurate results for very large problems, it is still computationally intensive. Many current algorithms use parallel implementations to improve their performance. Modern graphics processing units (GPUs) have been increasing in computational power very rapidly over the past few years. Due to their inherently parallel nature and increased flexibility for general-purpose computation, they lend themselves very well to building a framework for general-purpose Markov chain simulation and evaluation. In addition, the majority of mid- to high-range workstations have graphics cards capable of supporting modern general-purpose GPU (GPGPU) frameworks such as OpenCL, CUDA, or DirectCompute. This thesis presents work done to create a general-purpose framework for Markov chain simulations and Markov chain Monte Carlo techniques on the GPU using the OpenCL toolkit. OpenCL is a GPGPU framework that is platform- and hardware-independent, which further increases the accessibility of the software. Due to the increasing power, flexibility, and prevalence of GPUs, a wider range of developers and researchers will be able to take advantage of a high-performing general-purpose framework in their research. A number of experiments are also conducted to demonstrate the benefits and feasibility of using the power of the GPU to solve Markov chain Monte Carlo problems.
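    The parallelization strategy such a framework exploits can be sketched on the CPU: many independent Markov chains run side by side (one per GPU work-item in an OpenCL version), and their pooled samples form the Monte Carlo estimate. This is a minimal sketch, assuming a Metropolis sampler targeting a standard normal and estimating E[X^2] = 1; the chain counts, step size, and target distribution are illustrative, not the thesis's framework.

    ```python
    import math
    import random

    def metropolis_chain(steps, step_size=1.0, rng=random):
        """One Metropolis chain targeting density proportional to exp(-x^2/2)."""
        x = 0.0
        samples = []
        for _ in range(steps):
            proposal = x + rng.uniform(-step_size, step_size)
            # Accept with probability min(1, target(proposal) / target(x)).
            log_ratio = (x * x - proposal * proposal) / 2
            if math.log(rng.random()) < log_ratio:
                x = proposal
            samples.append(x)
        return samples

    def parallel_estimate(n_chains=32, steps=1000, seed=0):
        """Pool samples from many chains, as a GPU kernel would per work-item."""
        rng = random.Random(seed)
        total, count = 0.0, 0
        for _ in range(n_chains):
            for x in metropolis_chain(steps, rng=rng):
                total += x * x
                count += 1
        return total / count  # Monte Carlo estimate of E[X^2] = 1
    ```

    Since the chains are independent, the outer loop is embarrassingly parallel; in OpenCL each work-item would run its own chain with a per-item random-number stream, and only the accumulation step needs a reduction.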