7,004 research outputs found

    Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in multiscale simulations and exploring material properties drive the continual development of new simulation methods and optimization algorithms. In computational terms, these methods require parallelization schemes that make productive use of computational resources for each simulation from the outset. Here, we introduce the heterogeneous domain decomposition approach, a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical models and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load balancing schemes. Specifically, two representative molecular systems have been simulated: an adaptive resolution simulation of a biomolecule solvated in water, and a phase-separated binary Lennard-Jones fluid. (14 pages, 12 figures)
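The a priori wall rearrangement can be illustrated with a small sketch: given an estimated force-computation cost per cell along one axis, subdomain walls are placed so that each subdomain carries roughly equal cost. This is a hypothetical illustration (the function name `place_walls` and the cost profile are assumptions, not the paper's implementation):

```python
import numpy as np

def place_walls(cost_density, n_domains):
    """Place subdomain walls so each subdomain carries ~equal cost.

    cost_density: per-cell force-computation cost estimate along one
    axis (assumed input; a heterogeneity-sensitive scheme would derive
    it from local resolution and particle density).
    """
    cum = np.cumsum(cost_density, dtype=float)
    total = cum[-1]
    # target cumulative cost at each of the n_domains - 1 walls
    targets = total * np.arange(1, n_domains) / n_domains
    # wall index = first cell whose cumulative cost reaches the target
    return np.searchsorted(cum, targets)

# example: a dense (e.g. high-resolution) region occupies the first half
cost = np.r_[np.full(50, 4.0), np.full(50, 1.0)]
walls = place_walls(cost, 4)
```

With this cost profile, the walls crowd into the expensive first half of the domain, which is exactly the behaviour a static, uniform decomposition lacks.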

    Molecular dynamics modelling of nanoindentation

    This thesis presents an atomic-scale study of nanoindentation, with carbon materials and both bcc and fcc metals as test specimens. Classical molecular dynamics (MD) simulations, using Newtonian mechanics and many-body potentials, are employed to investigate the elastic-plastic deformation behaviour of the work materials during nanometre-sized indentations. In a preliminary model, the indenter is represented solely by a non-deformable interface with pyramidal and axisymmetric geometries. An atomistic description of a blunted 90° pyramidal indenter is also used to study deformation of the tip, adhesive tip-substrate interactions and atom transfer, together with damage after adhesive rupture and mechanisms of tip-induced structural transformations and surface nanotopography. To alleviate finite-size effects and to enable simulations of over one million atoms, a parallel MD code using the MPI paradigm has also been developed to run on multiprocessor machines. The work materials show a diverse range of deformation behaviour, from purely elastic deformation in graphite to appreciable plastic deformation in metals. Some qualitative comparisons are made to experiment, but available computer power constrains feasible indentation depths to an order of magnitude smaller than experiment, and indentation times to several orders of magnitude smaller. The simulations give a good description of nanoindentation and support many of the experimental findings.
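The preliminary rigid-indenter model lends itself to a short sketch: substrate atoms inside the tip radius feel a purely repulsive force pushing them radially outward. The quadratic force law F = K(R − r)² used here is an assumption (a common choice for rigid indenters), not necessarily the thesis's potential:

```python
import numpy as np

def indenter_forces(pos, tip_center, tip_radius, K=10.0):
    """Repulsive forces from a rigid spherical indenter (assumed
    quadratic law F = K * (R - r)**2 inside the tip; units arbitrary)."""
    d = pos - tip_center                       # tip-centre -> atom vectors
    r = np.linalg.norm(d, axis=1)
    overlap = np.maximum(tip_radius - r, 0.0)  # penetration depth
    mag = K * overlap**2                       # repulsive magnitude
    with np.errstate(invalid="ignore", divide="ignore"):
        unit = np.where(r[:, None] > 0.0, d / r[:, None], 0.0)
    return mag[:, None] * unit                 # push atoms radially outward

# first atom sits 1 unit inside a tip of radius 2 centred above it;
# second atom is outside the tip and feels no force
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -3.0]])
f = indenter_forces(pos, np.array([0.0, 0.0, 1.0]), 2.0)
```

In a full MD run these forces would be added to the interatomic many-body forces at every timestep while the tip centre is lowered toward the surface.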

    Spatial reaction systems on parallel supercomputers


    TweTriS: Twenty trillion-atom simulation

    Significant improvements are presented for the molecular dynamics code ls1 mardyn, a linked-cell-based code for simulating a large number of small, rigid molecules with application areas in chemical engineering. The changes consist of a redesign of the SIMD vectorization via wrappers, MPI improvements, and a software redesign to allow memory-efficient execution with the production trunk to increase portability and extensibility. Two novel, memory-efficient OpenMP schemes for the linked-cell-based force calculation are presented, which are able to retain Newton's third law optimization. Comparisons to well-optimized Verlet-list-based codes, such as LAMMPS and GROMACS, demonstrate the viability of the linked-cell-based approach. The present version of ls1 mardyn is used to run simulations on entire supercomputers, maximizing the number of sampled atoms. Compared to the preceding version of ls1 mardyn on the entire set of 9216 nodes of SuperMUC, Phase 1, 27% more atoms are simulated. Weak scaling performance is increased by up to 40% and strong scaling performance by more than 220%. On Hazel Hen, strong scaling efficiency of up to 81% and 189 billion molecule updates per second are attained when scaling from 8 to 7168 nodes. Moreover, a total of 20 trillion atoms is simulated at up to 88% weak scaling efficiency, running at up to 1.33 PFLOPS. This represents a fivefold increase in the number of atoms simulated to date.
    Funding: BMBF, 01IH16008, joint project TaLPas (Task-basierte Lastverteilung und Auto-Tuning in der Partikelsimulation)
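The linked-cell traversal with Newton's third law can be sketched as follows: only a forward half-stencil of neighbour cells is scanned, so each pair is visited exactly once and a computed force could be applied to both partners. A simplified 2D Python sketch (illustrative only; the ls1 mardyn implementation is vectorized C++ with OpenMP):

```python
import numpy as np
from collections import defaultdict

def linked_cell_pairs(pos, box, rc):
    """Return each interacting pair (i, j) exactly once using linked
    cells and a forward half-stencil (Newton's third law traversal)."""
    ncell = int(box // rc)
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[(int(p[0] // rc) % ncell, int(p[1] // rc) % ncell)].append(i)
    # self cell plus 4 of the 8 neighbours: every cell pair scanned once
    stencil = [(0, 0), (1, 0), (0, 1), (1, 1), (1, -1)]
    pairs = []
    for (cx, cy), members in cells.items():
        for dx, dy in stencil:
            other = cells.get(((cx + dx) % ncell, (cy + dy) % ncell), [])
            for i in members:
                for j in other:
                    if (dx, dy) == (0, 0) and j <= i:
                        continue  # avoid double-counting within a cell
                    d = pos[j] - pos[i]
                    d -= box * np.round(d / box)  # minimum image
                    if d @ d < rc * rc:
                        pairs.append((i, j))
    return pairs
```

Because every pair appears once, a force evaluated for (i, j) can be accumulated onto both atoms, which is the optimization the two OpenMP schemes must preserve under concurrent writes.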

    Lattice Vibration Study Of Silica Nanoparticle In Suspension

    In recent years considerable research has been done in the area of nanofluids. Nanofluids are colloidal suspensions of nanometre-sized metallic or oxide particles in a base fluid such as water or ethylene glycol. Nanofluids show enhanced heat transfer characteristics compared to the base fluid. The thermal transport properties of nanofluids depend on various parameters, e.g. interfacial resistance, Brownian motion of particles, liquid layering at the solid-liquid interface, and clustering of nanoparticles. In this work atomic-scale simulation has been used to study possible mechanisms affecting the heat transfer characteristics of nanofluids. A molecular dynamics simulation of a single silica nanoparticle surrounded by water molecules has been performed, with periodic boundary conditions applied in all three directions. The effects of nanoparticle size and system temperature on the thermal conductivity of nanofluids have been studied. It was found that as the size of the nanoparticle decreases, the thermal conductivity of the nanofluid increases. This is partially due to the fact that as the diameter of the nanoparticle decreases from micrometre to nanometre scale, its surface-area-to-volume ratio increases by a factor of 10³. Since heat transfer between the fluid and the nanoparticle takes place at the surface, this enhanced surface area gives higher thermal conductivity for smaller particles. Thermal conductivity enhancement is also due to the accumulation of water molecules near the particle surface and the lattice vibration of the nanoparticle. The phonon transfer through the second layer allows the nanofluid thermal conductivity to increase by 23%-27% compared to the base fluid water for 2% concentration of nanosilica.
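The surface-area-to-volume argument is easy to check numerically: for a sphere, S/V = 3/r, so shrinking the radius from a micrometre to a nanometre raises the ratio by a factor of 10³. A one-function sketch:

```python
def surface_to_volume(radius_m):
    """S/V for a sphere of given radius: (4*pi*r**2) / (4/3*pi*r**3) = 3/r."""
    return 3.0 / radius_m

ratio_micro = surface_to_volume(1e-6)  # 1 micrometre particle
ratio_nano = surface_to_volume(1e-9)   # 1 nanometre particle
print(ratio_nano / ratio_micro)        # a factor of ~10^3
```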

    DisPar Methods and Their Implementation on a Heterogeneous PC Cluster

    This thesis assesses two main areas of advection-diffusion simulation. The first part is dedicated to numerical studies. It has been proved that there is a direct relation between pollutant particle displacement moments and truncation errors. This relation laid the theoretical foundations for a new family of numerical methods, DisPar. Three methods have been introduced and appraised. The first is a 2D semi-Lagrangian method based on particle displacement moments for regular grids, DisPar-k. With this method one can explicitly control the desired truncation error. The second method is also based on particle displacement moments but is targeted at uniform, non-regular grids, DisParV. This method has also shown strong numerical robustness. Unlike DisPar-k and DisParV, the third method follows a Eulerian approach with three particle destination regions. 
The method was developed so that an initially homogeneous concentration profile is kept homogeneous independently of the parameters used. The comparison with DisPar-k in non-linear situations emphasized the strong shortcomings associated with numerical methods for advection-diffusion in real scenarios. The second part of the dissertation is dedicated to the implementation of these methods on a heterogeneous PC cluster. To do so, a new partitioning method, AORDA, has been developed. The application, Scalable DisPar, was implemented with the Microsoft .Net framework and was written entirely in C#. The application was tested on the Tagus Estuary, near Lisbon (Portugal). To overcome the load imbalances caused by tides, several partitioning schemes were implemented: scatter partitioning, dynamic load balancing, and a mix of both. The tests showed that the number of neighbouring machines was the main factor limiting the application's scalability, even with asynchronous communications. This was mainly caused by the communication tools used: Microsoft .Net remoting 1.0 does not appear to work properly under the concurrency created by asynchronous communications. This prevented conclusions from being drawn about the relative efficiency of the partitioning strategies used. However, it is strongly suggested that the best approach is scatter partitioning combined with dynamic load balancing and asynchronous communications. Scatter partitioning mitigates the load imbalances caused by tides, while dynamic load balancing is mainly triggered at the beginning of the simulation to correct possible errors in the processor power predictions.
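The moment-matching idea behind DisPar-type schemes can be sketched minimally: a pollutant parcel with velocity u and diffusivity D has, over a time step Δt, mean displacement uΔt and displacement variance 2DΔt; the parcel's mass is split over three destination cells so the discrete moments match. This is my reading of the abstract, with illustrative names; the thesis's exact formulation may differ:

```python
def split_mass(u, D, dt, dx):
    """Split a parcel's mass over cells k-1, k, k+1 so the discrete
    first and second displacement moments equal u*dt and 2*D*dt.
    (Illustrative three-cell version; the DisPar family also controls
    higher-order truncation errors.)"""
    mean = u * dt / dx           # mean displacement in cell widths
    var = 2.0 * D * dt / dx**2   # displacement variance in cell widths^2
    k = int(round(mean))         # central destination cell
    f = mean - k                 # fractional offset, |f| <= 1/2
    # solving  p_plus - p_minus = f  and  p_minus + p_plus - f**2 = var:
    p_plus = (var + f * f + f) / 2.0
    p_minus = (var + f * f - f) / 2.0
    p_mid = 1.0 - p_plus - p_minus
    # probabilities are nonnegative only inside a stability window
    # (var >= |f| * (1 - |f|)); a production scheme must handle this
    return k, (p_minus, p_mid, p_plus)
```

Because the split reproduces the first two displacement moments exactly, the leading truncation error of the resulting transport scheme is tied directly to the higher moments, which is the relation the thesis exploits.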

    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models that provides a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures affected by tumours, and bridging this gap requires substantial computational resource. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, modelling cell behaviour, interactions and response to therapeutic interventions.

    Shear-promoted drug encapsulation into red blood cells: a CFD model and μ-PIV analysis

    The present work focuses on the main parameters that influence shear-promoted encapsulation of drugs into erythrocytes. A CFD model was built to investigate the fluid dynamics of a suspension of particles flowing in a commercial microchannel. Micro Particle Image Velocimetry (μ-PIV) made it possible to account for the real properties of the red blood cell (RBC), giving a deeper understanding of the process. Coupling these results with an analytical diffusion model, suitable working conditions were defined for different values of haematocrit.
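A hedged sketch of what an analytical diffusion model for encapsulation might look like; this first-order membrane-transport form and all the numbers (permeability P, RBC surface area and volume) are illustrative assumptions, not the paper's model:

```python
import math

def uptake_fraction(P, A, V, t):
    """Fraction of the equilibrium drug load taken up after time t (s),
    assuming the intracellular concentration relaxes toward the
    extracellular one with rate P * A / V, where P is the membrane
    permeability (m/s), A the cell surface area and V its volume."""
    return 1.0 - math.exp(-P * A / V * t)

# hypothetical numbers: human RBC A ~ 1.4e-10 m^2, V ~ 9e-17 m^3,
# and an assumed shear-enhanced permeability of 1e-9 m/s
A, V = 1.4e-10, 9.0e-17
print(uptake_fraction(P=1e-9, A=A, V=V, t=60.0))
```

A model of this shape makes the role of exposure time and of shear (through its effect on P) explicit, which is the kind of coupling the abstract describes between the CFD results and the diffusion model.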

    Integrated Flywheel Technology, 1983

    Topics of discussion included: technology assessment of integrated flywheel systems, the potential of system concepts, identification of critical areas needing development, and the scoping and definition of an appropriate program for coordinated activity.