
    Hydrodynamics of Suspensions of Passive and Active Rigid Particles: A Rigid Multiblob Approach

    We develop a rigid multiblob method for numerically solving the mobility problem for suspensions of passive and active rigid particles of complex shape in Stokes flow in unconfined, partially confined, and fully confined geometries. As in a number of existing methods, we discretize rigid bodies using a collection of minimally resolved spherical blobs constrained to move as a rigid body, arriving at a potentially large linear system of equations for the unknown Lagrange multipliers and rigid-body motions. Here we develop a block-diagonal preconditioner for this linear system and show that a standard Krylov solver converges in a modest number of iterations that is essentially independent of the number of particles. For unbounded suspensions and suspensions sedimented against a single no-slip boundary, we rely on existing analytical expressions for the Rotne-Prager tensor, combined with a fast multipole method or direct summation on a Graphics Processing Unit, to obtain a simple yet efficient and scalable implementation. For fully confined domains, such as periodic suspensions or suspensions confined in slit and square channels, we extend a recently developed rigid-body immersed boundary method to suspensions of freely moving passive or active rigid particles at zero Reynolds number. We demonstrate that the iterative solver for the coupled fluid and rigid-body equations converges in a bounded number of iterations regardless of the system size. We optimize a number of parameters in the iterative solvers and apply our method to a variety of benchmark problems to carefully assess the accuracy of the rigid multiblob approach as a function of the resolution. We also model the dynamics of colloidal particles studied in recent experiments, such as passive boomerangs in a slit channel, as well as a pair of non-Brownian active nanorods sedimented against a wall.
    Comment: Under revision in CAMCOS, Nov 201
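    The structure of the preconditioned solve is easy to convey in a few dozen lines. Below is a minimal sketch (my own toy setup, not the authors' code) for two passive rigid rods of blobs sedimenting in unbounded Stokes flow: the far-field Rotne-Prager mobility couples the blobs, a geometric matrix K encodes the rigid-body constraint, and the saddle-point system for the Lagrange multipliers and rigid-body motions is solved with GMRES preconditioned by exact solves of each body's own diagonal block. All names and parameter values are illustrative.

```python
# Hypothetical toy illustrating a block-diagonally preconditioned Krylov
# solve of the rigid multiblob mobility problem (not the authors' code).
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

eta, a = 1.0, 0.1                                  # fluid viscosity, blob radius

def rpy(xs):
    """Dense Rotne-Prager mobility for equal, non-overlapping blobs (r > 2a)."""
    n = len(xs)
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / (6 * np.pi * eta * a)
        for j in range(n):
            if i == j:
                continue
            d = xs[i] - xs[j]
            r = np.linalg.norm(d)
            rh = np.outer(d, d) / r**2
            M[3*i:3*i+3, 3*j:3*j+3] = ((1 + 2*a*a/(3*r*r)) * np.eye(3)
                                       + (1 - 2*a*a/(r*r)) * rh) / (8*np.pi*eta*r)
    return M

def kinematic(xs, x0):
    """K maps rigid-body (U, omega) to blob velocities u_i = U + omega x r_i."""
    K = np.zeros((3 * len(xs), 6))
    for i, x in enumerate(xs):
        r = x - x0
        K[3*i:3*i+3, :3] = np.eye(3)
        K[3*i:3*i+3, 3:] = -np.array([[0., -r[2], r[1]],
                                      [r[2], 0., -r[0]],
                                      [-r[1], r[0], 0.]])
    return K

# two 5-blob rods, one above the other (blob spacing 0.25 > 2a, no overlaps)
bodies = [np.array([[k * 0.25, 0., z] for k in range(5)]) for z in (0., 1.)]
xs = np.vstack(bodies)
nb = len(xs)
M = rpy(xs)
K = np.zeros((3 * nb, 12))
for b, pts in enumerate(bodies):
    K[15*b:15*(b+1), 6*b:6*b+6] = kinematic(pts, pts.mean(axis=0))

# saddle-point system  [M, -K; -K^T, 0] [lambda; U] = [u_slip; -F]
A = np.block([[M, -K], [-K.T, np.zeros((12, 12))]])
F = np.zeros(12)
F[2] = F[8] = -1.0                                 # unit gravity on each body
rhs = np.concatenate([np.zeros(3 * nb), -F])       # passive particles: no slip

# block-diagonal preconditioner: exact solve of each body's own saddle system,
# dropping the hydrodynamic coupling between bodies
blocks = []
for pts in bodies:
    Mb, Kb = rpy(pts), kinematic(pts, pts.mean(axis=0))
    blocks.append(np.linalg.inv(np.block([[Mb, -Kb], [-Kb.T, np.zeros((6, 6))]])))

def apply_prec(v):
    out = np.zeros_like(v)
    for b, Pinv in enumerate(blocks):
        idx = np.r_[15*b:15*(b+1), 3*nb + 6*b:3*nb + 6*(b+1)]
        out[idx] = Pinv @ v[idx]
    return out

P = LinearOperator(A.shape, matvec=apply_prec)
sol, info = gmres(A, rhs, M=P)
print("converged:", info == 0, "| rigid-body velocities:", sol[3 * nb:])
```

    Because the preconditioner drops only the inter-body coupling, the iteration count in this kind of scheme is governed by how strongly the bodies interact rather than by how many there are, which is consistent with the particle-number-independent convergence the abstract reports.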

    Joint Reconstruction of Absorbed Optical Energy Density and Sound Speed Distribution in Photoacoustic Computed Tomography: A Numerical Investigation

    Photoacoustic computed tomography (PACT) is a rapidly emerging bioimaging modality that seeks to reconstruct an estimate of the absorbed optical energy density within an object. Conventional PACT image reconstruction methods assume a constant speed of sound (SOS), which can result in image artifacts when acoustic aberrations are significant. It has been demonstrated that incorporating knowledge of an object's SOS distribution into a PACT image reconstruction method can improve image quality. However, in many cases, the SOS distribution cannot be accurately and/or conveniently estimated prior to the PACT experiment. Because variations in the SOS distribution induce aberrations in the measured photoacoustic wavefields, certain information regarding an object's SOS distribution is encoded in the PACT measurement data. Based on this observation, a joint reconstruction (JR) problem has been proposed in which the SOS distribution is estimated concurrently with the sought-after absorbed optical energy density from the photoacoustic measurement data. A broad understanding of the extent to which the JR problem can be accurately and reliably solved has not been reported. In this work, a series of numerical experiments is described that elucidates some important properties of the JR problem that pertain to its practical feasibility. To accomplish this, an optimization-based formulation of the JR problem is developed that yields a non-linear iterative algorithm that alternately updates the two image estimates. Heuristic analytic insights into the reconstruction problem are also provided. These results confirm the ill-conditioned nature of the joint reconstruction problem, which will present significant challenges for practical applications.
    Comment: 13 pages, submitted to IEEE Transactions on Computational Imaging
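    To make the alternating structure concrete, one hedged formulation (my notation, not necessarily the authors' exact one) writes the measured data as $d$, the absorbed optical energy density as $p$, the SOS map as $c$, and the discretized wave-propagation operator as $H(c)$, which is linear in $p$ but nonlinear in $c$; $R_p$ and $R_c$ are regularizers with weights $\beta_p$, $\beta_c$:

\[
(\hat{p}, \hat{c}) \in \operatorname*{arg\,min}_{p \ge 0,\; c} \; \tfrac{1}{2}\,\lVert d - H(c)\,p \rVert_2^2 \;+\; \beta_p R_p(p) \;+\; \beta_c R_c(c).
\]

    An alternating scheme of the kind described above then updates each estimate with the other held fixed,

\[
p^{k+1} = \operatorname*{arg\,min}_{p \ge 0} \tfrac{1}{2}\,\lVert d - H(c^k)\,p \rVert_2^2 + \beta_p R_p(p),
\qquad
c^{k+1} = \operatorname*{arg\,min}_{c} \tfrac{1}{2}\,\lVert d - H(c)\,p^{k+1} \rVert_2^2 + \beta_c R_c(c),
\]

    where the $p$-update is a regularized linear least-squares problem while the $c$-update is nonconvex; the coupling between the two subproblems is one way to see the ill-conditioning the abstract reports.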

    Tackling the Curse of Dimensionality with Physics-Informed Neural Networks

    The curse of dimensionality (CoD) taxes computational resources heavily, with cost growing exponentially as the dimension increases. This poses great challenges in solving high-dimensional PDEs, as Richard Bellman first pointed out over 60 years ago. While there has been some recent success in numerically solving partial differential equations (PDEs) in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. In this paper, we develop a new method for scaling up physics-informed neural networks (PINNs) to solve arbitrary high-dimensional PDEs. The new method, called Stochastic Dimension Gradient Descent (SDGD), decomposes the gradient of the PDE loss into pieces corresponding to different dimensions and randomly samples a subset of these dimensional pieces at each iteration of PINN training. We theoretically prove the convergence guarantee and other desired properties of the proposed method. We experimentally demonstrate that the proposed method allows us to solve many notoriously hard high-dimensional PDEs, including the Hamilton-Jacobi-Bellman (HJB) and Schrödinger equations in thousands of dimensions, very rapidly on a single GPU using the mesh-free PINN approach. For example, we solve nontrivial nonlinear PDEs (the HJB-Lin equation and the BSB equation) in 100,000 dimensions in 6 hours on a single GPU using SDGD with PINNs. Since SDGD is a general training methodology for PINNs, it can be applied to any current or future variant of PINNs to scale them up for arbitrary high-dimensional PDEs.
    Comment: 32 pages, 5 figures
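    The flavor of the dimension sampling is easy to sketch. The following hedged toy (my simplification; the paper decomposes the gradient itself and includes details such as variance reduction that are omitted here) trains a PINN on the Poisson-type equation Δu = 2d, whose manufactured solution is u*(x) = Σ_i x_i², estimating the d-term Laplacian from a random subset of dimensions each step so that per-step cost scales with the subset size rather than with d. Boundary conditions are ignored for brevity.

```python
# Hedged toy of SDGD-style dimension sampling for a PINN (my simplification,
# not the paper's code): only a random subset of the d second-derivative
# terms in the Laplacian is evaluated at each training step.
import torch

d, batch, m = 1000, 64, 16           # dimension, collocation batch, sampled |S|
net = torch.nn.Sequential(torch.nn.Linear(d, 128), torch.nn.Tanh(),
                          torch.nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sampled_laplacian(x, dims):
    """Estimate Laplacian(u)(x) from a subset of dimensions, rescaled by d/|S|."""
    u = net(x)
    g = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = sum(torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0][:, i]
              for i in dims)
    return (d / len(dims)) * lap

for step in range(200):
    x = torch.rand(batch, d, requires_grad=True)   # interior collocation points
    dims = torch.randperm(d)[:m].tolist()          # sampled dimensional pieces
    residual = sampled_laplacian(x, dims) - 2.0 * d
    loss = residual.pow(2).mean()                  # PINN residual loss
    opt.zero_grad(); loss.backward(); opt.step()
```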

    A Framework for Megascale Agent Based Model Simulations on Graphics Processing Units

    Agent-based modeling is a technique for modeling dynamic systems from the bottom up. Individual elements of the system are represented computationally as agents, and system-level behaviors emerge from the micro-level interactions of those agents. Contemporary state-of-the-art agent-based modeling toolkits are essentially discrete-event simulators designed to execute serially on the Central Processing Unit (CPU): they simulate Agent-Based Models (ABMs) by executing agent actions one at a time. In addition to imposing an unnatural execution order, these toolkits have limited scalability. In this article, we investigate data-parallel computer architectures such as Graphics Processing Units (GPUs) for simulating large-scale ABMs. We have developed a series of efficient, data-parallel algorithms for handling environment updates, various agent interactions, agent death and replication, and gathering statistics. We present three fundamental innovations that provide unprecedented scalability. The first is a novel stochastic memory allocator which enables parallel agent replication in O(1) average time. The second is a technique for resolving precedence constraints for agent actions in parallel. The third is a method that uses specialized graphics hardware to gather and process statistical measures. These techniques have been implemented on a modern-day GPU, resulting in a substantial performance increase. We believe that our system is the first completely GPU-based agent simulation framework. Although GPUs are the focus of our current implementations, our techniques can easily be adapted to other data-parallel architectures. We have benchmarked our framework against contemporary toolkits using two popular ABMs, namely SugarScape and StupidModel.
    Keywords: GPGPU, Agent Based Modeling, Data Parallel Algorithms, Stochastic Simulations
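    The parallel-replication idea can be sketched on the CPU with numpy (my reconstruction of the collision-resolution pattern, not the authors' GPU kernels): all replicating agents claim random free slots simultaneously, ties are broken deterministically in place of an atomic compare-and-swap, and losing agents simply redraw. With a bounded load factor, the expected number of retries per agent is constant, which is the O(1) average-time behavior claimed above.

```python
# Hypothetical CPU-side reconstruction of parallel stochastic slot allocation
# for agent replication (not the authors' GPU kernels).
import numpy as np

rng = np.random.default_rng(0)
capacity = 32
alive = np.zeros(capacity, dtype=bool)
alive[:10] = True

# agents that decided to replicate this tick
pending = np.flatnonzero(alive & (rng.random(capacity) < 0.5))

while pending.size:
    free = np.flatnonzero(~alive)
    if free.size == 0:                         # population at capacity: stop
        break
    picks = free[rng.integers(0, free.size, size=pending.size)]
    winner = np.full(capacity, capacity, dtype=np.int64)
    np.minimum.at(winner, picks, pending)      # lowest parent index wins a slot,
                                               # standing in for an atomic CAS
    won = winner[picks] == pending
    alive[picks[won]] = True                   # children occupy claimed slots
    pending = pending[~won]                    # losers redraw next round
```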