A fast immersed boundary method for external incompressible viscous flows using lattice Green's functions
A new parallel, computationally efficient immersed boundary method for
solving three-dimensional, viscous, incompressible flows on unbounded domains
is presented. Immersed surfaces with prescribed motions are generated using the
interpolation and regularization operators obtained from the discrete delta
function approach of the original (Peskin's) immersed boundary method. Unlike
Peskin's method, boundary forces are regarded as Lagrange multipliers that are
used to satisfy the no-slip condition. The incompressible Navier-Stokes
equations are discretized on an unbounded staggered Cartesian grid and are
solved in a finite number of operations using lattice Green's function
techniques. These techniques are used to automatically enforce the natural
free-space boundary conditions and to implement a novel block-wise adaptive
grid that significantly reduces the run-time cost of solutions by limiting
operations to grid cells in the immediate vicinity and near-wake region of the
immersed surface. These techniques also enable the construction of practical
discrete viscous integrating factors that are used in combination with
specialized half-explicit Runge-Kutta schemes to accurately and efficiently
solve the differential algebraic equations describing the discrete momentum
equation, incompressibility constraint, and no-slip constraint. Linear systems
of equations resulting from the time integration scheme are efficiently solved
using an approximation-free nested projection technique. The algebraic
properties of the discrete operators are used to reduce projection steps to
simple discrete elliptic problems, e.g. discrete Poisson problems, that are
compatible with recent parallel fast multipole methods for difference
equations. Numerical experiments on low-aspect-ratio flat plates and spheres at
Reynolds numbers up to 3,700 are used to verify the accuracy and physical
fidelity of the formulation.
Comment: 32 pages, 9 figures; preprint submitted to the Journal of Computational Physics
Generalized Filtering Decomposition
This paper introduces a new preconditioning technique that is suitable for
matrices arising from the discretization of a system of PDEs on unstructured
grids. The preconditioner satisfies a so-called filtering property, which
ensures that the input matrix is identical with the preconditioner on a given
filtering vector. This vector is chosen to alleviate the effect of low
frequency modes on convergence and so decrease or eliminate the plateau which
is often observed in the convergence of iterative methods. In particular, the
paper presents a general approach for ensuring that the filtering condition is
satisfied in a matrix decomposition. The input matrix can have an arbitrary
sparse structure; hence, it can be reordered using nested dissection, allowing
parallel computation of both the preconditioner and the iterative process.
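The filtering property — that the preconditioner M agrees with the input matrix A on a chosen filtering vector t, i.e. M t = A t — can be illustrated with a toy diagonal-compensation construction in the spirit of modified-ILU filtering. The matrix, the Gauss-Seidel-like starting approximation, and the compensation are illustrative assumptions, not the paper's general decomposition:

```python
import numpy as np

# A: 1-D Laplacian test matrix; t: filtering vector (the constant vector
# targets the low-frequency modes that cause convergence plateaus)
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
t = np.ones(n)

# Start from a rough approximation M0 of A (here: its lower-triangular part)
M0 = np.tril(A)

# Enforce the filtering condition M t = A t via a diagonal compensation:
# choose d so that (M0 + diag(d)) t = A t
d = (A - M0) @ t / t
M = M0 + np.diag(d)

print(np.allclose(M @ t, A @ t))  # the filtering property holds: True
```

Applying M^{-1} in an iterative method then treats the constant (low-frequency) component exactly, which is what removes the convergence plateau the abstract describes.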
ThumbNet: One Thumbnail Image Contains All You Need for Recognition
Although deep convolutional neural networks (CNNs) have achieved great
success in computer vision tasks, their real-world application is still impeded
by their voracious demand for computational resources. Current works mostly seek
to compress the network by reducing its parameters or parameter-incurred
computation, neglecting the influence of the input image on the system
complexity. Based on the fact that input images of a CNN contain substantial
redundancy, in this paper, we propose a unified framework, dubbed ThumbNet,
to simultaneously accelerate and compress CNN models by enabling them to infer
on one thumbnail image. We provide three effective strategies to train
ThumbNet. In doing so, ThumbNet learns an inference network that performs as
well on small images as the original-input network does on large images.
With ThumbNet, not only do we obtain a thumbnail-input inference network that
can drastically reduce computation and memory requirements, but we also obtain
an image downscaler that can generate thumbnail images for generic
classification tasks. Extensive experiments show the effectiveness of ThumbNet,
and demonstrate that the thumbnail-input inference network learned by ThumbNet
can adequately retain the accuracy of the original-input network even when the
input images are downscaled 16 times.
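A 16x reduction in pixel count corresponds to shrinking each side of the image by a factor of 4. A naive block-average downscaler (a hypothetical stand-in for ThumbNet's learned downscaler, which is trained rather than fixed) makes the input-size saving concrete:

```python
import numpy as np

def downscale(img, factor=4):
    """Block-average downscaler: shrinks each side by `factor`, so a
    factor of 4 reduces the pixel count 16x."""
    H, W, C = img.shape
    return img.reshape(H // factor, factor, W // factor, factor, C).mean(axis=(1, 3))

img = np.random.rand(224, 224, 3).astype(np.float32)  # typical CNN input size
thumb = downscale(img, factor=4)
print(thumb.shape)             # (56, 56, 3)
print(img.size / thumb.size)   # 16.0
```

Every convolution in the inference network then sweeps over 16x fewer spatial positions, which is the source of the computation and memory savings the abstract claims.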