540 research outputs found

    Transition to turbulence in pulsating pipe flow

    Fluid flows in nature and applications are frequently subject to periodic velocity modulations. Surprisingly, even for the generic case of flow through a straight pipe, there is little consensus regarding the influence of pulsation on the transition threshold to turbulence: while most studies predict a monotonically increasing threshold with pulsation frequency (i.e. Womersley number, α), others observe a decreasing threshold for identical parameters and only observe an increasing threshold at low α. In the present study we apply recent advances in the understanding of transition in steady shear flows to pulsating pipe flow. For moderate pulsation amplitudes we find that the first instability encountered is subcritical (i.e. requiring finite-amplitude disturbances) and gives rise to localized patches of turbulence ("puffs"), analogous to steady pipe flow. By monitoring the impact of pulsation on the lifetime of turbulence, we map the onset of turbulence in parameter space. Transition in pulsatile flow can be separated into three regimes. At small Womersley numbers the dynamics are dominated by the decay turbulence suffers during the slower part of the cycle, and hence transition is delayed significantly. As we show, in this regime thresholds closely agree with estimates based on a quasi-steady flow assumption that takes only puff decay rates into account. The transition point predicted in the zero-α limit equals the critical point for steady pipe flow offset by the oscillation Reynolds number. In the high-frequency limit puff lifetimes are identical to those in steady pipe flow, and hence the transition threshold appears to be unaffected by flow pulsation. In the intermediate frequency regime the transition threshold drops sharply (with increasing α) from the decay-dominated (quasi-steady) threshold to the steady pipe flow level.
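The quasi-steady survival argument in the low-frequency regime can be sketched numerically: puff decay is memoryless with a Reynolds-number-dependent mean lifetime, so the survival probability over one pulsation cycle is the exponential of the integrated decay rate. The super-exponential lifetime fit and all constants below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def tau(re):
    # Hypothetical super-exponential puff-lifetime fit (illustrative constants,
    # not the fit used in the study).
    return np.exp(np.exp(0.006 * re - 8.5))

def cycle_survival(re_mean, re_osc, period, n=2000):
    # Survival probability of a puff over one pulsation cycle under a
    # quasi-steady assumption: Re(t) varies slowly, decay is memoryless,
    # so P(survive) = exp(-integral of 1/tau(Re(t)) dt).
    t = np.linspace(0.0, period, n, endpoint=False)
    re_t = re_mean + re_osc * np.sin(2 * np.pi * t / period)
    hazard = np.sum(1.0 / tau(re_t)) * (period / n)  # Riemann sum of decay rate
    return np.exp(-hazard)

# The slow (low-Re) phase of the cycle dominates decay, so pulsation lowers
# the cycle survival probability relative to steady flow at the same mean Re.
s_pulsed = cycle_survival(re_mean=2000, re_osc=300, period=500)
s_steady = cycle_survival(re_mean=2000, re_osc=0, period=500)
```

Because 1/tau is convex over this range, the time spent at low Re costs more survival probability than the time at high Re regains, which is why transition is delayed in the decay-dominated regime.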

    A direct approach to sharp Li-Yau Estimates on closed manifolds with negative Ricci lower bound

    Recently, Qi S. Zhang [26] derived a sharp Li-Yau estimate for positive solutions of the heat equation on closed Riemannian manifolds with Ricci curvature bounded below by a negative constant. The proof is based on an integral iteration argument which utilizes Hamilton's gradient estimate, heat kernel Gaussian bounds and the parabolic Harnack inequality. In this paper, we show that the sharp Li-Yau estimate can actually be obtained directly by following the classical maximum principle argument, which simplifies the proof in [26]. In addition, we apply the same idea to the heat and conjugate heat equations under the Ricci flow and prove some Li-Yau type estimates with optimal coefficients. Comment: 14 pages
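For context, the classical Li-Yau inequalities being sharpened read as follows (standard textbook forms for a positive solution u of the heat equation on a closed n-manifold; the sharp coefficients of [26] are not reproduced here):

```latex
% Ric \ge 0:
\frac{|\nabla u|^2}{u^2} - \frac{\partial_t u}{u} \le \frac{n}{2t},
\qquad \text{equivalently} \qquad \Delta \log u \ge -\frac{n}{2t}.
% Ric \ge -K,\ K > 0: Li--Yau's original form with parameter \alpha > 1:
\frac{|\nabla u|^2}{u^2} - \alpha\,\frac{\partial_t u}{u}
  \le \frac{n\alpha^2}{2t} + \frac{n\alpha^2 K}{2(\alpha - 1)}.
```

The sharpness discussed in the abstract concerns optimizing the coefficients in the second inequality in the negative-lower-bound case.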

    On wave diffraction of two-dimensional moonpools in a two-layer fluid with finite depth

    This paper studies the wave diffraction problem of two-dimensional moonpools in a two-layer fluid by using a domain decomposition scheme and the method of eigenfunction expansion. Wave exciting forces, free surface and internal wave elevations are computed and analyzed for both surface wave and internal wave modes. The present model is validated by comparing a limiting case with a single-layer fluid case. Both piston mode and sloshing mode resonances have been identified and analyzed. It is observed that, compared with the solutions in surface wave mode, the wave exciting forces in internal wave mode are much smaller, and show more peaks and valleys in the low-frequency region. As the wave frequency increases, the bandwidth of sloshing mode resonances decreases. Extensive parametric studies have been performed to examine the effects of moonpool geometry and density stratification on the resonant wave motion and exciting forces. It is found that, for twin bodies with deep draft in surface wave mode, the decreasing density ratio has little effect on the sloshing mode resonance frequencies but can to some extent suppress the horizontal wave exciting forces and surface wave elevations around the piston mode resonance region. In addition, the presence of the lower-layer fluid can lead to a reduction of the piston mode resonance frequency.
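As background for the two wave modes mentioned above, the standard linear dispersion relation for a two-layer fluid with a free surface (upper layer density ρ₁ and depth h₁, lower layer density ρ₂ > ρ₁ and depth h₂; notation assumed here, not taken from the paper) is

```latex
\omega^4\!\left[\rho_1 + \rho_2\coth(kh_1)\coth(kh_2)\right]
- \omega^2 g k\,\rho_2\!\left[\coth(kh_1) + \coth(kh_2)\right]
+ g^2 k^2\,(\rho_2 - \rho_1) = 0,
```

whose two positive roots for ω²(k) correspond to the surface and internal wave modes. As ρ₁ → ρ₂ the internal mode vanishes and the surface mode reduces to the single-layer relation ω² = gk tanh(k(h₁ + h₂)), which is the limiting case used for validation.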

    EIE: Efficient Inference Engine on Compressed Deep Neural Network

    State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy-efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120x energy saving; exploiting sparsity saves 10x; weight sharing gives 8x; skipping zero activations from ReLU saves another 3x. Evaluated on nine DNN benchmarks, EIE is 189x and 13x faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS/s working directly on a compressed network, corresponding to 3 TOPS/s on an uncompressed network, and processes FC layers of AlexNet at 1.88x10^4 frames/sec with a power dissipation of only 600 mW. It is 24,000x and 3,400x more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9x, 19x and 3x better throughput, energy efficiency and area efficiency. Comment: External Links: TheNextPlatform: http://goo.gl/f7qX0L ; O'Reilly: https://goo.gl/Id1HNT ; Hacker News: https://goo.gl/KM72SV ; Embedded-vision: http://goo.gl/joQNg8 ; Talk at NVIDIA GTC'16: http://goo.gl/6wJYvn ; Talk at Embedded Vision Summit: https://goo.gl/7abFNe ; Talk at Stanford University: https://goo.gl/6lwuer. Published as a conference paper in ISCA 2016
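The core computation EIE accelerates can be sketched in a few lines of software (an illustrative model under an assumed compressed-sparse-column layout, not the hardware design): pruned weights are stored in CSC form, each surviving weight is a small index into a shared codebook (weight sharing), and zero input activations are skipped entirely (ReLU sparsity).

```python
import numpy as np

def compressed_matvec(col_ptr, row_idx, w_idx, codebook, x, n_rows):
    """Sparse matrix-vector product on a pruned, weight-shared matrix.

    col_ptr/row_idx: CSC structure of the pruned weight matrix.
    w_idx: per-nonzero index into the shared-weight codebook.
    """
    y = np.zeros(n_rows)
    for j, a in enumerate(x):
        if a == 0.0:                                  # skip zero activations
            continue
        for p in range(col_ptr[j], col_ptr[j + 1]):   # walk column j's nonzeros
            y[row_idx[p]] += codebook[w_idx[p]] * a   # decode shared weight
    return y

# Tiny example: a 3x3 matrix with 3 surviving weights drawn from a
# 4-entry codebook; column 2 is multiplied by a zero activation and skipped.
codebook = np.array([0.0, 0.5, -1.0, 2.0])
col_ptr, row_idx, w_idx = [0, 2, 2, 3], [0, 2, 1], [1, 3, 2]
x = np.array([2.0, 5.0, 0.0])
y = compressed_matvec(col_ptr, row_idx, w_idx, codebook, x, n_rows=3)
```

Storing a 4-bit codebook index per nonzero instead of a full-precision weight is what lets the whole model sit in on-chip SRAM; the column walk only touches surviving connections, which is where the sparsity savings come from.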

    A force-based gradient descent method for ab initio atomic structure relaxation

    Force-based algorithms for ab initio atomic structure relaxation, such as conjugate gradient methods, usually get stuck in the line minimization process along search directions, where expensive ab initio calculations are triggered frequently to test trial positions before locating the next iterate. We present a force-based gradient descent method, WANBB, that circumvents this deficiency. At each iteration, WANBB enters the line minimization process with a trial stepsize capturing the local curvature of the energy surface. The exit is controlled by an unrestrictive criterion that tends to accept early trials. These two ingredients streamline the line minimization process in WANBB. Numerical simulations on nearly 80 systems of broad variety demonstrate that WANBB considerably compresses the cost of unaccepted trials compared with conjugate gradient methods. We also observe significant and universal speedups across the board, as well as superior robustness of WANBB over several widely used methods; the latter point is theoretically established. The implementation of WANBB is quite simple: no a priori physical knowledge is required, and only two parameters are present, neither requiring tuning. Comment: 8 pages, 9 figures
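A curvature-capturing trial stepsize of the kind described can be illustrated with a generic Barzilai-Borwein (BB) scheme on a toy quadratic energy surface. This is a hedged sketch of the general idea only; it is not the authors' WANBB algorithm, whose acceptance criterion and safeguards are not reproduced here.

```python
import numpy as np

def bb_descent(grad, x0, step0=1e-2, iters=100):
    """Gradient descent with a BB1 stepsize: a secant-based estimate of the
    local inverse curvature, so no line search is needed per iteration."""
    x = x0.copy()
    g = grad(x)
    step = step0
    for _ in range(iters):
        x_new = x - step * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # BB1 stepsize: s.s / s.y approximates 1 / (local curvature along s)
        if abs(np.dot(s, y)) > 1e-12:
            step = np.dot(s, s) / np.dot(s, y)
        x, g = x_new, g_new
    return x

# Toy "energy" E(x) = 0.5 x^T A x with forces -A x; minimum at the origin.
A = np.diag([1.0, 10.0])
x_min = bb_descent(lambda v: A @ v, np.array([3.0, -2.0]))
```

On a quadratic the secant pair (s, y) recovers the curvature along the step exactly, which is why BB-type stepsizes need no expensive trial evaluations along the search direction; a force-based relaxation code would substitute the ab initio forces for the analytic gradient above.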

    Proactive Information Sampling in Value-Based Decision-Making: Deciding When and Where to Saccade

    Evidence accumulation has been the core component in recent developments of perceptual and value-based decision-making theories. Most studies have focused on the evaluation of evidence between alternative options. What remains largely unknown is the process that prepares evidence: how may the decision-maker sample different sources of information sequentially, if they can only sample one source at a time? Here we propose a theoretical framework prescribing how different sources of information should be sampled to facilitate the decision process: beliefs for different noisy sources are updated in a Bayesian manner, and participants can proactively allocate resources for sampling (i.e., saccades) among different sources to maximize the information gain in this process. We show that our framework can account for human participants' actual choice and saccade behavior in a two-alternative value-based decision-making task. Moreover, our framework makes novel predictions about the empirical eye movement patterns.
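The Bayesian-update-plus-greedy-information-gain idea can be sketched with Gaussian beliefs (a minimal illustrative model, not the authors' fitted framework): each "saccade" draws one noisy sample from one source, beliefs update by conjugate Gaussian rules, and the next source is the one with the largest expected information gain, which for Gaussians is simply the most uncertain one.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(true_values, noise_sd=1.0, prior_var=4.0, n_samples=30):
    """Greedy information sampling over Gaussian value beliefs."""
    mu = np.zeros(len(true_values))            # prior means
    var = np.full(len(true_values), prior_var) # prior variances
    for _ in range(n_samples):
        # Expected info gain of sampling source i is 0.5*log(1 + var_i/noise^2),
        # monotone in var_i, so the greedy policy fixates the most uncertain source.
        i = int(np.argmax(var))
        obs = true_values[i] + noise_sd * rng.standard_normal()
        # Conjugate Gaussian update: precision-weighted average of belief and sample.
        prec = 1.0 / var[i] + 1.0 / noise_sd**2
        mu[i] = (mu[i] / var[i] + obs / noise_sd**2) / prec
        var[i] = 1.0 / prec
    return int(np.argmax(mu)), mu, var

choice, mu, var = simulate(np.array([1.0, 2.0]))
```

With symmetric priors this greedy policy alternates between the two sources, splitting samples evenly; asymmetric priors or noise levels would produce the uneven saccade allocations the framework is meant to predict.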

    Profile-Free and Real-Time Task Recommendation in Mobile Crowdsensing
