7,053 research outputs found

    Lattice Boltzmann Simulations of Droplet Formation in Confined Channels with Thermocapillary Flows

    Full text link
    Based on mesoscale lattice Boltzmann simulations with the "Shan-Chen" model, we explore the influence of thermocapillarity on the break-up properties of fluid threads in a microfluidic T-junction, where a dispersed phase is injected perpendicularly into a main channel containing a continuous phase whose cross-flow induces periodic break-up of droplets. Temperature effects are investigated by switching on and off positive and negative temperature gradients along the main channel direction, which alter the thread dynamics and lead to anticipated or delayed break-up. Numerical simulations are performed while varying the flow rates of both the continuous and dispersed phases, as well as the relative importance of viscous forces, surface tension forces and thermocapillary stresses. The range of parameters is broad enough to characterize the effects of thermocapillarity on different mechanisms of break-up in the confined T-junction, including the so-called "squeezing" and "dripping" regimes previously identified in the literature. Some simple scaling arguments are proposed to rationalize the observed behaviour and to provide quantitative guidelines on how to predict the droplet size after break-up. Comment: 18 pages, 9 figures
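
    The abstract does not reproduce the scaling arguments themselves; as a point of reference, the T-junction literature it refers to commonly uses relations of the following type for the squeezing and dripping regimes (the symbols $L$, $w$, $\alpha$, $Q_d$, $Q_c$, $\mu_c$, $u_c$ and the linearized surface tension $\sigma(T)$ are notation assumed here for illustration, not this paper's own result):

    $$\frac{L}{w} \simeq 1 + \alpha\,\frac{Q_d}{Q_c} \quad\text{(squeezing)}, \qquad Ca = \frac{\mu_c\, u_c}{\sigma(T)}, \qquad \sigma(T) = \sigma_0 + \sigma_T\,(T - T_0),$$

    so a thermocapillary gradient that raises or lowers $\sigma$ near the junction shifts the effective capillary number and hence the droplet size at break-up.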

    Ground States for Diffusion Dominated Free Energies with Logarithmic Interaction

    Get PDF
    Replacing linear diffusion by a degenerate diffusion of porous medium type is known to regularize the classical two-dimensional parabolic-elliptic Keller-Segel model. The implications of nonlinear diffusion are that solutions exist globally and are uniformly bounded in time. We analyse the stationary case, showing the existence of a global minimizer of the associated free energy that is unique up to translation. Furthermore, we prove that this global minimizer is a radially decreasing, compactly supported, continuous density function which is smooth inside its support, and it is characterized as the unique compactly supported stationary state of the evolution model. This unique profile is the clear candidate to describe the long-time asymptotics of the diffusion-dominated classical Keller-Segel model for general initial data. Comment: 30 pages, 2 figures
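
    As a reminder of the kind of functional involved (the exact constants and normalization are assumptions here, not stated in the abstract), a diffusion dominated free energy with logarithmic interaction typically takes the form

    $$\mathcal{F}[\rho] = \frac{1}{m-1}\int_{\mathbb{R}^2} \rho^m(x)\,dx + \frac{\chi}{2\pi}\int_{\mathbb{R}^2}\!\int_{\mathbb{R}^2} \rho(x)\,\log|x-y|\,\rho(y)\,dx\,dy, \qquad m > 1,$$

    where the first term is the degenerate (porous medium) diffusion entropy, the second is the attractive logarithmic interaction, and "diffusion dominated" refers to the regime in which the nonlinear diffusion term prevails over the aggregation term.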

    Non-Oberbeck-Boussinesq effects in two-dimensional Rayleigh-Bénard convection in glycerol

    Get PDF
    We numerically analyze non-Oberbeck-Boussinesq (NOB) effects in two-dimensional Rayleigh-Bénard flow in glycerol, which shows a dramatic change in the viscosity with temperature. The results are presented both as functions of the Rayleigh number ($Ra$) up to $10^8$ (for fixed temperature difference between the top and bottom plates) and as functions of the "non-Oberbeck-Boussinesqness" or "NOBness" ($\Delta$) up to 50 K (for fixed $Ra$). For this large NOBness the center temperature $T_c$ is more than 5 K larger than the arithmetic mean temperature $T_m$ between top and bottom plate and only weakly depends on $Ra$. To physically account for the NOB deviations of the Nusselt number from its Oberbeck-Boussinesq value, we apply the decomposition of $Nu_{NOB}/Nu_{OB}$ into the product of two effects, namely first the change in the sum of the top and bottom thermal BL thicknesses, and second the shift of the center temperature $T_c$ as compared to $T_m$. While for water the origin of the $Nu$ deviation is totally dominated by the second effect (cf. Ahlers et al., J. Fluid Mech. 569, pp. 409 (2006)), for glycerol the first effect is dominating, in spite of the large increase of $T_c$ as compared to $T_m$. Comment: 6 pages, 7 figures
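
    Schematically, the decomposition referred to above reads (the factor names $F_\lambda$ and $F_\Delta$ are notation assumed here; see Ahlers et al. 2006 for the precise definitions)

    $$\frac{Nu_{NOB}}{Nu_{OB}} = F_\lambda \, F_\Delta,$$

    where $F_\lambda$ measures the change in the sum of the top and bottom thermal boundary-layer thicknesses relative to the Oberbeck-Boussinesq case, and $F_\Delta$ the effect of the shift of the center temperature $T_c$ away from the arithmetic mean $T_m$.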

    A Proposal for an ADU Incentive Program for the City of Mill Valley

    Get PDF
    The development of Accessory Dwelling Units (ADUs) is an important option for responding to increases in housing demand, increasing the diversity in housing options, broadening the range of space available, addressing varied financial needs, and increasing density.

    $\delta N$ formalism

    Full text link
    Precise understanding of the nonlinear evolution of cosmological perturbations during inflation is necessary for the correct interpretation of measurements of non-Gaussian correlations in the cosmic microwave background and the large-scale structure of the universe. The "$\delta N$ formalism" is a popular and powerful technique for computing the non-linear evolution of cosmological perturbations on large scales. In particular, it enables us to compute the curvature perturbation, $\zeta$, on large scales without actually solving perturbed field equations. However, people often wonder why this is the case. In order for this approach to be valid, the perturbed Hamiltonian constraint and matter-field equations on large scales must, with a suitable choice of coordinates, take on the same forms as the corresponding unperturbed equations. We find that this is possible when (1) the unperturbed metric is given by a homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker metric; and (2) on large scales and with a suitable choice of coordinates, one can ignore the shift vector ($g_{0i}$) as well as the time-dependence of the tensor perturbations to $g_{ij}/a^2(t)$ of the perturbed metric. While the first condition has to be assumed a priori, the second condition can be met when (3) the anisotropic stress becomes negligible on large scales. However, in order to explicitly show that the second condition follows from the third condition, one has to use the gravitational field equations, and thus this statement may depend on the details of the theory of gravitation. Finally, as the $\delta N$ formalism uses only the Hamiltonian constraint and matter-field equations, it does not a priori respect the momentum constraint. We show that the violation of the momentum constraint only yields a decaying mode solution for $\zeta$, and the violation vanishes when the slow-roll conditions are satisfied. Comment: 10 pages
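
    For reference, the central statement of the formalism (in its standard form from the literature; it is not spelled out in the abstract above) is that on super-horizon scales the curvature perturbation equals the perturbation of the number of e-folds of expansion from an initial flat slice to a final uniform-density slice,

    $$\zeta(t,\mathbf{x}) = \delta N = N(t, t_*; \mathbf{x}) - \bar{N}(t, t_*), \qquad N = \int_{t_*}^{t} H\,dt',$$

    which, expanded in the field perturbations on the initial flat slice, gives $\zeta \simeq \sum_I (\partial N/\partial \varphi^I)\,\delta\varphi^I + \tfrac{1}{2}\sum_{I,J} (\partial^2 N/\partial\varphi^I\partial\varphi^J)\,\delta\varphi^I\delta\varphi^J$.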

    Generating sequential space-filling designs using genetic algorithms and Monte Carlo methods

    Get PDF
    In this paper, the authors compare a Monte Carlo method and an optimization-based approach using genetic algorithms for sequentially generating space-filling experimental designs. It is shown that Monte Carlo methods perform better than genetic algorithms for this specific problem.
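
    The abstract does not detail either algorithm; as a rough illustration of the Monte Carlo flavour of sequential space-filling design, the sketch below adds one point at a time by sampling random candidates and keeping the one that maximizes the minimum distance to the existing design (a maximin criterion; the function name and the specific criterion are assumptions, not necessarily the authors' exact method):

    import numpy as np

    def add_point_monte_carlo(existing, n_candidates=1000, dim=2, rng=None):
        """Sample candidate points uniformly in [0, 1]^dim and return the one
        with the largest nearest-neighbour distance to the existing design."""
        rng = np.random.default_rng() if rng is None else rng
        candidates = rng.random((n_candidates, dim))
        # distance from every candidate to every existing design point
        dists = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=-1)
        best = np.argmax(dists.min(axis=1))  # maximin: best worst-case spacing
        return candidates[best]

    # usage: grow a 2-D design one point at a time
    design = np.array([[0.5, 0.5]])
    for _ in range(9):
        design = np.vstack([design, add_point_monte_carlo(design)])
    print(design)

    A genetic-algorithm variant would instead evolve whole candidate designs under the same space-filling criterion.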

    On Minimizing Crossings in Storyline Visualizations

    Get PDF
    In a storyline visualization, we visualize a collection of interacting characters (e.g., in a movie, play, etc.) by $x$-monotone curves that converge for each interaction, and diverge otherwise. Given a storyline with $n$ characters, we show tight lower and upper bounds on the number of crossings required in any storyline visualization for a restricted case. In particular, we show that if (1) each meeting consists of exactly two characters and (2) the meetings can be modeled as a tree, then we can always find a storyline visualization with $O(n\log n)$ crossings. Furthermore, we show that there exist storylines in this restricted case that require $\Omega(n\log n)$ crossings. Lastly, we show that, in the general case, minimizing the number of crossings in a storyline visualization is fixed-parameter tractable, when parameterized on the number of characters $k$. Our algorithm runs in time $O(k!^2 k\log k + k!^2 m)$, where $m$ is the number of meetings. Comment: 6 pages, 4 figures. To appear at the 23rd International Symposium on Graph Drawing and Network Visualization (GD 2015)
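
    The objective being minimized can be made concrete with a small sketch: if a layout assigns each time step a top-to-bottom ordering of the characters, the crossings are the pairs of characters whose relative order flips between consecutive time steps. The code below only evaluates a given layout; the minimization itself is the hard part addressed in the paper, and this is an illustration, not the authors' algorithm:

    from itertools import combinations

    def crossings_between(order_a, order_b):
        """Count character pairs whose vertical order swaps between two
        consecutive time steps (each order is a list of character ids)."""
        pos_a = {c: i for i, c in enumerate(order_a)}
        pos_b = {c: i for i, c in enumerate(order_b)}
        common = [c for c in order_a if c in pos_b]
        return sum(1 for u, v in combinations(common, 2)
                   if (pos_a[u] - pos_a[v]) * (pos_b[u] - pos_b[v]) < 0)

    def total_crossings(layout):
        """layout: list of character orderings, one per time step."""
        return sum(crossings_between(a, b) for a, b in zip(layout, layout[1:]))

    # characters 1 and 2 swap once -> one crossing
    print(total_crossings([[1, 2, 3], [2, 1, 3], [2, 1, 3]]))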

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages
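
    Since the tensor train (TT) format carries much of the weight in Part 2, a minimal sketch of how a dense tensor is brought into TT form by sequential truncated SVDs may be useful. This is the generic TT-SVD idea as commonly presented in the literature, written here as illustrative NumPy code rather than anything taken from the monograph:

    import numpy as np

    def tt_decompose(tensor, max_rank):
        """Decompose a dense tensor into a list of 3-way TT cores by
        sequential truncated SVDs (the classical TT-SVD scheme)."""
        shape = tensor.shape
        cores, rank = [], 1
        mat = tensor.reshape(rank * shape[0], -1)
        for k in range(len(shape) - 1):
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            r_new = min(max_rank, len(s))
            cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
            mat = (s[:r_new, None] * vt[:r_new]).reshape(r_new * shape[k + 1], -1)
            rank = r_new
        cores.append(mat.reshape(rank, shape[-1], 1))
        return cores

    # usage: compress a random 4-way tensor and check the relative error
    x = np.random.rand(4, 5, 6, 7)
    cores = tt_decompose(x, max_rank=3)
    approx = cores[0]
    for core in cores[1:]:
        approx = np.tensordot(approx, core, axes=([-1], [0]))
    approx = approx.reshape(x.shape)  # boundary TT ranks are 1
    print(np.linalg.norm(x - approx) / np.linalg.norm(x))

    The storage of the cores grows linearly with the tensor order instead of exponentially, which is the "curse of dimensionality" point made in the abstract.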

    Nonlinear Evolution of Very Small Scale Cosmological Baryon Perturbations at Recombination

    Get PDF
    The evolution of baryon density perturbations on very small scales is investigated. In particular, the nonlinear growth induced by the radiation drag force from the shear velocity field on larger scales during the recombination epoch, originally proposed by Shaviv in 1998, is studied in detail. It is found that including the diffusion term which Shaviv neglected in his analysis suppresses the growth, resulting in rather mild amplification with growth rate $\ll 100$ instead of the enormous amplification $\sim 10^4$ of Shaviv's original claim. The growth factor strongly depends on the amplitude of the large-scale velocity field. The nonlinear growth mechanism is applied to density perturbations of general adiabatic cold dark matter (CDM) models. In these models, previous works have found that the baryon density perturbations are not completely erased by diffusion damping if the gravitational potential of CDM is present. Employing the perturbed rate equation derived in this paper, the nonlinear evolution of baryon density perturbations is investigated. It is found that: (1) the nonlinear growth is larger for smaller scales, and this mechanism only affects perturbations on scales smaller than $\sim 10^2 M_\odot$, which coincide with stellar scales; (2) the maximum growth factors of baryon density fluctuations for various COBE-normalized CDM models are typically less than a factor of 10 for $3\sigma$ large-scale velocity peaks; (3) the growth factor depends on $\Omega_{\rm b}$. Comment: 24 pages, 9 figures, submitted to Ap