
    A note on Stokes' problem in dense granular media using the $\mu(I)$-rheology

    The classical Stokes' problem describing the fluid motion due to a steadily moving infinite wall is revisited in the context of dense granular flows of mono-dispersed beads using the recently proposed $\mu(I)$-rheology. In Newtonian fluids, molecular diffusion brings about a self-similar velocity profile, and the boundary layer in which the fluid motion takes place grows indefinitely with time $t$ as $\sqrt{\nu t}$, where $\nu$ is the kinematic viscosity. For a dense granular visco-plastic liquid, it is shown that the local shear stress, when properly rescaled, exhibits self-similar behaviour at short times and then rapidly evolves towards a steady-state solution. The resulting shear layer grows in thickness as $\sqrt{\nu_g t}$, analogous to a Newtonian fluid, where $\nu_g$ is an equivalent granular kinematic viscosity that depends not only on the intrinsic properties of the granular medium, such as grain diameter $d$, density $\rho$ and friction coefficients, but also on the applied pressure $p_w$ at the moving wall and the (constant) solid fraction $\phi$. In addition, the $\mu(I)$-rheology indicates that this growth continues until the steady-state boundary layer thickness $\delta_s = \beta_w (p_w/\phi \rho g)$, independent of the grain size, is reached at a finite time proportional to $\beta_w^2 (p_w/\rho g d)^{3/2} \sqrt{d/g}$, where $g$ is the acceleration due to gravity and $\beta_w = (\tau_w - \tau_s)/\tau_s$ is the relative surplus of the steady-state wall shear stress $\tau_w$ over the critical wall shear stress $\tau_s$ (yield stress) needed to bring the granular medium into motion... (see article for a complete abstract). Comment: in press (Journal of Fluid Mechanics)
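    The abstract quotes closed-form scalings for the steady-state shear-layer thickness and the time needed to reach it; the sketch below simply evaluates those expressions for a set of hypothetical grain and wall parameters (all numerical values and the omitted order-one prefactor are illustrative assumptions, not taken from the paper).

```python
import math

# Illustrative evaluation of the scalings quoted in the abstract.
# All parameter values are hypothetical, chosen only to show the dependencies.
d     = 1e-3      # grain diameter [m]
rho   = 2500.0    # grain density [kg/m^3]
phi   = 0.6       # solid fraction (constant)
g     = 9.81      # gravitational acceleration [m/s^2]
p_w   = 500.0     # applied pressure at the moving wall [Pa]
tau_w = 300.0     # steady-state wall shear stress [Pa]
tau_s = 250.0     # critical (yield) wall shear stress [Pa]

# Relative surplus of the wall shear stress over the yield stress
beta_w = (tau_w - tau_s) / tau_s

# Steady-state boundary-layer thickness, independent of grain size
delta_s = beta_w * p_w / (phi * rho * g)

# Time scale at which the steady state is reached (O(1) prefactor omitted)
t_s = beta_w**2 * (p_w / (rho * g * d))**1.5 * math.sqrt(d / g)

print(f"beta_w  = {beta_w:.3f}")
print(f"delta_s = {delta_s * 1000:.2f} mm")
print(f"t_s     ~ {t_s:.3f} s (up to an order-one prefactor)")
```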

    Quantifying Timing Leaks and Cost Optimisation

    We develop a new notion of security against timing attacks where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then show how to measure the security of a program with respect to this notion via a computable estimate of the timing leakage and use this estimate for cost optimisation. Comment: 16 pages, 2 figures, 4 tables. A shorter version is included in the proceedings of ICICS'08 - 10th International Conference on Information and Communications Security, 20-22 October 2008, Birmingham, UK
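    The paper defines its own computable estimate of the timing leakage; as a generic illustration of the underlying idea, namely quantifying how much an observer learns about a secret from execution times, the sketch below estimates the empirical mutual information between a secret bit and a bucketed running time over sampled runs. The toy program, the bucketing, and the use of mutual information are assumptions made for illustration and are not the paper's construction.

```python
import math
import random
import time
from collections import Counter

def leaky(secret: int) -> None:
    # Hypothetical program whose running time depends on a secret bit.
    n = 20000 if secret else 10000
    s = 0
    for i in range(n):
        s += i

def timed_run(secret: int) -> float:
    t0 = time.perf_counter()
    leaky(secret)
    return time.perf_counter() - t0

def mutual_information(samples, bins: int = 10) -> float:
    # Empirical mutual information I(secret; bucketed time) in bits.
    times = [t for _, t in samples]
    lo, hi = min(times), max(times)
    width = (hi - lo) / bins or 1.0
    obs = [(s, min(int((t - lo) / width), bins - 1)) for s, t in samples]
    n = len(obs)
    p_joint = Counter(obs)
    p_s = Counter(s for s, _ in obs)
    p_b = Counter(b for _, b in obs)
    mi = 0.0
    for (s, b), c in p_joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((p_s[s] / n) * (p_b[b] / n)))
    return mi

samples = [(sec, timed_run(sec)) for sec in (random.randint(0, 1) for _ in range(400))]
print(f"estimated timing leakage ~ {mutual_information(samples):.3f} bits")
```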

    Probabilistic abstract interpretation: From trace semantics to DTMCs and linear regression

    In order to perform probabilistic program analysis we need to consider probabilistic languages or languages with a probabilistic semantics, as well as a corresponding framework for the analysis which is able to accommodate probabilistic properties and properties of probabilistic computations. To this end we investigate the relationship between three different types of probabilistic semantics for a core imperative language, namely Kozen's Fixpoint Semantics, our Linear Operator Semantics and probabilistic versions of Maximal Trace Semantics. We also discuss the relationship between Probabilistic Abstract Interpretation (PAI) and statistical or linear regression analysis. While classical Abstract Interpretation, based on Galois connections, allows only for worst-case analyses, the use of the Moore-Penrose pseudo-inverse in PAI opens the possibility of exploiting statistical and noisy observations in order to analyse and identify various system properties.
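    A minimal sketch of the pseudo-inverse construction used in Probabilistic Abstract Interpretation: given a concrete (stochastic) operator F and a linear abstraction A, the induced abstract operator is taken as A⁺FA, with A⁺ the Moore-Penrose pseudo-inverse, i.e. a least-squares best approximation rather than a worst-case over-approximation. The toy Markov chain and the two-class abstraction below are illustrative assumptions, not examples from the paper.

```python
import numpy as np

# Concrete "program" as a row-stochastic operator F on distributions over
# four concrete states (a tiny Markov chain acting as x -> x F).
F = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
])

# Abstraction A: classify the four concrete states into two abstract classes
# (states 0,1 -> class 0; states 2,3 -> class 1).
A = np.array([
    [1.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [0.0, 1.0],
])

# PAI replaces the Galois-connection adjoint by the Moore-Penrose
# pseudo-inverse A+ and induces the abstract operator F# = A+ F A.
F_abs = np.linalg.pinv(A) @ F @ A
print(F_abs)   # [[0.75 0.25]
               #  [0.25 0.75]] -- a stochastic operator on the two abstract classes
```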

    An Algorithmic Approach to Quantum Field Theory

    The lattice formulation provides a way to regularize, define and compute the Path Integral in a Quantum Field Theory. In this paper we review the theoretical foundations and the most basic algorithms required to implement a typical lattice computation, including the Metropolis and Gibbs sampling algorithms, and the Minimal Residual and Stabilized Biconjugate inverters. The main emphasis is on gauge theories with fermions such as QCD. We also provide examples of typical results from lattice QCD computations for quantities of phenomenological interest. Comment: 44 pages, to be published in IJMP
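    As a concrete illustration of the simplest algorithm listed above, the sketch below runs Metropolis updates for a free scalar field on a one-dimensional periodic lattice; the lattice size, mass and proposal width are arbitrary illustrative choices, and a realistic lattice QCD code would of course work with gauge links and far more elaborate updates.

```python
import math
import random

# Metropolis sweeps for a free scalar field on a 1-D periodic lattice.
# Action: S = sum_x [ (phi[x+1] - phi[x])^2 / 2 + m^2 phi[x]^2 / 2 ]
# Lattice size, mass and proposal width are illustrative, not from the paper.
L, m2, step = 64, 0.25, 1.0
phi = [0.0] * L

def local_action(x: int, val: float) -> float:
    # Part of the action that depends on the field value at site x.
    left, right = phi[(x - 1) % L], phi[(x + 1) % L]
    return 0.5 * ((val - left) ** 2 + (right - val) ** 2) + 0.5 * m2 * val ** 2

def metropolis_sweep() -> int:
    accepted = 0
    for x in range(L):
        old, new = phi[x], phi[x] + random.uniform(-step, step)
        dS = local_action(x, new) - local_action(x, old)
        # Accept the proposal with probability min(1, exp(-dS)).
        if dS <= 0 or random.random() < math.exp(-dS):
            phi[x] = new
            accepted += 1
    return accepted

for sweep in range(200):
    metropolis_sweep()
print("<phi^2> =", sum(v * v for v in phi) / L)
```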

    Lattice study of two-dimensional N=(2,2) super Yang-Mills at large-N

    We study two-dimensional N=(2,2) SU(N) super Yang-Mills theory on a Euclidean two-torus using Sugino's lattice regularization. We perform Monte Carlo simulations for N=2,3,4,5 and then extrapolate the results to N = infinity. With periodic boundary conditions for the fermions along both circles, we establish the existence of a bound state in which the scalar fields clump around the origin, in spite of the existence of a classical flat direction. In this phase the global (Z_N)^2 symmetry turns out to be broken. We provide a simple explanation for this fact and discuss its physical implications. Comment: 24 pages, 13 figures
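    The large-N extrapolation mentioned above is often performed by fitting the finite-N data to a polynomial in 1/N^2; the sketch below shows such a fit on hypothetical numbers (the data values, errors, and the assumed 1/N^2 form are illustrative assumptions, not results from the paper).

```python
import numpy as np

# Sketch of a large-N extrapolation of a generic observable measured at
# N = 2, 3, 4, 5.  Data and errors below are hypothetical.
N    = np.array([2.0, 3.0, 4.0, 5.0])
obs  = np.array([0.91, 0.96, 0.98, 0.99])   # hypothetical measurements
errs = np.array([0.02, 0.02, 0.01, 0.01])   # hypothetical errors

# Weighted linear fit obs(N) = a + b / N^2; the intercept a is the N -> infinity value.
x = 1.0 / N**2
slope, intercept = np.polyfit(x, obs, deg=1, w=1.0 / errs)
print(f"extrapolated N=infinity value: {intercept:.3f}  (1/N^2 coefficient: {slope:.3f})")
```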

    A computational group theoretic symmetry reduction package for the SPIN model checker

    Symmetry-reduced model checking is hindered by two problems: how to identify state space symmetry when systems are not fully symmetric, and how to determine equivalence of states during search. We present TopSpin, a fully automatic symmetry reduction package for the SPIN model checker. TopSpin uses the GAP computational algebra system to effectively detect state space symmetry from the associated Promela specification, and to choose an efficient symmetry reduction strategy by classifying automorphism groups as disjoint or wreath products of subgroups. We present encouraging experimental results for a variety of Promela examples.
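    The second problem mentioned above, deciding equivalence of states during search, amounts to replacing each state by a canonical representative of its orbit under the symmetry group. The sketch below illustrates the idea by brute force over the full symmetric group of three identical processes; this is only a conceptual illustration, since TopSpin relies on GAP and the structure of the automorphism group to avoid such exhaustive enumeration.

```python
from itertools import permutations

# Two global states are equivalent if some permutation of the identical
# processes maps one onto the other.  Replacing each state by the
# lexicographically least element of its orbit gives a canonical
# representative, so equivalent states collapse to one entry in the
# visited set.

def canonical(state: tuple, group) -> tuple:
    # state[i] is the local state of process i; group is a list of permutations.
    return min(tuple(state[p[i]] for i in range(len(state))) for p in group)

group = list(permutations(range(3)))        # full symmetry of 3 identical processes
visited = set()
for s in [(0, 1, 2), (2, 1, 0), (1, 2, 0), (0, 0, 1), (1, 0, 0)]:
    visited.add(canonical(s, group))
print(visited)   # {(0, 1, 2), (0, 0, 1)} -- five states reduce to two orbits
```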

    Oncogenic K-Ras suppresses IP₃-dependent Ca²⁺ release through remodeling of IP₃R isoform composition and ER luminal Ca²⁺ levels in colorectal cancer cell lines

    The GTPase Ras is a molecular switch engaged downstream of G-protein coupled receptors and receptor tyrosine kinases that controls multiple cell fate-determining signalling pathways. Ras signalling is frequently deregulated in cancer, underlying the associated changes in cell phenotype. Although Ca2+ signalling pathways control some overlapping functions with Ras, and altered Ca2+ signalling pathways are emerging as important players in oncogenic transformation, how Ca2+ signalling is remodelled during transformation and whether it has a causal role remains unclear. We have investigated Ca2+ signalling in two human colorectal cancer cell lines and their isogenic derivatives in which the mutated K-Ras allele (G13D) has been deleted by homologous recombination. We show that agonist-induced Ca2+ release from intracellular stores is enhanced by loss of K-RasG13D through an increase in the ER store content and a modification of IP3R subtype abundance. Consistently, uptake of Ca2+ into mitochondria and sensitivity to apoptosis were enhanced as a result of K-RasG13D loss. These results suggest that suppression of Ca2+ signalling is a common response to naturally occurring levels of K-RasG13D that contributes to a survival advantage during oncogenic transformation

    Towards a lattice determination of the $B^\ast B \pi$ coupling

    The coupling $g_{B^\ast B \pi}$ is related to the form factor at zero momentum of the axial current between $B^\ast$- and $B$-states. This form factor is evaluated on the lattice using static heavy quarks and light quark propagators determined by a stochastic inversion of the fermionic bilinear. The $g_{B^\ast B \pi}$ coupling is related to the coupling $g$ between heavy mesons and low-momentum pions in the effective heavy meson chiral Lagrangian. The coupling of the effective theory can therefore be computed by numerical simulations. We find the value $g = 0.42(4)(8)$. Besides its theoretical interest, the phenomenological implications of such a determination are discussed. Comment: 20 pages, 6 figures
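    For reference, in the conventions commonly used for the heavy meson chiral Lagrangian (with $f_\pi \approx 132$ MeV) the hadronic and effective couplings are related as shown below; the precise normalisation is convention dependent and should be checked against the paper.

```latex
g_{B^\ast B \pi} = \frac{2 m_B}{f_\pi}\, g
```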

    Second large-scale Monte Carlo study for the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) represents the next generation of ground-based instruments for Very High Energy gamma-ray astronomy. It is expected to improve on the sensitivity of current instruments by an order of magnitude and to provide energy coverage from 20 GeV to more than 200 TeV. In order to achieve these ambitious goals, Monte Carlo (MC) simulations play a crucial role in guiding the design of CTA. Here, results of the second large-scale MC production are reported, providing a realistic estimate of the performance of feasible array candidates for both Northern and Southern Hemisphere sites, and placing CTA's capabilities in the context of the current generation of High Energy $\gamma$-ray detectors. Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589