5,698 research outputs found
Algorithmic Debugging of Real-World Haskell Programs: Deriving Dependencies from the Cost Centre Stack
Existing algorithmic debuggers for Haskell require a transformation of all modules in a program, even libraries that the user does not want to debug and which may use language features not supported by the debugger. This is a pity, because a promising approach to debugging is therefore not applicable to many real-world programs. We use the cost centre stack from the Glasgow Haskell Compiler profiling environment, together with runtime value observations as provided by the Haskell Object Observation Debugger (HOOD), to collect enough information for algorithmic debugging. Program annotations are needed in suspected modules only. With this technique, algorithmic debugging is applicable to a much larger set of Haskell programs. This demonstrates that, for functional languages in general, a simple stack-trace extension is useful to support tasks such as profiling and debugging.
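The algorithmic-debugging loop this abstract builds on can be sketched independently of Haskell. The following Python sketch (all names hypothetical; the paper's actual implementation records results via GHC cost-centre stacks and HOOD observations) walks a computation tree, asks an oracle whether each recorded result is correct, and blames a node that is wrong while all of its children are right:

```python
# Minimal sketch of top-down algorithmic debugging (hypothetical names).
# A node records one call and its observed result; the oracle says whether
# that result matches the programmer's intention.

class Node:
    def __init__(self, label, result, children=()):
        self.label = label          # description of the call, e.g. "insert 3 [1,2]"
        self.result = result        # the observed value for that call
        self.children = list(children)

def debug(node, oracle):
    """Return the faulty node, or None if this subtree's result is correct.

    A node whose result is wrong, but whose children all produced correct
    results, is reported as the location of the defect.
    """
    if oracle(node):
        return None                 # this subtree computed the intended value
    for child in node.children:
        faulty = debug(child, oracle)
        if faulty is not None:
            return faulty           # the defect lies deeper in the tree
    # All children correct, yet this node is wrong: blame this node.
    return node
```

In the paper's setting, the tree is derived from the cost-centre stack and the oracle is the user judging observed values; here both are stubbed for illustration.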
Strengthening impact assessment: a call for integration and focus
We suggest that the impact assessment community has lost its way, based on our observation that impact assessment is under attack because of a perceived lack of efficiency. Specifically, we contend that the proliferation of different impact assessment types creates separate silos of expertise and feeds arguments for not only a lack of efficiency but also a lack of effectiveness of the process, through excessive specialisation and a lack of interdisciplinary practice. We propose that the solution is a return to the basics of impact assessment, with a call for increased integration around the goal of sustainable development and focus through better scoping. We rehearse and rebut counter-arguments covering silo-based expertise, advocacy, democracy, sustainability understanding and communication. We call on the impact assessment community to rise to the challenge of increasing integration and focus, and to engage in the debate about the means of strengthening impact assessment.
Some relations between Lagrangian models and synthetic random velocity fields
We propose an alternative interpretation of Markovian transport models based on the well-mixedness condition, in terms of the properties of a random velocity field with second-order structure functions scaling linearly in the space-time increments. This interpretation allows direct association of the drift and noise terms entering the model with the geometry of the turbulent fluctuations. In particular, the well-known non-uniqueness problem in the well-mixedness approach is solved in terms of the antisymmetric part of the velocity correlations; its relation with the presence of non-zero mean helicity and other geometrical properties of the flow is elucidated. The well-mixedness condition appears to be a special case of the relation between conditional velocity increments of the random field and the one-point Eulerian velocity distribution, allowing generalization of the approach to the transport of non-tracer quantities. Application to solid-particle transport leads to a model satisfying, in the homogeneous isotropic turbulence case, all the conditions on the behaviour of the correlation times for the fluid velocity sampled by the particles. In particular, the correlation times are, respectively, longer and shorter than in the passive-tracer case when gravity and when inertia dominate; in the gravity-dominated case, correlation times are longer for the velocity component along gravity than for the perpendicular ones. The model produces, in channel-flow geometry, particle deposition rates in agreement with experiments.
Comment: 54 pages, 8 eps figures included; contains additional material on SO(3) and on turbulent channel flows. A few typos corrected
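For context, the well-mixedness condition invoked above can be stated in its standard Thomson (1987) form (a textbook statement, not taken from this paper). For a Langevin model of the fluid-particle velocity, the drift must make the one-point Eulerian velocity PDF a solution of the associated Fokker-Planck equation:

```latex
% Langevin model (standard form): C_0 is the Lagrangian structure constant,
% \varepsilon the mean dissipation rate, W_i a Wiener process.
\mathrm{d}u_i = a_i(\mathbf{x},\mathbf{u},t)\,\mathrm{d}t
              + \sqrt{C_0\,\varepsilon}\;\mathrm{d}W_i ,
\qquad
\mathrm{d}x_i = u_i\,\mathrm{d}t .
% Well-mixedness: the Eulerian one-point velocity PDF g(x,u,t) must satisfy
\frac{\partial g}{\partial t}
+ u_i\,\frac{\partial g}{\partial x_i}
+ \frac{\partial\,(a_i\,g)}{\partial u_i}
= \frac{C_0\,\varepsilon}{2}\,
  \frac{\partial^2 g}{\partial u_i\,\partial u_i} .
```

This constraint fixes the drift $a_i$ only up to a term divergence-free in velocity space, which is the non-uniqueness the abstract resolves via the antisymmetric part of the velocity correlations.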
Supergravity Instabilities of Non-Supersymmetric Quantum Critical Points
Motivated by the recent use of certain consistent truncations of M-theory to study condensed matter physics using holographic techniques, we study the SU(3)-invariant sector of four-dimensional, N=8 gauged supergravity and compute the complete scalar spectrum at each of the five non-trivial critical points. We demonstrate that the smaller SU(4)^- sector is equivalent to a consistent truncation studied recently by various authors and find that the critical point in this sector, which has been proposed as the ground state of a holographic superconductor, is unstable due to a family of scalars that violate the Breitenlohner-Freedman bound. We also derive the origin of this instability in eleven dimensions and comment on the generalization to other embeddings of this critical point which involve arbitrary Sasaki-Einstein seven-manifolds. In the spirit of a resurging interest in consistent truncations, we present a formal treatment of the SU(3)-invariant sector as a U(1)xU(1) gauged N=2 supergravity theory coupled to one hypermultiplet.
Comment: 46 pages
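The instability criterion invoked above is the Breitenlohner-Freedman bound; in its standard form (not specific to this paper), a scalar of mass $m$ in $AdS_{d+1}$ of radius $L$ is perturbatively stable only if

```latex
m^2 L^2 \;\ge\; -\,\frac{d^2}{4}
\qquad\Longrightarrow\qquad
m^2 L^2 \;\ge\; -\,\frac{9}{4}
\quad \text{in } AdS_4 \ (d=3).
```

Modes with masses below this bound grow exponentially rather than oscillate, which is the sense in which the proposed holographic-superconductor ground state is unstable.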
A Complete Classification of Higher Derivative Gravity in 3D and Criticality in 4D
We study the conditions under which the theory is unitary and stable in three-dimensional gravity with the most general quadratic-curvature, Lorentz-Chern-Simons and cosmological terms. We provide the complete classification of the unitary theories around flat Minkowski and (anti-)de Sitter spacetimes. The analysis is performed by examining the quadratic fluctuations around these classical vacua. We also discuss how to understand the criticality condition for four-dimensional theories at the Lagrangian level.
Comment: 20 pages, v2: minor corrections, refs. added, v3: logic modified, v4: typos corrected
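As a point of reference, the class of actions the abstract refers to can be written schematically as follows (a standard parametrization, with placeholder couplings; conventions vary between papers):

```latex
S \;=\; \frac{1}{\kappa^2}\int \mathrm{d}^3x\,\sqrt{-g}\,
\Bigl[\,\sigma R \;-\; 2\Lambda \;+\; \alpha\,R^2 \;+\; \beta\,R_{\mu\nu}R^{\mu\nu}\,\Bigr]
\;+\;
\frac{1}{2\kappa^2\mu}\int \mathrm{d}^3x\;
\varepsilon^{\lambda\mu\nu}\,\Gamma^{\rho}{}_{\lambda\sigma}
\Bigl(\partial_\mu \Gamma^{\sigma}{}_{\nu\rho}
+ \tfrac{2}{3}\,\Gamma^{\sigma}{}_{\mu\tau}\Gamma^{\tau}{}_{\nu\rho}\Bigr),
```

where the last term is the Lorentz-Chern-Simons contribution; the classification then amounts to determining which parameter ranges $(\sigma,\Lambda,\alpha,\beta,\mu)$ give unitary, stable fluctuations around each vacuum.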
Kaehler forms and cosmological solutions in type II supergravities
We consider cosmological solutions to type II supergravity theories where the spacetime is split into an FRW universe and a Kähler space, which may be taken to be Calabi-Yau. The various 2-forms present in the theories are taken to be proportional to the Kähler form associated to the Kähler space.
Comment: 6 pages, LaTeX2e
On Maximal Massive 3D Supergravity
We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.
Comment: 33 pages
Turbulent Friction in Rough Pipes and the Energy Spectrum of the Phenomenological Theory
The classical experiments on turbulent friction in rough pipes were performed by J. Nikuradse in the 1930's. Seventy years later, they continue to defy theory. Here we model Nikuradse's experiments using the phenomenological theory of Kolmogorov, a theory that is widely thought to be applicable only to highly idealized flows. Our results include both the empirical scalings of Blasius and Strickler, and are otherwise in minute qualitative agreement with the experiments; they suggest that the phenomenological theory may be relevant to other flows of practical interest; and they unveil the existence of close ties between two milestones of experimental and theoretical turbulence.
Comment: Accepted for publication in PRL; 4 pages, 4 figures; revised version
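For reference, the two empirical scalings named above can be written as follows (standard forms for the friction factor $f$, with $\mathrm{Re}$ the Reynolds number, $r$ the roughness height and $R$ the pipe radius; not quoted verbatim from the paper):

```latex
f \;\sim\; \mathrm{Re}^{-1/4}
\quad \text{(Blasius: smooth pipes, moderate Re)},
\qquad
f \;\sim\; \left(\frac{r}{R}\right)^{1/3}
\quad \text{(Strickler: rough pipes, } \mathrm{Re}\to\infty\text{)}.
```

The paper's claim is that both limits emerge from the Kolmogorov energy spectrum applied to momentum transfer near the rough wall.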
Measurement of Lagrangian velocity in fully developed turbulence
We have developed a new experimental technique to measure the Lagrangian velocity of tracer particles in a turbulent flow, based on ultrasonic Doppler tracking. This method yields direct access to the velocity of a single particle at a turbulent Reynolds number. Its dynamics is analyzed with two decades of time resolution, below the Lagrangian correlation time. We observe that the Lagrangian velocity spectrum has a Lorentzian form, in agreement with a Kolmogorov-like scaling in the inertial range. The probability density function (PDF) of the velocity time increments displays a change of shape from quasi-Gaussian at the integral time scale to stretched exponential tails at the smallest time increments. This intermittency, when measured from relative scaling exponents of structure functions, is more pronounced than in the Eulerian framework.
Comment: 4 pages, 5 figures. to appear in PR
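A Lorentzian Lagrangian spectrum of the kind described is commonly written as (an assumed standard form, with $u^2$ the velocity variance and $T$ the Lagrangian correlation time; not quoted from the paper):

```latex
E_L(\omega) \;\propto\; \frac{u^2\,T}{1 + (T\omega)^2}\, ,
```

whose $\omega^{-2}$ tail at $T\omega \gg 1$ matches the Kolmogorov-like inertial-range prediction $E_L(\omega)\propto \varepsilon\,\omega^{-2}$.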
Application of the Fokker-Planck molecular mixing model to turbulent scalar mixing using moment methods
An extended quadrature method of moments using the beta kernel density function (beta-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope [Direct numerical simulations of the turbulent mixing of a passive scalar, Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope [Mapping closures for turbulent mixing and reaction, Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope [A DNS study of turbulent mixing of two passive scalars, Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed beta-PDF model [S. S. Girimaji, Assumed beta-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing, Combust. Sci. Technol. 78, 177 (1991)], the beta-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
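The assumed beta-PDF baseline mentioned at the end works by moment-matching a beta distribution to the scalar mean and variance. A minimal Python sketch of that moment inversion (illustrative only; this is the textbook beta moment relation, not the paper's beta-EQMOM code):

```python
def beta_params(mean, var):
    """Moment-match a beta PDF on [0, 1] to a given scalar mean and variance.

    Uses the standard beta-distribution relations
        mean = a / (a + b),   var = a*b / ((a+b)^2 * (a+b+1)),
    valid when 0 < mean < 1 and 0 < var < mean*(1 - mean).
    """
    if not 0.0 < mean < 1.0:
        raise ValueError("mean must lie in (0, 1)")
    if not 0.0 < var < mean * (1.0 - mean):
        raise ValueError("variance must lie in (0, mean*(1-mean))")
    # Common factor nu = a + b from inverting the moment relations.
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu

def beta_moments(a, b):
    """Recover mean and variance from the shape parameters (consistency check)."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1.0))
    return mean, var
```

For a symmetric scalar field with mean 0.5 and variance 0.05 this gives a = b = 2; as mixing proceeds and the variance decays, the matched PDF sharpens toward a delta at the mean, which is the shape evolution the FP/beta-EQMOM comparison in the abstract probes more accurately.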