Measuring the Effects of Artificial Viscosity in SPH Simulations of Rotating Fluid Flows
A commonly cited drawback of SPH is the introduction of spurious shear
viscosity by the artificial viscosity term in situations involving rotation.
Existing approaches for quantifying its effect include approximate analytic
formulae and disc-averaged behaviour in specific ring-spreading simulations,
based on the kinematic effects produced by the artificial viscosity. These
methods have disadvantages: they are typically applicable to only a narrow
range of physical scenarios, rest on many simplifying
assumptions, and are often tied to specific SPH formulations which do not
include corrective (e.g., Balsara) or time-dependent viscosity terms. In this
study we have developed a simple, generally applicable and practical technique
for evaluating the local effect of artificial viscosity directly from the
creation of specific entropy for each SPH particle. This local approach is
simple and quick to implement, and it allows a detailed characterization of
viscous effects as a function of position. Several advantages of this method
are discussed, including its ease in evaluation, its greater accuracy and its
broad applicability. In order to compare this new method with existing ones,
simple disc flow examples are used. Even in these basic cases, the very roughly
approximate nature of the previous methods is shown. Our local method provides
a detailed description of the effects of the artificial viscosity
throughout the disc, even for extended examples which implement Balsara
corrections. As a further use of this approach, explicit dependencies of the
effective viscosity in terms of SPH and flow parameters are estimated from the
example cases. In an appendix, a method for the initial placement of SPH
particles is discussed which is very effective in reducing numerical
fluctuations.
Comment: 15 pages, 9 figures, resubmitted to MNRAS
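The entropy-based measure described in this abstract can be illustrated with a minimal sketch. All names here are hypothetical, and the dissipation relation used (viscous heating per unit mass equals T ds/dt, with heating modelled as nu_eff times the squared shear rate) is an illustrative assumption, not the authors' actual formulation:

```python
import numpy as np

def effective_viscosity(temperature, ds_dt, shear_rate_sq):
    """Estimate a per-particle effective kinematic viscosity.

    Illustrative assumption: viscous dissipation per unit mass is
    T * ds/dt (specific entropy creation rate), and that dissipation
    equals nu_eff * |shear rate|^2.
    """
    shear_rate_sq = np.asarray(shear_rate_sq, dtype=float)
    ds_dt = np.asarray(ds_dt, dtype=float)
    return np.where(shear_rate_sq > 0,
                    np.asarray(temperature) * ds_dt / shear_rate_sq,
                    0.0)

# Toy per-particle data: temperature, entropy creation rate, |shear|^2
T = np.array([1.0, 2.0, 1.5])
ds_dt = np.array([0.02, 0.01, 0.0])
shear2 = np.array([0.5, 0.5, 0.5])
nu = effective_viscosity(T, ds_dt, shear2)
```

Because the estimate is purely local (one value per particle), it yields effective viscosity as a function of position, which is the key advantage over disc-averaged diagnostics.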
Aortic Coarctation: Recent Developments in Experimental and Computational Methods to Assess Treatments for this Simple Condition
Coarctation of the aorta (CoA) is often considered a relatively simple disease, but long-term outcomes suggest otherwise, as life expectancies are decades less than in the average population and substantial morbidity often exists. What follows is an expanded version of collective work conducted by the authors and numerous collaborators that was presented at the 1st International Conference on Computational Simulation in Congenital Heart Disease pertaining to recent advances for CoA. The work begins by focusing on what is known about blood flow, pressure and indices of wall shear stress (WSS) in patients with normal vascular anatomy from both clinical imaging and the use of computational fluid dynamics (CFD) techniques. Hemodynamic alterations observed in CFD studies from untreated CoA patients and those undergoing surgical or interventional treatment are subsequently discussed. The impact of surgical approach, stent design and valve morphology are also presented for these patient populations. Finally, recent work from a representative experimental animal model of CoA that may offer insight into proposed mechanisms of long-term morbidity in CoA is presented.
Form discrimination in young children and the concept of similarity
A brief account is given of differentiation theory as it relates
to form discrimination studies, and relevant experimental work is
reviewed. Some preliminary comments on the concept of similarity,
arising out of this and pertaining to the extent to which children can
be said to possess an adequate grasp of what is implied by "the same
as," are made as an introduction to the first experiment. This
examines the performance of a group of children twice, at mean ages
3-8 and 4-9, in matching-from-sample discrimination tasks with
stimulus material varying in orientation only, and in form and
orientation. Considerable improvement in terms of increased number of
correct responses is shown. A sequence of three stages is described.
In the first, characterised by a small number of correct responses and a
small number of multiple responses (MR), that is responses where more
than one comparison figure are matched to the standard, performance is
affected as much by extraneous factors such as position of a figure in the
comparison array as by stimulus features. The second stage shows an
increase in both MR and number of correct matches; here global features
of similarity appear to be detected and used. The final stage,
with most responses being correct and few MRs being given, represents
the most competent level. It is indicated that failure to detect stimulus
features, rather than the presence of a deviant notion of the meaning of
"the same as," is responsible for the error patterns shown.
The concept of similarity is then examined in greater detail.
A number of distinctions are drawn, in particular, between the
specification of similarity relations between members of a stimulus
set in terms of its attribute structure and the perceived similarity of
the same set as expressed by subjects' judgements. The importance
of providing a normative model as a baseline for the assessment of
performance is emphasised. A number of ways of specifying stimuli
are described, together with methods of analysing similarities data.
The use of a model to link perceived and physical structure, and thus
to give some indication of the processes underlying discrimination
performance, is considered. A set of experiments is then described
which embody these ideas, and it is shown that even in four-year-old
children errors in a matching-from-sample task systematically reflect
features of the whole stimulus set. It is also shown that this does not hold
with pair comparison presentation. The capacity of children for
redefining the attribute structure of stimulus sets is brought out. It is
concluded that much more attention should be given to stimulus
specification in formal terms if the processes involved in form discrimination
are to be elucidated, as opposed to the discriminability of a particular
set of stimuli under particular circumstances.
Renormalization in Coulomb gauge QCD
In the Coulomb gauge of QCD, the Hamiltonian contains a non-linear Christ-Lee
term, which may alternatively be derived from a careful treatment of ambiguous
Feynman integrals at 2-loop order. We investigate how and if UV divergences
from higher order graphs can be consistently absorbed by renormalization of the
Christ-Lee term. We find that they cannot.
Comment: 23 pages, 26 figures
Energy Dependence of Scattering Ground State Polar Molecules
We explore the total cross section of ground state polar molecules in an
electric field at various energies, focusing on RbCs and RbK. An external
electric field polarizes the molecules and induces strong dipolar interactions
leading to non-zero partial waves contributing to the scattering even as the
collision energy goes to zero. This results in the need to compute scattering
problems with many different values of total M to converge the total cross
section. An accurate and efficient approximate total cross section is
introduced and used to study the low field temperature dependence. To
understand the scattering of the polar molecules we compare a semi-classical
cross section with quantum unitarity limit. This comparison leads to the
ability to characterize the scattering based on the value of the electric field
and the collision energy.
Comment: Accepted to PRA, 10 pages, 5 figures
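The quantum unitarity limit invoked in this abstract caps each partial-wave cross section at 4*pi*(2l+1)/k^2. A minimal sketch of that bound (function name and the toy values are illustrative, not the authors' calculation):

```python
import numpy as np

def unitarity_limit(k, l_max):
    """Sum of partial-wave unitarity bounds sigma_l = 4*pi*(2l+1)/k**2,
    for partial waves l = 0..l_max, at wavenumber k."""
    l = np.arange(l_max + 1)
    return 4 * np.pi * np.sum(2 * l + 1) / k**2

# With k = 1 and partial waves up to l = 2 the bound is
# 4*pi*(1 + 3 + 5) = 36*pi.
sigma = unitarity_limit(k=1.0, l_max=2)
```

Since k is proportional to the square root of the collision energy, the bound grows without limit as the energy goes to zero, which is why many partial waves and the comparison against a semi-classical estimate matter at low energy.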
Inverse Scattering and Acousto-Optic Imaging
We propose a tomographic method to reconstruct the optical properties of a
highly-scattering medium from incoherent acousto-optic measurements. The method
is based on the solution to an inverse problem for the diffusion equation and
makes use of the principle of interior control of boundary measurements by an
external wave field.
Comment: 10 pages
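The forward model behind this kind of inverse problem is a diffusion equation. As an illustrative sketch only (a steady 1D toy, not the authors' method), the boundary-value problem -D u'' + mu_a u = 0 can be solved by central finite differences; in an inversion one would fit the coefficients D and mu_a to boundary measurements:

```python
import numpy as np

def solve_diffusion_1d(D, mu_a, n=50):
    """Solve -D u'' + mu_a u = 0 on (0, 1) with u(0) = 1, u(1) = 0,
    using central finite differences on n interior nodes."""
    h = 1.0 / (n + 1)
    main = 2 * D / h**2 + mu_a          # diagonal entries
    off = -D / h**2                     # off-diagonal entries
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    b = np.zeros(n)
    b[0] = -off * 1.0                   # contribution of u(0) = 1
    return np.linalg.solve(A, b)

# With no absorption the exact solution is linear, u(x) = 1 - x.
u = solve_diffusion_1d(D=1.0, mu_a=0.0)
```

With mu_a = 0 the scheme reproduces the linear profile exactly; turning on absorption makes u decay faster near the illuminated boundary, which is the sensitivity an inversion exploits.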