Vorticity Budget of Weak Thermal Convection in Keplerian disks
By employing the equations of mean-square vorticity (enstrophy) fluctuations
in strong shear flows, we demonstrate that unlike energy production of
turbulent vorticity in nonrotating shear flows, the turbulent vorticity of weak
convection in Keplerian disks cannot gain energy from vortex stretching/tilting
by background shear unless the associated Reynolds stresses are negative. This
is because the epicyclic motion is an energy sink of the radial component of
mean-square turbulent vorticity in Keplerian disks when Reynolds stresses are
positive. Consequently, weak convection cannot be self-sustained in Keplerian
flows. This agrees with the results implied by the equations of mean-square
velocity fluctuations in strong shear flows. Our analysis also sheds light on
the explanation of the simulation result in which positive kinetic helicity is
produced by the Balbus-Hawley instability in a vertically stratified Keplerian
disk. We also comment on the possibility of outward angular momentum transport
by strong convection based on azimuthal pressure perturbations and directions
of energy cascade.
Comment: 8 pages, 1 figure, emulateapj.sty, revised version in response to
referee's comments, accepted by Ap
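For orientation, the vortex-stretching/tilting production term in a mean-square-vorticity (enstrophy) budget and the epicyclic frequency of a Keplerian disk take the standard textbook forms (schematic notation, not the paper's exact budget):
\[
P_\omega \;=\; \langle \omega_i' \omega_j' \rangle \, \frac{\partial \bar U_i}{\partial x_j},
\qquad
\kappa^2 \;=\; \frac{1}{R^3}\frac{d\,(R^2\Omega)^2}{dR}
\;\longrightarrow\; \kappa = \Omega \ \ \text{for}\ \Omega \propto R^{-3/2},
\]
so the sign of the production felt by the radial vorticity component is tied to the sign of the Reynolds stress coupling to the background Keplerian shear, as stated in the abstract.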
Anisotropic eddy viscosity models
A general discussion on the structure of the eddy viscosity tensor in anisotropic flows is presented. The systematic use of tensor symmetries and flow symmetries is shown to drastically reduce the number of independent parameters needed to describe the rank-4 eddy viscosity tensor. The possibility of using Onsager symmetries to simplify the eddy viscosity further is discussed explicitly for the axisymmetric geometry.
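As a schematic illustration of the tensorial structure discussed (generic notation, not the paper's specific parameterisation), the rank-4 eddy viscosity linearly relates the turbulent stress to the mean velocity gradient,
\[
\tau_{ij} \;=\; -\,\nu_{ijkl}\,\frac{\partial \bar U_k}{\partial x_l},
\qquad \nu_{ijkl} = \nu_{jikl},
\]
and in the fully isotropic limit it collapses to the single scalar of the usual eddy viscosity, \(\nu_{ijkl} \to \nu\,(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})\), giving \(\tau_{ij} = -2\nu \bar S_{ij}\); imposing flow symmetries such as axisymmetry removes further independent components.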
Analgesic treatment of ciguatoxin-induced cold allodynia
Ciguatera, the most common form of nonbacterial ichthyosarcotoxism, is caused by consumption of fish that have bioaccumulated the polyether sodium channel activator ciguatoxin. The neurological symptoms of ciguatera include distressing, often persistent sensory disturbances such as paraesthesias and the pathognomonic symptom of cold allodynia. We show that intracutaneous administration of ciguatoxin in humans elicits a pronounced axon-reflex flare and replicates cold allodynia. To identify compounds able to inhibit ciguatoxin-induced Na(v) responses, we developed a novel in vitro ciguatoxin assay using the human neuroblastoma cell line SH-SY5Y. Pharmacological characterisation of this assay demonstrated a major contribution of Na(v)1.2 and Na(v)1.3, but not Na(v)1.7, to ciguatoxin-induced Ca2+ responses. Clinically available Na(v) inhibitors, as well as the K(v)7 agonist flupirtine, inhibited tetrodotoxin-sensitive ciguatoxin-evoked responses. To establish their in vivo efficacy, we used a novel animal model of ciguatoxin-induced cold allodynia. However, differences in the efficacy of these compounds at reversing ciguatoxin-induced cold allodynia did not correlate with their potency to inhibit ciguatoxin-induced responses in SH-SY5Y cells or at heterologously expressed Na(v)1.3, Na(v)1.6, Na(v)1.7, or Na(v)1.8, indicating that cold allodynia might be more complex than simple activation of Na(v) channels. These findings highlight the need for suitable animal models to guide the empiric choice of analgesics, and suggest that lamotrigine and flupirtine could be potentially useful for the treatment of ciguatera.
Ensemble averaged dynamic modeling
The possibility of using the information from simultaneous equivalent Large Eddy Simulations (LES) to improve subgrid-scale modeling is investigated. An ensemble-averaged dynamic model is proposed as an alternative to the usual spatially averaged versions. It is shown to be applicable independently of the existence of any homogeneous directions, and its formulation is thus universal. The ensemble-averaged dynamic model is shown to give very encouraging results for as few as 16 simultaneous LESs.
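In standard dynamic-model notation (the authors' symbols may differ), the ensemble-averaged model simply replaces the spatial averages in the least-squares solution of the Germano identity by averages over the simultaneous realisations:
\[
C_s^2 \;=\; \frac{\langle L_{ij} M_{ij} \rangle_{\mathrm{ens}}}{\langle M_{ij} M_{ij} \rangle_{\mathrm{ens}}},
\qquad
L_{ij} \;=\; \widehat{\bar u_i \bar u_j} - \hat{\bar u}_i \hat{\bar u}_j,
\]
where the hat denotes the test filter and \(M_{ij}\) is built from the resolved strain rate at the grid and test filter levels in the usual Smagorinsky fashion. Because this average requires no homogeneous direction, the same formula applies in arbitrarily complex geometries.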
The development and piloting of the graduate assessment of preparedness for practice (GAPP) questionnaire
Introduction Most new dental graduates in the UK begin their professional career following a year in dental foundation training (DFT). There has been little investigation of how prepared they feel for independent general dental practice across all four domains of the General Dental Council's curriculum 'Preparing for practice'. This paper describes the development of the Graduate Assessment of Preparedness for Practice (GAPP) questionnaire to address this. Methodology The GAPP questionnaire was developed and piloted using a cohort of educational supervisors (ESs) and foundation dentists (FDs). The questionnaire comprised three parts, the first of which collected respondent demographic data. The second was based on 'Preparing for practice', from which 34 'competence areas' were developed, and required a tick-box response on a 7-category Likert scale. The third comprised free-text questions to further explore the subjects' responses. Results Pilot feedback was positive: the statements were felt to be clear and unambiguous, allowing respondents sufficient scope to state their position. The pilot study informed small cosmetic changes to the GAPP questionnaire and the inclusion of a 'comments' column for respondents to qualify their responses. The pilot results indicated that both FDs and their ESs felt that, at ten months of DFT, the FDs were very well prepared for independent general dental practice. Discussion The paper describes the important considerations relating to the reliability and validity of the GAPP questionnaire. Conclusions GAPP appears to be a suitable questionnaire to measure the preparedness of new graduates with a degree of reliability and validity. The instrument is designed to be simple to complete and provides a useful analytical instrument for both self-assessment of competence and for wider use within dental education.
The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe
The preponderance of matter over antimatter in the early Universe, the
dynamics of the supernova bursts that produced the heavy elements necessary for
life and whether protons eventually decay --- these mysteries at the forefront
of particle physics and astrophysics are key to understanding the early
evolution of our Universe, its current state and its eventual fate. The
Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed
plan for a world-class experiment dedicated to addressing these questions. LBNE
is conceived around three central components: (1) a new, high-intensity
neutrino source generated from a megawatt-class proton accelerator at Fermi
National Accelerator Laboratory, (2) a near neutrino detector just downstream
of the source, and (3) a massive liquid argon time-projection chamber deployed
as a far detector deep underground at the Sanford Underground Research
Facility. This facility, located at the site of the former Homestake Mine in
Lead, South Dakota, is approximately 1,300 km from the neutrino source at
Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino
charge-parity symmetry violation and mass ordering effects. This ambitious yet
cost-effective design incorporates scalability and flexibility and can
accommodate a variety of upgrades and contributions. With its exceptional
combination of experimental configuration, technical capabilities, and
potential for transformative discoveries, LBNE promises to be a vital facility
for the field of particle physics worldwide, providing physicists from around
the globe with opportunities to collaborate in a twenty to thirty year program
of exciting science. In this document we provide a comprehensive overview of
LBNE's scientific objectives, its place in the landscape of neutrino physics
worldwide, the technologies it will incorporate and the capabilities it will
possess.
Comment: Major update of previous version. This is the reference document for the
LBNE science program and current status. Chapters 1, 3, and 9 provide a
comprehensive overview of LBNE's scientific objectives, its place in the
landscape of neutrino physics worldwide, the technologies it will incorporate,
and the capabilities it will possess. 288 pages, 116 figures
A Lagrangian dynamic subgrid-scale model of turbulence
A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space
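In the usual notation for this class of model (a sketch of the scheme described above; the paper's exact discretisation may differ), the Lagrangian averages are carried as two fields advanced along pathlines by first-order Euler tracking with linear interpolation:
\[
C_s^2(\mathbf{x},t) = \frac{\mathcal{I}_{LM}}{\mathcal{I}_{MM}},
\qquad
\mathcal{I}_{LM}^{\,n+1}(\mathbf{x}) = \epsilon\,L_{ij}M_{ij} + (1-\epsilon)\,\mathcal{I}_{LM}^{\,n}\!\big(\mathbf{x}-\bar{\mathbf{u}}\,\Delta t\big),
\qquad
\epsilon = \frac{\Delta t / T}{1 + \Delta t / T},
\]
with \(\mathcal{I}_{MM}\) updated identically using \(M_{ij}M_{ij}\), and the relaxation time \(T\) chosen so that the resulting eddy viscosity remains dissipative.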
Solving large 0–1 multidimensional knapsack problems by a new simplified binary artificial fish swarm algorithm
The artificial fish swarm algorithm has recently emerged in continuous global
optimization. It uses points of a population in space to identify the position of fish in the school. Many real-world optimization problems are described by 0-1 multidimensional knapsack problems, which are NP-hard. In recent decades several exact as well as heuristic methods have been proposed for solving these problems. In this paper, a new simplified binary version of the artificial fish swarm algorithm is presented, where a point/fish is represented by a binary string of 0/1 bits. Trial points are created by using crossover and mutation in the different fish behaviors, which are randomly selected by using two user-defined probability values. In order to make the points feasible, the presented algorithm uses a random heuristic drop-item procedure followed by an add-item procedure that aims to increase the profit by adding more items to the knapsack. A cyclic reinitialization of 50% of the population, and a simple local search that allows the progress of a small percentage of points towards optimality and then refines the best point in the population, greatly improve the quality of the solutions. The presented method is tested on a set of benchmark instances, and a comparison with other methods available in the literature is shown. The comparison shows that the proposed method can be an alternative method for solving these problems.
The authors wish to thank three anonymous referees for their comments and valuable suggestions to improve the paper. The first author acknowledges Ciência 2007 of FCT (Foundation for Science and Technology) Portugal for the fellowship grant C2007-UMINHO-ALGORITMI-04. Financial support from FEDER COMPETE (Operational Programme Thematic Factors of Competitiveness) and FCT under project FCOMP-01-0124-FEDER-022674 is also acknowledged.
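The drop-item/add-item repair step can be sketched as follows; this is an illustrative reconstruction in Python rather than the authors' code, and the profit-to-weight ordering used in the add phase is an assumption.

import numpy as np

def repair(x, profits, weights, capacities, rng):
    # Repair a 0/1 point for the multidimensional knapsack problem:
    # randomly drop selected items until every constraint holds, then
    # greedily add items that still fit, to increase the profit.
    x = x.copy()
    load = weights @ x  # current load of each of the m constraints
    # Drop phase: remove randomly chosen selected items until feasible.
    while np.any(load > capacities):
        j = rng.choice(np.flatnonzero(x))
        x[j] = 0
        load -= weights[:, j]
    # Add phase: try items in decreasing profit-to-weight ratio (assumed ordering).
    ratio = profits / (weights.sum(axis=0) + 1e-12)
    for j in np.argsort(-ratio):
        if x[j] == 0 and np.all(load + weights[:, j] <= capacities):
            x[j] = 1
            load += weights[:, j]
    return x

Given arrays profits of shape (n,), weights of shape (m, n) and capacities of shape (m,), a call such as repair(x, profits, weights, capacities, np.random.default_rng(0)) returns a feasible 0/1 vector with as many profitable items retained as the constraints allow.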
Quantum Smoluchowski equation: Escape from a metastable state
We develop a quantum Smoluchowski equation in terms of a true probability
distribution function to describe quantum Brownian motion in configuration
space in the large-friction limit at arbitrary temperature, and derive the rate of
barrier crossing and tunneling within a unified scheme. The present treatment
is independent of the path-integral formalism and is based on a canonical
quantization procedure.
Comment: 10 pages, To appear in the Proceedings of Statphys - Kolkata I
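For context, the classical Smoluchowski equation that the quantum version generalises is the overdamped limit of the Fokker-Planck equation (schematic form; the quantum-corrected diffusion term derived in the paper differs in detail):
\[
\frac{\partial P(x,t)}{\partial t}
= \frac{1}{\gamma}\frac{\partial}{\partial x}
\left[ V'(x)\,P(x,t) + k_B T\,\frac{\partial P(x,t)}{\partial x} \right],
\]
with \(\gamma\) the friction coefficient and \(V(x)\) the potential. In the quantum version the bare thermal diffusion \(k_B T\) is replaced by an effective, temperature- and \(\hbar\)-dependent diffusion function, and the escape rate follows from the stationary probability flux over the barrier, connecting thermal activation and tunneling in a single expression.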
