On the stability of travelling waves with vorticity obtained by minimisation
We modify the approach of Burton and Toland [Comm. Pure Appl. Math. (2011)]
for showing the existence of periodic surface water waves with vorticity, so
that it becomes suited to a stability analysis. This is achieved by enlarging
the function space to a class of stream functions that do not correspond
necessarily to travelling profiles. In particular, for smooth profiles and
smooth stream functions, the normal component of the velocity field at the free
boundary is not required a priori to vanish in some Galilean coordinate system.
Travelling periodic waves are obtained by a direct minimisation of a functional
that corresponds to the total energy and that is therefore preserved by the
time-dependent evolutionary problem (this minimisation appears in Burton and
Toland after a first maximisation). In addition, we not only use the
circulation along the upper boundary as a constraint, but also the total
horizontal impulse (the velocity becoming a Lagrange multiplier). This allows
us to preclude parallel flows by an appropriate choice of the values of these
two constraints and of the sign of the vorticity. By stability, we mean conditional
energetic stability of the set of minimisers as a whole, the perturbations
being spatially periodic with a given period.

Comment: NoDEA Nonlinear Differential Equations and Applications, to appear
Non-invasive laminar inference with MEG: comparison of methods and source inversion algorithms
Magnetoencephalography (MEG) is a direct measure of neuronal current flow; its anatomical resolution is therefore constrained not by physiology but by data quality and the models used to explain these data. Recent simulation work has shown that it is possible to distinguish between signals arising in the deep and superficial cortical laminae, given accurate knowledge of these surfaces with respect to the MEG sensors. This previous work focused on a single inversion scheme (multiple sparse priors) and a single global parametric fit metric (free energy). In this paper we use several different source inversion algorithms, and both local and global as well as parametric and non-parametric fit metrics, to demonstrate the robustness of the discrimination between layers. We find that only algorithms with some sparsity constraint can successfully be used for laminar discrimination. Importantly, local t-statistics, global cross-validation, and free energy all provide robust and mutually corroborating metrics of fit. We show that discrimination accuracy is affected by patch-size estimates, cortical surface features, and lead-field strength, which suggests several possible future improvements to this technique. This study demonstrates the possibility of determining the laminar origin of MEG sensor activity, and thus of directly testing theories of human cognition that involve laminar- and frequency-specific mechanisms. This can now be achieved using recent developments in high-precision MEG, most notably subject-specific head-casts, which allow significant increases in data quality and therefore anatomically precise MEG recordings.
Testing Lorentz invariance of dark matter
We study the possibility of constraining deviations from Lorentz invariance in
dark matter (DM) with cosmological observations. Breaking of Lorentz invariance
generically introduces new light gravitational degrees of freedom, which we
represent through a dynamical timelike vector field. If DM does not obey
Lorentz invariance, it couples to this vector field. We find that this coupling
affects the inertial mass of small DM halos, which then no longer satisfy the
equivalence principle. For large enough lumps of DM we identify a (chameleon)
mechanism that restores the inertial mass to its standard value. As a
consequence, the dynamics of gravitational clustering are modified. Two
prominent effects are a scale dependent enhancement in the growth of large
scale structure and a scale dependent bias between DM and baryon density
perturbations. The comparison with the measured linear matter power spectrum
in principle allows the departure of DM from Lorentz invariance to be bounded
at the per-cent level.

Comment: 42 pages, 9 figures
Recurrent measurement of frailty is important for mortality prediction: findings from the North West Adelaide Health Study
OBJECTIVES: Frailty places individuals at greater risk of adverse health outcomes. However, it is a dynamic condition and may not always lead to decline. Our objective was to determine the relationship between frailty status (at baseline and follow-up) and mortality using both the frailty phenotype (FP) and frailty index (FI). DESIGN: Population-based cohort. SETTING: Community-dwelling older adults. PARTICIPANTS: A total of 909 individuals aged 65 years or older (55% female), mean age 74.4 (SD 6.2) years, had frailty measured at baseline. Overall, 549 participants had frailty measured at two time points. MEASUREMENTS: Frailty was measured using the FP and FI, with a mean of 4.5 years between baseline and follow-up. Mortality was matched to official death records with a minimum of 10 years of follow-up. RESULTS: For both measures, baseline frailty was a significant predictor of mortality up to 10 years, with initially good predictive ability (area under the curve [AUC] = .8-.9) decreasing over time. Repeated measurement at follow-up resulted in good prediction, compared with the lower (AUC = .6-.7) discrimination of equivalent baseline frailty status. In a multivariable model, frailty measured at follow-up was a stronger predictor of mortality than frailty measured at baseline. For the continuous FI, change in frailty was a significant predictor of decreased or increased mortality risk, corresponding to improvement or worsening of the score (hazard ratio = 1.04; 95% confidence interval = 1.02-1.07; P = .001). CONCLUSIONS: Frailty measurement is a good predictor of mortality up to 10 years; however, recency of measurement is important for improved prediction. A regular review of frailty status is required in older adults.

Mark Q. Thompson, Olga Theou, Graeme R. Tucker, Robert J. Adams, and Renuka Visvanathan
Prevention is better than cure, but...: Preventive medication as a risk to ordinariness?
Preventive health remains at the forefront of public health concerns; recent initiatives, such as the NHS health check, may lead to recommendations for medication in response to the identification of 'at risk' individuals. Little is known about lay views of preventive medication. This paper uses the case of aspirin as a prophylactic against heart disease to explore views among people invited to screening for a trial investigating the efficacy of such an approach. Qualitative interviews (N=46) and focus groups (N=5; 31 participants) revealed dilemmas about preventive medication in the form of clashes between norms: first, in general terms, assumptions about the benefit of prevention were complicated by a dislike of medication; second, the individual duty to engage in prevention was complicated by the need not to be over-involved with one's own health; third, the potential appeal of this alternative approach to health promotion was complicated by unease about the implications of encouraging irresponsible behaviour among others. Though respondents made different decisions about using the drug, they reported very similar ways of trying to resolve these conflicts, drawing upon concepts of necessity and legitimisation and the special ordinariness of the particular drug.
Non-paraxial Split-step Finite-difference Method for Beam Propagation
A method based on symmetrized splitting of the propagation operator in a finite-difference scheme for non-paraxial beam propagation is presented. The formulation allows the solution of the second-order scalar wave equation without having to make the slowly varying envelope and one-way propagation approximations. The method is highly accurate and numerically efficient. Unlike most Padé-approximant-based methods, it is non-iterative in nature and requires less computation. The method can be used for bi-directional propagation as well.
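The symmetrized operator splitting at the heart of this abstract can be illustrated with a minimal sketch. For simplicity the sketch applies Strang (symmetrized) splitting to the paraxial beam equation with an FFT-based diffraction step, purely to show the half-step/full-step/half-step structure; it is not the authors' non-paraxial finite-difference scheme, and all parameters (grid, wavelength, beam waist) are arbitrary illustrative choices.

```python
import numpy as np

def strang_step(E, dz, kx, k0, phase):
    """One symmetrized split step: half diffraction, full phase, half diffraction."""
    half_diff = np.exp(-1j * kx**2 * dz / (4.0 * k0))  # half-step, Fourier space
    E = np.fft.ifft(half_diff * np.fft.fft(E))
    E = E * np.exp(1j * phase * dz)                    # full refractive-index phase step
    return np.fft.ifft(half_diff * np.fft.fft(E))

N, L = 512, 200.0                                 # grid points, transverse window
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
kx = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # transverse wavenumbers
k0 = 2.0 * np.pi                                  # wavenumber for unit wavelength

E = np.exp(-(x / 10.0) ** 2).astype(complex)      # Gaussian input beam
P0 = np.sum(np.abs(E) ** 2)                       # input beam power

for _ in range(100):                              # free-space propagation, 100 steps
    E = strang_step(E, dz=1.0, kx=kx, k0=k0, phase=np.zeros(N))

# Every factor in the splitting has unit modulus, so beam power is conserved.
power_error = abs(np.sum(np.abs(E) ** 2) / P0 - 1.0)
```

The symmetrized ordering makes the scheme second-order accurate in the step size, and because each sub-step is unitary the propagation conserves beam power to machine precision, which is one reason such splittings are attractive for long propagation distances.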
Precision Measurement of the Proton and Deuteron Spin Structure Functions g2 and Asymmetries A2
We have measured the spin structure functions g2p and g2d and the virtual
photon asymmetries A2p and A2d over the kinematic range 0.02 < x < 0.8 and 0.7
< Q^2 < 20 GeV^2 by scattering 29.1 and 32.3 GeV longitudinally polarized
electrons from transversely polarized NH3 and 6LiD targets. Our measured g2
approximately follows the twist-2 Wandzura-Wilczek calculation. The twist-3
reduced matrix elements d2p and d2n are less than two standard deviations from
zero. The data are inconsistent with the Burkhardt-Cottingham sum rule if there
is no pathological behavior as x->0. The Efremov-Leader-Teryaev integral is
consistent with zero within our measured kinematic range. The absolute value of
A2 is significantly smaller than the sqrt[R(1+A1)/2] limit.

Comment: 12 pages, 4 figures, 2 tables
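For context, the twist-2 Wandzura-Wilczek expression and the Burkhardt-Cottingham sum rule referenced in this abstract are standard results expressible in terms of g1 and g2 (quoted here from the general literature, not from this paper's data):

```latex
g_2^{\mathrm{WW}}(x,Q^2) = -g_1(x,Q^2) + \int_x^1 \frac{dy}{y}\, g_1(y,Q^2),
\qquad
\int_0^1 g_2(x,Q^2)\, dx = 0 .
```

The first relation is what the measured g2 is said to approximately follow; the second is the sum rule the data are inconsistent with unless g2 behaves pathologically as x->0.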
Standard Model baryogenesis through four-fermion operators in braneworlds
We study a new baryogenesis scenario in a class of braneworld models with low
fundamental scale, which typically have difficulty with baryogenesis. The
scenario is characterized by its minimal nature: the field content is that of
the Standard Model and all interactions consistent with the gauge symmetry are
admitted. Baryon number is violated via a dimension-6 proton decay operator,
suppressed today by the mechanism of quark-lepton separation in extra
dimensions; we assume that this operator was unsuppressed in the early Universe
due to a time-dependent quark-lepton separation. The source of CP violation is
the CKM matrix, in combination with the dimension-6 operators. We find that
almost independently of cosmology, sufficient baryogenesis is nearly impossible
in such a scenario if the fundamental scale is above 100 TeV, as required by an
unsuppressed neutron-antineutron oscillation operator. The only exception
producing sufficient baryon asymmetry is a scenario involving
out-of-equilibrium c quarks interacting with equilibrium b quarks.

Comment: 39 pages, 5 figures. v2: typos, presentational changes, references and acknowledgments added
What Can WMAP Tell Us About The Very Early Universe? New Physics as an Explanation of Suppressed Large Scale Power and Running Spectral Index
The Wilkinson Microwave Anisotropy Probe microwave background data may be
giving us clues about new physics at the transition from a "stringy" epoch of
the universe to the standard Friedmann-Robertson-Walker description. Deviations
of the data on large angular scales from theoretical expectations, as well as
running of the spectral index of density perturbations, can be
explained by new physics whose scale is set by the height of an inflationary
potential. As examples of possible signatures for this new physics, we study
the cosmic microwave background spectrum for two string inspired models: 1)
modifications to the Friedmann equations and 2) velocity dependent potentials.
The suppression of low-l modes in the microwave background data arises from
the new physics. In addition, the spectral index is red (n<1) on small
scales and blue (n>1) on large scales, in agreement with the data.

Comment: 18 pages, 2 figures, submitted for publication in Physical Review D; references added in this version