Histone deacetylases in RA: epigenetics and epiphenomena
Reduced synovial expression of histone deacetylases (HDACs) is proposed to contribute to pathology in rheumatoid arthritis (RA) by enhancing histone-dependent access of transcription factors to promoters of inflammatory genes. In the previous issue of Arthritis Research & Therapy, Kawabata and colleagues provided independent evidence that HDAC activity is increased in the synovium and fibroblast-like synoviocytes (FLSs) of patients with RA and is paralleled by increased HDAC1 expression and synovial tumor necrosis factor-alpha (TNFα) production. Remarkably, stimulation of RA FLSs with TNFα specifically increases HDAC activity and HDAC1 expression, suggesting that changes in synovial HDAC activity and expression may be secondary to local inflammatory status.
Precision on leptonic mixing parameters at future neutrino oscillation experiments
We perform a comparison of the different future neutrino oscillation
experiments based on the achievable precision in the determination of the
fundamental parameters theta_{13} and the CP phase, delta, assuming that
theta_{13} is in the range indicated by the recent Daya Bay measurement. We
study the non-trivial dependence of the error on delta on its true value. When
matter effects are small, the largest error is found at the points where CP
violation is maximal, and the smallest at the CP conserving points. The
situation is different when matter effects are sizable. As a result of this
effect, the comparison of the physics reach of different experiments on the
basis of the CP discovery potential, as usually done, can be misleading. We
have compared various proposed super-beam, beta-beam and neutrino factory
setups on the basis of the relative precision of theta_{13} and the error on
delta. Neutrino factories, both high-energy and low-energy, outperform
alternative beam technologies. An ultimate precision on theta_{13} below 3% and
an error on delta of < 7^{\circ} at 1 sigma (1 d.o.f.) can be obtained at a
neutrino factory.
Comment: Minor changes, matches version accepted in JHEP. 30 pages, 9 figures.
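As a rough illustration of why the error on delta depends on its true value, the sketch below (not taken from the paper) computes the exact three-flavour vacuum probability P(nu_mu -> nu_e) as a function of delta together with its numerical derivative, a crude proxy for the statistical sensitivity; the baseline, energy, and oscillation parameters are illustrative assumptions, and matter effects are neglected.

```python
# Hedged sketch: vacuum P(nu_mu -> nu_e) versus the CP phase delta.
# All numerical inputs are illustrative assumptions, not the paper's setups.
import numpy as np
from scipy.linalg import expm

def pmns(th12, th13, th23, delta):
    """PMNS matrix in the standard parameterisation, U = R23 * U13(delta) * R12."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    r23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    u13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    r12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return r23 @ u13 @ r12

def prob_mu_to_e(E_GeV, L_km, delta, th12=0.583, th13=0.150, th23=0.785,
                 dm21=7.5e-5, dm31=2.5e-3):
    """Exact three-flavour vacuum probability nu_mu -> nu_e (flavour order e, mu, tau)."""
    U = pmns(th12, th13, th23, delta)
    H = U @ np.diag([0.0, dm21, dm31]).astype(complex) @ U.conj().T   # eV^2
    # 2.534 converts Delta m^2 [eV^2] * L [km] / (2 E [GeV]) into radians
    S = expm(-1j * H * 2.534 * L_km / E_GeV)
    return abs(S[0, 1]) ** 2

deltas = np.linspace(-np.pi, np.pi, 181)
P = np.array([prob_mu_to_e(2.6, 1300.0, d) for d in deltas])   # near 1st oscillation maximum
dP = np.gradient(P, deltas)
# Values of delta where |dP/d delta| is small are where delta is hardest to pin down.
print("least sensitive near delta =", round(np.degrees(deltas[np.argmin(np.abs(dP))])), "deg")
```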
EquiFACS: the Equine Facial Action Coding System
Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability with which others were able to learn this system (EquiFACS) and consistently code behavioural sequences was high, and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.
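FACS-style reliability is commonly reported as an agreement index between two coders over the action units (AUs) scored in the same clip. The snippet below is a minimal sketch of that kind of index (twice the number of shared AUs divided by the total number scored by both coders); the AU codes used are hypothetical placeholders, not codes quoted from EquiFACS.

```python
# Hedged sketch of a FACS-style inter-coder agreement index for one clip.
def agreement_index(coder_a, coder_b):
    a, b = set(coder_a), set(coder_b)
    if not a and not b:
        return 1.0                     # nothing scored by either coder
    return 2 * len(a & b) / (len(a) + len(b))

# Example with placeholder action-unit codes: two of three codes agree -> 0.67.
print(agreement_index({"AU101", "AU145", "AD19"}, {"AU101", "AU145", "AD38"}))
```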
Multidimensional cluster states using a single spin-photon interface coupled strongly to an intrinsic nuclear register
Photonic cluster states are a powerful resource for measurement-based quantum
computing and loss-tolerant quantum communication. Proposals to generate
multi-dimensional lattice cluster states have identified coupled spin-photon
interfaces, spin-ancilla systems, and optical feedback mechanisms as potential
schemes. Following these, we propose the generation of multi-dimensional
lattice cluster states using a single, efficient spin-photon interface coupled
strongly to a nuclear register. Our scheme makes use of the contact hyperfine
interaction to enable universal quantum gates between the interface spin and a
local nuclear register and funnels the resulting entanglement to photons via
the spin-photon interface. Among several quantum emitters, we identify the
silicon-29 vacancy centre in diamond, coupled to a nanophotonic structure, as
possessing the right combination of optical quality and spin coherence for this
scheme. We show numerically that using this system a 2x5-sized cluster state
with a lower-bound fidelity of 0.5 and a repetition rate of 65 kHz is achievable
with currently realised experimental performance and feasible technical
overhead. Realistic gate improvements put 100-photon cluster states within
experimental reach.
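For a rough feel for the quoted figures (a 10-photon 2x5 cluster with lower-bound fidelity 0.5, and prospective 100-photon states), a back-of-envelope sketch under a simple multiplicative error model; the model is an assumption made for illustration, not the paper's error analysis.

```python
# If the N-photon fidelity scaled roughly as F_N ~ f**N for an effective
# per-photon fidelity f (an assumed toy model), what f keeps F_N above 0.5?
for n_photons in (10, 100):
    f_min = 0.5 ** (1.0 / n_photons)
    print(f"{n_photons:3d} photons: effective per-photon fidelity >= {f_min:.4f}")
```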
Mass hierarchy discrimination with atmospheric neutrinos in large volume ice/water Cherenkov detectors
Large mass ice/water Cherenkov experiments, optimized to detect low energy
(1-20 GeV) atmospheric neutrinos, have the potential to discriminate between
normal and inverted neutrino mass hierarchies. The sensitivity depends on
several model and detector parameters, such as the neutrino flux profile and
normalization, the Earth density profile, the oscillation parameter
uncertainties, and the detector effective mass and resolution. A proper
evaluation of the mass hierarchy discrimination power requires a robust
statistical approach. In this work, a toy Monte Carlo based on an extended
unbinned likelihood ratio test statistic was used. The effect of each model
and detector parameter, as well as the required detector exposure, was then
studied. While uncertainties on the Earth density and atmospheric neutrino flux
profiles were found to have a minor impact on the mass hierarchy
discrimination, the flux normalization, as well as some of the oscillation
parameter (\Delta m^2_{31}, \theta_{13}, \theta_{23}, and \delta_{CP})
uncertainties and correlations proved critical. Finally, the minimum required
detector exposure, the optimization of the low energy threshold, and the
detector resolutions were also investigated.
Comment: 23 pages, 16 figures.
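The flavour of such a test can be reproduced with a much-simplified toy: the sketch below (a binned Poisson analogue with made-up expected rates, not the paper's extended unbinned statistic or fluxes) throws pseudo-experiments under each hierarchy and compares the resulting log-likelihood-ratio distributions.

```python
# Hedged toy-Monte-Carlo sketch of hierarchy discrimination with a binned
# Poisson log-likelihood ratio. Expected counts per bin are placeholders.
import numpy as np
rng = np.random.default_rng(1)

mu_nh = np.array([120.0, 95.0, 80.0, 60.0, 45.0])   # illustrative NH expectation
mu_ih = np.array([110.0, 100.0, 75.0, 65.0, 40.0])  # illustrative IH expectation

def llr(counts):
    """log L(NH) - log L(IH) for independent Poisson bins (constant terms cancel)."""
    return np.sum(counts * np.log(mu_nh / mu_ih)) - np.sum(mu_nh - mu_ih)

toys_nh = np.array([llr(rng.poisson(mu_nh)) for _ in range(20000)])
toys_ih = np.array([llr(rng.poisson(mu_ih)) for _ in range(20000)])

# Crude Gaussian-approximation separation of the two test-statistic distributions.
sep = (toys_nh.mean() - toys_ih.mean()) / np.sqrt(0.5 * (toys_nh.var() + toys_ih.var()))
print(f"approximate NH/IH separation: {sep:.2f} sigma")
```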
Requirements for a New Detector at the South Pole Receiving an Accelerator Neutrino Beam
There are recent considerations to increase the photomultiplier density in
the IceCube detector array beyond that of DeepCore, which would lead to a lower
detection threshold and a huge fiducial mass for neutrino detection. This
initiative is known as "Phased IceCube Next Generation Upgrade" (PINGU). We
discuss the possibility to send a neutrino beam from one of the major
accelerator laboratories in the Northern hemisphere to such a detector. Such an
experiment would be unique in the sense that it would be the only neutrino beam
where the baseline crosses the Earth's core. We study the detector requirements
for a beta beam, a neutrino factory beam, and a superbeam, where we consider
both the cases of small theta_13 and large theta_13, as suggested by the recent
T2K and Double Chooz results. We illustrate that a flavor-clean beta beam best
suits the requirements of such a detector, in particular, that PINGU may
replace a magic baseline detector for small values of theta_13 -- even in the
absence of any energy resolution capability. For large theta_13, however, a
single-baseline beta beam experiment cannot compete if it is constrained by the
CERN-SPS. For a neutrino factory, a very good energy resolution is required
because the detector lacks charge identification. If this can be achieved, a
low energy neutrino factory in particular, which does not suffer from tau
contamination, may be an interesting option for large
theta_13. For the superbeam, where we use the LBNE beam as a reference,
electron neutrino flavor identification and statistics are two of the main
limitations. Finally, we demonstrate that, at least in principle, neutrino
factory and superbeam can measure the density of the Earth's core to the
sub-percent level for sin^2 2theta_13 larger than 0.01.
Comment: 34 pages, 15 figures. Minor changes and accepted in JHEP.
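The core-crossing geometry itself is easy to check: for a source at northern latitude phi, the chord to the South Pole subtends a central angle of 90 deg + phi, so its length is 2 R sin((90 deg + phi)/2) and its closest approach to the Earth's centre is R cos((90 deg + phi)/2). The sketch below uses CERN's latitude purely as an example source; the radii are round textbook values, not numbers from the paper.

```python
# Hedged geometry sketch: does a straight beam from a northern laboratory to the
# South Pole pass through the Earth's outer core?
import numpy as np

R_EARTH = 6371.0        # mean Earth radius, km
R_CORE = 3480.0         # approximate outer-core radius, km
lat_source_deg = 46.2   # example source latitude (roughly CERN)

theta = np.radians(90.0 + lat_source_deg)        # central angle to the South Pole
chord = 2.0 * R_EARTH * np.sin(theta / 2.0)      # straight-line baseline, km
closest = R_EARTH * np.cos(theta / 2.0)          # minimum distance to the centre, km

print(f"baseline      : {chord:7.0f} km")
print(f"closest pass  : {closest:7.0f} km from the centre")
print("crosses the outer core" if closest < R_CORE else "misses the core")
```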
Mass hierarchy, 2-3 mixing and CP-phase with Huge Atmospheric Neutrino Detectors
We explore the physics potential of multi-megaton scale ice or water
Cherenkov detectors with low (GeV-scale) threshold. Using some proposed
characteristics of the PINGU detector setup we compute the distributions of
events in neutrino energy and zenith angle, and study their dependence on the
yet unknown neutrino parameters. We identify the regions of the energy-zenith
angle plane where the distributions are most sensitive to the neutrino mass
hierarchy, to the deviation of the 2-3 mixing from maximal mixing, and to the
CP-phase. We evaluate the significance of the measurements of the neutrino
parameters and explore how this significance depends on the accuracy with
which the neutrino energy and direction are reconstructed. The effect of
parameter degeneracies on the sensitivities is also discussed. We estimate the
characteristics of future detectors (energy and angle resolution, volume,
etc.) required for establishing the neutrino mass hierarchy with high
confidence level. We find that the hierarchy can be identified at the -- level
(depending on the reconstruction accuracies) after 5 years of PINGU operation.
Comment: 39 pages, 21 figures. Description of Fig.3 corrected.
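To see qualitatively why the reconstruction accuracy matters, the sketch below smears a two-flavour atmospheric survival probability with a Gaussian energy response of assumed fractional width; all inputs are illustrative assumptions, not the paper's detector model, and matter effects (which carry the actual hierarchy information) are omitted.

```python
# Hedged sketch: finite energy resolution washing out an oscillatory pattern.
import numpy as np
from scipy.ndimage import gaussian_filter1d

L_km = 12700.0        # roughly core-crossing baseline (assumption)
dm31 = 2.5e-3         # eV^2
E = np.linspace(2.0, 20.0, 2000)                       # GeV
P = 1.0 - np.sin(1.267 * dm31 * L_km / E) ** 2         # assumes sin^2(2 theta_23) = 1

for frac_res in (0.0, 0.1, 0.3):                       # assumed sigma_E / E
    # crude constant-width smearing in bins, standing in for the reconstruction
    sigma_bins = frac_res * E.mean() / (E[1] - E[0])
    P_sm = gaussian_filter1d(P, sigma_bins) if sigma_bins > 0 else P
    print(f"sigma_E/E = {frac_res:.1f}: oscillation depth = {P_sm.max() - P_sm.min():.2f}")
```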
Non-standard interactions versus non-unitary lepton flavor mixing at a neutrino factory
The impact of heavy mediators on neutrino oscillations is typically described
by non-standard four-fermion interactions (NSIs) or non-unitarity (NU). We
focus on leptonic dimension-six effective operators which do not produce
charged lepton flavor violation. These operators lead to particular
correlations among neutrino production, propagation, and detection non-standard
effects. We point out that these NSIs and NU phenomenologically lead, in fact,
to very similar effects for a neutrino factory, for completely different
fundamental reasons. We discuss how the parameters and probabilities are
related in this case, and compare the sensitivities. We demonstrate that the
NSIs and NU can, in principle, be distinguished for large enough effects, using
the example of non-standard effects in the -- sector, which basically
corresponds to differentiating between scalars and fermions as heavy mediators
at leading order. However, we find that a near detector at superbeams
could provide very synergistic information, since the correlation between
source and matter NSIs is broken for hadronic neutrino production, while NU is
a fundamental effect present at any experiment.
Comment: 32 pages, 5 figures. Final version published in JHEP. v3: Typo in Eq. (27) corrected.
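For orientation, these two pictures are usually parameterised as follows (standard conventions recalled here as a hedged reminder, not formulas reproduced from the paper). Matter NSIs enter as an extra term in the flavour-basis Hamiltonian,

H = \frac{1}{2E}\, U\, \mathrm{diag}(0,\ \Delta m^2_{21},\ \Delta m^2_{31})\, U^\dagger
    + \sqrt{2}\, G_F n_e
      \begin{pmatrix}
        1+\varepsilon_{ee} & \varepsilon_{e\mu} & \varepsilon_{e\tau}\\
        \varepsilon_{e\mu}^{*} & \varepsilon_{\mu\mu} & \varepsilon_{\mu\tau}\\
        \varepsilon_{e\tau}^{*} & \varepsilon_{\mu\tau}^{*} & \varepsilon_{\tau\tau}
      \end{pmatrix},

with further epsilon matrices for production and detection, whereas non-unitarity is commonly written (conventions vary) as an effective mixing matrix N = (1 + \eta)\, U with \eta a small Hermitian matrix, so that source, propagation, and detection effects are all governed by the same \eta; that shared origin is what produces the correlations discussed above.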