Topologically Massive Gauge Theories and their Dual Factorised Gauge Invariant Formulation
There exists a well-known duality between the Maxwell-Chern-Simons theory and
the self-dual massive model in 2+1 dimensions. This dual description has been
extended to topologically massive gauge theories (TMGT) in any dimension. This
Letter introduces an unconventional approach to the construction of this type
of duality through a reparametrisation of the master theory action. The dual
action thereby obtained preserves the same gauge symmetry structure as the
original theory. Furthermore, the dual action is factorised into a propagating
sector of massive gauge invariant variables and a sector with gauge variant
variables defining a pure topological field theory. Combining results obtained
within the Lagrangian and Hamiltonian formulations, a new, complete structure for a gauge invariant dual factorisation of TMGT is thus achieved.
Comment: 1+7 pages, no figures
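For context, the prototype duality invoked above is the standard one between the Maxwell-Chern-Simons (MCS) action and the self-dual (SD) massive model in 2+1 dimensions. A minimal sketch, with mass parameter $\mu$ and normalisations chosen here for illustration rather than taken from the Letter:

$$ \mathcal{L}_{\mathrm{MCS}} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \tfrac{\mu}{2}\,\epsilon^{\mu\nu\rho} A_\mu \partial_\nu A_\rho, \qquad \mathcal{L}_{\mathrm{SD}} = \tfrac{\mu^2}{2}\, f_\mu f^\mu - \tfrac{\mu}{2}\,\epsilon^{\mu\nu\rho} f_\mu \partial_\nu f_\rho . $$

Both actions propagate a single massive spin-1 mode of mass $\mu$, but only the MCS side is gauge invariant; a factorised dual formulation of the kind described above is what reconciles the two descriptions while preserving the gauge symmetry structure.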
U_e(1)xU_g(1) Actions in 2+1-Dimensions: Full Vectorial Electric and Magnetic Fields
A dimensional reduction of U_e(1)xU_g(1) electromagnetism in 3+1 dimensions, with a gauge field (photon) and a pseudo-vector gauge field (pseudo-photon), to 2+1 dimensions is considered. In the absence of boundary effects, the quantum structure is maintained, while when boundary effects are considered, as previously studied, a cross Chern-Simons term between the two gauge fields is present, which accounts for topological effects and changes the quantum structure of the theory. Our construction keeps the dimensionally reduced action invariant under parity (P) and time inversion (T). We show that the theory has two massive degrees of freedom, corresponding to the longitudinal modes of the photon and of the pseudo-photon, and briefly discuss the quantization procedures of the theory in the topological limit (wave functional quantization) and in the perturbative limit (an effective dynamical current theory), pointing out directions for solving the constraints and dealing with the negative energy contributions from pseudo-photons. We recall that the physical interpretation of the fields in the planar system is new and is only meaningful in the context of U_e(1)xU_g(1) electromagnetism. In this work it is shown that all six electromagnetic vector field components are present in the dimensionally reduced theory and that, independently of the embedding of the planar system, they can be described in terms of the two gauge fields only. As far as the author is aware, this is the first time such a construction has been fully justified, thus allowing a full vectorial treatment of electromagnetism in planar systems at the variational level.
Comment: 8 pages, 1 figure
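As an illustration of the boundary-induced coupling described above, a cross (BF-type) Chern-Simons term between the photon $A_\mu$ and the pseudo-photon $C_\mu$ in 2+1 dimensions takes the generic form, with a coefficient $k$ assumed here for illustration rather than taken from the paper:

$$ \mathcal{L}_{\times} = \frac{k}{2\pi}\,\epsilon^{\mu\nu\rho} A_\mu \partial_\nu C_\rho . $$

A term of this form ties the flux of each gauge field to the charge of the other, which is the mechanism by which it alters the topological sectors and hence the quantum structure of the theory.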
Impact of the spotted microarray preprocessing method on fold-change compression and variance stability
BACKGROUND:
The standard approach for preprocessing spotted microarray data is to subtract the local background intensity from the spot foreground intensity, to apply a log2 transformation and to normalize the data with a global median or a lowess normalization. Although well motivated, the standard approaches to background correction and to transformation have been widely criticized because they produce high variance at low intensities. Although various alternatives to the standard background correction methods and to the log2 transformation have been proposed, the impacts of these two successive preprocessing steps have not been compared objectively.
RESULTS:
In this study, we assessed the impact of eight preprocessing methods combining four background correction methods and two transformations (the log2 and the glog), using data from the MAQC study. The results indicate that most preprocessing methods produce fold-change compression at low intensities. Fold-change compression was minimized using the Standard and the Edwards background correction methods coupled with a log2 transformation. The drawback of both methods is a high variance at low intensities, which consequently produced poor estimates of the p-values. On the other hand, effective stabilization of the variance as well as better estimates of the p-values were observed after the glog transformation.
CONCLUSION:
As both fold-change magnitudes and p-values are important in the context of microarray class comparison studies, we recommend combining the Edwards correction with a hybrid transformation method that uses the log2 transformation to estimate fold-change magnitudes and the glog transformation to estimate p-values.
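As a concrete illustration of the glog transformation discussed above, here is a minimal sketch in Python; the tuning constant c and the intensity values are hypothetical (in practice c is estimated from the data, e.g. by a Durbin-Rocke-type procedure):

import numpy as np

def glog2(x, c):
    # Generalised log (glog), base 2: behaves like log2(x) for x >> c,
    # but stays finite and nearly linear near zero, which stabilises
    # the variance at low intensities.
    return np.log2((x + np.sqrt(x**2 + c**2)) / 2.0)

# Hypothetical background-corrected intensities, from low to high signal.
x = np.array([2.0, 20.0, 200.0, 2000.0])
c = 50.0  # tuning constant; assumed here, normally estimated from the data

print(np.log2(x))   # standard transform: unstable variance at low x
print(glog2(x, c))  # variance-stabilised alternative

The hybrid strategy recommended in the conclusion would report fold changes from the log2 values while computing test statistics and p-values on the glog values.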
The transfer of an LC-UV method for the determination of fenofibrate and fenofibric acid in Lidoses: Use of total error as decision criterion
Two new statistical approaches to assess the validity of the transfer of an LC-UV method for the determination of fenofibrate and fenofibric acid were investigated and compared with the conventional approaches generally used in this domain. These new approaches, namely the Tolerance Interval and the Risk approaches, are based on the simultaneous evaluation of the systematic (or trueness) and random (or precision) errors of the transfer in a single criterion called total error (or accuracy). The results of the transfer showed that only the total-error-based approaches fulfilled the objective of an analytical method transfer, i.e. to guarantee that each future measurement made by the receiving laboratory will be close enough to the true value of the analyte in the sample. Furthermore, the Risk approach was the most powerful one and allowed the estimation of the risk of obtaining future measurements out of specification in the receiving laboratory, making it a risk management tool.
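A minimal sketch of the tolerance-interval idea behind the total-error criterion, assuming a simplified one-level design (the published approach uses an intermediate-precision variance-component model; the acceptance limit and error values below are hypothetical):

import numpy as np
from scipy import stats

def beta_expectation_interval(relative_errors, beta=0.95):
    # Two-sided beta-expectation tolerance interval on the relative
    # errors (%) observed in the receiving laboratory: the mean bias
    # (trueness) widened by the spread (precision), i.e. total error.
    n = len(relative_errors)
    bias = np.mean(relative_errors)
    s = np.std(relative_errors, ddof=1)
    k = stats.t.ppf((1 + beta) / 2, df=n - 1) * np.sqrt(1 + 1 / n)
    return bias - k * s, bias + k * s

# Hypothetical relative errors (%) measured on transfer samples.
errors = [-1.2, 0.8, 1.5, -0.3, 0.9, 1.1]
low, high = beta_expectation_interval(errors)
lam = 5.0  # acceptance limit (+/- 5 %), an assumption for illustration
print("transfer accepted:", (-lam < low) and (high < lam))

The transfer is accepted when the whole tolerance interval lies inside the acceptance limits, which is what guarantees that future individual measurements will be close enough to the true value.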
A HIGH SENSITIVITY SHORT BASELINE EXPERIMENT TO SEARCH FOR MUON-NEUTRINO ---> TAU-NEUTRINO OSCILLATION
CERN-SPSC-97-05, CERN-SPSC-I-213
(The TOSCA Collaboration)
TOP: A PROTOTYPE FOR THE TOSCA EXPERIMENT
CERN-SPSC-98-02, CERN-SPSC-98-2, CERN-SPSC-M-604
(The TOP Collaboration)
Addition of clopidogrel to aspirin and fibrinolytic therapy for myocardial infarction with ST-segment elevation
BACKGROUND:
A substantial proportion of patients receiving fibrinolytic therapy for myocardial infarction with ST-segment elevation have inadequate reperfusion or reocclusion of the infarct-related artery, leading to an increased risk of complications and death.
METHODS:
We enrolled 3491 patients, 18 to 75 years of age, who presented within 12 hours after the onset of an ST-elevation myocardial infarction and randomly assigned them to receive clopidogrel (300-mg loading dose, followed by 75 mg once daily) or placebo. Patients received a fibrinolytic agent, aspirin, and, when appropriate, heparin (dispensed according to body weight) and were scheduled to undergo angiography 48 to 192 hours after the start of study medication. The primary efficacy end point was a composite of an occluded infarct-related artery (defined by a Thrombolysis in Myocardial Infarction flow grade of 0 or 1) on angiography or death or recurrent myocardial infarction before angiography.
RESULTS:
The rates of the primary efficacy end point were 21.7 percent in the placebo group and 15.0 percent in the clopidogrel group, representing an absolute reduction of 6.7 percentage points in the rate and a 36 percent reduction in the odds of the end point with clopidogrel therapy (95 percent confidence interval, 24 to 47 percent; P<0.001). By 30 days, clopidogrel therapy reduced the odds of the composite end point of death from cardiovascular causes, recurrent myocardial infarction, or recurrent ischemia leading to the need for urgent revascularization by 20 percent (from 14.1 to 11.6 percent, P=0.03). The rates of major bleeding and intracranial hemorrhage were similar in the two groups.
CONCLUSIONS:
In patients 75 years of age or younger who have myocardial infarction with ST-segment elevation and who receive aspirin and a standard fibrinolytic regimen, the addition of clopidogrel improves the patency rate of the infarct-related artery and reduces ischemic complications.
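For readers who want to check the arithmetic, the reported 36 percent odds reduction follows directly from the two primary end point rates; a back-of-the-envelope check (ignoring the covariate adjustment used in the trial):

# Primary efficacy end point rates reported above.
p_placebo, p_clopidogrel = 0.217, 0.150

odds_placebo = p_placebo / (1 - p_placebo)              # ~0.277
odds_clopidogrel = p_clopidogrel / (1 - p_clopidogrel)  # ~0.176
odds_ratio = odds_clopidogrel / odds_placebo            # ~0.64

print(f"odds reduction: {1 - odds_ratio:.0%}")  # prints ~36%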