
    Nr2e3 is a Genetic Modifier That Rescues Retinal Degeneration and Promotes Homeostasis in Multiple Models of Retinitis Pigmentosa

    Recent advances in viral vector engineering, as well as an increased understanding of the cellular and molecular mechanisms of retinal diseases, have led to the development of novel gene therapy approaches. Furthermore, ease of accessibility and ocular immune privilege make the retina an ideal target for gene therapies. In this study, the nuclear hormone receptor gene Nr2e3 was evaluated for efficacy as a broad-spectrum therapy to attenuate early to intermediate stages of retinal degeneration in five unique mouse models of retinitis pigmentosa (RP). RP is a group of heterogeneous inherited retinal diseases associated with over 150 gene mutations, affecting over 1.5 million individuals worldwide. RP varies in age of onset, severity, and rate of progression. In addition, ~40% of RP patients cannot be genetically diagnosed, confounding the ability to develop personalized RP therapies. Remarkably, Nr2e3-administered therapy resulted in reduced retinal degeneration, as observed by an increase in photoreceptor cells, an improved electroretinogram, and a dramatic molecular reset of key transcription factors and associated gene networks. These therapeutic effects improved retinal homeostasis in diseased tissue. Results of this study provide evidence that Nr2e3 can serve as a broad-spectrum therapy to treat multiple forms of RP.

    Accuracy of popular automatic QT Interval algorithms assessed by a 'Gold Standard' and comparison with a Novel method: computer simulation study

    BACKGROUND: Accurate measurement of the QT interval is very important from a clinical and pharmaceutical drug safety screening perspective. Expert manual measurement is both imprecise and imperfectly reproducible, yet it is used as the reference standard to assess the accuracy of current automatic computer algorithms, which thus produce reproducible but incorrect measurements of the QT interval. There is a scientific imperative to evaluate the most commonly used algorithms against an accurate and objective 'gold standard' and to investigate novel automatic algorithms if the commonly used algorithms are found to be deficient. METHODS: This study uses a validated computer simulation of 8 different noise-contaminated ECG waveforms (with known QT intervals of 461 and 495 ms), generated from a cell array using Luo-Rudy membrane kinetics and the Crank–Nicolson method, as a reference standard to assess the accuracy of commonly used QT measurement algorithms. Each ECG, contaminated with 39 mixtures of noise at 3 levels of intensity, was first filtered and then subjected to three threshold methods (T1, T2, T3), two T-wave slope methods (S1, S2) and a Novel method. The reproducibility and accuracy of each algorithm were compared for each ECG. RESULTS: The coefficients of variation for methods T1, T2, T3, S1, S2 and Novel were 0.36, 0.23, 1.9, 0.93, 0.92 and 0.62, respectively. For ECGs with a true QT interval of 461 ms, methods T1, T2, T3, S1, S2 and Novel calculated mean QT intervals (standard deviations) of 379.4 (1.29), 368.5 (0.8), 401.3 (8.4), 358.9 (4.8), 381.5 (4.6) and 464 (4.9) ms, respectively. For ECGs with a true QT interval of 495 ms, the corresponding mean QT intervals (standard deviations) were 396.9 (1.7), 387.2 (0.97), 424.9 (8.7), 386.7 (2.2), 396.8 (2.8) and 493 (0.97) ms, respectively. These results showed significant differences between means at the >95% confidence level.
Shifting ECG baselines caused large errors in the QT interval with T1 and T2 but no error with the Novel method. CONCLUSION: The algorithms T2, T1 and Novel gave low coefficients of variation for QT measurement. The Novel technique gave the most accurate measurement of the QT interval; T3 (a differential threshold method) was the next most accurate, by a large margin. The objective and accurate 'gold standard' presented in this paper may be useful for assessing new QT measurement algorithms. The Novel algorithm may prove to be a more accurate and reliable method of measuring the QT interval.
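A threshold method of the kind compared above locates the end of the T wave where the signal falls below a fixed fraction of the T-wave peak. The following is a generic illustrative sketch of one such rule, not the paper's implementation; the synthetic Gaussian T wave, the sampling rate, and the 5% threshold fraction are all assumptions chosen for the example.

```python
import numpy as np

def qt_threshold(ecg, fs, q_onset_idx, frac=0.05):
    """Estimate the QT interval (ms) with a simple amplitude-threshold rule:
    the T wave is taken to end at the first sample after the T-wave peak
    where the signal drops below `frac` times the peak amplitude."""
    t_peak = q_onset_idx + int(np.argmax(ecg[q_onset_idx:]))
    thresh = frac * ecg[t_peak]
    after = ecg[t_peak:]
    below = np.flatnonzero(after < thresh)
    t_end = t_peak + (below[0] if below.size else after.size - 1)
    return 1000.0 * (t_end - q_onset_idx) / fs

# Synthetic example: a Gaussian "T wave" peaking 350 ms after QRS onset.
fs = 1000  # Hz
t = np.arange(0, 0.6, 1 / fs)
ecg = np.exp(-((t - 0.35) ** 2) / (2 * 0.04 ** 2))
qt = qt_threshold(ecg, fs, q_onset_idx=0)
```

On noise-free data like this the rule is exact; the study's point is that on noise-contaminated ECGs such fixed-threshold rules are reproducible but biased, as the T1/T2 results show.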

    A feasibility study incorporating a pilot randomised controlled trial of oral feeding plus pre-treatment gastrostomy tube versus oral feeding plus as-needed nasogastric tube feeding in patients undergoing chemoradiation for head and neck cancer (TUBE trial): study protocol

    Background There are 7000 new cases of head and neck squamous cell cancer (HNSCC) treated by the NHS each year. Stage III and IV HNSCC can be treated non-surgically by radiotherapy (RT) or chemoradiation therapy (CRT). CRT can affect eating and drinking through a range of side effects, with 90% of patients undergoing this treatment requiring nutritional support via gastrostomy (G) or nasogastric (NG) tube feeding. Long-term dysphagia following CRT is a primary concern for patients. The effect of enteral feeding routes on swallowing function is not well understood, and the two feeding methods have, to date, not been compared to assess which leads to a better patient outcome. The purpose of this study is to explore the feasibility of conducting a randomised controlled trial (RCT) comparing these two options, with particular emphasis on patient willingness to be randomised and clinician willingness to approach eligible patients. Methods/design This is a mixed-methods multicentre study to establish the feasibility of a randomised controlled trial comparing oral feeding plus pre-treatment gastrostomy versus oral feeding plus as-needed nasogastric tube feeding in patients with HNSCC. A total of 60 participants will be randomised to the two arms of the study (1:1 ratio). The primary outcome of feasibility is a composite of recruitment (willingness to randomise and be randomised) and retention. A qualitative process evaluation investigating the experiences of trial participation among patients, family and friends, and staff will also be conducted, alongside an economic modelling exercise to synthesise available evidence and provide estimates of cost-effectiveness and value of information. Participants will be assessed at baseline (pre-randomisation), weekly during CRT, and at 3 and 6 months. Discussion Clinicians are in equipoise over the enteral feeding options for patients being treated with CRT.
Swallowing outcomes have been identified as a top priority for patients following treatment, and this trial would inform a future larger-scale RCT in this area to inform best practice.
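A 1:1 allocation of 60 participants, as described above, is commonly implemented with permuted-block randomisation so the two arms stay balanced throughout recruitment. This is a generic illustrative sketch, not the trial's actual allocation procedure; the block size, arm labels and seed are assumptions.

```python
import random

def block_randomise(n_participants, block_size=4, arms=("G", "NG"), seed=0):
    """Return a 1:1 allocation sequence using permuted blocks.

    Each block holds an equal number of each arm in random order, so the
    running group totals never differ by more than block_size // 2.
    """
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

allocation = block_randomise(60)
```

Because 60 is a multiple of the block size, the final allocation is exactly 30:30; in practice trials usually conceal the block size and generate the sequence centrally.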

    Right ventricular dyssynchrony in patients with pulmonary hypertension is associated with disease severity and functional class

    BACKGROUND: Abnormalities in right ventricular function are known to occur in patients with pulmonary arterial hypertension. OBJECTIVE: To test the hypothesis that chronic elevation in pulmonary artery systolic pressure delays mechanical activation of the right ventricle, termed dyssynchrony, and is associated with both symptoms and right ventricular dysfunction. METHODS: Fifty-two patients (mean age 46 ± 15 years, 24 patients with chronic pulmonary hypertension) were prospectively evaluated using several echocardiographic parameters to assess right ventricular size and function. In addition, tissue Doppler imaging was obtained to assess longitudinal strain of the right ventricular wall, interventricular septum, and lateral wall of the left ventricle, and examined with regard to right ventricular size and function as well as clinical variables. RESULTS: Patients with chronic pulmonary hypertension had significantly different right ventricular fractional area change (35 ± 13 percent), right ventricular end-systolic area (21 ± 10 cm(2)), right ventricular Myocardial Performance Index (0.72 ± 0.34), and Eccentricity Index (1.34 ± 0.37) than individuals without pulmonary hypertension (51 ± 5 percent, 9 ± 2 cm(2), 0.27 ± 0.09, and 0.97 ± 0.06, respectively; p < 0.005). Furthermore, peak longitudinal right ventricular wall strain in chronic pulmonary hypertension was also different (−20.8 ± 9.0 percent versus −28.0 ± 4.1 percent, p < 0.01). Right ventricular dyssynchrony correlated very well with right ventricular end-systolic area (r = 0.79, p < 0.001) and Eccentricity Index (r = 0.83, p < 0.001). Furthermore, right ventricular dyssynchrony correlated with pulmonary hypertension severity index (p < 0.0001), World Health Organization functional class (p < 0.0001), and number of hospitalizations (p < 0.0001).
CONCLUSION: Lower peak longitudinal right ventricular wall strain and significantly delayed time-to-peak strain values, consistent with right ventricular dyssynchrony, were found in a small heterogeneous group of patients with chronic pulmonary hypertension when compared to individuals without pulmonary hypertension. Furthermore, right ventricular dyssynchrony was associated with disease severity and compromised functional class.

    A Semianalytical PDF of Downlink SINR for Femtocell Networks

    This paper presents a derivation of the probability density function (PDF) of the signal-to-interference-plus-noise ratio (SINR) for the downlink of a cell in multicellular networks. The mathematical model considers uncoordinated locations and transmission powers of base stations (BSs), which accurately reflect the deployment of randomly located femtocells in an indoor environment. The derivation is semianalytical, in that the PDF is obtained by analysis and can be easily calculated by employing standard numerical methods. Thus, it obviates the need for time-consuming simulation efforts. The derivation of the PDF takes into account practical propagation models, including shadow fading. The effect of background noise is also considered. Numerical experiments are performed assuming various environments and deployment scenarios to examine the performance of femtocell networks. The results are compared with Monte Carlo simulations for verification purposes and show good agreement.
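A semianalytical SINR PDF of this kind is typically verified against Monte Carlo simulation, as the paper does. The sketch below shows the generic shape of such a simulation: uniformly scattered BSs, log-distance path loss with lognormal shadow fading, and attachment to the strongest BS. All numerical parameters (area, transmit power, path-loss exponent, shadowing spread, noise floor) are illustrative assumptions, not the paper's values.

```python
import numpy as np

def sinr_samples(n_trials=10000, n_bs=10, area=50.0, tx_dbm=10.0,
                 path_exp=3.5, shadow_db=8.0, noise_dbm=-100.0, seed=0):
    """Monte Carlo draws of downlink SINR (dB) for a user at the centre
    of a square area with uniformly scattered femtocell BSs."""
    rng = np.random.default_rng(seed)
    # Uniform BS positions; the user sits at the origin.
    xy = rng.uniform(-area / 2, area / 2, size=(n_trials, n_bs, 2))
    d = np.maximum(np.linalg.norm(xy, axis=2), 1.0)  # clip distances to 1 m
    # Received power in dBm: tx power - path loss + lognormal shadowing.
    rx_dbm = (tx_dbm - 10 * path_exp * np.log10(d)
              + rng.normal(0.0, shadow_db, size=d.shape))
    rx_mw = 10 ** (rx_dbm / 10)
    serving = rx_mw.max(axis=1)            # attach to the strongest BS
    interference = rx_mw.sum(axis=1) - serving
    noise_mw = 10 ** (noise_dbm / 10)
    return 10 * np.log10(serving / (interference + noise_mw))

samples = sinr_samples()
```

A histogram of `samples` would be the empirical counterpart of the paper's PDF; the semianalytical approach replaces this sampling loop with direct numerical evaluation.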

    Combined analgesics in (headache) pain therapy: shotgun approach or precise multi-target therapeutics?

    <p>Abstract</p> <p>Background</p> <p>Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas (the "pain matrix") is involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics.</p> <p>Discussion</p> <p>In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is a therapeutic benefit that could not be achieved by the individual constituents alone: the single substances of the combinations act together additively or even multiplicatively and cooperate to achieve the complete desired therapeutic effect.</p> <p>As an example, the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail.
The major advantage of using such a fixed combination is that the active ingredients act on distinct molecular targets and are thus able to act on more signalling cascades involved in pain than most single analgesics, without adding more side effects to the therapy.</p> <p>Summary</p> <p>Multi-target therapeutics like combined analgesics broaden the array of therapeutic options, enable completeness of the therapeutic effect, and allow doctors (and, in self-medication with OTC medications, the patients themselves) to customize treatment to the patient's specific needs. There is substantial clinical evidence that such multi-component therapy is more effective than mono-component therapies.</p>

    The PLATO 2.0 mission

    PLATO 2.0 has recently been selected for ESA's M3 launch opportunity (2022/24). Providing accurate key planet parameters (radius, mass, density and age) in statistical numbers, it addresses fundamental questions such as: How do planetary systems form and evolve? Are there other systems with planets like ours, including potentially habitable planets? The PLATO 2.0 instrument consists of 34 small-aperture telescopes (32 with 25 s readout cadence and 2 with 2.5 s cadence) providing a wide field of view (2232 deg²) and a large photometric magnitude range (4-16 mag). It focusses on bright (4-11 mag) stars in wide fields to detect and characterize planets down to Earth-size by photometric transits, whose masses can then be determined by ground-based radial-velocity follow-up measurements. Asteroseismology will be performed for these bright stars to obtain highly accurate stellar parameters, including masses and ages. The combination of bright targets and asteroseismology results in high accuracy for the bulk planet parameters: 2%, 4-10% and 10% for planet radii, masses and ages, respectively. The planned baseline observing strategy includes two long pointings (2-3 years) to detect and bulk-characterize planets reaching into the habitable zone (HZ) of solar-like stars, and an additional step-and-stare phase to cover in total about 50% of the sky. PLATO 2.0 will observe up to 1,000,000 stars and detect and characterize hundreds of small planets, and thousands of planets in the Neptune-to-gas-giant regime out to the HZ. It will therefore provide the first large-scale catalogue of bulk-characterized planets with accurate radii, masses, mean densities and ages. This catalogue will include terrestrial planets at intermediate orbital distances, where surface temperatures are moderate. Coverage of this parameter range with statistical numbers of bulk-characterized planets is unique to PLATO 2.0.
The PLATO 2.0 catalogue will allow us, for example, to: complete our knowledge of planet diversity for low-mass objects; correlate the planet mean density-orbital distance distribution with predictions from planet formation theories; constrain the influence of planet migration and scattering on the architecture of multiple systems; and specify how planet and system parameters change with host star characteristics, such as type, metallicity and age. The catalogue will allow us to study planets and planetary systems at different evolutionary phases. It will further provide a census of small, low-mass planets, serving to identify objects which retained their primordial hydrogen atmosphere and, in general, the typical characteristics of planets in this low-mass, low-density range. Planets detected by PLATO 2.0 will orbit bright stars, and many of them will be targets for future atmosphere spectroscopy. Furthermore, the mission has the potential to detect exomoons, planetary rings, and binary and Trojan planets. The planetary science possible with PLATO 2.0 is complemented by its impact on stellar and galactic science via asteroseismology, as well as light curves of all kinds of variable stars, together with observations of stellar clusters of different ages. This will allow us to improve stellar models and study stellar activity. A large number of well-determined ages from red giant stars will probe the structure and evolution of our Galaxy. Asteroseismic ages of bright stars at different phases of stellar evolution will allow calibrating stellar age-rotation relationships. Together with the results of ESA's Gaia mission, the results of PLATO 2.0 will provide a huge legacy to planetary, stellar and galactic science.
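The photometric-transit detection of Earth-sized planets described above rests on measuring the fractional dimming of the star, depth ≈ (R_planet / R_star)², which for an Earth-Sun analogue is only about 84 parts per million. A quick illustrative calculation (the radii used are standard reference values, not mission data):

```python
def transit_depth_ppm(r_planet_km, r_star_km):
    """Fractional flux drop during a central transit, in parts per million:
    depth = (R_planet / R_star)**2, ignoring limb darkening."""
    return (r_planet_km / r_star_km) ** 2 * 1e6

# Earth (radius 6,371 km) transiting a Sun-like star (radius 696,000 km).
depth = transit_depth_ppm(6371.0, 696000.0)
```

Detecting a dip this shallow against stellar and instrumental noise is why the mission concentrates on bright (4-11 mag) stars with long, continuous pointings.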

    Measurement and interpretation of same-sign W boson pair production in association with two jets in pp collisions at √s = 13 TeV with the ATLAS detector

    This paper presents the measurement of fiducial and differential cross sections for both the inclusive and electroweak production of a same-sign W-boson pair in association with two jets (W±W±jj) using 139 fb−1 of proton-proton collision data recorded at a centre-of-mass energy of √s = 13 TeV by the ATLAS detector at the Large Hadron Collider. The analysis is performed by selecting two same-charge leptons, electron or muon, and at least two jets with large invariant mass and a large rapidity difference. The measured fiducial cross sections for electroweak and inclusive W±W±jj production are 2.92 ± 0.22 (stat.) ± 0.19 (syst.) fb and 3.38 ± 0.22 (stat.) ± 0.19 (syst.) fb, respectively, in agreement with Standard Model predictions. The measurements are used to constrain anomalous quartic gauge couplings by extracting 95% confidence level intervals on dimension-8 operators. A search for doubly charged Higgs bosons H±± that are produced in vector-boson fusion processes and decay into a same-sign W boson pair is performed. The largest deviation from the Standard Model occurs for an H±± mass near 450 GeV, with a global significance of 2.5 standard deviations.

    Measurements of differential cross-sections in top-quark pair events with a high transverse momentum top quark and limits on beyond the Standard Model contributions to top-quark pair production with the ATLAS detector at √s = 13 TeV

    Cross-section measurements of top-quark pair production where the hadronically decaying top quark has transverse momentum greater than 355 GeV and the other top quark decays into ℓνb are presented using 139 fb−1 of data collected by the ATLAS experiment during proton-proton collisions at the LHC. The fiducial cross-section at √s = 13 TeV is measured to be σ = 1.267 ± 0.005 ± 0.053 pb, where the uncertainties reflect the limited number of data events and the systematic uncertainties, giving a total uncertainty of 4.2%. The cross-section is measured differentially as a function of variables characterising the tt̄ system and additional radiation in the events. The results are compared with various Monte Carlo generators, including comparisons where the generators are reweighted to match a parton-level calculation at next-to-next-to-leading order. The reweighting improves the agreement between data and theory. The measured distribution of the top-quark transverse momentum is used to search for new physics in the context of the effective field theory framework. No significant deviation from the Standard Model is observed, and limits are set on the Wilson coefficients of the dimension-six operators OtG and Otq(8), where the limits on the latter are the most stringent to date.