353 research outputs found

    Combinatorial optimization applied to VLBI scheduling

    Due to the advent of powerful solvers, linear programming today sees many applications in production and routing. In this publication, we present mixed-integer linear programming as applied to scheduling geodetic very-long-baseline interferometry (VLBI) observations. The approach uses combinatorial optimization and formulates the scheduling task as a mixed-integer linear program. Within this new method, the schedule is treated as a single entity containing all possible observations of an observing session simultaneously, leading to a global optimum. In our example, the optimum is found by maximizing the sky coverage score, which is computed by hierarchically partitioning the local sky above each telescope into a number of cells; each cell containing at least one observation adds a certain gain to the score. The method is computationally expensive, and this publication may be ahead of its time for large networks and large numbers of VLBI observations. However, given that solvers for combinatorial optimization are progressing rapidly and that computer performance keeps increasing, this approach may well become practical in the future. Even today, readers may be prompted to look into these optimization methods, seeing that they are now available in the geodetic literature as well. The validity of the concept and the applicability of the logic are demonstrated by evaluating test schedules for five 1-h, single-baseline Intensive VLBI sessions. Compared to schedules produced with the scheduling software sked, the number of observations per session increases on average by three, and the simulated precision of UT1-UTC improves in four out of five cases (6 μs average improvement in quadrature). Moreover, a simplified and thus much faster version of the mixed-integer linear program has been developed for modern VLBI Global Observing System telescopes.
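    The core idea of the formulation can be sketched with a toy model (hypothetical data and names; the authors' actual program has far more variables and constraints): one binary decision variable per candidate observation, a session-duration constraint, and a sky-coverage objective that rewards each cell observed at least once. For this tiny instance the optimum is found by brute-force enumeration of the binary variables rather than by calling a MILP solver.

```python
from itertools import product

# Toy sky-coverage scheduling sketch (hypothetical data, not the authors'
# formulation): each candidate observation i has a binary variable x_i,
# covers some sky cells, and consumes a fixed amount of session time.
# Objective: number of distinct cells hit by at least one selected
# observation, subject to a total-duration constraint.
candidates = {
    "obs0": ({"A1", "B2"}, 20),  # (cells covered, duration in minutes)
    "obs1": ({"A1", "B3"}, 15),
    "obs2": ({"A2", "B2"}, 25),
    "obs3": ({"A3", "B1"}, 30),
}
SESSION_MINUTES = 60

def best_schedule():
    names = list(candidates)
    best, best_score = (), -1
    for x in product((0, 1), repeat=len(names)):   # enumerate binary vectors
        chosen = [n for n, xi in zip(names, x) if xi]
        if sum(candidates[n][1] for n in chosen) > SESSION_MINUTES:
            continue                               # duration constraint
        covered = set().union(*(candidates[n][0] for n in chosen)) if chosen else set()
        if len(covered) > best_score:              # sky-coverage score
            best, best_score = tuple(chosen), len(covered)
    return best, best_score
```

A production version would hand the same variables, constraint and objective to a MILP solver; the enumeration here only keeps the sketch dependency-free.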

    2015 Update on Acute Adverse Reactions to Gadolinium based Contrast Agents in Cardiovascular MR. Large Multi-National and Multi-Ethnical Population Experience With 37788 Patients From the EuroCMR Registry

    Objectives: Specifically, we aim to demonstrate that the results of our earlier safety data hold true in this much larger multi-national and multi-ethnical population. Background: We sought to re-evaluate the frequency, manifestations, and severity of acute adverse reactions associated with the administration of several gadolinium-based contrast agents during routine CMR on a European level. Methods: Multi-centre, multi-national, and multi-ethnical registry with consecutive enrolment of patients in 57 European centres. Results: During the current observation, 37788 doses of gadolinium-based contrast agent were administered to 37788 patients. The mean dose was 24.7 ml (range 5–80 ml), which is equivalent to 0.123 mmol/kg (range 0.01–0.3 mmol/kg). Forty-five acute adverse reactions due to contrast administration occurred (0.12 %). Most reactions were classified as mild (43 of 45) according to the American College of Radiology definition. The most frequent complaints following contrast administration were rashes and hives (15 of 45), followed by nausea (10 of 45) and flushes (10 of 45). The event rate ranged from 0.05 % (linear non-ionic agent gadodiamide) to 0.42 % (linear ionic agent gadobenate dimeglumine). Interestingly, we also found different event rates between the three main indications for CMR, ranging from 0.05 % (risk stratification in suspected CAD) to 0.22 % (viability in known CAD). Conclusions: The current data indicate that the results of the earlier safety data hold true in this much larger multi-national and multi-ethnical population. Thus, the “off-label” use of gadolinium-based contrast in cardiovascular MR should be regarded as safe concerning the frequency, manifestation and severity of acute events.
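    As a quick sanity check on the reported figures (using only the counts given in the abstract), the overall event rate and the fraction of mild reactions can be recomputed directly:

```python
# Recompute the headline numbers from the abstract's raw counts.
reactions, patients = 45, 37788
rate_percent = 100 * reactions / patients   # overall acute adverse reaction rate
mild_fraction = 43 / 45                     # reactions classified as mild
```

Rounding `rate_percent` to two decimals reproduces the 0.12 % quoted in the Results.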

    Generating and Reversing Chronic Wounds in Diabetic Mice by Manipulating Wound Redox Parameters

    By 2025, more than 500 M people worldwide will suffer from diabetes; 125 M will develop foot ulcer(s) and 20 M will undergo an amputation, creating a major health problem. Understanding how these wounds become chronic will provide insights to reverse chronicity. We hypothesized that oxidative stress (OS) in wounds is a critical component in the generation of chronicity. We used the db/db mouse model of impaired healing and inhibited, at the time of injury, two major antioxidant enzymes, catalase and glutathione peroxidase, creating high OS in the wounds. This was necessary and sufficient to trigger wounds to become chronic. The wounds initially contained a polymicrobial community that over time selected for specific biofilm-forming bacteria. To reverse chronicity, we treated the wounds with the antioxidants α-tocopherol and N-acetylcysteine and found that OS was greatly reduced, biofilms had increased sensitivity to antibiotics, and granulation tissue was formed with proper collagen deposition and remodeling. We show for the first time the generation of chronic wounds in which biofilm develops spontaneously, illustrating the importance of early and continued redox imbalance, coupled with the presence of biofilm, in the development of wound chronicity. This model will help decipher additional mechanisms and potentially enable better diagnosis of chronicity and treatment of human chronic wounds.

    Current variables, definitions and endpoints of the European Cardiovascular Magnetic Resonance Registry

    BACKGROUND: Cardiovascular magnetic resonance (CMR) is increasingly used in daily clinical practice. However, little is known about its clinical utility, such as image quality, safety and impact on patient management. In addition, there is limited information about the potential of CMR to acquire prognostic information. METHODS: The European Cardiovascular Magnetic Resonance Registry (EuroCMR Registry) will consist of two parts: 1) a multicenter registry with consecutive enrolment of patients scanned in all participating European CMR centres using web-based online case record forms; 2) prospective clinical follow-up of patients with suspected coronary artery disease (CAD) and hypertrophic cardiomyopathy (HCM) every 12 months after enrolment to assess prognostic data. CONCLUSION: The EuroCMR Registry offers an opportunity to provide information about the clinical utility of routine CMR in a large number of cases and a diverse population. Furthermore, it has the potential to gather information about the prognostic value of CMR in specific patient populations.

    Mutual Information for Testing Gene-Environment Interaction

    Despite current enthusiasm for the investigation of gene-gene and gene-environment interactions, the essential issue of how to define and detect gene-environment interactions remains unresolved. In this report, we define gene-environment interactions as a stochastic dependence in the context of the effects of the genetic and environmental risk factors on the cause of phenotypic variation among individuals. We use mutual information, which is widely used in communication and complex-system analysis, to measure gene-environment interactions. We investigate how gene-environment interactions generate a large difference in this information measure between the general population and a diseased population, which motivates us to develop mutual information-based statistics for testing gene-environment interactions. We validated the null distribution and calculated the type I error rates for the mutual information-based statistics using extensive simulation studies. We found that the new test statistics were more powerful than traditional logistic regression under several disease models. Finally, to further evaluate the performance of our new method, we applied the mutual information-based statistics to three real examples. Our results showed that P-values for the mutual information-based statistics were much smaller than those obtained by other approaches, including logistic regression models.
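    The underlying quantity is the classical mutual information I(G;E) = Σ p(g,e) log₂[p(g,e)/(p(g)p(e))], which is zero exactly when genotype and environment are independent. A minimal sketch (with hypothetical joint distributions, not data from the paper) shows the contrast the abstract describes: independence in the general population versus dependence among cases.

```python
from math import log2

def mutual_information(joint):
    """I(G;E) in bits from a dict {(g, e): p(g, e)}."""
    pg, pe = {}, {}
    for (g, e), p in joint.items():           # accumulate marginals
        pg[g] = pg.get(g, 0.0) + p
        pe[e] = pe.get(e, 0.0) + p
    return sum(p * log2(p / (pg[g] * pe[e]))
               for (g, e), p in joint.items() if p > 0)

# Hypothetical distributions: genotype and exposure are independent in the
# general population, but co-occur preferentially among diseased individuals.
population = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
cases      = {(0, 0): 0.40, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.40}
```

The gap `mutual_information(cases) - mutual_information(population)` is the kind of case/population difference the proposed test statistics are built on.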

    Evaluation of presumably disease causing SCN1A variants in a cohort of common epilepsy syndromes

    Objective: The SCN1A gene, coding for the voltage-gated Na+ channel alpha subunit NaV1.1, is the clinically most relevant epilepsy gene. With the advent of high-throughput next-generation sequencing, clinical laboratories are generating an ever-increasing catalogue of SCN1A variants. Variants are more likely to be classified as pathogenic if they have already been identified previously in a patient with epilepsy. Here, we critically re-evaluate the pathogenicity of this class of variants in a cohort of patients with common epilepsy syndromes and subsequently ask whether a significant fraction of benign variants have been misclassified as pathogenic. Methods: We screened a discovery cohort of 448 patients with a broad range of common genetic epilepsies and 734 controls for previously reported SCN1A mutations that were assumed to be disease causing. We re-evaluated the evidence for pathogenicity of the identified variants using in silico predictions, segregation, original reports, available functional data and assessment of allele frequencies in healthy individuals, as well as in a follow-up cohort of 777 patients. Results and Interpretation: We identified 8 known missense mutations, previously reported as pathogenic, in a total of 17 unrelated epilepsy patients (17/448; 3.80%). Our re-evaluation indicates that 7 of these 8 variants (p.R27T; p.R28C; p.R542Q; p.R604H; p.T1250M; p.E1308D; p.R1928G; NP_001159435.1) are not pathogenic. Only the p.T1174S mutation may be considered a genetic risk factor for epilepsy of small effect size, based on its enrichment in patients (P = 6.60 × 10^-4; OR = 0.32, Fisher's exact test) and previous functional studies, but with incomplete penetrance. Thus, the incorporation of previous studies into genetic counseling of SCN1A sequencing results is challenging and may produce incorrect conclusions.
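    The enrichment test named above is a standard 2×2 Fisher's exact test on carrier counts in cases versus controls. A dependency-free sketch of the one-sided (enrichment) version, using the hypergeometric tail, is below; the carrier counts are hypothetical for illustration (the abstract gives 17/448 case carriers across all variants, but not the per-variant control counts).

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided (enrichment) Fisher exact p-value for the 2x2 table
    [[a, b], [c, d]]: P(X >= a) under the hypergeometric null."""
    n = a + b + c + d
    row1, col1 = a + b, a + c            # cases, carriers
    kmax = min(row1, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, kmax + 1)) / comb(n, col1)

# Hypothetical counts: 17 of 448 cases carry a variant vs. 4 of 734 controls.
p = fisher_one_sided(17, 431, 4, 730)
odds_ratio = (17 * 730) / (431 * 4)      # cross-product odds ratio
```

A two-sided test (as typically reported) sums all tables at least as extreme in either direction, but the one-sided tail suffices to illustrate the enrichment logic.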

    A model-based approach to selection of tag SNPs

    BACKGROUND: Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphism found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. In Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for predicting tagged SNPs and thereby for assessing the performance of tag sets through their ability to predict larger SNP sets. RESULTS: Here, we compute the description code-lengths of SNP data for an array of models, and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. CONCLUSION: Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. In addition, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype informativeness, although genotyping studies do not directly assess haplotypes. Software that implements our approach is available.
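    The entropy-maximization strategy can be sketched with a greedy loop over an empirical haplotype sample (a simplification of the paper's approach, which evaluates entropy under a haplotype model such as Li and Stephens rather than raw counts; the haplotype data below are hypothetical):

```python
from math import log2
from collections import Counter

# Hypothetical phased haplotypes: rows are haplotypes, columns are SNPs
# with 0/1 alleles. SNPs 0 and 1 are in perfect LD, so a good tag set
# should not pick both.
haplotypes = [
    (0, 0, 0, 0),
    (0, 0, 0, 1),
    (1, 1, 0, 0),
    (1, 1, 0, 1),
    (1, 1, 1, 1),
]

def joint_entropy(snps):
    """Empirical entropy (bits) of the allele patterns at columns `snps`."""
    counts = Counter(tuple(h[j] for j in snps) for h in haplotypes)
    n = len(haplotypes)
    return -sum(c / n * log2(c / n) for c in counts.values())

def greedy_tags(k):
    """Greedily add the SNP that maximizes the tag set's joint entropy."""
    tags = []
    for _ in range(k):
        best = max((j for j in range(len(haplotypes[0])) if j not in tags),
                   key=lambda j: joint_entropy(tags + [j]))
        tags.append(best)
    return tags
```

On this toy sample, `greedy_tags(2)` skips SNP 1 (redundant with SNP 0) in favor of the more informative SNP 3, which is the behavior the entropy criterion is meant to enforce.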