
    Characteristics of predictor sets found using differential prioritization

    Get PDF
    Background: Feature selection plays an undeniably important role in classification problems involving high-dimensional datasets such as microarray datasets. For filter-based feature selection, two well-known criteria used in forming predictor sets are relevance and redundancy. However, there is a third criterion which is at least as important as the other two in affecting the efficacy of the resulting predictor sets: the degree of differential prioritization (DDP), which varies the emphases on relevance and redundancy depending on its value. Previous empirical work on publicly available microarray datasets has confirmed the effectiveness of the DDP in molecular classification. We now propose to establish the fundamental strengths and merits of DDP-based feature selection through a simulation study involving rigorous analyses of the characteristics of predictor sets found using different values of the DDP on toy datasets designed to mimic real-life microarray datasets.

    Results: A simulation study employing analytical measures, such as the distance between classes before and after transformation using principal component analysis, is implemented on toy datasets. From these analyses, the necessity of adjusting the differential prioritization based on the dataset of interest is established. This conclusion is supported by comparisons against both simplistic rank-based selection and state-of-the-art equal-priorities scoring methods, which demonstrate the superiority of the DDP-based feature selection technique. Reapplying similar analyses to real-life multiclass microarray datasets provides further confirmation of our findings and of the significance of the DDP for practical applications.

    Conclusion: The findings have been achieved through analytical evaluations, not empirical evaluations involving classifiers, thus providing a further basis for the usefulness of the DDP and validating the need for unequal priorities on relevance and redundancy during feature selection for microarray datasets, especially highly multiclass datasets.
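    To make the selection mechanics concrete, below is a minimal greedy forward-selection sketch. Scoring features as relevance**alpha * antiredundancy**(1 - alpha), with alpha playing the role of the DDP, is our reading of the technique described above; the use of the F-statistic for relevance and mean absolute Pearson correlation for redundancy is an illustrative assumption, not necessarily the paper's exact measures.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    def ddp_select(X, y, k, alpha):
        """Greedy forward selection scoring relevance**alpha * antiredundancy**(1 - alpha).

        alpha = 1.0 reduces to pure rank-based (relevance-only) selection;
        alpha = 0.5 corresponds to equal-priorities scoring.
        """
        n_features = X.shape[1]
        # Relevance: one-way ANOVA F-statistic of each feature across classes.
        classes = [X[y == c] for c in np.unique(y)]
        relevance = np.array([f_oneway(*[c[:, j] for c in classes]).statistic
                              for j in range(n_features)])
        relevance = relevance / relevance.max()  # normalise to [0, 1]

        selected = [int(np.argmax(relevance))]
        while len(selected) < k:
            best, best_score = None, -np.inf
            for j in range(n_features):
                if j in selected:
                    continue
                # Antiredundancy: 1 - mean |correlation| with already-selected features.
                corr = [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected]
                antired = 1.0 - float(np.mean(corr))
                score = relevance[j] ** alpha * antired ** (1.0 - alpha)
                if score > best_score:
                    best, best_score = j, score
            selected.append(best)
        return selected
    ```

    Sweeping alpha from 0 to 1 on the same dataset makes the paper's central point observable: the best-performing predictor set generally comes from a dataset-dependent alpha, not from a fixed equal-priorities setting.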

    Atomic structure at 2.5 Å resolution of uridine phosphorylase from E. coli as refined in the monoclinic crystal lattice

    Get PDF
    Uridine phosphorylase from E. coli (UPase) has been crystallized using the vapor diffusion technique in a new monoclinic crystal form. The structure was determined by the molecular replacement method at 2.5 Å resolution. The coordinates of the trigonal crystal form were used as a starting model, and refinement with the program XPLOR led to an R-factor of 18.6%. The amino acid fold of the protein was found to be the same as in the trigonal crystals. The positions of the flexible regions were refined. The conclusion about the involvement of these regions in the active site is in good agreement with the results of biochemical experiments.
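    As a note on the refinement statistic quoted above, the crystallographic R-factor compares observed and calculated structure-factor amplitudes. A minimal sketch follows; the array values are hypothetical, for illustration only.

    ```python
    import numpy as np

    def r_factor(f_obs: np.ndarray, f_calc: np.ndarray) -> float:
        """Crystallographic R-factor: sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|)."""
        f_obs = np.abs(f_obs)
        f_calc = np.abs(f_calc)
        return float(np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs))

    # Hypothetical amplitudes; a refined model such as the one above
    # would give r_factor(...) around 0.186 (18.6%).
    f_obs = np.array([120.0, 85.5, 40.2, 210.9])
    f_calc = np.array([118.3, 90.1, 35.7, 205.4])
    print(f"R = {r_factor(f_obs, f_calc):.3f}")
    ```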

    Temperature inversion of the thermal polarization of water

    Full text link

    Pathotypic diversity of Hyaloperonospora brassicae collected from Brassica oleracea

    Get PDF
    Downy mildew caused by Hyaloperonospora brassicae is an economically destructive disease of brassica crops in many growing regions throughout the world. Specialised pathogenicity of downy mildews from different Brassica species and closely related ornamental or wild relatives has been described from host range studies. Pathotypic variation amongst Hyaloperonospora brassicae isolates from Brassica oleracea has also been described; however, a standard set of B. oleracea lines that could enable reproducible classification of H. brassicae pathotypes was poorly developed. For this purpose, we examined the use of eight genetically refined host lines, derived from our previous collaborative work on downy mildew resistance, as a differential set to characterise pathotypes in the European population of H. brassicae. Interaction phenotypes for each combination of isolate and host line were assessed following drop inoculation of cotyledons, and a spectrum of seven phenotypes was observed based on the level of sporulation on cotyledons and visible host responses. Two host lines were resistant or moderately resistant to the entire collection of isolates, and another was universally susceptible. Five lines showed differential responses to the H. brassicae isolates. A minimum of six pathotypes and five major-effect resistance genes are proposed to explain all of the observed interaction phenotypes. The B. oleracea lines from this study can be useful for monitoring pathotype frequencies in H. brassicae populations in the same or other vegetable growing regions, and for assessing the potential durability of disease control from different combinations of the predicted downy mildew resistance genes.
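    As an aside on how a differential set yields pathotype counts, the sketch below groups isolates by their vector of interaction phenotypes across the host lines; identical profiles define one pathotype. The isolate names and scores are hypothetical, not data from the study.

    ```python
    from collections import defaultdict

    # Interaction phenotype per (isolate, host line): 'R' resistant,
    # 'M' moderately resistant, 'S' susceptible. (The study itself scored a
    # finer seven-point spectrum based on sporulation and host response.)
    profiles = {
        "iso-01": ("R", "S", "S", "M", "R"),
        "iso-02": ("R", "S", "S", "M", "R"),  # same profile as iso-01
        "iso-03": ("R", "R", "S", "S", "S"),
    }

    # Group isolates sharing an identical profile into one pathotype.
    pathotypes = defaultdict(list)
    for isolate, profile in profiles.items():
        pathotypes[profile].append(isolate)

    for i, (profile, isolates) in enumerate(sorted(pathotypes.items()), start=1):
        print(f"pathotype {i}: profile={profile} isolates={isolates}")
    ```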

    Precision Measurement of the Newtonian Gravitational Constant Using Cold Atoms

    Full text link
    About 300 experiments have tried to determine the value of the Newtonian gravitational constant, G, so far, but large discrepancies in the results have made it impossible to know its value precisely. The weakness of the gravitational interaction and the impossibility of shielding the effects of gravity make it very difficult to measure G while keeping systematic effects under control. Most previous experiments were based on the torsion pendulum or torsion balance scheme, as in the experiment by Cavendish in 1798, and in all cases macroscopic masses were used. Here we report the precise determination of G using laser-cooled atoms and quantum interferometry. We obtain the value G=6.67191(99) x 10^(-11) m^3 kg^(-1) s^(-2) with a relative uncertainty of 150 parts per million (the combined standard uncertainty is given in parentheses). Our value differs by 1.5 combined standard deviations from the current recommended value of the Committee on Data for Science and Technology. A conceptually different experiment such as ours helps to identify the systematic errors that have proved elusive in previous experiments, thus improving the confidence in the value of G. There is no definitive relationship between G and the other fundamental constants, and there is no theoretical prediction for its value against which to test experimental results. Improving the precision with which we know G has not only a pure metrological interest, but is also important because of the key role that G has in theories of gravitation, cosmology, particle physics and astrophysics, and in geophysical models.
    Comment: 3 figures, 1 table
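    As a quick worked check of the quoted precision (our arithmetic, not part of the paper): the parenthesised 99 is the combined standard uncertainty on the last two digits of the value.

    ```python
    # G = 6.67191(99) x 10^-11 m^3 kg^-1 s^-2, i.e. u(G) = 0.00099 x 10^-11.
    G = 6.67191e-11
    u_G = 0.00099e-11
    print(f"relative uncertainty = {u_G / G * 1e6:.0f} ppm")  # ~148 ppm, quoted as 150 ppm
    ```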

    Structural diversity in alkali metal and alkali metal magnesiate chemistry of the bulky 2,6-diisopropyl-N-(trimethylsilyl)anilino ligand

    Get PDF
    Bulky amido ligands are precious in s-block chemistry since they can implant complementary strong basic and weak nucleophilic properties within compounds. Recent work has shown the pivotal importance of the base structure, with enhancement of basicity and extraordinary regioselectivities possible for cyclic alkali metal magnesiates containing mixed n-butyl/amido ligand sets. This work advances alkali metal and alkali metal magnesiate chemistry of the bulky aryl-silyl amido ligand [N(SiMe3)(Dipp)] (Dipp = 2,6-iPr2-C6H3). Infinite chain structures of the parent sodium and potassium amides are disclosed, adding to the few known crystallographically characterised unsolvated s-block metal amides. Solvation by PMDETA or TMEDA gives molecular variants of the lithium and sodium amides; whereas for potassium, PMDETA gives a molecular structure but TMEDA affords a novel, hemi-solvated infinite chain. Crystal structures of the first magnesiate examples of this amide, in [MMg{N(SiMe3)(Dipp)}2(μ-nBu)]∞ (M = Na or K), are also revealed, though these break down into their homometallic components in donor solvents, as shown by NMR and DOSY studies.

    Arduous implementation: Does the Normalisation Process Model explain why it's so difficult to embed decision support technologies for patients in routine clinical practice?

    Get PDF
    Background: Decision support technologies (DSTs, also known as decision aids) help patients and professionals take part in collaborative decision-making processes. Trials have shown favorable impacts on patient knowledge, satisfaction, decisional conflict and confidence. However, they have not become routinely embedded in health care settings. Few studies have approached this issue using a theoretical framework. We explained problems of implementing DSTs using the Normalization Process Model, a conceptual model that focuses attention on how complex interventions become routinely embedded in practice.

    Methods: The Normalization Process Model was used as the basis of a conceptual analysis of the outcomes of previous primary research and reviews. Using a virtual working environment, we applied the model and its main concepts to examine: the 'workability' of DSTs in professional-patient interactions; how DSTs affect knowledge relations between their users; how DSTs impact on users' skills and performance; and the impact of DSTs on the allocation of organizational resources.

    Results: Conceptual analysis using the Normalization Process Model provided insight into implementation problems for DSTs in routine settings. Current research focuses mainly on the interactional workability of these technologies, but factors related to divisions of labor in health care, and the organizational contexts in which DSTs are used, are poorly described and understood.

    Conclusion: The model successfully provided a framework for identifying factors that promote and inhibit the implementation of DSTs in healthcare, and gave us insights into factors influencing the introduction of new technologies into contexts where negotiations are characterized by asymmetries of power and knowledge. Future research and development on the deployment of DSTs needs to take a more holistic approach, giving emphasis to the structural conditions and social norms in which these technologies are enacted.

    Incorporation of enzyme concentrations into FBA and identification of optimal metabolic pathways

    Get PDF
    Background: In the present article, we propose a method for determining optimal metabolic pathways in terms of the concentration levels of the enzymes catalyzing the various reactions in the entire metabolic network. The method first generates data on reaction fluxes in a pathway based on the steady-state condition. A set of constraints is then formulated incorporating weighting coefficients corresponding to the concentrations of the enzymes catalyzing the reactions in the pathway. Finally, the rate of yield of the target metabolite, starting from a given substrate, is maximized in order to identify an optimal pathway through these weighting coefficients.

    Results: The effectiveness of the present method is demonstrated on two synthetic systems from the literature, two pentose phosphate pathways, two glycolytic pathways, core carbon metabolism, and a large network of the carotenoid biosynthesis pathway of various organisms belonging to different phylogenies. A comparative study with the existing extreme pathway analysis also forms a part of this investigation. Biological relevance and validation of the results are provided. Finally, the impact of the method on metabolic engineering is explained with a few examples.

    Conclusions: The method may be viewed as determining an optimal set of enzymes required to obtain an optimal metabolic pathway. Although simple, it has been able to identify a carotenoid biosynthesis pathway and an optimal pathway of the core carbon metabolic network that are closer to some earlier investigations than those obtained by extreme pathway analysis. Moreover, the present method has correctly identified optimal pathways for the pentose phosphate and glycolytic pathways. A few examples illustrate how the method can suitably be used in the context of metabolic engineering.
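    A minimal sketch of the flavour of optimization involved follows: steady-state mass balance S·v = 0 is imposed as an equality constraint, enzyme concentrations enter as weighting coefficients on flux capacities, and the yield of the target flux is maximized by linear programming. The toy network and weights are invented for illustration; this is our simplified reading, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: substrate uptake v0, internal reactions v1/v2, target export v3.
    # Rows = metabolites A, B; columns = reactions. S @ v = 0 at steady state.
    S = np.array([
        [1, -1, -1,  0],   # A: produced by v0, consumed by v1 and v2
        [0,  1,  1, -1],   # B: produced by v1 and v2, consumed by v3
    ])

    # Relative enzyme concentrations acting as weighting coefficients that
    # scale each reaction's flux capacity (hypothetical values).
    enzyme_weight = np.array([1.0, 0.5, 0.3, 1.0])
    v_max = 10.0 * enzyme_weight

    # Maximize yield of the target flux v3: linprog minimizes, so negate.
    c = np.array([0.0, 0.0, 0.0, -1.0])
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=[(0, ub) for ub in v_max])

    print("optimal fluxes:", res.x)   # yield is capped by the low enzyme weights on v1, v2
    print("target yield:", -res.fun)
    ```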

    Assessing Internet addiction using the parsimonious Internet addiction components model - a preliminary study [forthcoming]

    Get PDF
    Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding the populations studied and the instruments used, making reliable prevalence estimations difficult. To overcome these problems, a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths' (2005) addiction components, including salience, mood modification, tolerance, withdrawal, conflict, and relapse. Two validated measures of Internet addiction were used (the Compulsive Internet Use Scale [CIUS], Meerkerk et al., 2009, and the Assessment for Internet and Computer Game Addiction Scale [AICA-S], Beutel et al., 2010) in two independent samples (ns = 3,105 and 2,257). The fit of the model was analysed using confirmatory factor analysis. Results indicate that the Internet addiction components model fits the data well in both samples. The two-sample/two-instrument approach provides converging evidence concerning the degree to which the components model can organize the self-reported behavioural components of Internet addiction. Recommendations for future research include a more detailed assessment of tolerance as an addiction component.
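    For readers wanting to reproduce this kind of model fit, a minimal one-factor CFA sketch follows, using the semopy SEM package as an assumed tool (the study may have used different software); the column names and data file are hypothetical item-level scores for the six components.

    ```python
    import pandas as pd
    import semopy

    # One latent Internet-addiction factor loading on the six addiction components.
    desc = """
    IA =~ salience + mood_modification + tolerance + withdrawal + conflict + relapse
    """

    df = pd.read_csv("internet_addiction_items.csv")  # hypothetical dataset

    model = semopy.Model(desc)
    model.fit(df)

    # Global fit indices (chi-square, CFI, RMSEA, ...) to judge model fit,
    # as one would inspect for each of the two samples separately.
    print(semopy.calc_stats(model).T)
    ```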