RNA interference is ineffective as a routine method for gene silencing in chick embryos as monitored by fgf8 silencing.
The in vivo accessibility of the chick embryo makes it a favoured model system for experimental developmental biology. Although the range of available techniques now extends to mis-expression of genes through in ovo electroporation, it remains difficult to knock out individual gene expression. Recently, the possibility of silencing gene expression by RNAi in chick embryos has been reported. However, published studies show only discrete quantitative differences in the expression of the endogenous targeted genes and unclear morphological alterations. To elucidate whether the tools currently available are adequate to silence gene expression sufficiently to produce a clear and specific null-like mutant phenotype, we performed several experiments with different molecules that trigger RNAi: dsRNA, siRNA, and shRNA produced from a plasmid coexpressing green fluorescent protein as an internal marker. Focussing on fgf8 expression in the developing isthmus, we show that no morphological defects are observed, and that fgf8 expression is silenced neither in embryos microinjected with dsRNA nor in embryos microinjected and electroporated with a pool of siRNAs. Moreover, fgf8 expression was not significantly silenced in most isthmic cells transformed with a plasmid producing engineered shRNAs against fgf8. We also show that siRNA molecules do not spread significantly from cell to cell, as they reportedly do in invertebrates, suggesting molecular differences between model systems that may explain their different responses to RNAi. Although our results are basically in agreement with previously reported studies, we suggest, in contrast to them, that with currently available tools and techniques the number of cells in which fgf8 expression is decreased, if any, is not sufficient to generate a detectable mutant phenotype, making RNAi unsuitable as a routine method for functional gene analysis in chick embryos.
The value of myocardial perfusion scintigraphy in the diagnosis and management of angina and myocardial infarction: a probabilistic analysis
Background and Aim. Coronary heart disease (CHD) is the most common cause of death in the United Kingdom, accounting for more than 120,000 deaths in 2001, among the highest rates in the world. This study reports an economic evaluation of single photon emission computed tomography myocardial perfusion scintigraphy (SPECT) for the diagnosis and management of coronary artery disease (CAD). Methods. Strategies involving SPECT with and without stress electrocardiography (ECG) and coronary angiography (CA) were compared to diagnostic strategies not involving SPECT. The diagnosis decision was modelled with a decision tree, and long-term costs and consequences with a Markov model. Data to populate the models were obtained from a series of systematic reviews. Unlike earlier evaluations, a probabilistic analysis was included to assess the statistical imprecision of the results. The results are presented in terms of incremental cost per quality-adjusted life year (QALY). Results. At a CAD prevalence of 10.5%, SPECT-based strategies are cost-effective; ECG-CA is highly unlikely to be optimal. At a ceiling ratio of £20,000 per QALY, SPECT-CA has a 90% likelihood of being optimal. Beyond this threshold, this strategy becomes less likely to be cost-effective. At more than £75,000 per QALY, coronary angiography is most likely to be optimal. For higher levels of prevalence (around 50%) and a threshold above £10,000 per QALY, coronary angiography is the optimal decision. Conclusions. SPECT-based strategies are likely to be cost-effective when the risk of CAD is modest (10.5%). Sensitivity analyses show these strategies dominated non-SPECT-based strategies for risk of CAD up to 4%. At higher levels of prevalence, invasive strategies may become worthwhile. Finally, sensitivity analyses show stress echocardiography as a potentially cost-effective option, and further research to assess the relative cost-effectiveness of echocardiography should also be performed. This article was developed from a Technology Assessment Review conducted on behalf of the National Institute for Clinical Excellence (NICE) and was funded by the Department of Health through a grant administered by the National Coordinating Centre for Health Technology Assessment. The Health Economics Research Unit and the Health Services Research Unit are core funded by the Chief Scientist Office of the Scottish Executive Health Department.
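As a rough illustration of the probabilistic analysis described above, the sketch below estimates the probability that each strategy is optimal at a given ceiling ratio via net monetary benefit (one point on a cost-effectiveness acceptability curve). The strategy names echo the abstract, but every cost and QALY distribution is an invented placeholder, not a value from the assessment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated parameter draws from the probabilistic analysis

# Hypothetical (cost, QALY) distributions for two strategies;
# illustrative only, not the values from the NICE assessment.
strategies = {
    "SPECT-CA": (rng.normal(2_500, 300, n), rng.normal(9.10, 0.05, n)),
    "ECG-CA":   (rng.normal(2_100, 250, n), rng.normal(9.05, 0.05, n)),
}

ceiling = 20_000  # willingness to pay per QALY (£)

# Net monetary benefit: NMB = QALYs * ceiling - cost.
nmb = {name: q * ceiling - c for name, (c, q) in strategies.items()}

# Probability each strategy is optimal at this ceiling ratio.
stacked = np.column_stack(list(nmb.values()))
best = stacked.argmax(axis=1)
for i, name in enumerate(nmb):
    print(f"P({name} optimal at £{ceiling:,}/QALY) = {(best == i).mean():.2f}")
```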
A neural network for semantic labelling of structured information
Intelligent systems rely on rich sources of information to make informed decisions. Using information from external sources requires establishing correspondences between the information and known information classes. This can be achieved with semantic labelling, which assigns known labels to structured information by classifying it according to computed features. Existing proposals have explored different sets of features without focusing on which classification techniques are used. In this paper we present three contributions: first, insights on architectural issues that arise when using neural networks for semantic labelling; second, a novel implementation of semantic labelling that uses a state-of-the-art neural network classifier, which achieves significantly better results than four other traditional classifiers; third, a comparison of the results obtained by the former network when using different subsets of features, comparing textual features to structural ones, and domain-dependent features to domain-independent ones. The experiments were carried out with datasets from three real-world sources. Our results show that there is a need to develop more semantic labelling proposals with sophisticated classification techniques and large feature catalogues. Ministerio de Economía y Competitividad TIN2016-75394-
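To make the classification setting concrete, here is a minimal sketch of semantic labelling as feature-based classification with a small feed-forward network. The feature matrix, labels, and network shape are illustrative assumptions, not the paper's actual feature catalogue or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row describes one structured value by computed features; the
# columns (e.g. mean length, digit ratio, child count) and the labels
# are hypothetical stand-ins for a real feature catalogue.
X = np.random.rand(500, 10)        # 10 textual/structural features
y = np.random.randint(0, 4, 500)   # 4 known information classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A small feed-forward network; depth, width, and input scaling are
# the kind of architectural choices the paper's insights concern.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```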
TAPON: a two-phase machine learning approach for semantic labelling
Through semantic labelling we enrich structured information from sources such as HTML pages, tables, or JSON files with labels, so as to integrate it into a local ontology. This process involves measuring some features of the information and then finding the classes that best describe it. The problem with current techniques is that they do not model relationships between classes, so their features fall short when some classes have very similar structures or textual formats. To deal with this problem, we have devised TAPON: a new semantic labelling technique that computes novel features that take these relationships into account. TAPON computes the features by means of a two-phase approach: in the first phase, we compute simple features and obtain a preliminary set of labels (hints); in the second phase, we inject our novel features and obtain a refined set of labels. Our experimental results show that our technique, thanks to our rich feature catalogue and novel modelling, achieves higher accuracy than other state-of-the-art techniques. Ministerio de Economía y Competitividad TIN2016-75394-
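A minimal sketch of the two-phase idea, under the assumption that relational features can be approximated by aggregating the hints of sibling fields in the same record; the data, grouping, and classifier choice are placeholders rather than TAPON's actual design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: rows are fields of structured records, described
# by simple features; groups[i] identifies each field's record.
rng = np.random.default_rng(0)
X_simple = rng.random((600, 8))
y = rng.integers(0, 5, 600)
groups = rng.integers(0, 100, 600)

# Phase 1: classify on simple features to get preliminary labels (hints).
phase1 = RandomForestClassifier(random_state=0).fit(X_simple, y)
hints = phase1.predict_proba(X_simple)   # per-class hint scores

# Phase 2: inject relational features, here the average hint
# distribution of the *other* fields in the same record, so related
# classes inform each other (a stand-in for TAPON's actual features).
neighbour_feats = np.zeros_like(hints)
for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    totals = hints[idx].sum(axis=0)
    neighbour_feats[idx] = (totals - hints[idx]) / max(len(idx) - 1, 1)

X_refined = np.hstack([X_simple, neighbour_feats])
phase2 = RandomForestClassifier(random_state=0).fit(X_refined, y)
refined_labels = phase2.predict(X_refined)
```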
Proving Continuity of Coinductive Global Bisimulation Distances: A Never Ending Story
We have developed a notion of global bisimulation distance between processes which goes somewhat beyond the notions of bisimulation distance already existing in the literature, mainly based on bisimulation games. Our proposal is based on the cost of transformations: how much we need to modify one of the compared processes to obtain the other. Our original definition only covered finite processes, but a coinductive approach allows us to extend it to cover infinite but finitary trees. After having shown many interesting properties of our distance, it was our intention to prove continuity with respect to projections, but unfortunately the issue remains open. Nonetheless, we have obtained several partial results, which are presented in this paper. Comment: In Proceedings PROLE 2015, arXiv:1512.0617
Report on identifying a protocol to elicit flowering in Brachiaria humidicola with photoperiod management
Two genotypes of Brachiaria humidicola (A and B) were planted on the grounds of CIAT headquarters in Palmira during 2018–2019. Ten lamps were placed in the lot to evaluate 6 different photoperiods (1–6) with light in 2 different wavelength ranges (W.R.), α and β. For this, 17 samplings were carried out on the variables height, vigour, chlorophyll content, and number of inflorescences, and a total of 93 field activities were carried out to support the trial. Statistical analysis in the SAS software showed that photoperiod 5 in W.R. β and photoperiod 3 in W.R. α for genotype B show significant differences (p < 0.05, Tukey) with respect to the other treatments for height and number of inflorescences. As to seed production, it was found that any of the light stimuli generated greater seed production, despite the conditions under which the crops were grown and the harvest method used. In order to refine the protocol and validate the results in a bigger genotype sample, another trial with the 2 most efficient treatments was proposed for 2020, focusing on number of inflorescences and seed production.
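For readers who want to reproduce this kind of pairwise comparison outside SAS, the sketch below runs a Tukey HSD test on invented inflorescence counts; the treatment names echo the abstract, but all numbers are placeholders, not the trial's data.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical inflorescence counts per treatment; the real analysis
# was run in SAS on the trial's field measurements.
rng = np.random.default_rng(1)
counts = np.concatenate([
    rng.normal(12, 3, 20),  # photoperiod 5, W.R. beta
    rng.normal(11, 3, 20),  # photoperiod 3, W.R. alpha
    rng.normal(6, 3, 20),   # one of the remaining treatments
])
treatment = ["P5-beta"] * 20 + ["P3-alpha"] * 20 + ["other"] * 20

# Tukey's HSD compares all treatment pairs at alpha = 0.05,
# mirroring the p < 0.05 criterion reported in the trial.
print(pairwise_tukeyhsd(counts, treatment, alpha=0.05))
```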
AYNEC: All you need for evaluating completion techniques in knowledge graphs
The popularity of knowledge graphs has led to the development of techniques to refine them and increase their quality. One of the main refinement tasks is completion (also known as link prediction for knowledge graphs), which seeks to add missing triples to the graph, usually by classifying potential ones as true or false. While there is a wide variety of graph completion techniques, there is no standard evaluation setup, so each proposal is evaluated using different datasets and metrics. In this paper we present AYNEC, a suite for the evaluation of knowledge graph completion techniques that covers the entire evaluation workflow. It includes a customisable tool for the generation of datasets with multiple variation points related to the preprocessing of graphs, the splitting into training and testing examples, and the generation of negative examples. AYNEC also provides a visual summary of the graph and the optional export of the datasets in an open format for their visualisation. We use AYNEC to generate a library of datasets, based on several popular knowledge graphs, that are ready to use for evaluation purposes. Finally, it includes a tool that computes relevant metrics and uses significance tests to compare each pair of techniques. These open source tools, along with the datasets, are freely available to the research community and will be maintained. Ministerio de Economía y Competitividad TIN2016-75394-
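A minimal sketch of the evaluation-dataset workflow that AYNEC automates, namely train/test splitting plus negative-example generation by corrupting triples; the toy graph and the corruption rule are illustrative assumptions, not AYNEC's own API.

```python
import random

# Toy knowledge graph as (subject, relation, object) triples; the
# split and corruption strategy below follow the usual evaluation
# setup that AYNEC makes configurable.
triples = [
    ("madrid", "capital_of", "spain"),
    ("paris", "capital_of", "france"),
    ("spain", "borders", "france"),
]
entities = sorted({e for s, _, o in triples for e in (s, o)})

random.seed(0)
random.shuffle(triples)
split = int(0.8 * len(triples))
train, test = triples[:split], triples[split:]

def corrupt(triple, entities, known):
    """Generate a negative example by replacing the subject or object
    with a random entity, avoiding triples known to be true."""
    s, r, o = triple
    while True:
        e = random.choice(entities)
        candidate = (s, r, e) if random.random() < 0.5 else (e, r, o)
        if candidate not in known:
            return candidate

known = set(triples)
negatives = [corrupt(t, entities, known) for t in test]
print("positives:", test)
print("negatives:", negatives)
```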
The role of snow cover in the Northern Hemisphere winter to summer transition
This paper examines the role of Northern Hemisphere snow cover in the linkage between the winter North Atlantic Oscillation (NAO) and the summer Northern Annular Mode (NAM). This transition is partially supported by the persistence of the NAO-induced snow cover anomalies and the asymmetric thermal distribution induced by summer snow cover. We define an index of subpolar temperature difference which links the winter NAO with the subsequent summer NAM. The index is also significant in the linkage between summer and winter climates and can be used as a useful predictor of the upcoming winter NAO.
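The abstract does not give the index formula, so the sketch below only illustrates the general shape of a latitude-band temperature-difference index computed on hypothetical gridded anomalies; the latitude bands, weighting, and data are all assumptions, not the paper's definition.

```python
import numpy as np

# Hypothetical gridded surface-temperature anomalies, dims (time, lat, lon).
rng = np.random.default_rng(0)
lats = np.arange(-90, 91, 2.5)
temps = rng.normal(0, 1, (40, lats.size, 144))  # 40 years of anomalies

def band_mean(temps, lats, lo, hi):
    """Cosine-latitude-weighted mean over a latitude band."""
    mask = (lats >= lo) & (lats <= hi)
    w = np.cos(np.deg2rad(lats[mask]))
    zonal = temps[:, mask, :].mean(axis=2)  # average over longitude
    return (zonal * w).sum(axis=1) / w.sum()

# Difference between two northern bands (illustrative bounds only).
index = band_mean(temps, lats, 60, 80) - band_mean(temps, lats, 40, 60)
print(index.shape)  # one value per year
```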
A characterization of dual quermassintegrals and the roots of dual Steiner polynomials
For any finite with , we provide a characterization of those tuples of positive numbers which are dual quermassintegrals of two star bodies. It turns out that this problem is related to the moment problem. Based on this relation we also get new inequalities for the dual quermassintegrals. Moreover, the above characterization will be the key tool in order to investigate structural properties of the set of roots of dual Steiner polynomials of star bodies.
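For context, these are the standard definitions from Lutwak's dual Brunn-Minkowski theory on which such results rest; they are recalled here as background and are not the paper's own statements.

```latex
% K is a star body in R^n with radial function \rho(K,\cdot).
\begin{align*}
  % i-th dual quermassintegral:
  \widetilde{W}_i(K) &= \frac{1}{n}\int_{S^{n-1}} \rho(K,u)^{\,n-i}\,\mathrm{d}\sigma(u),\\
  % Dual Steiner polynomial: volume of the radial sum of K and \lambda B_n,
  % whose roots (as a polynomial in \lambda) are the objects studied above.
  \mathrm{vol}\bigl(K \mathbin{\widetilde{+}} \lambda B_n\bigr)
    &= \sum_{i=0}^{n} \binom{n}{i}\,\widetilde{W}_i(K)\,\lambda^{i},
    \qquad \lambda \ge 0.
\end{align*}
```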