
    SNPmplexViewer--toward a cost-effective traceability system

    BACKGROUND: Beef traceability has become mandatory in many regions of the world and is typically achieved through the use of unique numerical codes on ear tags and animal passports. DNA-based traceability instead uses the animal's own DNA code to identify it and the products derived from it. Using SNaPshot, a primer-extension-based method, a multiplex of 25 SNPs can be genotyped in a single reaction, reducing the expense of typing a panel of SNPs useful for identity control. FINDINGS: To further decrease SNaPshot's cost, we introduce the Perl script SNPmplexViewer, which facilitates the analysis of trace files for reactions performed without fluorescent size standards. SNPmplexViewer automatically aligns reference and target trace electropherograms, run with and without fluorescent size standards, respectively. It produces a modified target trace file containing a normalised trace in which the reference size standards are embedded, and also outputs aligned images of the two electropherograms together with a difference profile. CONCLUSIONS: Modified trace files generated by SNPmplexViewer enable genotyping of SNaPshot reactions performed without fluorescent size standards, using common fragment-sizing software packages. SNPmplexViewer's normalised output may also improve the genotyping software's performance. SNPmplexViewer is thus a general, free tool that reduces SNaPshot's cost and allows fast viewing and comparison of trace electropherograms for fragment analysis. It is available at http://cowry.agri.huji.ac.il/cgi-bin/SNPmplexViewer.cgi
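    The core alignment idea can be sketched roughly as follows. This is a minimal, hypothetical illustration (not SNPmplexViewer's actual Perl code): find the shift that best cross-correlates the target trace with the reference trace, then map the reference's size-standard peak positions into target coordinates. Function names, the shift-only alignment model, and the toy traces are all assumptions.

```python
# Hypothetical sketch: align a target electropherogram (run without size
# standards) to a reference trace (run with standards), then embed the
# reference's size-standard positions into the target's coordinate system.

def best_shift(reference, target, max_shift=50):
    """Return the integer shift s maximising cross-correlation
    between reference[i] and target[i + s]."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(reference[i] * target[i + s]
                    for i in range(len(reference))
                    if 0 <= i + s < len(target))
        if score > best_score:
            best, best_score = s, score
    return best

def embed_standards(standard_positions, shift):
    """Map reference size-standard peak positions into target coordinates."""
    return [p + shift for p in standard_positions]

# Toy traces: the target shows the same peak pattern shifted right by 3 points.
ref = [0, 0, 5, 9, 5, 0, 0, 7, 12, 7, 0, 0, 0, 0]
tgt = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0, 7, 12, 7, 0]
shift = best_shift(ref, tgt)
print(shift, embed_standards([2, 7], shift))
```

    In the real tool the alignment is between fluorescence traces and the embedded standards then let common fragment-sizing software calibrate the target run.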

    Implementation and performance of adaptive mesh refinement in the Ice Sheet System Model (ISSM v4.14)

    Accurate projections of the evolution of ice sheets in a changing climate require a fine mesh/grid resolution in ice sheet models to correctly capture fundamental physical processes, such as the evolution of the grounding line, the region where grounded ice starts to float. The evolution of the grounding line indeed plays a major role in ice sheet dynamics, as it is a fundamental control on marine ice sheet stability. Numerical modeling of a grounding line requires significant computational resources since the accuracy of its position depends on grid or mesh resolution. A technique that improves accuracy with reduced computational cost is the adaptive mesh refinement (AMR) approach. We present here the implementation of the AMR technique in the finite element Ice Sheet System Model (ISSM) to simulate grounding line dynamics under two different benchmarks: MISMIP3d and MISMIP+. We test different refinement criteria: (a) distance around the grounding line, (b) a posteriori error estimator, the Zienkiewicz–Zhu (ZZ) error estimator, and (c) different combinations of (a) and (b). In both benchmarks, the ZZ error estimator presents high values around the grounding line. In the MISMIP+ setup, this estimator also presents high values in the grounded part of the ice sheet, following the complex shape of the bedrock geometry. The ZZ estimator helps guide the refinement procedure such that AMR performance is improved. Our results show that computational time with AMR depends on the required accuracy, but in all cases, it is significantly shorter than for uniformly refined meshes. We conclude that AMR without an associated error estimator should be avoided, especially for real glaciers that have a complex bed geometry.
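    The two refinement criteria can be sketched in simplified 1-D form. This is a toy illustration, not ISSM code: criterion (a) flags elements within a chosen distance of the grounding line, and the second function is a crude gradient-jump indicator loosely in the spirit of an a posteriori error estimator such as ZZ. The element representation, function names, and numbers are assumptions.

```python
# Toy 1-D sketch of two AMR refinement criteria (not ISSM code).

def flag_for_refinement(centroids, grounding_line_x, distance=2.0):
    """Criterion (a): flag elements whose centroid lies within `distance`
    of the grounding-line position."""
    return [i for i, x in enumerate(centroids)
            if abs(x - grounding_line_x) <= distance]

def gradient_jump(values):
    """Crude stand-in for an a posteriori error indicator: the jump in the
    discrete gradient between neighbouring elements. Large jumps suggest
    under-resolved regions worth refining."""
    grads = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    return [abs(grads[i + 1] - grads[i]) for i in range(len(grads) - 1)]

centroids = [0.0, 1.5, 3.0, 4.5, 6.0, 7.5]       # toy element centroids
print(flag_for_refinement(centroids, grounding_line_x=4.0))
print(gradient_jump([0, 1, 2, 5, 8]))            # kink at the third node
```

    A real estimator recovers a smoothed gradient field and measures its deviation from the raw discrete gradient element by element, but the jump above conveys the idea of refining where the solution changes character.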

    Coupling computer-interpretable guidelines with a drug-database through a web-based system – The PRESGUID project

    BACKGROUND: Clinical Practice Guidelines (CPGs) available today are not extensively used, owing to a lack of proper integration into clinical settings and knowledge-related information resources, and a lack of decision support at the point of care in a particular clinical context. OBJECTIVE: The PRESGUID project (PREScription and GUIDelines) aims to improve the assistance provided by guidelines. The project proposes an online service enabling physicians to consult computerized CPGs linked to drug databases for easier integration into the healthcare process. METHODS: Computable CPGs are structured as decision trees and coded in XML format. Recommendations related to drug classes are tagged with ATC codes. We use a mapping module to couple computerized guidelines with a drug database, which contains detailed information about each usable specific medication. In this way, therapeutic recommendations are backed up with current and up-to-date information from the database. RESULTS: Two authoritative CPGs, originally disseminated as static textual documents, have been implemented to validate the computerization process and to illustrate the usefulness of the resulting automated CPGs and their coupling with a drug database. We discuss the advantages of this approach for practitioners and the implications for both guideline developers and drug database providers. Other CPGs will be implemented and evaluated in real conditions by clinicians working in different health institutions.
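    The coupling described in METHODS can be sketched as follows. This is a hypothetical illustration, not PRESGUID's actual schema: a guideline fragment is encoded as an XML decision node whose recommendations carry ATC codes, and the mapping step looks those codes up in a drug database. The XML layout, the rule, and the database contents are invented placeholders (C09AA is the real ATC class for plain ACE inhibitors).

```python
# Hypothetical sketch: an XML-coded guideline node whose recommendation is
# tagged with an ATC code, coupled to a toy drug database by code lookup.
import xml.etree.ElementTree as ET

GUIDELINE = """
<node question="Is the patient hypertensive?">
  <yes><recommendation atc="C09AA">ACE inhibitor</recommendation></yes>
  <no><recommendation atc="">No antihypertensive treatment</recommendation></no>
</node>
"""

# Toy stand-in for the drug database: ATC class -> usable specific medications.
DRUG_DB = {"C09AA": ["enalapril", "ramipril", "lisinopril"]}

def recommend(xml_text, answer):
    """Walk one decision node and couple its recommendation to the database."""
    node = ET.fromstring(xml_text)
    branch = node.find("yes" if answer else "no")
    rec = branch.find("recommendation")
    drugs = DRUG_DB.get(rec.get("atc"), [])   # ATC-code coupling step
    return rec.text, drugs

print(recommend(GUIDELINE, True))
```

    A full implementation would walk an arbitrary-depth tree and refresh the database lookup at display time, which is what keeps the recommendations up to date.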

    Decoding of Superimposed Traces Produced by Direct Sequencing of Heterozygous Indels

    Direct Sanger sequencing of a diploid template containing a heterozygous insertion or deletion results in a difficult-to-interpret mixed trace formed by two allelic traces superimposed onto each other. Existing computational methods for deconvolution of such traces require knowledge of a reference sequence or the availability of both direct and reverse mixed sequences of the same template. We describe a simple yet accurate method, which uses dynamic programming optimization to predict superimposed allelic sequences solely from a string of letters representing peaks within an individual mixed trace. We used the method to decode 104 human traces (mean length 294 bp) containing heterozygous indels of 5 to 30 bp, with a mean of 99.1% of bases per allelic sequence reconstructed correctly and unambiguously. Simulations with artificial sequences have demonstrated that the method yields accurate reconstructions when (1) the allelic sequences forming the mixed trace are sufficiently similar, (2) the analyzed fragment is significantly longer than the indel, and (3) multiple indels, if present, are well-spaced. Because these conditions occur in most encountered DNA sequences, the method is widely applicable. It is available as the free Web application Indelligent at http://ctap.inhs.uiuc.edu/dmitriev/indel.asp
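    The forward problem the method inverts can be sketched concretely. This toy example (an assumption for illustration, not the paper's algorithm) shows how a 1-bp heterozygous deletion turns the downstream sequence into a string of IUPAC ambiguity letters: the two alleles read in register up to the indel, then out of register afterwards. The decoding method works backwards from exactly such a mixed string.

```python
# Toy forward model: superimpose two allelic sequences into the mixed
# letter string a base caller would report, using IUPAC ambiguity codes.
IUPAC = {frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("AC"): "M",
         frozenset("GT"): "K", frozenset("AT"): "W", frozenset("CG"): "S"}

def superimpose(allele1, allele2):
    """Overlay two alleles position by position; disagreeing positions
    become IUPAC ambiguity letters (trailing overhang is dropped)."""
    return "".join(x if x == y else IUPAC[frozenset((x, y))]
                   for x, y in zip(allele1, allele2))

allele1 = "ACGTACGT"
allele2 = "ACGACGT"   # same sequence with a 1-bp deletion of the T at index 3
print(superimpose(allele1, allele2))
```

    Everything after the deletion point is ambiguous, which is why a naive read of the trace fails there; the dynamic programming step searches for the pair of plausible allelic sequences whose superposition best reproduces the observed mixed string.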

    Algorithms for optimizing drug therapy

    BACKGROUND: Drug therapy has become increasingly efficient, with more drugs available for treatment of an ever-growing number of conditions. Yet, drug use is reported to be suboptimal in several aspects, such as dosage, patient adherence and outcome of therapy. The aim of the current study was to investigate the possibility of optimizing drug therapy using computer programs available on the Internet. METHODS: One hundred and ten officially endorsed text documents, published between 1996 and 2004, containing guidelines for drug therapy in 246 disorders, were analyzed with regard to information about patient-, disease- and drug-related factors and relationships between these factors. This information was used to construct algorithms for identifying optimum treatment in each of the studied disorders. These algorithms were categorized in order to define as few models as possible that could still accommodate the identified factors and the relationships between them. The resulting program prototypes were implemented in HTML (user interface) and JavaScript (program logic). RESULTS: Three types of algorithms were sufficient for the intended purpose. The simplest type is a list of factors, each of which implies that the particular patient should or should not receive treatment. This is adequate in situations where only one treatment exists. The second type, a more elaborate model, is required when treatment can be provided using drugs from different pharmacological classes and the selection of drug class depends on patient characteristics. An easily implemented set of if-then statements was able to manage the identified information in such instances. The third type was needed in the few situations where the selection and dosage of drugs depended on the degree to which one or more patient-specific factors were present. In these cases the implementation of an established decision model based on fuzzy sets was required. Computer programs based on one of these three models could be constructed for all but one of the studied disorders. The single exception was depression, where reliable relationships between patient characteristics, drug classes and outcome of therapy remain to be defined. CONCLUSION: Algorithms for optimizing drug therapy can, with presumably rare exceptions, be developed for any disorder, using standard Internet programming methods.
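    The second algorithm type, a set of if-then statements selecting a drug class from patient characteristics, can be sketched as below. The clinical rules here are invented placeholders purely for illustration, not real prescribing guidance, and the original prototypes used JavaScript rather than the Python shown.

```python
# Sketch of the paper's second algorithm type: if-then statements that map
# patient characteristics to a drug class. The rules are invented placeholders.

def select_drug_class(patient):
    """Return a drug class from patient characteristics (toy rules only)."""
    if patient.get("pregnant"):
        return "methyldopa"                 # placeholder contraindication rule
    if patient.get("diabetic"):
        return "ACE inhibitor"              # placeholder comorbidity rule
    if patient.get("age", 0) >= 55:
        return "calcium channel blocker"    # placeholder age rule
    return "thiazide diuretic"              # placeholder default

print(select_drug_class({"age": 60}))
print(select_drug_class({"pregnant": True, "age": 60}))
```

    The first algorithm type reduces to a single such rule list with a yes/no outcome, while the third replaces the crisp conditions with fuzzy membership degrees that are combined before a threshold is applied.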

    Diverse M-Best Solutions by Dynamic Programming

    Many computer vision pipelines involve dynamic programming primitives such as finding a shortest path or the minimum energy solution in a tree-shaped probabilistic graphical model. In such cases, extracting not merely the best, but the set of the M best solutions is useful to generate a rich collection of candidate proposals that can be used in downstream processing. In this work, we show how the M best solutions of tree-shaped graphical models can be obtained by dynamic programming on a special graph with M layers. The proposed multi-layer concept is optimal for searching the M best solutions and flexible enough to also approximate the M best diverse solutions. We illustrate its usefulness with applications to object detection, panorama stitching and centerline extraction.
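    The flavour of M-best dynamic programming can be conveyed on a chain model. This simplified sketch keeps the M best partial costs per label at each position rather than building the paper's M-layer graph, which is an equivalent-in-spirit but different construction; the cost numbers are toy values.

```python
# Simplified M-best DP on a chain: keep the M best partial costs per label
# instead of only the single best, yielding the M best label sequences.
# (This is a beam-exact variant, not the paper's M-layer graph construction.)
import heapq

def m_best_chain(unary, pairwise, M):
    """unary: list over positions of {label: cost};
    pairwise(a, b): transition cost from label a to label b.
    Returns the M best (total_cost, label_sequence) pairs."""
    # beams[label] = M best (cost, path) partial solutions ending in label
    beams = {lab: [(c, [lab])] for lab, c in unary[0].items()}
    for t in range(1, len(unary)):
        new = {}
        for lab, c in unary[t].items():
            cands = [(pc + pairwise(plab, lab) + c, path + [lab])
                     for plab, entries in beams.items()
                     for pc, path in entries]
            new[lab] = heapq.nsmallest(M, cands)
        beams = new
    finals = [e for entries in beams.values() for e in entries]
    return heapq.nsmallest(M, finals)

unary = [{"a": 0, "b": 1}, {"a": 0, "b": 0}]     # toy per-position label costs
pairwise = lambda a, b: 0 if a == b else 1       # penalise label changes
print(m_best_chain(unary, pairwise, M=2))
```

    Keeping M entries per label is sufficient for exact M-best on a chain, because the m-th best full sequence restricted to any prefix must itself be among the M best prefixes ending in that label.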

    Estudios sefardíes dedicados a la memoria de Iacob M. Hassán (ź"l)

    Elena Romero and Aitor García Moreno are the editors of this volume. This work aims to honour Iacob M. Hassán, who set up, promoted, and for decades maintained the CSIC's School of Sephardic Studies (Escuela de Estudios Sefardíes) in Madrid. It comprises a collection of articles on the Jews in the medieval Spanish kingdoms, along with other articles on a wide variety of language issues, and the study and publication of literary works produced or handed down by the Sephardim of the Balkans and Morocco between the sixteenth and the twentieth centuries, such as biblical commentaries and lexicons, liturgical poetry, rabbinic literature, biographies, folk tales, popular folk songs, ballads, and modern songs. These studies also include an article by Iacob M. Hassán published here for the first time in the form of a facsimile of his original typed manuscript. The work is preceded by a foreword and an unpublished text of one of his lectures, which contains a wealth of autobiographical information, as well as his views on the vicissitudes of Sephardic Studies as an academic discipline.

    Protein Phosphatase Magnesium Dependent 1A (PPM1A) Plays a Role in the Differentiation and Survival Processes of Nerve Cells

    The serine/threonine phosphatase type 2C (PPM1A) has a broad range of substrates, and its role in regulating the stress response is well established. We have investigated the involvement of PPM1A in the survival and differentiation processes of PC6-3 cells, a subclone of the PC12 cell line. This cell line can differentiate into neuron-like cells upon exposure to nerve growth factor (NGF). Overexpression of PPM1A in naive PC6-3 cells caused cell cycle arrest at the G2/M phase followed by apoptosis. Interestingly, PPM1A overexpression did not affect fully differentiated cells. Using PPM1A-overexpressing cells and PPM1A knockdown cells, we show that this phosphatase affects NGF signaling in PC6-3 cells and is engaged in neurite outgrowth. In addition, the ablation of PPM1A interferes with NGF-induced growth arrest during differentiation of PC6-3 cells.