SNPmplexViewer--toward a cost-effective traceability system
Abstract. Background: Beef traceability has become mandatory in many regions of the world and is typically achieved through unique numerical codes on ear tags and animal passports. DNA-based traceability instead uses the animal's own DNA code to identify it and the products derived from it. Using SNaPshot, a primer-extension-based method, a multiplex of 25 SNPs can be genotyped in a single reaction, reducing the expense of genotyping a panel of SNPs useful for identity control. Findings: To further decrease SNaPshot's cost, we introduce the Perl script SNPmplexViewer, which facilitates the analysis of trace files for reactions performed without fluorescent size standards. SNPmplexViewer automatically aligns reference and target trace electropherograms, run with and without fluorescent size standards, respectively. It produces a modified target trace file containing a normalised trace in which the reference size standards are embedded, and also outputs aligned images of the two electropherograms together with a difference profile. Conclusions: Modified trace files generated by SNPmplexViewer enable genotyping of SNaPshot reactions performed without fluorescent size standards, using common fragment-sizing software packages. SNPmplexViewer's normalised output may also improve the genotyping software's performance. Thus, SNPmplexViewer is a free, general tool that reduces SNaPshot's cost and allows fast viewing and comparison of trace electropherograms for fragment analysis. SNPmplexViewer is available at http://cowry.agri.huji.ac.il/cgi-bin/SNPmplexViewer.cgi
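The alignment step described in the abstract can be pictured with a minimal Python sketch. This is illustrative only, not SNPmplexViewer's actual Perl code, and the traces, function names, and peak positions are invented: a brute-force cross-correlation finds the shift that best matches a target electropherogram to a reference, and the reference's size-standard peak positions are then carried into target coordinates.

```python
# Illustrative sketch (invented data and names, not SNPmplexViewer's code):
# align a target trace to a reference by brute-force cross-correlation,
# then embed the reference's size-standard peaks in target coordinates.

def best_shift(reference, target, max_shift):
    """Return the shift of `target` that best matches `reference`."""
    def score(shift):
        return sum(r * t for r, t in zip(reference, target[shift:]))
    return max(range(max_shift + 1), key=score)

def embed_size_standards(standard_peaks, shift):
    """Translate reference size-standard peak positions into target coordinates."""
    return [p + shift for p in standard_peaks]

reference = [0, 0, 5, 9, 5, 0, 0, 0, 4, 8, 4, 0]
target = [0, 0, 0, 0, 5, 9, 5, 0, 0, 0, 4, 8, 4, 0]  # same trace, shifted by 2
shift = best_shift(reference, target, max_shift=4)
print(shift)                                # 2
print(embed_size_standards([3, 9], shift))  # [5, 11]
```

A real trace would first need baseline subtraction and dye-channel separation, but the shift search is the core of embedding reference size standards into a target run without them.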
Implementation and performance of adaptive mesh refinement in the Ice Sheet System Model (ISSM v4.14)
Accurate projections of the evolution of ice sheets in a changing climate
require a fine mesh/grid resolution in ice sheet models to correctly capture
fundamental physical processes, such as the evolution of the grounding line,
the region where grounded ice starts to float. The evolution of the grounding
line indeed plays a major role in ice sheet dynamics, as it is a fundamental
control on marine ice sheet stability. Numerical modeling of a grounding line
requires significant computational resources since the accuracy of its
position depends on grid or mesh resolution. A technique that improves
accuracy with reduced computational cost is the adaptive mesh refinement
(AMR) approach. We present here the implementation of the AMR technique in
the finite element Ice Sheet System Model (ISSM) to simulate grounding line
dynamics under two different benchmarks: MISMIP3d and MISMIP+. We test
different refinement criteria: (a) distance around the grounding line, (b) an a
posteriori error estimator, the Zienkiewicz–Zhu (ZZ) error estimator, and
(c) different combinations of (a) and (b). In both benchmarks, the ZZ error
estimator presents high values around the grounding line. In the MISMIP+ setup,
this estimator also presents high values in the grounded
part of the ice sheet, following the complex shape of the bedrock geometry.
The ZZ estimator helps guide the refinement procedure such that AMR
performance is improved. Our results show that computational time with AMR
depends on the required accuracy, but in all cases, it is significantly
shorter than for uniformly refined meshes. We conclude that AMR without an
associated error estimator should be avoided, especially for real glaciers
that have a complex bed geometry.
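The refinement loop the abstract describes can be illustrated with a simplified 1-D sketch. This is not ISSM code; the interval mesh, toy error indicator, and threshold are invented. Elements whose indicator exceeds the threshold are bisected, so resolution concentrates where the indicator is large, here near a notional grounding line at x = 0.5, much as the ZZ estimator steers refinement toward the real grounding line.

```python
# Simplified 1-D illustration of error-driven AMR (not ISSM code; the
# indicator and threshold are invented): bisect every element whose error
# indicator exceeds the threshold, until all pass or max_passes is reached.

def refine(elements, indicator, threshold, max_passes=10):
    """elements: list of (a, b) intervals; indicator(a, b) -> estimated error."""
    for _ in range(max_passes):
        new, changed = [], False
        for a, b in elements:
            if indicator(a, b) > threshold:
                mid = 0.5 * (a + b)
                new.extend([(a, mid), (mid, b)])
                changed = True
            else:
                new.append((a, b))
        elements = new
        if not changed:
            break
    return elements

GROUNDING_LINE = 0.5  # toy location of high model error

def toy_indicator(a, b):
    """Error grows with element size and with proximity to the grounding line."""
    mid = 0.5 * (a + b)
    return (b - a) / (1e-3 + abs(mid - GROUNDING_LINE))

mesh = refine([(0.0, 1.0)], toy_indicator, threshold=1.0)
sizes = sorted(b - a for a, b in mesh)
print(len(mesh), sizes[0], sizes[-1])  # a small, graded mesh, finest near x = 0.5
```

The resulting mesh is graded: elements shrink geometrically toward x = 0.5 while the rest of the domain stays coarse, which is the computational saving AMR provides over uniform refinement.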
Coupling computer-interpretable guidelines with a drug-database through a web-based system – The PRESGUID project
BACKGROUND: Clinical Practice Guidelines (CPGs) available today are not extensively used, owing to a lack of proper integration into clinical settings, poor linkage to knowledge-related information resources, and a lack of decision support at the point of care in a particular clinical context. OBJECTIVE: The PRESGUID project (PREScription and GUIDelines) aims to improve the assistance provided by guidelines. The project proposes an online service enabling physicians to consult computerized CPGs linked to drug databases for easier integration into the healthcare process. METHODS: Computable CPGs are structured as decision trees and coded in XML format. Recommendations related to drug classes are tagged with ATC codes. We use a mapping module to couple the computerized guidelines with a drug database that contains detailed information about each usable specific medication. In this way, therapeutic recommendations are backed by current, up-to-date information from the database. RESULTS: Two authoritative CPGs, originally diffused as static textual documents, have been implemented to validate the computerization process and to illustrate the usefulness of the resulting automated CPGs and their coupling with a drug database. We discuss the advantages of this approach for practitioners and the implications for both guideline developers and drug database providers. Other CPGs will be implemented and evaluated in real conditions by clinicians working in different health institutions.
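The coupling idea, expanding an ATC-tagged recommendation into the specific medications a drug database files under that class, can be sketched as follows. The database contents and function name are invented for illustration; this is not the PRESGUID mapping module. Because ATC codes are hierarchical, class membership reduces to a prefix test.

```python
# Invented illustration of guideline/drug-database coupling (not the
# PRESGUID mapping module): a recommendation tagged with an ATC class code
# is expanded to the products the database files under that class.

DRUG_DB = {
    # full ATC code -> marketed products (illustrative entries)
    "C09AA02": ["enalapril 5 mg tablets"],
    "C09AA05": ["ramipril 2.5 mg capsules"],
    "C07AB03": ["atenolol 50 mg tablets"],
}

def expand_recommendation(atc_class):
    """Return every drug whose ATC code falls under the recommended class."""
    return [drug
            for code, drugs in DRUG_DB.items()
            if code.startswith(atc_class)
            for drug in drugs]

print(expand_recommendation("C09A"))  # the two ACE-inhibitor products
```

In the project's terms, this lookup is what keeps a therapeutic recommendation backed by whatever the drug database currently lists, rather than by drug names frozen into the guideline text.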
Decoding of Superimposed Traces Produced by Direct Sequencing of Heterozygous Indels
Direct Sanger sequencing of a diploid template containing a heterozygous insertion or deletion results in a difficult-to-interpret mixed trace formed by two allelic traces superimposed onto each other. Existing computational methods for deconvolution of such traces require knowledge of a reference sequence or the availability of both direct and reverse mixed sequences of the same template. We describe a simple yet accurate method, which uses dynamic programming optimization to predict superimposed allelic sequences solely from a string of letters representing peaks within an individual mixed trace. We used the method to decode 104 human traces (mean length 294 bp) containing heterozygous indels of 5 to 30 bp, with a mean of 99.1% of bases per allelic sequence reconstructed correctly and unambiguously. Simulations with artificial sequences demonstrated that the method yields accurate reconstructions when (1) the allelic sequences forming the mixed trace are sufficiently similar, (2) the analyzed fragment is significantly longer than the indel, and (3) multiple indels, if present, are well spaced. Because these conditions hold for most encountered DNA sequences, the method is widely applicable. It is available as a free Web application, Indelligent, at http://ctap.inhs.uiuc.edu/dmitriev/indel.asp
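The core observation can be demonstrated in a toy setting. The sketch below covers only one special case, a single-base heterozygous deletion with clean peaks, whereas the published Indelligent method uses dynamic programming and handles indels of 5 to 30 bp. Before the breakpoint the alleles agree, so the trace is single-peaked; after it, each peak set at position i is {allele1[i], allele1[i+1]}, so unrolling the chain from either candidate starting base enumerates the possible allele pairs and inconsistent starts die out.

```python
# Toy decoder for one special case (a single-base heterozygous deletion,
# clean peaks; the published method is far more general).  Each trace
# position carries the set of bases observed there.

def mix(allele1, b):
    """Mixed trace of allele1 and the allele with base b deleted."""
    allele2 = allele1[:b] + allele1[b + 1:]
    return [frozenset({x, y}) for x, y in zip(allele1, allele2)]

def decode(peaks):
    """Recover (allele1, allele2) pairs consistent with the mixed peaks.
    After the first double peak at b, each set is {a1[i], a1[i+1]}, so the
    chain unrolls from either candidate for a1[b]."""
    b = next(i for i, p in enumerate(peaks) if len(p) == 2)
    prefix = [next(iter(p)) for p in peaks[:b]]
    candidates = []
    for start in peaks[b]:
        a1, ok = prefix + [start], True
        for i in range(b, len(peaks)):
            p = peaks[i]
            if a1[i] not in p:
                ok = False  # this starting choice contradicts the trace
                break
            a1.append(next(iter(p - {a1[i]})) if len(p) == 2 else a1[i])
        if ok:
            candidates.append(("".join(a1), "".join(a1[:b] + a1[b + 1:])))
    return candidates

print(decode(mix("ACGTACGA", 3)))  # [('ACGTACGA', 'ACGACGA')]
```

The real method replaces this brute-force chain walk with dynamic programming so that longer indels, ambiguous peaks, and multiple well-spaced indels can be scored and resolved globally.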
Algorithms for optimizing drug therapy
BACKGROUND: Drug therapy has become increasingly efficient, with more drugs available for treatment of an ever-growing number of conditions. Yet, drug use is reported to be suboptimal in several respects, such as dosage, patients' adherence and outcome of therapy. The aim of the current study was to investigate the possibility of optimizing drug therapy using computer programs available on the Internet. METHODS: One hundred and ten officially endorsed text documents, published between 1996 and 2004 and containing guidelines for drug therapy in 246 disorders, were analyzed with regard to information about patient-, disease- and drug-related factors and the relationships between these factors. This information was used to construct algorithms for identifying the optimum treatment in each of the studied disorders. These algorithms were categorized in order to define as few models as possible that could still accommodate the identified factors and the relationships between them. The resulting program prototypes were implemented in HTML (user interface) and JavaScript (program logic). RESULTS: Three types of algorithms were sufficient for the intended purpose. The simplest type is a list of factors, each of which implies that the particular patient should or should not receive treatment. This is adequate in situations where only one treatment exists. The second, more elaborate type is required when treatment can be provided using drugs from different pharmacological classes and the selection of drug class depends on patient characteristics. An easily implemented set of if-then statements was able to manage the identified information in such instances. The third type was needed in the few situations where the selection and dosage of drugs depended on the degree to which one or more patient-specific factors were present. In these cases, the implementation of an established decision model based on fuzzy sets was required.
Computer programs based on one of these three models could be constructed for all but one of the studied disorders. The single exception was depression, where reliable relationships between patient characteristics, drug classes and outcome of therapy remain to be defined. CONCLUSION: Algorithms for optimizing drug therapy can, with presumably rare exceptions, be developed for any disorder using standard Internet programming methods.
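The second algorithm type, if-then selection of a drug class from patient characteristics, is the easiest to picture. The rules, factors, and class names below are invented for illustration and are not taken from any of the study's guideline documents.

```python
# Invented example of the study's second algorithm type: if-then rules
# mapping patient characteristics to a pharmacological class.  The rules
# and class names are illustrative, not from any analyzed guideline.

def select_drug_class(patient):
    """Return a drug class for a patient described by a dict of factors."""
    if patient.get("pregnant"):
        return "class A (safe in pregnancy)"
    if patient.get("asthma"):
        return "class B (avoids bronchoconstriction)"
    if patient.get("age", 0) >= 65:
        return "class C (preferred in the elderly)"
    return "class D (default first-line choice)"

print(select_drug_class({"age": 70}))       # class C (preferred in the elderly)
print(select_drug_class({"asthma": True}))  # class B (avoids bronchoconstriction)
```

The first, simpler type reduces to checking a list of treat/do-not-treat factors, while the third would replace hard thresholds such as "age >= 65" with fuzzy membership degrees combined by the decision model the abstract mentions.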
Diverse M-Best Solutions by Dynamic Programming
Many computer vision pipelines involve dynamic programming primitives such as finding a shortest path or the minimum-energy solution in a tree-shaped probabilistic graphical model. In such cases, extracting not merely the best but the set of M-best solutions is useful for generating a rich collection of candidate proposals for downstream processing. In this work, we show how the M-best solutions of tree-shaped graphical models can be obtained by dynamic programming on a special graph with M layers. The proposed multi-layer concept is optimal for the M-best search and flexible enough to also approximate M-best diverse solutions. We illustrate its usefulness with applications to object detection, panorama stitching and centerline extraction.
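The layered idea can be sketched for the simplest tree, a chain. The code below is a generic M-best Viterbi-style recursion, not the paper's exact multi-layer graph construction: at each (step, state) it keeps the M best partial costs instead of a single one, so the final merge yields the M lowest total costs.

```python
import heapq

# Generic M-best dynamic program on a chain model (a Viterbi-style sketch,
# not the paper's exact multi-layer construction): keep the M best partial
# costs per (step, state), then merge at the end.

def m_best(unary, pairwise, m):
    """unary[t][s]: cost of state s at step t; pairwise[s][s2]: transition
    cost s -> s2.  Returns the m lowest total costs over state sequences."""
    best = [[u] for u in unary[0]]  # best[s]: up to m partial costs ending in s
    for t in range(1, len(unary)):
        new = []
        for s2, u in enumerate(unary[t]):
            cands = [c + pairwise[s][s2] + u
                     for s, costs in enumerate(best)
                     for c in costs]
            new.append(heapq.nsmallest(m, cands))
        best = new
    return heapq.nsmallest(m, [c for costs in best for c in costs])

# Two steps, two states: the four exhaustive totals are {0, 3, 4, 5}.
print(m_best([[0, 1], [0, 2]], [[0, 3], [3, 0]], m=3))  # [0, 3, 4]
```

Keeping backpointers alongside the costs recovers the actual labelings, and a diversity penalty can be folded into the candidate scores before each truncation to approximate diverse M-best solutions.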
A High-End Estimate of Sea Level Rise for Practitioners
Sea level rise (SLR) is a long-lasting consequence of climate change, because the deep ocean and ice sheets take centuries to millennia to equilibrate with global anthropogenic warming. SLR projections based on climate models support policy analysis, risk assessment and adaptation planning today, despite their large uncertainties. The central range of the SLR distribution is estimated by process-based models. However, risk-averse practitioners often require information about plausible future conditions that lie in the tails of the SLR distribution, which are poorly defined by existing models. Here, a community effort combining scientists and practitioners builds on a framework of discussing physical evidence to quantify high-end global SLR for practitioners. The approach is complementary to the IPCC AR6 report and provides further physically plausible high-end scenarios. High-end estimates for the different SLR components are developed for two climate scenarios at two timescales. For global warming of +2°C in 2100 (RCP2.6/SSP1-2.6) relative to pre-industrial values, our high-end global SLR estimates are up to 0.9 m in 2100 and 2.5 m in 2300. Similarly, for a high-emission scenario (RCP8.5/SSP5-8.5), we estimate up to 1.6 m in 2100 and up to 10.4 m in 2300. The large and growing differences between the scenarios beyond 2100 emphasize the long-term benefits of mitigation. However, even a modest 2°C warming may cause multi-meter SLR on centennial time scales, with profound consequences for coastal areas. Earlier high-end assessments focused on instability mechanisms in Antarctica, while here we emphasize the importance of the timing of ice shelf collapse around Antarctica, which is highly uncertain owing to low understanding of the driving processes. Hence, both process understanding and the emission scenario control high-end SLR.
Estudios sefardíes dedicados a la memoria de Iacob M. Hassán (z"l)
Elena Romero and Aitor García Moreno are the editors of this volume. This work aims to honour Iacob M. Hassán, who set up, promoted, and for decades maintained the CSIC's School of Sephardic Studies (Escuela de Estudios Sefardíes) in Madrid. It comprises a collection of articles on the Jews in the medieval Spanish kingdoms, along with others on a wide variety of language issues and on the study and publication of literary works produced or handed down by the Sephardim of the Balkans and Morocco between the sixteenth and twentieth centuries: biblical commentaries and lexicons, liturgical poetry, rabbinic literature, biographies, folk tales, popular folk songs, ballads, modern songs, and more. The studies also include an article by Iacob M. Hassán published here for the first time as a facsimile of his original typed manuscript. The work is preceded by a foreword and by the unpublished text of one of his lectures, which contains a wealth of autobiographical information as well as his views on the vicissitudes of Sephardic Studies as an academic discipline.
initMIP-Antarctica: an ice sheet model initialization experiment of ISMIP6
Ice sheet numerical modeling is an important tool to estimate the dynamic contribution of the Antarctic ice sheet to sea level rise over the coming centuries. The influence of initial conditions on ice sheet model simulations, however, is still unclear. To better understand this influence, an initial state intercomparison exercise (initMIP) has been developed to compare, evaluate, and improve initialization procedures and estimate their impact on century-scale simulations. initMIP is the first set of experiments of the Ice Sheet Model Intercomparison Project for CMIP6 (ISMIP6), which is the primary Coupled Model Intercomparison Project Phase 6 (CMIP6) activity focusing on the Greenland and Antarctic ice sheets. Following initMIP-Greenland, initMIP-Antarctica has been designed to explore uncertainties associated with model initialization and spin-up and to evaluate the impact of changes in external forcings. Starting from the state of the Antarctic ice sheet at the end of the initialization procedure, three forward experiments are each run for 100 years: a control run, a run with a surface mass balance anomaly, and a run with a basal melting anomaly beneath floating ice. This study presents the results of initMIP-Antarctica from 25 simulations performed by 16 international modeling groups. The submitted results use different initial conditions and initialization methods, as well as different ice flow model parameters and reference external forcings. We find good agreement among model responses to the surface mass balance anomaly but large variations in responses to the basal melting anomaly. These variations can be attributed to differences in the extent of ice shelves and their upstream tributaries, in the numerical treatment of the grounding line, and in the initial ocean conditions applied, suggesting that ongoing efforts to better represent ice shelves in continental-scale models should continue.
Protein Phosphatase Magnesium Dependent 1A (PPM1A) Plays a Role in the Differentiation and Survival Processes of Nerve Cells
The serine/threonine phosphatase type 2C (PPM1A) has a broad range of substrates, and its role in regulating the stress response is well established. We have investigated the involvement of PPM1A in the survival and differentiation processes of PC6-3 cells, a subclone of the PC12 cell line that can differentiate into neuron-like cells upon exposure to nerve growth factor (NGF). Overexpression of PPM1A in naive PC6-3 cells caused cell cycle arrest at the G2/M phase followed by apoptosis. Interestingly, PPM1A overexpression did not affect fully differentiated cells. Using PPM1A-overexpressing cells and PPM1A-knockdown cells, we show that this phosphatase affects NGF signaling in PC6-3 cells and is engaged in neurite outgrowth. In addition, the ablation of PPM1A interferes with NGF-induced growth arrest during differentiation of PC6-3 cells.