ArrayWiki: an enabling technology for sharing public microarray data repositories and meta-analyses
© 2008 Stokes et al.; licensee BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. DOI: 10.1186/1471-2105-9-S6-S18. Background. A survey of microarray databases reveals that most repository contents and data models are heterogeneous (i.e., data obtained from different chip manufacturers), and that the repositories provide only basic biological keywords linking to PubMed. As a result, it is difficult to find datasets using research context or analysis-parameter information beyond a few keywords. For example, to reduce the "curse of dimensionality" problem in microarray analysis, the number of samples is often increased by merging array data from different datasets. Because the data are heterogeneous, it is essential to know chip data parameters such as pre-processing steps (e.g., normalization, artefact removal, etc.) and any previous biological validation of the dataset. However, most microarray repositories lack this meta-data in the first place, and have no mechanism to add or insert it. Thus, there is a critical need to create "intelligent" microarray repositories that (1) enable update of meta-data along with the raw array data, and (2) provide standardized archiving protocols to minimize bias from the raw data sources. Results. To address these problems, we have developed a community-maintained system called ArrayWiki that unites disparate meta-data of microarray meta-experiments from multiple primary sources with four key features. First, ArrayWiki provides a user-friendly knowledge management interface in addition to a programmable interface using standards developed by Wikipedia.
Second, ArrayWiki includes automated quality control processes (caCORRECT) and novel visualization methods (BioPNG, Gel Plots), which provide extra information about data quality unavailable in other microarray repositories. Third, it provides a user-curation capability through the familiar Wiki interface. Fourth, ArrayWiki provides users with simple text-based searches across all experiment meta-data, and exposes data to search engine crawlers (Semantic Agents) such as Google to further enhance data discovery. Conclusions. Microarray data and meta-information in ArrayWiki are distributed and visualized using a novel and compact data storage format, BioPNG. They are also open to the research community for curation, modification, and contribution. By making a small investment of time to learn the syntax and structure common to all sites running MediaWiki software, domain scientists and practitioners can all contribute to make better use of microarray technologies in research and medical practice. ArrayWiki is available at http://www.bio-miblab.org/arraywiki
caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts
© 2011 Moffitt et al.; licensee BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. DOI: 10.1186/1471-2105-12-383. Background. In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low-quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike-in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu. Results.
We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that occurs when any damaged chips are processed without caCORRECT; (2) when used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; and (3) biomarkers selected from artifactual microarray data that have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike-in and PCR validation experiments. Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with potential clinical utility, PRKAB1 and NNMT. Conclusions. caCORRECT is shown to improve the accuracy of gene expression, and the reproducibility of experimental results in clinical application. This study suggests that caCORRECT will be useful to clean up possible artifacts in new as well as archived microarray data
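The artifact-aware idea described in this abstract, masking outlier probe values so that a damaged chip cannot warp the normalization of the other chips, can be sketched in a toy form. This is an illustrative sketch only, not caCORRECT's actual model-based algorithm; the z-score threshold and median-scaling rule are assumptions chosen for simplicity:

```python
import numpy as np

def artifact_aware_normalize(intensity, z_thresh=3.0):
    """Toy artifact-aware normalization (NOT caCORRECT's actual method).

    intensity: 2-D array of probe intensities, shape (probes, chips).
    Values far from each probe's cross-chip median are flagged as outliers
    and excluded when computing per-chip scaling factors, so one damaged
    chip does not distort the normalization of the rest.
    Returns (normalized, outlier_mask).
    """
    x = np.asarray(intensity, dtype=float)
    # Residuals of each value from its probe's cross-chip median.
    probe_med = np.median(x, axis=1, keepdims=True)
    resid = x - probe_med
    z = resid / (np.std(resid) + 1e-12)
    mask = np.abs(z) > z_thresh          # flagged outliers (e.g., a spatial artifact)
    masked = np.where(mask, np.nan, x)   # ignore outliers during scaling
    # Scale each chip so its median over clean probes matches the global median.
    chip_med = np.nanmedian(masked, axis=0)
    target = np.nanmedian(masked)
    return x * (target / chip_med), mask
```

A blemished value stands out against its probe's behavior on the other chips, which is a crude stand-in for the spatial information caCORRECT actually exploits.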
EGFR interacts with the fusion protein of respiratory syncytial virus strain 2-20 and mediates infection and mucin expression.
Respiratory syncytial virus (RSV) is the major cause of viral lower respiratory tract illness in children. In contrast to the RSV prototypic strain A2, clinical isolate RSV 2-20 induces airway mucin expression in mice, a clinically relevant phenotype dependent on the fusion (F) protein of the RSV strain. Epidermal growth factor receptor (EGFR) plays a role in airway mucin expression in other systems; therefore, we hypothesized that the RSV 2-20 F protein stimulates EGFR signaling. Infection of cells with chimeric strains RSV A2-2-20F and A2-2-20GF or over-expression of 2-20 F protein resulted in greater phosphorylation of EGFR than infection with RSV A2 or over-expression of A2 F, respectively. Chemical inhibition of EGFR signaling or knockdown of EGFR resulted in diminished infectivity of RSV A2-2-20F but not RSV A2. Over-expression of EGFR enhanced the fusion activity of 2-20 F protein in trans. EGFR co-immunoprecipitated most efficiently with RSV F proteins derived from "mucogenic" strains. RSV 2-20 F and EGFR co-localized in H292 cells, and A2-2-20GF-induced MUC5AC expression was ablated by EGFR inhibitors in these cells. Treatment of BALB/c mice with the EGFR inhibitor erlotinib significantly reduced the amount of RSV A2-2-20F-induced airway mucin expression. Our results demonstrate that RSV F interacts with EGFR in a strain-specific manner, EGFR is a co-factor for infection, and EGFR plays a role in RSV-induced mucin expression, suggesting EGFR is a potential target for RSV disease
Informatics and Standards for Nanomedicine Technology
There are several issues to be addressed concerning the management and effective use of information (or data), generated from nanotechnology studies in biomedical research and medicine. These data are large in volume, diverse in content, and are beset with gaps and ambiguities in the description and characterization of nanomaterials. In this work, we have reviewed three areas of nanomedicine informatics: information resources; taxonomies, controlled vocabularies, and ontologies; and information standards. Informatics methods and standards in each of these areas are critical for enabling collaboration; data sharing; unambiguous representation and interpretation of data; semantic (meaningful) search and integration of data; and for ensuring data quality, reliability, and reproducibility. In particular, we have considered four types of information standards in this article, which are standard characterization protocols, common terminology standards, minimum information standards, and standard data communication (exchange) formats. Currently, because of gaps and ambiguities in the data, it is also difficult to apply computational methods and machine learning techniques to analyze, interpret, and recognize patterns in data that are high dimensional in nature, and also to relate variations in nanomaterial properties to variations in their chemical composition, synthesis, characterization protocols, and so on. Progress toward resolving the issues of information management in nanomedicine using informatics methods and standards discussed in this article will be essential to the rapidly growing field of nanomedicine informatics. This article is a U.S. Government work, and as such, is in the public domain in the United States of America
Abstract Reasoning and Friendship in High Functioning Preadolescents with Autism Spectrum Disorders
To investigate the relationship between cognitive and social functioning, 20 Israeli individuals with HFASD aged 8–12 and 22 age-, maternal education-, and receptive vocabulary-matched preadolescents with typical development (TYP) came to the lab with a close friend. Measures of abstract reasoning, friendship quality, and dyadic interaction during a play session were obtained. As hypothesized, individuals with HFASD were significantly impaired in abstract reasoning, and there were significant group differences in friend and observer reports of friendship quality. There also was consistency in reports between friends. Two factors, "relationship appearance" and "relationship quality," described positive aspects of the relationships. Disability status and age related to relationship appearance. Proband abstract reasoning was related to relationship quality
Knowledge synthesis of benefits and adverse effects of measles vaccination: the Lasbela balance sheet
Background. In preparation for a cluster-randomized controlled trial of a community intervention to increase the demand for measles vaccination in Lasbela district of Pakistan, a balance sheet summarized published evidence on benefits and possible adverse effects of measles vaccination. Methods. The balance sheet listed: 1) major health conditions associated with measles; 2) the risk among the unvaccinated who contract measles; 3) the risk among the vaccinated; 4) the risk difference between vaccinated and unvaccinated; and 5) the likely net gain from vaccination for each condition. Results. Two models revealed very different projections of net gain from measles vaccine. A Lasbela-specific combination of low period prevalence of measles among the unvaccinated, medium vaccination coverage and low vaccine efficacy rate, as revealed by the baseline survey, resulted in less-than-expected gains attributable to vaccination. Modelled on estimates where the vaccine had greater efficacy, the gains from vaccination would be more substantial. Conclusion. Specific local conditions probably explain the low rates among the unvaccinated while the high vaccine failure rate is likely due to weaknesses in the vaccination delivery system. Community perception of these realities may have had some role in household decisions about whether to vaccinate, although the major discouraging factor was inadequate access. The balance sheet may be useful as a communication tool in other circumstances, applied to up-to-date local evidence.
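Rows 2 through 5 of the balance sheet described above amount to simple risk arithmetic. A minimal sketch follows, using hypothetical risk figures rather than the Lasbela estimates; treating net gain as the risk difference minus an adverse-effect risk is an assumption made here for illustration:

```python
def balance_sheet_row(risk_unvaccinated, risk_vaccinated, risk_adverse=0.0, per=1000):
    """One balance-sheet row for a single measles-associated condition.

    risk_unvaccinated: risk of the condition among unvaccinated who contract measles.
    risk_vaccinated:   risk of the condition among the vaccinated.
    risk_adverse:      risk of a vaccine adverse effect (assumed component of net gain).
    Results are expressed per `per` people.
    """
    risk_difference = risk_unvaccinated - risk_vaccinated
    net_gain = risk_difference - risk_adverse
    return {
        "risk_unvaccinated": risk_unvaccinated * per,
        "risk_vaccinated": risk_vaccinated * per,
        "risk_difference": risk_difference * per,
        "net_gain": net_gain * per,
    }

# Hypothetical condition with made-up risks: 5% unvaccinated, 1% vaccinated,
# 0.1% adverse-effect risk.
row = balance_sheet_row(0.05, 0.01, risk_adverse=0.001)
```

With these made-up inputs the row reports a risk difference of 40 and a net gain of 39 per 1000 people, illustrating how the sheet lets each condition be compared on a common scale.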
Prospects for beyond the Standard Model physics searches at the Deep Underground Neutrino Experiment
The Deep Underground Neutrino Experiment (DUNE) will be a powerful tool for a variety of physics topics. The high-intensity proton beams provide a large neutrino flux, sampled by a near detector system consisting of a combination of capable precision detectors, and by the massive far detector system located deep underground. This configuration sets up DUNE as a machine for discovery, as it enables opportunities not only to perform precision neutrino measurements that may uncover deviations from the present three-flavor mixing paradigm, but also to discover new particles and unveil new interactions and symmetries beyond those predicted in the Standard Model (SM). Of the many potential beyond the Standard Model (BSM) topics DUNE will probe, this paper presents a selection of studies quantifying DUNE's sensitivities to sterile neutrino mixing, heavy neutral leptons, non-standard interactions, CPT symmetry violation, Lorentz invariance violation, neutrino trident production, dark matter from both beam induced and cosmogenic sources, baryon number violation, and other new physics topics that complement those at high-energy colliders and significantly extend the present reach
Comment: 55 pages, 40 figures; paper based on the DUNE Technical Design Report (arXiv:2002.03005)
- …