Semiconductor Electronic Label-Free Assay for Predictive Toxicology.
While animal experimentation has spearheaded numerous breakthroughs in biomedicine, it has also spawned many logistical concerns in providing toxicity screening for copious new materials. Prioritizing these materials is premised on performing cellular-level screening in vitro. Among the screening assays, a secretomic assay with high sensitivity, analytical throughput, and simplicity is of prime importance. Here, we build on over three decades of progress in transistor biosensing and develop a holistic assay platform and procedure called the semiconductor electronic label-free assay (SELFA). We demonstrate that SELFA, which incorporates an amplifying nanowire field-effect transistor biosensor, offers superior sensitivity, similar selectivity, and shorter turnaround time compared to the standard enzyme-linked immunosorbent assay (ELISA). We deploy SELFA secretomics to predict the inflammatory potential of eleven engineered nanomaterials in vitro, and validate the results with confocal microscopy in vitro and a confirmatory animal experiment in vivo. This work provides a foundation for high-sensitivity label-free assay utility in predictive toxicology.
A Distributed Solution to the PTE Problem
Proceedings of: AAAI Spring Symposium on Predictive Toxicology, AAAI Press, Stanford, March 1999. A wide panoply of machine learning methods is available for application to the Predictive Toxicology Evaluation (PTE) problem. The authors have built four monolithic classification systems based on Tilde, Progol, C4.5, and naive Bayesian classification. These systems have been trained using the PTE dataset, and their accuracy has been tested using the unseen PTE1 dataset as a test set. A Multi Agent Decision System (MADES) has been built using the aforementioned monolithic systems to build classification agents. The MADES was trained and tested with the same datasets used with the monolithic systems. Results show that the accuracy of the MADES improves on the accuracies obtained by the monolithic systems. We believe that in most real-world domains the combination of several approaches is stronger than the individuals. Introduction: The Predictive Toxicology Evaluation (PTE) Challenge (Srinivasan et al. 1997) was devised by the Oxford University Computing Laboratory to test the suitability ...
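The abstract does not specify how the MADES combines its classification agents; a simple majority vote over per-agent predictions is one common combination rule, sketched below. The agent labels and example predictions are illustrative stand-ins, not data from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-agent class labels by simple majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-compound predictions from four monolithic classifiers
# (stand-ins for the Tilde, Progol, C4.5 and naive Bayesian systems).
agent_predictions = ["carcinogen", "carcinogen", "non-carcinogen", "carcinogen"]
print(majority_vote(agent_predictions))  # carcinogen
```

More elaborate schemes (weighting agents by validation accuracy, for instance) follow the same shape, replacing the plain count with a weighted sum per class.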
Collaborative development of predictive toxicology applications
OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation, and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation, as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework, including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user-oriented tools to non-computational specialists, risk assessors, and toxicological experts, in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies, and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals.
Data Quality in Predictive Toxicology: Identification of Chemical Structures and Calculation of Chemical Descriptors
Every technique for toxicity prediction and for the detection of structure–activity relationships relies on the accurate estimation and representation of chemical and toxicologic properties. In this paper we discuss the potential sources of errors associated with the identification of compounds, the representation of their structures, and the calculation of chemical descriptors. It is based on a case study in which machine learning techniques were applied to data from noncongeneric compounds and a complex toxicologic end point (carcinogenicity). We propose methods applicable to the routine quality control of large chemical datasets, but our main intention is to raise awareness about this topic and to open a discussion about quality assurance in predictive toxicology. The accuracy and reproducibility of toxicity data will be reported in another paper.
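One routine check of the kind this abstract advocates for chemical-dataset quality control is validating the check digit of CAS Registry Numbers, which catches many transcription errors in compound identifiers. The checksum rule below is the published CAS convention; the code itself is an illustrative sketch, not the authors' pipeline.

```python
def cas_check_digit_ok(cas: str) -> bool:
    """Validate a CAS Registry Number's check digit.

    The check digit equals the weighted sum of the preceding digits
    (weights 1, 2, 3, ... counted right to left) modulo 10.
    """
    digits = cas.replace("-", "")
    body, check = digits[:-1], int(digits[-1])
    total = sum(int(d) * w for w, d in enumerate(reversed(body), start=1))
    return total % 10 == check

print(cas_check_digit_ok("71-43-2"))  # True  (benzene)
print(cas_check_digit_ok("71-43-3"))  # False (corrupted identifier)
```

Checks like this complement, rather than replace, structure-level validation such as canonicalizing structure representations and recomputing descriptors from them.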
Induced pluripotent stem cells in hematology: current and future applications
Reprogramming somatic cells into induced pluripotent stem (iPS) cells is now approaching practical efficiency and clinical grade. Potential uses of this technology include predictive toxicology, drug screening, pathogenetic studies, and transplantation. Here, we review the basis of current iPS cell technology and potential applications in hematology, ranging from disease modeling of congenital and acquired hemopathies to hematopoietic stem and other blood cell transplantation.
Data quality in predictive toxicology: reproducibility of rodent carcinogenicity experiments.
We compared 121 replicate rodent carcinogenicity assays from the two parts (National Cancer Institute/National Toxicology Program and literature) of the Carcinogenic Potency Database (CPDB) to estimate the reliability of these experiments. We estimated a concordance of 57% between the overall rodent carcinogenicity classifications from both sources. This value did not improve substantially when additional biologic information (species, sex, strain, target organs) was considered. These results indicate that rodent carcinogenicity assays are much less reproducible than previously expected, an effect that should be considered in the development of structure-activity relationship models and in the risk assessment process.
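The concordance statistic reported here is the fraction of replicated chemicals whose overall carcinogenicity call agrees between the two sources. A minimal sketch of that calculation, on toy data chosen to echo the reported 57% (the calls below are invented, not CPDB entries):

```python
def concordance(calls_a, calls_b):
    """Fraction of replicate pairs with the same overall classification."""
    assert len(calls_a) == len(calls_b)
    matches = sum(a == b for a, b in zip(calls_a, calls_b))
    return matches / len(calls_a)

# Hypothetical overall carcinogenicity calls ("+" positive, "-" negative)
# for the same chemicals as reported by NCI/NTP vs. the literature.
nci_ntp    = ["+", "+", "-", "+", "-", "-", "+"]
literature = ["+", "-", "-", "+", "+", "-", "-"]
print(round(concordance(nci_ntp, literature), 2))  # 0.57
```

Because concordance is driven by the base rate of positives as well as by assay noise, chance-corrected measures (e.g. Cohen's kappa) are often reported alongside the raw agreement fraction.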