How Does Science Come to Speak in the Courts? Citations, Intertexts, Expert Witnesses, Consequential Facts, and Reasoning
Citations, in their highly conventionalized forms, visibly indicate each text's explicit use of the prior literature that embodies the knowledge and contentions of its field. This relation to prior texts has been called intertextuality in literary and literacy studies. Here, Bazerman discusses citation practices and intertextuality in science and the law from a theoretical and historical perspective, and considers the intersection of science and law by identifying the judicial rules that limit and shape the role of scientific literature in court proceedings. He emphasizes that, from the historical and theoretical analysis, it is clear that in the US judicial reasoning is an intertextually tight and self-referring system that pays only limited attention to documents outside the laws, precedents, and judicial rules. The window for scientific literature to enter the courts is narrow, focused, and highly filtered. It serves as a warrant for the expert witnesses' expertise, which in turn makes their opinion admissible in a way not available to ordinary witnesses.
Using Machine Learning and Natural Language Processing to Review and Classify the Medical Literature on Cancer Susceptibility Genes
PURPOSE: The medical literature relevant to germline genetics is growing exponentially. Clinicians need tools for monitoring and prioritizing the literature to understand the clinical implications of pathogenic genetic variants. We developed and evaluated two machine learning models to classify abstracts as relevant to the penetrance (risk of cancer for germline mutation carriers) or prevalence of germline genetic mutations. METHODS: We conducted literature searches in PubMed and retrieved paper titles and abstracts to create an annotated dataset for training and evaluating the two machine learning classification models. Our first model is a support vector machine (SVM), which learns a linear decision rule based on the bag-of-ngrams representation of each title and abstract. Our second model is a convolutional neural network (CNN), which learns a complex nonlinear decision rule based on the raw title and abstract. We evaluated the performance of the two models on the classification of papers as relevant to penetrance or prevalence. RESULTS: For penetrance classification, we annotated 3740 paper titles and abstracts and used 60% for training the model, 20% for tuning the model, and 20% for evaluating the model. The SVM model achieves 89.53% accuracy (percentage of papers that were correctly classified), while the CNN model achieves 88.95% accuracy. For prevalence classification, we annotated 3753 paper titles and abstracts. The SVM model achieves 89.14% accuracy, while the CNN model achieves 89.13% accuracy. CONCLUSION: Our models achieve high accuracy in classifying abstracts as relevant to penetrance or prevalence. By facilitating literature review, this tool could help clinicians and researchers keep abreast of the burgeoning knowledge of gene-cancer associations and keep the knowledge bases for clinical decision support tools up to date.
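The SVM pipeline the abstract describes — a linear decision rule over a bag-of-ngrams representation of each title and abstract, with a 60/20/20 train/tune/test split — can be sketched roughly as follows. This is a minimal illustration using scikit-learn; the paper does not specify the library, the n-gram range, the term weighting, or the hyperparameters, so all of those choices (and the toy data in the usage example) are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the abstract's SVM approach: a linear SVM over
# bag-of-ngrams features of titles+abstracts, with a 60/20/20 split.
# N-gram range, TF-IDF weighting, and C are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def build_classifier(ngram_max=2, C=1.0):
    """Linear SVM on unigram+bigram features of each title+abstract."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, ngram_max), lowercase=True),
        LinearSVC(C=C),
    )


def split_60_20_20(texts, labels, seed=0):
    """60% train / 20% tune / 20% test, mirroring the abstract's split."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        texts, labels, test_size=0.4, random_state=seed, stratify=labels)
    X_tune, X_test, y_tune, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_tune, y_tune), (X_test, y_test)


if __name__ == "__main__":
    # Toy stand-in data; the real dataset was ~3740 annotated abstracts.
    texts = (
        [f"cancer risk penetrance in BRCA mutation carriers study {i}"
         for i in range(10)]
        + [f"hospital staffing cost and scheduling analysis report {i}"
           for i in range(10)]
    )
    labels = ["penetrance"] * 10 + ["other"] * 10

    (X_tr, y_tr), (X_tu, y_tu), (X_te, y_te) = split_60_20_20(texts, labels)
    clf = build_classifier()
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
```

The tuning split would be used to select hyperparameters such as the n-gram range or the SVM's regularization strength before the final evaluation on the held-out 20%.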