
    Additional concentrates do not affect feeding times of cows, but social positions of cows do

    In robotic milking dairy systems, lack of control over intakes can make it difficult to balance the forage and concentrate portions of the diet, which can lead to problems associated with high concentrate intakes and concomitantly low forage intakes. To examine whether this is a problem, the feeding behaviour of high- and low-yielding cows was observed: the number of daily visits to the feed barrier, the duration of those visits, and the time spent actually feeding. The cows were robot-milked and fed a ration comprising, separately, concentrate feed from a robot and a feeder, and a grass/clover silage mix forage at the feed barrier. Individual variation in visiting times and time spent at the feed barrier was greater than the effect of level of production. There was no evidence that cows with higher milk yields were differentially motivated to feed on forage, but more dominant cows spent more time feeding than submissive cows did.

    A Systematic Review of Research Studies Examining Telehealth Privacy and Security Practices Used By Healthcare Providers

    The objective of this review was to systematically examine papers from the United States on current privacy and security practices when telehealth technologies are used by healthcare providers. A literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P). PubMed, CINAHL and INSPEC were searched for the period 2003–2016 and returned 25,404 papers (after duplicates were removed). Inclusion and exclusion criteria were strictly applied to titles, abstracts, and full texts, yielding 21 published papers that reported on privacy and security practices used by healthcare providers delivering telehealth. Data on confidentiality, integrity, privacy, informed consent, access control, availability, retention, encryption, and authentication were extracted from the papers examined. Papers were selected by two independent reviewers applying the inclusion/exclusion criteria; where there was disagreement, a third reviewer was consulted. The percentage of agreement was 99.04% and Cohen's kappa was 0.7331. The papers reviewed ranged from 2004 to 2016 and included several types of telehealth specialties. Sixty-seven percent were policy-type studies, and 14 percent were survey/interview studies. There were no randomized controlled trials. Based upon these results, we conclude that more studies are needed with specific information about the privacy and security practices used with telehealth technologies, as well as studies that examine patient and provider preferences on how data are kept private and secure during and after telehealth sessions.
    Keywords: Computer security, Health personnel, Privacy, Systematic review, Telehealth
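    For reference, the inter-rater agreement statistic quoted above (Cohen's kappa) relates the observed agreement p_o to the agreement expected by chance p_e:

        \kappa = \frac{p_o - p_e}{1 - p_e}

    With the reported p_o = 0.9904 and \kappa = 0.7331, the implied chance agreement is roughly p_e ≈ 0.96, consistent with a screening task in which the vast majority of the 25,404 candidate papers are excluded by both reviewers.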

    Electric Polarizability of Neutral Hadrons from Lattice QCD

    By simulating a uniform electric field on a lattice and measuring the change in the rest mass, we calculate the electric polarizability of neutral mesons and baryons using the methods of quenched lattice QCD. Specifically, we measure the electric polarizability coefficient from the quadratic response to the electric field for 10 particles: the vector mesons $\rho^0$ and $K^{*0}$; the octet baryons $n$, $\Sigma^0$, $\Lambda_o^0$, $\Lambda_s^0$, and $\Xi^0$; and the decuplet baryons $\Delta^0$, $\Sigma^{*0}$, and $\Xi^{*0}$. Independent calculations using two fermion actions were done for consistency and comparison purposes. One calculation uses Wilson fermions with a lattice spacing of $a=0.10$ fm. The other uses tadpole-improved Lüscher-Weisz gauge fields and the clover quark action with a lattice spacing of $a=0.17$ fm. Our results for the neutron electric polarizability are compared to experiment.
    Comment: 25 pages, 20 figures
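    As context for the quadratic-response method described above, the electric polarizability $\alpha$ of a neutral hadron is conventionally defined through the leading-order shift of its rest mass in a weak, uniform external electric field $\mathcal{E}$ (sign conventions vary between papers):

        \delta m(\mathcal{E}) = -\tfrac{1}{2}\,\alpha\,\mathcal{E}^{2} + \mathcal{O}(\mathcal{E}^{4})

    so fitting the measured mass shift to a quadratic in the applied field yields $\alpha$.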

    Automated Fidelity Assessment for Strategy Training in Inpatient Rehabilitation using Natural Language Processing

    Strategy training is a multidisciplinary rehabilitation approach that teaches skills to reduce disability among those with cognitive impairments following a stroke. Strategy training has been shown in randomized, controlled clinical trials to be a more feasible and efficacious intervention for promoting independence than traditional rehabilitation approaches. A standardized fidelity assessment is used to measure adherence to treatment principles by examining guided and directed verbal cues in video recordings of rehabilitation sessions. Although the fidelity assessment for detecting guided and directed verbal cues is valid and feasible for single-site studies, it can become labor-intensive, time-consuming, and expensive in large, multi-site pragmatic trials. To address this challenge to widespread strategy training implementation, we leveraged natural language processing (NLP) techniques to automate the strategy training fidelity assessment, i.e., to automatically identify guided and directed verbal cues from video recordings of rehabilitation sessions. We developed a rule-based NLP algorithm, a long short-term memory (LSTM) model, and a Bidirectional Encoder Representations from Transformers (BERT) model for this task. The best performance was achieved by the BERT model, with an F1-score of 0.8075. This BERT model was verified on an external validation dataset collected from a separate major regional health system and achieved an F1-score of 0.8259, which shows that the BERT model generalizes well. The findings from this study hold widespread promise in psychology and rehabilitation intervention research and practice.
    Comment: Accepted at the AMIA Informatics Summit 202
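    The paper's model, label set, and training data are not detailed in the abstract; as an illustrative sketch of how a fine-tuned BERT-style sequence classifier could tag transcribed utterances as guided or directed verbal cues (the checkpoint name and labels below are placeholders, not the authors' setup):

```python
# Illustrative sketch only: the study's actual model, label set, and data
# are not described in the abstract. This assumes a BERT-style sequence
# classifier (via Hugging Face transformers) that tags therapist utterances
# as "guided cue", "directed cue", or "other".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; a fine-tuned checkpoint would be used
LABELS = ["other", "guided_cue", "directed_cue"]  # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)
model.eval()

def classify_utterances(utterances):
    """Return a predicted cue label for each transcribed utterance."""
    inputs = tokenizer(utterances, padding=True, truncation=True,
                       max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return [LABELS[i] for i in logits.argmax(dim=-1).tolist()]

print(classify_utterances(["What do you think your first step should be?",
                           "Pick up the cup with your left hand."]))
```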

    Scapegoat: John Dewey and the character education crisis

    Many conservatives, including some conservative scholars, blame the ideas and influence of John Dewey for what has frequently been called a crisis of character, a catastrophic decline in moral behavior in the schools and society of North America. Dewey’s critics claim that he is responsible for the undermining of the kinds of instruction that could lead to the development of character and the strengthening of the will, and that his educational philosophy and example exert a ubiquitous and disastrous influence on students’ conceptions of moral behavior. This article sets forth the views of some of these critics and juxtaposes them with what Dewey actually believed and wrote regarding character education. The juxtaposition demonstrates that Dewey neither called for nor exemplified the kinds of character-eroding pedagogy his critics accuse him of championing; in addition, this paper highlights the ways in which Dewey argued consistently and convincingly that the pedagogical approaches advocated by his critics are the real culprits in the decline of character and moral education

    Revealing the missing expressed genes beyond the human reference genome by RNA-Seq

    Background: A complete and accurate human reference genome is important for functional genomics research; an incomplete reference genome and individual-specific sequences can therefore affect a wide range of studies. Results: We used two RNA-Seq datasets, from human brain tissue and from 10 mixed cell lines, to investigate the completeness of the human reference genome. First, we demonstrated that, of the previously identified ~5 Mb of Asian and ~5 Mb of African novel sequences absent from the human reference genome of NCBI build 36, ~211 kb and ~201 kb, respectively, could be transcribed. Our results suggest that many of those transcribed regions are not specific to Asians and Africans but are also present in Caucasians. We then found that 104 RefSeq genes that cannot be aligned to NCBI build 37 are expressed above 0.1 RPKM in brain and the cell lines. Fifty-five of them are conserved across human, chimpanzee and macaque, suggesting that a significant number of functional human genes are still absent from the human reference genome. Moreover, we identified hundreds of novel transcript contigs that cannot be aligned to NCBI build 37, RefSeq genes or EST sequences; some of these novel transcript contigs are also conserved among human, chimpanzee and macaque. By positioning these contigs on the human genome, we identified several large deletions in the reference genome. Several conserved novel transcript contigs were further validated by RT-PCR. Conclusion: Our findings demonstrate that a significant number of genes are still absent from the incomplete human reference genome, highlighting the importance of further refining the human reference genome and curating the missing genes. Our study also shows the importance of de novo transcriptome assembly. The comparative, transcriptome-based approach between the reference genome and other related human genomes provides an alternative way to refine the human reference genome.
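    The 0.1 RPKM expression threshold above follows the standard reads-per-kilobase-of-transcript-per-million-mapped-reads definition; a minimal sketch of that calculation (the gene names, lengths, and read counts below are hypothetical) is:

```python
# Minimal RPKM sketch; the gene names, lengths, and counts below are
# hypothetical and only illustrate the standard definition:
# RPKM = (reads mapped to gene * 1e9) / (total mapped reads * gene length in bp)

def rpkm(gene_reads: int, gene_length_bp: int, total_mapped_reads: int) -> float:
    return gene_reads * 1e9 / (total_mapped_reads * gene_length_bp)

total_reads = 50_000_000  # hypothetical library size
genes = {"geneA": (120, 2_500), "geneB": (8, 1_200)}  # (mapped reads, length in bp)

for name, (reads, length) in genes.items():
    value = rpkm(reads, length, total_reads)
    status = "expressed" if value > 0.1 else "below threshold"
    print(f"{name}: {value:.3f} RPKM ({status})")
```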

    The EDKB: an established knowledge base for endocrine disrupting chemicals

    Background: Endocrine disruptors (EDs) and their broad range of potential adverse effects in humans and other animals have been a concern for nearly two decades. Many putative EDs are widely used in commercial products regulated by the Food and Drug Administration (FDA), such as food packaging materials, ingredients of cosmetics, medical and dental devices, and drugs. The Endocrine Disruptor Knowledge Base (EDKB) project was initiated in the mid-1990s by the FDA as a resource for the study of EDs. The EDKB database, a component of the project, contains data across multiple assay types for chemicals of broad structural diversity. This paper demonstrates the utility of the EDKB database, an integral part of the EDKB project, for understanding and prioritizing EDs for testing. Results: The EDKB database currently contains 3,257 records of over 1,800 EDs from different assays, including estrogen receptor binding, androgen receptor binding, uterotropic activity, cell proliferation, and reporter gene assays. Information for each compound (chemical structure, assay type, potency, etc.) is organized to enable efficient searching. A user-friendly interface provides rapid navigation, Boolean searches on EDs, and both spreadsheet and graphical displays for viewing results. The search engine implemented in the EDKB database enables searching by one or more of the following fields: chemical structure (including exact search and similarity search), name, molecular formula, CAS registry number, experiment source, molecular weight, etc. The data can be cross-linked to other publicly available and related databases, including TOXNET, Cactus, ChemIDplus, ChemACX, ChemFinder, and NCI DTP. Conclusion: The EDKB database enables scientists and regulatory reviewers to quickly access ED data from multiple assays for specific or similar compounds. The data have been used to categorize chemicals according to potential risks for endocrine activity, thus providing a basis for prioritizing chemicals for more definitive but expensive testing. The EDKB database is publicly available online at http://edkb.fda.gov/webstart/edkb/index.html. Disclaimer: The views presented in this article do not necessarily reflect those of the US Food and Drug Administration.
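    The EDKB's own similarity-search implementation is not described in the abstract; purely as an illustration of one common approach to structure-similarity searching (fingerprint comparison with a Tanimoto score), a minimal RDKit sketch with example SMILES might look like this:

```python
# Illustrative only: this is NOT the EDKB's implementation, just a generic
# fingerprint-based Tanimoto similarity search using RDKit. The SMILES
# strings are example structures chosen for demonstration.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(C)(c1ccc(O)cc1)c1ccc(O)cc1")  # bisphenol A
library = {
    "bisphenol S": "O=S(=O)(c1ccc(O)cc1)c1ccc(O)cc1",
    "phenol": "Oc1ccccc1",
}

query_fp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, smiles in library.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    print(name, round(DataStructs.TanimotoSimilarity(query_fp, fp), 3))
```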

    Gene Expression Profiles Distinguish the Carcinogenic Effects of Aristolochic Acid in Target (Kidney) and Non-target (Liver) Tissues in Rats

    BACKGROUND: Aristolochic acid (AA) is the active component of herbal drugs derived from Aristolochia species that have been used for medicinal purposes since antiquity. AA, however, induces nephropathy and urothelial cancer in humans and malignant tumors in the kidney and urinary tract of rodents. Although AA is bioactivated in both kidney and liver, it induces tumors only in the kidney. To evaluate whether microarray analysis can be used to distinguish the tissue-specific carcinogenicity of AA, we examined gene expression profiles in the kidney and liver of rats treated with carcinogenic doses of AA. RESULTS: Microarray analysis was performed using the Rat Genome Survey Microarray, and data analysis was carried out within ArrayTrack software. Principal components analysis and hierarchical cluster analysis of the expression profiles showed that samples grouped together according to tissue and treatment. The gene expression profiles were significantly altered by AA treatment in both kidney and liver (p < 0.01; fold change > 1.5). Functional analysis with Ingenuity Pathways Analysis showed many more significantly altered genes involved in cancer-related pathways in kidney than in liver. Analysis with the Gene Ontology for Functional Analysis (GOFFA) software also indicated that biological processes related to defense response, apoptosis and immune response were significantly altered by AA exposure in kidney, but not in liver. CONCLUSION: Our results suggest that microarray analysis is a useful tool for detecting AA exposure; that analysis of the gene expression profiles can define the differential responses of kidney and liver to the toxicity and carcinogenicity of AA; and that significant alteration of genes associated with defense response, apoptosis and immune response in kidney, but not in liver, may be responsible for the tissue-specific toxicity and carcinogenicity of AA.
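    The p < 0.01 and fold-change > 1.5 criteria above correspond to a standard two-filter gene selection; a minimal sketch of that filter (the study's analysis was done in ArrayTrack, and the expression matrices below are random placeholders, not the study's data) is:

```python
# Minimal sketch of a p < 0.01 and |fold change| > 1.5 differential-expression
# filter; the expression matrices here are random placeholders, not the
# study's microarray data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.lognormal(mean=5, sigma=0.3, size=(1000, 4))  # genes x control arrays
treated = rng.lognormal(mean=5, sigma=0.3, size=(1000, 4))  # genes x AA-treated arrays

p_values = stats.ttest_ind(treated, control, axis=1).pvalue
fold_change = treated.mean(axis=1) / control.mean(axis=1)

significant = (p_values < 0.01) & ((fold_change > 1.5) | (fold_change < 1 / 1.5))
print(f"{significant.sum()} genes pass the p < 0.01 and fold-change > 1.5 filter")
```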

    Effect of training-sample size and classification difficulty on the accuracy of genomic predictors

    Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection method and classification algorithm may influence the performance of genomic predictors under varying degrees of prediction difficulty, represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations in the univariate feature-selection method and the choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
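    The abstract does not list the specific feature-selection methods or classifiers used; purely as a generic illustration of the "univariate filter + classifier, performance estimated by resampling" pattern described above, a scikit-learn sketch on synthetic data might look like this:

```python
# Generic illustration of the pattern described above (univariate feature
# selection + classifier, performance estimated by resampling). The data
# are synthetic; MAQC-II's actual methods and endpoints are not reproduced.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=230, n_features=2000, n_informative=30,
                           random_state=0)  # synthetic stand-in for expression data

predictor = make_pipeline(
    SelectKBest(f_classif, k=50),       # univariate filter, fit on training folds only
    LogisticRegression(max_iter=1000),  # one of many possible classifiers
)

scores = cross_val_score(predictor, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```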