
    Research data management and integrity practices

    Presented at the National Data Integrity Conference: Enabling Research: New Challenges & Opportunities, held May 7-8, 2015 at Colorado State University, Fort Collins, Colorado. Researchers, administrators, and integrity officers are encountering new challenges regarding research data and integrity. This conference aims to provide attendees with both a high-level understanding of these challenges and practical tools and skills to deal with them. Topics will include data reproducibility, validity, privacy, security, visualization, reuse, access, preservation, rights, and management.

    William Trenkle serves as a Scientist-Investigator in the Division of Investigative Oversight, Office of Research Integrity (ORI), United States Department of Health and Human Services. Dr. Trenkle received his B.S. from Alma College and his Ph.D. in Organic Chemistry from the University of California, Irvine, and was an NIH NRSA postdoctoral fellow at Harvard University. Upon completing his postdoctoral training, he began his independent career as a professor in the Chemistry Department at Brown University. Prior to joining ORI, he served as Director of the Chemical Biology Core Facility in the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) and as a Program Director with the Division of Pharmacology, Physiology and Biological Chemistry in the National Institute of General Medical Sciences (NIGMS). At ORI, Dr. Trenkle is the chemistry subject matter expert and consults on forensic analysis of images, electronic evidence, and computer files. He was recently appointed to the new National Institute of Standards and Technology Organization of Scientific Area Committees (OSAC) for Forensic Science as a member of the Imaging Technology (IT) Subcommittee. The OSAC IT Subcommittee held its first meeting in January 2015 and will provide direction to the Forensic Science Standards Board on the development and enactment of standards related to the application of technologies and systems to capture, store, process, analyze, transmit, produce, and archive images.

    PowerPoint presentation given on May 8, 2015.

    The ERAD Inhibitor Eeyarestatin I Is a Bifunctional Compound with a Membrane-Binding Domain and a p97/VCP Inhibitory Group

    Protein homeostasis in the endoplasmic reticulum (ER) has recently emerged as a therapeutic target for cancer treatment. Disruption of ER homeostasis results in ER stress, which is a major cause of cell death in cells exposed to the proteasome inhibitor Bortezomib, an anti-cancer drug approved for treatment of multiple myeloma and mantle cell lymphoma. We recently reported that the ERAD inhibitor Eeyarestatin I (EerI) also disturbs ER homeostasis and has anti-cancer activities resembling those of Bortezomib.

    Here we developed in vitro binding and cell-based functional assays to demonstrate that a nitrofuran-containing (NFC) group in EerI is the functional domain responsible for the cytotoxicity. Using both SPR and pull-down assays, we show that EerI directly binds the p97 ATPase, an essential component of the ERAD machinery, via the NFC domain. An aromatic domain in EerI, although not required for p97 interaction, can localize EerI to the ER membrane, which improves its target specificity. Substitution of the aromatic module with another benzene-containing domain that maintains membrane localization generates a structurally distinct compound that nonetheless has similar biologic activities as EerI.

    Our findings reveal a class of bifunctional chemical agents that can preferentially inhibit membrane-bound p97 to disrupt ER homeostasis and to induce tumor cell death. These results also suggest that the AAA ATPase p97 may be a potential drug target for cancer therapeutics.

    N-gram-based text categorization

    Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system is small, fast, and robust. This system worked very well for language classification, achieving in one test a 99.8% correct classification rate on Usenet newsgroup articles written in different languages. The system also worked reasonably well for classifying articles from a number of different computer-oriented newsgroups according to subject, achieving as high as an 80% correct classification rate. There are also several obvious directions for improving the system's classification performance in those cases where it did not do as well.

    The system is based on calculating and comparing profiles of N-gram frequencies. First, we use the system to compute profiles on training set data that represent the various categories, e.g., language samples or newsgroup content samples. Then the system computes a profile for a particular document that is to be classified. Finally, the system computes a distance measure between the document's profile and each of the category profiles.
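
    The profile-and-distance procedure described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a rank-based "out-of-place" distance between profiles and uses tiny toy training samples, where a real system would train on substantial category corpora.

```python
from collections import Counter

def ngrams(text, n_min=1, n_max=3):
    """Yield character N-grams of a lowercased, space-padded string."""
    padded = f" {text.lower()} "
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            yield padded[i:i + n]

def profile(text, top=300):
    """Map each of the most frequent N-grams to its frequency rank."""
    counts = Counter(ngrams(text))
    return {g: rank for rank, (g, _) in enumerate(counts.most_common(top))}

def distance(doc_profile, cat_profile):
    """Sum of rank differences; N-grams absent from the category
    profile incur a fixed maximum penalty."""
    penalty = len(cat_profile)
    return sum(abs(rank - cat_profile.get(g, penalty))
               for g, rank in doc_profile.items())

def classify(text, categories):
    """Pick the category whose profile is nearest the document's."""
    doc = profile(text)
    return min(categories, key=lambda c: distance(doc, categories[c]))
```

    Because the profiles are built from short character N-grams rather than whole words, an OCR error or misspelling perturbs only a handful of N-grams, which is what makes the approach tolerant of textual errors.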