
    Chemoinformatics Research at the University of Sheffield: A History and Citation Analysis

    This paper reviews the work of the Chemoinformatics Research Group in the Department of Information Studies at the University of Sheffield, focusing particularly on the work carried out in the period 1985-2002. Four major research areas are discussed, involving the development of methods for: substructure searching in databases of three-dimensional structures, including both rigid and flexible molecules; the representation and searching of the Markush structures that occur in chemical patents; similarity searching in databases of both two-dimensional and three-dimensional structures; and compound selection and the design of combinatorial libraries. An analysis of citations to 321 publications from the Group shows that it attracted a total of 3725 residual citations during the period 1980-2002. These citations appeared in 411 different journals and involved 910 different citing organizations from 54 different countries, demonstrating the widespread impact of the Group's work.
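    The similarity searching described in this abstract is commonly implemented by comparing binary structural fingerprints with the Tanimoto coefficient. A minimal sketch of the idea, with invented fingerprints and molecule names rather than data from the Group's systems:

```python
# Illustrative sketch of 2D similarity searching: molecules are represented
# as fingerprints (here, sets of "on" bit indices for substructure features)
# and ranked against a query by the Tanimoto coefficient.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto coefficient: |A & B| / |A | B| for two feature sets."""
    if not fp_a and not fp_b:
        return 1.0  # two empty fingerprints are conventionally identical
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical database: each set holds the indices of set bits.
database = {
    "mol_1": {1, 2, 5, 8},
    "mol_2": {1, 2, 5, 9},
    "mol_3": {3, 4, 7},
}
query = {1, 2, 5, 8}

# Rank database molecules by decreasing similarity to the query.
ranked = sorted(database, key=lambda m: tanimoto(query, database[m]),
                reverse=True)
```

    The same ranking scheme applies to 3D searching once a 3D descriptor replaces the 2D fingerprint.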

    Description and Experience of the Clinical Testbeds

    This deliverable describes the up-to-date technical environment at three clinical testbed demonstrator sites of the 6WINIT Project, including the adapted clinical applications, project components and network transition technologies in use at these sites after 18 months of the Project. It also provides an interim description of early experiences with the deployment and usage of these applications, components and technologies, and of their clinical service impact.

    Opportunities in biotechnology

    CRISPR Screens in Synthetic Lethality and Combinatorial Therapies for Cancer

    Cancer is a complex disease resulting from the accumulation of genetic dysfunctions. Tumor heterogeneity causes the molecular variety that divergently controls responses to chemotherapy, leading to the persistent problem of cancer recurrence. For many decades, efforts have focused on identifying essential tumoral genes and cancer driver mutations. More recently, prompted by the clinical success of the synthetic lethality (SL)-based therapy of PARP inhibitors in homologous recombination-deficient tumors, scientists have centered their research on SL interactions (SLI). The current state of the art for finding new genetic interactions is large-scale forward genetic CRISPR screens. CRISPR technology has rapidly evolved into a common tool in the vast majority of laboratories, as the tools to implement CRISPR screen protocols are available to all researchers. Taking advantage of SLI, combinatorial therapies have become the ultimate model for treating cancer with lower toxicity and therefore better efficacy. This review explores the CRISPR screen methodology, integrates the up-to-date published findings on CRISPR screens in the cancer field, and proposes future directions to uncover cancer regulation and individual responses to chemotherapy.

    Genome engineering and plant breeding : impact on trait discovery and development

    Key message: New tools for the precise modification of crop genes are now available for the engineering of new ideotypes. A future challenge in this emerging field of genome engineering is to develop efficient methods for allele mining. Abstract: Genome engineering tools are now available in plants, including major crops, to modify a given gene in a predictable manner. These new techniques have tremendous potential for a spectacular acceleration of the plant breeding process. Here, we discuss how genetic diversity has always been the raw material for breeders and how they have always taken advantage of the best available science to use and, when possible, increase this genetic diversity. We present why the advent of these new techniques gives breeders extremely powerful tools for crop breeding, but also why it will require breeders and researchers to characterize the genes underlying this genetic diversity more precisely. Tackling these challenges should permit the engineering of optimized allele assortments in an unprecedented and controlled way.

    In silico design and analysis of targeted genome editing with CRISPR

    CRISPR/Cas systems have become a tool of choice for targeted genome engineering in recent years. Scientists around the world want to accelerate their research with the use of CRISPR/Cas systems, but are slowed down by the need to understand the technology and the computational steps needed for design and analysis. However, bioinformatics tools for the design and analysis of CRISPR experiments are being created to aid those scientists. For the design of CRISPR targeted genome editing experiments, CHOPCHOP has become one of the most cited and most used tools. After the initial publication of CHOPCHOP, our understanding of the CRISPR system underwent a scientific evolution. I therefore updated CHOPCHOP to accommodate the latest discoveries, such as designs for nickase and isoform targeting and machine learning algorithms for efficiency scoring and repair profile prediction, in addition to many others. At the other end of the genome engineering workflow, there is a need for analysis of the data and validation of mutants. For the analysis of CRISPR targeted genome editing experiments, I created ampliCan, an R package that, with the use of ‘editing aware’ alignment and automated normalization, performs precise estimation of editing efficiencies for thousands of CRISPR experiments. I have benchmarked ampliCan to demonstrate its strengths at handling a variety of editing indels, filtering out contaminant reads, and estimating HDR editing. Both of these tools were developed with the idea that biologists without a deep understanding of CRISPR should be able to use them, while seasoned experts can adjust the settings for their purposes. I hope that these tools will facilitate the adoption of CRISPR systems for targeted genome editing and indirectly enable great discoveries in the future.
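    The editing-efficiency estimation that the abstract describes can be illustrated in a much-simplified form: classify each amplicon read as edited if it carries an indel near the expected cut site, then report the edited fraction. The reads, cut site and window below are invented for illustration; the real ampliCan package additionally performs ‘editing aware’ alignment and normalization against control samples.

```python
# Toy editing-efficiency estimate for one CRISPR amplicon experiment.
# Each read is summarized as a list of (position, kind) alignment events,
# with kind in {"ins", "del", "mismatch"}. Mismatches alone are not
# counted as editing; only indels near the cut site are.

CUT_SITE = 17   # hypothetical cut position within the amplicon
WINDOW = 5      # count indels within +/- WINDOW bases of the cut site

reads = [
    [(16, "del")],                    # deletion near cut site  -> edited
    [(40, "ins")],                    # indel far from cut site -> not edited
    [],                               # perfect read            -> not edited
    [(18, "ins"), (2, "mismatch")],   # insertion near cut site -> edited
]

def is_edited(events):
    """A read counts as edited if any indel falls inside the window."""
    return any(kind in ("ins", "del") and abs(pos - CUT_SITE) <= WINDOW
               for pos, kind in events)

edited = sum(is_edited(r) for r in reads)
efficiency = edited / len(reads)   # fraction of edited reads
```

    In practice the window, the handling of mismatches, and contaminant-read filtering all materially change the estimate, which is why dedicated tools exist.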

    Development of a context-specific search engine, an executive information system, and a novel www ready external cost model

    NJPIES is associated with Information Ecology and Sustainability, a holistic approach to environmental data collection, compilation, integration and provision that puts people, not technology, at the center of the environmental information world. The first main goal of this project was to develop an algorithm and associated computer-based tool that could perform a lifecycle cost analysis for a model system. The application developed solved the primary problem associated with the lifecycle cost analysis of a product: it accounted for all costs of the activity, including environmental costs such as the ecological and health costs associated with emissions. A lifecycle cost analysis attempts to identify, measure, and quantify the social costs of human activities, such as manufacturing, that are not considered in traditional accounting systems. The application quantifies, monetizes, and ranks the damage, or external costs, that certain types of emissions impose on the environment. We developed a preliminary algorithm and software and implemented them at two plants: the load assembly pack operation at the Iowa Army Ammunition Plant (IAAAP) and Armtec, a manufacturer of combustible cartridge cases. The second main goal of this project was to act as a credible information clearinghouse on pollution prevention (P2) and related environmental matters, and to educate the public and keep them informed of developments in the environmental/manufacturing world. Intelligent search engines have been built to access these huge databases in human-readable format and correlate the data with various reports, providing information on environmentally hazardous chemicals, releases, and facilities in different regions. The third main goal was the enhancement of EnviroDaemon with a hierarchical information search interface.
    This project describes approaches that locate information according to syntactic criteria, augmented by pragmatic aspects such as the utilization of information in a certain context. The main emphasis of the project lies in the treatment of structured knowledge, where essential aspects of the topic of interest are encoded not only by the individual items but also by the relationships among them. The benefits of this approach are enhanced precision and approximate search in an already focused, context-specific search engine for the environment.

    Experimental Design in Game Testing

    The gaming industry has been on a constant rise over the last few years. Companies invest huge amounts of money in the release of their games, and a part of this money is invested in testing the games. Current game testing methods involve the manual execution of pre-written test cases, each of which may or may not reveal a bug. In a game, a bug is said to occur when the game does not behave according to its intended design. The process of writing test cases to test games requires standardization, and we believe that this standardization can be achieved by applying experimental design to video game testing. In this thesis, we discuss the application of combinatorial testing to games. Combinatorial testing is a method of experimental design that is used to generate test cases and is primarily used for commercial software testing. In addition to discussing the implementation of combinatorial testing techniques in video game testing, we present a method for finding the combinations that result in video game bugs.
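    The pairwise flavor of combinatorial testing the abstract refers to can be sketched with a greedy covering loop: pick, at each step, the candidate test case that covers the most not-yet-covered value pairs. The game parameters below are invented examples, not taken from the thesis.

```python
# Greedy pairwise (2-way combinatorial) test-case generation.
from itertools import combinations, product

# Hypothetical game settings to be tested in combination.
parameters = {
    "character": ["knight", "mage"],
    "level": ["forest", "castle", "cave"],
    "difficulty": ["easy", "hard"],
}
names = list(parameters)

def pairs_of(case):
    """All (param, value) pairs covered by one concrete test case."""
    return {frozenset([(a, case[a]), (b, case[b])])
            for a, b in combinations(names, 2)}

# Every value pair that must be covered at least once.
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(parameters[a], parameters[b]):
        uncovered.add(frozenset([(a, va), (b, vb)]))

# All exhaustive test cases (2 * 3 * 2 = 12 here) serve as candidates.
candidates = [dict(zip(names, vals)) for vals in product(*parameters.values())]

suite = []
while uncovered:
    best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)
# suite covers every pair, typically with far fewer cases than the
# exhaustive cartesian product.
```

    Real tools use more sophisticated covering-array constructions, but the greedy loop shows why pairwise suites shrink so sharply as the number of parameters grows.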