
    Lessons learnt on the analysis of large sequence data in animal genomics

    The ’omics revolution has made a large amount of sequence data available to researchers and the industry. This has had a profound impact on the field of bioinformatics, stimulating unprecedented advances in the discipline. This impact is usually viewed from the perspective of human ’omics, in particular human genomics. Plant and animal genomics, however, have also been deeply influenced by next‐generation sequencing technologies, with several genomics applications now popular among researchers and the breeding industry. Genomics tends to generate huge amounts of data, and genomic sequence data account for an increasing proportion of big data in the biological sciences, due largely to decreasing sequencing and genotyping costs and to large‐scale sequencing and resequencing projects. The analysis of big data poses a challenge to scientists, as data gathering currently takes place at a faster pace than data processing and analysis, and the associated computational burden is increasingly taxing, making even simple manipulation, visualization and transfer of data cumbersome. The time consumed by processing and analysing huge data sets may come at the expense of data quality assessment and critical interpretation. Additionally, when analysing large amounts of data, something is likely to go awry: the software may crash or stop, and tracking down the error can be very frustrating. We herein review the most relevant issues related to tackling these challenges and problems from the perspective of animal genomics, and provide researchers who lack extensive computing experience with guidelines that will help when processing large genomic data sets.
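    The review's guidelines are not reproduced in this abstract; as a general illustration of the kind of computational burden described above, the following minimal Python sketch streams a compressed VCF file line by line and tallies records per chromosome, rather than loading the whole file into memory. The file name variants.vcf.gz and the per-chromosome tally are hypothetical choices for illustration, not taken from the cited work.

        import gzip
        from collections import Counter

        # Hypothetical input file; any gzip/bgzip-compressed VCF is read the same way.
        VCF_PATH = "variants.vcf.gz"

        def count_records_per_chrom(path):
            """Stream a compressed VCF and count variant records per chromosome,
            keeping memory use constant regardless of file size."""
            counts = Counter()
            with gzip.open(path, "rt") as handle:
                for line in handle:
                    if line.startswith("#"):      # skip header and meta-information lines
                        continue
                    chrom = line.split("\t", 1)[0]  # CHROM is the first tab-separated field
                    counts[chrom] += 1
            return counts

        if __name__ == "__main__":
            for chrom, n in sorted(count_records_per_chrom(VCF_PATH).items()):
                print(f"{chrom}\t{n}")

    Streaming a record at a time in this way is one common workaround when a data set is too large to manipulate, visualize or transfer comfortably as a single in-memory object.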

    Uncertainty and innovation: Understanding the role of cell-based manufacturing facilities in shaping regulatory and commercialization environments

    The purpose of this qualitative study is to elucidate stakeholder perceptions of, and institutional practices related to, cell-based therapy and product (CTP) regulation and commercialization in Canada. The development of reproducible, safe and effective CTPs is predicated on regulatory and commercialization environments that enable innovation. Manufacturing processes constitute a critical step for CTP development in this regard. The road from CTP manufacturing to translation in the clinic, however, has yet to be paved. This study aims to fill an empirical gap in the literature by exploring how CTP manufacturing facilities navigate Canadian regulatory and commercialization environments, which together drive the translation of novel CTPs from bench to bedside. Using the multi-level model of practice-driven institutional change proposed by Smets et al., we demonstrate how CTP manufacturing practices are governed by established standards, yet meaningfully shape higher-order regulatory and commercial norms in CTP research and development. We identify four key themes that undergird such processes of innovation: 1) managing regulatory uncertainty, which stems from an inability to classify CTPs within existing regulatory categories for approval and commercialization purposes; 2) building a ‘business case’ whereby a CTP's market potential is determined in large part by proving its safety and effectiveness; 3) standardizing manufacturing procedures that move CTPs from the research and development phase to commercialization; and 4) networking between researchers and regulators to develop responsible commercialization processes that reflect the uniqueness of CTPs as distinct from other biologics and medical devices.