    Solid-Solid Interfacial Contact of Tubing Walls Drives Therapeutic Protein Aggregation During Peristaltic Pumping

    Peristaltic pumping during bioprocessing can cause loss and aggregation of therapeutic proteins. Because of the complexity of the apparatus, the root-cause mechanisms behind protein loss have long been sought. We have developed new methodologies that isolate individual peristaltic pump mechanisms to determine their effect on monomer loss. Closed loops of peristaltic tubing were used to investigate the effects of pump parameters on temperature and monomer loss, while two mechanism-isolation methodologies were used to separate occlusion from lateral expansion-relaxation of the tubing. Heat generated during peristaltic pumping can cause heat-induced monomer loss, and the extent of heat gain depends on pump speed and tubing type. Pump speed was inversely related to the rate of monomer loss: reducing speed 2.0-fold increased loss rates by 2.0- to 5.0-fold. Occlusion describes the amount of tubing compression during pumping; increasing it to the threshold at which the inner tubing walls begin to contact caused an immediate additional 20-30% monomer loss and a rise in turbidity. During occlusion, expansion-relaxation of solid-liquid interfaces and solid-solid contact of the tubing walls can occur simultaneously. Using the two mechanism-isolation methods, the latter mechanism was found to be the more destructive and a function of solid-solid contact area: increasing the contact area 2.0-fold increased monomer loss by 1.6-fold. We establish that a solid-solid contact mechanism, in which the contacting interfaces disrupt adsorbed protein films, is the root cause of monomer loss and protein aggregation during peristaltic pumping.
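
    The reported contact-area scaling can be read as a simple power law. The Python sketch below infers the implied exponent from the two figures quoted in the abstract; the power-law form itself is an illustrative assumption, not a model proposed by the authors.

        # Hypothetical power-law reading of the reported scaling, purely illustrative:
        # a 2.0-fold increase in solid-solid contact area gave a 1.6-fold increase in
        # monomer loss, so a model loss ∝ area**k implies k = log(1.6)/log(2.0) ≈ 0.68.
        import math

        fold_area, fold_loss = 2.0, 1.6
        k = math.log(fold_loss) / math.log(fold_area)
        print(f"implied exponent k ≈ {k:.2f}")  # ≈ 0.68

        def relative_loss(area_ratio: float, exponent: float = k) -> float:
            """Monomer loss relative to baseline under the assumed power law."""
            return area_ratio ** exponent

        print(relative_loss(2.0))  # 1.6, by construction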

    Development of analytical characterization tools for process monitoring of adenovirus-based vaccines (ChAdOx and Ad5)

    Product quality understanding is a critical part of viral vector vaccine manufacturing and regulation. Mass spectrometry has been widely applied to protein-based therapeutics and could serve as a characterisation tool to monitor viral vector vaccine product quality. The ultimate objective of this Bill and Melinda Gates Foundation-funded project is to enable vaccine manufacturing in low- and middle-income countries (LMICs) through increased scientific understanding of viral vector vaccine manufacturing bottlenecks, and thereby to de-risk vaccine development and manufacturing.

    Measurement of Adenovirus-Based Vector Heterogeneity

    Adenovirus vectors have become an important class of vaccines with the recent approval of Ebola and COVID-19 products. In-process quality attribute data collected during adenovirus vector manufacturing have focused on particle concentration and infectivity ratios (viral genomes to cell-based infectivity), and the data suggest that only a fraction of the viral particles present in the final vaccine product are efficacious. To better understand this product heterogeneity, lab-scale preparations of two adenovirus vectors, chimpanzee adenovirus (ChAdOx1) and human adenovirus type 5 (Ad5), were studied using transmission electron microscopy (TEM). Different adenovirus morphologies were characterized, and the proportions of empty and full viral particles were quantified. These proportions showed a qualitative correlation with the samples' infectivity values. Liquid chromatography-mass spectrometry (LC-MS) peptide mapping was used to identify key adenovirus proteins involved in viral maturation. Using peptide abundance analysis, a ∼5-fold change in L1 52/55k abundance was observed between low-density (empty) and high-density (full) fractions taken from CsCl ultracentrifugation preparations of ChAdOx1 virus. The L1 52/55k viral protein is associated with DNA packaging and is cleaved during viral maturation, so it may be a marker for infective particles. TEM and LC-MS peptide mapping are promising higher-resolution analytical characterization tools to help differentiate between the relative proportions of empty, non-infectious, and infectious viral particles as part of adenovirus vector in-process monitoring, and these results are an encouraging first step towards better differentiating between the different product-related impurities.
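
    The two quantifications described above reduce to simple ratios once counts and abundances are in hand. A minimal Python sketch with made-up numbers (the counts and intensities below are hypothetical, not the study's data):

        # Hypothetical TEM particle counts: proportions of empty vs. full capsids.
        tem_counts = {"empty": 120, "full": 380}
        total = sum(tem_counts.values())
        proportions = {k: v / total for k, v in tem_counts.items()}
        print(proportions)  # {'empty': 0.24, 'full': 0.76}

        # Hypothetical L1 52/55k peptide intensities in the low-density (empty)
        # and high-density (full) CsCl fractions; their ratio is the fold change.
        abundance_low, abundance_high = 5.1e6, 1.0e6   # arbitrary intensity units
        fold_change = abundance_low / abundance_high
        print(f"L1 52/55k fold change: {fold_change:.1f}x")  # ~5x, as reported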

    Analysis of Sample Correlations for Monte Carlo Rendering

    Modern physically based rendering techniques critically depend on approximating integrals of high-dimensional functions representing radiant light energy. Monte Carlo-based integrators are the method of choice for complex scenes and effects. These integrators work by sampling the integrand at sample point locations, and the distribution of these sample points determines convergence rates and noise in the final renderings. The characteristics of such distributions can be uniquely represented in terms of correlations of sample point locations. Hence, it is essential to study these correlations to understand and adapt sample distributions for low error in integral approximation. In this work, we aim to provide a comprehensive and accessible overview of the techniques developed over the last decades to analyze such correlations, relate them to error in integrators, and understand when and how to use existing sampling algorithms for effective rendering workflows.
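
    The effect the abstract describes is easy to see in one dimension. The Python sketch below (a minimal example, not taken from the paper) compares independent uniform samples against jittered (stratified) samples, whose negative correlations lower the error of the same estimator on a smooth integrand:

        # Monte Carlo estimation of the integral of sin(πx) on [0, 1] (true value
        # 2/π), comparing uncorrelated samples with jittered (stratified) samples.
        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: np.sin(np.pi * x)
        true_value = 2.0 / np.pi
        n, trials = 64, 2000

        errs_iid, errs_jit = [], []
        for _ in range(trials):
            iid = rng.random(n)                        # independent uniforms
            jit = (np.arange(n) + rng.random(n)) / n   # one sample per stratum
            errs_iid.append(f(iid).mean() - true_value)
            errs_jit.append(f(jit).mean() - true_value)

        print("iid RMSE:     ", np.sqrt(np.mean(np.square(errs_iid))))
        print("jittered RMSE:", np.sqrt(np.mean(np.square(errs_jit))))  # much lower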

    Socioeconomic deprivation is associated with reduced response and lower treatment persistence with TNF inhibitors in rheumatoid arthritis

    Objective To investigate the association between socioeconomic deprivation and outcomes following TNF inhibitor (TNFi) treatment. Methods Individuals commencing their first TNFi in the British Society for Rheumatology Biologics Register for RA (BSRBR-RA) and the Biologics in RA Genetics and Genomics Study Syndicate (BRAGGSS) cohort were included. Socioeconomic deprivation was proxied using the Index of Multiple Deprivation and categorized as the 20% most deprived, middle 40% or 40% least deprived. DAS28-derived outcomes at 6 months (BSRBR-RA) and 3 months (BRAGGSS) were compared using regression models with the least deprived as referent. Risks of all-cause and cause-specific drug discontinuation were compared using Cox models in the BSRBR-RA. Additional analyses adjusted for lifestyle factors (e.g. smoking, BMI) as potential mediators. Results 16 085 individuals in the BSRBR-RA were included (mean age 56 years, 76% female), of whom 18%, 41% and 41% were in the most, middle and least deprived groups, respectively. Of the 3459 included in BRAGGSS (mean age 57 years, 77% female), the proportions were 22%, 36% and 41%, respectively. The most deprived group had a 0.3-unit higher 6-month DAS28 (95% CI 0.22, 0.37) and were less likely to achieve low disease activity (odds ratio [OR] 0.76; 95% CI 0.68, 0.84) in unadjusted models. Results were similar for 3-month DAS28 (β = 0.23; 95% CI 0.11, 0.36) and low disease activity (OR 0.77; 95% CI 0.63, 0.94). The most deprived were more likely to discontinue treatment (hazard ratio 1.18; 95% CI 1.12, 1.25), driven by ineffectiveness rather than adverse events. Adjusted estimates were generally attenuated. Conclusion Socioeconomic deprivation is associated with reduced response to TNFi. Improvements in determinants of health other than lifestyle factors are needed to address socioeconomic inequities.
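
    As a sketch of the kind of unadjusted group comparison reported above, the Python snippet below fits a logistic regression of low-disease-activity attainment on deprivation group using simulated data. All variable names and numbers are illustrative; this is not the study's analysis code.

        # Simulated illustration of an unadjusted odds-ratio comparison by
        # deprivation group; the data are fabricated to mimic the reported OR.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 5000
        # 0 = least deprived (referent), 1 = middle 40%, 2 = 20% most deprived
        imd = rng.choice([0, 1, 2], size=n, p=[0.41, 0.40, 0.19])
        logit_p = -0.2 + np.where(imd == 2, np.log(0.76), 0.0)  # target OR ≈ 0.76
        lda = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # low disease activity

        df = pd.DataFrame({"imd_group": imd, "lda": lda})
        fit = smf.logit("lda ~ C(imd_group)", data=df).fit(disp=False)
        print(np.exp(fit.params))  # odds ratios vs. the least deprived referent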

    Identification, Expansion, And Disambiguation Of Acronyms In Biomedical Texts

    With the ever-growing amount of biomedical literature there is an increasing desire to use sophisticated language processing algorithms to mine these texts. In order to use these algorithms we must first deal with acronyms, abbreviations, and misspellings. In this paper we look at identifying, expanding, and disambiguating acronyms in biomedical texts. We break the task up into three modular steps: Identification, Expansion, and Disambiguation. For Identification we use a hybrid approach composed of a naive Bayesian classifier and a small set of handcrafted rules. We are able to achieve results of 99.96% accuracy with a small training set. We break the expansion up into two categories, local and global expansion. For local expansion we use windowing and longest common subsequence to generate the possible expansions. Global expansion requires an acronym database. To disambiguate the different candidate expansions we use WordNet and semantic similarity. Overall we obtain a recall and precision of over 91%. © Springer-Verlag Berlin Heidelberg 2005
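
    To make the local-expansion step concrete, here is a minimal Python sketch: it slides a window over nearby words and keeps phrases whose letters contain the acronym in order, a simplified stand-in for the longest-common-subsequence matching the paper describes. The window sizes and the subsequence test are illustrative assumptions.

        # Sketch of windowing + subsequence matching for local acronym expansion.
        def is_subsequence(acronym: str, phrase: str) -> bool:
            """True if the acronym's letters appear in order within the phrase."""
            it = iter(phrase.lower())
            return all(ch in it for ch in acronym.lower())

        def candidate_expansions(acronym: str, window_words: list[str]) -> list[str]:
            """Slide a window over nearby words; keep phrases covering the acronym."""
            n = len(acronym)
            cands = []
            for size in range(n, 2 * n + 1):   # phrases roughly acronym-length in words
                for i in range(len(window_words) - size + 1):
                    phrase = " ".join(window_words[i:i + size])
                    if is_subsequence(acronym, phrase):
                        cands.append(phrase)
            return cands

        words = "the hidden markov model is trained on labeled data".split()
        print(candidate_expansions("HMM", words))  # ['hidden markov model', ...]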

    Dealing with Acronyms in Biomedical Texts

    Recently, there has been a growth in the amount of machine-readable information pertaining to the biomedical field. With this growth comes a desire to be able to extract information, answer questions, etc. based on the information in the documents. Many of these desired tasks require sophisticated language processing algorithms, such as part-of-speech tagging, parsing, and semantic interpretation. In order to use these algorithms the text must first be cleansed of acronyms, abbreviations, and misspellings. In this paper we look at identifying, expanding, and disambiguating acronyms in biomedical texts. We present an integrated system that combines previously used methods for dealing with acronyms and natural language processing techniques in a new way for a new domain. The result is an integrated system that achieves high precision and recall. We break the task up into three modular steps: Identification, Expansion, and Disambiguation. During identification, each word is examined to determine whether or not it is an acronym. For this, a hybrid approach composed of a Naive Bayesian classifier and a set of handcrafted rules is used. We are able to achieve results of 99.96% accuracy with a small training set. During the expansion step, a list of possible meanings is created for the words determined to be acronyms. We break the expansion up into two categories, local and global expansion. For local expansion we use windowing and longest common subsequence to generate the possible expansions. Global expansion requires an acronym database to retrieve the possible expansions. The disambiguation step takes the list of possible meanings and determines which meaning is the correct one. To disambiguate the different candidate expansions we use WordNet and semantic similarity. Overall we obtain a recall and precision of over 91%. Keywords: Acronyms, Text Cleansing, Bioinformatics
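
    A minimal Python sketch of the disambiguation step, using NLTK's WordNet interface. Taking the best similarity over sense pairs and summing scores against context words are illustrative assumptions, not necessarily the paper's exact measure.

        # Sketch of WordNet-based disambiguation of candidate acronym expansions.
        # Requires: import nltk; nltk.download('wordnet')
        from nltk.corpus import wordnet as wn

        def similarity(word_a: str, word_b: str) -> float:
            """Best path similarity over any pair of senses of the two words."""
            scores = [s1.path_similarity(s2) or 0.0
                      for s1 in wn.synsets(word_a)
                      for s2 in wn.synsets(word_b)]
            return max(scores, default=0.0)

        def disambiguate(expansions: list[str], context: list[str]) -> str:
            """Pick the expansion whose words are most similar to the context."""
            def score(expansion: str) -> float:
                return sum(similarity(w, c) for w in expansion.split() for c in context)
            return max(expansions, key=score)

        candidates = ["computed tomography", "connecticut"]
        context = ["scan", "imaging", "radiology"]
        print(disambiguate(candidates, context))  # expected: 'computed tomography'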