A priori optimization of cell culture feeds using metabolic engineering
Traditional media optimization procedures, such as component titration and spent media analysis, are costly and time consuming, and frequently rely on heuristics that differ between scientists to narrow down the chemical species, and their respective concentration ranges, to investigate, which can result in suboptimal media compositions for promoting cell growth. This is despite the fact that cellular metabolism provides a mechanistic framework for assessing the impact of media composition, and by extension feed composition, on cell growth. The highly nonlinear relationship between media component concentration and cell growth, or biomass production, has precluded the use of standard regression techniques to optimize cell culture feeds over a wide range of conditions, but metabolic engineering techniques are a promising alternative. Currently, the population's metabolic state can be well described, in aggregate, with metabolic flux analysis techniques; it has already been shown that flux balance analysis (FBA) is a useful tool for the a posteriori phenotyping of a cell culture in academic settings. However, a frequent criticism of FBA is that there is no practical way for manufacturers to realize the purported benefits of this technique. In this work, the amino acid transport fluxes were measured during the exponential growth phase of a CHO cell culture and used as constraints for an FBA model. This served as the center point of an in silico design of experiments (DoE) that was performed by perturbing the measured fluxes and generating a new FBA model with the perturbed fluxes used as model constraints. The DoE results allow for an a priori assessment of the impact that amino acid supplementation will have on an ongoing cell culture. The results show that feeding tyrosine, cysteine, proline and asparagine increased the duration of the exponential growth phase by 70% in normal batches.
In addition, the growth rate was doubled for a batch in which growth was stunted after its pH control failed due to a faulty sensor. Overall, this work demonstrates a practical application of FBA that will improve yields and reduce losses while significantly reducing the time and economic requirements of extensive experimentation. Caveats and expected future improvements are also explored. Disclaimer: This article reflects the views of the authors and should not be construed to represent the FDA's official views or policies.
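The constraint-then-perturb workflow described in this abstract can be sketched with a toy linear program. The three-reaction network, bounds, and `fba` helper below are illustrative assumptions for a minimal example, not the authors' CHO model: a measured uptake flux sets a model constraint, and the in silico "DoE" re-solves the model with that constraint perturbed.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (rows: metabolites A, B; cols: reactions R1-R3)
# R1: -> A (nutrient uptake), R2: A -> B, R3: B -> (biomass sink)
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

def fba(uptake_ub):
    """Maximize the biomass flux v3 subject to S v = 0 and flux bounds."""
    c = [0.0, 0.0, -1.0]  # linprog minimizes, so negate the objective
    bounds = [(0.0, uptake_ub), (0.0, 100.0), (0.0, 100.0)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    return -res.fun

# Center point: the "measured" uptake flux used as a model constraint
base = fba(10.0)

# In silico DoE: perturb the measured flux and re-solve at each setting
doe = {delta: fba(10.0 * (1.0 + delta)) for delta in (-0.2, 0.0, 0.2)}
```

In this toy network the optimal biomass flux simply tracks the uptake bound, so `doe` shows directly how a supplementation-driven change in an uptake constraint propagates to predicted growth.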
Enhancing multivariate calibration model reproducibility for the online monitoring of upstream processes in continuous biomanufacturing
The complex mixtures present in biomanufacturing processes have traditionally required slow and expensive experimental assays, as well as time-consuming and complicated analyses, to be characterized properly. Multivariate Data Analysis (MVDA) can be integrated with spectroscopy to solve both of these problems simultaneously. Spectroscopic data has been generated in real time, eliminating the need for offline assays, and MVDA has been used to rapidly analyze the data in a straightforward manner. Prior experiments have shown that this paradigm can be used offline to characterize the raw materials that are used to supplement cell culture media. However, online models that reliably quantify extracellular component concentrations in continuous bioprocesses require additional considerations. Even when the components' absorbance properties are well understood, cellular metabolism ensures that nutrient and product profiles vary collinearly with one another. This work explored supplementation strategies that break this collinearity to ensure that proper multivariate calibration models are constructed, instead of soft sensor models whose performance is inconsistent due to their reliance on component concentration collinearity for accurate predictions. This allows for more robust corrective action to be taken. Furthermore, the advantages of training multivariate calibration models from continuous bioprocesses' data, whose steady-state operation allows for more robust and complete design space coverage relative to batch processes, are explored as a way to guide ongoing and future research in this area. Disclaimer: This article reflects the views of the authors and should not be construed to represent the FDA's official views or policies.
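The collinearity problem this abstract describes can be shown with a minimal numpy sketch. The concentration values and the independent spiking schedule below are invented for illustration: when a product tracks a nutrient, the concentration design matrix is rank deficient and the two effects cannot be separated in a calibration model; independent supplementation restores full rank.

```python
import numpy as np

# Concentrations of two media components across 6 runs.
# Without supplementation, metabolism couples nutrient consumption and
# product formation, so the two profiles vary collinearly.
c1 = np.linspace(1.0, 6.0, 6)
collinear = np.column_stack([c1, 2.0 * c1])         # product tracks nutrient
spiked = np.column_stack([c1, [5, 1, 4, 2, 6, 3]])  # independent supplementation

rank_collinear = np.linalg.matrix_rank(collinear)  # 1: effects confounded
rank_spiked = np.linalg.matrix_rank(spiked)        # 2: effects separable
```

A regression fit to the rank-1 design can only learn the combined profile, which is why a soft sensor trained on such data fails the moment the process deviates from that coupling; the rank-2 design supports a true multivariate calibration.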
FDA/OBP laboratory research to support continuous bioprocessing
Recently, there has been a movement in commercial biotechnology drug production to develop a fully continuous manufacturing scheme capable of consistent production of high quality therapeutics. The FDA is increasingly reviewing applications implementing elements of continuous manufacturing or enabling technologies. This includes product and process engineering, and integration of enabling technology during development. Achieving a truly continuous process can be difficult and raise additional unknown regulatory concerns (e.g., how to handle process interruptions or unmatched liquid flow rates between linked unit operations, or how to measure viral clearance and establish safety). This poster will provide an overview of the lab capabilities of CDER's Office of Biotechnology Products and selected regulatory research case studies on continuous biomanufacturing and enabling technologies. These lab-based capabilities are being leveraged to study continuous bioreactor cell culture production, continuous chromatography, viral safety, and Process Analytical Technology (PAT) tools to enable these operations. Preliminary results have provided encouraging data that broaden understanding of the technological challenges and potential benefits of continuous biomanufacturing approaches.
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
Integrating industrial engineering and lean techniques at a contract pharmaceutical manufacturer
This article presents the application of industrial engineering and lean techniques to a contract pharmaceutical manufacturing facility.
Integrating graduate and undergraduate education with real world projects
This paper discusses three case studies where graduate students worked alongside undergraduate students on projects suggested by local companies. In the first case, graduate students were assigned to work in teams with undergraduate students, and later continued the project to fulfill both company expectations and publication of a conference paper. In the second case, a graduate student who had previously completed a course continued working with the instructor on a thesis with an industry case study. The master's student proposed the involvement of a new student team the next time the course was offered, and the team worked in a separate, but related, area of the company. In the third case, a senior graduate student became the teaching assistant and instructor for an undergraduate-level course. A company project was used to fulfill graduate student independent study credits, as well as a team project for the undergraduate course. The educational advantages and benefits of these models for undergraduate students, graduate students, and faculty advisors are discussed in this paper. © 2008 IEEE
Hybridoma cell-culture and glycan profile dataset at various bioreactor conditions
This is an "11 factor, 2 level, 12 run" Plackett-Burman experimental design dataset. The dataset includes 11 engineering bioreactor parameters as input variables. These 11 factors were varied at 2 levels, and 23 response variables, which are glycan profile attributes, were measured; see "A Design Space Exploration for Control of Critical Quality Attributes of mAb" (H. Bhatia, E.K. Read, C.D. Agarabi, K.A. Brorson, S.C. Lute, S. Yoon, 2016) [2]
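For readers unfamiliar with the design, a 12-run Plackett-Burman layout for 11 two-level factors can be generated from the classic cyclic generator (Paley construction). This is a generic sketch of the design type only; the actual bioreactor parameter assignments and glycan responses come from the cited dataset, not this code.

```python
# Classic 12-run Plackett-Burman generator row (+1 = high level, -1 = low)
GEN = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

# Runs 1-11: cyclic shifts of the generator; run 12: all factors at the low level
rows = [GEN[-i:] + GEN[:-i] for i in range(11)] + [[-1] * 11]

cols = list(zip(*rows))  # one column per factor

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Strength-2 orthogonality: distinct factor columns are uncorrelated,
# so all 11 main effects can be estimated in just 12 runs
orthogonal = all(dot(cols[i], cols[j]) == 0
                 for i in range(11) for j in range(i + 1, 11))
```

The orthogonality check is what makes a Plackett-Burman screen efficient: each pair of factor columns contains every high/low combination equally often, at the cost of confounding main effects with interactions.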