
    Modeling and Optimization of Lactic Acid Synthesis by the Alkaline Degradation of Fructose in a Batch Reactor

    The present work deals with the determination of the optimal operating conditions for lactic acid synthesis by the alkaline degradation of fructose. This is a complex transformation for which detailed knowledge is not available, carried out in a batch or semi-batch reactor. The "Tendency Modeling" approach, which consists of developing an approximate stoichiometric and kinetic model, has been used. An experimental design method was used to build the database for model development and allows comparison between the experimental and model responses. The model is then used in an optimization procedure to compute the optimal process. The optimal control problem is converted into a nonlinear programming problem solved by the sequential quadratic programming procedure coupled with the golden section search method. The strategy developed allows simultaneous optimization of the different variables, which may be constrained. The validity of the methodology is illustrated by the determination of the optimal operating conditions for lactic acid production.
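    A minimal sketch of the final optimization step, assuming the fitted tendency model is available as a function of the decision variables; the response surface, variable names, bounds and constraint below are hypothetical placeholders rather than the authors' model, and SciPy's SLSQP stands in for the SQP procedure:

    ```python
    from scipy.optimize import minimize

    # Hypothetical stand-in for the fitted tendency model: predicted lactic
    # acid yield as a function of x = (temperature, base feed rate, batch time).
    def neg_yield(x):
        T, feed, t_f = x
        return -(10.0 - 0.01 * (T - 60.0) ** 2
                      - 2.0 * (feed - 0.5) ** 2
                      - 0.05 * (t_f - 4.0) ** 2)   # negated: SLSQP minimizes

    res = minimize(
        neg_yield,
        x0=[55.0, 0.4, 3.0],
        method="SLSQP",                  # sequential quadratic programming
        bounds=[(40.0, 80.0), (0.1, 1.0), (1.0, 8.0)],
        # Example operating constraint: total base consumption feed * time <= 4.
        constraints=[{"type": "ineq", "fun": lambda x: 4.0 - x[1] * x[2]}],
    )
    print("optimal (T, feed, t_f):", res.x, "predicted yield:", -res.fun)
    ```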

    Normalization of large-scale behavioural data collected from zebrafish

    Many contemporary neuroscience experiments utilize high-throughput approaches to collect behavioural data from many animals simultaneously. The resulting data are often complex in structure and subject to systematic biases, which require new approaches for analysis and normalization. This study addressed the normalization need by establishing an approach based on linear-regression modeling. The model was established using a dataset of visual motor response (VMR) obtained from several strains of wild-type (WT) zebrafish collected at multiple stages of development. The VMR is a locomotor response triggered by drastic light change, and is commonly measured repeatedly from multiple larvae arrayed in 96-well plates. This assay is subject to several systematic variations. For example, the light emitted by the machine varies slightly from well to well. In addition to this light-intensity variation, biological replication creates batch-to-batch variation. These systematic variations may result in differences in the VMR and must be normalized. Our normalization approach explicitly modeled the effect of these systematic variations on the VMR. It also normalized the activity profiles of different conditions to a common baseline. Our approach is versatile, as it can incorporate different normalization needs as separate factors. This versatility was demonstrated by an integrated normalization of three factors: light-intensity variation, batch-to-batch variation and baseline. After normalization, new biological insights were revealed from the data. For example, we found that TL-strain larvae at 6 days post-fertilization (dpf) responded to light onset much more strongly than 9-dpf larvae, whereas previous analysis without normalization had shown their responses to be relatively comparable. By removing systematic variations, our model-based normalization can facilitate downstream statistical comparisons and aid the detection of true biological differences in high-throughput studies of neurobehaviour.
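    As a rough illustration of the modeling idea (not the authors' code), one can regress activity on the systematic factors and keep the residuals re-centred on a common baseline; the column names and simulated data below are hypothetical:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for VMR activity from four batches of a 96-well plate.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "activity": rng.normal(1.0, 0.2, 384),
        "light": np.tile(rng.normal(1.0, 0.05, 96), 4),    # per-well intensity
        "batch": np.repeat(["b1", "b2", "b3", "b4"], 96),  # biological replicates
    })

    # Linear model with light intensity and batch as separate factors.
    fit = smf.ols("activity ~ light + C(batch)", data=df).fit()

    # Normalized activity: remove the fitted systematic effects, then restore a
    # common baseline by re-centring the residuals on the grand mean.
    df["normalized"] = fit.resid + df["activity"].mean()
    ```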

    Transfer learning for batch process optimal control using LV-PTM and adaptive control strategy

    In this study, we investigate data-driven optimal control for a new batch process. Existing data-driven optimal control methods often ignore an important problem: because of the short operation time of a new batch process, the modeling data available in its initial stage can be insufficient. To address this issue, we introduce the idea of transfer learning: a latent variable process transfer model (LV-PTM) is adopted to transfer sufficient data and process information from similar processes to the new one to assist its modeling and quality optimization control. However, due to fluctuations in raw materials, equipment, etc., differences between similar batch processes are inevitable, which leads to a serious and complicated mismatch of the necessary conditions of optimality (NCO) between the new batch process and the LV-PTM-based optimization problem. In this work, we propose an LV-PTM-based batch-to-batch adaptive optimal control strategy, consisting of three stages, to ensure the best optimization performance during the whole operation lifetime of the new batch process. The adaptive strategy combines model updating, data removal, and a modifier-adaptation methodology driven by final quality measurements. Finally, the feasibility of the proposed method is demonstrated by simulations.
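    The modifier-adaptation step can be sketched as follows; this is an illustrative one-variable toy, with the model and plant functions, the finite-difference gradient estimate, and all numbers invented for the example rather than taken from the paper:

    ```python
    from scipy.optimize import minimize_scalar

    def model_quality(u):        # transferred (mismatched) process model
        return -(u - 2.0) ** 2 + 4.0

    def plant_quality(u):        # "true" new batch process, unknown to the model
        return -(u - 2.6) ** 2 + 4.2

    u, eps = 1.0, 1e-3
    for batch in range(5):
        # Zeroth- and first-order modifiers from final quality measurements
        # (the plant gradient would in practice be estimated from past batches).
        bias = plant_quality(u) - model_quality(u)
        grad = ((plant_quality(u + eps) - plant_quality(u - eps))
                - (model_quality(u + eps) - model_quality(u - eps))) / (2 * eps)
        u_ref = u
        res = minimize_scalar(
            lambda v: -(model_quality(v) + bias + grad * (v - u_ref)),
            bounds=(0.0, 5.0), method="bounded")
        u = res.x

    print("input after adaptation:", u)   # approaches the plant optimum u* = 2.6
    ```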

    Iterative design of dynamic experiments in modeling for optimization of innovative bioprocesses

    Finding optimal operating conditions quickly with a scarce budget of experimental runs is a key problem in speeding up the development and scale-up of innovative bioprocesses. In this paper, a novel iterative methodology for the model-based design of dynamic experiments in modeling for optimization is developed and successfully applied to the optimization of a fed-batch bioreactor for the production of r-interleukin-11 (rIL-11), whose DNA sequence has been cloned in an Escherichia coli strain. At each iteration, the proposed methodology resorts to a library of tendency models to increasingly bias bioreactor operating conditions towards an optimum. By selecting the ‘most informative’ tendency model at each step, the next dynamic experiment is defined by re-optimizing the input policy and calculating optimal sampling times. Model selection is based on minimizing an error measure which distinguishes between parametric and structural uncertainty, so as to selectively bias data gathering towards improved operating conditions. The parametric uncertainty of tendency models is iteratively reduced using Global Sensitivity Analysis (GSA) to pinpoint which parameters are key to estimating the objective function. Results obtained after just a few iterations are very promising.
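    A toy sketch of the model-selection step under stated assumptions: each candidate tendency model in the library is scored by a squared-residual error measure on the newest run, and the winner is re-optimized to define the next experiment; the two-model library, parameters and observation are invented for illustration:

    ```python
    from scipy.optimize import minimize_scalar

    # Hypothetical two-model tendency library with pre-fitted parameters.
    library = {
        "monod":  lambda u: 2.0 * u / (0.5 + u),
        "linear": lambda u: 1.2 * u,
    }

    u_obs, y_obs = 1.5, 1.55   # latest dynamic experiment (made-up numbers)

    # Error measure: squared prediction residual on the newest run.
    best = min(library, key=lambda name: (library[name](u_obs) - y_obs) ** 2)

    # Re-optimize the input policy with the selected tendency model.
    res = minimize_scalar(lambda u: -library[best](u),
                          bounds=(0.0, 3.0), method="bounded")
    print("selected model:", best, "-> next input level:", res.x)
    ```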

    G-SELC: Optimization by sequential elimination of level combinations using genetic algorithms and Gaussian processes

    Identifying promising compounds from a vast collection of feasible compounds is an important and yet challenging problem in the pharmaceutical industry. An efficient solution to this problem will help reduce the expenditure at the early stages of drug discovery. In an attempt to solve this problem, Mandal, Wu and Johnson [Technometrics 48 (2006) 273--283] proposed the SELC algorithm. Although powerful, it fails to extract substantial information from the data to guide the search efficiently, as this methodology is not based on any statistical modeling. The proposed approach uses Gaussian process (GP) modeling to improve upon SELC, and is hence named G-SELC. The performance of the proposed methodology is illustrated using four- and five-dimensional test functions. Finally, we implement the new algorithm on a real pharmaceutical data set for finding a group of chemical compounds with optimal properties.

    Comment: Published at http://dx.doi.org/10.1214/08-AOAS199 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
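    A rough sketch of the GP-guided search over a grid of level combinations, using scikit-learn; the toy objective, kernel choice and upper-confidence selection rule are illustrative assumptions, not the paper's exact algorithm:

    ```python
    import numpy as np
    from itertools import product
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    levels = np.linspace(0.0, 1.0, 5)
    grid = np.array(list(product(levels, repeat=4)))   # all 4-D level combinations

    def response(x):                                   # toy screening objective
        return -np.sum((x - 0.6) ** 2, axis=-1)

    rng = np.random.default_rng(1)
    tried = rng.choice(len(grid), size=20, replace=False)

    # Fit a GP surrogate to the combinations evaluated so far.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
    gp.fit(grid[tried], response(grid[tried]))

    # Propose the untried combination with the best optimistic prediction.
    mu, sd = gp.predict(grid, return_std=True)
    mu[tried] = -np.inf                                # never re-propose a run
    print("next combination to test:", grid[int(np.argmax(mu + 1.96 * sd))])
    ```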

    Response time distribution in a tandem pair of queues with batch processing

    Response time density is obtained in a tandem pair of Markovian queues with both batch arrivals and batch departures. The method uses conditional forward and reversed node sojourn times and derives the Laplace transform of the response time probability density function in the case that batch sizes are finite. The result is derived by a generating function method that takes into account that the path is not overtake-free, in the sense that the tagged task being tracked is affected by later arrivals at the second queue. A novel aspect of the method is that a vector of generating functions is solved for, rather than a single scalar-valued function, which requires investigation of the singularities of a certain matrix. A recurrence formula is derived to obtain arbitrary moments of response time by differentiation of the Laplace transform at the origin, and these can be computed rapidly by iteration. Numerical results for the first four moments of response time are displayed for some sample networks that have product-form solutions for their equilibrium queue length probabilities, along with the densities themselves obtained by numerical inversion of the Laplace transform. Corresponding approximations are also obtained for (non-product-form) pairs of “raw” batch-queues, with no special arrivals, and validated against regenerative simulation, which indicates good accuracy. The methods are appropriate for modeling bursty internet and cloud traffic, and a possible role in energy saving is considered.
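    To illustrate the moments-from-transform step, the snippet below differentiates a Laplace transform numerically near the origin; the M/M/1 sojourn-time transform is used as a simple stand-in for the tandem-pair result derived in the paper:

    ```python
    from math import comb, factorial

    lam, mu = 0.5, 1.0
    L = lambda s: (mu - lam) / (mu - lam + s)    # M/M/1 sojourn-time transform

    def moment(n, h=1e-3):
        # E[T^n] = (-1)^n d^nL/ds^n at s = 0, via a central difference taken at
        # a small positive s so every evaluation stays right of the origin.
        s0 = n * h
        deriv = sum((-1) ** k * comb(n, k) * L(s0 + (n / 2 - k) * h)
                    for k in range(n + 1)) / h ** n
        return (-1) ** n * deriv

    for n in range(1, 5):
        print(n, moment(n), factorial(n) / (mu - lam) ** n)   # numeric vs exact
    ```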

    Memory-Efficient Topic Modeling

    As one of the simplest probabilistic topic modeling techniques, latent Dirichlet allocation (LDA) has found many important applications in text mining, computer vision and computational biology. Recent training algorithms for LDA can be interpreted within a unified message passing framework. However, message passing requires storing previous messages, which takes a large amount of memory space that grows linearly with the number of documents or the number of topics. This high memory usage is therefore often a major problem for topic modeling of massive corpora containing a large number of topics. To reduce the space complexity, we propose a novel algorithm for training LDA that does not store previous messages: tiny belief propagation (TBP). The basic idea of TBP relates message passing algorithms to non-negative matrix factorization (NMF) algorithms, absorbing the message updates into the factorization process and thus avoiding the storage of previous messages. Experimental results on four large data sets confirm that TBP performs comparably well or even better than current state-of-the-art training algorithms for LDA, but with much less memory consumption. TBP can perform topic modeling when massive corpora cannot fit in computer memory, for example, extracting thematic topics from a 7 GB PubMed corpus on a common desktop computer with 2 GB of memory.

    Comment: 20 pages, 7 figures
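    A compact illustration of the NMF connection (not the TBP implementation itself): factorizing a document-term matrix with scikit-learn recovers document-topic and topic-word weights without any per-document message storage; the four-document corpus is a toy stand-in:

    ```python
    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["gene expression in zebrafish larvae",
            "topic models for text mining",
            "protein expression and gene regulation",
            "message passing for topic models"]
    X = CountVectorizer().fit_transform(docs)   # sparse document-term counts

    nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(X)                    # document-topic weights
    H = nmf.components_                         # topic-word weights
    print(W.round(2))
    ```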