    Multiple-Peptidase Mutants of Lactococcus lactis Are Severely Impaired in Their Ability To Grow in Milk

    To examine the contribution of peptidases to the growth of Lactococcus lactis in milk, 16 single- and multiple-deletion mutants were constructed. In successive rounds of chromosomal gene replacement mutagenesis, up to all five of the following peptidase genes were inactivated (fivefold mutant): pepX, pepO, pepT, pepC, and pepN. Multiple mutations led to slower growth rates in milk, the general trend being that growth rates decreased when more peptidases were inactivated. The fivefold mutant grew more than 10 times more slowly in milk than the wild-type strain. In one of the fourfold mutants and in the fivefold mutant, the intracellular pools of amino acids were lower than those of the wild type, whereas peptides had accumulated inside the cell. No significant differences in the activities of the cell envelope-associated proteinase and of the oligopeptide transport system were observed. Also, the expression of the peptidases still present in the various mutants was not detectably affected. Thus, the lower growth rates can directly be attributed to the inability of the mutants to degrade casein-derived peptides. These results supply the first direct evidence for the functioning of lactococcal peptidases in the degradation of milk proteins. Furthermore, the study provides critical information about the relative importance of the peptidases for growth in milk, the order of events in the proteolytic pathway, and the regulation of its individual components.

    Route choice control of automated baggage handling systems.

    State-of-the-art baggage handling systems transport luggage in an automated way using destination coded vehicles (DCVs). These vehicles transport the bags at high speeds on a "mini" railway network. Currently, the networks are simple, with only a few junctions, since more complex layouts would create bottlenecks at the junctions. This makes the system inefficient. In the research we conduct, more complex networks are considered. In order to optimize the performance of the system, we develop and compare centralized and decentralized control methods that can be used to route the DCVs through the track network. The proposed centralized control method is model predictive control (MPC). Due to the large computational effort centralized MPC requires, decentralized MPC and a fast decentralized heuristic approach are also proposed. When implementing the decentralized approaches, each junction has its own local controller for positioning the switch going into the junction and the switch going out of it. In order to assess the advantages and disadvantages of centralized MPC, decentralized MPC, and the decentralized heuristic approach, we also discuss a simple benchmark case study. The considered control methods are compared for several scenarios. Results indicate that centralized MPC becomes intractable when a large stream of bags has to be handled, while decentralized MPC can still be used to solve the problem suboptimally. Moreover, the decentralized heuristic approach usually gives worse results than decentralized MPC, but at a very low computation time.
    Tarȃu, De Schutter, Hellendoorn
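    The decentralized heuristic described above can be pictured as each junction controller greedily setting its switch based on local information. The following is a minimal illustrative sketch, not the authors' actual formulation: the function names, the cost model (estimated travel time to the bag's destination plus the current queue delay on each outgoing link), and the toy network are all assumptions made for demonstration.

    ```python
    # Hypothetical sketch of a decentralized junction controller for DCV routing.
    # Each junction greedily positions its outgoing switch toward the link with
    # the lowest estimated cost to the bag's destination. The cost terms below
    # (travel-time estimate + queue delay) are illustrative assumptions.

    def choose_outgoing_link(junction, bag_destination, est_travel_time, queue_delay):
        """Pick the outgoing link minimizing estimated cost to the destination."""
        best_link, best_cost = None, float("inf")
        for link in junction["out_links"]:
            cost = est_travel_time[(link, bag_destination)] + queue_delay[link]
            if cost < best_cost:
                best_link, best_cost = link, cost
        return best_link

    # Toy network: a single junction with two outgoing links, A and B.
    junction = {"out_links": ["A", "B"]}
    est_travel_time = {("A", "unload1"): 40.0, ("B", "unload1"): 30.0}
    queue_delay = {"A": 2.0, "B": 15.0}

    # Link A wins here: 40 + 2 = 42 beats 30 + 15 = 45, so congestion on B
    # pushes the bag onto the nominally longer route.
    print(choose_outgoing_link(junction, "unload1", est_travel_time, queue_delay))
    ```

    A centralized MPC controller would instead optimize all switch positions jointly over a prediction horizon; the greedy local rule above trades optimality for per-junction computation that stays cheap as the network grows.
    
    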

    Memorization and Generalization in Neural Code Intelligence Models

    Deep Neural Networks (DNNs) are increasingly commonly used in software engineering and code intelligence tasks. These are powerful tools that are capable of learning highly generalizable patterns from large datasets through millions of parameters. At the same time, training DNNs means walking a knife's edge, because their large capacity also renders them prone to memorizing data points. While traditionally thought of as an aspect of over-training, recent work suggests that the memorization risk manifests especially strongly when the training datasets are noisy and memorization is the only recourse. Unfortunately, most code intelligence tasks rely on rather noise-prone and repetitive data sources, such as GitHub, which, due to their sheer size, cannot be manually inspected and evaluated. We evaluate the memorization and generalization tendencies in neural code intelligence models through a case study across several benchmarks and model families by leveraging established approaches from other fields that use DNNs, such as introducing targeted noise into the training dataset. In addition to reinforcing prior general findings about the extent of memorization in DNNs, our results shed light on the impact of noisy datasets in training.
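    The "targeted noise" technique mentioned above is commonly realized by flipping the labels of a chosen fraction of training examples, so that a model can only fit those examples by memorizing them. The sketch below is an illustrative assumption of how such an injection step might look, not the paper's exact protocol; all names and the noise rate are hypothetical.

    ```python
    import random

    # Illustrative sketch: inject targeted label noise into a training set.
    # Flipped examples have no learnable signal, so any model that fits them
    # must be memorizing; tracking their indices lets memorization be measured
    # separately from generalization on clean examples.

    def inject_label_noise(labels, num_classes, noise_rate, seed=0):
        """Flip a noise_rate fraction of labels to a different random class.

        Returns (noisy_labels, noisy_indices) so the corrupted examples can
        be evaluated separately from the clean ones.
        """
        rng = random.Random(seed)
        labels = list(labels)
        n_noisy = int(len(labels) * noise_rate)
        noisy_indices = rng.sample(range(len(labels)), n_noisy)
        for i in noisy_indices:
            wrong_classes = [c for c in range(num_classes) if c != labels[i]]
            labels[i] = rng.choice(wrong_classes)
        return labels, set(noisy_indices)

    clean = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
    noisy, flipped = inject_label_noise(clean, num_classes=3, noise_rate=0.3)
    # Exactly 30% of labels are flipped, and every flipped label differs
    # from its original class.
    assert len(flipped) == 3
    assert all(noisy[i] != clean[i] for i in flipped)
    ```

    Training accuracy on the flipped subset then serves as a direct proxy for memorization: a model that generalizes should fail on those examples, while one that memorizes will fit them anyway.
    
    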