178 research outputs found

    Increased hydrogen production by Escherichia coli strain HD701 in comparison with the wild-type parent strain MC4100

    Hydrogen production by Escherichia coli is mediated by the formate hydrogenlyase (FHL) complex. E. coli strain HD701 cannot synthesize the FHL complex repressor, HycA. Consequently, it has an up-regulated FHL system and can therefore evolve hydrogen at a greater rate than its parental wild type, E. coli MC4100. Resting cells of E. coli strains HD701 and MC4100 were set up in batch mode in phosphate-buffered saline (PBS) to decouple growth from hydrogen production at the expense of sugar solutions of varying composition. Strain HD701 evolved several times more hydrogen than MC4100 at glucose concentrations ranging from 3 to 200 mM. The difference in the amount of H2 evolved by the two strains decreased as the glucose concentration increased. The highest rate of H2 evolution by strain HD701 was 31 ml h−1 OD unit−1 l−1 at a glucose concentration of 100 mM. With strain MC4100, the highest rate was 16 ml h−1 OD unit−1 l−1 under these conditions. Experiments using industrial wastes with a high sugar content yielded similar results. In each case, strain HD701 evolved hydrogen at a faster rate than the wild type, showing potential for commercial hydrogen production.
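The volumetric rates reported above can be put on a molar basis with the ideal gas law. The sketch below does this conversion; the temperature and pressure are assumptions for illustration (30 °C, 1 atm), not values taken from the study.

```python
# Hedged sketch: converting the reported volumetric H2 evolution rates
# (ml h^-1 OD unit^-1 l^-1) to molar rates via PV = nRT.
# T and P below are ASSUMED values, not stated in the abstract.

R = 0.082057  # L atm mol^-1 K^-1
T = 303.15    # K (assumed 30 degrees C incubation)
P = 1.0       # atm (assumed)

def ml_per_h_to_mmol_per_h(rate_ml_h):
    """Convert a volumetric H2 rate in ml h^-1 to mmol h^-1."""
    volume_l = rate_ml_h / 1000.0
    return P * volume_l / (R * T) * 1000.0  # mmol h^-1

hd701 = ml_per_h_to_mmol_per_h(31.0)   # strain HD701 at 100 mM glucose
mc4100 = ml_per_h_to_mmol_per_h(16.0)  # wild-type MC4100, same conditions
print(f"HD701: {hd701:.2f} mmol h^-1, MC4100: {mc4100:.2f} mmol h^-1")
print(f"rate ratio HD701/MC4100: {hd701 / mc4100:.2f}")
```

Under these assumed conditions the 31 vs 16 ml h−1 figures correspond to roughly a twofold difference in molar H2 output.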

    FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels

    Temporal point processes (TPP) are a natural tool for modeling event-based data. Among all TPP models, Hawkes processes have proven to be the most widely used, mainly due to their simplicity and computational ease when considering exponential or non-parametric kernels. Although non-parametric kernels are an option, such models require large datasets. While exponential kernels are more data-efficient and relevant for applications where events immediately trigger more events, they are ill-suited for applications where latencies need to be estimated, such as in neuroscience. This work aims to offer an efficient solution to TPP inference using general parametric kernels with finite support. The developed solution consists of a fast L2 gradient-based solver leveraging a discretized version of the events. After supporting the use of discretization theoretically, the statistical and computational efficiency of the novel approach is demonstrated through various numerical experiments. Finally, the effectiveness of the method is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG). Given the use of general parametric kernels, results show that the proposed approach leads to a more plausible estimation of pattern latency compared to the state-of-the-art.
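The core idea of the abstract, a least-squares (L2) contrast evaluated on a discretized event grid with a finite-support kernel, can be sketched as follows. The kernel shape, grid, and variable names here are illustrative assumptions, not FaDIn's actual implementation or API.

```python
import numpy as np

# Hedged sketch of a discretized Hawkes intensity and its L2 contrast.
# The kernel phi is any finite-support parametric kernel evaluated on the
# discretization grid; the specific values below are toy assumptions.

def discretized_intensity(counts, mu, phi):
    """lambda[k] = mu + sum_j phi[j] * counts[k - j] (discrete convolution)."""
    excitation = np.convolve(counts, phi)[: len(counts)]
    return mu + excitation

def l2_contrast(counts, mu, phi, delta):
    """Discretized L2 loss: delta * sum(lam^2) - 2 * sum(lam * counts)."""
    lam = discretized_intensity(counts, mu, phi)
    return delta * np.sum(lam ** 2) - 2.0 * np.sum(lam * counts)

# Toy example: one event in the first bin; kernel supported on 3 bins,
# with phi[0] = 0 so an event does not excite its own bin.
counts = np.array([1.0, 0.0, 0.0, 0.0])
phi = np.array([0.0, 0.5, 0.25])  # finite-support kernel values on the grid
lam = discretized_intensity(counts, mu=0.1, phi=phi)
loss = l2_contrast(counts, mu=0.1, phi=phi, delta=0.1)
```

Because the loss is a smooth function of the kernel parameters through phi, it can be minimized with any gradient-based solver, which is the computational advantage the abstract emphasizes.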

    Learning Neural Point Processes with Latent Graphs

    Neural point processes (NPPs) employ neural networks to capture complicated dynamics of asynchronous event sequences. Existing NPPs feed all history events into neural networks, assuming that all event types contribute to the prediction of the target type. However, this assumption can be problematic because in reality some event types do not contribute to the predictions of another type. To correct this defect, we learn to omit those types of events that do not contribute to the prediction of one target type during the formulation of NPPs. Towards this end, we simultaneously consider the tasks of (1) finding event types that contribute to predictions of the target types and (2) learning an NPP model from event sequences. For the former, we formulate a latent graph, with event types being vertices and non-zero contributing relationships being directed edges; then we propose a probabilistic graph generator, from which we sample a latent graph. For the latter, the sampled graph can be readily used as a plug-in to modify an existing NPP model. Because these two tasks are nested, we propose to optimize the model parameters through bilevel programming, and develop an efficient solution based on truncated gradient back-propagation. Experimental results on both synthetic and real-world datasets show improved performance against state-of-the-art baselines. This work removes the disturbance of non-contributing event types with the aid of a validation procedure, similar to the validation-based practice used to mitigate overfitting when training machine learning models.
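The latent-graph idea above can be sketched in a few lines: sample a type-to-type adjacency matrix from a probabilistic generator, then use it to mask which history events reach the model for a given target type. The plain Bernoulli generator and all names below are illustrative simplifications, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch: a latent graph over event types used as a plug-in mask.
# A[i, j] = 1 means events of type i contribute to predictions of type j.

def sample_latent_graph(logits, rng):
    """Sample a binary adjacency matrix from edge-probability logits."""
    probs = 1.0 / (1.0 + np.exp(-logits))     # sigmoid
    return (rng.random(logits.shape) < probs).astype(float)

def history_mask(event_types, target_type, adj):
    """Boolean mask: keep history events whose type has an edge to target."""
    return adj[event_types, target_type].astype(bool)

logits = np.zeros((3, 3))          # 3 event types, uniform 0.5 edge prior
adj = sample_latent_graph(logits, rng)
history = np.array([0, 1, 2, 1])   # a sequence of observed event types
keep = history_mask(history, target_type=2, adj=adj)
```

In the paper's setting the generator's parameters are learned jointly with the NPP via bilevel optimization; a differentiable relaxation of the Bernoulli sampling (e.g. Gumbel-Softmax) would be needed for gradients to flow, which this sketch omits.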