
    The Separation and Purification of E.coli Bacteriophage and Its Characteristics

    Corresponding author: msshen@xmu.edu.cn. A method for isolating and purifying Escherichia coli bacteriophage (coliphage) from domestic sewage is described. After enrichment, three kinds of phage were obtained from the domestic sewage of Xiamen University and Xiamen harbor. The first was a tadpole-shaped phage with a non-contractile tail: an icosahedral head about 110-120 nm in diameter and a tail about 220-230 nm long and 13-15 nm wide, lacking a tail sheath, base plate, tail pins, and tail fibers. The second was a tadpole-shaped phage with a contractile tail: a triacontahedral head of about 70 nm × 110 nm and a tail about 120-130 nm long and 18-22 nm wide, with a tail sheath, base plate, tail pins, and tail fibers; this type made up the overwhelming majority of the phage in the sewage. The third was a short-tailed phage with an icosahedral head about 20 nm across and a tail only 2-3 nm long; it accounted for a considerable proportion of the phage in the sewage. Two of the phages survived for 56 days in a -18 °C freezer. After the sewage was treated with phage, the total bacterial count fell by about 30% and the total E. coli count fell by more than 90%. Funding: National Basic Science Talent Training Fund (J0630649); 2007 Ministry of Education Undergraduate Innovation Program.

    Proximal policy optimization with model-based methods

    Model-free reinforcement learning methods have been applied successfully to practical decision-making problems such as Atari games. However, these methods have inherent shortcomings, such as high variance and low sample efficiency. To improve policy performance and sample efficiency, we propose proximal policy optimization with model-based methods (PPOMM), which fuses model-based and model-free reinforcement learning. PPOMM considers not only the information of past experience but also predictive information about the future state: it adds information about the next state to the objective function of the proximal policy optimization (PPO) algorithm through a model-based method. The policy is thus optimized with two components, the PPO error and the error of the model-based component; the latter is used to train a latent transition model that predicts information about the next state. Evaluated across 49 Atari games in the Arcade Learning Environment (ALE), this method outperforms the state-of-the-art PPO algorithm on most games, and the experimental results show that PPOMM performs as well as or better than the original algorithm in 33 of the games.
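    The abstract describes combining the PPO error with a model-based prediction error from a latent transition model. Below is a minimal sketch, assuming a PyTorch setup, of what such a combined objective could look like. The module names (LatentTransitionModel, encoder, dynamics), the weighting coefficient beta, and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: PPO clipped surrogate loss augmented with the prediction error
# of a latent transition model (illustrative only; names and weights are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentTransitionModel(nn.Module):
    """Encodes observations into a latent space and predicts the next latent state."""
    def __init__(self, obs_dim, act_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 128), nn.ReLU(),
                                      nn.Linear(128, latent_dim))

    def forward(self, obs, act):
        # act: one-hot (discrete) or continuous action tensor, batched like obs
        z = self.encoder(obs)
        return self.dynamics(torch.cat([z, act], dim=-1))

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Standard clipped PPO surrogate (returned as a loss to minimize)."""
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

def combined_loss(model, batch, beta=0.5):
    """PPO error plus a model-based next-state prediction error, weighted by beta."""
    policy_loss = ppo_clip_loss(batch["log_probs"], batch["old_log_probs"],
                                batch["advantages"])
    # Model-based term: predict the latent representation of the next observation.
    z_next_pred = model(batch["obs"], batch["act"])
    with torch.no_grad():
        z_next_target = model.encoder(batch["next_obs"])
    model_loss = F.mse_loss(z_next_pred, z_next_target)
    return policy_loss + beta * model_loss
```

    In this sketch, beta controls how strongly the model-based prediction error influences the policy update; the paper itself does not specify this weighting here, so it is shown purely as a tunable assumption.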