487 research outputs found

    Peptidyltransfer Reaction Catalyzed by the Ribosome and the Ribozyme: a Dissertation

    The RNA world hypothesis makes two predictions: that RNA should have been able to catalyze RNA replication and to direct protein synthesis. The evolution of RNA-catalyzed protein synthesis would have been critical in the transition from the RNA world to modern biological systems. Peptide bond formation is a fundamental step in modern protein biosynthesis. Although much evidence suggests that the ribosome is a ribozyme, peptide bond formation has not been achieved with ribosomal RNAs alone. The goal of this thesis is to investigate whether, and how, RNA can catalyze peptide bond formation. Two systems have been employed to approach these questions: a ribozyme system and a ribosome system. Ribozymes that can catalyze peptide bond formation using an aminoacyl-adenylate as the substrate have been isolated by in vitro selection. The isolation of such peptide-synthesizing ribozymes suggests that ancient RNA might have directed protein synthesis and bolsters the RNA world hypothesis. In the other approach, a novel assay has been established to probe the ribosomal peptidyltransferase reaction with intact ribosomes, ribosomal subunits, or ribosomal RNA alone. Several aspects of the peptidyltransfer reaction have been examined in both systems, including metal ion requirement, pH dependence, and substrate specificity. The coherence between the two systems is discussed and their potential applications are explored. Although the ribozyme system may not be directly reminiscent of ribosomal catalysis, it remains useful for other studies. The newly established assay for the ribosomal peptidyltransferase reaction provides a good system to investigate the mechanism of the ribosomal reaction and may find application in screening for specific peptidyltransferase inhibitors.

    A Selected Ribozyme Catalyzing Diverse Dipeptide Synthesis

    The sequence of events by which protein, RNA, and DNA emerged during early biological evolution is one of the most profound questions regarding the origin of life. The contemporary role of aminoacyl-adenylates as intermediates in both ribosomal and nonribosomal peptide synthesis suggests that they may have served as substrates for uncoded peptide synthesis during early evolution. We report a highly active peptidyl transferase ribozyme family, isolated by in vitro selection, that efficiently catalyzes dipeptide synthesis using an aminoacyl-adenylate substrate. The family was characterized by sequence and structural analysis and by kinetic studies. Remarkably, the ribozyme catalyzed the formation of 30 different dipeptides, with the majority of rates within 5-fold of that of the Met-Phe dipeptide required by the selection. The isolation of this synthetic ribozyme fosters speculation that ribozyme-mediated uncoded peptide synthesis may have preceded coded peptide synthesis.
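    As a rough illustration of the 5-fold rate comparison mentioned above, the snippet below tallies how many hypothetical dipeptide formation rates fall within 5-fold of a Met-Phe reference; the values are invented placeholders, not data from the study.

```python
# Illustrative only: hypothetical observed rates (min^-1) for a few dipeptides,
# compared against the Met-Phe rate used as the selection reference.
rates = {
    "Met-Phe": 0.50,   # reference dipeptide from the selection (value invented)
    "Met-Ala": 0.31,
    "Met-Leu": 0.12,
    "Met-Gly": 0.08,
    "Met-Trp": 0.02,
}

reference = rates["Met-Phe"]
within_5_fold = [
    name for name, k in rates.items()
    if reference / 5 <= k <= reference * 5
]

for name, k in rates.items():
    print(f"{name}: k = {k:.2f} min^-1, relative to Met-Phe = {k / reference:.2f}")
print(f"{len(within_5_fold)} of {len(rates)} dipeptides lie within 5-fold of Met-Phe")
```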

    DGMem: Learning Visual Navigation Policy without Any Labels by Dynamic Graph Memory

    In recent years, learning-based approaches have demonstrated significant promise in addressing intricate navigation tasks. Traditional methods for training deep neural network navigation policies rely on meticulously designed reward functions or extensive teleoperation datasets as navigation demonstrations. However, the former is often confined to simulated environments, and the latter demands substantial human labor, making it a time-consuming process. Our vision is for robots to autonomously learn navigation skills and adapt their behaviors to environmental changes without any human intervention. In this work, we discuss the self-supervised navigation problem and present Dynamic Graph Memory (DGMem), which facilitates training with on-board observations only. With the help of DGMem, agents can actively explore their surroundings, autonomously acquiring a comprehensive navigation policy in a data-efficient manner without external feedback. Our method is evaluated in photorealistic 3D indoor scenes, and empirical studies demonstrate the effectiveness of DGMem.
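    The abstract gives no implementation details, so the following is only a plausible sketch of how a dynamic graph memory could be maintained from on-board observations: observations are merged into graph nodes, traversals become edges, and the least-visited node serves as the next self-supervised exploration goal. All class and method names, and the merge threshold, are hypothetical.

```python
import math

class DynamicGraphMemory:
    """Toy topological memory: nodes hold observation embeddings, edges connect
    places the agent has actually moved between. Hypothetical sketch, not the
    paper's implementation."""

    def __init__(self, merge_threshold=0.5):
        self.nodes = []          # list of embedding vectors
        self.edges = set()       # undirected edges as (i, j) index pairs
        self.visits = []         # visit counts per node
        self.merge_threshold = merge_threshold

    @staticmethod
    def _distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def insert(self, embedding, last_node=None):
        """Add an observation; merge it with an existing node if close enough."""
        for i, node in enumerate(self.nodes):
            if self._distance(node, embedding) < self.merge_threshold:
                self.visits[i] += 1
                if last_node is not None and last_node != i:
                    self.edges.add(tuple(sorted((last_node, i))))
                return i
        self.nodes.append(list(embedding))
        self.visits.append(1)
        idx = len(self.nodes) - 1
        if last_node is not None:
            self.edges.add(tuple(sorted((last_node, idx))))
        return idx

    def frontier_goal(self):
        """Pick the least-visited node as the next exploration goal."""
        if not self.nodes:
            return None
        return min(range(len(self.nodes)), key=lambda i: self.visits[i])

# Usage: feed a stream of fake observation embeddings and query a goal.
memory = DynamicGraphMemory()
current = None
for obs in [[0.0, 0.0], [0.1, 0.05], [2.0, 2.0], [2.1, 1.9], [4.0, 0.0]]:
    current = memory.insert(obs, last_node=current)
print("nodes:", len(memory.nodes), "edges:", memory.edges)
print("next exploration goal node:", memory.frontier_goal())
```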

    Multimodal Token Fusion for Vision Transformers

    Many adaptations of transformers have emerged to address single-modal vision tasks, where self-attention modules are stacked to handle input sources like images. Intuitively, feeding multiple modalities of data to vision transformers could improve performance, yet the intra-modal attention weights may also be diluted, which could undermine the final performance. In this paper, we propose a multimodal token fusion method (TokenFusion), tailored for transformer-based vision tasks. To effectively fuse multiple modalities, TokenFusion dynamically detects uninformative tokens and substitutes them with projected and aggregated inter-modal features. Residual positional alignment is also adopted to enable explicit utilization of the inter-modal alignments after fusion. The design of TokenFusion allows the transformer to learn correlations among multimodal features while the single-modal transformer architecture remains largely intact. Extensive experiments on a variety of homogeneous and heterogeneous modalities demonstrate that TokenFusion surpasses state-of-the-art methods in three typical vision tasks: multimodal image-to-image translation, RGB-depth semantic segmentation, and 3D object detection with point clouds and images.
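    A minimal sketch of the token substitution idea described above, under simplified assumptions: tokens whose predicted informativeness falls below a threshold are replaced by a projection of the corresponding token from the other modality. The module below is a toy stand-in, not the paper's TokenFusion implementation; the scoring head, threshold, and shapes are arbitrary.

```python
import torch
import torch.nn as nn

class ToyTokenFusion(nn.Module):
    """Substitute uninformative tokens of one modality with projected tokens
    from the other modality (simplified two-modality sketch)."""

    def __init__(self, dim, threshold=0.5):
        super().__init__()
        # Per-modality scorer that predicts how informative each token is.
        self.score_a = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.score_b = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        # Cross-modal projections used when a token is substituted.
        self.proj_b_to_a = nn.Linear(dim, dim)
        self.proj_a_to_b = nn.Linear(dim, dim)
        self.threshold = threshold  # arbitrary cutoff for "uninformative"

    def forward(self, tokens_a, tokens_b):
        # tokens_a, tokens_b: (batch, num_tokens, dim), assumed token-aligned.
        score_a = self.score_a(tokens_a)             # (B, N, 1)
        score_b = self.score_b(tokens_b)
        mask_a = (score_a < self.threshold).float()  # 1 where token is uninformative
        mask_b = (score_b < self.threshold).float()
        fused_a = tokens_a * (1 - mask_a) + self.proj_b_to_a(tokens_b) * mask_a
        fused_b = tokens_b * (1 - mask_b) + self.proj_a_to_b(tokens_a) * mask_b
        return fused_a, fused_b

# Usage with random features standing in for two aligned modalities.
fusion = ToyTokenFusion(dim=64)
a, b = torch.randn(2, 196, 64), torch.randn(2, 196, 64)
out_a, out_b = fusion(a, b)
print(out_a.shape, out_b.shape)
```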

    Bridged Transformer for Vision and Point Cloud 3D Object Detection

    3D object detection is a crucial research topic in computer vision, which usually uses 3D point clouds as input in conventional setups. Recently, there has been a trend of leveraging multiple sources of input data, such as complementing the 3D point cloud with 2D images that often have richer color and less noise. However, the heterogeneous geometry of the 2D and 3D representations prevents off-the-shelf neural networks from achieving multimodal fusion. To that end, we propose Bridged Transformer (BrT), an end-to-end architecture for 3D object detection. BrT is simple and effective; it learns to identify 3D and 2D object bounding boxes from both points and image patches. A key element of BrT lies in the utilization of object queries for bridging the 3D and 2D spaces, which unifies different sources of data representations in the Transformer. We adopt a form of feature aggregation realized by point-to-patch projections, which further strengthens the correlations between images and points. Moreover, BrT works seamlessly for fusing the point cloud with multi-view images. We experimentally show that BrT surpasses state-of-the-art methods on the SUN RGB-D and ScanNetV2 datasets.
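    The point-to-patch projection mentioned above can be illustrated with a small self-contained sketch: 3D points are projected onto the image plane with a pinhole camera model and assigned to the ViT-style patch they land in, so patch features could then be aggregated per point. The intrinsics, image size, and patch size below are invented for illustration and are not from the paper.

```python
import numpy as np

def project_points_to_patches(points_xyz, intrinsics, image_size, patch_size=16):
    """Project 3D points (camera frame) onto the image plane and return the
    index of the patch each point falls into (-1 if it projects off-image)."""
    fx, fy, cx, cy = intrinsics
    width, height = image_size
    z = np.clip(points_xyz[:, 2], 1e-6, None)           # avoid division by zero
    u = points_xyz[:, 0] * fx / z + cx                   # pixel column
    v = points_xyz[:, 1] * fy / z + cy                   # pixel row
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height) & (points_xyz[:, 2] > 0)
    patches_per_row = width // patch_size
    patch_idx = (v // patch_size).astype(int) * patches_per_row + (u // patch_size).astype(int)
    return np.where(inside, patch_idx, -1)

# Usage: a few synthetic points in front of a hypothetical 640x480 camera.
points = np.array([[0.0, 0.0, 2.0], [1.0, 0.5, 4.0], [-5.0, 0.0, 1.0]])
idx = project_points_to_patches(points, intrinsics=(500.0, 500.0, 320.0, 240.0),
                                image_size=(640, 480))
print(idx)  # patch index per point, -1 for points projecting outside the image
```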

    Styrene-ethylene-butadiene-styrene copolymer/carbon nanotubes composite fiber based strain sensor with wide sensing range and high linearity for human motion detection

    Flexible strain sensors have attracted extensive attention due to their potential applications in wearable electronics and health monitoring. However, it remains a challenge to obtain flexible strain sensors with both high stretchability and a wide linear strain sensing range. In this study, a styrene-ethylene-butadiene-styrene copolymer/carbon nanotubes (SEBS/CNTs) composite fiber, which showed both electrical conductivity and high stretchability, was fabricated through a scalable wet spinning method. The effect of CNTs content on the strain sensing behavior of the SEBS/CNTs fiber-based strain sensor was investigated. The results showed that when the CNTs content reached 7 wt%, the SEBS/CNTs composite fiber was capable of sensing strains as high as 500.2% and showed a wide linear strain sensing range of 0-500.2% with a gauge factor (GF) of 38.57. Combining high stretchability, high linearity, and reliable stability, the SEBS/CNTs composite fiber-based strain sensor was able to monitor the activities of different human body parts, including the hand, wrist, elbow, shoulder, and knee.
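    The gauge factor quoted above is the relative resistance change per unit strain. A minimal sketch, with invented resistance readings chosen only for illustration, shows how GF would typically be obtained from a linear fit over the sensing range.

```python
import numpy as np

# Hypothetical measurements: strain as a fraction (0-5 corresponds to 0-500%)
# and the corresponding resistance readings in ohms. Values are illustrative only.
strain = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
resistance = np.array([10.0, 395.0, 780.0, 1170.0, 1550.0, 1940.0])

r0 = resistance[0]
relative_change = (resistance - r0) / r0          # delta R / R0

# The gauge factor is the slope of (delta R / R0) versus strain over the linear range.
gauge_factor, intercept = np.polyfit(strain, relative_change, 1)
print(f"Gauge factor ~ {gauge_factor:.1f}")
```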

    Application of probabilistic modeling and automated machine learning framework for high-dimensional stress field

    Modern computational methods, involving highly sophisticated mathematical formulations, enable several tasks such as modeling complex physical phenomena, predicting key properties, and design optimization. The higher fidelity of these computer models makes it computationally intensive to query them hundreds of times for optimization, so one usually relies on a simplified model, albeit at the cost of predictive accuracy and precision. Toward this, data-driven surrogate modeling methods have shown considerable promise in emulating the behavior of expensive computer models. However, a major bottleneck in such methods is the inability to deal with high input dimensionality and the need for relatively large datasets. In such problems, the input and the output quantities of interest are high-dimensional tensors. Commonly used surrogate modeling methods for such problems suffer from requirements such as a large number of computational evaluations, which precludes other numerical tasks like uncertainty quantification and statistical analysis. In this work, we propose an end-to-end approach that maps a high-dimensional, image-like input to a high-dimensional output or its key statistics. Our approach uses two main frameworks that perform three steps: a) reduce the input and output from a high-dimensional space to a reduced or low-dimensional space, b) model the input-output relationship in the low-dimensional space, and c) enable the incorporation of domain-specific physical constraints as masks. To accomplish the dimensionality reduction, we leverage principal component analysis, which is coupled with two surrogate modeling methods, namely a) Bayesian hybrid modeling and b) DeepHyper's deep neural networks. We demonstrate the applicability of the approach on a linear elastic stress field problem.
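    As a rough illustration of the pipeline described above, the sketch below compresses a high-dimensional output field with principal component analysis, fits a surrogate in the reduced space (a Gaussian process stands in for the Bayesian hybrid model or DeepHyper network used by the authors), and maps a prediction back to full dimension. The data, dimensions, and model choices are placeholders, and the physical-constraint masking step is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Placeholder data: 200 samples of a 4-dimensional design input x and a
# 4096-dimensional "stress field" output y (standing in for a 64x64 field).
x = rng.uniform(size=(200, 4))
basis = rng.normal(size=(4, 4096))
y = x @ basis + 0.01 * rng.normal(size=(200, 4096))

# Step (a): reduce the high-dimensional output to a few principal components.
pca = PCA(n_components=8)
y_reduced = pca.fit_transform(y)

# Step (b): fit one surrogate per retained component in the low-dimensional space.
surrogates = [GaussianProcessRegressor().fit(x, y_reduced[:, i]) for i in range(8)]

# Prediction: map the low-dimensional surrogate output back to the full field.
x_new = rng.uniform(size=(1, 4))
z_pred = np.column_stack([gp.predict(x_new) for gp in surrogates])
field_pred = pca.inverse_transform(z_pred)
print(field_pred.shape)  # (1, 4096): reconstructed high-dimensional stress field
```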

    Experimental study on evolution behaviors of triaxial-shearing parameters for hydrate-bearing intermediate fine sediment

    The evolution of triaxial shearing parameters is very important for geotechnical response analysis during the extraction of natural gas from hydrate-bearing reservoirs. To explore the effects of hydrate formation/decomposition on the triaxial shearing behavior of intermediate fine sediment, natural beach sand from Qingdao, China, sieved to 0.1-0.85 mm, was used, and a series of triaxial shear tests was carried out. The principle of critical state was first used to explain the mechanism of the strain-softening and/or strain-hardening failure modes. Moreover, an empirical model was provided for the axial-lateral strain relationship and the calculation of the corresponding model parameters. The evolution of the critical strength parameters was analyzed in detail. The results show that the failure mode of the sediment is controlled by several parameters, such as effective confining pressure and hydrate saturation. Different axial-lateral strain model coefficients affect the strain relationships differently, so probing the physical meaning of each coefficient is essential for a further understanding of these relationships. A complex geotechnical response should be expected during the production of natural gas from hydrate-bearing reservoirs because of sudden changes in failure pattern and formation modulus. Further comprehensive study of the critical conditions governing the failure pattern is needed for the proposed promising hydrate-bearing reservoirs.
    Cited as: Li, Y., Liu, C., Liu, L., Sun, J., Liu, H., Meng, Q. Experimental study on evolution behaviors of triaxial-shearing parameters for hydrate-bearing intermediate fine sediment. Advances in Geo-Energy Research, 2018, 2(1): 43-52, doi: 10.26804/ager.2018.01.0
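    The abstract does not give the form of the empirical axial-lateral strain model, so the sketch below only illustrates the general workflow of fitting such a relationship and extracting its coefficients, using an arbitrary hyperbolic form and synthetic data; neither reflects the actual model or measurements in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lateral_strain_model(axial_strain, a, b):
    """Arbitrary hyperbolic form used only to illustrate parameter fitting;
    the actual model in the paper may differ."""
    return axial_strain / (a + b * axial_strain)

# Synthetic "measurements" of axial vs lateral strain (percent), illustrative only.
axial = np.linspace(0.0, 15.0, 30)
lateral = (lateral_strain_model(axial, 2.0, 0.8)
           + 0.05 * np.random.default_rng(1).normal(size=axial.size))

# Fit the model and report its coefficients.
params, _ = curve_fit(lateral_strain_model, axial, lateral, p0=(1.0, 1.0))
print("fitted model coefficients a, b:", params)
```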

    Hybrid strategies for efficient intra prediction in spatial SHVC

    With multi-layer encoding and inter-layer prediction, Spatial Scalable High Efficiency Video Coding (SSHVC) has extremely high coding complexity. Speeding up its coding is crucial to promote widespread and cost-effective SSHVC applications. Specifically, we first reveal that the average RD cost of the Inter-layer Reference (ILR) mode differs from that of the Intra mode, but both follow a Gaussian distribution. Based on this observation, we apply the classic Gaussian Mixture Model and Expectation Maximization to determine whether ILR mode is the best mode, thereby skipping Intra mode. Second, when coding units (CUs) in the enhancement layer use Intra mode, this indicates that very simple texture is present. We investigate their Directional Mode (DM) distribution, divide all DMs into three classes, and then develop class-specific methods to progressively predict the best DMs. Third, by jointly considering rate-distortion costs, residual coefficients, and neighboring CUs, we propose to employ a Conditional Random Fields model to terminate depth selection early. Experimental results demonstrate that the proposed algorithm can significantly improve coding speed with negligible coding efficiency losses.
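    A toy sketch of the first strategy described above, under assumed data: RD costs of ILR and Intra modes are modeled as a two-component Gaussian mixture fitted with EM, and the Intra mode check is skipped when a CU's ILR cost is confidently assigned to the low-cost component. The costs, threshold, and decision rule are illustrative, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical RD costs collected from already-coded CUs: ILR mode tends to be
# cheaper than Intra mode, and each population is roughly Gaussian.
ilr_costs = rng.normal(loc=1500.0, scale=200.0, size=400)
intra_costs = rng.normal(loc=2600.0, scale=300.0, size=400)
training_costs = np.concatenate([ilr_costs, intra_costs]).reshape(-1, 1)

# Fit a two-component Gaussian mixture with EM.
gmm = GaussianMixture(n_components=2, random_state=0).fit(training_costs)
low_cost_component = int(np.argmin(gmm.means_.ravel()))

def skip_intra_check(ilr_rd_cost, confidence=0.95):
    """Skip the Intra mode evaluation when the ILR RD cost is confidently
    assigned to the low-cost mixture component (illustrative decision rule)."""
    posterior = gmm.predict_proba(np.array([[ilr_rd_cost]]))[0, low_cost_component]
    return posterior >= confidence

print(skip_intra_check(1400.0))  # likely True: Intra check can be skipped
print(skip_intra_check(2500.0))  # likely False: Intra mode is still evaluated
```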