25 research outputs found

    Distal hinge of plasminogen activator inhibitor-1 involves its latency transition and specificities toward serine proteases

    BACKGROUND: Plasminogen activator inhibitor-1 (PAI-1) spontaneously converts from an inhibitory into a latent form. The specificity of PAI-1 is determined mainly by its reactive site (Arg346-Met347), which interacts with the active-site serine residue of tissue-type plasminogen activator (tPA) with concomitant formation of an SDS-stable complex. Other sites may also play a role in determining the specificity of PAI-1 toward serine proteases. RESULTS: To better understand the role of the distal hinge in PAI-1 specificity toward serine proteases and in its conformational transition, wild-type PAI-1 (wtPAI-1) and its mutants were expressed in a baculovirus system. wtPAI-1 was found to be about 12-fold more active than fibrosarcoma PAI-1. Single-site mutants within the Asp355-Arg356-Pro357 segment of PAI-1 yield guanidine-activatable inhibitors that (a) can still form SDS-stable complexes with tPA and urokinase plasminogen activator (uPA), and (b) have inhibition rate constants toward plasminogen activators that resemble those of the fibrosarcoma inhibitor. More importantly, the latency conversion rate of these mutants was ~3–4-fold faster than that of wtPAI-1. We also tested whether Glu351 is important for serine protease specificity. The functional stability of wtPAI-1, Glu351Ala and Glu351Arg was about 18 ± 5, 90 ± 8 and 14 ± 3 minutes, respectively, which correlated well with both their specific activities (84 ± 15, 112 ± 18 and 68 ± 9 U/µg, respectively) and the amount of SDS-stable complex formed with tPA after denaturation in guanidine-HCl and dialysis against 50 mM sodium acetate at 4°C. The second-order rate constants for inhibition of uPA, plasmin and thrombin by Glu351Ala and Glu351Arg were about 2- to 10-fold higher than those of wtPAI-1, whereas the rate constant for tPA was unchanged. CONCLUSION: The Asp355-Pro357 segment and Glu351 in the distal hinge are involved in maintaining the inhibitory conformation of PAI-1. Glu351 is a specificity determinant of PAI-1 toward uPA, plasmin and thrombin, but not toward tPA.

    Introducing BEREL: BERT Embeddings for Rabbinic-Encoded Language

    We present a new pre-trained language model (PLM) for Rabbinic Hebrew, termed Berel (BERT Embeddings for Rabbinic-Encoded Language). Whilst other PLMs exist for processing Hebrew texts (e.g., HeBERT, AlephBert), they are all trained on modern Hebrew, which diverges substantially from Rabbinic Hebrew in its lexicographical, morphological, syntactic and orthographic norms. We demonstrate the superiority of Berel on Rabbinic texts via a challenge set of Hebrew homographs. We release the new model and the homograph challenge set for unrestricted use.
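    As a minimal sketch of how a released BERT-style PLM like Berel could be queried with the Hugging Face transformers library, the snippet below scores candidate fillers for a masked token, one way to probe which reading of a homograph the model prefers in context. The model identifier and the example sentence are assumptions for illustration, not details given in the abstract.

        # Minimal sketch (Python + Hugging Face transformers).
        # The model identifier "dicta-il/BEREL" is an assumption; substitute
        # whatever name the released checkpoint actually carries.
        from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

        model_id = "dicta-il/BEREL"  # assumed repository name
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForMaskedLM.from_pretrained(model_id)

        # Fill-mask scoring: the model ranks vocabulary items for the masked slot,
        # which can be used to check which reading it prefers for a given context.
        fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
        sentence = f"כל ישראל יש להם חלק לעולם {tokenizer.mask_token}"  # illustrative Mishnaic sentence
        for candidate in fill(sentence):
            print(candidate["token_str"], round(candidate["score"], 4))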

    A cAMP-triggered release of a hormone-like peptide

    Preparations of the catalytic subunit of cAMP-dependent protein kinase from rabbit skeletal muscle, which appear to be homogeneous by SDS-polyacrylamide gel electrophoresis, were often found to contain a hormone-like factor (HLF) which causes an immediate rise, then a decline, of intracellular cAMP in a B-lymphoma cell line. Active HLF is released when the fractions that contain it in an inactive form are incubated with cAMP prior to chromatography, or passed through an immobilized cAMP column. HLF seems to be a peptide: it loses its cell-stimulating capability after proteolysis and has an apparent molecular mass of 2.2–2.5 kDa.

    Large Pre-Trained Models with Extra-Large Vocabularies: A Contrastive Analysis of Hebrew BERT Models and a New One to Outperform Them All

    We present a new pre-trained language model (PLM) for modern Hebrew, termed AlephBERTGimmel, which employs a much larger vocabulary (128K items) than previous Hebrew PLMs. We perform a contrastive analysis of this model against all previous Hebrew PLMs (mBERT, heBERT, AlephBERT) and assess the effects of larger vocabularies on task performance. Our experiments show that larger vocabularies lead to fewer splits, and that reducing splits improves model performance across different tasks. All in all, this new model achieves new SOTA results on all available Hebrew benchmarks, including Morphological Segmentation, POS Tagging, Full Morphological Analysis, NER, and Sentiment Analysis. We consequently advocate for PLMs that are larger not only in the number of layers or the amount of training data, but also in their vocabulary. We release the new model publicly for unrestricted use.
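    As a rough sketch of the vocabulary-size effect described above, the snippet below tokenizes the same Hebrew sentence with two BERT tokenizers and compares how many word pieces each produces; fewer pieces for the same sentence means fewer splits per word. The model identifiers and the example sentence are assumptions for illustration, not names given in the abstract.

        # Minimal sketch (Python + Hugging Face transformers) comparing word-piece
        # splits under a smaller vocabulary versus the larger 128K-item one.
        from transformers import AutoTokenizer

        models = {
            "AlephBERT (smaller vocab)": "onlplab/alephbert-base",            # assumed identifier
            "AlephBERTGimmel (128K vocab)": "dicta-il/alephbertgimmel-base",  # assumed identifier
        }

        sentence = "הנסיעה לירושלים התארכה בגלל ההפגנות"  # illustrative modern Hebrew sentence

        for name, model_id in models.items():
            tok = AutoTokenizer.from_pretrained(model_id)
            pieces = tok.tokenize(sentence)
            print(f"{name}: {len(pieces)} word pieces -> {pieces}")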