
    Rise and Fall of an Anti-MUC1 Specific Antibody

    So far, human antibodies with good affinity and specificity for MUC1, a transmembrane protein overexpressed on breast cancers and ovarian carcinomas and thus a promising target for therapy, have been very difficult to generate. A human scFv antibody was isolated from an immune library derived from breast cancer patients immunised with MUC1. The anti-MUC1 scFv reacted with tumour cells in more than 80% of 228 tissue sections of mammary carcinoma samples, while showing very low reactivity with a large panel of non-tumour tissues. By mutagenesis and phage display, the affinity of the scFvs was increased up to 500-fold, to 5.7 × 10⁻¹⁰ M. Serum half-life was improved from below 1 day to more than 4 weeks and correlated with the dimerisation tendency of the individual scFvs. The scFvs bound to the T47D and MCF-7 mammary cancer cell lines and were recloned into the scFv-Fc and IgG formats, resulting in a decrease in affinity for one binder. The IgG variants with the highest affinity were tested in mouse xenograft models using MCF-7 and OVCAR tumour cells. However, the experiments showed no significant decrease in tumour growth or increase in survival rates. To study the reasons for the failure of the xenograft experiments, ADCC was analysed in vitro using MCF-7 and OVCAR3 target cells, revealing low ADCC, possibly due to internalisation, as detected for MCF-7 cells.

    Antibody phage display starting from immune libraries, followed by affinity maturation, is a powerful strategy for generating high-affinity human antibodies to difficult targets, shown here by the creation of a highly specific antibody with subnanomolar affinity to a very small epitope of only four amino acids. Despite these "best in class" binding parameters, the therapeutic success of this antibody was prevented by the target biology.

    Investigating cross-lingual training for offensive language detection.

    Platforms that feature user-generated content (social media, online forums, newspaper comment sections, etc.) have to detect and filter offensive speech within large, fast-changing datasets. While many automatic methods have been proposed and achieve good accuracies, most of these focus on the English language and are hard to apply directly to languages in which few labelled datasets exist. Recent work has therefore investigated the use of cross-lingual transfer learning to solve this problem, training a model in a well-resourced language and transferring it to a less-resourced target language; but performance has so far been significantly less impressive. In this paper, we investigate the reasons for this performance drop via a systematic comparison of pre-trained models and intermediate training regimes on five different languages. We show that using a better pre-trained language model results in a large gain in overall performance and in zero-shot transfer, and that intermediate training on other languages is effective when little target-language data is available. We then use multiple analyses of classifier confidence and language model vocabulary to shed light on exactly where these gains come from and to gain insight into the sources of the most typical mistakes.
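    The zero-shot transfer protocol described above — train a classifier on labelled data in a well-resourced source language, then evaluate it unchanged on a less-resourced target language — can be illustrated with a deliberately minimal toy sketch. This is not the paper's method (which uses pre-trained language models): it substitutes a language-agnostic feature space of hashed character trigrams and a simple perceptron, and all data, function names, and parameters below are invented for illustration.

    ```python
    import zlib

    DIM = 512  # size of the hashed feature space

    def featurize(text):
        """Hashed character-trigram counts: a crude feature space shared
        across languages, standing in for multilingual embeddings."""
        v = [0.0] * DIM
        t = text.lower()
        for i in range(len(t) - 2):
            # zlib.crc32 is deterministic across runs (unlike hash())
            v[zlib.crc32(t[i:i + 3].encode()) % DIM] += 1.0
        return v

    def train_perceptron(data, epochs=30):
        """Train a binary perceptron on (text, label) pairs."""
        w, b = [0.0] * DIM, 0.0
        for _ in range(epochs):
            for text, label in data:
                x = featurize(text)
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                if pred != label:
                    delta = label - pred  # +1 or -1
                    for j in range(DIM):
                        w[j] += delta * x[j]
                    b += delta
        return w, b

    def predict(w, b, text):
        x = featurize(text)
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    # Toy "well-resourced" source-language training data (1 = offensive).
    english = [
        ("you are a complete idiot", 1),
        ("what an idiot move", 1),
        ("thanks for the helpful answer", 0),
        ("great article, well written", 0),
    ]

    # Toy "less-resourced" target-language data, never seen in training.
    german = [
        ("du bist ein idiot", 1),
        ("danke fuer die hilfe", 0),
    ]

    w, b = train_perceptron(english)
    zero_shot = [predict(w, b, text) for text, _ in german]
    print("zero-shot predictions on German:", zero_shot)
    ```

    The sketch only transfers surface overlap between languages; the abstract's point is precisely that real transfer needs a stronger shared representation (a better pre-trained language model) and, when a little target-language data exists, intermediate training on it.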