10 research outputs found

    Effects of zero morphology on syncretism and allomorphy in Western Armenian verbs

    Get PDF
    Verbs in Western Armenian (Indo-European) inflect for both subject agreement and tense. Subject and tense marking is often fused, which makes segmentation difficult. We show that, despite surface fusion, verbal inflection in Western Armenian is fundamentally agglutinative. By segmenting subject and tense suffixes across the verbal paradigm, we capture syncretic patterns and other interactions between inflectional slots that a fusional account does not. Our analysis requires limited but systematic use of zero morphs. Our agglutinative model of Western Armenian verbs reveals that inwardly-sensitive, morphologically-conditioned allomorphy takes priority over its outwardly-sensitive counterpart.
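The abstract's central mechanism (agglutinative slots plus zero morphs) can be sketched schematically; the suffix shapes below are invented placeholders, not actual Western Armenian forms:

```python
# Schematic agglutinative inflection: stem + tense slot + agreement slot.
# All morph shapes here are hypothetical placeholders for illustration.
TENSE = {"present": "-PRS-", "past": "-PST-"}
AGREEMENT = {"1sg": "-1", "3sg": ""}  # 3sg modeled as a zero morph (empty exponent)

def inflect(stem: str, tense: str, person: str) -> str:
    """Concatenate one exponent per inflectional slot."""
    return stem + TENSE[tense] + AGREEMENT[person]

print(inflect("stem", "past", "3sg"))  # zero agreement morph leaves only the tense suffix
```

The point of the sketch is that a zero morph still occupies its slot, so slot-by-slot segmentation stays uniform even when one exponent is phonologically empty.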

    Antioxidant and cytotoxic effects of Prunus spinosa L. extract on various cancer cell lines

    Get PDF
    Objective: Blackthorn (Prunus spinosa L., Rosaceae) is a shrub whose fruits are consumed as food in Turkey. This study aimed to evaluate the antioxidant activity of the methanol extract of P. spinosa and its cytotoxic effects on cancer cell lines. Method: The methanol extract of P. spinosa fruit was evaluated for in vitro cytotoxic activity on glioblastoma multiforme (GBM) brain cancer (LN229, U87, and T98G) and pancreatic cancer (PANC-1 and AsPC-1) cell lines. Cell viability assays were performed by calculating the percentage of viable cells with a luminescence system, and antioxidant activity was measured spectrophotometrically with the ABTS and DPPH radical scavenging assays. Differences were considered statistically significant at p*<0.001 and p**<0.0005 according to an unpaired Student's t-test. Results: The methanol extract of P. spinosa fruit had a total phenolic content of 2548±18 mg GAE/100 g and showed moderate antioxidant activity (0.1896±0.1143 and 0.0729±0.0348) in the ABTS• and DPPH• assays. Conclusion: Significant cytotoxic activity, with 50-63% cell viability, was determined in GBM brain cancer cells, while no cytotoxicity was observed in the pancreatic cancer cell lines PANC-1 and AsPC-1. The results of this study show that the methanol extract of P. spinosa fruit has significant antioxidant capacity and leads to a statistically significant decrease in the viability of glioblastoma brain cancer cells.
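The significance thresholds above come from an unpaired Student's t-test; a minimal sketch of that statistic on hypothetical viability data (the sample values below are illustrative, not from the study):

```python
import math

def unpaired_t(a, b):
    """Unpaired (equal-variance) Student's t statistic for two samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Pooled variance across both samples
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical % viability: extract-treated GBM cells vs. untreated controls
treated = [52.0, 58.5, 61.0, 55.2]
control = [99.1, 101.4, 98.7, 100.2]
print(unpaired_t(treated, control))  # large negative t: treated viability is lower
```

The p-value is then read off the t distribution with na+nb-2 degrees of freedom; in practice a library routine such as SciPy's `ttest_ind` handles both steps.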

    Potential role of Hsa-Mir-8072 in prostate cancer DU 145 cells

    No full text
    Objective: In this study, we aimed to compare miRNA expression between the human prostate cancer cell line (DU145) and the normal prostate epithelial cell line (RWPE) and to investigate its possible role in cancer development. Materials and Methods: The human prostate epithelial cell line (RWPE) and the prostate cancer cell line (DU145) were acquired from the American Type Culture Collection (ATCC). Both cell lines were maintained in RPMI 1640 medium. Total RNA was isolated and fragmented, and adapters were ligated to prepare an RNA library for whole-transcriptome experiments; after library quantitation, sequencing was performed on a NextSeq 500 (Illumina). Bioinformatics analyses, including mapping, clustering, and relative gene expression measurement, were done with Genomics Workbench v8 software against the GRCh38 reference sequence. Results: When normal prostate epithelial cells (RWPE) were compared with prostate cancer cells (DU145), the miRNA hsa-mir-8072 was significantly (p<0.05) up-regulated in DU145 cells. Conclusion: This result suggests that hsa-mir-8072 may act as an oncogenic miRNA in prostate cancer.
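Up-regulation calls of this kind are typically expressed as a log2 fold change between normalized expression values; a minimal sketch with made-up counts (the values and the pseudocount are illustrative assumptions, not the study's actual pipeline):

```python
import math

def log2_fold_change(cancer_count, normal_count, pseudocount=1.0):
    """log2 ratio of normalized counts; the pseudocount avoids division by zero."""
    return math.log2((cancer_count + pseudocount) / (normal_count + pseudocount))

# Hypothetical normalized counts for hsa-mir-8072 in DU145 vs. RWPE
lfc = log2_fold_change(cancer_count=480.0, normal_count=120.0)
print(round(lfc, 3))  # ~2, i.e. roughly 4-fold higher in the cancer line
```

A differential-expression tool additionally tests whether such a ratio is significant given replicate variability, which is where the reported p<0.05 comes from.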

    Case Reports Presentations

    No full text

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Get PDF
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-benc
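Benchmarks of this kind score a model over collections of input/target examples; a minimal sketch of exact-match scoring over such pairs (the toy task and lookup-table "model" below are invented for illustration, and real BIG-bench tasks use richer formats and metrics):

```python
def exact_match_accuracy(examples, predict):
    """Fraction of examples where the model's output equals the target string."""
    hits = sum(1 for ex in examples if predict(ex["input"]) == ex["target"])
    return hits / len(examples)

# Invented toy task and a trivial "model" backed by a lookup table
task = [
    {"input": "2 + 2 =", "target": "4"},
    {"input": "3 + 5 =", "target": "8"},
    {"input": "7 + 1 =", "target": "9"},
]
lookup = {"2 + 2 =": "4", "3 + 5 =": "8", "7 + 1 =": "7"}
print(exact_match_accuracy(task, lambda prompt: lookup.get(prompt, "")))  # 2 of 3 correct
```

Tasks with brittle metrics, as the abstract notes, are exactly those where a scorer this strict flips sharply once the model starts emitting targets verbatim.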