725 research outputs found

    Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic Change

    Code produced for this paper is available at: https://github.com/Garrafao/TemporalReferencing
    State-of-the-art models of lexical semantic change detection suffer from noise stemming from vector space alignment. We empirically test the Temporal Referencing method for lexical semantic change and show that, by avoiding alignment, it is less affected by this noise. We also show that, trained on a diachronic corpus, a skip-gram with negative sampling architecture with temporal referencing outperforms alignment models on a synthetic task as well as on a manual test set. In addition, we introduce a principled way to simulate lexical semantic change and to systematically control for possible biases. Peer reviewed.
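    The Temporal Referencing idea summarized above can be sketched in a few lines: occurrences of the target words are tagged with their time period while context words stay untagged, so that all periods share one vector space and no post-hoc alignment is needed. The following is a minimal illustration with gensim's skip-gram with negative sampling, using toy data and a hypothetical target set; it is not the authors' released code (see the repository linked above).

    # Minimal Temporal Referencing sketch (toy data and target set are assumptions).
    from gensim.models import Word2Vec

    TARGETS = {"plane"}  # hypothetical target word under study

    def temporal_reference(sentences, period):
        """Tag target words with their period; context words stay untagged,
        so both periods are trained in a single shared vector space."""
        return [[f"{w}_{period}" if w in TARGETS else w for w in sent]
                for sent in sentences]

    # Toy tokenized sentences standing in for two corpus periods (assumed data).
    sents_t1 = [["the", "plane", "surface", "was", "perfectly", "flat"]] * 50
    sents_t2 = [["the", "plane", "departed", "from", "the", "airport"]] * 50
    corpus = temporal_reference(sents_t1, "t1") + temporal_reference(sents_t2, "t2")

    # Skip-gram with negative sampling (sg=1, negative=5), as named in the abstract.
    model = Word2Vec(corpus, vector_size=50, window=5, sg=1, negative=5, min_count=1)

    # Cosine distance between the period-specific vectors serves as a change score.
    change = 1.0 - model.wv.similarity("plane_t1", "plane_t2")
    print(f"change score for 'plane': {change:.3f}")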

    Papillary thyroid carcinoma tall cell variant shares accumulation of mitochondria, mitochondrial DNA mutations, and loss of oxidative phosphorylation complex I integrity with oncocytic tumors

    Papillary thyroid carcinoma tall cell variant (PTC-TCV), a form of PTC regarded as an aggressive subtype, shares several morphologic features with oncocytic tumors. Pathogenic homoplasmic/highly heteroplasmic somatic mitochondrial DNA (mtDNA) mutations, usually affecting oxidative phosphorylation (OXPHOS) complex I subunits, are hallmarks of oncocytic cells. To clarify the relationship between PTC-TCV and oncocytic thyroid tumors, 17 PTC-TCV and 16 PTC non-TCV controls (cPTC) were subjected to: (1) transmission electron microscopy (TEM) to assess mitochondria accumulation, (2) next-generation sequencing to analyze mtDNA and nuclear genes frequently mutated in thyroid carcinoma, and (3) immunohistochemistry (IHC) for prohibitin and complex I subunit NDUFS4 to evaluate OXPHOS integrity. TEM showed replacement of cytoplasm by mitochondria in PTC-TCV but not in cPTC cells. All 17 PTC-TCV had at least one mtDNA mutation, totaling 21 mutations; 3/16 cPTC (19%) had mtDNA mutations (p < 0.001). PTC-TCV mtDNA mutations were homoplasmic/highly heteroplasmic, with 16/21 (76%) mapping within mtDNA-encoded complex I subunits. MtDNA mutations in cPTC were homoplasmic in 2 cases and at low heteroplasmy in the third case, with 2/3 mapping to mtDNA-encoded complex I subunits; 2/3 cPTC with mtDNA mutations had small tall cell subpopulations. PTC-TCV showed strong prohibitin expression and cPTC low-level expression, consistent with massive and limited mitochondrial content, respectively. All 17 PTC-TCV showed NDUFS4 loss (partial or complete) and 3 of 16 cPTC (19%) had (partial) NDUFS4 loss, consistent with lack of complex I integrity in PTC-TCV (p < 0.001). IHC loss of NDUFS4 was associated with mtDNA mutations (p < 0.001). Four BRAF V600E-mutated PTCs had loss of NDUFS4 expression limited to neoplastic cell subpopulations with tall cell features, indicating that PTCs first acquire BRAF V600E and then mtDNA mutations. Similar to oncocytic thyroid tumors, PTC-TCV is characterized by mtDNA mutations, massive accumulation of mitochondria, and loss of OXPHOS integrity. IHC loss of NDUFS4 can be used as a surrogate marker for OXPHOS disruption and to reliably diagnose PTC-TCV.
    MDL was supported by the Associazione Italiana per la Ricerca sul Cancro (AIRC) 'Bruna Martelli' Fellowship.
    Authors: Tsybrovskyy, Oleksiy; De Luise, Monica; Biase, Dario; Caporali, Leonardo; Fiorini, Claudio; Gasparre, Giuseppe; Carelli, Valerio; Hackl, Dominik; Imamovic, Larisa; Haim, Silke; Sobrinho-SimĂ”es, Manuel; Tallini, Giovanni

    Fully digitalized 3D treatment planning for cleft lip and palate (LKGS-3D)

    With advancing technological and informational development, the ideal of a fully digitalized clinical routine is steadily drawing closer to reality. At first there was only the option of an electronic patient record; this was joined by a wide range of digital imaging options and finally complemented by the goal of a complete digital workflow. Until recently, the planning of the interdisciplinary orthodontic and oral-maxillofacial surgical care of patients with cleft lip and palate (LKGS) at UniversitĂ€tsklinikum Dresden was still carried out largely in analog form; fully digitalized treatment planning involving all participating clinicians did not take place. The goal of the project was therefore to create a digital platform for the interdisciplinary dental care of LKGS patients. First, the previously required alginate impression and the subsequent fabrication of a plaster model were replaced by an intraoral 3D scan of the patient's dental arches; from this scan, the required feeding plates can now be produced by 3D printing. The second step was the acquisition of 3D images of the facial soft tissue with an extraoral 3D scanner. As a third step, digital volume tomography (DVT, i.e. cone-beam CT) scans were acquired for a 3D representation of the skull and jaw skeleton. These image-based datasets were then merged into a "digital twin" (a virtual 3D model combining the DVT scan with the intra- and extraoral 3D scans), creating for the first time a comprehensive 3D model of the oral and maxillofacial region, including important information on the temporomandibular joint and the adjacent soft tissue. This virtual model now forms the basis for treatment planning and for planning further dental and medical care. The project thus achieved the complete digitalization of diagnostics, the establishment of a fusion platform, and data exchange between the university hospital and private practice.
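    The fusion step described above, merging the intraoral and extraoral surface scans with the DVT data into a "digital twin", is at its core a surface-registration problem. The sketch below shows one way such a fusion could look using Open3D's rigid ICP registration; the file names, point counts and distance threshold are illustrative assumptions, and the abstract does not state which software the project actually used.

    # Hypothetical fusion sketch: register an intraoral surface scan to a bone surface
    # segmented from the DVT (cone-beam CT) data and export a combined "digital twin" mesh.
    # File names and parameters are assumptions for illustration only.
    import numpy as np
    import open3d as o3d

    intraoral = o3d.io.read_triangle_mesh("intraoral_scan.stl")      # assumed export
    bone = o3d.io.read_triangle_mesh("dvt_bone_surface.stl")         # assumed segmentation

    # Sample point clouds from the meshes for registration.
    src = intraoral.sample_points_uniformly(number_of_points=50000)
    tgt = bone.sample_points_uniformly(number_of_points=50000)

    # A real pipeline would derive the initial alignment from anatomical landmarks;
    # the identity matrix is used here as a placeholder.
    init = np.identity(4)

    # Rigid point-to-point ICP; the correspondence threshold is in the scan's units (e.g. mm).
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=2.0, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # Apply the estimated transform and merge the surfaces into one model.
    intraoral.transform(result.transformation)
    digital_twin = intraoral + bone
    o3d.io.write_triangle_mesh("digital_twin.ply", digital_twin)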

    Fully digitalized 3D treatment planning for cleft lip and palate

    With advancing technological and informational development, the ideal of a fully digitalized clinical routine is steadily drawing closer to reality. At first there was only the option of an electronic patient record; this was joined by a wide range of digital imaging options and finally complemented by the goal of a complete digital workflow. The planning of the interdisciplinary orthodontic and oral-maxillofacial surgical care of patients with cleft lip and palate (LKGS) is currently still carried out largely in analog form; fully digitalized treatment planning and approval with cross-sectoral involvement of all participating clinicians does not take place. Newly developed digital orthodontic and surgical products, unthinkable only a few years ago, substantially support the digital workflow. 3D scanners and 3D photographs produce digital models whose data must be processed by software in order to be integrated into the digital workflow. To ensure simple and structured access to all 3D data, the idea arose to digitize the accumulating data and documents and to store and process them in a database developed for this purpose. The consistent use of digital 3D analyses, and dispensing with the laborious preparation of diagnostics and treatment planning from a combination of plaster models and 2D data (e.g. radiographs), lead to a fundamental and forward-looking change in the treatment planning process. Processes that are costly in terms of space and money are eliminated, and treatment plans can be drawn up more quickly because all necessary documents are bundled in one system. In addition, the radiation exposure of patients is substantially reduced, since duplicate diagnostics from repeated radiographs, DVT and CT scans in different departments are avoided. Communication, exchange, and consultative collaboration between the clinicians take place via CCS TELEHEALTH OSTSACHSEN (THOS), the first open, supra-regional telemedicine platform in Europe for improving medical care. [... from the introduction]

    Computational approaches to semantic change

    Semantic change (how the meanings of words change over time) has preoccupied scholars since well before modern linguistics emerged in the late 19th and early 20th centuries, ushering in a new methodological turn in the study of language change. Compared to changes in sound and grammar, semantic change is the least understood. Ever since, the study of semantic change has progressed steadily, accumulating a vast store of knowledge for over a century, encompassing many languages and language families. Historical linguists also realized the potential of computers as research tools early on, with papers at the very first international conferences on computational linguistics in the 1960s. Such computational studies still tended to be small-scale, method-oriented, and qualitative. Recent years, however, have witnessed a sea change in this regard. Big-data, empirical, quantitative investigations are now coming to the forefront, enabled by enormous advances in storage capability and processing power. Diachronic corpora have grown beyond imagination, defying exploration by traditional manual qualitative methods, and language technology has become increasingly data-driven and semantics-oriented. These developments present a golden opportunity for the empirical study of semantic change over both long and short time spans. A major challenge at present is to integrate the hard-earned knowledge and expertise of traditional historical linguistics with cutting-edge methodology explored primarily in computational linguistics. The idea for the present volume came out of a concrete response to this challenge. The 1st International Workshop on Computational Approaches to Historical Language Change (LChange'19), at ACL 2019, brought together scholars from both fields. This volume offers a survey of this exciting new direction in the study of semantic change, a discussion of the many remaining challenges that we face in pursuing it, and considerably updated and extended versions of a selection of the contributions to the LChange'19 workshop, addressing both more theoretical problems (e.g., the discovery of "laws of semantic change") and practical applications, such as information retrieval in longitudinal text archives.
