Self-protecting motivation, indexed by self-threat, modifies retrieval-induced forgetting and confidence in employment decision bias against out-group targets
Human memory is malleable to both social and motivational factors and holds information relevant to workplace decisions. Retrieval-induced forgetting (RIF) describes a phenomenon in which retrieval practice impairs subsequent memory for related (unpracticed) information. We report two RIF experiments. Chinese participants received a mild self-threat manipulation (Experiment 2) or not (Experiment 1) before an ethnicity-RIF task that involved practicing negative traits of either an in-group (Chinese) or an out-group (Japanese) target. After a subsequent memory test, participants selected their preferred applicant for employment. RIF scores correspond to forgetting of unpracticed positive traits of one target (Rp−) relative to recall of practiced negative traits of the other target (Rp+). Enhanced forgetting of positive traits was found in both experiments for both targets. Across experiments, a significant target-by-threat interaction showed that target ethnicity modified RIF (an ethnicity-RIF effect): inducing a self-protecting motivation enhanced RIF effects for the out-group (Japanese) target. In the subsequent employment decision, there was a strong bias to select the in-group target, and confidence in these decisions was associated with RIF scores. This study suggests that rehearsing negative traits of minority applicants can affect metacognitive aspects of employment decisions, possibly by shaping the schemas available to the majority (in-group) employer. To disrupt systemic racism, recruitment practices should aim to offset the human motivation to protect oneself when exposed to a relatively mild threat to self-esteem. Discussing the negative traits of minority applicants is a critical, and sensitive, aspect of decision-making that warrants careful practice. These data suggest that individuals who recruit should be reminded of their personal strengths in this context, not their vulnerabilities, to secure their decision-making for fairer recruitment practice.
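Read literally, the RIF score contrasts recall across the two practice conditions. One plausible formalization of the abstract's description, offered only as an illustrative reading and not as the authors' stated formula, is:

```latex
\mathrm{RIF} = P(\text{recall} \mid \text{Rp}^{+}) - P(\text{recall} \mid \text{Rp}^{-})
```

where higher scores would indicate greater forgetting of the unpracticed positive traits relative to the practiced negative traits.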
Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs
In comparative linguistics, colexification refers to the phenomenon of a lexical form conveying two or more distinct meanings. Existing work on colexification patterns relies on annotated word lists, limiting scalability and usefulness in NLP. In contrast, we identify colexification patterns of more than 2,000 concepts across 1,335 languages directly from an unannotated parallel corpus. We then propose simple and effective methods to build multilingual graphs from the colexification patterns: ColexNet and ColexNet+. ColexNet's nodes are concepts and its edges are colexifications. In ColexNet+, concept nodes are additionally linked through intermediate nodes, each representing an ngram in one of 1,334 languages. We use ColexNet+ to train \overrightarrow{\mbox{ColexNet+}}, high-quality multilingual embeddings that are well-suited for transfer learning. In our experiments, we first show that ColexNet achieves high recall on CLICS, a dataset of crosslingual colexifications. We then evaluate \overrightarrow{\mbox{ColexNet+}} on roundtrip translation, sentence retrieval and sentence classification, and show that our embeddings surpass several transfer learning baselines. This demonstrates the benefits of using colexification as a source of information in multilingual NLP. (EMNLP 2023 Findings)
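To make the two graph structures concrete, here is a minimal sketch with networkx that builds toy versions of ColexNet (concept nodes joined by colexification edges) and ColexNet+ (concepts linked through language-tagged ngram nodes). The example concepts and the Danish ngram are illustrative assumptions, not the paper's data, and the embedding step is only indicated in a comment.

```python
import networkx as nx

# ColexNet: nodes are concepts; an edge means some lexical form
# expresses both concepts (a colexification) in at least one language.
colexnet = nx.Graph()
colexnet.add_edge("TREE", "WOOD")   # e.g. colexified by Danish "træ"
colexnet.add_edge("HAND", "ARM")

# ColexNet+: concept nodes are linked through intermediate ngram nodes,
# each ngram belonging to one language (here tagged with an ISO 639-3 code).
colexnet_plus = nx.Graph()
for concept in ("TREE", "WOOD"):
    colexnet_plus.add_node(concept, kind="concept")
colexnet_plus.add_node("træ@dan", kind="ngram")  # hypothetical ngram node
colexnet_plus.add_edge("TREE", "træ@dan")
colexnet_plus.add_edge("WOOD", "træ@dan")

# Multilingual embeddings (such as the paper's ColexNet+ vectors) could
# then be obtained by running a node-embedding method over this graph.
```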
Cross-Lingual Transfer of Natural Language Processing Systems
Accurate natural language processing systems rely heavily on annotated datasets. In the absence of such datasets, transfer methods can help to develop a model by transferring annotations from one or more rich-resource languages to the target language of interest. These methods are generally divided into two approaches: 1) annotation projection from translation data, aka parallel data, using supervised models in rich-resource languages, and 2) direct model transfer from annotated datasets in rich-resource languages.
In this thesis, we demonstrate different methods for transfer of dependency parsers and sentiment analysis systems. We propose an annotation projection method that performs well in the scenarios for which a large amount of in-domain parallel data is available. We also propose a method which is a combination of annotation projection and direct transfer that can leverage a minimal amount of information from a small out-of-domain parallel dataset to develop highly accurate transfer models. Furthermore, we propose an unsupervised syntactic reordering model to improve the accuracy of dependency parser transfer for non-European languages. Finally, we conduct a diverse set of experiments for the transfer of sentiment analysis systems in different data settings.
Our contributions are summarized as follows:
* We develop accurate dependency parsers using parallel text in an annotation projection framework. We make use of the fact that the density of word alignments is a valuable indicator of reliability in annotation projection (see the sketch following this list).
* We develop accurate dependency parsers in the absence of a large amount of parallel data. We use the Bible data, which is orders of magnitude smaller than a conventional parallel dataset, to provide minimal cues for creating cross-lingual word representations. Our model is also capable of boosting the performance of annotation projection with a large amount of parallel data. Our model develops cross-lingual word representations to go beyond traditional delexicalized direct transfer methods. Moreover, we propose a simple but effective word translation approach that brings in explicit lexical features from the target language in our direct transfer method.
* We develop different syntactic reordering models that can change the source treebanks in rich-resource languages, thus preventing the learning of a model whose word order is wrong for an unrelated target language. Our experimental results show substantial improvements for non-European languages.
* We develop transfer methods for sentiment analysis in different data availability scenarios. We show that we can leverage cross-lingual word embeddings to create accurate sentiment analysis systems in the absence of annotated data in the target language of interest.
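As a rough illustration of the alignment-density idea in the first contribution, the sketch below keeps projected dependency trees only for sentences whose target tokens are densely covered by word alignments. The threshold and the data format are assumptions made for the example, not the thesis's actual settings.

```python
def alignment_density(n_target_tokens: int,
                      alignments: list[tuple[int, int]]) -> float:
    """Fraction of target tokens covered by at least one alignment link."""
    aligned_targets = {t for _, t in alignments}
    return len(aligned_targets) / max(n_target_tokens, 1)

def filter_projected_trees(sentences, min_density: float = 0.8):
    """Keep projected trees only for densely aligned sentences.

    `sentences` is assumed to be an iterable of (target_tokens, alignments,
    projected_tree) triples, where alignments are (source_idx, target_idx)
    pairs produced by a word aligner.
    """
    for tokens, alignments, tree in sentences:
        if alignment_density(len(tokens), alignments) >= min_density:
            yield tokens, tree
```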
We believe that the novelties that we introduce in this thesis indicate the usefulness of transfer methods. This is appealing in practice, especially since we suggest eliminating the requirement of annotating new datasets for low-resource languages, which are expensive, if not impossible, to obtain.
Building an Interactive Multilingual Information Retrieval Tool
The growing reach of the Internet has led users to seek information expressed in languages other than their own, which gives rise to cross-lingual information retrieval (CLIR), an established major topic in information retrieval (IR). One approach to CLIR uses translation methods to translate queries against documents and indexes in other languages. Queries submitted to search engines suffer both from untranslatable query keys (words missing from the dictionary) and from translation ambiguity, that is, the difficulty of choosing between alternative translations. In this thesis we build and develop MORTAJA-IR-TOOL, a new tool for retrieving information, implemented in Java with JDK 1.6. The tool has several features: it develops a systematic multilingual foundation to be used as a basis for translation in CLIR, and it stems the words entered in the query as a stage preceding translation. Evaluated against a baseline translation that uses a machine-readable dictionary, the proposed query translation methodology improves performance by 8.96%. Stemming the query words improves the quality of matched data retrieved in the other language by 4.14%. Finally, combining the proposed stemming methodology with the translation process in MORTAJA-IR-TOOL yields an overall improvement in retrieval of 15.86%. Keywords: cross-lingual information retrieval, CLIR, information retrieval, IR, translation, stemming.
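As a schematic of the stem-then-translate query pipeline described above, the sketch below stems Arabic query terms before dictionary lookup. The ISRIStemmer is a real NLTK component, while the tiny dictionary and the fallback behavior are invented for the example and are not the tool's actual resources.

```python
from nltk.stem.isri import ISRIStemmer

# Hypothetical bilingual dictionary keyed by Arabic stems (illustrative only).
STEM_DICTIONARY = {
    "علم": ["science", "knowledge"],
    "حسب": ["compute", "calculate"],
}

stemmer = ISRIStemmer()

def translate_query(query: str) -> list[str]:
    """Stem each query term, then look up translations for the stem.

    Terms whose stems are missing from the dictionary are kept as-is,
    mirroring the 'untranslatable query keys' problem discussed above.
    """
    translated = []
    for term in query.split():
        stem = stemmer.stem(term)
        translated.extend(STEM_DICTIONARY.get(stem, [term]))
    return translated
```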
Lexical access in bilingual spoken word production: effects of lexical interference
When bilinguals decide to speak in one of their languages, parallel activation from both of their languages occurs. Selecting to speak in one language is therefore highly demanding, since bilinguals have to constantly deal with the co-activation of translation equivalents in their other language, as well as interference from semantically related lexical representations in each language. Switching from one language to another poses the extra demand of a control mechanism to deal with cross-language activation. The aim of this thesis was to investigate the effects of semantic and cross-language interference on bilinguals' lexical retrieval. In addition, the effects of language similarity and bilingual language profile on cognitive control abilities in language selection and switching were examined. Two groups of highly proficient bilinguals completed a detailed bilingual profile questionnaire: a group of Arabic-English bilinguals with unrelated languages and a group of German-English bilinguals with closely related languages. Language switching performance in both groups of bilinguals was investigated in a picture naming paradigm. Lexical selection demands were manipulated by integrating a semantic blocking paradigm, so that pictures were named in semantically heterogeneous versus homogeneous lexical selection contexts. Both groups of bilinguals were slower in the homogeneous than in the heterogeneous contexts. Importantly, a significant interaction of semantic blocking and language switching was observed, such that latencies were slowest in the homogeneous context when switching into L1, but only for the Arabic-English bilinguals. This finding suggests that switching into L1, as compared to L2, is demanding in terms of lexical selection. In addition, the performance of Arabic-English bilinguals when switching into L1 under high lexical selection demands correlated with their response inhibition/selection ability, as measured by a Flanker task. This suggests that bilinguals' ability to resolve lexical competition is related to their domain-general response selection ability. This correlation was not observed for the German-English bilinguals, suggesting that similar languages may interfere less with each other. However, the analysis of bilingual language profile highlighted a number of subtle differences between the two groups of bilinguals, which might have contributed to the difference in their results. Taken together, the findings from this thesis have theoretical consequences for accounts of bilingual lexical processing and for the relationship of bilingualism to non-linguistic cognition.
Understanding the structure and meaning of Finnish texts: From corpus creation to deep language modelling
Natural Language Processing (NLP) is a cross-disciplinary field combining elements of computer science, artificial intelligence, and linguistics, with the objective of developing means for computational analysis, understanding or generation of human language. The primary aim of this thesis is to advance natural language processing in Finnish by providing more resources and investigating the most effective machine learning based practices for their use. The thesis focuses on NLP topics related to understanding the structure and meaning of written language, mainly concentrating on structural analysis (syntactic parsing) as well as exploring the semantic equivalence of statements that vary in their surface realization (paraphrase modelling). While the new resources presented in the thesis are developed for Finnish, most of the methodological contributions are language-agnostic, and the accompanying papers demonstrate the application and evaluation of these methods across multiple languages.
The first set of contributions of this thesis revolves around the development of a state-of-the-art Finnish dependency parsing pipeline. Firstly, the necessary Finnish training data was converted to the Universal Dependencies scheme, integrating Finnish into this important treebank collection and establishing the foundations for Finnish UD parsing. Secondly, a novel word lemmatization method based on deep neural networks is introduced and assessed across a diverse set of over 50 languages. Finally, the overall dependency parsing pipeline is evaluated on a large number of languages, securing top ranks in two competitive shared tasks focused on multilingual dependency parsing. The overall outcome of this line of research is a parsing pipeline reaching state-of-the-art accuracy in Finnish dependency parsing, with the parsing numbers obtained with the latest pre-trained language models approaching, or at least nearing, human-level performance.
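To make the Universal Dependencies format concrete, the sketch below parses a CoNLL-U sentence with the `conllu` Python package. The example sentence and the package choice are illustrative assumptions, not components of the thesis pipeline.

```python
from conllu import parse

# A minimal CoNLL-U fragment (Finnish: "Koira haukkuu." = "The dog barks.").
DATA = """\
# text = Koira haukkuu .
1\tKoira\tkoira\tNOUN\t_\tCase=Nom|Number=Sing\t2\tnsubj\t_\t_
2\thaukkuu\thaukkua\tVERB\t_\tMood=Ind|Tense=Pres\t0\troot\t_\t_
3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_

"""

for sentence in parse(DATA):
    for token in sentence:
        # Each token carries the UD fields: surface form, lemma, UPOS tag,
        # head index, and dependency relation label.
        print(token["form"], token["lemma"], token["upos"],
              token["head"], token["deprel"])
```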
The achievement of large language models in dependency parsing, as well as in many other structured prediction tasks, raises the hope that large pre-trained language models genuinely comprehend language, rather than merely relying on simple surface cues. However, datasets designed to measure semantic comprehension in Finnish have been non-existent, or very scarce at best. To address this limitation, and to reflect the field's general shift of emphasis towards tasks more semantic in nature, the second part of the thesis turns to language understanding through an exploration of paraphrase modelling. The second contribution of the thesis is the creation of a novel, large-scale, manually annotated corpus of Finnish paraphrases. A unique aspect of this corpus is that its examples have been manually extracted from two related text documents, with the objective of obtaining non-trivial paraphrase pairs valuable for training and evaluating various language understanding models on paraphrasing. We show that manual paraphrase extraction can yield a corpus featuring pairs that are both notably longer and less lexically overlapping than those produced through automated candidate selection, the current prevailing practice in paraphrase corpus construction. Another distinctive feature of the corpus is that the paraphrases are identified and distributed within their document context, allowing for richer modelling and novel tasks to be defined.
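As an illustration of the lexical-overlap property mentioned above, the sketch below computes token-level Jaccard overlap for a paraphrase pair. The metric choice is an assumption made for illustration and is not necessarily the measure reported for the corpus.

```python
def jaccard_overlap(a: str, b: str) -> float:
    """Token-level Jaccard overlap between two sentences (lowercased)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# A low score suggests a non-trivial (lexically divergent) paraphrase pair.
print(jaccard_overlap(
    "The match was postponed because of heavy rain.",
    "Bad weather forced organizers to delay the game.",
))
```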
Efficient Machine Teaching Frameworks for Natural Language Processing
The past decade has seen tremendous growth in potential applications of language technologies in our daily lives due to increasing data, computational resources, and user interfaces. An important step to support emerging applications is the development of algorithms for processing the rich variety of human-generated text and extracting relevant information. Machine learning, especially deep learning, has seen increasing success on various text benchmarks. However, while standard benchmarks have static tasks with expensive human-labeled data, real-world applications are characterized by dynamic task specifications and limited resources for data labeling, thus making it challenging to transfer the success of supervised machine learning to the real world. To deploy language technologies at scale, it is crucial to develop alternative techniques for teaching machines beyond data labeling.
In this dissertation, we address this data labeling bottleneck by studying and presenting resource-efficient frameworks for teaching machine learning models to solve language tasks across diverse domains and languages. Our goal is to (i) support emerging real-world problems without the expensive requirement of large-scale manual data labeling; and (ii) assist humans in teaching machines via more flexible types of interaction. Towards this goal, we describe our collaborations with experts across domains (including public health, earth sciences, news, and e-commerce) to integrate weakly-supervised neural networks into operational systems, and we present efficient machine teaching frameworks that leverage flexible forms of declarative knowledge as supervision: coarse labels, large hierarchical taxonomies, seed words, bilingual word translations, and general labeling rules.
First, we present two neural network architectures that we designed to leverage weak supervision in the form of coarse labels and hierarchical taxonomies, respectively, and highlight their successful integration into operational systems. Our Hierarchical Sigmoid Attention Network (HSAN) learns to highlight important sentences of potentially long documents without sentence-level supervision by, instead, using coarse-grained supervision at the document level. HSAN improves over previous weakly supervised learning approaches across sentiment classification benchmarks and has been deployed to help inspections in health departments for the discovery of foodborne illness outbreaks. We also present TXtract, a neural network that extracts attributes for e-commerce products from thousands of diverse categories without using manually labeled data for each category, by instead considering category relationships in a hierarchical taxonomy. TXtract is a core component of Amazon’s AutoKnow, a system that collects knowledge facts for over 10K product categories, and serves such information to Amazon search and product detail pages.
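The sketch below is a schematic PyTorch rendering of the sigmoid-attention idea behind HSAN: per-sentence sigmoid gates are learned using only document-level labels, so that the gate values can later be read as sentence importance scores. The dimensions, the missing sentence encoder, and everything besides the HSAN idea itself are assumptions; this is not the published architecture.

```python
import torch
import torch.nn as nn

class SigmoidSentenceAttention(nn.Module):
    """Toy document classifier with per-sentence sigmoid gates.

    Trained only with coarse document-level labels; the learned gate value
    for each sentence can afterwards be inspected to highlight important
    sentences, without any sentence-level supervision.
    """

    def __init__(self, sent_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.gate = nn.Linear(sent_dim, 1)        # sigmoid attention score
        self.classifier = nn.Linear(sent_dim, n_classes)

    def forward(self, sent_embs: torch.Tensor):
        # sent_embs: (n_sentences, sent_dim), from any sentence encoder.
        weights = torch.sigmoid(self.gate(sent_embs))          # (n, 1)
        doc_emb = (weights * sent_embs).sum(dim=0) / weights.sum().clamp(min=1e-6)
        return self.classifier(doc_emb), weights.squeeze(-1)
```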
Second, we present architecture-agnostic machine teaching frameworks that we applied across domains, languages, and tasks. Our weakly-supervised co-training framework can train any type of text classifier using just a small number of class-indicative seed words and unlabeled data. In contrast to previous work that uses seed words to initialize embedding layers, our iterative seed word distillation (ISWD) method leverages the predictive power of seed words as supervision signals and shows strong performance improvements for aspect detection in reviews across domains and languages. We further demonstrate the cross-lingual transfer abilities of our co-training approach via cross-lingual teacher-student (CLTS), a method for training document classifiers across diverse languages using labeled documents only in English and a limited budget for bilingual translations. Not all classification tasks, however, can be effectively addressed using human supervision in the form of seed words. To capture a broader variety of tasks, we present ASTRA, a weakly-supervised self-training framework for training a classifier using more general labeling rules in addition to labeled and unlabeled data. As a complete set of accurate rules may be hard to obtain all in one shot, we further present an interactive framework that assists human annotators by automatically suggesting candidate labeling rules.
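To make the seed-word and labeling-rule style of supervision concrete, the sketch below weakly labels unlabeled documents with class-indicative seed words and then trains a student classifier on the weak labels. The seed lists, model choice, and training loop are illustrative assumptions in the spirit of this family of methods; they are not ISWD, CLTS, or ASTRA themselves.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical class-indicative seed words (assumptions for the example).
SEEDS = {"positive": {"great", "excellent", "love"},
         "negative": {"terrible", "awful", "hate"}}

def weak_label(doc: str) -> str | None:
    """Vote by seed-word counts; abstain (None) on ties or no matches."""
    tokens = set(doc.lower().split())
    scores = {c: len(tokens & seeds) for c, seeds in SEEDS.items()}
    best = max(scores, key=scores.get)
    ties = [c for c, s in scores.items() if s == scores[best]]
    return best if scores[best] > 0 and len(ties) == 1 else None

def train_student(unlabeled_docs: list[str]):
    """Train a classifier only on the documents the weak labeler covers."""
    pairs = [(d, weak_label(d)) for d in unlabeled_docs]
    covered = [(d, y) for d, y in pairs if y is not None]
    docs, labels = zip(*covered)
    vectorizer = TfidfVectorizer()
    student = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)
    return vectorizer, student
```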
In conclusion, this thesis demonstrates the benefits of teaching machines with types of interaction beyond the standard data labeling paradigm and shows promising results for new applications across domains and languages. To facilitate future research, we publish our code implementations and design new challenging benchmarks with various types of supervision. We believe that our proposed frameworks and experimental findings will influence research and will enable new applications of language technologies without the costly requirement of large manually labeled datasets.
Codes of Modernity: Infrastructures of Language and Chinese Scripts in an Age of Global Information Revolution
This dissertation explores the global history of Chinese script reforms, that is, the effort to phoneticize the Chinese language and/or simplify the writing system, from its inception in the 1890s to its demise in the 1980s. These reforms took place at the intersection of industrialization, colonialism, and new information technologies, such as alphabet-based telegraphy and breakthroughs in printing technologies. As these social and technological transformations put unprecedented pressure on knowledge management and the use of mental and clerical labor, many Chinese intellectuals claimed that learning Chinese characters consumed too much time and mental energy. Chinese script reforms, this dissertation argues, were an effort to increase speed in producing, transmitting, and accessing information, and thus meet the demands of the industrializing knowledge economy.
The industrializing knowledge economy that this dissertation explores was built on and sustained by a psychological understanding of the human subject as a knowledge machine, and it was part of a global moment in which the optimization of labor in knowledge production was a key concern for all modernizing economies. While Chinese intellectuals were inventing new signs of inscription, American behavioral psychologists, Soviet psycho-economists, and Central Asian and Ottoman technicians were all experimenting with new scripts in order to increase mental efficiency and productivity. This dissertation reveals the intimate connections between the Chinese and non-Chinese script engineering projects that were taking place synchronically across the world. The chapters of this work demonstrate for the first time, for instance, that the simplification of Chinese characters in the 1920s and 1930s was intimately connected to the discipline of behavioral psychology in the US. The first generation of Chinese psychologists employed the American psychologists' methods to track eye movements, count word frequencies, and statistically analyze the speed of reading, writing, and memorizing in order to simplify and "rationalize" the Chinese writing system in an effort to discipline and optimize mental labor. Other chapters explore the issue of mental and clerical optimization by finding the origins of the Chinese Latin Alphabet (CLA), the mother of pinyin, in hitherto unknown Eurasian connections. The CLA, as the pages of this work show, was the product of a transnational exchange that involved Ottoman and Transcaucasian typographers as well as Russian engineers and Chinese communists who sought efficiency in knowledge production through inventing new scripts. Situating the Chinese script reforms at this global intersection of psychology, economy, and linguistics, this dissertation examines the global connections and forces that turned the human subject into a knowledge worker who was cognitively managed through education, literacy, propaganda, and other measures of organizing information, all of which had the script at the center.
The search for efficiency and productivity, the core values of industrialism, lay at the heart of script reforms in China, but this search was inseparable from linguistic orders and political ambitions. Even if writing, transmitting, and learning a phonetic script could theoretically be easier and more efficient than the Chinese characters, the alphabet opened a veritable Pandora's box around the issue of selection: given the complex linguistic landscape in China, which speech was a phonetic script supposed to represent? There were myriad languages spoken throughout the empire and the subsequent nation-state, most of which were mutually incomprehensible. Mandarin as spoken in Beijing was different from that spoken in the south, and "topolects" or regional languages such as Min or Cantonese were to Mandarin what Romanian is to English. As a linguistic life-or-death issue, phonetic scripts stood for the infrastructural possibilities and limitations in the representation of speeches. Some scripts, such as Lao Naixuan's phonetic script composed of more than a hundred signs, were capable of representing multiple Mandarin and non-Mandarin speeches; whereas others, such as the Phonetic Symbols, which have only thirty-seven syllabic signs, represented only one speech, i.e., Mandarin. Using Mandarin-oriented scripts to transcribe non-Mandarin speeches was like writing English with fifteen letters, hence the acrimonious disputes that fill the pages of this dissertation. Succinctly put, it was at the level of script invention that Chinese and non-Chinese actors engineered different infrastructures not only for laboring minds but also for the social world of Chinese languages. The history of information technologies and knowledge economy in China was thus inseparable from the world of speech and language, as each script offered a new potential to reassemble the written matter and the speaking mind in a different way.
“Codes of Modernity” thus conceptualizes the script itself as an infrastructural medium. A script was not merely a passive carrier of information, but an existential artifact. Building on an expanding literature on infrastructures, it endorses the observation that infrastructures, technologies, and the social world around them work in a recursive loop. An infrastructure is not just the physical object that permits the flow of information, goods, ideas, and people, but a sociotechnical product that enables the experience of culture while imposing constraints on it at the same time. Like electricity grids, transportation systems, and sewage canals, the experience of scripts as infrastructures is the experience of thought worlds. After a long tradition of structuralism and poststructuralism that sought to understand the world through the semiotic prism of language, “Codes of Modernity” argues that it is time for an infrastructuralism that excavates the indispensable media that enable the production of language and thought.