
    Addressing Informality in Processing Chinese Microtext

    Ph.D. (Doctor of Philosophy)

    Psychological Capital and Teacher Well-Being: The Mediation Role of Coping with Stress and Work Task Motivation

    The number of teacher well-being studies has been increasing internationally since the Millennium; however, this remarkable topic is not well investigated in every country. The present dissertation has a double function: one intention is to explore the mediating role of work task motivation and coping with stress in the relationship between psychological capital and teacher well-being in the Ethiopian higher education context; the other is to develop an integrated teacher well-being model based on contemporary theories of positive psychology, coping appraisal, and self-determination of motivation. Using an associational, quantitative cross-sectional design, 3,517 university instructors participated in the research. Stratified random sampling was used to recruit respondents, and their answers were statistically processed with IBM SPSS 26 and AMOS 26. First, we examined the cross-cultural validation of the psychological capital questionnaire (PCQ-12; Luthans et al., 2007), the work task motivation scale for teachers (WTMST; Fernet et al., 2008), the coping with stress questionnaire (CWS-Q; Rabenu et al., 2016), and the teacher well-being scale (TWBS; Collie et al., 2015) using single and multi-modal confirmatory factor analysis. Secondly, we confirmed and explored the dimensions of the mediating role of work task motivation in the relationship between psychological capital and teacher well-being. Thirdly, based on the previous results, we added the coping with stress construct to assess its mediating role in the same relationship. Fourthly, we merged and examined the joint mediating role of work task motivation and coping with stress in the relationship between psychological capital and teacher well-being in the Ethiopian higher education cultural setting.
The measures developed and tested contribute to Ethiopian higher education by providing a broader range of tools for university decision-makers. Based on the research findings, it becomes possible to enhance higher education instructors' well-being within the framework of positive psychological interventions, and to develop their coping and motivational strategies. The verified teacher well-being model is suitable for adoption and use worldwide.
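The mediation analyses described above rest on product-of-coefficients logic: the indirect effect of a predictor on an outcome through a mediator is the product of the predictor-to-mediator path and the mediator-to-outcome path. As a hedged illustration only (synthetic data and variable names; the dissertation fits full latent-variable models in AMOS, not this observed-variable analogue), a minimal sketch:

```python
import numpy as np

def ols_slope(x, y):
    """Return the OLS slope of y regressed on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate of the mediated effect x -> m -> y."""
    a = ols_slope(x, m)  # path a: predictor -> mediator
    # path b: mediator -> outcome, controlling for the predictor
    X = np.column_stack([np.ones_like(x), x, m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = beta[2]
    return a * b

# Synthetic data with a known indirect effect of 0.6 * 0.5 = 0.3.
rng = np.random.default_rng(0)
psycap = rng.normal(size=500)                      # psychological capital
motivation = 0.6 * psycap + rng.normal(size=500)   # mediator
wellbeing = 0.5 * motivation + 0.2 * psycap + rng.normal(size=500)

print(round(indirect_effect(psycap, motivation, wellbeing), 2))
```

The estimate should land near the true value of 0.3, up to sampling noise; significance testing of the indirect path (e.g. bootstrapping) is a separate step not shown here.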

    Rural secondary school teachers' experiences of job satisfaction and their expectations of support to develop their professional competencies as curriculum workers

    Since the first National Curriculum Statement matriculation results in 2008, there has been an outcry that rural secondary schools in KwaZulu-Natal are lagging behind urban and former Model C secondary schools in terms of pass rate. Various contributory factors impact learners' poor performance in rural schools. This study was conducted specifically in rural secondary schools of KwaZulu-Natal, because little research has been conducted in rural schools, particularly with regard to teacher job satisfaction and professional development. There is a belief that satisfied teachers produce good performance in their schools. Furthermore, it is also believed that satisfied and adequately developed teachers are key to successful implementation of the grades 10-12 National Curriculum Statement. The study was therefore conducted to investigate rural secondary school teachers' experiences of job satisfaction and their expectations of support to develop their competencies as curriculum workers. The research problem was investigated through mixed methods research, used to ensure that reliability and validity were addressed. A concurrent mixed methods strategy was employed, in which quantitative and qualitative data are collected during the same phase. Data were collected from rural secondary schools in the Umzinyathi, Ilembe and Empangeni districts of KwaZulu-Natal. Fifty rural secondary schools participated in the study: four hundred teachers completed survey questionnaires, eighteen participated in individual interviews, and nine schools were involved in observations and interviews. Research findings show that poverty was one of the major contributory factors to the poor performance of rural secondary schools.
Poverty and the lack of adequate professional development programmes in rural secondary schools have a negative impact on teachers' job satisfaction. Learners' poor command of English contributed to their poor academic performance. Lack of support services, bad road conditions and long distances travelled by both learners and teachers contributed to teachers' job dissatisfaction and learners' poor performance. Rural secondary school learners were demotivated about learning, since they lacked role models in their communities. Learners were also undisciplined: they bunked classes, carried weapons to school, helped criminals to steal and vandalize school property, and smoked dagga inside the school premises. Moreover, the findings indicate that rural secondary school teachers were not involved in school decision-making processes; school management teams were the only structures making school decisions. Growth opportunities were not fairly provided to teachers by their principals. School governing body chairpersons and principals abused the teacher promotion process: they were biased, promoted only their friends and relatives, and were sometimes bribed by candidates. These findings contributed to teachers' job dissatisfaction. Further findings indicate that some teachers were teaching subjects for which they were not qualified, and some heads of department were supervising subject streams outside their specialization because the school's post-provisioning norm (PPN) was small. Rural secondary school principals possessed inadequate expertise in the grades 10-12 National Curriculum Statement. The Integrated Quality Management System was unable to develop teachers for effective implementation of the grades 10-12 National Curriculum Statement, since it was not implemented properly in rural secondary schools.
Clusters were an effective strategy to develop teachers in rural schools, although the geographical isolation of schools was their main challenge. The study recommends that rural secondary schools be fully supported by the KwaZulu-Natal Department of Education. All roads to schools must be repaired in time. Decent teacher accommodation must be built on school premises, with security guards to look after teachers' safety and property when they are away. Recreation venues/centres must be established in rural areas to relieve teachers' stress and boredom. The KwaZulu-Natal Department of Education must ensure that all schools have libraries, laboratories and computer classrooms. The Department of Education must also fully recognize postgraduate qualifications such as honours, master's and doctoral degrees to retain highly qualified teachers in rural secondary schools. Teachers must be promoted on merit rather than on friendship or family relationships.

    Enhancing knowledge acquisition systems with user generated and crowdsourced resources

    This thesis is about leveraging knowledge acquisition systems with collaborative data and crowdsourced work from the internet. We propose two strategies and apply them to building effective entity linking and question answering (QA) systems. The first strategy is to integrate an information extraction system with online collaborative knowledge bases, such as Wikipedia and Freebase. We construct a Cross-Lingual Entity Linking (CLEL) system to connect Chinese entities, such as people and locations, with corresponding English pages in Wikipedia. The main focus is to break the language barrier between Chinese entities and the English KB, and to resolve the synonymy and polysemy of Chinese entities. To address those problems, we create a cross-lingual taxonomy and a Chinese knowledge base (KB). We investigate two methods of connecting the query representation with the KB representation. Based on our CLEL system's participation in the TAC KBP 2011 evaluation, we finally propose a simple and effective generative model, which achieved much better performance. The second strategy is to create annotation for QA systems with the help of crowdsourcing, i.e., distributing a task via the internet and recruiting many people to complete it simultaneously. Various annotated data are required to train the data-driven statistical machine learning algorithms for the underlying components in our QA system. This thesis demonstrates how to convert the annotation task into crowdsourcing micro-tasks, investigates different statistical methods for enhancing the quality of crowdsourced annotation, and finally uses the enhanced annotation to train learning-to-rank models for passage ranking in QA.
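One common baseline among the statistical methods for improving crowdsourced annotation quality is majority voting across workers. A minimal sketch of that idea (the item names and labels below are invented for illustration; the thesis investigates more sophisticated aggregation methods):

```python
from collections import Counter

def aggregate_majority(annotations):
    """Aggregate crowdsourced labels per item by majority vote.

    annotations: dict mapping item id -> list of worker labels.
    Ties break toward the label seen first among the most common.
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Three workers label two candidate answer passages as relevant/irrelevant.
votes = {
    "passage-1": ["relevant", "relevant", "irrelevant"],
    "passage-2": ["irrelevant", "irrelevant", "relevant"],
}
print(aggregate_majority(votes))
# {'passage-1': 'relevant', 'passage-2': 'irrelevant'}
```

More refined schemes (e.g. weighting workers by estimated reliability, as in Dawid-Skene-style models) build on the same per-item aggregation structure.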

    Confidence Measures for Automatic and Interactive Speech Recognition

    [EN] This thesis contributes to the field of Automatic Speech Recognition (ASR), and particularly to Interactive Speech Transcription (IST) and Confidence Measures (CM) for ASR. The main goals of this thesis can be summarised as follows: 1. To design IST methods and tools to tackle the problem of improving automatically generated transcripts. 2. To assess the designed IST methods and tools on real-life transcription tasks in large educational repositories of video lectures. 3. To improve the reliability of IST by improving the underlying CM. ASR is a crucial task in a broad range of important applications which could not be accomplished by means of manual transcription. ASR can provide cost-effective transcripts in scenarios of increasing social impact such as Massive Open Online Courses (MOOCs), for which the availability of accurate enough transcripts is crucial even if they are not flawless. Transcripts enable searchability, summarisation, recommendation, and translation; they make the contents accessible to non-native speakers and users with impairments, etc. Their usefulness is such that students improve their academic performance when learning from subtitled video lectures even when the transcript is not perfect. Unfortunately, current ASR technology is still far from the necessary accuracy. The imperfect transcripts resulting from ASR can be manually supervised and corrected, but the effort can be even higher than for manual transcription. To alleviate this issue, a novel Interactive Transcription of Speech (IST) system is presented in this thesis. This IST system succeeded in reducing the effort when a small number of errors can be allowed, and also in improving the underlying ASR models in a cost-effective way. To adapt the proposed framework to real-life MOOCs, other intelligent interaction methods involving limited user effort were also investigated. In addition, a new method was introduced that benefits from user interactions to automatically improve the unsupervised parts (Constrained Search for ASR). The conducted research was deployed in a web-based IST platform, with which it was possible to produce a massive number of semi-supervised lectures from two well-known repositories, videoLectures.net and poliMedia. Finally, the performance of the IST and ASR systems can be further increased by improving the computation of the Confidence Measure (CM) of transcribed words. To this end, two contributions were developed: a new Logistic Regression (LR) model, and speaker adaptation of the CM for cases in which it is possible, such as MOOCs.
Sánchez Cortina, I. (2016). Confidence Measures for Automatic and Interactive Speech Recognition [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61473
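A word-level confidence measure of the kind described can be phrased as logistic regression over per-word features, mapping recogniser scores to an estimated probability that the word was transcribed correctly. The sketch below is a hedged illustration with invented toy features (an acoustic score and a language-model score), not the thesis's actual LR model or feature set:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit logistic regression by gradient descent: features -> P(correct)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid prediction
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)                 # gradient w.r.t. bias
    return w, b

# Toy features per recognised word: [acoustic score, language-model score].
X = np.array([[0.9, 0.8], [0.2, 0.3], [0.85, 0.7], [0.1, 0.4]])
y = np.array([1.0, 0.0, 1.0, 0.0])  # 1 = word was transcribed correctly
w, b = train_logreg(X, y)
conf = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # per-word confidence scores
print(conf.round(2))
```

Words resembling the correctly transcribed training examples receive higher confidence, which is what lets an interactive system route only low-confidence words to the human supervisor.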

    Lexical selection for machine translation

    Current research in Natural Language Processing (NLP) tends to exploit corpus resources as a way of overcoming the problem of knowledge acquisition. Statistical analysis of corpora can reveal trends and probabilities of occurrence, which have proved to be helpful in various ways. Machine Translation (MT) is no exception to this trend. Many MT researchers have attempted to extract knowledge from parallel bilingual corpora. The MT problem is generally decomposed into two sub-problems: lexical selection and reordering of the selected words. This research addresses the problem of lexical selection of open-class lexical items in the framework of MT. The work reported in this thesis investigates different methodologies to handle this problem, using a corpus-based approach. The current framework can be applied to any language pair, but we focus on Arabic and English. This is because Arabic words are hugely ambiguous and thus pose a challenge for the current task of lexical selection. We use a challenging Arabic-English parallel corpus, containing many long passages with no punctuation marks to denote sentence boundaries. This points to the robustness of the adopted approach. In our attempt to extract lexical equivalents from the parallel corpus we focus on the co-occurrence relations between words. The current framework adopts a lexicon-free approach towards the selection of lexical equivalents. This has the double advantage of investigating the effectiveness of different techniques without being distracted by the properties of the lexicon and at the same time saving much time and effort, since constructing a lexicon is time-consuming and labour-intensive. Thus, we use as little, if any, hand-coded information as possible. The accuracy score could be improved by adding hand-coded information. The point of the work reported here is to see how well one can do without any such manual intervention. 
With this goal in mind, we carry out a number of preprocessing steps in our framework. First, we build a lexicon-free Part-of-Speech (POS) tagger for Arabic. This POS tagger uses a combination of rule-based, transformation-based learning (TBL) and probabilistic techniques. Similarly, we use a lexicon-free POS tagger for English. We use the two POS taggers to tag the bi-texts. Second, we develop lexicon-free shallow parsers for Arabic and English. The two parsers are then used to label the parallel corpus with dependency relations (DRs) for some critical constructions. Third, we develop stemmers for Arabic and English, adopting the same knowledge-free approach. These preprocessing steps pave the way for the main system (or proposer), whose task is to extract translational equivalents from the parallel corpus. The framework starts by automatically extracting a bilingual lexicon using unsupervised statistical techniques which exploit the notion of co-occurrence patterns in the parallel corpus. We then choose the target word that has the highest frequency of occurrence from among a number of translational candidates in the extracted lexicon, in order to aid the selection of the contextually correct translational equivalent. These experiments are carried out on either raw or POS-tagged texts. Having labelled the bi-texts with DRs, we use them to extract a number of translation seeds to start a number of bootstrapping techniques to improve the proposer. These seeds are used as anchor points to resegment the parallel corpus and start the selection process once again. The final F-score for the selection process is 0.701. We have also written an algorithm for detecting ambiguous words in a translation lexicon, obtaining a precision score of 0.89.
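The co-occurrence idea at the heart of the lexicon extraction step can be made concrete with a toy example. This is only a hedged sketch of the simplest frequency-based variant (the tiny "parallel corpus" below is invented, with the Arabic side transliterated for readability; the thesis uses far larger data and more refined statistics):

```python
from collections import defaultdict

def cooccurrence_lexicon(parallel_corpus):
    """Count source/target word co-occurrences across aligned segment pairs,
    then pick the most frequent target word for each source word."""
    counts = defaultdict(lambda: defaultdict(int))
    for src_seg, tgt_seg in parallel_corpus:
        for s in src_seg.split():
            for t in tgt_seg.split():
                counts[s][t] += 1
    # For each source word, choose the target word it co-occurred with most.
    return {s: max(tgts, key=tgts.get) for s, tgts in counts.items()}

# Hypothetical aligned segment pairs (transliterated Arabic -> English).
corpus = [
    ("kitab kabir", "big book"),
    ("kitab jadid", "new book"),
    ("bayt kabir", "big house"),
]
lexicon = cooccurrence_lexicon(corpus)
print(lexicon["kitab"])  # 'book'
print(lexicon["kabir"])  # 'big'
```

Because "kitab" co-occurs with "book" in two segment pairs but with "big" and "new" only once each, the highest-frequency candidate is the correct equivalent; real corpora need association measures that correct for overall word frequency.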

    Answer Re-ranking with bilingual LDA and social QA forum corpus

    One of the most important tasks for AI is to find valuable information on the Web. In this research, we develop a question answering system that retrieves answers based on a topic model, bilingual latent Dirichlet allocation (Bi-LDA), and knowledge from a social question answering (SQA) forum such as Yahoo! Answers. Treating question and answer pairs from an SQA forum as a bilingual corpus, a topic shared across question and answer documents is assigned to each term, so that the answer re-ranking system can infer the correlation of terms between questions and answers. A query expansion approach based on the topic model obtains a 9% higher top-150 mean reciprocal rank (MRR@150) and a 16% better geometric mean rank compared to a simple matching system using Okapi BM25. In addition, this thesis compares performance in several experimental settings to clarify the factors behind the results.
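The MRR@150 metric reported above rewards systems that place a correct answer near the top of the ranking. A minimal sketch of mean reciprocal rank at a cutoff (the relevance judgements below are invented, not the thesis's data):

```python
def mrr_at_k(ranked_results, k=150):
    """Mean reciprocal rank at cutoff k.

    ranked_results: list of queries; each query is a list of 0/1 relevance
    labels in ranked order (1 = correct answer at that rank).
    """
    total = 0.0
    for labels in ranked_results:
        # Reciprocal rank of the first correct answer within the top k, else 0.
        for rank, rel in enumerate(labels[:k], start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Three questions: correct answer at rank 1, rank 3, and absent from the top k.
runs = [[1, 0, 0], [0, 0, 1], [0, 0, 0]]
print(round(mrr_at_k(runs), 3))  # 0.444
```

Here the score is (1 + 1/3 + 0) / 3, so a re-ranker that moves correct answers even a few positions higher raises MRR noticeably.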

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model enables semi-automatic model checking of different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples from the original paper and confirm the isolation guarantees of the combination of the well-known two-phase locking and two-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
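Crooks et al.'s client-centric model judges isolation by whether each transaction's reads can be served from states in the sequence of committed database states, rather than by low-level histories. As a hedged Python sketch of that core idea only (not the paper's TLA+ formalization, and simplified to checking a single consistent read state per transaction):

```python
def has_consistent_read_state(states, reads):
    """Check whether some committed state serves all of a transaction's reads.

    states: list of dicts (key -> value), the sequence of committed states.
    reads:  dict of key -> value observed by the transaction.
    Returns True if a single state matches every read, the core requirement
    of snapshot-style guarantees in the state-based, client-centric reading.
    """
    return any(all(state.get(k) == v for k, v in reads.items())
               for state in states)

# State sequence after three committed transactions over keys x and y.
states = [{"x": 0, "y": 0}, {"x": 1, "y": 0}, {"x": 1, "y": 1}]

print(has_consistent_read_state(states, {"x": 1, "y": 0}))  # True
print(has_consistent_read_state(states, {"x": 0, "y": 1}))  # False
```

The second transaction's reads mix values from different states (a fractured read), which no single committed state can serve; a model checker such as TLC explores all interleavings to find such violations automatically.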