The Visual Word Recognition and Orthography Depth in Second Language Acquisition
The study investigated whether the orthographic depth of the first language (L1) affects word recognition in second language (L2) learning. Fifteen native Chinese speakers and fifteen native Greek speakers were recruited to test their English naming ability. The results suggest that orthographic depth has an impact on L2 learning, but that word familiarity also determined naming performance to a certain extent. The data can be interpreted as supporting evidence for extending the Orthographic Depth Hypothesis (ODH) to L2 learning (the original ODH mainly refers to the orthographic depth effect on L1). However, the regularity of spelling-to-sound rules was a very weak predictor of orthographic depth variations. The data also provide empirical evidence that may refine the strong dual-route model of word recognition.
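The strong dual-route account the abstract refers to can be caricatured in a few lines of code: a lexical route that names known words by whole-word lookup, and a sublexical route that assembles a pronunciation from spelling-to-sound rules. Everything below (the word list, the rule table, the IPA strings) is invented for illustration and is not material from the study.

```python
LEXICON = {                      # lexical route: whole-word lookup
    "yacht": "/jɒt/",
    "pint":  "/paɪnt/",
    "mint":  "/mɪnt/",
}

GPC_RULES = {                    # sublexical route: grapheme-phoneme rules
    "y": "j", "a": "æ", "ch": "tʃ", "t": "t",
    "p": "p", "i": "ɪ", "n": "n", "m": "m",
}

def rule_route(word: str) -> str:
    """Apply spelling-to-sound rules left to right, longest grapheme first."""
    out, i = [], 0
    while i < len(word):
        for size in (2, 1):
            grapheme = word[i:i + size]
            if grapheme in GPC_RULES:
                out.append(GPC_RULES[grapheme])
                i += size
                break
        else:
            i += 1                # unknown letter: skip it
    return "/" + "".join(out) + "/"

def name_word(word: str) -> tuple[str, str]:
    """Lexical lookup wins for known words; rules handle novel strings."""
    if word in LEXICON:
        return ("lexical", LEXICON[word])
    return ("sublexical", rule_route(word))
```

On this toy model, an irregular word such as "pint" is named correctly only via the lexical route, while a nonword such as "nint" must be assembled by rule; orthographic depth then corresponds to how often the two routes disagree.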
Sources for the investigation of meaning in the Hebrew Bible
Many linguistic tools and methods are applied to Biblical texts to gain meaning from them. Such applications do not always take into account the perspective of the investigators, the presuppositions of the method being used, and the nature of the material to which it is applied. These factors all influence the meaning obtained from the text. It is therefore vital to consider the available data in Hebrew, the development and transmission of the Masoretic Text, and the nature of the language contained therein (Chapter 1). The main section of the thesis provides a critical survey of the application of various tools and methods. Chapter 2 provides a summary of the Comparative Method with its presuppositions, a brief overview of Barr’s criticisms of its application to Biblical texts, and guidelines for its use. Chapter 3 looks at the Versions, the influence of the language, theology, and motivation of the translators on their production, and the validity of using translations for obtaining meaning from Hebrew. Chapter 4 examines the presuppositions of Lexical Semantics and surveys some applications of this method to Classical Hebrew. Chapter 5 examines Text Linguistics and some applications of Tagmemics to Hebrew narratives, assessing its contribution to the investigation of meaning. The text is like a multi-faceted diamond which can be viewed from any number of angles, both synchronically and diachronically, reflecting potentially innumerable meanings. Each of the tools and methods surveyed here approaches the text from a different perspective, and when appropriately applied they can be combined to gain as much meaning as possible from the Hebrew Bible. The result is an illustration of an integrated approach to the investigation of meaning in Classical Hebrew. Nonetheless, it remains possible to construct a complete linguistic analysis of the text at every level and still not quite understand what it means.
Language Contact in Australia
This MA dissertation is concerned with a specific case of language contact in Australia. It investigates the effects of standardisation and the development of a writing system on Diyari of South Australia, a formerly solely oral language, using documentation sources collected over a period of 110 years. One of the main findings is that a significant change occurred in both language attitude and language structure as a result of missionary influence and the introduction of literacy. Furthermore, the absence in Diyari of morphological features recorded for related languages had puzzled the author of the last descriptive grammar of the language (Austin, 1981). As detailed in the dissertation, the results suggest that grammatical reduction due to language contact and standardisation was the cause of these phenomena.
The Lexiculture Papers: English Words and Culture
The Lexiculture Papers is a collection of scholarship on English words and culture. Each of the 62 chapters was originally authored by a student-scholar in the course, Language and Culture, at Wayne State University, between 2013 and 2020. Each chapter is a short social and historical description of a single English word in its cultural context, principally since 1800. Using a combination of historical linguistics, etymology, corpus linguistics, and discourse analysis, the papers analyze English-speaking social life through the lens of specific words
Computational approaches to semantic change (Volume 6)
Semantic change, the way the meanings of words shift over time, has preoccupied scholars since well before modern linguistics emerged in the late 19th and early 20th centuries and ushered in a new methodological turn in the study of language change. Compared to changes in sound and grammar, semantic change remains the least understood. The study of semantic change has nevertheless progressed steadily, accumulating over more than a century a vast store of knowledge encompassing many languages and language families. Historical linguists also realized the potential of computers as research tools early on, presenting papers at the very first international conferences in computational linguistics in the 1960s. Such computational studies, however, long tended to be small-scale, method-oriented, and qualitative. Recent years have witnessed a sea change in this regard: big-data empirical quantitative investigations are now coming to the forefront, enabled by enormous advances in storage capacity and processing power. Diachronic corpora have grown beyond imagination, defying exploration by traditional manual qualitative methods, and language technology has become increasingly data-driven and semantics-oriented. These developments present a golden opportunity for the empirical study of semantic change over both long and short time spans.
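One common quantitative measure behind such big-data studies is the cosine distance between a word's vectors trained on corpora from two different periods: the larger the distance, the more the word's distributional meaning has shifted. The sketch below uses invented 3-dimensional vectors purely for illustration; real studies train embeddings on period-specific corpora.

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity; 0 means identical direction, 2 means opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Invented 3-d "embeddings" of two words in 1850s vs 1990s text.
# Historically, "broadcast" shifted from sowing seed to transmitting signals.
broadcast_1850 = [0.9, 0.1, 0.0]    # near farming terms
broadcast_1990 = [0.1, 0.8, 0.5]    # near media terms
stone_1850     = [0.2, 0.1, 0.9]
stone_1990     = [0.25, 0.1, 0.85]  # semantically stable

drift_broadcast = cosine_distance(broadcast_1850, broadcast_1990)
drift_stone     = cosine_distance(stone_1850, stone_1990)
assert drift_broadcast > drift_stone   # larger distance = more change
```

In practice the vectors come from models trained separately per time slice and aligned into a common space before distances are compared.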
Cognitive and linguistic predictors of literacy skills in the Greek language: the manifestation of reading and spelling difficulties in a regular orthography
The aim of this thesis was three-fold: firstly, to examine the development of reading and spelling abilities in the Greek language; secondly, to identify the cognitive predictors of reading and spelling skills; and finally, to establish how developmental dyslexia is manifested in the regular Greek orthography.

An extensive battery of cognitive, linguistic, and literacy tasks was administered to 132 children: 66 Grade-2 and 66 Grade-4 Greek-speaking children attending four different schools in Athens, Greece. The battery included tests of reading, spelling, and mathematical attainment; a nonword reading task; various phonological awareness and other phonological processing tests; a non-verbal intelligence test; and various syntactic awareness tasks. Evidence on the manifestation of developmental dyslexia in Greek was based on a chronological-age and a reading-level matched-pairs comparison between poor and average readers.

Despite a large number of difficult polysyllabic word stimuli, reading accuracy was at ceiling for most subjects. Reading speed proved a more effective measure of individual differences. A high degree of accuracy was also observed on many phonological awareness tests. Rapid naming, phonological awareness, and speech rate proved the most important predictors of reading ability in the regular Greek language. The predictive value of many variables and tests, however, appeared to differ between English and Greek. Phonological awareness, the most powerful and stable predictor in English, appeared to be a reliable predictor of reading ability only at the initial stages of literacy development (Grade 2). The most significant predictor at Grade 4 was rapid naming. Speech rate consistently predicted reading skill in all our analyses. Syntactic awareness did not prove a reliable predictor; its contribution was significant only for spelling ability at Grade 4. The matched-pairs comparisons supported the above results.

Results are discussed in relation to the existing differences in the orthographic structure of the English and Greek languages. It is suggested that the examination of such linguistic differences is important from both a theoretical and a clinical point of view.
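The predictor analyses described in such studies are correlational at their core; a minimal sketch of that kind of computation is given below. The scores are synthetic numbers for six hypothetical children, invented for illustration only, and are not the thesis data (which came from the 132 Greek-speaking participants).

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic scores for six hypothetical children.
rapid_naming_time = [42, 38, 55, 60, 35, 50]   # seconds; slower = worse
reading_speed     = [78, 85, 60, 52, 90, 66]   # words per minute

# A strong negative correlation would mark rapid naming as a candidate
# predictor of reading speed, the pattern the thesis reports for Grade 4.
r = pearson_r(rapid_naming_time, reading_speed)
```

A full analysis would enter several such predictors into a regression and compare their unique contributions, but the underlying quantity is this coefficient.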
An Analysis of Three Approaches to Grammar with Recommendations for a Multiphasal Grammar
Today the teacher is confronted with three approaches to the teaching of grammar, all of which contain useful concepts; it is the major contention of this study that the best of each of these approaches may be combined into the desired choice. This paper proposes a multiphasal grammar and argues that such a grammar seems to be the ultimate direction for the teaching of the English language. This multiphasal grammar will combine the best of the three approaches: the most useful and logical elements of traditional nomenclature; the structuralists' emphasis on the sound of language, based on the three mechanisms of intonation (pitch, stress, and juncture), as well as their attitude toward uniform correctness; and the transformational approach to syntax. The author believes that a multiphasal grammar will be more teachable, more efficient, and better received in the public school than the basically traditional grammar being taught today. For decades, the word grammar has had a distasteful connotation. Teachers as well as students find the study of grammar boring and generally unproductive, through no fault of the subject matter; rather, the fault lies in antiquated and basically inadequate techniques and approaches.
Text Preprocessing in Programmable Logic
There is a tremendous amount of information being generated and stored every year, and its growth rate is exponential. From 2008 to 2009, the growth rate was estimated to be 62%. In 2010, the amount of generated information was expected to grow by 50% to 1.2 zettabytes, and by 2020 the total was expected to reach 35 zettabytes. By preprocessing text in programmable logic, high data processing rates can be achieved with greater power efficiency than with an equivalent software solution, leading to a smaller carbon footprint.
This thesis presents an overview of the fields of Information Retrieval and Natural Language Processing, and the design and implementation of four text preprocessing modules in programmable logic: UTF-8 decoding, stop-word filtering, and stemming with both Lovins' and Porter's techniques. These extensively pipelined circuits were implemented in a high-performance FPGA and found to sustain maximum operational frequencies of 704 MHz, data throughputs in excess of 5 Gbps, and efficiencies in the range of 4.332–6.765 mW/Gbps and 34.66–108.2 µW/MHz. These circuits can be incorporated into larger systems, such as document classifiers and information extraction engines.
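A software analogue of the pipeline's stages (UTF-8 decoding, stop-word filtering, suffix-stripping stemming) can be sketched in a few lines. The stop-word list and suffix rules below are abbreviated stand-ins, not the Lovins or Porter rule sets, and this is of course a simplified illustration rather than the FPGA implementation itself.

```python
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "be"}

# A few Porter-style suffix rules, tried longest-first; a real stemmer
# has many more rules plus conditions on the remaining stem.
SUFFIXES = [("sses", "ss"), ("ies", "i"), ("ing", ""), ("ed", ""), ("s", "")]

def stem(token: str) -> str:
    """Strip the first matching suffix, keeping a stem of at least 3 letters."""
    for suffix, replacement in SUFFIXES:
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: len(token) - len(suffix)] + replacement
    return token

def preprocess(raw: bytes) -> list[str]:
    """Decode UTF-8, lowercase, drop stop words, stem what remains."""
    text = raw.decode("utf-8")                          # UTF-8 decoding stage
    tokens = text.lower().split()
    kept = [t for t in tokens if t not in STOP_WORDS]   # stop-word filter
    return [stem(t) for t in kept]                      # stemming stage
```

In software the stages run sequentially per document; the thesis's contribution is mapping each stage to a deeply pipelined circuit so that all stages process a character stream concurrently at line rate.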