Selecting ELL Textbooks: A Content Analysis of Language-Teaching Models
Many middle school teachers lack adequate criteria for critically selecting materials that represent a variety of L2 teaching models. This study analyzes the illustrated and written content of 33 ELL textbooks to determine the range of L2 teaching models represented. The researchers asked to what extent middle school ELL texts depict the frequency and variation of language-teaching models in illustrations and written text. Using content analysis, they measured the range of depiction of the four language-teaching models and concluded that 4 of the 33 textbooks showed considerable to extensive frequency and variation of L2 teaching models.
Language-based multimedia information retrieval
This paper describes various methods and approaches for language-based multimedia information retrieval, which have been developed in the POP-EYE and OLIVE projects and which will be developed further in the MUMIS project. All of these projects aim at supporting the automated indexing of video material by means of human language technologies. Thus, in contrast to image- or sound-based retrieval methods, where both the query language and the indexing methods build on non-linguistic data, these methods attempt to exploit advanced text retrieval technologies for the retrieval of non-textual material. While POP-EYE built on subtitles or captions as the prime language key for disclosing video fragments, OLIVE uses speech recognition to automatically derive transcriptions of the soundtracks, generating time-coded linguistic elements which then serve as the basis for text-based retrieval functionality.
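The pipeline described above, deriving time-coded text from video and then running text retrieval over it, can be illustrated with a minimal inverted index over transcript segments. This is a hypothetical sketch: the segment format and function names are assumptions, not the POP-EYE/OLIVE code.

```python
import re

def build_index(segments):
    """segments: list of (start_sec, end_sec, text) transcript fragments.
    Returns an inverted index mapping each word to its time-coded spans."""
    index = {}
    for start, end, text in segments:
        for word in re.findall(r"[a-z']+", text.lower()):
            index.setdefault(word, []).append((start, end))
    return index

def search(index, query):
    """Return time-coded fragments whose transcript contains every query word."""
    hits = None
    for word in re.findall(r"[a-z']+", query.lower()):
        spans = set(index.get(word, []))
        hits = spans if hits is None else hits & spans
    return sorted(hits or [])

# Toy transcript, as speech recognition might produce it with time codes.
segments = [
    (0.0, 4.2, "Welcome to the evening news"),
    (4.2, 9.8, "The election results are in"),
    (9.8, 15.0, "More election coverage after the break"),
]
index = build_index(segments)
print(search(index, "election"))  # the two fragments mentioning the election
```

The point is that once the sound track is reduced to time-coded text, a query returns playable video fragments rather than whole files.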
Artificial Sequences and Complexity Measures
In this paper we exploit concepts of information theory to address the fundamental problem of identifying and defining the most suitable tools to extract, in an automatic and agnostic way, information from a generic string of characters. In particular, we introduce a class of methods which make crucial use of data compression techniques in order to define a measure of remoteness and distance between pairs of sequences of characters (e.g. texts) based on their relative information content. We also discuss in detail how specific features of data compression techniques can be used to introduce the notions of the dictionary of a given sequence and of an Artificial Text, and we show how these new tools can be used for information extraction purposes. We point out the versatility and generality of our method, which applies to any kind of corpus of character strings independently of the type of coding behind it. As a case study we consider linguistically motivated problems and present results for automatic language recognition, authorship attribution and self-consistent classification.
Comment: Revised version, with major changes, of the previous "Data Compression Approach to Information Extraction and Classification" by A. Baronchelli and V. Loreto. 15 pages; 5 figures
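The compression-based distance idea can be sketched with an off-the-shelf compressor. The snippet below computes the normalized compression distance of Cilibrasi and Vitányi, which is close in spirit, though not identical, to the relative-entropy measure introduced in the paper; zlib stands in for the generic compressor.

```python
import zlib

def c(x: bytes) -> int:
    """Compressed size in bytes: a computable stand-in for information content."""
    return len(zlib.compress(x, 9))

def ncd(a: str, b: str) -> float:
    # Normalized compression distance: small when a and b share statistics,
    # because compressing them together costs little more than compressing
    # the larger one alone.
    A, B = a.encode(), b.encode()
    return (c(A + B) - min(c(A), c(B))) / max(c(A), c(B))

prose = ("She walked along the river in the early morning, watching the light "
         "change on the water and thinking about the long journey ahead of her.")
digits = "31415926535897932384626433832795028841971693993751058209749445923078"

print(round(ncd(prose, prose), 2))   # near 0: a string shares all its information with itself
print(round(ncd(prose, digits), 2))  # near 1: prose and digits share almost nothing
```

Because the measure needs nothing but a compressor, it applies to any corpus of character strings, which is exactly the generality the paper emphasizes.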
Language Trees and Zipping
In this letter we present a very general method to extract information from a generic string of characters, e.g. a text, a DNA sequence or a time series. Based on data-compression techniques, its key point is the computation of a suitable measure of the remoteness of two bodies of knowledge. We present the application of the method to linguistically motivated problems, featuring highly accurate results for language recognition, authorship attribution and language classification.
Comment: 5 pages, RevTeX4, 1 eps figure. In press in Phys. Rev. Lett. (January 2002)
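A language tree can be grown from such pairwise compression distances by agglomerative clustering. The sketch below uses toy corpora and greedy single-linkage merging; it illustrates the idea only and is not the authors' procedure (real experiments used long translated texts, not short repeated sentences).

```python
import zlib

def dist(a: bytes, b: bytes) -> float:
    """Compression-based distance: low when the two texts zip well together."""
    c = lambda x: len(zlib.compress(x, 9))
    return (c(a + b) - min(c(a), c(b))) / max(c(a), c(b))

texts = {  # toy corpora; labels and sentences are invented for illustration
    "en1": b"the cat sat on the mat and the dog slept by the door " * 20,
    "en2": b"a dog barked at the cat while the mat lay by the door " * 20,
    "it1": b"il gatto dormiva sul tappeto e il cane stava alla porta " * 20,
}

# Greedy single-linkage clustering: repeatedly merge the closest pair of
# clusters; the merge order traces out the tree.
clusters = {name: [name] for name in texts}
while len(clusters) > 1:
    x, y = min(
        ((p, q) for p in clusters for q in clusters if p < q),
        key=lambda pq: min(dist(texts[i], texts[j])
                           for i in clusters[pq[0]] for j in clusters[pq[1]]),
    )
    print("merge", x, "+", y)  # the two English samples should pair off first
    clusters[x + "+" + y] = clusters.pop(x) + clusters.pop(y)
```

Applied to many languages at once, the same merge order reproduces the language-family trees the letter reports.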
A reproducible approach with R markdown to automatic classification of medical certificates in French
In this paper, we report the ongoing developments of our first participation in the Cross-Language Evaluation Forum (CLEF) eHealth Task 1: "Multilingual Information Extraction - ICD10 coding" (Névéol et al., 2017). The task consists in labelling death certificates, in French, with international standard codes. In particular, we wanted to accomplish the goal of the "Replication track" of this Task, which promotes the sharing of tools and the dissemination of solid, reproducible results.
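As a point of comparison, a fully reproducible baseline for this kind of coding task could be plain dictionary lookup over the certificate text. The sketch below is hypothetical (toy lexicon, invented function names), not the system submitted to the task.

```python
# Hypothetical baseline: assign ICD-10 codes to French death-certificate
# lines by substring matching against a small cause-of-death lexicon.
lexicon = {  # toy lexicon; a real run would use the task's ICD-10 dictionaries
    "insuffisance cardiaque": "I50",
    "cancer du poumon": "C34",
    "diabete": "E14",
}

def code_line(line: str) -> list:
    """Return every ICD-10 code whose lexicon term occurs in the line."""
    text = line.lower()
    return [code for term, code in lexicon.items() if term in text]

print(code_line("Insuffisance cardiaque terminale"))  # ['I50']
```

Because such a baseline has no trained components, anyone can rerun it and obtain identical results, which is the spirit of the Replication track.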
Working with the CHILDES tools : transcription, coding and analysis
The Child Language Data Exchange System (CHILDES) consists of Codes for the Human Analysis of Transcripts (CHAT), Computerized Language Analysis (CLAN), and a database. There is also an online manual which includes the CHILDES bibliography, the database, and the CHAT conventions as well as the CLAN instructions. The first three parts of this paper concern the CHAT format of transcription, grammatical coding, and analyzing transcripts by using the CLAN programs. The fourth part shows examples of transcribed and coded data
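To give a flavor of the CHAT conventions: header tiers begin with @, main tiers with *SPK:, and dependent tiers such as %mor: carry the grammatical coding. The fragment and minimal parser below are illustrative only and cover a small subset of the format; the CLAN programs and the online manual define the full conventions.

```python
# A tiny CHAT-style transcript (illustrative fragment, not a real corpus file).
chat = """\
@Begin
@Participants:\tCHI Child , MOT Mother
*CHI:\tmore cookie .
%mor:\tqn|more n|cookie .
*MOT:\tyou want more cookies ?
@End
"""

def parse_chat(text):
    """Extract (speaker, utterance) pairs from the main (*) tiers,
    skipping header (@) and dependent (%) tiers."""
    utterances = []
    for line in text.splitlines():
        if line.startswith("*"):
            speaker, _, utt = line[1:].partition(":")
            utterances.append((speaker, utt.strip()))
    return utterances

for speaker, utt in parse_chat(chat):
    print(speaker, "->", utt)
```

A real analysis would of course hand such files to the CLAN programs rather than an ad hoc parser, but the tier structure above is what those programs operate on.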