Examining Scientific Writing Styles from the Perspective of Linguistic Complexity
Publishing articles in high-impact English journals is difficult for scholars
around the world, especially for non-native English-speaking scholars (NNESs),
most of whom struggle with proficiency in English. In order to uncover the
differences in English scientific writing between native English-speaking
scholars (NESs) and NNESs, we collected a large-scale data set containing more
than 150,000 full-text articles published in PLoS between 2006 and 2015. We
divided these articles into three groups according to the ethnic backgrounds of
the first and corresponding authors, obtained by Ethnea, and examined the
scientific writing styles in English from a two-fold perspective of linguistic
complexity: (1) syntactic complexity, including measurements of sentence length
and sentence complexity; and (2) lexical complexity, including measurements of
lexical diversity, lexical density, and lexical sophistication. The
observations suggest marginal differences between the groups in both syntactic and
lexical complexity.
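For illustration, here is a minimal Python sketch of how the two complexity dimensions named above could be operationalized: mean sentence length as a syntactic proxy, and type-token ratio plus lexical density on the lexical side. The whitespace/regex tokenizer, the toy function-word list, and the choice of type-token ratio are assumptions made for the sketch, not the study's actual operationalizations, which typically rely on POS tagging and more robust diversity indices.

```python
# Sketch only: simple proxies for syntactic and lexical complexity.
import re

# Toy function-word list (illustrative; a real analysis would use POS tags).
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "and", "or", "to",
                  "is", "are", "was", "were", "that", "this", "with", "for"}

def syntactic_complexity(text: str) -> float:
    """Mean sentence length in words, a basic syntactic-complexity proxy."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / max(len(sentences), 1)

def lexical_diversity(text: str) -> float:
    """Type-token ratio: distinct word forms divided by total tokens."""
    tokens = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return len(set(tokens)) / max(len(tokens), 1)

def lexical_density(text: str) -> float:
    """Share of tokens that are content words (here: not in the toy list)."""
    tokens = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / max(len(tokens), 1)

if __name__ == "__main__":
    sample = "We collected a large corpus. The corpus contains full-text articles."
    print(syntactic_complexity(sample), lexical_diversity(sample), lexical_density(sample))
```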
Measuring Syntactic Complexity in Spoken and Written Learner Language: Comparing the Incomparable?
Spoken and written language are two modes of language. When learners aim at higher skill levels, the expected outcome of successful second language learning is usually to become a fluent speaker and writer who can produce accurate and complex language in the target language. There is an axiomatic difference between speech and writing, but together they form the essential parts of learners’ L2 skills. The two modes have their own characteristics, and there are differences between native and nonnative language use. For instance, hesitations and pauses are not visible in the end result of the writing process, but they are characteristic of nonnative spoken language use. The present study is based on the analysis of L2 English spoken and written productions of 18 L1 Finnish learners with focus on syntactic complexity. As earlier spoken language segmentation units mostly come from fluency studies, we conducted an experiment with a new unit, the U-unit, and examined how using this unit as the basis of spoken language segmentation affects the results. According to the analysis, written language was more complex than spoken language. However, the difference in the level of complexity was greatest when the traditional units, T-units and AS-units, were used in segmenting the data. Using the U-unit revealed that spoken language may, in fact, be closer to written language in its syntactic complexity than earlier studies had suggested. Therefore, further research is needed to discover whether the differences in spoken and written learner language are primarily due to the nature of these modes or, rather, to the units and measures used in the analysis
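By way of illustration, the sketch below shows how per-unit complexity could be compared across modes once a transcript has been segmented. The unit boundaries themselves (T-units, AS-units, or the study's U-units) are assumed to come from manual annotation and are not computed here; the example data and measure names are hypothetical.

```python
# Sketch: complexity measures computed over pre-segmented production units.
from statistics import mean

def mean_length_of_unit(units: list[str]) -> float:
    """Average number of words per segmentation unit."""
    return mean(len(u.split()) for u in units)

def clauses_per_unit(clause_counts: list[int]) -> float:
    """Average number of clauses per unit, given per-unit clause counts."""
    return mean(clause_counts)

# Hypothetical pre-segmented data for one learner, spoken vs. written.
spoken_units = ["I went to the shop", "and I bought some bread because I was hungry"]
written_units = ["When I arrived at the shop, I bought some bread because I was hungry."]
print(mean_length_of_unit(spoken_units), mean_length_of_unit(written_units))
```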
FRACTAL DIMENSIONS OF DIFFERENT WRITING SYSTEMS AND THEIR POSSIBLE IMPLICATIONS
The fractal dimension is an indicator of structural complexity. It represents the ratio of the change in the detail of a pattern to the change in the scale used to measure it. In this study, fractal analysis was applied to different writing systems. Each script investigated was treated as a distinct image, and its fractal dimension was estimated using the box-counting method. First, the presence of characteristic fractal dimensions for Greek, Latin and Cyrillic was established using different types of fonts, in order to show the validity of such an investigation. Then, possible relationships between different writing systems were sought by comparing their fractal dimensions. Some scripts with known close relations indeed exhibited relatively close fractal natures within the range of mesh sizes used in the calculations. Latin and Cyrillic, both known to be derived from Greek, exhibited fractal dimension values close to, and slightly higher than, that of Greek; this might imply that the complexity of a writing system increases as other scripts are developed from it. Arabic and Hebrew, Devanagari and Thai, and Armenian and Georgian each exhibited quite similar fractal natures, supporting available knowledge and speculation about their kinship. The Korean script, generally held to have been developed independently, was investigated for clues about its possible inspirations. When the scripts of the Far East were taken into consideration, the fractal dimension of the Korean writing system Hangul was determined to be close to those of the Devanagari and especially the Thai scripts. On the other hand, the fractal natures of the Old Turkic and Japanese scripts seemed less similar to that of the Korean script. As shown in this study, fractal analysis may be used as an auxiliary tool, together with other techniques, in determining the origins and/or relatives of various unknown scripts, or alternatively in showing that presumed relatives are in fact unrelated. Additional information, such as regional and historical relationships between writing systems, may reinforce the implications obtained from investigations carried out by fractal analysis.
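A minimal sketch of the box-counting estimate described above, assuming a binarized script image supplied as a 2-D NumPy boolean array in which True marks inked pixels; the study's preprocessing, fonts, and mesh-size range are not reproduced here.

```python
# Sketch: box-counting estimate of the fractal dimension of a binary image.
import numpy as np

def box_count(image: np.ndarray, box_size: int) -> int:
    """Count boxes of side `box_size` containing at least one inked pixel."""
    h, w = image.shape
    count = 0
    for y in range(0, h, box_size):
        for x in range(0, w, box_size):
            if image[y:y + box_size, x:x + box_size].any():
                count += 1
    return count

def fractal_dimension(image: np.ndarray, box_sizes=(2, 4, 8, 16, 32)) -> float:
    """Slope of log(count) vs. log(1/box_size) gives the box-counting dimension."""
    counts = [box_count(image, s) for s in box_sizes]
    logs_inv_size = np.log(1.0 / np.array(box_sizes))
    log_counts = np.log(counts)
    slope, _ = np.polyfit(logs_inv_size, log_counts, 1)
    return float(slope)

# Toy check: a fully inked 64x64 image should come out at dimension 2.0.
img = np.ones((64, 64), dtype=bool)
print(round(fractal_dimension(img), 2))
```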
Complexity, parallel computation and statistical physics
The intuition that a long history is required for the emergence of complexity
in natural systems is formalized using the notion of depth. The depth of a
system is defined in terms of the number of parallel computational steps needed
to simulate it. Depth provides an objective, irreducible measure of history
applicable to systems of the kind studied in statistical physics. It is argued
that physical complexity cannot occur in the absence of substantial depth and
that depth is a useful proxy for physical complexity. The ideas are illustrated
for a variety of systems in statistical physics.
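As a toy illustration of depth as a count of parallel steps (an example of ours, not from the paper): summing N numbers takes N - 1 additions of total work, but only about log2(N) rounds if independent pairs are combined simultaneously, so depth can be far smaller than work.

```python
# Sketch: depth (parallel rounds) vs. work for a pairwise reduction.
import math

def parallel_reduction_depth(values: list[float]) -> tuple[float, int]:
    """Sum values by pairwise combination, returning (sum, parallel rounds)."""
    rounds = 0
    while len(values) > 1:
        # Within one round, every pair below could be combined simultaneously.
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

total, depth = parallel_reduction_depth(list(range(1, 17)))
print(total, depth, math.ceil(math.log2(16)))  # 136 4 4
```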
The Road Ahead for State Assessments
The adoption of the Common Core State Standards offers an opportunity to make significant improvements to the large-scale statewide student assessments that exist today, and the two US DOE-funded assessment consortia -- the Partnership for the Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) -- are making big strides forward. But to take full advantage of this opportunity, the states must focus squarely on making assessments both fair and accurate.

A new report commissioned by the Rennie Center for Education Research & Policy and Policy Analysis for California Education (PACE), The Road Ahead for State Assessments, offers a blueprint for strengthening assessment policy, pointing out how new technologies are opening up new possibilities for fairer, more accurate evaluations of what students know and are able to do. Not all of the promises can yet be delivered, but the report provides a clear set of assessment-policy recommendations.

The Road Ahead for State Assessments includes three papers on assessment policy. The first, by Mark Reckase of Michigan State University, provides an overview of computer adaptive assessment. Computer adaptive assessment is an established technology that offers detailed information on where students are on a learning continuum rather than a summary judgment about whether or not they have reached an arbitrary standard of "proficiency" or "readiness." Computer adaptivity will support the fair and accurate assessment of English learners (ELs) and lead to a serious engagement with the multiple dimensions of "readiness" for college and careers.

The second and third papers give specific attention to two areas in which we know that current assessments are inadequate: assessments in science and assessments for English learners. In science, paper-and-pencil, multiple-choice tests provide only weak and superficial information about students' knowledge and skills -- most specifically about their abilities to think scientifically and actually do science. In their paper, Chris Dede and Jody Clarke-Midura of Harvard University illustrate the potential for richer, more authentic assessments of students' scientific understanding with a case study of a virtual performance assessment now under development at Harvard. With regard to English learners, administering tests in English to students who are learning the language, or to speakers of non-standard dialects, inevitably confounds students' content knowledge with their fluency in Standard English, to the detriment of many students. In his paper, Robert Linquanti of WestEd reviews key problems in the assessment of ELs and identifies the essential features of an assessment system equipped to provide fair and accurate measures of their academic performance.

The report's contributors offer deeply informed recommendations for assessment policy, but three are especially urgent.

Build a system that ensures continued development and increased reliance on computer adaptive testing. Computer adaptive assessment provides the essential foundation for a system that can produce fair and accurate measurement of English learners' knowledge and of all students' knowledge and skills in science and other subjects. Developing computer adaptive assessments is a necessary intermediate step toward a system that makes assessment more authentic by tightly linking its tasks to instructional activities and ultimately embedding assessment in instruction. It is vital for both consortia to keep these goals in mind, even in light of current technological and resource constraints.

Integrate the development of new assessments with assessments of English language proficiency (ELP). The next generation of ELP assessments should take into consideration an English learner's specific level of proficiency in English. They will need to be based on ELP standards that sufficiently specify the target academic language competencies that English learners need in order to progress in and gain mastery of the Common Core Standards. One of the report's authors, Robert Linquanti, states: "Acknowledging and overcoming the challenges involved in fairly and accurately assessing ELs is integral and not peripheral to the task of developing an assessment system that serves all students well. Treating the assessment of ELs as a separate problem -- or, worse yet, as one that can be left for later -- calls into question the basic legitimacy of assessment systems that drive high-stakes decisions about students, teachers, and schools."

Include virtual performance assessments as part of comprehensive state assessment systems. Virtual performance assessments have considerable promise for measuring students' inquiry and problem-solving skills in science and in other subject areas, because authentic assessment can be closely tied to or even embedded in instruction. The simulation of authentic practices in settings similar to the real world opens the way to assessment of students' deeper learning and their mastery of 21st century skills across the curriculum.

We are just setting out on the road toward assessments that ensure fair and accurate measurement of performance for all students and that support sustained improvements in teaching and learning. Developing assessments that realize these goals will take time, resources, and long-term policy commitment. PARCC and SBAC are taking the essential first steps down a long road, and new technologies have begun to illuminate what's possible. This report seeks to keep policymakers' attention focused on the road ahead, to ensure that the choices they make now move us further toward the goal of college and career success for all students. This publication was released at an event on May 16, 2011.
Using a Combination of Measurement Tools to Extract Metrics from Open Source Projects
Software measurement can play a major role in ensuring the quality and reliability of software products. Measurement activities require appropriate tools to collect the relevant metric data, and several such tools are currently available for software measurement. The main objective of this paper is to provide some guidelines for using a combination of multiple measurement tools, especially for products built using object-oriented techniques and languages. In this paper, we highlight three tools for collecting metric data, in our case from several Java-based open source projects. Our research is currently based on the work of Card and Glass, who argue that design complexity measures (data complexity and structural complexity) are indicators and predictors of procedural/cyclomatic complexity (decision counts) and of errors discovered during system tests. Their work centered on structured design, whereas ours deals with object-oriented designs; the metrics we use parallel those of Card and Glass, namely Henry and Kafura's Information Flow metrics, McCabe's Cyclomatic Complexity, and the Chidamber and Kemerer object-oriented metrics.
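As a rough illustration of two of the metric families named above, the sketch below computes McCabe's cyclomatic complexity from control-flow graph counts (M = E - N + 2P) and the structural component of Henry and Kafura's information flow metric, (fan-in * fan-out)^2. Real measurement tools derive these counts from source code; that extraction step is not shown, and the example numbers are hypothetical.

```python
# Sketch: metric formulas given already-extracted counts.

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe: M = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

def information_flow(fan_in: int, fan_out: int) -> int:
    """Structural component of Henry-Kafura: (fan_in * fan_out) squared."""
    return (fan_in * fan_out) ** 2

# A method with a single if statement: 4 nodes, 4 edges -> M = 2 (one decision).
print(cyclomatic_complexity(edges=4, nodes=4))
# A module called by 3 modules and calling 2 others -> information flow 36.
print(information_flow(fan_in=3, fan_out=2))
```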