
    The phonetics of second language learning and bilingualism

    This chapter provides an overview of major theories and findings in the field of second language (L2) phonetics and phonology. Four main conceptual frameworks are discussed and compared: the Perceptual Assimilation Model-L2, the Native Language Magnet Theory, the Automatic Selective Perception Model, and the Speech Learning Model. These frameworks differ in terms of their empirical focus, including the type of learner (e.g., beginner vs. advanced) and target modality (e.g., perception vs. production), and in terms of their theoretical assumptions, such as the basic unit or window of analysis that is relevant (e.g., articulatory gestures, position-specific allophones). Despite the divergences among these theories, three recurring themes emerge from the literature reviewed. First, the learning of a target L2 structure (segment, prosodic pattern, etc.) is influenced by phonetic and/or phonological similarity to structures in the native language (L1). In particular, L1-L2 similarity exists at multiple levels and does not necessarily benefit L2 outcomes. Second, the role played by certain factors, such as acoustic phonetic similarity between close L1 and L2 sounds, changes over the course of learning, such that advanced learners may differ from novice learners with respect to the effect of a specific variable on observed L2 behavior. Third, the connection between L2 perception and production (insofar as the two are hypothesized to be linked) differs significantly from the perception-production links observed in L1 acquisition. In service of elucidating the predictive differences among these theories, this contribution discusses studies that have investigated L2 perception and/or production primarily at a segmental level. In addition to summarizing the areas in which there is broad consensus, the chapter points out a number of questions which remain a source of debate in the field today.
    Accepted manuscript: https://drive.google.com/open?id=1uHX9K99Bl31vMZNRWL-YmU7O2p1tG2wH

    ORAC-DR: A generic data reduction pipeline infrastructure

    ORAC-DR is a general-purpose data reduction pipeline system designed to be instrument and observatory agnostic. The pipeline works with instruments as varied as infrared integral field units, imaging arrays and spectrographs, and sub-millimeter heterodyne arrays and continuum cameras. This paper describes the architecture of the pipeline system and the implementation of the core infrastructure. We finish by discussing the lessons learned since the initial deployment of the pipeline system in the late 1990s.
    Comment: 11 pages, 1 figure, accepted for publication in Astronomy and Computing
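    The instrument-agnostic architecture described above can be caricatured in a few lines: generic reduction steps are shared, and each instrument contributes only an ordered recipe of them. The sketch below is illustrative Python under that assumption; none of the names (Frame, RECIPES, reduce) are ORAC-DR's actual API, which is Perl-based and far richer.

        # Hypothetical sketch: shared "primitives" composed into
        # per-instrument "recipes" (illustrative only, not ORAC-DR's API).
        from dataclasses import dataclass, field

        @dataclass
        class Frame:
            instrument: str
            meta: dict = field(default_factory=dict)

        def subtract_dark(f):
            f.meta["dark_subtracted"] = True
            return f

        def flat_field(f):
            f.meta["flat_fielded"] = True
            return f

        def mosaic(f):
            f.meta["mosaicked"] = True
            return f

        # Each instrument supplies only an ordered recipe of generic steps,
        # so adding an instrument means adding a recipe, not a pipeline.
        RECIPES = {
            "imaging_array": [subtract_dark, flat_field, mosaic],
            "heterodyne_array": [subtract_dark, mosaic],
        }

        def reduce(frame):
            """Run the recipe registered for the frame's instrument."""
            for step in RECIPES[frame.instrument]:
                frame = step(frame)
            return frame

        print(reduce(Frame("imaging_array")).meta)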

    Assessment of the Accuracy of a Multi-Beam LED Scanner Sensor for Measuring Olive Canopies

    Canopy characterization has become important when trying to optimize any kind of agricultural operation in tall-growing crops such as olive. Many sensors and techniques have given satisfactory results in such applications, and in this work a 2D laser scanner was explored for measuring tree canopies under real-time conditions. The sensor was tested in both laboratory and field conditions to check its accuracy, its cone width, and its ability to characterize olive canopies in situ. In the laboratory, the sensor was mounted on a mast to check: (i) its accuracy at different measurement distances; (ii) its measurement cone width with targets of different reflectivity; and (iii) the influence of target density on its accuracy. The field tests involved both isolated-tree and hedgerow orchards, in which measurements were taken both manually and with the sensor. Canopy volume was estimated with a methodology consisting of revolving or extruding the canopy contour. The sensor showed high accuracy in the laboratory tests except at a measurement distance of 1.0 m, where the error reached 60 mm (6%); at all other distances the error remained below 20 mm (1% relative error). The cone width depended on target reflectivity, and accuracy decreased with target density. (MDPI, CC BY.)
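    The volume methodology mentioned above (revolving or extruding the canopy contour) can be sketched numerically: revolve a radius-versus-height profile for an isolated tree, or extrude a cross-sectional area along the row for a hedgerow. The contour values below are invented for illustration; only the two geometric constructions come from the abstract.

        # Toy numerical version of the two canopy-volume estimates
        # (contour data are made up for illustration).
        import numpy as np

        heights = np.array([0.0, 0.5, 1.0, 1.5, 2.0])  # m above canopy base
        radii   = np.array([0.2, 0.8, 1.1, 0.9, 0.3])  # canopy radius at each height, m

        # Isolated tree: solid of revolution, V = integral of pi * r(h)^2 dh.
        v_revolved = np.trapz(np.pi * radii**2, heights)

        # Hedgerow: extrude the mean cross-sectional area along the row.
        widths = 2 * radii                         # canopy width profile, m
        cross_section = np.trapz(widths, heights)  # one vertical slice, m^2
        row_length = 10.0                          # assumed row length, m
        v_extruded = cross_section * row_length

        print(f"revolved: {v_revolved:.2f} m^3, extruded: {v_extruded:.2f} m^3")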

    Pipeline Reduction of Binary Light Curves from Large-Scale Surveys

    One of the most important changes in observational astronomy of the 21st century is the rapid shift from classical object-by-object observations to extensive automatic surveys. As CCD detectors get better and their prices drop, more and more small and medium-size observatories are refocusing their attention on the detection of stellar variability through systematic sky-scanning missions. This trend is additionally powered by the success of pioneering surveys such as ASAS, DENIS, OGLE, TASS, their space counterpart Hipparcos, and others. Such surveys produce massive amounts of data, and it is not at all clear how these data are to be reduced and analysed. This is especially striking in the eclipsing binary (EB) field, where the most frequently used tools are optimized for object-by-object analysis. A clear need for thorough, reliable, and fully automated approaches to the modeling and analysis of EB data is thus obvious. This task is very difficult because of limited data quality, non-uniform phase coverage, and solution degeneracy. This paper reviews recent advances in putting together semi-automatic and fully automatic pipelines for EB data processing. Automatic procedures have already been used to process Hipparcos data, LMC/SMC observations, the OGLE and ASAS catalogs, etc. We discuss the advantages and shortcomings of these procedures.
    Comment: 14 pages, 8 figures, IAU Symposium 240 proceedings
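    One step that any such EB pipeline must automate is phase-folding the sparse, non-uniformly sampled survey photometry on a candidate period. The toy sketch below (synthetic data, no particular survey's format) shows why non-uniform phase coverage is a problem: some phase bins can come out empty.

        # Minimal phase-folding step, the kind of preprocessing an EB
        # pipeline performs before model fitting (synthetic data only).
        import numpy as np

        rng = np.random.default_rng(0)
        period = 2.87                          # candidate period, days
        t = np.sort(rng.uniform(0, 400, 60))   # sparse, non-uniform epochs
        phase = (t / period) % 1.0             # fold onto [0, 1)
        mag = 12.0 + 0.4 * (np.abs(phase - 0.5) < 0.05)  # toy eclipse dip
        mag = mag + rng.normal(0, 0.01, t.size)          # photometric noise

        # Binned phase curve: empty bins (NaN) expose the coverage gaps
        # that make fully automated model fitting difficult.
        bins = np.linspace(0.0, 1.0, 21)
        means = []
        for lo, hi in zip(bins[:-1], bins[1:]):
            sel = (phase >= lo) & (phase < hi)
            means.append(mag[sel].mean() if sel.any() else np.nan)
        print(np.round(means, 3))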

    Sketched Answer Set Programming

    Answer Set Programming (ASP) is a powerful modeling formalism for combinatorial problems. However, writing ASP models is not trivial. We propose a novel method, called Sketched Answer Set Programming (SkASP), aimed at supporting the user in resolving this issue. The user writes an ASP program while leaving uncertain parts open, marking them with question marks. In addition, the user provides a number of positive and negative examples of the desired program behaviour. The sketched model is rewritten into another ASP program, which is solved by traditional methods. As a result, the user obtains a functional and reusable ASP program modelling her problem. We evaluate our approach on 21 well-known puzzles and combinatorial problems inspired by Karp's 21 NP-complete problems and demonstrate a use case for a database application based on ASP.
    Comment: 15 pages, 11 figures; to appear in ICTAI 2018
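    The rewriting idea can be illustrated loosely: each question mark becomes a choice among candidate atoms, and the examples become hard constraints, so an ordinary ASP solver fills in the hole. The Python sketch below merely generates such an ASP program as text; it is an invented toy encoding, not the actual SkASP rewriting.

        # Loose illustration of the sketch-to-ASP rewriting idea (a toy
        # encoding, not SkASP's): every hole "?" becomes a choice over
        # candidate atoms, and the examples become hard constraints.
        CANDIDATES = ["edge(X,Y)", "edge(Y,X)"]

        sketch = "reach(X,Y) :- ?."   # user left the rule body open
        positive = ["reach(a,b)"]     # desired behaviour
        negative = ["reach(b,a)"]     # forbidden behaviour

        rules = ["edge(a,b)."]        # background facts for the examples
        for i, cand in enumerate(CANDIDATES):
            # Guard each candidate body with a choice atom choose(i).
            rules.append(sketch.replace("?", f"{cand}, choose({i})"))
        rules.append(f"1 {{ choose(0..{len(CANDIDATES) - 1}) }} 1.")
        rules += [f":- not {p}." for p in positive]  # examples prune guesses
        rules += [f":- {n}." for n in negative]

        print("\n".join(rules))       # feed the result to any ASP solver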

    Investigating effort prediction of web-based applications using CBR on the ISBSG dataset

    As web-based applications become more popular and more sophisticated, so does the need for early, accurate estimates of the effort required to build such systems. Case-based reasoning (CBR) has been shown to be a reasonably effective estimation strategy, although it has not been widely explored in the context of web applications. This paper reports on a study carried out on a subset of the ISBSG dataset to examine the optimal number of analogies that should be used in making a prediction. The results show that it is not possible to select such a value with confidence and that, in common with findings in other domains, the effectiveness of CBR is hampered by other factors, including the characteristics of the underlying dataset (such as the spread of the data and the presence of outliers) and the calculation employed to evaluate the distance function (in particular, the treatment of numeric and categorical data).
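    The core of a CBR effort estimator of the kind studied here fits in a few lines: normalise numeric features, score categorical features by simple overlap, and predict the mean effort of the k nearest analogies. The features and cases below are invented for illustration; the real ISBSG schema is far richer, and the paper's point is precisely that the best k is hard to pick.

        # Minimal CBR effort estimator: mixed numeric/categorical distance,
        # prediction = mean effort of the k nearest analogies. All cases
        # and features are invented; this is not the ISBSG schema.
        import numpy as np

        # (size_in_fp, team_size, platform, language_type, effort_hours)
        cases = [
            (120, 4, "web", "4GL", 900),
            (300, 7, "web", "3GL", 2600),
            (80,  3, "client", "4GL", 500),
            (250, 6, "web", "4GL", 2000),
        ]

        def distance(a, b, num_ranges):
            # Range-normalised numeric terms plus 0/1 overlap for categoricals.
            num = sum(((x - y) / r) ** 2 for x, y, r in zip(a[:2], b[:2], num_ranges))
            cat = sum(x != y for x, y in zip(a[2:4], b[2:4]))
            return (num + cat) ** 0.5

        def predict(target, k=2):
            """Mean effort of the k most similar past cases (analogies)."""
            ranges = [max(c[i] for c in cases) - min(c[i] for c in cases) for i in (0, 1)]
            ranked = sorted(cases, key=lambda c: distance(target, c, ranges))
            return np.mean([c[4] for c in ranked[:k]])

        print(predict((200, 5, "web", "4GL")))  # estimated effort in hours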