
    Student Perceptions on the Counselor Education Exit Requirement Experience

    Master’s-level counselor education programs were asked to participate in a study to determine students’ perceptions of the exit requirement experience. Ninety-five recent graduates or graduate students nearing completion of a counselor education degree were surveyed. Results from 91 usable responses indicated that, overall, students enrolled in programs that required some form of exit requirement were satisfied with the process. Furthermore, the majority of these respondents felt that the major purpose of the exit requirement was to measure synthesis of knowledge. Implications for assessment in counselor education programs are discussed.

    Explicit length modelling for statistical machine translation

    Explicit length modelling has been previously explored in statistical pattern recognition with successful results. In this paper, two length models along with two parameter estimation methods and two alternative parametrisations for statistical machine translation (SMT) are presented. More precisely, we incorporate explicit bilingual length modelling in a state-of-the-art log-linear SMT system as an additional feature function in order to prove the contribution of length information. Finally, a systematic evaluation on reference SMT tasks considering different language pairs proves the benefits of explicit length modelling.

    Work supported by the EC (FEDER/FSE) under the transLectures project (FP7-ICT-2011-7-287755) and the Spanish MEC/MICINN under the MIPRCV "Consolider Ingenio 2010" program (CSD2007-00018) and iTrans2 (TIN2009-14511) projects and FPU grant (AP2010-4349). Also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009-439) project, by the Generalitat Valenciana under grants Prometeo/2009/014 and GV/2010/067, and by the UPV under the AdInTAO (20091027) project. The authors wish to thank the anonymous reviewers for their criticisms and suggestions.

    Silvestre Cerdà, J. A.; Andrés Ferrer, J.; Civera Saiz, J. (2012). Explicit length modelling for statistical machine translation. Pattern Recognition 45(9):3183-3192. https://doi.org/10.1016/j.patcog.2012.01.006
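
    To make the feature-function idea concrete, here is a minimal sketch (not the authors' implementation) of how a bilingual length model can enter a log-linear SMT score as one extra feature. The Poisson parametrisation and the mean_ratio value are illustrative assumptions; in the paper, the length models and their parameters are estimated from bilingual data.

```python
import math

# Hypothetical sketch: a bilingual length model as one more feature h_k in a
# log-linear SMT score  score(e | f) = sum_k lambda_k * h_k(f, e).
# Here h_length is the log-probability of the target length |e| under a
# Poisson whose mean grows linearly with the source length |f|; the slope
# `mean_ratio` is an assumed stand-in for a parameter estimated from data.

def h_length(src_len, tgt_len, mean_ratio=1.1):
    lam = mean_ratio * src_len                 # expected target length
    return tgt_len * math.log(lam) - lam - math.lgamma(tgt_len + 1)

def loglinear_score(features, weights):
    return sum(w * h for w, h in zip(weights, features))

# Toy comparison: two candidate lengths for a 10-word source sentence,
# combined with made-up translation- and language-model feature values.
for tgt_len in (9, 18):
    feats = [-12.3, -20.1, h_length(10, tgt_len)]
    print(tgt_len, round(loglinear_score(feats, [1.0, 0.6, 0.3]), 2))
```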

    Cost-sensitive active learning for computer-assisted translation

    This is the author’s version of a work that was accepted for publication in Pattern Recognition Letters. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition Letters, Volume 37, 1 February 2014, Pages 124-134. DOI: 10.1016/j.patrec.2013.06.007

    Machine translation technology is not perfect. To be successfully embedded in real-world applications, it must compensate for its imperfections by interacting intelligently with the user within a computer-assisted translation framework. The interactive-predictive paradigm, where both a statistical translation model and a human expert collaborate to generate the translation, has been shown to be an effective computer-assisted translation approach. However, the exhaustive supervision of all translations and the use of non-incremental translation models penalize the productivity of conventional interactive-predictive systems. We propose a cost-sensitive active learning framework for computer-assisted translation whose goal is to make the translation process as painless as possible. In contrast to conventional active learning scenarios, the proposed active learning framework is designed to minimize not only how many translations the user must supervise but also how difficult each translation is to supervise. To do that, we address the two potential drawbacks of the interactive-predictive translation paradigm. On the one hand, user effort is focused on those translations whose supervision is considered more "informative", thus maximizing the utility of each user interaction. On the other hand, we use a dynamic machine translation model that is continually updated with user feedback after deployment. We empirically validated each of the technical components in simulation and quantified the user effort saved. We conclude that both selective translation supervision and translation model updating lead to important user-effort reductions, and consequently to improved translation productivity.

    Work supported by the European Union Seventh Framework Program (FP7/2007-2013) under the CasMaCat project (grant agreement No. 287576), by the Generalitat Valenciana under grant ALMPR (Prometeo/2009/014), and by the Spanish Government under grant TIN2012-31723. The authors thank Daniel Ortiz-Martinez for providing us with the log-linear SMT model with incremental features and the corresponding online learning algorithms. The authors also thank the anonymous reviewers for their criticisms and suggestions.

    González Rubio, J.; Casacuberta Nolla, F. (2014). Cost-sensitive active learning for computer-assisted translation. Pattern Recognition Letters 37(1):124-134. https://doi.org/10.1016/j.patrec.2013.06.007
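
    As a rough sketch of the cost-sensitive selection idea (my illustration under assumed scores, not the paper's actual sampling strategy), the snippet below ranks automatic translations by informativeness per unit of estimated supervision effort and supervises greedily within an effort budget; all the scores are hypothetical.

```python
# Hedged sketch of cost-sensitive selection: each candidate translation has
# an informativeness score (e.g. model uncertainty) and an estimated
# supervision effort (e.g. predicted number of edits). We supervise only the
# candidates with the best utility per unit of cost, within a total budget.

def select_for_supervision(candidates, effort_budget):
    """candidates: list of (sentence_id, informativeness, effort)."""
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, spent = [], 0.0
    for sent_id, info, effort in ranked:
        if spent + effort <= effort_budget:
            chosen.append(sent_id)
            spent += effort
    return chosen

# Hypothetical pool: (id, informativeness, effort).
pool = [("s1", 0.9, 5.0), ("s2", 0.4, 1.0), ("s3", 0.8, 8.0), ("s4", 0.5, 2.0)]
print(select_for_supervision(pool, effort_budget=8.0))  # -> ['s2', 's4', 's1']
```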

    Inferential models: A framework for prior-free posterior probabilistic inference

    Posterior probabilistic statistical inference without priors is an important but so far elusive goal. Fisher's fiducial inference, Dempster-Shafer theory of belief functions, and Bayesian inference with default priors are attempts to achieve this goal but, to date, none has given a completely satisfactory picture. This paper presents a new framework for probabilistic inference, based on inferential models (IMs), which not only provides data-dependent probabilistic measures of uncertainty about the unknown parameter, but does so with an automatic long-run frequency-calibration property. The key to this new approach is the identification of an unobservable auxiliary variable associated with the observable data and unknown parameter, and the prediction of this auxiliary variable with a random set before conditioning on data. Here we present a three-step IM construction, and prove a frequency-calibration property of the IM's belief function under mild conditions. A corresponding optimality theory is developed, which helps to resolve the non-uniqueness issue. Several examples are presented to illustrate this new approach.

    Comment: 29 pages with 3 figures. Main text is the same as the published version. Appendix B is an addition, not in the published version, that contains some corrections and extensions of two of the main theorems.
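
    To make the three-step construction concrete, the sketch below works through the textbook normal-mean case (my illustration of the generic associate/predict/combine recipe, not code from the paper) and checks the frequency-calibration property by simulation.

```python
import numpy as np
from scipy.stats import norm

# Illustrative example: the basic IM for a normal mean theta, X ~ N(theta, 1).
# A-step: associate X = theta + U, with U ~ N(0,1) unobservable.
# P-step: predict U with the default random set S = [-|Z|, |Z|], Z ~ N(0,1).
# C-step: combine with the observed x; the plausibility of the singleton
# assertion {theta} then works out to 1 - |2*Phi(x - theta) - 1|.

def plausibility(theta, x):
    return 1.0 - np.abs(2.0 * norm.cdf(x - theta) - 1.0)

# Frequency calibration: the 90% plausibility region
# {theta : pl(theta) >= 0.10} covers the true theta ~90% of the time.
rng = np.random.default_rng(0)
theta_true = 1.5
x = rng.normal(theta_true, 1.0, size=100_000)
print(np.mean(plausibility(theta_true, x) >= 0.10))  # approximately 0.90
```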

    Inducing Probabilistic Grammars by Bayesian Model Merging

    We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad-hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is governed by the Bayesian posterior probability of the grammar given the data, which formalizes a trade-off between a close fit to the data and a default preference for simpler models ("Occam's Razor"). The general scheme is illustrated using three types of probabilistic grammars: hidden Markov models, class-based n-grams, and stochastic context-free grammars.

    Comment: To appear in Grammatical Inference and Applications, Second International Colloquium on Grammatical Inference; Springer Verlag, 1994. 13 pages.
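
    The merging loop is easy to mimic on a toy problem. The sketch below (my construction, not the paper's code) induces word classes for a class-based bigram model: every word starts in its own class, and pairs of classes are merged greedily as long as the posterior log P(G) + log P(D | G) improves, with a simple description-length prior standing in for the paper's grammar priors.

```python
import math
from collections import Counter

# Runnable toy of Bayesian model merging for a class-based bigram model.
# The prior charges PENALTY nats per class (an Occam's-Razor term), so a
# merge is accepted when its likelihood cost is outweighed by the saving.

PENALTY = 3.0
corpus = "the cat sat the dog sat the cat ran the dog ran".split()
word_n = Counter(corpus)

def log_posterior(classes):
    cls = {w: i for i, ws in enumerate(classes) for w in ws}
    seq = [cls[w] for w in corpus]
    trans, out = Counter(zip(seq, seq[1:])), Counter(seq[:-1])
    size = Counter()                         # word tokens per class
    for w, n in word_n.items():
        size[cls[w]] += n
    loglik = sum(n * math.log(n / out[a]) for (a, _), n in trans.items())
    loglik += sum(n * math.log(n / size[cls[w]]) for w, n in word_n.items())
    return loglik - PENALTY * len(classes)

classes = [frozenset([w]) for w in word_n]   # incorporation: one class per word
while True:
    merges = [[c for k, c in enumerate(classes) if k not in (i, j)]
              + [classes[i] | classes[j]]
              for i in range(len(classes)) for j in range(i + 1, len(classes))]
    if not merges:
        break
    best = max(merges, key=log_posterior)
    if log_posterior(best) <= log_posterior(classes):
        break                                # posterior peaked: stop merging
    classes = best

print([set(c) for c in classes])  # 'cat'/'dog' and 'sat'/'ran' merge; 'the' stays
```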

    Computer simulations of developmental change: The contributions of working memory capacity and long-term knowledge

    Increasing working memory (WM) capacity is often cited as a major influence on children’s development, and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children’s nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine the relative contributions of each in explaining the developmental data. The simulations show that (1) both mechanisms independently cause the same overall developmental changes in NWR performance; (2) increases in long-term knowledge provide the better fit to the child data; and (3) varying both long-term knowledge and WM capacity adds no significant gains over varying long-term knowledge alone. Given that increases in long-term knowledge must occur during development, the results indicate that increases in WM capacity may not be required to explain developmental differences. An increase in WM capacity should only be cited as a mechanism of developmental change when there are clear empirical reasons for doing so.
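
    The core manipulation can be caricatured in a few lines. In the hedged toy below (my construction, not the authors' model), a nonword is repeated correctly when its encoding, using whatever multi-syllable chunks are in long-term knowledge, fits within a fixed number of working-memory slots; growing the chunk inventory alone then produces a developmental improvement.

```python
import random

# Schematic toy: repetition succeeds when the nonword, encoded with the
# chunks available in long-term knowledge, fits into a fixed number of WM
# slots. Syllable inventory and chunk sizes are illustrative assumptions.

SYLLABLES = ["ba", "ti", "ko", "mu", "pe", "lo"]

def slots_needed(nonword, known_chunks):
    """Greedy parse: a known 2-syllable chunk occupies a single slot."""
    used, i = 0, 0
    while i < len(nonword):
        i += 2 if tuple(nonword[i:i + 2]) in known_chunks else 1
        used += 1
    return used

def nwr_accuracy(n_chunks_known, wm_capacity, trials=5000, length=4):
    rng = random.Random(42)
    pool = [(a, b) for a in SYLLABLES for b in SYLLABLES]
    known = set(rng.sample(pool, n_chunks_known))
    hits = sum(
        slots_needed([rng.choice(SYLLABLES) for _ in range(length)], known)
        <= wm_capacity
        for _ in range(trials))
    return hits / trials

for known in (0, 12, 30):   # more long-term knowledge, same WM capacity ...
    print(known, nwr_accuracy(known, wm_capacity=3))  # ... higher accuracy
```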

    Tectonic denudation and topographic development in the Spanish Sierra Nevada

    The denudation history of the rapidly uplifting western part of the Spanish Sierra Nevada was assessed using apatite fission track (AFT) ages and ¹⁰Be analyses of bedrock and fluvial sediments. Major contrasts in the denudation history are recorded within the 27 km² Río Torrente catchment. Upland areas are characterized by low relief, low slope angles, and locally the preservation of shallow marine sediments, which have experienced <200 m of erosion in the last 9 Myr. However, AFT age determinations from samples collected close to the marine sediments imply >2 km of denudation since circa 4 Ma. The minimum denudation rates of 0.4 mm yr⁻¹ derived from AFT also contrast with the slow medium-term (10⁴ years) erosion rates (0.044 ± 0.015 mm yr⁻¹) estimated from ¹⁰Be measurements at high elevations. The local medium- to long-term contrasts in denudation rates within the high Sierra Nevada indicate that much of the unroofing occurs by tectonic denudation on flat-lying detachments. In lower elevation parts of the catchment, rapid river incision coupled to rock uplift has produced ∼1.6 km of relief, implying that the rivers and adjacent hillslopes close to the edge of the orogen are sensitive to normal-fault-driven changes in base level. However, these changes are not transmitted into the low-relief, slowly eroding upland areas. Thus the core of the mountain range continues to increase in elevation until the limits of crustal strength are reached and denudation is initiated along planes of structural weakness. We propose that this form of tectonic denudation provides an effective limit to relief in young orogens.
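
    As a quick check on the contrast drawn above (my arithmetic, using only the figures quoted in the abstract):

```python
# Back-of-envelope check of the quoted rates: >2 km of denudation since
# circa 4 Ma implies a time-averaged rate of at least 0.5 mm/yr, consistent
# with the stated AFT minimum of 0.4 mm/yr and roughly an order of
# magnitude above the 10Be-derived upland rate of 0.044 mm/yr.

denudation_mm = 2.0e6          # >2 km, in millimetres
interval_yr = 4.0e6            # since circa 4 Ma
aft_rate = denudation_mm / interval_yr
print(f"AFT-implied rate: {aft_rate:.2f} mm/yr")               # 0.50
print(f"vs 10Be upland rate: {aft_rate / 0.044:.0f}x faster")  # ~11x
```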

    Statistical methods in language processing

    The term statistical methods here refers to a methodology that has been dominant in computational linguistics since about 1990. It is characterized by the use of stochastic models, substantial data sets, machine learning, and rigorous experimental evaluation. The shift to statistical methods in computational linguistics parallels a movement in artificial intelligence more broadly. Statistical methods have so thoroughly permeated computational linguistics that almost all work in the field draws on them in some way. There has, however, been little penetration of the methods into general linguistics. The methods themselves are largely borrowed from machine learning and information theory. We limit attention to that which has direct applicability to language processing, though the methods are quite general and have many nonlinguistic applications. Not every use of statistics in language processing falls under statistical methods as we use the term. Standard hypothesis testing and experimental design, for example, are not covered in this article. WIREs Cogn Sci 2011, 2:315–322. DOI: 10.1002/wcs.111. For further resources related to this article, please visit the WIREs website.

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/83468/1/111_ftp.pd

    FASTLens (FAst STatistics for weak Lensing) : Fast method for Weak Lensing Statistics and map making

    With increasingly large data sets, weak lensing measurements are able to measure cosmological parameters with ever greater precision. However, this increased accuracy also places greater demands on the statistical tools used to extract the available information. To date, the majority of lensing analyses use the two-point statistics of the cosmic shear field. These can either be studied directly, using the two-point correlation function, or in Fourier space, using the power spectrum. But analyzing weak lensing data inevitably involves masking out regions, for example to remove bright stars from the field. Masking out the stars is common practice, but the gaps in the data need proper handling. In this paper, we show how an inpainting technique allows us to properly fill in these gaps with only N log N operations, leading to a new image from which we can compute straightforwardly and with very good accuracy both the power spectrum and the bispectrum. We then propose a new method to compute the bispectrum with a polar FFT algorithm, which has the main advantage of avoiding any interpolation in the Fourier domain. Finally, we propose a new method for dark matter mass map reconstruction from shear observations which integrates this new inpainting concept. A range of examples based on 3D N-body simulations illustrates the results.

    Comment: Final version accepted by MNRAS. The FASTLens software is available from the following link: http://irfu.cea.fr/Ast/fastlens.software.ph
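
    To illustrate only the shape of the pipeline (the paper's inpainting is a sparsity-based method; the diffusion fill below is a crude stand-in of my own devising), one can fill the masked pixels and then estimate the power spectrum from the completed map:

```python
import numpy as np

# Schematic stand-in, NOT the FASTLens algorithm: fill masked pixels by
# iterative neighbour averaging (the paper uses sparsity-based inpainting),
# then take an FFT-based power spectrum of the completed map.

def inpaint(image, mask, n_iter=200):
    filled = np.where(mask, image, image[mask].mean())
    for _ in range(n_iter):
        smoothed = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                           np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(mask, image, smoothed)  # observed pixels stay fixed
    return filled

def power_spectrum_2d(image):
    return np.abs(np.fft.fft2(image)) ** 2 / image.size

rng = np.random.default_rng(1)
kappa = rng.normal(size=(128, 128))        # toy convergence map
mask = rng.random((128, 128)) > 0.1        # True = observed; ~10% masked out
ps = power_spectrum_2d(inpaint(kappa, mask))
print(ps.shape, ps.mean())
```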