
    The sensitivity of landscape evolution models to spatial and temporal rainfall resolution

    © Author(s) 2016. Climate is one of the main drivers for landscape evolution models (LEMs), yet its representation is often basic, with values averaged over long time periods and frequently lumped to the same value for the whole basin. Clearly, this hides the heterogeneity of precipitation - but what impact does this averaging have on erosion and deposition, topography, and the final shape of LEM landscapes? This paper presents results from the first systematic investigation into how the spatial and temporal resolution of precipitation affects LEM simulations of sediment yields and patterns of erosion and deposition. This is carried out by assessing the sensitivity of the CAESAR-Lisflood LEM to different spatial and temporal precipitation resolutions - as well as how this interacts with different-sized drainage basins over short and long timescales. A range of simulations were carried out, varying rainfall from 0.25 h × 5 km to 24 h × Lump resolution over three different-sized basins for 30-year durations. Results showed that there was a sensitivity to temporal and spatial resolution, with the finest leading to > 100 % increases in basin sediment yields. To look at how these interactions manifested over longer timescales, several simulations were carried out to model a 1000-year period. These showed a systematic bias towards greater erosion in uplands and deposition in valley floors with the finest spatial- and temporal-resolution data. Further tests showed that this effect was due solely to the data resolution, not orographic factors. Additional research indicated that these differences in sediment yield could be accounted for by adding a compensation factor to the model sediment transport law. However, this resulted in notable differences in the topographies generated, especially in third-order and higher streams.
The implications of these findings are that uncalibrated past and present LEMs using lumped and time-averaged climate inputs may be under-predicting basin sediment yields as well as introducing spatial biases through under-predicting erosion in first-order streams but over-predicting erosion in second- and third-order streams and valley floor areas. Calibrated LEMs may give correct sediment yields, but patterns of erosion and deposition will be different and the calibration may not be correct for changing climates. This may have significant impacts on the modelled basin profile and shape from long-timescale simulations.
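The core reason time-averaged rainfall under-predicts sediment yield can be sketched in a few lines. The snippet below is a toy illustration, not CAESAR-Lisflood's actual transport law: it assumes sediment flux follows a generic power law of discharge, qs = k·q^n with n > 1 (the constant k, exponent n, and the discharge series are all made up). Because a power law with n > 1 is convex, averaging out intense bursts always lowers the predicted total (Jensen's inequality), which mirrors the paper's finding that lumped inputs under-predict yields.

```python
def sediment_yield(discharges, k=1.0, n=1.5):
    """Total sediment yield for a discharge series under a power-law
    transport relation qs = k * q**n (illustrative values only)."""
    return sum(k * q ** n for q in discharges)

# Hypothetical hourly discharges: a few intense rainfall-driven bursts.
hourly = [0.0, 0.0, 8.0, 0.0, 0.0, 4.0, 0.0, 0.0]

# Lumped input: the same total water, averaged over the whole period.
mean_q = sum(hourly) / len(hourly)
lumped = [mean_q] * len(hourly)

fine_yield = sediment_yield(hourly)    # preserves the intense bursts
coarse_yield = sediment_yield(lumped)  # averaged-out input

assert fine_yield > coarse_yield  # averaging under-predicts yield
```

With these illustrative numbers the fine-resolution series yields roughly twice the lumped one, which is the same order of effect (> 100 % differences) reported in the abstract, though the real model's behaviour depends on its actual transport law and routing.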

    The impact of machine translation error types on post-editing effort indicators

    In this paper, we report on a post-editing study for general text types from English into Dutch conducted with master's students of translation. We used a fine-grained machine translation (MT) quality assessment method with error weights that correspond to severity levels and are related to cognitive load. Linear mixed effects models are applied to analyze the impact of MT quality on potential post-editing effort indicators. The impact of MT quality is evaluated on three different levels, each with an increasing granularity. We find that MT quality is a significant predictor of all different types of post-editing effort indicators and that different types of MT errors predict different post-editing effort indicators.
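The severity-weighted scoring idea behind such fine-grained MT quality assessment can be sketched simply. The severity levels and weights below are hypothetical placeholders, not the paper's actual taxonomy or weights:

```python
# Hypothetical severity weights; higher weight = higher cognitive load.
SEVERITY_WEIGHTS = {"minor": 1, "major": 2, "critical": 4}

def mt_quality_score(errors):
    """Weighted error score for one MT segment; higher means worse quality.

    `errors` maps a severity level to the number of errors at that level.
    """
    return sum(SEVERITY_WEIGHTS[sev] * count for sev, count in errors.items())

segment_errors = {"minor": 3, "major": 1, "critical": 0}
print(mt_quality_score(segment_errors))  # 3*1 + 1*2 + 0*4 = 5
```

A score like this would then serve as the predictor in mixed effects models of post-editing time, keystrokes, or other effort indicators.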

    The Co-Evolution of Test Maintenance and Code Maintenance through the lens of Fine-Grained Semantic Changes

    Automatic testing is a widely adopted technique for improving software quality. Software developers add, remove and update test methods and test classes as part of the software development process as well as during the evolution phase, following the initial release. In this work we conduct a large-scale study of 61 popular open source projects and report the relationships we have established between test maintenance, production code maintenance, and semantic changes (e.g., statement added, method removed, etc.) performed in developers' commits. We build predictive models, and show that the number of tests in a software project can be well predicted by employing code maintenance profiles (i.e., how many commits were performed in each of the maintenance activities: corrective, perfective, adaptive). Our findings also reveal that more often than not, developers perform code fixes without performing complementary test maintenance in the same commit (e.g., update an existing test or add a new one). When developers do perform test maintenance, it is likely to be affected by the semantic changes they perform as part of their commit. Our work is based on studying 61 popular open source projects, comprised of over 240,000 commits consisting of over 16,000,000 semantic change type instances, performed by over 4,000 software engineers. Comment: postprint, ICSME 201
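The "maintenance profile" idea, counting commits per activity and using those counts as predictors, can be sketched as follows. This is a heavily simplified illustration with one predictor and invented numbers; the study's actual models and data are richer:

```python
from collections import Counter

def maintenance_profile(commit_labels):
    """Count commits per maintenance activity (corrective/perfective/adaptive)."""
    return Counter(commit_labels)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (a single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b  # intercept, slope

# Profile for one hypothetical project's labelled commits.
profile = maintenance_profile(
    ["corrective", "perfective", "corrective", "adaptive"]
)

# Hypothetical per-project data: corrective-commit counts vs. test counts.
corrective = [10, 40, 25, 60]
tests = [120, 480, 300, 720]
a, b = fit_line(corrective, tests)

predicted = a + b * 50  # predicted test count for 50 corrective commits
```

In the study itself, all three activity counts would enter the model jointly rather than one at a time.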

    Evaluating the role of quantitative modeling in language evolution

    Models are a flourishing and indispensable area of research in language evolution. Here we highlight critical issues in using and interpreting models, and suggest viable approaches. First, contrasting models can explain the same data, and similar modelling techniques can lead to diverging conclusions. This should act as a reminder to use the extreme malleability of modelling parsimoniously when interpreting results. Second, quantitative techniques similar to those used in modelling language evolution have proven themselves inadequate in other disciplines. Cross-disciplinary fertilization is crucial to avoid mistakes that have previously occurred in other areas. Finally, experimental validation is necessary both to sharpen models' hypotheses and to support their conclusions. Our belief is that models should be interpreted as quantitative demonstrations of logical possibilities, rather than as direct sources of evidence. Only an integration of theoretical principles, quantitative proofs and empirical validation can allow research in the evolution of language to progress.

    Barriers and opportunities for evidence-based health service planning: the example of developing a Decision Analytic Model to plan services for sexually transmitted infections in the UK

    Decision Analytic Models (DAMs) are established means of evidence synthesis for differentiating between health interventions. They have mainly been used to inform clinical decisions and health technology assessment at the national level, yet they could also inform local health service planning. For this, a DAM must take into account the needs of the local population, but also the needs of those planning its services. We draw on our experiences from stakeholder consultations in which we presented the potential utility of a DAM for planning local health services for sexually transmitted infections (STIs) in the UK, and the evidence it could use to inform decisions regarding different combinations of service provision, in terms of their costs, cost-effectiveness, and public health outcomes. From these consultations, we discuss the barriers stakeholders perceived to using DAMs to inform service planning for local populations, including (1) a tension between individual and population perspectives; (2) reductionism; and (3) a lack of transparency regarding models, their assumptions, and the motivations of those generating models.
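A standard way a DAM compares service configurations on cost and cost-effectiveness is the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of health outcome when moving from one option to another. The service options, costs, and outcome figures below are entirely hypothetical, and real STI-service DAMs would model many more pathways:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per additional unit
    of outcome (e.g. per infection averted) between two options."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical local STI service configurations:
# (annual cost in GBP, infections averted per year)
clinic_only = (200_000, 150)
clinic_plus_online = (260_000, 210)

extra_cost_per_infection_averted = icer(
    clinic_plus_online[0], clinic_only[0],
    clinic_plus_online[1], clinic_only[1],
)
# (260000 - 200000) / (210 - 150) = 1000.0 GBP per extra infection averted
```

Planners would then judge whether that incremental cost is acceptable for their local population and budget.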

    Predictive Analytics for Fantasy Football: Predicting Player Performance Across the NFL

    The goal of this research is to develop a quantitative method of ranking and listing players in terms of performance. These rankings can then be used to evaluate players prior to and during a fantasy football draft. To produce these rankings, we develop a methodology for forecasting the performance of each individual player (on different metrics) for the upcoming season (16 games) and use these forecasts to estimate player fantasy football scores for the 2018 season. More specifically, this work answers the following: In what order should players be drafted in a 2018 fantasy football draft and why? Which players can be expected to perform the best at their given position (Quarterback, Running back, Wide Receiver, Kicker, Team Defense) in 2018, and which players should we expect to perform poorly?
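The final step, turning per-player forecasts into a draft ordering, can be sketched as below. The player names and per-game point projections are invented, and this ignores position scarcity and other draft considerations the full methodology would handle:

```python
GAMES = 16  # regular-season games in the forecast horizon

forecasts = {  # player -> projected fantasy points per game (hypothetical)
    "QB Smith": 18.5,
    "RB Jones": 14.2,
    "WR Brown": 15.8,
    "K Davis": 7.9,
}

def draft_ranking(per_game):
    """Players ordered by projected season total, best first."""
    return sorted(per_game, key=lambda p: per_game[p] * GAMES, reverse=True)

print(draft_ranking(forecasts))
# ['QB Smith', 'WR Brown', 'RB Jones', 'K Davis']
```

Since the season length multiplies every projection equally, the ranking here reduces to ordering by per-game forecast; the totals matter once forecasts cover different numbers of expected games.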