418 research outputs found

    Improving EFL Learners Language Written Production Using Subtitled Videos

    Full text link
    English as a Foreign Language (EFL) learners tend to shape their spoken and written production by what they see and hear, so they are able and willing to comment spontaneously after watching and listening. Written language production can, in fact, be assessed through learners' fluency, accuracy, and complexity. This article examines the written language production of university students who watched English-subtitled videos. Two intact groups were assigned two different tasks: one group watched the video with subtitles and the other without. The results reveal that learners who watch the video with subtitles improve their written production in terms of fluency and accuracy, though not complexity.

    ROC for logistic regression classifiers trained by single feature.

    No full text
    <p>1) Research interest features <i>simText</i>, <i>simMesh</i> and <i>simOutcite</i> (panels a, b and d) have large ROC areas, showing that they are informative for the classification. <i>simIncite</i> (panel c), however, has a ROC area of only 0.56, smaller than the other features in this category. 2) The ROC curves for common co-author based features (panels f, k and l) are straight lines because most of the author pairs (especially negative instances) have no common co-authors, and only 1/3 of the positive instances have a non-zero common co-author count. 3) Other features such as <i>sumCoauthor</i> (panel f) are also effective for classification, with a 0.67 ROC area. The activity feature <i>sumRecency</i> (panel i) has a 0.60 ROC area, as does <i>sumPub</i> (panel e). <i>sumClusteringCoef</i> and <i>diffSeniority</i> (panels h and j) show only 0.53 and 0.54 respectively. The individual ROC areas are consistent with the information gain analysis, which also shows that the research interest features are most informative, followed by common neighbor based features.</p>
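    The straight-line ROC curves described above follow from the Mann-Whitney interpretation of the ROC area: it is the probability that a randomly chosen positive instance outscores a randomly chosen negative one. A minimal sketch (illustrative only, not the paper's code; the function name and toy scores are invented for this example):

```python
# Estimate the ROC area of a single scoring feature as the probability
# that a positive instance outscores a negative one, counting ties as 1/2
# (the Mann-Whitney / rank-sum interpretation of AUC).
def roc_area(pos_scores, neg_scores):
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# A feature that is zero for nearly every pair (like the common co-author
# counts in the caption) cannot rank pairs, giving AUC = 0.5 -- the
# "straight line" ROC.
print(roc_area([0, 0, 0], [0, 0, 0]))             # 0.5
print(roc_area([0.9, 0.8, 0.4], [0.3, 0.2, 0.5]))
```

    An informative feature, by contrast, separates the two classes and pushes the area above 0.5.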

    10-fold cross-validation on the training set.

    No full text
    <p>10-fold cross-validation on the training set.</p>

    Learning curve for logistic regression with log loss metric.

    No full text
    <p>The training error and validation error converge by the time the dataset reaches 1,000 author pairs. The training set size (5,361 positive and 5,361 negative instances) is therefore sufficient for the task.</p>

    Collaboration frequency distribution for the CiteGraph dataset.

    No full text
    <p>It follows a power law distribution log (<i>y</i>) = −3.59 * log (<i>x</i>) + 0.885, where <i>y</i> is the percentage of researcher pairs that collaborate <i>x</i> times and <i>x</i> is the number of collaborations.</p>
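    The fitted line can be read off directly as a sketch (illustrative, not the paper's code; base-10 logs are assumed here because the fit comes from a log-log plot, and the caption does not state the base):

```python
import math

# Evaluate the reported fit log(y) = -3.59 * log(x) + 0.885, where y is the
# percentage of researcher pairs that collaborate x times.
def collab_percentage(x):
    return 10 ** (-3.59 * math.log10(x) + 0.885)

# The distribution falls off steeply: most pairs collaborate only once.
for x in (1, 2, 5):
    print(x, round(collab_percentage(x), 3))
```

    Under this reading, roughly an order of magnitude fewer pairs collaborate twice than once.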

    Feature definition.

    No full text
    <p>Feature definition.</p>

    An illustration of automatic research collaboration recommendation.

    No full text
    <p>The graph shows a co-authorship network in which the nodes are authors and the links represent co-authorship; the solid lines represent existing co-authorships. Our goal is to build a computational model that predicts whether author <i>s</i> will collaborate with authors <i>b</i>, <i>f</i>, and <i>d</i> based on their existing research and collaborations.</p>

    Training set feature ranking, by information gain.

    No full text
    <p>Training set feature ranking, by information gain.</p>

    <i>RandomPairCategory</i> evaluation results.

    No full text
    <p><i>RandomPairCategory</i> evaluation results.</p>

    Research interest similarities over researcher career span.

    No full text
    <p>In the early stages of a researcher's career, collaborators tend to share less research interest similarity, whereas collaboration between two experienced researchers shows greater research interest similarity.</p>