
    Effects of network topology on the OpenAnswer’s Bayesian model of peer assessment

    The paper investigates if and how the topology of the peer assessment network can affect the performance of the Bayesian model adopted in OpenAnswer. Performance is evaluated by comparing predicted grades with the teacher's actual grades. The global network is built by interconnecting smaller subnetworks, one per student, where intra-subnetwork nodes represent the student's characteristics, while peer assessment assignments make up the inter-subnetwork connections and determine evidence propagation. A subset of teacher-graded answers is dynamically determined by suitable selection and stop rules. The research questions addressed are: RQ1) “does the topology (diameter) of the network negatively influence the precision of predicted grades?” and, in the affirmative case, RQ2) “are we able to reduce the negative effects of high-diameter networks through an appropriate choice of the subset of students to be corrected by the teacher?” We show that RQ1) OpenAnswer is less effective on higher-diameter topologies, and RQ2) this can be avoided if the subset of corrected students is chosen considering the network topology.
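Since RQ1 turns on the diameter of the peer-assessment graph, the quantity can be illustrated with a short breadth-first-search sketch. The adjacency structure below is hypothetical, not taken from the paper; it only shows how the topological measure under study is computed.

```python
from collections import deque

def bfs_eccentricity(adj, start):
    """Longest shortest-path distance from `start` (its eccentricity)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return max(dist.values())

def diameter(adj):
    """Diameter of a connected peer-assessment graph:
    the maximum eccentricity over all nodes."""
    return max(bfs_eccentricity(adj, n) for n in adj)

# Hypothetical assignment: each student grades two peers, and the
# resulting links are treated as undirected for topology purposes.
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}  # 5-cycle
print(diameter(ring))  # 2
```

Higher-diameter assignments mean evidence from teacher-graded answers must propagate through more intermediate subnetworks, which is the effect RQ1 probes.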

    Modeling peer assessment as a personalized predictor of teacher's grades: The case of OpenAnswer

    Questions with open answers are rarely used as e-learning assessment tools because of the resulting high workload for the teacher/tutor who must grade them. This can be mitigated by having students grade each other's answers, but the uncertainty on the quality of the resulting grades could be high. In our OpenAnswer system we have modeled peer assessment as a Bayesian network connecting a set of sub-networks (each representing a participating student) to the corresponding answers of her graded peers. The model has shown good ability to predict (without further info from the teacher) the exact teacher mark and a very good ability to predict it within 1 mark from the right one (ground truth). From the available datasets we noticed that different teachers sometimes disagree in their assessment of the same answer. For this reason, in this paper we explore how the model can be tailored to the specific teacher to improve its prediction ability. To this aim, we parametrically define the CPTs (Conditional Probability Tables) describing the probabilistic dependence of a Bayesian variable on others in the modeled network, and we optimize the parameters generating the CPTs to obtain the smallest average difference between the predicted grades and the teacher's marks (ground truth). The optimization is carried out separately with respect to each teacher available in our datasets, or with respect to the whole dataset. The paper discusses the results and shows that the prediction performance of our model, when optimized separately for each teacher, improves over the case in which our model is globally optimized with respect to the whole dataset, which in turn improves over the predictions of the raw peer assessment. The improved prediction would allow us to use OpenAnswer, without teacher intervention, as a class monitoring and diagnostic tool.
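The per-teacher CPT fitting described above can be sketched as follows. This is a toy stand-in, not the paper's actual model: a single spread parameter sigma generates a discretized-Gaussian CPT P(peer grade | true grade), predictions are posterior means under a uniform prior, and sigma is grid-searched per teacher to minimize the mean absolute difference from the teacher's marks. The grade scale and the sample data are assumptions.

```python
import math

GRADES = range(0, 11)  # assumed 0..10 mark scale

def cpt(sigma):
    """Row-normalized discretized-Gaussian CPT:
    P(observed peer grade | true grade), spread `sigma`."""
    table = {}
    for true in GRADES:
        w = [math.exp(-(g - true) ** 2 / (2 * sigma ** 2)) for g in GRADES]
        z = sum(w)
        table[true] = [x / z for x in w]
    return table

def predict(peer_grades, sigma):
    """Posterior mean of the true grade given independent peer grades
    (uniform prior, naive-Bayes style combination)."""
    table = cpt(sigma)
    post = [math.prod(table[t][g] for g in peer_grades) for t in GRADES]
    z = sum(post)
    return sum(t * p / z for t, p in zip(GRADES, post))

def fit_sigma(dataset):
    """Grid-search sigma minimizing mean |prediction - teacher mark|
    over one teacher's (peer grades, teacher mark) pairs."""
    def mae(sigma):
        return sum(abs(predict(pg, sigma) - m) for pg, m in dataset) / len(dataset)
    return min((s / 10 for s in range(5, 51)), key=mae)

# Hypothetical per-teacher data: (peer grades, teacher's mark).
teacher_a = [([7, 8, 6], 7), ([3, 4, 4], 4), ([9, 10, 9], 9)]
print(fit_sigma(teacher_a))
```

Fitting sigma separately per teacher mirrors the paper's finding that teacher-specific optimization beats a single global fit: each teacher's grading style yields a different best spread.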

    Self-regulated learning in flipped classrooms: A systematic literature review

    The flipped classroom is considered an instructional strategy and a type of blended learning instruction that focuses on active learning and student engagement. Over the years, flipped classroom studies have focused more on the advantages, challenges, and effectiveness of flipped instruction, but little is known about the state of self-regulation in flipped classrooms. This study investigates the self-regulation strategies as well as the supports proposed for self-regulated learning in flipped classrooms. Findings show that relatively few studies have focused on self-regulated learning in flipped classrooms compared to the overall research and publication productivity on flipped classrooms. Also, the existing solutions and supports have focused only on either self-regulation in general or online help-seeking, but not on other specific types of self-regulation strategies. Our study proposes some future research recommendations for flipped classrooms.

    Group formation optimization using item response theory and integer programming for peer assessment

    In recent years, large-scale e-learning environments such as Massive Open Online Courses (MOOCs) have become increasingly popular. In such environments, peer assessment, which is mutual assessment among learners, has been used to evaluate reports and programming assignments. When the number of learners increases, as in MOOCs, peer assessment is often conducted by dividing learners into multiple groups to reduce the learners' assessment workload. In this case, however, the accuracy of peer assessment depends on how the groups are formed. To solve this problem, this study proposes a group optimization method based on item response theory (IRT) and integer programming. The proposed group optimization method is formulated as an integer programming problem that maximizes the Fisher information, a widely used index of ability assessment accuracy in IRT. Experimental results, however, show that the proposed method cannot sufficiently improve the accuracy compared to random group formation. To overcome this limitation, this study introduces the concept of external raters and proposes an external rater selection method that assigns a few appropriate external raters to each learner after the groups have been formed using the proposed group optimization method. In this study, an external rater is defined as a peer rater who belongs to a different group. The proposed external rater selection method is formulated as an integer programming problem that maximizes the lower bound of the Fisher information of the learners' ability estimates obtained from the external raters. Experimental results using both simulated and real-world peer assessment data show that the introduction of external raters is useful to improve the accuracy sufficiently. The results also demonstrate that the proposed external rater selection method based on IRT models significantly improves the accuracy of ability assessment compared to random selection.
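The Fisher-information objective behind the rater selection can be sketched with a standard 2PL-style information function. This is a simplified stand-in: the paper formulates selection as an integer program over all learners, whereas for a single learner with an additive objective a top-k choice is already exact. The rater parameters and IDs below are hypothetical.

```python
import math

def info_2pl(theta, a, b):
    """Fisher information contributed by a rater with discrimination `a`
    and severity `b` about a learner at ability `theta` (2PL form):
    I(theta) = a^2 * P * (1 - P), with P the logistic of a*(theta - b)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_external_raters(theta, raters, k):
    """Pick the k external raters contributing the most Fisher
    information about a learner at ability `theta`."""
    return sorted(raters, key=lambda r: info_2pl(theta, r["a"], r["b"]),
                  reverse=True)[:k]

# Hypothetical rater parameters estimated from earlier peer assessments.
raters = [
    {"id": "r1", "a": 1.5, "b": 0.0},
    {"id": "r2", "a": 0.6, "b": 0.2},
    {"id": "r3", "a": 1.2, "b": 2.5},
    {"id": "r4", "a": 1.8, "b": -0.1},
]
chosen = select_external_raters(theta=0.0, raters=raters, k=2)
print([r["id"] for r in chosen])  # ['r4', 'r1']
```

Note how r3, despite a decent discrimination, contributes little information at theta = 0 because its severity sits far from the learner's ability; information peaks where the rater's threshold matches the learner.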