
    Modeling peer assessment as a personalized predictor of teacher's grades: The case of OpenAnswer

    Questions with open answers are rarely used as e-learning assessment tools because of the high workload for the teacher or tutor who has to grade them. This can be mitigated by having students grade each other's answers, but the uncertainty about the quality of the resulting grades can be high. In our OpenAnswer system we model peer assessment as a Bayesian network connecting a set of sub-networks (one for each participating student) to the corresponding answers of the peers she graded. The model has shown a good ability to predict the exact teacher mark (without further input from the teacher) and a very good ability to predict it within 1 mark of the ground truth. From the available datasets we noticed that different teachers sometimes disagree in their assessment of the same answer. For this reason, in this paper we explore how the model can be tailored to a specific teacher to improve its prediction ability. To this aim, we parametrically define the CPTs (Conditional Probability Tables) describing the probabilistic dependence of each Bayesian variable on the others in the modeled network, and we optimize the parameters generating the CPTs to minimize the average difference between the predicted grades and the teacher's marks (ground truth). The optimization is carried out either separately for each teacher available in our datasets or with respect to the whole dataset. The paper discusses the results and shows that the prediction performance of our model, when optimized separately for each teacher, improves over the case in which the model is globally optimized with respect to the whole dataset, which in turn improves over the predictions of raw peer assessment. The improved prediction would allow OpenAnswer to be used, without teacher intervention, as a class monitoring and diagnostic tool.
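    The per-teacher CPT optimization described above can be pictured with a small sketch. The Python snippet below is purely illustrative: the 1-10 mark scale, the Gaussian-shaped CPT, the sigma parameter, and the grid search are assumptions standing in for OpenAnswer's actual CPT family and optimizer; it only shows the shape of the procedure of fitting the CPT parameters per teacher versus once over the whole dataset.

```python
# Hypothetical sketch: per-teacher tuning of a parametric CPT.
# Mark scale, CPT shape, and the toy data are illustrative, not OpenAnswer's.
import numpy as np

MARKS = np.arange(1, 11)  # assumed 1..10 grading scale

def cpt_row(peer_grade, sigma):
    """P(teacher mark | mean peer grade): a discretized Gaussian around the peer grade."""
    w = np.exp(-0.5 * ((MARKS - peer_grade) / sigma) ** 2)
    return w / w.sum()

def predict(peer_grades, sigma):
    """Expected teacher mark under the parametric CPT."""
    return np.array([cpt_row(g, sigma) @ MARKS for g in peer_grades])

def fit_sigma(peer_grades, teacher_marks, grid=np.linspace(0.3, 3.0, 28)):
    """Pick the CPT parameter minimizing the mean absolute error against one teacher."""
    errors = [np.mean(np.abs(predict(peer_grades, s) - teacher_marks)) for s in grid]
    return grid[int(np.argmin(errors))]

# Per-teacher optimization versus a single global parameter (toy data).
datasets = {
    "teacher_A": (np.array([6.2, 7.8, 4.5]), np.array([6, 8, 5])),
    "teacher_B": (np.array([6.2, 7.8, 4.5]), np.array([7, 8, 4])),
}
per_teacher = {t: fit_sigma(pg, tm) for t, (pg, tm) in datasets.items()}
all_pg = np.concatenate([pg for pg, _ in datasets.values()])
all_tm = np.concatenate([tm for _, tm in datasets.values()])
global_sigma = fit_sigma(all_pg, all_tm)
print(per_teacher, global_sigma)
```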

    SizeNet: Weakly Supervised Learning of Visual Size and Fit in Fashion Images

    Finding clothes that fit is a hot topic in the e-commerce fashion industry. Most approaches addressing this problem are based on statistical methods relying on historical data of articles purchased and returned to the store. Such approaches suffer from the cold start problem for the thousands of articles appearing on shopping platforms every day, for which no prior purchase history is available. We propose to employ visual data to infer size and fit characteristics of fashion articles. We introduce SizeNet, a weakly supervised teacher-student training framework that combines the power of statistical models with the rich visual information in article images to learn visual cues for size and fit characteristics, and is capable of tackling the challenging cold start problem. Detailed experiments are performed on thousands of textile garments, including dresses, trousers, knitwear, and tops, from hundreds of different brands. Comment: IEEE Conference on Computer Vision and Pattern Recognition Workshop (CVPRW) 2019, Focus on Fashion and Subjective Search - Understanding Subjective Attributes of Data (FFSS-USAD)
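    As a rough illustration of the weakly supervised teacher-student idea, the sketch below trains an image "student" on noisy labels produced by a statistical "teacher", weighting each example by the teacher's confidence. The model choice (ResNet-18), the binary size-issue target, and the confidence-weighted loss are assumptions for illustration and are not taken from the SizeNet paper.

```python
# Minimal sketch of a weakly supervised teacher-student setup: a statistical
# "teacher" derived from purchase/return history labels articles, and an
# image-based "student" CNN is trained on those noisy labels, weighted by the
# teacher's confidence. Shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

student = models.resnet18(weights=None)
student.fc = nn.Linear(student.fc.in_features, 1)  # binary size-issue score

def weighted_bce(logits, teacher_labels, teacher_confidence):
    # Down-weight examples the statistical teacher is unsure about.
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits.squeeze(1), teacher_labels, reduction="none")
    return (teacher_confidence * loss).mean()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of article images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,)).float()   # teacher's size-issue labels
confidence = torch.rand(8)                   # teacher's confidence per label

optimizer.zero_grad()
loss = weighted_bce(student(images), labels, confidence)
loss.backward()
optimizer.step()
```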

    Effects of network topology on the OpenAnswer’s Bayesian model of peer assessment

    The paper investigates whether and how the topology of the peer-assessment network can affect the performance of the Bayesian model adopted in OpenAnswer. Performance is evaluated by comparing predicted grades with actual teacher's grades. The global network is built by interconnecting smaller sub-networks, one for each student, where intra-sub-network nodes represent the student's characteristics, while peer-assessment assignments make up the inter-sub-network connections and determine evidence propagation. A subset of answers to be graded by the teacher is determined dynamically by suitable selection and stop rules. The research questions addressed are: RQ1) “does the topology (diameter) of the network negatively influence the precision of predicted grades?” and, in the affirmative case, RQ2) “can we reduce the negative effects of high-diameter networks through an appropriate choice of the subset of students to be corrected by the teacher?” We show that RQ1) OpenAnswer is less effective on higher-diameter topologies, and RQ2) this can be avoided if the subset of corrected students is chosen by taking the network topology into account.
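    The role of topology can be made concrete with a small sketch: building the peer-assessment graph from grading assignments, measuring its diameter, and picking a topology-aware subset of answers for the teacher. The toy graph and the centrality-based heuristic below are illustrative assumptions; they do not reproduce OpenAnswer's actual selection and stop rules.

```python
# Hedged sketch: diameter of a peer-assessment network and a topology-aware
# choice of which answers the teacher grades first.
import networkx as nx

# Directed edge (a, b) means student a graded student b's answer (toy data).
assignments = [("s1", "s2"), ("s2", "s3"), ("s3", "s4"),
               ("s4", "s1"), ("s1", "s3"), ("s2", "s4")]
G = nx.DiGraph(assignments)

# The diameter is computed on the undirected, connected version of the graph.
U = G.to_undirected()
print("diameter:", nx.diameter(U))

# One plausible heuristic: grade the answers of the most central students
# first, so that evidence propagates to the rest of the network in few hops.
centrality = nx.betweenness_centrality(U)
to_correct = sorted(centrality, key=centrality.get, reverse=True)[:2]
print("teacher grades:", to_correct)
```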

    Lifelong Generative Modeling

    Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner. It is essential to the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to unsupervised generative modeling, where we continuously incorporate newly observed distributions into a learned model. We do so through a student-teacher Variational Autoencoder architecture that allows us to learn and preserve all the distributions seen so far, without the need to retain either the past data or the past models. Through the introduction of a novel cross-model regularizer, inspired by a Bayesian update rule, the student model leverages the information learned by the teacher, which acts as a probabilistic knowledge store. The regularizer reduces the effect of catastrophic interference that appears when we learn over sequences of distributions. We validate our model's performance on sequential variants of MNIST, FashionMNIST, PermutedMNIST, SVHN and Celeb-A, and demonstrate that our model mitigates the effects of catastrophic interference faced by neural networks in sequential learning scenarios. Comment: 32 pages
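    A much-simplified sketch of the student-teacher mechanism follows: a frozen teacher VAE replays samples standing in for previously seen distributions, and a cross-model term keeps the student's posterior on those samples close to the teacher's. The tiny architecture, the replay step, and the Gaussian KL consistency term are assumptions for illustration; the paper's regularizer is derived from a Bayesian update rule and is not reproduced here.

```python
# Simplified student-teacher VAE sketch with a cross-model consistency term.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
        self.z_dim = z_dim

    def posterior(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def decode(self, z):
        return torch.sigmoid(self.dec(z))

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions.
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1).sum(-1).mean()

teacher = TinyVAE()            # trained on past distributions, then frozen
student = TinyVAE()
for p in teacher.parameters():
    p.requires_grad_(False)

# Replay: the teacher generates data standing in for everything seen so far.
z = torch.randn(64, teacher.z_dim)
x_replay = teacher.decode(z)

# Cross-model regularizer: the student's posterior should stay close to the
# teacher's on replayed samples, mitigating catastrophic interference.
mu_s, logvar_s = student.posterior(x_replay)
mu_t, logvar_t = teacher.posterior(x_replay)
consistency = gaussian_kl(mu_s, logvar_s, mu_t.detach(), logvar_t.detach())
# The total loss would add the standard ELBO on the newly observed distribution.
print(float(consistency))
```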