    Bayesian Hierarchical Modelling for Tailoring Metric Thresholds

    Software is highly contextual. While there are cross-cutting "global" lessons, individual software projects exhibit many "local" properties. This data heterogeneity makes drawing local conclusions from global data dangerous. A key research challenge is to construct locally accurate prediction models that are informed by global characteristics and data volumes. Previous work has tackled this problem using clustering and transfer learning approaches, which identify locally similar characteristics. This paper applies a simpler approach known as Bayesian hierarchical modeling. We show that hierarchical modeling supports cross-project comparisons while preserving local context. To demonstrate the approach, we conduct a conceptual replication of an existing study on setting software metrics thresholds. Our emerging results show that our hierarchical model reduces model prediction error by up to 50% compared to a global approach. (Short paper, published at MSR '18: 15th International Conference on Mining Software Repositories, May 28--29, 2018, Gothenburg, Sweden.)
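    The core idea of the abstract above, estimating a local (per-project) quantity while borrowing strength from the global data, can be sketched with empirical-Bayes partial pooling, a simple instance of hierarchical modeling. The project names, observations, and the between-project variance `tau2` below are all hypothetical, not taken from the paper:

    ```python
    def partial_pool(project_values, tau2):
        """Shrink each project's local mean toward the global mean.

        tau2: assumed between-project variance; the within-project
        variance is estimated from each project's own observations.
        """
        all_values = [v for vs in project_values.values() for v in vs]
        global_mean = sum(all_values) / len(all_values)
        pooled = {}
        for name, vs in project_values.items():
            n = len(vs)
            local_mean = sum(vs) / n
            sigma2 = sum((v - local_mean) ** 2 for v in vs) / max(n - 1, 1)
            # Shrinkage weight: more data / less noise -> trust the local mean more.
            w = tau2 / (tau2 + sigma2 / n)
            pooled[name] = w * local_mean + (1 - w) * global_mean
        return pooled

    projects = {
        "small_project": [12.0, 30.0],                   # few, noisy samples
        "large_project": [8.0, 9.0, 10.0, 11.0, 12.0],   # many consistent samples
    }
    pooled = partial_pool(projects, tau2=4.0)
    print(pooled)
    ```

    The small, noisy project's estimate is pulled strongly toward the global mean, while the well-sampled project keeps an estimate close to its own local mean; this is the "locally accurate, globally informed" behavior the abstract describes.
    
    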

    A new perspective for the training assessment: Machine learning-based neurometric for augmented user's evaluation

    Inappropriate training assessment can have both high social costs and economic impacts, especially in high-risk categories such as pilots, air traffic controllers, or surgeons. One current limitation of standard training assessment procedures is the lack of information about the amount of cognitive resources the user requires to execute the proposed task correctly. In fact, even if the task is accomplished with maximum performance, standard training assessment methods cannot gather or evaluate information about the cognitive resources available for dealing with unexpected events or emergency conditions. Therefore, a metric based on brain activity (a neurometric) able to provide the instructor with this kind of information would be very valuable. As a first step in this direction, the electroencephalogram (EEG) and the performance of 10 participants were collected over a training period of 3 weeks while they learned to execute a new task. Specific indexes were estimated from the behavioral and EEG signals to objectively assess the users' training progress. Furthermore, we proposed a neurometric based on a machine learning algorithm to quantify the user's training level within each session by considering the level of task execution, and both the behavioral and cognitive stability between consecutive sessions. The results demonstrated that the proposed methodology and neurometric could quantify and track the users' progress, and provide the instructor with information for a more objective evaluation and better tailoring of training programs. © 2017 Borghini, Aricò, Di Flumeri, Sciaraffa, Colosimo, Herrero, Bezerianos, Thakor and Babiloni
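    The neurometric described in the abstract above combines task performance with behavioral and cognitive (EEG-derived) stability across consecutive sessions. A toy sketch of that combination is shown below; the stability measure, the feature vectors, and the weights are all illustrative assumptions, not the paper's actual algorithm:

    ```python
    def stability(prev, curr):
        """Similarity of two sessions' feature vectors (1.0 = identical).

        A crude, scale-normalized inverse of the mean absolute difference;
        stand-in for whatever inter-session measure the study actually used.
        """
        diffs = [abs(a - b) for a, b in zip(prev, curr)]
        scale = max(max(abs(x) for x in prev + curr), 1e-9)
        return 1.0 - (sum(diffs) / len(diffs)) / scale

    def training_level(performance, prev_behav, curr_behav, prev_eeg, curr_eeg,
                       w_perf=0.5, w_behav=0.25, w_eeg=0.25):
        """Weighted score: task performance plus behavioral and EEG stability."""
        return (w_perf * performance
                + w_behav * stability(prev_behav, curr_behav)
                + w_eeg * stability(prev_eeg, curr_eeg))

    # Early training: moderate performance, unstable behavior and EEG features.
    early = training_level(0.60, [2.0, 5.0], [4.0, 9.0], [0.30, 0.80], [0.60, 0.40])
    # Late training: high performance, both signals stable across sessions.
    late = training_level(0.95, [4.0, 9.0], [4.1, 8.9], [0.55, 0.50], [0.56, 0.50])
    print(early, late)
    ```

    The point of the sketch is the shape of the metric: a trainee who merely performs well but whose behavior and brain activity still fluctuate between sessions scores lower than one whose performance and signals have both stabilized.
    
    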