Effects of network topology on the OpenAnswer’s Bayesian model of peer assessment
The paper investigates whether and how the topology of the peer-assessment network affects the performance of the Bayesian model adopted in OpenAnswer. Performance is evaluated by comparing predicted grades with the actual teacher's grades. The global network is built by interconnecting smaller subnetworks, one for each student, where intra-subnetwork nodes represent the student's characteristics, while peer-assessment assignments make up the inter-subnetwork connections and determine evidence propagation. A possible subset of teacher-graded answers is dynamically determined by suitable selection and stop rules. The research questions addressed are: RQ1) "does the topology (diameter) of the network negatively influence the precision of predicted grades?" and, in the affirmative case, RQ2) "can we reduce the negative effects of high-diameter networks through an appropriate choice of the subset of students to be corrected by the teacher?" We show that, regarding RQ1, OpenAnswer is less effective on higher-diameter topologies, and that, regarding RQ2, this effect can be avoided if the subset of corrected students is chosen by taking the network topology into account.
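The topology-aware choice of teacher-graded students suggested above can be sketched in a few lines: compute hop distances over the assessment graph and prefer students with low eccentricity, so that teacher evidence is never far from any node. This is an illustrative reading of the idea, not OpenAnswer's actual selection and stop rules; all function names are hypothetical.

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src over an undirected peer-assessment graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    """Longest shortest path between any two students in the network."""
    return max(max(bfs_distances(adj, s).values()) for s in adj)

def central_subset(adj, k):
    """Pick the k students with smallest eccentricity, so that teacher
    grades (evidence) propagate to every other node in few hops."""
    ecc = {s: max(bfs_distances(adj, s).values()) for s in adj}
    return sorted(adj, key=lambda s: ecc[s])[:k]
```

On a path-shaped (high-diameter) network of five students, for instance, the single best student to hand to the teacher under this criterion is the middle one, which is consistent with the paper's finding that topology-aware selection mitigates the effect of large diameters.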
UAV surveying for a complete mapping and documentation of archaeological findings. The Early Neolithic site of Portonovo
The potential of 3D digital acquisition techniques for documenting archaeological sites, as well as the related findings, is by now well established. Despite the variety of available techniques, a single documentation pipeline cannot be defined a priori because of the diversity of archaeological settings. Stratigraphic archaeological excavations, for example, require a systematic, quick and low-cost 3D single-surface documentation, because the nature of stratigraphic archaeology compels providing documentary evidence of every excavation phase. Since excavation is a destructive process, each single stratigraphic unit can be identified, documented and interpreted only once, and its documentation is the only means of re-examining the field work later. In this context, this paper describes the methodology developed over recent years to document the Early Neolithic site of Portonovo (Ancona, Italy) in 3D and, in particular, its latest step, consisting of a photogrammetric aerial survey by means of a UAV platform. It completes the previous research carried out at the same site by means of terrestrial laser scanning and close-range techniques, and sets out different options for further reflection in terms of site coverage, resolution and campaign cost. With the support of a topographic network and a unique reference system, the full documentation of the site is managed so as to detail each excavation phase; moreover, the final output proves that the 3D digital methodology can be fully integrated into the excavation at reasonable cost and used to interpret the archaeological context. A further contribution of this work is the comparison between several acquisition techniques (i.e., terrestrial and aerial), which could serve as a decision-support system for different archaeological scenarios. The main objectives of the comparison are: i) the evaluation of 3D mapping accuracy from different data sources, ii) the definition of a standard pipeline for different archaeological needs, and iii) the provision of different levels of detail according to user needs.
Classification of Alzheimer's Disease with Deep Learning on Eye-tracking Data
Existing research has shown the potential of classifying Alzheimer's Disease
(AD) from eye-tracking (ET) data with classifiers that rely on task-specific
engineered features. In this paper, we investigate whether we can improve on
existing results by using a Deep-Learning classifier trained end-to-end on raw
ET data. This classifier (VTNet) uses a GRU and a CNN in parallel to leverage
both visual (V) and temporal (T) representations of ET data and was previously
used to detect user confusion while processing visual displays. A main
challenge in applying VTNet to our target AD classification task is that the
available ET data sequences are much longer than those used in the previous
confusion detection task, pushing the limits of what is manageable by
LSTM-based models. We discuss how we address this challenge and show that VTNet
outperforms the state-of-the-art approaches in AD classification, providing
encouraging evidence on the generality of this model to make predictions from
ET data. Comment: ICMI 2023 long paper
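One common way to make such long raw sequences manageable for RNN-based models is to subsample them to a fixed length before training. The sketch below is a hypothetical illustration of that idea, not the paper's actual preprocessing:

```python
def subsample(seq, target_len):
    """Uniformly subsample a long eye-tracking sequence to a fixed
    length so that an RNN-based model (GRU/LSTM) can process it.
    Sequences already short enough are returned unchanged."""
    if len(seq) <= target_len:
        return list(seq)
    step = len(seq) / target_len  # fractional stride over the sequence
    return [seq[int(i * step)] for i in range(target_len)]
```

Uniform subsampling trades temporal resolution for tractable sequence length; alternatives such as windowing or truncation make different trade-offs, and the paper discusses how it addresses this challenge for VTNet.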
The experimental reconstruction of an Early Neolithic underground oven of Portonovo (Italy)
This contribution presents the experimental reconstruction of an underground oven replicated according to the archaeological evidence unearthed at the Early Neolithic site of Portonovo-Fosso Fontanaccia (Ancona, Italy). A domed structure, measuring 190 × 180 cm at the base and 50 cm in height, was dug in 15 hours in a sediment compatible with the geological formation that characterizes the archaeological site. The experimental protocol presented in this article aims to reconstruct the techniques, timing and tools needed to dig the peculiar underground structures of Portonovo used by Neolithic groups, and to shed light on key aspects of the entire technical process, such as the energy investment for the community, seasonality and lifespan.
AI in Education needs interpretable machine learning: Lessons from Open Learner Modelling
Interpretability of the underlying AI representations is a key raison d'être for Open Learner Modelling (OLM) -- a branch of Intelligent Tutoring Systems (ITS) research. OLMs provide tools for 'opening' up the AI models of learners' cognition and emotions for the purpose of supporting human learning and teaching. Over thirty years of research in ITS (also known as AI in Education) has produced important work, which informs how AI can be used in Education to best effect and, through the OLM research, what considerations are necessary to make it interpretable and explainable for the benefit of learning. We argue that this work can provide a valuable starting point for a framework of interpretable AI, and as such is of relevance to the application of both knowledge-based and machine learning systems in other high-stakes contexts beyond education. Comment: presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden
Cascading Convolutional Temporal Colour Constancy
Computational Colour Constancy (CCC) consists of estimating the colour of one
or more illuminants in a scene and using them to remove unwanted chromatic
distortions. Much research has focused on illuminant estimation for CCC on
single images, with few attempts of leveraging the temporal information
intrinsic in sequences of correlated images (e.g., the frames in a video), a
task known as Temporal Colour Constancy (TCC). The state-of-the-art for TCC is
TCCNet, a deep-learning architecture that uses a ConvLSTM for aggregating the
encodings produced by CNN submodules for each image in a sequence. We extend
this architecture with different models obtained by (i) substituting the TCCNet
submodules with C4, the state-of-the-art method for CCC targeting images; (ii)
adding a cascading strategy to perform an iterative improvement of the estimate
of the illuminant. We tested our models on the recently released TCC benchmark
and achieved results that surpass the state-of-the-art. Analyzing the impact of
the number of frames involved in illuminant estimation on performance, we show
that it is possible to reduce inference time by training the models on a few
selected frames from the sequences while retaining comparable accuracy.
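The cascading strategy, i.e. iteratively refining the illuminant estimate on a progressively corrected image, can be illustrated with a simple gray-world estimator standing in for the learned submodules. This is an assumption made for the sake of a self-contained sketch; TCCNet and C4 use learned CNN estimators, not gray-world.

```python
def gray_world(pixels):
    """Gray-world illuminant estimate: the per-channel mean of the image."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def correct(pixels, illum):
    """Divide out the estimated illuminant from every pixel."""
    return [tuple(p[c] / illum[c] for c in range(3)) for p in pixels]

def cascaded_estimate(pixels, steps=3):
    """Cascading: re-estimate on the corrected image at each stage and
    compose the per-stage estimates into a single illuminant."""
    total = [1.0, 1.0, 1.0]
    for _ in range(steps):
        stage = gray_world(pixels)
        total = [t * s for t, s in zip(total, stage)]
        pixels = correct(pixels, stage)
    return tuple(total)
```

On a scene that is achromatic up to a global illuminant, the composed estimate recovers that illuminant up to scale after the first stage, and later stages leave it unchanged; with an imperfect learned estimator, each stage instead contributes a residual correction, which is the intuition behind the iterative improvement described above.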