57 research outputs found
MOON: Assisting Students in Completing Educational Notebook Scenarios
Jupyter notebooks are increasingly being adopted by teachers to deliver
interactive practical sessions to their students. Notebooks come with many
attractive features, such as the ability to combine textual explanations,
multimedia content, and executable code alongside a flexible execution model
which encourages experimentation and exploration. However, this execution model
can quickly become an issue when students do not follow the execution order
intended by the teacher, leading to errors or misleading results that hinder their
learning. To counter this adverse effect, teachers usually write detailed
instructions about how students are expected to use the notebooks. Yet, the use
of digital media is known to decrease reading efficiency and compliance with
written instructions, resulting in frequent notebook misuse and students
getting lost during practical sessions. In this article, we present a novel
approach, MOON, designed to remedy this problem. The central idea is to provide
teachers with a language that enables them to formalize the expected usage of
their notebooks in the form of a script and to interpret this script to guide
students with visual indications in real time while they interact with the
notebooks. We evaluate our approach using a randomized controlled experiment
involving 21 students, which shows that MOON helps students comply better with
the intended scenario without hindering their ability to progress. Our
follow-up user study shows that about 75% of the surveyed students perceived
MOON as rather useful or very useful.
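The core idea of interpreting a scenario script to flag out-of-order cell execution can be illustrated with a minimal sketch. The scenario is modeled here as a plain ordered list of cell identifiers and the checker reports which expected cells were skipped; this is an illustrative simplification, not MOON's actual scripting language or interpreter.

```python
# Minimal sketch of scenario-order checking. A scenario is assumed to be an
# ordered list of cell ids; MOON's real language is richer than this.
def make_checker(expected_order):
    executed = []

    def check(cell_id):
        # Record the execution, then verify that every cell the scenario
        # places before this one has already been run.
        executed.append(cell_id)
        pos = expected_order.index(cell_id)
        missing = [c for c in expected_order[:pos] if c not in executed]
        return (True, []) if not missing else (False, missing)

    return check

check = make_checker(["setup", "load_data", "train", "evaluate"])
print(check("setup"))   # (True, []) - first scenario cell, nothing skipped
print(check("train"))   # (False, ['load_data']) - a prerequisite was skipped
```

A real guide would use the `missing` list to render visual indications next to the skipped cells rather than returning it to the caller.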
Exploration of Explainable AI in Context of Human-Machine Interface for the Assistive Driving System
Interplay between upsampling and regularization for provider fairness in recommender systems
Considering the impact of recommendations on item providers is one of the duties of multi-sided recommender systems. Item providers are key stakeholders in online platforms, and their earnings and plans are influenced by the exposure their items receive in recommended lists. Prior work showed that certain minority groups of providers, characterized by a common sensitive attribute (e.g., gender or race), are being disproportionately affected by indirect and unintentional discrimination. Our study in this paper handles a situation where (i) the same provider is associated with multiple items of a list suggested to a user, (ii) an item is created by more than one provider jointly, and (iii) predicted user–item relevance scores are estimated with bias for items from certain provider groups. Under this scenario, we assess disparities in relevance, visibility, and exposure, by simulating diverse representations of the minority group in the catalog and the interactions. Based on the unfair outcomes that emerge, we devise a treatment that combines observation upsampling and loss regularization, while learning user–item relevance scores. Experiments on real-world data demonstrate that our treatment leads to lower disparate relevance. The resulting recommended lists show fairer visibility and exposure, higher minority item coverage, and negligible loss in recommendation utility.
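Two of the disparity measures mentioned above, visibility and exposure, can be sketched on toy data. The group labels, recommended lists, and the logarithmic position discount below are all illustrative assumptions; the paper may define these quantities differently.

```python
import numpy as np

# Hypothetical toy setup: 4 users, each receiving a top-3 recommended list.
# item_group[i] = 1 marks items from the minority provider group.
item_group = np.array([0, 0, 1, 0, 1, 0])   # 6 items, 2 from the minority group
rec_lists = np.array([[0, 2, 1],            # recommended item ids per user,
                      [3, 0, 4],            # column 0 = top-ranked slot
                      [2, 5, 0],
                      [1, 3, 2]])

# Visibility: share of recommendation slots occupied by minority items.
in_minority = item_group[rec_lists]         # 1 where a slot holds a minority item
visibility = in_minority.mean()

# Exposure: position-discounted attention, using an assumed logarithmic
# discount (slot k receives weight 1/log2(k+2)), averaged over users.
discount = 1.0 / np.log2(np.arange(rec_lists.shape[1]) + 2)
exposure = (in_minority * discount).sum() / discount.sum() / rec_lists.shape[0]

print(float(visibility), float(exposure))
```

Comparing each measure against the minority group's share of the catalog (here 2/6) is one way to quantify the disparities the treatment aims to reduce.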