10,401 research outputs found

    Supporting decision making process with "Ideal" software agents: what do business executives want?

    According to Simon’s (1977) decision-making theory, intelligence is the first and most important phase of the decision-making process. With the escalation of information resources available to business executives, it is becoming imperative to explore the potential and challenges of using agent-based systems to support the intelligence phase of decision making. This research examines UK executives’ perceptions of using agent-based support systems and the criteria for the design and development of their “ideal” intelligent software agents. The study adopted an inductive approach, using focus groups to generate a preliminary set of design criteria for “ideal” agents, and then followed a deductive approach, using semi-structured interviews to validate and refine the criteria. This qualitative research has generated unique insights into executives’ perceptions of the design and use of agent-based support systems. Systematic content analysis of the qualitative data led to the proposal and validation of design criteria at three levels. The findings reveal the most desirable criteria for agent-based support systems from the end users’ point of view. The design criteria can be used not only to guide the design of intelligent agent systems but also their evaluation.

    Towards highly informative learning analytics

    Among the various trending topics that can be investigated in the field of educational technology, there is a clear and strong demand for using artificial intelligence (AI) and educational data to improve the whole learning and teaching cycle. This spans from collecting and estimating learners’ prior knowledge of a subject to the actual learning process and its assessment. AI in education cuts across almost all educational technology disciplines and is key to many other technological innovations for educational institutions. The use of data to inform decision-making in education and training is not new, but the scope and scale of its potential impact on teaching and learning have quietly increased by orders of magnitude over the last few years. The release of ChatGPT was a further driver that finally made everyone aware of the potential effects of AI technology on today’s digital education systems. We are now at a stage where data can be automatically harvested at previously unimagined levels of granularity and variety. Analysing these data with AI has the potential to provide evidence-based insights into learners’ abilities and patterns of behaviour that, in turn, can guide curriculum and course design, personalised assistance, assessment generation, and the development of new educational offerings. AI in education has many connected research communities, such as Artificial Intelligence in Education (AIED), Educational Data Mining (EDM), and Learning Analytics (LA). LA is the term used for research, studies, and applications that try to understand and support the behaviour of learners based on large sets of collected data.

    One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

    The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose cases are being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promise of Interactive Machine Learning for improved transparency of black-box systems, using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and how to extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e. the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats... Comment: Published in the Künstliche Intelligenz journal, special issue on Challenges in Interactive Machine Learning.
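
    To make the idea of follow-up "What if?" questions concrete, here is a minimal Python sketch, not the authors' system: it trains a toy classifier on invented loan-approval data and re-predicts a copy of an instance with user-chosen feature values substituted. The feature names, the data, and the what_if helper are illustrative assumptions only.

```python
# Minimal sketch of an interactive "What if?" explanation loop.
# NOT the paper's system; the toy data, feature names, and the
# what_if() helper are illustrative assumptions only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["income", "debt", "years_employed"]  # hypothetical features

# Tiny synthetic "loan approval" data set (1 = approved, 0 = rejected).
X = np.array([
    [55, 10, 3], [20, 15, 4], [70, 5, 2], [30, 25, 6],
    [65, 8, 1],  [25, 30, 8], [80, 4, 5], [35, 20, 2],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def what_if(instance, **overrides):
    """Answer a follow-up 'What if?' question by re-predicting a copy of
    the instance with the user-chosen feature values substituted."""
    modified = instance.copy()
    for name, value in overrides.items():
        modified[FEATURES.index(name)] = value
    before = model.predict([instance])[0]
    after = model.predict([modified])[0]
    return before, after, modified

# An applicant who is currently rejected asks:
# "What if my income were 60 and my debt were 8?"
applicant = np.array([28, 22, 2], dtype=float)
before, after, counterfactual = what_if(applicant, income=60, debt=8)
print(f"original prediction: {before}, counterfactual prediction: {after}")
print(f"counterfactual instance: {dict(zip(FEATURES, counterfactual))}")
```

    In an interactive explainer, such a loop would be driven by the explainee's own follow-up questions rather than hard-coded overrides.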

    An empirical investigation to examine the usability issues of using adaptive, adaptable, and mixed-initiative approaches in interactive systems

    The combination of a graphical user interface (GUI) and usability evaluation is key to mastering any piece of software and ensuring high-quality work. The demand for online learning is becoming increasingly important, both individually and academically. This thesis introduces and describes an empirical study that investigates and compares how vocabulary can be learned using different interactive approaches; specifically, a static learning website (with straightforward words and meanings), an adaptable learning website (allowing the user to choose a learning method), an adaptive learning website (a system-chosen way of learning), and a mixed-initiative website (mixing approaches and techniques). The purpose of this study is to explore and determine the effects of these approaches on vocabulary-learning achievement in order to enhance vocabulary learning for non-English speakers. The participants were Arabic speakers. The vocabulary-learning activities were categorised into three levels: easy, medium, and hard. The independent variables (IVs), which were controlled during the experiment to ensure consistency, were tasks, learning effects, and time. The dependent variables (DVs) were vocabulary-learning achievements and scores. Two aims were explored in relation to the effects of these approaches on achievement: the first concerned learning vocabulary for non-English speakers tackling the difficulties of the English language, and the second concerned the usability of the system for learning English vocabulary in terms of usability measures (efficiency, frequency of error occurrence, effectiveness, and satisfaction). For this purpose, a vocabulary-learning language website was designed, implemented, and tested empirically. Efficiency and effectiveness were measured with a within-subject design (n = 24 recruited subjects), while satisfaction was investigated with a between-subject design (n = 99 recruited subjects) using a System Usability Scale (SUS) survey. The results and data analysis are described; overall, the results were satisfactory.
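
    As an aside on the usability measures mentioned above, the standard SUS questionnaire is scored on a 0-100 scale: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. The short Python sketch below illustrates that arithmetic; it is not the thesis's analysis code, and the participant responses are invented.

```python
# Minimal sketch of standard SUS (System Usability Scale) scoring.
# Not the thesis's analysis code; the example responses are invented.

def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.
    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response); the sum is scaled by 2.5."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(responses))
    return total * 2.5

# One hypothetical participant's answers to the ten SUS items.
participant = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]
print(f"SUS score: {sus_score(participant)}")  # 85.0 for these answers
```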