Teaching informatics to novices: big ideas and the necessity of optimal guidance
This thesis reports on the two main areas of our research: introductory programming as the traditional way of accessing informatics, and the cultural teaching of informatics through unconventional pathways.
The research on introductory programming aims to overcome challenges in traditional programming education, thus increasing participation in informatics. Improving access to informatics enables individuals to pursue more and better professional opportunities and to contribute to informatics advancements. We aimed to balance active, student-centered activities with optimal support for novices at their level. Inspired by Productive Failure and by exploring the concept of the notional machine, our work focused on developing Necessity Learning Design, a design to help novices tackle new programming concepts. Using this design, we implemented a learning sequence to introduce arrays and evaluated it in a real high-school context. The subsequent chapters discuss our experiences teaching CS1 in a remote-only scenario during the COVID-19 pandemic and our collaborative effort with primary school teachers to develop a learning module for teaching iteration using a visual programming environment.
The research on teaching informatics principles through unconventional pathways, such as cryptography, aims to introduce informatics to a broader audience, particularly younger individuals who are less technically and professionally oriented. It emphasizes the importance of understanding the cultural and scientific aspects of informatics, focusing on its societal value and its principles for active citizenship. After reflecting on computational thinking, and inspired by the big ideas of science and informatics, we describe our hands-on approach to teaching cryptography in high school, which leverages its key scientific elements to emphasize its social aspects. Additionally, we present an activity for teaching public-key cryptography using graphs to explore fundamental concepts and methods in informatics and mathematics and their interdisciplinarity. In broadening the understanding of informatics, these research initiatives also aim to foster motivation and prime students for more professional learning of informatics.
Computational Intelligence and Human-Computer Interaction: Modern Methods and Applications
The present book contains all of the articles that were accepted and published in the Special Issue of MDPI's journal Mathematics titled "Computational Intelligence and Human-Computer Interaction: Modern Methods and Applications". This Special Issue covered a wide range of topics connected to the theory and application of different computational intelligence techniques in the domain of human-computer interaction, such as automatic speech recognition, speech processing and analysis, virtual reality, emotion-aware applications, digital storytelling, natural language processing, smart cars and devices, and online learning. We hope that this book will be interesting and useful for those working in various areas of artificial intelligence, human-computer interaction, and software engineering, as well as for those who are interested in how these domains are connected in real-life situations.
Women in Artificial intelligence (AI)
This Special Issue, entitled "Women in Artificial Intelligence", includes 17 papers from leading women scientists. The papers cover a broad scope of research areas within Artificial Intelligence, including machine learning, perception, reasoning, and planning, among others. The papers have applications in relevant fields such as human health, finance, and education. It is worth noting that the Issue includes three papers that deal with different aspects of gender bias in Artificial Intelligence. All the papers have a woman as the first author. We can proudly say that these women are from countries worldwide, such as France, the Czech Republic, the United Kingdom, Australia, Bangladesh, Yemen, Romania, India, Cuba, and Spain. In conclusion, apart from its intrinsic scientific value as a Special Issue combining interesting research works, this Special Issue intends to increase the visibility of women in AI, showing where they are, what they do, and how they contribute to developments in Artificial Intelligence from their different places, positions, research branches, and application fields. We planned to issue this book on Ada Lovelace Day (11 October 2022), a date internationally dedicated to the first computer programmer, a woman who had to fight the gender difficulties of her times in the 19th century. We also thank the publisher for making this possible, thus allowing this book to become a part of the international activities dedicated to celebrating the value of women in ICT all over the world. With this book, we want to pay homage to all the women who have contributed over the years to the field of AI.
Evidence-driven testing and debugging of software systems
Program debugging is the process of testing, exposing, reproducing, diagnosing, and fixing software bugs. Many techniques have been proposed to aid developers during software testing and debugging. However, researchers have found that developers hardly use or adopt the proposed techniques in software practice. Evidently, this is because there is a gap between proposed methods and the state of software practice. Most methods fail to address the actual needs of software developers. In this dissertation, we pose the following scientific question: How can we bridge the gap between software practice and the state-of-the-art automated testing and debugging techniques? To address this challenge, we put forward the following thesis: Software testing and debugging should be driven by empirical evidence collected from software practice. In particular, we posit that the feedback from software practice should shape and guide (the automation of) testing and debugging activities. In this thesis, we focus on gathering evidence from software practice by conducting several empirical studies on software testing and debugging activities in the real world. We then build tools and methods that are well grounded in and driven by the empirical evidence obtained from these experiments. Firstly, we conduct an empirical study on the state of debugging in practice using a survey and a human study. In this study, we ask developers about their debugging needs and observe the tools and strategies employed by developers while testing, diagnosing, and repairing real bugs. Secondly, we evaluate the effectiveness of state-of-the-art automated fault localization (AFL) methods on real bugs and programs. Thirdly, we conduct an experiment to investigate the causes of invalid inputs in software practice. Lastly, we study how to learn input distributions from real-world sample inputs, using probabilistic grammars.
To bridge the gap between software practice and the state of the art in software testing and debugging, we proffer the following empirical results and techniques: (1) We collect evidence on the state of practice in program debugging and indeed find that there is a chasm between (available) debugging tools and developer needs. We elicit the actual needs and concerns of developers when testing and diagnosing real faults and provide a benchmark (called DBGBench) to aid the automated evaluation of debugging and repair tools. (2) We provide empirical evidence on the effectiveness of several state-of-the-art AFL techniques (such as statistical debugging formulas and dynamic slicing). Building on the obtained empirical evidence, we provide a hybrid approach that outperforms the state-of-the-art AFL techniques. (3) We evaluate the prevalence and causes of invalid inputs in software practice, and we build on the lessons learned from this experiment to develop a general-purpose algorithm (called ddmax) that automatically diagnoses and repairs real-world invalid inputs. (4) We provide a method to learn the distribution of input elements in software practice using probabilistic grammars, and we further employ the learned distribution to drive the generation of test inputs that are similar (or dissimilar) to sample inputs found in the wild. In summary, we propose an evidence-driven approach to software testing and debugging, based on collecting empirical evidence from software practice to guide and direct testing and debugging. In our evaluation, we found that our approach effectively improves several debugging activities in practice. In particular, using our evidence-driven approach, we elicit the actual debugging needs of developers, improve the effectiveness of several automated fault localization techniques, effectively debug and repair invalid inputs, and generate test inputs that are (dis)similar to real-world inputs.
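To give a concrete flavor of the statistical debugging formulas evaluated in this line of work, the sketch below computes the well-known Ochiai suspiciousness score from test coverage data. The function and data-structure names are our own illustrative assumptions, not the dissertation's implementation.

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Rank program elements by Ochiai suspiciousness.

    failed_cov[e] / passed_cov[e]: number of failing / passing tests
    that execute element e; total_failed: total number of failing tests.
    """
    scores = {}
    for e in set(failed_cov) | set(passed_cov):
        ef = failed_cov.get(e, 0)  # failing tests covering e
        ep = passed_cov.get(e, 0)  # passing tests covering e
        denom = math.sqrt(total_failed * (ef + ep))
        scores[e] = ef / denom if denom else 0.0
    return scores

# Toy coverage data: line_3 is executed only by failing tests,
# so it should rank as the most suspicious element.
scores = ochiai({"line_3": 2, "line_5": 1}, {"line_5": 4, "line_7": 5}, 2)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # -> line_3
```

A developer would inspect elements in decreasing score order; the dissertation's hybrid approach builds on evaluating how well such rankings match where real faults actually lie.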
Our proposed methods are built on empirical evidence, and they improve over the state-of-the-art techniques in testing and debugging.
On Designing Programming Error Messages for Novices: Readability and its Constituent Factors
The 2021 ACM CHI Virtual Conference on Human Factors in Computing Systems (CHI '21), Virtual Conference, 8-13 May 2021.
Programming error messages play an important role in learning to program. The cycle of program input and error message response completes a loop between the programmer and the compiler/interpreter and is a fundamental interaction between human and computer. However, error messages are notoriously problematic, especially for novices. Despite numerous guidelines citing the importance of message readability, there is little empirical research dedicated to understanding and assessing it. We report three related experiments investigating factors that influence programming error message readability. In the first two experiments we identify possible factors, and in the third we ask novice programmers to rate messages using scales derived from these factors. We find evidence that several key factors significantly affect message readability: message length, jargon use, sentence structure, and vocabulary. This provides novel empirical support for previously untested long-standing guidelines on message design, and informs future efforts to create readability metrics for programming error messages.
What Type of Debrief is Best for Learning during Think-Pair-Shares?
Copious research demonstrates the benefits of adding active learning to traditional lectures to enhance learning and reduce failure/withdrawal rates. However, many questions remain about how best to implement active learning to maximize student outcomes. This paper investigates several "second generation" questions regarding infusing active learning, via Think-Pair-Share (TPS), into a large lecture course in Computer Science. During the "Share" phase of TPS, what is the best way to debrief the associated course concepts with the entire class? Specifically, does student learning differ when instructors debrief the rationale for every answer choice (full debrief) versus only the correct answer (partial debrief)? And does the added value for student outcomes vary between tasks requiring recall versus deeper comprehension and/or application of concepts? Regardless of discipline, these questions are relevant to instructors implementing TPS with multiple-choice questions, especially in large lectures. Similar to prior research, when lectures included TPS, students performed significantly better (~13%) on corresponding exam items. However, students' exam performance depended on both the type of debrief and the type of exam question. Students performed significantly better (~5%) in the full debrief condition than in the partial debrief condition. Additionally, the benefits of the full debrief condition were significantly stronger (~5%) for exam questions requiring deeper comprehension and/or application of underlying Computer Science processes, compared to simple recall. We discuss these results and lessons learned, providing recommendations for how best to implement TPS in large lecture courses in STEM and other disciplines.
Computational Thinking, Between Papert and Wing
The pervasiveness of Computer Science (CS) in today's digital society and the extensive use of computational methods in other sciences call for its introduction in the school curriculum. Hence, Computer Science Education is becoming more and more relevant. In CS K-12 education, computational thinking (CT) is one of the abused buzzwords: different stakeholders (media, educators, politicians) give it different meanings, some more oriented to CS, others more linked to its interdisciplinary value. The expression was introduced by two leading researchers, Jeannette Wing (in 2006) and Seymour Papert (much earlier, in 1980), each of them stressing different aspects of a common theme. This paper will use a historical approach to review, discuss, and put in context these first two educational and epistemological approaches to CT. We will relate them to today's context and evaluate what aspects are still relevant for CS K-12 education. Of the two, particular interest is devoted to "Papert's CT", which is the lesser known and the lesser studied. We will conclude that "Wing's CT" and "Papert's CT", when correctly understood, are both relevant to today's computer science education. From Wing, we should retain computer science's centrality, CT being the (scientific and cultural) substratum of the technical competencies. Under this interpretation, CT is a lens and a set of categories for understanding the algorithmic fabric of today's world. From Papert, we should retain the constructionist idea that only a social and affective involvement of students in the technical content will make programming an interdisciplinary tool for learning (also) other disciplines. We will also discuss the often quoted (and often unverified) claim that CT automatically "transfers" to other broad 21st-century skills. Our analysis will be relevant for educators and scholars to recognize and avoid misconceptions and to build on the two core roots of CT.
Computing Creativity: Divergence in Computational Thinking
Conventionally, creativity is often conceived as an aptitude to be discovered in an individual, one that cannot be mathematically measured. However, the concept of creative thinking as a divergence from a standard "norm" is used in creativity research to assess creativity, and it is also linked to nontraditional or creative processes that lead to unique and divergent artifacts.
Architecture of Engagement: Autonomy-Supportive Leadership for Instructional Improvement
This multiple-paper dissertation addresses the importance of improving student success in online higher education programs by providing support for instructors. The autonomy-supportive structures to improve instructional practice are explained through three main domains: instructional development, instructional design, and instructional practice. The first paper addresses instructional leadership, with the theoretical foundations and practical considerations necessary for instructional leaders. Recommendations are made to use microcredentials or digital badges to scaffold programming using self-determination theory. The second paper addresses the importance of instructional design in improving instructional practice, including the intentionality involved in implementing a gamification strategy to improve online student motivation. The third paper addresses instructional practice with a mixed-methods sequential explanatory case study. Using the community of inquiry framework, this paper explains intentional course design, course facilitation, and student perceptions of the digital powerups strategy. The conclusion considers implications for practice and the need for instructional leaders to scaffold an architecture of engagement to support instructors and improve student success.
Research-Informed Teaching in a Global Pandemic: "Opening up" Schools to Research
The teacher-research agenda has become a significant consideration for policy and professional development in a number of countries. Encouraging research-based teacher education programmes remains an important goal, where teachers are able to effectively utilize educational research as part of their work in school settings and to reflect on and enhance their professional development. In the last decade, teacher research has grown in importance across the three i's of the teacher learning continuum: initial, induction, and in-service teacher education. This has been brought into even starker relief with the global spread of COVID-19 and the enforced, emergency, wholesale move to digital education. Now, perhaps more than ever, teachers need the perspective and support of research-led practice, particularly in how to effectively use Internet technologies to mediate and enhance learning, teaching, and assessment online, and the new blended modalities for education that must be physically distant. The aim of this paper is to present a number of professional development open educational systems which exist or are currently being developed to support teachers internationally to engage with, use, and do research. The opening up of research to schools and teachers is exemplified in the chapter through reference to the European Union-funded Erasmus+ project BRIST: Building Research Infrastructures for School Teachers. BRIST is developing technology to coordinate and support teacher research at a European level.