UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis.
UMSL Bulletin 2022-2023
The 2022-2023 Bulletin and Course Catalog for the University of Missouri St. Louis.
Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students
In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight in many countries, and it is creating a new generation of technologies.
To unlock the potential of the Second Quantum Revolution, several countries have launched strategic plans and research programs that fund and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act).
The increasing pace of technological change also challenges science education and institutional systems, requiring them to help prepare new generations of experts.
This work is situated within physics education research and contributes to this challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to bring out the cultural and educational value of the Second Quantum Revolution.
The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of principles and implemented in the design of a course for secondary school students and for prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback for refining and improving the instructional materials.
The second part explores the Second Quantum Revolution as a context for introducing some basic concepts of quantum physics. We present the results of an implementation with secondary school students to investigate whether, and to what extent, external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.
Computational creativity: an interdisciplinary approach to sequential learning and creative generations
Creativity seems mysterious; when we experience a creative spark, it is difficult to explain how we got that idea, and we often invoke notions like "inspiration" and "intuition" when we try to explain the phenomenon. The fact that we are clueless about how a creative idea manifests itself does not necessarily imply that a scientific explanation cannot exist. We are unaware of how we perform certain tasks, such as biking or language understanding, yet we have more and more computational techniques that can replicate, and hopefully explain, such activities.
We should understand that every creative act is a fruit of experience, society, and culture. Nothing comes from nothing. Novel ideas are never utterly new; they stem from representations already in the mind. Creativity involves establishing new relations between pieces of information we already have: thus, the greater the knowledge, the greater the possibility of finding uncommon connections, and the greater the potential to be creative.
In this vein, a beneficial approach to a better understanding of creativity must include computational or mechanistic accounts of such inner procedures and the formation of the knowledge that enables such connections. That is the aim of Computational Creativity: to develop computational systems for emulating and studying creativity.
Hence, this dissertation focuses on these two related research areas: discussing computational mechanisms to generate creative artifacts and describing some implicit cognitive processes that can form the basis for creative thoughts.
What Makes Digital Technology? A Categorization Based on Purpose
Digital technology (DT) is creating and shaping today's world. Building on its identity and history of technology research, the Information Systems discipline is at the forefront of understanding the nature of DT and related phenomena. Understanding the nature of DT requires understanding its purposes. Because of the growing number of DTs, these purposes are diversifying, and further examination is needed. To that end, we followed an organizational systematics paradigm and present a taxonomic theory for DT that enables its classification through its diverse purposes. The taxonomic theory comprises a multi-layer taxonomy of DT and purpose-related archetypes, which we inferred from a sample of 92 real-world DTs. In our empirical evaluation, we assessed the reliability, validity, and usefulness of the taxonomy and archetypes. The taxonomic theory exceeds existing technology classifications by being the first that (1) has been rigorously developed, (2) considers the nature of DT, (3) is sufficiently concrete to reflect the diverse purposes of DT, and (4) is sufficiently abstract to be persistent. Our findings add to the descriptive knowledge on DT, advance our understanding of the diverse purposes of DT, and lay the ground for further theorizing. Our work also supports practitioners in managing and designing DTs.
UK food sustainability and global food supply chains: a sustainability impact study of Ghana's fresh vegetable exports to the UK
The purpose of this research is to explore opportunities for reducing the sustainability implications associated with the UK's global food supply chains by analysing Ghana's fresh vegetable exports. Existing literature assesses sustainability implications with a focus on the traditional sustainability dimensions, namely the environmental, social, and economic dimensions. Further, studies assessing UK food sustainability have yet to consider sustainability concerns generated by global food sources. To facilitate a holistic evaluation of the UK's global food supply chains and to propagate its vision of global leadership in food sustainability, there is a need to consider all other relevant sustainability dimensions and the impacts associated with the activities and operations of global food suppliers. Case study data involving interviews and focus groups, together with survey data, are obtained from producers of Ghanaian fresh vegetables, such as smallholder farmers, outgrowers, local farmers, and exporters. The interviews and focus groups are first analysed using NVivo 11 software, following a thematic approach. Multiple Linear Regression (MLR) is then performed in the Statistical Package for the Social Sciences (SPSS) to analyse the survey data and examine the relationship between sustainable food supply chains (sustainable FSC) and the sustainability dimensions identified in the thematic analysis, as illustrated in the sketch below.
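A minimal Python sketch of this regression step, assuming invented column names and toy Likert-style scores; the study itself ran MLR in SPSS, so nothing below reflects its actual data or variables:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per producer, 1-5 scores per dimension.
survey = pd.DataFrame({
    "environmental":   [4, 3, 5, 2, 4, 5, 3, 4, 2, 5],
    "social":          [3, 4, 4, 2, 5, 4, 3, 5, 2, 4],
    "economic":        [2, 3, 4, 3, 4, 3, 2, 4, 3, 5],
    "regulatory":      [4, 2, 3, 5, 3, 2, 4, 2, 5, 3],
    "collaboration":   [5, 4, 4, 3, 5, 4, 3, 5, 2, 4],
    "sustainable_fsc": [4, 3, 5, 2, 5, 4, 3, 5, 1, 5],  # outcome score
})

# Regress the sustainable-FSC score on the dimension scores.
X = sm.add_constant(survey.drop(columns="sustainable_fsc"))
fit = sm.OLS(survey["sustainable_fsc"], X).fit()

# Coefficient signs and p-values indicate which dimensions predict
# sustainable FSC, mirroring the role MLR plays in the study.
print(fit.summary())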
The findings indicate that six sustainability dimensions and their associated impacts are important in analysing Ghana's fresh vegetable exports to the UK: the environmental, social, and economic dimensions, regulatory frameworks, collaboration, and producers' complexities in developing sustainable food supply chains (sustainable FSC). Interestingly, the survey results suggest that four of these dimensions are statistically significant: environmental, social, regulatory frameworks, and collaboration. The survey further revealed that an increase in regulatory frameworks and mechanisms can reduce sustainable FSC, whereas an increase in the practices and activities of the environmental, social, and collaborative dimensions increases sustainable FSC, thus improving overall sustainability. Findings from both the thematic and survey analyses were used to develop, test, and validate the Sustainability Impact Assessment (SIA) model, the study's conceptual framework.
This study contributes to the body of knowledge in several ways. For theory, it proposes an SIA model that captures all the important sustainability dimensions, namely the environmental, economic, social, regulatory, collaboration, and complexity dimensions of food supply chains. It extends the discussion on sustainability impact assessment and sustainable development and encourages research in sustainability assessment. In practice, the SIA model can facilitate the capture, examination, and evaluation of all relevant sustainability implications and allow new insights into the development and assessment of sustainability.
Among other implications, such as promoting collaboration, policymakers need to encourage Fairtrade for producers in developing countries, and regulatory mechanisms should be redesigned to enhance profitability through simple conformity and economic incentives. Further, food trade partners and FSC professionals should encourage smart strategies and technologies that enhance logistics, minimise food waste and energy consumption, and boost producers' welfare. Moreover, governments and policymakers should ensure that the sustainability concerns of overseas countries are captured in food policies and strategies, to help facilitate global leadership in food sustainability.
Architectural Assemblages as Computational Medium: Introducing Assembler, a tool for the design and study of architectural assemblages
Assembler is a computational tool designed for the creation and study of assemblages in architecture, interweaving mereology, combinatorial design, and decision at scale. The tool leverages the potential of automation and repeated parts to generate scalable and spatially heterogeneous assemblages, emphasizing the computational role of both parts and relations in creating emergent qualities. Assembler uses an iterative, rule-based heuristic, enabling computation across scales via part/assemblage/environment relationhood. The design process is understood as a decision network, in which the user controls the design of parts, connections, and heuristics of the system, and the tool enacts those decisions in space and time. After a theoretical contextualization and an overview of precursors and precedents in architecture and combinatorial design, the tool's logic is explained and its current status and potential developments are discussed.
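A toy sketch of such an iterative, rule-based growth loop is given below; the part types, connection labels, and rules are all hypothetical and do not reflect Assembler's actual data model or API:

```python
import random

random.seed(0)

# Each hypothetical part type exposes a set of named connections.
PARTS = {"slab": ["top", "side"], "column": ["top", "base"], "beam": ["end"]}

# Rules: which (part, connection) pairs a given open connection may receive.
RULES = {
    ("slab", "top"): [("column", "base")],
    ("column", "top"): [("slab", "side"), ("beam", "end")],
}

def grow(steps=6):
    """Iteratively attach parts to open connections, as the rules permit."""
    assemblage = [("slab", 0)]      # seed part
    open_conns = [("slab", "top")]  # connections still available for joining
    for step in range(1, steps + 1):
        random.shuffle(open_conns)
        for part, conn in open_conns:
            options = RULES.get((part, conn), [])
            if options:
                new_part, new_conn = random.choice(options)
                assemblage.append((new_part, step))
                open_conns.remove((part, conn))
                # The new part's remaining connections become available.
                open_conns += [(new_part, c) for c in PARTS[new_part] if c != new_conn]
                break
    return assemblage

print(grow())
```

Each pass through the loop is one decision in the network: which open connection to fill and with which part, with the user-authored rules constraining the outcome.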
Topological Feature Selection: A Graph-Based Filter Feature Selection Approach
In this paper, we introduce a novel unsupervised, graph-based filter feature selection technique which exploits the power of topologically constrained network representations. We model dependency structures among features using a family of chordal graphs (the Triangulated Maximally Filtered Graph), and we maximise the likelihood of features' relevance by studying their relative position inside the network. Such an approach presents three aspects that are particularly satisfactory compared to its alternatives: (i) it is highly tunable and easily adaptable to the nature of the input data; (ii) it is fully explainable while maintaining a remarkable level of simplicity; (iii) it is computationally cheaper than its alternatives. We test our algorithm on 16 benchmark datasets from different applicative domains, showing that it outperforms or matches the current state-of-the-art under heterogeneous evaluation conditions.
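A hedged sketch of the pipeline, with two stated substitutions: a simple top-k correlation graph stands in for the TMFG construction, and degree centrality stands in for the paper's relevance scoring:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                  # toy data: 200 samples, 10 features
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)  # inject a strong dependency

# Absolute correlations as a crude dependency measure between features.
corr = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(corr, 0.0)

# Keep only the strongest edges to get a sparse dependency network
# (a stand-in for the TMFG, which additionally enforces chordality).
G = nx.Graph()
G.add_nodes_from(range(corr.shape[1]))
for flat in np.argsort(corr, axis=None)[::-1]:
    i, j = np.unravel_index(flat, corr.shape)
    if i < j:
        G.add_edge(int(i), int(j), weight=float(corr[i, j]))
    if G.number_of_edges() >= 15:
        break

# Score features by their position in the network; degree centrality is a
# placeholder for the richer topological descriptors the paper studies.
ranking = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print("top features:", [f for f, _ in ranking[:5]])
```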
Sonic heritage: listening to the past
History is so often told through objects, images and photographs, but the potential of sounds to reveal place and space is often neglected. Our research project ‘Sonic Palimpsest’ explores the potential of sound to evoke impressions and new understandings of the past, to embrace the sonic as a tool to understand what was, in a way that can complement and add to our predominant visual understandings. Our work includes the expansion of the Oral History archives held at Chatham Dockyard to include women’s voices and experiences, and the creation of sonic works to engage the public with their heritage. Our research highlights the social and cultural value of oral history and field recordings in the transmission of knowledge to both researchers and the public. Together these recordings document how buildings and spaces within the dockyard were used and experienced by those who worked there. We can begin to understand the social and cultural roles of these buildings within the community, both past and present.
If interpretability is the answer, what is the question?
Due to the ability to model even complex dependencies, machine learning (ML) can be used to tackle a broad range of (high-stakes) prediction problems. The complexity of the resulting models comes at the cost of transparency, meaning that it is difficult to understand the model by inspecting its parameters.
This opacity is considered problematic since it hampers the transfer of knowledge from the model, undermines the agency of individuals affected by algorithmic decisions, and makes it more challenging to expose non-robust or unethical behaviour.
To tackle the opacity of ML models, the field of interpretable machine learning (IML) has emerged. The field is motivated by the idea that if we could understand the model's behaviour -- either by making the model itself interpretable or by inspecting post-hoc explanations -- we could also expose unethical and non-robust behaviour, learn about the data generating process, and restore the agency of affected individuals. IML is not only a highly active area of research, but the developed techniques are also widely applied in both industry and the sciences.
Despite the popularity of IML, the field faces fundamental criticism, questioning whether IML actually helps in tackling the aforementioned problems of ML and even whether it should be a field of research in the first place:
First and foremost, IML is criticised for lacking a clear goal and, thus, a clear definition of what it means for a model to be interpretable. On a similar note, the meaning of existing methods is often unclear, and thus they may be misunderstood or even misused to hide unethical behaviour. Moreover, estimating conditional-sampling-based techniques poses a significant computational challenge.
With the contributions included in this thesis, we tackle these three challenges for IML.
We join a range of work by arguing that the field struggles to define and evaluate "interpretability" because incoherent interpretation goals are conflated. However, the different goals can be disentangled such that coherent requirements can inform the derivation of the respective target estimands. We demonstrate this with the examples of two interpretation contexts: recourse and scientific inference.
To tackle the misinterpretation of IML methods, we suggest deriving formal interpretation rules that link explanations to aspects of the model and data. In our work, we specifically focus on interpreting feature importance. Furthermore, we collect interpretation pitfalls and communicate them to a broader audience.
To efficiently estimate conditional-sampling-based interpretation techniques, we propose two methods that leverage the dependence structure in the data to simplify the estimation problems for Conditional Feature Importance (CFI) and SAGE.
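To make the estimation problem concrete, here is a minimal sketch of plain CFI with a Gaussian-style conditional sampler; it illustrates what conditional-sampling-based importance computes, not the thesis's proposed estimators:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, d = 500, 4
X = rng.normal(size=(n, d))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n)  # a correlated feature pair
y = X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

model = LinearRegression().fit(X, y)
base_loss = mean_squared_error(y, model.predict(X))

def cfi(j, n_repeats=20):
    """Loss increase when feature j is resampled conditional on the rest."""
    rest = np.delete(X, j, axis=1)
    cond = LinearRegression().fit(rest, X[:, j])     # conditional mean model
    resid_sd = np.std(X[:, j] - cond.predict(rest))  # Gaussian residual scale
    losses = []
    for _ in range(n_repeats):
        X_tilde = X.copy()
        X_tilde[:, j] = cond.predict(rest) + rng.normal(scale=resid_sd, size=n)
        losses.append(mean_squared_error(y, model.predict(X_tilde)))
    return np.mean(losses) - base_loss

for j in range(d):
    print(f"CFI of feature {j}: {cfi(j):.4f}")
```

The naive version resamples and re-evaluates for every feature; exploiting the dependence structure, as the thesis proposes, is what makes such estimators cheaper.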
A causal perspective proved to be vital in tackling these challenges: first, because IML problems such as algorithmic recourse are inherently causal; second, because causality helps to disentangle the different aspects of model and data and, therefore, to distinguish the insights that different methods provide; and third, because algorithms developed for causal structure learning can be leveraged for the efficient estimation of conditional-sampling-based IML methods.