620 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students

    In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock its potential, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change also challenges science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to this challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to value the Second Quantum Revolution from a cultural and educational perspective. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of principles and implemented in the design of a course for secondary school students and for prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, together with the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback for refining and improving the instructional materials. The second part explores the Second Quantum Revolution as a context for introducing some basic concepts of quantum physics. We present the results of an implementation with secondary school students to investigate whether, and to what extent, external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world

    Less is More: Restricted Representations for Better Interpretability and Generalizability

    Deep neural networks are prevalent in supervised learning for many tasks such as image classification, machine translation and even scientific discovery. Their success often comes at the expense of interpretability and generalizability. The increasing complexity of models and the involvement of pre-training make the lack of explainability more pressing, and their outstanding performance when labeled data are abundant, contrasted with their tendency to overfit when labeled data are limited, demonstrates how difficult it is for deep neural networks to generalize to different datasets. This thesis aims to improve interpretability and generalizability by restricting representations. We approach interpretability through attribution analysis, to understand which features contribute to the predictions of BERT, and generalizability through methods that are effective in a low-data regime. We consider two strategies for restricting representations: (1) adding a bottleneck, and (2) introducing compression. Given input x, suppose we want to learn y with the latent representation z (i.e. x → z → y); adding a bottleneck means adding a function R such that L(R(z)) < L(z), and introducing compression means adding a function R such that L(R(y)) < L(y), where L refers to the number of bits. In other words, the restriction is added either in the middle of the pipeline or at its end. We first introduce how adding an information bottleneck can help attribution analysis and apply it to investigate BERT's behavior on text classification in Chapter 3. We then extend this attribution method to analyze passage reranking in Chapter 4, where we conduct a detailed analysis of cross-layer and cross-passage behavior. Adding a bottleneck can not only provide insight into deep neural networks but can also be used to increase generalizability. In Chapter 5, we demonstrate the equivalence between adding a bottleneck and performing neural compression. We then leverage this finding in a framework called Non-Parametric learning by Compression with Latent Variables (NPC-LV), and show how optimizing neural compressors can be used for non-parametric image classification with few labeled data. To further investigate how compression alone helps non-parametric learning without latent variables (NPC), we carry out experiments with the universal compressor gzip on text classification in Chapter 6. In Chapter 7, we elucidate methods that adopt the perspective of compression without the actual process of compression, using T5. Through experimental results in passage reranking, we show that our method is highly effective in a low-data regime when only one thousand query-passage pairs are available. In addition to the weakly supervised scenario, we extend our method to large language models like GPT under almost no supervision, in one-shot and zero-shot settings. The experiments, presented in Chapter 8, show that without extra parameters or in-context learning, GPT can be used for semantic similarity, text classification, and text ranking, outperforming strong baselines. The thesis thus tackles two big challenges in machine learning, "interpretability" and "generalizability", by restricting representations. We provide both theoretical derivations and empirical results to show the effectiveness of these information-theoretic approaches. We not only design new algorithms but also provide numerous insights into why and how "compression" matters for understanding deep neural networks and improving generalizability
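
    The gzip-based NPC experiments in Chapter 6 follow a well-known compressor-distance recipe. Below is a minimal, self-contained sketch of that recipe, assuming the standard normalized compression distance (NCD) and k-nearest-neighbour voting; the toy training pairs and function names are illustrative, not the thesis's code.

        # Sketch: parameter-free text classification with gzip via NCD + kNN.
        # Assumption: this mirrors the generic compressor-based method, not
        # the exact NPC implementation from the thesis.
        import gzip
        from collections import Counter

        def clen(s: str) -> int:
            # compressed length in bytes, the proxy for information content L(.)
            return len(gzip.compress(s.encode("utf-8")))

        def ncd(a: str, b: str) -> float:
            # normalized compression distance between two strings
            ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
            return (cab - min(ca, cb)) / max(ca, cb)

        def classify(query: str, train: list[tuple[str, str]], k: int = 3) -> str:
            # vote among the k training texts closest to the query under NCD
            neighbours = sorted(train, key=lambda tl: ncd(query, tl[0]))[:k]
            return Counter(label for _, label in neighbours).most_common(1)[0][0]

        train = [("the match ended in a draw", "sports"),
                 ("the striker scored twice in the final", "sports"),
                 ("shares fell after the earnings call", "finance"),
                 ("the central bank raised interest rates", "finance")]
        print(classify("the goalkeeper saved a late penalty", train))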

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This ļ¬fth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different ļ¬elds of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. First Part of this book presents some theoretical advances on DSmT, dealing mainly with modiļ¬ed Proportional Conļ¬‚ict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classiļ¬ers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence with their Matlab codes. Because more applications of DSmT have emerged in the past years since the apparition of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender system, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identiļ¬cation of maritime vessels, fusion of support vector machines (SVM), Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classiļ¬cation. Finally, the third part presents interesting contributions related to belief functions in general published or presented along the years since 2015. These contributions are related with decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classiļ¬cation, and hybrid techniques mixing deep learning with belief functions as well

    Fuzzy Natural Logic in IFSA-EUSFLAT 2021

    The present book contains five papers accepted and published in the Special Issue "Fuzzy Natural Logic in IFSA-EUSFLAT 2021" of the journal Mathematics (MDPI). These papers are extended versions of contributions presented at "The 19th World Congress of the International Fuzzy Systems Association and the 12th Conference of the European Society for Fuzzy Logic and Technology, jointly with the AGOP, IJCRS, and FQAS conferences", which took place in Bratislava (Slovakia) from September 19 to September 24, 2021. Fuzzy Natural Logic (FNL) is a system of mathematical fuzzy logic theories that enables us to model natural language terms and rules while accounting for their inherent vagueness, and allows us to reason and argue using the tools developed within them. FNL includes, among others, the theory of evaluative linguistic expressions (e.g., small, very large), the theory of fuzzy and intermediate quantifiers (e.g., most, few, many), and the theory of fuzzy/linguistic IF-THEN rules and logical inference. The papers in this Special Issue use the various aspects and concepts of FNL mentioned above and apply them to a wide range of problems, both theoretically and practically oriented. This book will be of interest to researchers working in the areas of fuzzy logic, applied linguistics, generalized quantifiers, and their applications
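
    As a flavour of what evaluative linguistic expressions look like computationally, here is a toy sketch that models small/medium/big as fuzzy sets over a fixed context interval. FNL's actual theory is richer (contexts, horizons, linguistic hedges), so the trapezoidal shapes and breakpoints below are purely illustrative assumptions.

        # Sketch: evaluative expressions as fuzzy sets on a [0, 100] context.
        # The trapezoid breakpoints are invented for illustration only.
        def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
            # membership degree of x in a trapezoidal fuzzy set, a <= b <= c <= d
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)

        small  = lambda x: trapezoid(x, -1, 0, 20, 40)
        medium = lambda x: trapezoid(x, 20, 40, 60, 80)
        big    = lambda x: trapezoid(x, 60, 80, 100, 101)

        for v in (10, 35, 50, 90):
            print(v, small(v), medium(v), big(v))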

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Efficient Security Algorithm for Provisioning Constrained Internet of Things (IoT) Devices

    Addressing the security concerns of constrained Internet of Things (IoT) devices, such as client-side encryption and secure provisioning, remains a work in progress. IoT devices characterized by low power and processing capabilities do not fit neatly into existing security schemes, as classical security algorithms are built on cryptographic functions too complex for constrained devices. Consequently, the options for constrained IoT devices lie in either developing new security schemes or modifying existing ones to be lightweight. This work presents an improved version of the Advanced Encryption Standard (AES) known as the Efficient Security Algorithm for Power-constrained IoT devices, which addresses some of these concerns. With cloud computing being the key enabler for the massive provisioning of IoT devices, client-side encryption of data generated by IoT devices before onward transmission to the cloud platform of choice is being advocated. However, coping with trade-offs remains a notable challenge for lightweight algorithms, making cheaper security schemes that do not compromise security highly desirable for the secure provisioning of IoT devices. A cryptanalytic overview of the consequences of complexity reduction, with mathematical justification, is given, with a Secure Element (ATECC608A) used as a trade-off. The extent of constraint of a typical IoT device is investigated by comparing laptop and SAMG55 implementations of the Efficient algorithm. An analysis of the implementation and a comparison of the algorithm to lightweight algorithms are given. Experimental results show that resource constraints impose a 657% increase in encryption completion time on the IoT device relative to the laptop implementation. In terms of encryption completion times, the Efficient algorithm is 0.9 times cheaper than CLEFIA and 35% cheaper than AES, against the 26% reported in the current literature, and it exhibits a 93% avalanche effect rate, well above the 50% recommended in the literature. The algorithm is utilised for client-side encryption to provision the device onto AWS IoT Core
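
    The avalanche figures quoted above come from a standard measurement: flip one plaintext bit and count how many ciphertext bits change. A minimal sketch of that measurement is below, run against stock AES via the pycryptodome package, since the thesis's Efficient algorithm itself is not reproduced here; the key and plaintext are random illustrative values.

        # Sketch: strict-avalanche measurement for a block cipher (stock AES
        # here, via pycryptodome; pip install pycryptodome). A strong cipher
        # should flip roughly 50% of ciphertext bits per plaintext-bit flip.
        import os
        from Crypto.Cipher import AES

        def avalanche(key: bytes, block: bytes, bit: int) -> float:
            cipher = AES.new(key, AES.MODE_ECB)
            c1 = cipher.encrypt(block)
            flipped = bytearray(block)
            flipped[bit // 8] ^= 1 << (bit % 8)   # flip one plaintext bit
            c2 = cipher.encrypt(bytes(flipped))
            changed = sum(bin(x ^ y).count("1") for x, y in zip(c1, c2))
            return changed / (len(block) * 8)     # fraction of bits changed

        key, block = os.urandom(16), os.urandom(16)
        rates = [avalanche(key, block, b) for b in range(128)]
        print(f"mean avalanche rate: {sum(rates) / len(rates):.1%}")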

    Occupant-Centric Simulation-Aided Building Design Theory, Application, and Case Studies

    This book promotes occupants as a focal point for the design process

    Theme Aspect Argumentation Model for Handling Fallacies

    From daily discussions to marketing ads to political statements, information manipulation is rife. It is increasingly important that we have the right set of tools to defend ourselves from manipulative rhetoric, or fallacies. Suitable techniques to automatically identify fallacies are being investigated in natural language processing research. However, a fallacy in one context may not be a fallacy in another, so there is also a need to explain how and why something has come to be judged a fallacy. For explainable fallacy identification, we present a novel approach that characterises fallacies through formal constraints, as a viable alternative to more traditional fallacy classifications by informal criteria. To achieve this objective, we introduce a novel context-aware argumentation model, the theme aspect argumentation model, which supports both the modelling of a given argumentation as it is expressed (rhetorical modelling) and a deeper semantic analysis of the rhetorical argumentation model. By identifying fallacies with formal constraints, it becomes possible to tell with formal rigour whether a fallacy lurks in the modelled rhetoric. We present core formal constraints for the theme aspect argumentation model and then further formal constraints that improve its fallacy identification capability. We show and prove the consequences of these formal constraints. We then analyse the computational complexity of deciding the satisfiability of the constraints
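
    The theme aspect model itself is not reproduced here, but the closing point about deciding satisfiability of formal constraints can be illustrated in miniature: enumerate candidate structures and test a constraint against each. The sketch below does this for conflict-freeness and admissibility in a plain Dung-style attack graph, which is an assumption standing in for the paper's richer model.

        # Sketch: brute-force checking of formal constraints over a toy
        # Dung-style argumentation graph (NOT the theme aspect model).
        from itertools import combinations

        args = {"a", "b", "c"}
        attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c

        def conflict_free(s: frozenset) -> bool:
            # no member of s attacks another member of s
            return not any((x, y) in attacks for x in s for y in s)

        def admissible(s: frozenset) -> bool:
            # every attacked member of s must be defended from inside s
            defended = all(any((z, x) in attacks for z in s)
                           for (x, y) in attacks if y in s)
            return conflict_free(s) and defended

        candidates = [frozenset(c) for r in range(len(args) + 1)
                      for c in combinations(sorted(args), r)]
        print([set(s) for s in candidates if admissible(s)])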

    Logics of Responsibility

    The study of responsibility is a complicated matter. The term is used in different ways in different fields, and it is easy to engage in everyday discussions as to why someone should be considered responsible for something. Typically, the backdrop of these discussions involves social, legal, moral, or philosophical problems. A clear pattern across all these spheres is the intent of issuing standards for when, and to what extent, an agent should be held responsible for a state of affairs. This is where logic lends a hand. The development of expressive logics for reasoning about agents' decisions in situations with moral consequences involves devising unequivocal representations of those components of behavior that are highly relevant to systematic responsibility attribution and to systematic blame-or-praise assignment. Put plainly, expressive syntactic-and-semantic frameworks help us analyze responsibility-related problems in a methodical way. This thesis builds a formal theory of responsibility. The main tool used toward this aim is modal logic and, more specifically, a class of modal logics of action known as stit theory. The underlying motivation is to provide theoretical foundations for using symbolic techniques in the construction of ethical AI. Thus, this work constitutes a contribution to formal philosophy and symbolic AI. The thesis's methodology consists of developing stit-theoretic models and languages to explore the interplay between the following components of responsibility: agency, knowledge, beliefs, intentions, and obligations. These models are integrated into a framework rich enough to provide logic-based characterizations of three categories of responsibility: causal, informational, and motivational responsibility. The thesis is structured as follows. Chapter 2 discusses at length stit theory, a logic that formalizes the notion of agency in the world over an indeterministic conception of time known as branching time. The idea is that agents act by constraining possible futures to definite subsets. On the road to formalizing informational responsibility, Chapter 3 extends stit theory with traditional epistemic notions (knowledge and belief), formalizing important aspects of agents' reasoning in the choice and performance of actions. In a context of responsibility attribution and excusability, Chapter 4 extends epistemic stit theory with measures of optimality of actions that underlie obligations; in essence, this chapter formalizes the interplay between agents' knowledge and what they ought to do. On the road to formalizing motivational responsibility, Chapter 5 adds intentions and intentional actions to epistemic stit theory and reasons about the interplay between knowledge and intentionality. Finally, Chapter 6 merges the previous chapters' formalisms into a rich logic that is able to express and model different modes of the aforementioned categories of responsibility. Technically, the most important contributions of this thesis lie in the axiomatizations of all the introduced logics. In particular, the proofs of soundness and completeness involve long, step-by-step procedures that make use of novel techniques
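
    To make the stit machinery concrete, here is a minimal sketch of evaluating the Chellas stit and the deliberative stit at a single moment, assuming the standard semantics (histories through a moment, the agent's choice partition over them, and a valuation of phi per history). The model data are invented toy values, not taken from the thesis.

        # Sketch: stit evaluation at one moment under standard semantics.
        # [a cstit: phi] at h: phi holds throughout a's choice cell at h.
        # [a dstit: phi] adds the negative condition: phi is not settled true.
        def cstit(phi: set, choice_of: dict, h: str) -> bool:
            return choice_of[h] <= phi           # cell is inside the phi-histories

        def dstit(phi: set, choice_of: dict, histories: set, h: str) -> bool:
            return cstit(phi, choice_of, h) and not histories <= phi

        histories = {"h1", "h2", "h3"}
        cell_1 = frozenset({"h1", "h2"})         # the agent's first choice
        cell_2 = frozenset({"h3"})               # the agent's second choice
        choice_of = {"h1": cell_1, "h2": cell_1, "h3": cell_2}
        phi = {"h1", "h2"}                       # histories where phi holds

        print(dstit(phi, choice_of, histories, "h1"))  # True: agent sees to phi
        print(dstit(phi, choice_of, histories, "h3"))  # False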