991 research outputs found
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Language integrated relational lenses
Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation on the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have these changes be applied to the underlying database. This view update problem has been the subject of much previous work, but support by database servers is limited and only rarely available.
Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem in databases to arbitrary data. However, perhaps surprisingly, lenses have seldom been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses. However, to the best of our knowledge this proposal has not been implemented or evaluated prior to the work reported in this thesis.
This thesis proposes programming language support for relational lenses. Language integrated relational lenses support expressive and efficient view updates, without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development becomes easier and less error-prone, avoiding the impedance mismatch of having two programming languages. Integrating relational lenses into the language poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. The other challenge is that some parts of the well-formedness conditions are too general for implementation: Bohannon et al. specify predicates using possibly infinite abstract sets and define the type checking rules using relational algebra.
Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders of magnitude improvement over the non-incremental approach. This thesis introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates that are fully known at compile time and dynamic predicates that are only known during execution, and introduce hybrid predicates taking inspiration from both approaches.
This thesis adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that from any well-typed functional relational lens expression a well-typed sequential lens can be derived.
We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. These two implementations attest to the possibility of turning relational lenses into a practical language feature.
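For readers unfamiliar with lenses, the following minimal Haskell sketch shows the classic asymmetric get/put interface that relational lenses specialise to relational data; the type and example below are illustrative only and are not the API of either implementation described in the thesis.

    -- A lens pairs a view computation (get) with an update translator (put)
    -- that pushes a modified view back into the source.
    data Lens s v = Lens
      { get :: s -> v        -- compute the view from the source
      , put :: s -> v -> s   -- merge an updated view back into the source
      }

    -- Well-behaved lenses satisfy the round-tripping laws:
    --   get l (put l s v) == v   (PutGet)
    --   put l s (get l s) == s   (GetPut)

    -- Example: a view exposing the first component of a pair.
    fstLens :: Lens (a, b) a
    fstLens = Lens { get = fst, put = \(_, b) a' -> (a', b) }

A relational lens plays the same role with database tables as the source and a query result as the view, and the incremental semantics developed in the thesis translate small changes to the view into small changes to the source tables instead of recomputing and writing back whole relations.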
Strategies for defending the Principle of Identity of Indiscernibles: a critical survey and a new approach
The Principle of Identity of Indiscernibles (PII) is the focus of much controversy in the history of Metaphysics and in contemporary Physics. Many questions hover over the debate about its truth or falsehood, for example: to which objects does the principle apply? Which properties can be counted as discerning properties? Is the principle necessary? In other words, which version of the principle is the correct one, and is this version true? This thesis aims to answer these questions in order to show that PII is a necessarily true principle of metaphysics. To accomplish this task, the reader will find in this thesis an encyclopaedic introduction to the history of PII and to the reasons it matters so much, followed by a presentation of the most famous arguments against it and the defences used against these arguments. Then, the reader finds an in-depth discussion of the minutiae involved in postulating the principle, so as to make clear what is in fact being attacked and defended. With these preliminaries settled, a deeper analysis of these defences is presented, aiming to discover which is the most appropriate example to use against the attacks on the principle. This analysis allowed a classification of these defences into four families, with different strategies within them. Finally, with these defensive strategies at hand, we are able to confront alleged counterexamples to PII in Mathematics, with the intention of testing these defences.
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals since the fourth volume was disseminated in 2015, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, together with their Matlab codes.
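For reference, in its standard two-source form (as given by Smarandache and Dezert) the PCR5 rule redistributes the conflicting mass between the two implicated focal elements proportionally to the masses assigned to them:

\[
m_{\mathrm{PCR5}}(X) = m_{12}(X) + \sum_{\substack{Y \in 2^{\Theta} \\ X \cap Y = \emptyset}} \left[ \frac{m_1(X)^2\, m_2(Y)}{m_1(X) + m_2(Y)} + \frac{m_2(X)^2\, m_1(Y)}{m_2(X) + m_1(Y)} \right],
\]

where \(m_{12}(X) = \sum_{X_1 \cap X_2 = X} m_1(X_1)\, m_2(X_2)\) is the conjunctive consensus and any term with a zero denominator is discarded; the modified and improved rules collected in this part build on this baseline.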
Because more applications of DSmT have emerged in the years since the publication of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances in them.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof with a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
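As a flavour of what such property-based testing looks like in practice, the sketch below uses Haskell's QuickCheck to check that a concrete implementation agrees with an abstract specification; the two functions are hypothetical stand-ins and are not part of the Cogent framework itself.

    import Test.QuickCheck

    -- Hypothetical abstract specification and concrete implementation;
    -- in a Cogent-style setting the concrete side would correspond to the
    -- compiled systems code, here both are plain Haskell for illustration.
    specSum :: [Int] -> Int
    specSum = sum

    implSum :: [Int] -> Int
    implSum = foldl (+) 0

    -- Refinement property: the implementation agrees with the specification
    -- on every randomly generated input.
    prop_refines :: [Int] -> Bool
    prop_refines xs = implSum xs == specSum xs

    main :: IO ()
    main = quickCheck prop_refines

Running main generates random inputs and reports any counterexample, giving developers early feedback on whether the code is likely to satisfy its specification before attempting a full formal proof.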
(b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)
A Computational Framework for Efficient Reliability Analysis of Complex Networks
With the growing scale and complexity of modern infrastructure networks comes the challenge of developing efficient and dependable methods for analysing their reliability. Special attention must be given to potential network interdependencies, as disregarding these can lead to catastrophic failures. Furthermore, it is of paramount importance to properly treat all uncertainties. The survival signature is a recent development built to effectively analyse complex networks, and it far exceeds standard techniques in several important areas. Its most distinguishing feature is the complete separation of system structure from probabilistic information. Because of this, it is possible to take into account a variety of component failure phenomena, such as dependencies, common causes of failure, and imprecise probabilities, without reevaluating the network structure.
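For context, in the standard formulation due to Coolen and Coolen-Maturi, the survival signature \(\Phi(l)\) of a system of \(m\) exchangeable components is the probability that the system functions given that exactly \(l\) of its components function; for independent and identically distributed component lifetimes with reliability function \(R(t)\), the system survival function then factors as

\[
\Phi(l) = \binom{m}{l}^{-1} \sum_{\underline{x} \in S_l} \phi(\underline{x}), \qquad
P(T_S > t) = \sum_{l=0}^{m} \binom{m}{l}\, R(t)^{l}\, \bigl(1 - R(t)\bigr)^{m-l}\, \Phi(l),
\]

where \(\phi\) is the system structure function and \(S_l\) is the set of component state vectors with exactly \(l\) working components. The network topology enters only through \(\Phi\), so the component failure model can be changed without recomputing it.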
This cumulative dissertation presents several key improvements to the survival signature ecosystem focused on the structural evaluation of the system as well as the modelling of component failures.
A new method is presented in which (inter)-dependencies between components and networks are modelled using vine copulas. Furthermore, aleatory and epistemic uncertainties are included by applying probability boxes and imprecise copulas. By leveraging the large number of available copula families it is possible to account for varying dependent effects. The graph-based design of vine copulas synergizes well with the typical descriptions of network topologies. The proposed method is tested on a challenging scenario using the IEEE reliability test system, demonstrating its usefulness and emphasizing the ability to represent complicated scenarios with a range of dependent failure modes.
The numerical effort required to analytically compute the survival signature is prohibitive for large complex systems. This work presents two methods for the approximation of the survival signature. In the first approach, system configurations of low interest are excluded using percolation theory, while the remaining parts of the signature are estimated by Monte Carlo simulation. The method is able to accurately approximate the survival signature with very small errors while drastically reducing computational demand. Several simple test systems, as well as two real-world situations, are used to demonstrate its accuracy and performance.
However, with increasing network size and complexity this technique also reaches its limits. A second method is presented in which the numerical demand is further reduced. Here, instead of approximating the whole survival signature, only a few strategically selected values are computed using Monte Carlo simulation and used to build a surrogate model based on normalized radial basis functions. The uncertainty resulting from the approximation of the data points is then propagated through an interval predictor model, which estimates bounds for the remaining survival signature values. This imprecise model provides bounds on the survival signature and therefore on the network reliability. Because only a few data points are sufficient to build the interval predictor model, even larger systems can be analysed.
With the rising complexity of not just the system but also the individual components themselves comes the need for the components to be modelled as subsystems in a system-of-systems approach. A study is presented, where a previously developed framework for resilience decision-making is adapted to multidimensional scenarios in which the subsystems are represented as survival signatures. The survival signature of the subsystems can be computed ahead of the resilience analysis due to the inherent separation of structural information. This enables efficient analysis in which the failure rates of subsystems for various resilience-enhancing endowments are calculated directly from the survival function without reevaluating the system structure.
In addition to the advancements in the field of survival signatures, this work also presents a new framework for uncertainty quantification, developed as a package in the Julia programming language called UncertaintyQuantification.jl. Julia is a modern high-level dynamic programming language that is ideal for applications such as data analysis and scientific computing. UncertaintyQuantification.jl was built from the ground up to be generalised and versatile while remaining simple to use. The framework is in constant development, and its goal is to become a toolbox encompassing state-of-the-art algorithms from all fields of uncertainty quantification and to serve as a valuable tool for both research and industry. UncertaintyQuantification.jl currently includes simulation-based reliability analysis utilising a wide range of sampling schemes, local and global sensitivity analysis, and surrogate modelling methodologies.
Examining the Relationships Between Distance Education Students’ Self-Efficacy and Their Achievement
This study aimed to examine the relationships between students' self-efficacy (SSE) and students' achievement (SA) in distance education. To gather data, the instruments were administered to 100 undergraduate students at a distance university who work as migrant workers in Taiwan, while their SA scores were obtained from the university. Semi-structured interviews with 8 participants consisted of questions that revealed the specific conditions of SSE and SA. The findings of this study were reported as follows: there was a significantly positive correlation between targeted SSE (overall scales and general self-efficacy) and SA. Targeted students' self-efficacy effectively predicted their achievement; moreover, general self-efficacy had the most significant influence. In the qualitative findings, four themes were extracted for those students with lower self-efficacy but higher achievement: physical and emotional condition, teaching and learning strategy, positive social interaction, and intrinsic motivation. Moreover, three themes were extracted for those students with moderate or higher self-efficacy but lower achievement: more time for leisure (not hard-working), less social interaction, and external excuses. Providing effective learning environments, social interactions, and teaching and learning strategies in distance education is suggested.
The Basic Needs in Games (BANG) Model of Video Games and Mental Health: Untangling the Positive and Negative Effects of Games with Better Science
How do video games affect mental health? Despite decades of research and widespread interest from policymakers, parents, and players, in most cases the best answer we have is: it depends. I argue that our limited success stems largely from (1) a lack of theories that explain more than small portions of the varied evidence base, and (2) methodological limitations related to measurement, self-report data, questionable research practices, and more. In this thesis, I present the Basic Needs in Games (BANG) model. Building upon self-determination theory, BANG offers a novel theoretical account that provides mechanisms for both short- and long-term effects, positive and negative, resulting from quality or quantity of gaming. Under BANG, the primary mechanism through which games impact mental health is need satisfaction and frustration: the extent to which both games, and players' lives in general, provide experiences of control and volition (autonomy), mastery and growth (competence), and connection and belonging (relatedness). To generate BANG, I conducted semi-structured interviews, finding that need-frustrating experiences within games have important effects on player behavior, likelihood of continuing play, and expectations for future experiences (Study 1). In a mixed-methods survey, I show that some, but not all, players are successful in compensating for frustrated needs in daily life by playing games (Study 2). These findings informed the validation of the Basic Needs in Games Scale (BANGS), as previous instruments either did not measure need frustration or were not designed for gaming contexts. Across 1400 participants and various validity analyses, I show that the questionnaire is suitable for wide-ranging use (Study 3). Finally, I collected 12 weeks of digital trace data using a novel method of monitoring the Xbox network, and combined this with 6 biweekly surveys measuring need satisfaction and frustration alongside three mental health constructs (Study 4). Across 2000 responses (n = 400), I find partial support for BANG: there is strong evidence to rule out a meaningful relationship between playtime and subsequent mental health. However, players who felt more need satisfaction than usual in games also reported higher than usual need satisfaction in general, which in turn related to better mental health. My results help push the field beyond simplified notions of playtime by offering a framework that can systematically account for a wide variety of observed gaming effects. I hope that this work can serve as both a call to action and an illustrative example of how games research can be more productive.