5,664 research outputs found
The Council of Europe's Framework of Competences for Democratic Culture: Hope for democracy or an allusive Utopia?
Democracies around the world are increasingly polarized along political and cultural lines. To address these challenges, in 2016, the Council of Europe (CoE) produced a model of twenty competences for democratic culture. In 2018, this same model became the basis of the Reference Framework of Competences for Democratic Culture (RFCDC). The RFCDC provides pedagogical instructions to help implement these competences. Together, I call this set of materials "the Framework".
This thesis begins with the premise that utopia has long played an important role in the way power is maintained or resisted in democratic education. It questions the assumption that democratic culture can be cultivated instrumentally through policy-based competences without imposing power on subjects, and views this assumption as utopian. It thus excavates the potential utopian ideals at play in the Framework using "hidden utopias" as a conceptual lens and method, drawing inspiration from the theories of Michel Foucault, Ernst Bloch and Ruth Levitas.
It investigates how using "hidden utopias" as a theoretical lens might facilitate a deeper understanding of the nature and purpose of the Framework, how implicit utopias might be at play, how this could be problematic and how these theories might shed light on the application of the Framework in pedagogical contexts. The contribution of this thesis is to make visible potential utopias at the heart of the Framework. It suggests that making implicit utopias visible in democratic education can help educators and learners engage with these discourses in critical and innovative ways and think beyond them.
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties adds yet another level of complexity to the task, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of these systems and the nuances within them.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which gives users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
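As a rough illustration of this property-based testing workflow (a minimal Haskell/QuickCheck sketch under assumptions: the names specSum, implSum and prop_sumAgrees are invented for illustration, and a pure Haskell loop stands in for the compiled C code that Cogent's framework would actually exercise through its FFI), a property checks that an implementation agrees with its purely functional specification on randomly generated inputs:

    {-# LANGUAGE BangPatterns #-}
    import Test.QuickCheck

    -- Purely functional specification: the sum of a list.
    specSum :: [Int] -> Int
    specSum = foldr (+) 0

    -- Stand-in for the low-level implementation; in Cogent's setting this
    -- role would be played by compiled C code invoked through the FFI.
    implSum :: [Int] -> Int
    implSum = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

    -- Refinement property: implementation and specification agree on all inputs.
    prop_sumAgrees :: [Int] -> Property
    prop_sumAgrees xs = implSum xs === specSum xs

    main :: IO ()
    main = quickCheck prop_sumAgrees

Failing runs shrink to a small counterexample, which is what lets developers probe where an implementation diverges from its abstract semantics before committing to a full proof.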
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, through sample treatment, to the measurements performed. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
Becoming George Lucas: From Avant-Garde, Auteur, Independent Artist to Studio Executive
Because of the unprecedented popularity of Star Wars, George Lucas, the creator of the multi-media franchise, is one of the most well-known filmmakers in history. What makes Lucas's relationship with Star Wars unique is that, because the franchise has been continually exploited rather than left as a single, unchanging, static text, its artistic value, along with Lucas's legacy, is in constant flux and often misunderstood. In other words, depending on Star Wars's position in the public zeitgeist at a given time, Lucas is either revered, detested, or considered incompetent as a filmmaker. While there is no denying that it is impossible to know Lucas as a filmmaker without considering the outsized role Star Wars played in his career, this thesis argues that truly understanding Lucas requires centering and prioritizing his artistic journey before the esoteric space opera consumed his life.
As such, this thesis first presents a three-chapter chronological, narrativized historical account of Lucas's artistic journey from its origins through the subsequent success of Star Wars (1977). Individually, each of the three chapters tracks the evolution of the Hollywood Studio System in the 1970s, using a detailed production history of one of Lucas's first three feature film productions as a vantage point. Placed together, however, the chapters become a poetic ballad about the clash of art and commerce, as the repetition of themes creates a resonance that characterizes Lucas as a folklore-like hero determined to challenge conventional filmmaking practices. Then, when Lucas's career reaches a fever pitch after the release of Star Wars, a coda rounds out the narrative by providing an overview of how Lucas used the financial stability of the franchise to fund the development of digital filmmaking in the second half of his career.
Operatic Pasticcios in 18th-Century Europe
In Early Modern times, techniques of assembling, compiling and arranging pre-existing material were part of the established working methods in many arts. In the world of 18th-century opera, such practices ensured that operas could become a commercial success, because substituting or compiling arias to fit a singer's abilities proved the best recipe for fulfilling the expectations of audiences. Known as »pasticcios« since the 18th century, these operas have long been considered inferior patchwork. The volume collects essays that reconsider the pasticcio, contextualize it, define its preconditions, look at its material aspects and uncover its aesthetic principles.
Automatic Program Instrumentation for Automatic Verification (Extended Technical Report)
In deductive verification and software model checking, dealing with certain
specification language constructs can be problematic when the back-end solver
is not sufficiently powerful or lacks the required theories. One way to deal
with this is to transform, for verification purposes, the program to an
equivalent one not using the problematic constructs, and to reason about its
correctness instead. In this paper, we propose instrumentation as a unifying
verification paradigm that subsumes various existing ad-hoc approaches, has a
clear formal correctness criterion, can be applied automatically, and can
transfer back witnesses and counterexamples. We illustrate our approach on the
automated verification of programs that involve quantification and aggregation
operations over arrays, such as the maximum value or sum of the elements in a
given segment of the array, which are known to be difficult to reason about
automatically. We formalise array aggregation operations as monoid
homomorphisms. We implement our approach in the MonoCera tool, which is
tailored to the verification of programs with aggregation, and evaluate it on
example programs, including SV-COMP programs.
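To make the monoid-homomorphism view concrete (a brief Haskell sketch for illustration only; it is not MonoCera's actual formalisation, and the names aggregate, segmentSum and segmentMax are invented), aggregations such as the sum or maximum of an array segment are folds into a monoid, and the homomorphism law, aggregate f (xs ++ ys) == aggregate f xs <> aggregate f ys, is what allows an instrumented loop to maintain the aggregate incrementally instead of quantifying over the whole segment:

    import Data.Monoid (Sum (..))
    import Data.Semigroup (Max (..))

    -- An aggregation is a monoid homomorphism from lists (under ++) to some
    -- monoid: aggregate f (xs ++ ys) == aggregate f xs <> aggregate f ys.
    aggregate :: Monoid m => (a -> m) -> [a] -> m
    aggregate = foldMap

    -- Sum of the elements in a segment.
    segmentSum :: [Int] -> Int
    segmentSum = getSum . aggregate Sum

    -- Maximum of a segment; Nothing for the empty segment.
    segmentMax :: [Int] -> Maybe Int
    segmentMax = fmap getMax . aggregate (Just . Max)

    main :: IO ()
    main = do
      let xs = [3, 1, 4]
          ys = [1, 5, 9]
      -- Splitting a segment never changes the aggregate:
      print (segmentSum (xs ++ ys) == segmentSum xs + segmentSum ys)  -- True
      print (segmentMax (xs ++ ys))                                   -- Just 9

A loop that extends the segment one element at a time then only has to combine the running result with the embedding of the new element, a first-order update that back-end solvers handle far more readily than quantification over arrays.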
Getting the gist of it: An investigation of gist processing and the learning of novel gist categories
Gist extraction rapidly processes global structural regularities to provide access to the general meaning and global categorizations of our visual environment: the gist. Medical experts can also extract gist information from mammograms to categorize them as normal or abnormal. However, the visual properties influencing the gist of medical abnormality are largely unknown, and it is also not known how medical experts, or any observer for that matter, learn to recognise the gist of new categories. This thesis investigated the processing and acquisition of the gist of abnormality. Chapter 2 observed no significant differences in performance between 500 ms and unlimited viewing time, suggesting that the gist of abnormality is fully accessible after 500 ms and remains available during further visual processing. Next, Chapter 3 demonstrated that certain high-pass filters enhanced gist signals in mammograms at risk of future cancer, without affecting overall performance; these filters could be used to enhance mammograms for gist risk-factor scoring. Chapter 4's multi-session training showed that perceptual exposure with global feedback is sufficient to induce learning of a new gist categorisation. However, learning was affected by individual differences and was not significantly retained after 7-10 days, suggesting that prolonged perceptual exposure might be needed for consolidation. Chapter 5 observed evidence for the neural signature of gist extraction in medical experts across a network of regions, where neural activity patterns showed clear individual differences. Overall, the findings of this thesis confirm the gist extraction of medical abnormality as a rapid, global process that is sensitive to spatial structural regularities. Additionally, it was shown that a gist category can be learned via global feedback, but this learning is hard to retain and is affected by individual differences. Similarly, individual differences were observed in the neural signature of gist extraction by medical experts.
Brain Computations and Connectivity [2nd edition]
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed.
The aim of this book is to elucidate what is computed in different brain systems; and to describe current biologically plausible computational approaches and models of how each of these brain systems computes.
Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions.
This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems; and how it is computed. It updates the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press, with much new evidence, including evidence on the connectivity of the human brain.
Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, or from medical sciences including neurology and psychiatry, or from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
Lessons from Formally Verified Deployed Software Systems (Extended version)
The technology of formal software verification has made spectacular advances,
but how much does it actually benefit the development of practical software?
Considerable disagreement remains about the practicality of building systems
with mechanically checked proofs of correctness. Is this prospect confined to a
few expensive, life-critical projects, or can the idea be applied to a wide
segment of the software industry?
To help answer this question, the present survey examines a range of
projects, in various application areas, that have produced formally verified
systems and deployed them for actual use. It considers the technologies used,
the form of verification applied, the results obtained, and the lessons that
can be drawn for the software industry at large and its ability to benefit from
formal verification techniques and tools.
Note: a short version of this paper is also available, covering in detail
only a subset of the considered systems. The present version is intended for
full reference.
- …