10,201 research outputs found
Towards A Practical High-Assurance Systems Programming Language
Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about the correctness properties of such code adds yet another level of complexity to the task, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and their many nuances.
Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.
To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which gives users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
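The role of the property-based testing framework mentioned above can be illustrated with a small, self-contained sketch (plain Python, not Cogent; the functions and the refinement property are hypothetical stand-ins): a purely functional specification is checked against a lower-level implementation on randomly generated inputs, giving lightweight assurance before, or instead of, a full proof.

```python
import random

# Hypothetical purely functional specification: word count of a buffer.
def spec_word_count(text: str) -> int:
    return len(text.split())

# Hypothetical "low-level" implementation: a manual character scan,
# mimicking how the compiled C code might behave.
def impl_word_count(text: str) -> int:
    count, in_word = 0, False
    for ch in text:
        if ch.isspace():
            in_word = False
        elif not in_word:
            count, in_word = count + 1, True
    return count

# Property: the implementation agrees with (refines) the specification
# on randomly generated inputs.
def check_refinement(trials: int = 1000) -> bool:
    alphabet = "ab \t\n"
    for _ in range(trials):
        s = "".join(random.choice(alphabet)
                    for _ in range(random.randint(0, 40)))
        if impl_word_count(s) != spec_word_count(s):
            return False
    return True
```

A failing run would produce a concrete counterexample input, which is exactly the kind of feedback that helps developers understand what a later formal proof would have to rule out.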
Terminology and ontology development for semantic annotation : A use case on sepsis and adverse events
“Oh my god, how did I spend all that money?”: Lived experiences in two commodified fandom communities
This research explores the role of commodification in participation in celebrity-centric fandom communities, applying a leisure studies framework to understand the constraints fans face in their quest to participate and the negotiations they engage in to overcome these constraints.
In fan studies scholarship, there is a propensity to focus on the ways fans oppose commodified industry structures; however, this ignores the many fans who happily participate within them. Using the fandoms for the pop star Taylor Swift and the television series Supernatural as case studies, this project uses a mixed-methods approach to speak directly to fans via surveys and semi-structured interviews, developing an understanding of fans’ lived experiences based on their own words.
By focusing on celebrity-centric fandom communities rather than on the more frequently studied textual fandoms, this thesis turns to the role of the celebrity in fans’ ongoing desire to participate in commodified spaces. I argue that fans are motivated to continue spending money to participate within their chosen fandom when this form of participation is tied to the opportunity for engagement with the celebrity. While many fans seek community from their fandom participation, this research finds that for others, social ties are a secondary outcome of their overall desire for celebrity attention, which becomes a hobby in which they build a “leisure career” (Stebbins 2014). When fans successfully gain attention from their celebrity object of fandom, they gain status within their community, creating intra-fandom hierarchies based largely on financial resources and on freedom from structural constraints related to education, employment, and caring.
Ultimately, this thesis argues that the broad neglect of celebrity fandom practices means we have overlooked the experiences of many fans, necessitating a much broader future scope for the field.
Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs
Knowledge graphs and ontologies provide promising technical solutions for
implementing the FAIR Principles for Findable, Accessible, Interoperable, and
Reusable data and metadata. However, they also come with their own challenges.
Nine such challenges are discussed and associated with the criterion of
cognitive interoperability and specific FAIREr principles (FAIR + Explorability
raised) that they fail to meet. We introduce an easy-to-use, open source
knowledge graph framework that is based on knowledge graph building blocks
(KGBBs). KGBBs are small information modules for knowledge-processing, each
based on a specific type of semantic unit. By interrelating several KGBBs, one
can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic
units, the KGBB Framework clearly distinguishes and decouples an internal
in-memory data model from data storage, data display, and data access/export
models. We argue that this decoupling is essential for solving many problems of
knowledge management systems. We discuss the architecture of the KGBB Framework
as we envision it, comprising (i) an openly accessible KGBB-Repository for
different types of KGBBs, (ii) a KGBB-Engine for managing and operating FAIREr
knowledge graphs (including automatic provenance tracking, editing changelog,
and versioning of semantic units); (iii) a repository for KGBB-Functions; (iv)
a low-code KGBB-Editor with which domain experts can create new KGBBs and
specify their own FAIREr knowledge graph without having to think about semantic
modelling. We conclude by discussing the nine challenges and how the KGBB
Framework addresses the issues they raise. While most of what we discuss here
is entirely conceptual, we can point to two prototypes that demonstrate the
in-principle feasibility of using semantic units and KGBBs to manage and
structure knowledge graphs.
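The decoupling the abstract argues for, keeping a single internal in-memory data model separate from interchangeable export models, can be sketched in a few lines of Python (the class and function names here are hypothetical illustrations, not the KGBB Framework's actual API):

```python
# Hypothetical internal in-memory data model: a list of
# (subject, predicate, object) statements.
class InMemoryGraph:
    def __init__(self):
        self.statements = []

    def add(self, subject: str, predicate: str, obj: str) -> None:
        self.statements.append((subject, predicate, obj))

# Export models are decoupled from the in-memory model: each one
# renders the same statements into a different serialization, and new
# export formats can be added without touching the model itself.
def export_ntriples(graph: InMemoryGraph) -> str:
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in graph.statements)

def export_csv(graph: InMemoryGraph) -> str:
    return "\n".join(f"{s},{p},{o}" for s, p, o in graph.statements)

graph = InMemoryGraph()
graph.add("ex:camera1", "ex:locatedIn", "ex:lab")
```

Because display, storage, and access/export all consume the same internal model through such narrow interfaces, each can evolve independently, which is the essence of the decoupling claim.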
Detecting Anomalous Microflows in IoT Volumetric Attacks via Dynamic Monitoring of MUD Activity
IoT networks are increasingly becoming targets of sophisticated new
cyber-attacks. Anomaly-based detection methods are promising for finding new
attacks, but they face practical challenges: false-positive alarms, results
that are hard to explain, and difficulty scaling cost-effectively. The recent
IETF standard Manufacturer Usage Description (MUD) seems promising for limiting
the attack surface of IoT devices by formally specifying their intended network
behavior. In this paper, we use SDN to enforce and monitor the expected
behaviors of each IoT device, and train one-class classifier models to detect
volumetric attacks.
Our specific contributions are fourfold. (1) We develop a multi-level
inferencing model to dynamically detect anomalous patterns in network activity
of MUD-compliant traffic flows via SDN telemetry, followed by packet inspection
of anomalous flows. This provides enhanced fine-grained visibility into
distributed and direct attacks, allowing us to precisely isolate volumetric
attacks with microflow (5-tuple) resolution. (2) We collect traffic traces
(benign and a variety of volumetric attacks) from network behavior of IoT
devices in our lab, generate labeled datasets, and make them available to the
public. (3) We prototype a full working system (modules are released as
open-source), demonstrate its efficacy in detecting volumetric attacks on
several consumer IoT devices with high accuracy while maintaining low false
positives, and provide insights into the cost and performance of our system. (4)
We demonstrate how our models scale in environments with a large number of
connected IoTs (with datasets collected from a network of IP cameras in our
university campus) by considering various training strategies (per device unit
versus per device type), and balancing the accuracy of prediction against the
cost of models in terms of size and training time.
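The one-class idea described in contribution (1) can be sketched on synthetic flow features (numpy only; the features, threshold, and detector here are illustrative stand-ins, not the paper's trained models): fit a profile of benign, MUD-compliant flow activity, then flag microflows that deviate strongly from it.

```python
import numpy as np

class OneClassFlowDetector:
    """Toy one-class model: per-feature z-score envelope fitted on benign flows."""

    def __init__(self, threshold: float = 4.0):
        self.threshold = threshold

    def fit(self, benign: np.ndarray) -> "OneClassFlowDetector":
        self.mean = benign.mean(axis=0)
        self.std = benign.std(axis=0) + 1e-9  # avoid division by zero
        return self

    def predict(self, flows: np.ndarray) -> np.ndarray:
        # A flow is anomalous if any feature deviates strongly
        # from the benign profile.
        z = np.abs((flows - self.mean) / self.std)
        return z.max(axis=1) > self.threshold

rng = np.random.default_rng(0)
# Synthetic per-microflow features: [packets/s, bytes/s] of benign traffic.
benign = rng.normal(loc=[50.0, 6000.0], scale=[5.0, 500.0], size=(500, 2))
attack = np.array([[2000.0, 900000.0]])  # volumetric burst

det = OneClassFlowDetector().fit(benign)
print(det.predict(attack))
```

Training only on benign data is what makes the approach one-class: no attack samples are needed to fit the model, which matters when new attack types are unknown in advance.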
Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse
This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses.
This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups.
In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users’ speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018 (6 November having been the date of midterm, i.e. non-Presidential, elections in the United States). The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
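The clustering step can be sketched on toy data (numpy; the function-word feature set, the two-cluster k-means, and the sample posts are all illustrative assumptions, far simpler than the thesis's corpus statistics): each user is represented by the relative frequency of a few function words, then users with similar pervasive patterns are grouped together.

```python
import numpy as np

# Illustrative set of function words; the actual study uses much
# richer lexical and grammatical features.
FUNCTION_WORDS = ["the", "of", "and", "to", "i"]

def user_vector(texts):
    """Relative frequency of each function word across a user's posts."""
    tokens = " ".join(texts).lower().split()
    total = max(len(tokens), 1)
    return np.array([tokens.count(w) / total for w in FUNCTION_WORDS])

def two_means(X, iters=20):
    """Minimal 2-cluster k-means with a deterministic farthest-point init."""
    centers = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(axis=1))]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(axis=2),
                           axis=1)
        for j in range(2):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

users = [
    ["the results of the vote", "the count of the state"],  # "the/of"-heavy
    ["i want to go", "i need to see"],                       # "i/to"-heavy
    ["the size of the crowd"],
    ["i have to say"],
]
X = np.vstack([user_vector(u) for u in users])
labels = two_means(X)
```

In the thesis the analogous clusters are then compared against the sociodemographics of the users' municipalities; here the point is only that unconscious, pervasive features (not declared identities) drive the grouping.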
Ensuring Access to Safe and Nutritious Food for All Through the Transformation of Food Systems
Testing the nomological network for the Personal Engagement Model
The study of employee engagement has been a key focus of management for over three decades. The academic literature on engagement has generated multiple definitions, but there are two primary models of engagement: the Personal Engagement Model (PEM) of Kahn (1990) and the Work Engagement Model (WEM) of Schaufeli et al. (2002). While the former is cited by most authors as the seminal work on engagement, research has tended to focus on individual elements of the model, and most theoretical work on engagement has predominantly used the WEM to consider the topic.
The purpose of this study was to test all the elements of the nomological network of the PEM to determine whether the complete model of personal engagement is viable. This was done using data from a large, complex public sector workforce. Survey questions were designed to test each element of the PEM and administered to a sample of the workforce (n = 3,103). The scales were tested and refined using confirmatory factor analysis, and the model was then tested to determine the structure of the nomological network. This was validated, and the generalisability of the final model was tested across different work and organisational types.
The results showed that the PEM is viable but there were differences from what was originally proposed by Kahn (1990). Specifically, of the three psychological conditions deemed necessary for engagement to occur, meaningfulness, safety, and availability, only meaningfulness was found to contribute to employee engagement. The model demonstrated that employees experience meaningfulness through both the nature of the work that they do and the organisation within which they do their work. Finally, the findings were replicated across employees in different work types and different organisational types.
This thesis makes five contributions to the engagement paradigm. It advances engagement theory by testing the PEM and showing that it is an adequate representation of engagement. A model for testing the causal mechanism for engagement has been articulated, demonstrating that meaningfulness in work is a primary mechanism for engagement. The research has shown the key aspects of the workplace in which employees experience meaningfulness: the nature of the work that they do and the organisation within which they do it. It has demonstrated that this is consistent across organisation and work types. Finally, it has developed a reliable measure of the different elements of the PEM, which will support future research in this area.
Technical Dimensions of Programming Systems
Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet, this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art.
In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too.
We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them.
We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions.
Our framework is derived using a qualitative analysis of past programming systems. We outline two concrete ways of using our framework. First, we show how it can analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems.
Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and thus are only evaluable and comparable on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.