Climate Change and Critical Agrarian Studies
Climate change is perhaps the greatest threat to humanity today and plays out as a cruel engine of myriad forms of injustice, violence and destruction. The effects of climate change from human-made emissions of greenhouse gases are devastating and accelerating, yet they are uncertain and uneven in both their geography and their socio-economic impacts. Emerging from the dynamics of capitalism since the industrial revolution, as well as from industrialisation under state-led socialism, the consequences of climate change are especially profound for the countryside and its inhabitants. The book interrogates the narratives and strategies that frame climate change and examines the institutionalised responses in agrarian settings, highlighting the exclusions and inclusions that result. It explores how different people, in relation to class and other co-constituted axes of social difference such as gender, race, ethnicity, age and occupation, are affected by climate change, as well as by the climate adaptation and mitigation responses being implemented in rural areas. The book in turn explores how climate change, and the responses to it, affect processes of social differentiation, trajectories of accumulation and agrarian politics. Finally, the book examines what strategies are required to confront climate change and the underlying political-economic dynamics that cause it, reflecting on what this means for agrarian struggles across the world. The 26 chapters in this volume explore how the relationship between capitalism and climate change plays out in the rural world and, in particular, how agrarian struggles connect with the huge challenge of climate change. Through a wide variety of case studies alongside more conceptual chapters, the book makes the often-missing connection between climate change and critical agrarian studies. The book argues that making the connection between climate and agrarian justice is crucial
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Secure storage systems for untrusted cloud environments
The cloud has become established for applications that need to be scalable and highly available. However, moving data to data centers owned and operated by a third party, i.e., the cloud provider, raises security concerns: a cloud provider could easily access and manipulate the data or the program flow, which prevents the cloud from being used for certain applications, such as medical or financial ones.
Hardware vendors are addressing these concerns by developing Trusted Execution Environments (TEEs) that make the CPU state and parts of memory inaccessible to the host software. While TEEs protect the current execution state, they do not provide security guarantees for data that does not fit or reside in the protected memory area, such as network and persistent storage.
In this work, we aim to address TEEs' limitations in three different ways: first, we bring the trust of TEEs to persistent storage; second, we extend that trust to multiple nodes in a network; and third, we propose a compiler-based solution for accessing heterogeneous memory regions. More specifically:
• SPEICHER extends the trust provided by TEEs to persistent storage. SPEICHER implements a key-value interface. Its design is based on LSM data structures, but extends them to provide confidentiality, integrity, and freshness for the stored data. Thus, SPEICHER can prove to the client that the data has not been tampered with by an attacker. (A toy sketch of the integrity and freshness idea follows this list.)
• AVOCADO is a distributed in-memory key-value store (KVS) that extends the trust that TEEs provide across the network to multiple nodes, allowing KVSs to scale beyond the boundaries of a single node. On each node, AVOCADO carefully divides data between trusted memory and untrusted host memory to maximize the amount of data that can be stored on each node. AVOCADO leverages the fact that network attacks can be modeled as crash faults to trust other nodes with a hardened ABD replication protocol.
• TOAST is based on the observation that modern high-performance systems often use several different heterogeneous memory regions that are not easily distinguishable by the programmer. The number of regions is further increased by the fact that TEEs divide memory into trusted and untrusted regions. TOAST is a compiler-based approach that unifies access to different heterogeneous memory regions and provides programmability and portability. TOAST uses a load/store interface to abstract most library interfaces for different memory regions.
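To make the integrity and freshness guarantees concrete, here is a minimal, hypothetical sketch, not SPEICHER's actual LSM-based design (and omitting the encryption that provides confidentiality): a key-value store that keeps per-key version counters in (simulated) trusted memory and MACs values held in untrusted memory. The class name, counter width, and storage layout are illustrative assumptions.

```python
import hmac, hashlib

class AuthenticatedKVStore:
    """Toy KV store: integrity via a MAC, freshness via trusted version counters."""

    def __init__(self, secret: bytes):
        self._secret = secret    # would be sealed inside the TEE in a real system
        self._versions = {}      # trusted state: key -> latest version
        self._untrusted = {}     # untrusted storage: key -> (version, value, tag)

    def _mac(self, key: str, version: int, value: bytes) -> bytes:
        msg = key.encode() + version.to_bytes(8, "big") + value
        return hmac.new(self._secret, msg, hashlib.sha256).digest()

    def put(self, key: str, value: bytes) -> None:
        version = self._versions.get(key, 0) + 1
        self._versions[key] = version  # bump the trusted counter first
        self._untrusted[key] = (version, value, self._mac(key, version, value))

    def get(self, key: str) -> bytes:
        version, value, tag = self._untrusted[key]
        # Freshness: the stored version must match the trusted counter,
        # so replaying an old (value, tag) pair is detected.
        if version != self._versions[key]:
            raise ValueError("stale value (replay detected)")
        # Integrity: the MAC must verify, so tampering is detected.
        if not hmac.compare_digest(tag, self._mac(key, version, value)):
            raise ValueError("tampered value")
        return value

store = AuthenticatedKVStore(secret=b"tee-sealed-key")
store.put("balance", b"100")
assert store.get("balance") == b"100"
```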
Exploiting Process Algebras and BPM Techniques for Guaranteeing Success of Distributed Activities
Communications and collaborations among activities, processes, or systems are, in general, the basis of the complex systems known as distributed systems. Given the increasing complexity of their structure, interactions, and functionalities, many research areas are interested in providing modelling techniques and verification capabilities to guarantee their correctness and the satisfaction of properties. In particular, the formal methods community provides robust verification techniques to prove system properties. However, most approaches rely on manually designed formal models, which makes the analysis process challenging because it requires an expert in the field. On the other hand, the BPM community provides a widely used graphical notation (i.e., BPMN) to design the internal behaviour and interactions of complex distributed systems, which can be enhanced with additional features (e.g., privacy technologies). Furthermore, BPM uses process mining techniques to automatically discover these models from observed events. However, verifying properties and expected behaviour, especially in collaborations, still needs a solid methodology.
This thesis aims to exploit the features of the formal methods and BPM communities to provide approaches that enable formal verification over distributed systems. In this context, we propose two approaches. The modelling-based approach starts from BPMN models and produces process algebra specifications to enable formal verification of system properties, including privacy-related ones. The process mining-based approach starts from log observations to automatically generate process algebra specifications that enable verification capabilities
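As a rough illustration of the modelling-based direction, the following hypothetical sketch maps a toy BPMN fragment (tasks, sequence flow, and an exclusive gateway) to a CCS-style process-algebra term. The element names and the textual term syntax are assumptions made for illustration, not the thesis's actual translation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str

@dataclass
class Sequence:
    steps: list

@dataclass
class ExclusiveGateway:  # XOR split: exactly one branch is taken
    branches: list

def to_process_algebra(node) -> str:
    """Map a BPMN-like tree to a CCS-style term: '.' for sequencing, '+' for choice."""
    if isinstance(node, Task):
        return node.name
    if isinstance(node, Sequence):
        return ".".join(to_process_algebra(s) for s in node.steps)
    if isinstance(node, ExclusiveGateway):
        return "(" + " + ".join(to_process_algebra(b) for b in node.branches) + ")"
    raise TypeError(f"unsupported BPMN element: {node!r}")

order = Sequence([
    Task("receiveOrder"),
    ExclusiveGateway([Sequence([Task("approve"), Task("ship")]), Task("reject")]),
])
print(to_process_algebra(order))  # receiveOrder.(approve.ship + reject)
```

A term in this form can then be fed to a process-algebra verifier to check properties such as deadlock freedom over the collaboration.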
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
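For readers new to these rules, the classical PCR5 combination of two sources m_1, m_2 over a frame Θ, as introduced in earlier volumes of this series, redistributes each partially conflicting mass back to the elements involved, proportionally to the masses they committed. In its standard formulation (stated here as background, with the usual convention that a fraction is discarded when its denominator is zero), for X ∈ 2^Θ \ {∅}:

```latex
m_{\mathrm{PCR5}}(X) \;=\; m_{12}(X) \;+\;
  \sum_{\substack{Y \in 2^{\Theta} \\ X \cap Y = \emptyset}}
  \left[
    \frac{m_1(X)^2\, m_2(Y)}{m_1(X) + m_2(Y)} +
    \frac{m_2(X)^2\, m_1(Y)}{m_2(X) + m_1(Y)}
  \right],
\quad\text{where}\quad
m_{12}(X) \;=\; \sum_{X_1 \cap X_2 = X} m_1(X_1)\, m_2(X_2)
```

is the conjunctive consensus of the two sources.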
Because more applications of DSmT have emerged in the years since the appearance of the fourth book of DSmT in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions as well
Comparing the production of a formula with the development of L2 competence
This pilot study compares the production of a formula with the development of L2 competence across the proficiency levels of a spoken learner corpus. The results show that the formula in beginner production data is likely recalled holistically from learners' phonological memory rather than generated online, identifiable by virtue of its fluent production in the absence of any other surface-structure evidence of the formula's syntactic properties. As learners' L2 competence increases, the formula becomes sensitive to modifications which show structural conformity at each proficiency level. The transparency between the formula's modification and learners' corresponding L2 surface-structure realisations suggests that it is the independent development of L2 competence which integrates the formula into compositional language, and ultimately drives the SLA process forward
Automated and foundational verification of low-level programs
Formal verification is a promising technique to ensure the reliability of low-level programs like operating systems and hypervisors, since it can show the absence of whole classes of bugs and prevent critical vulnerabilities. However, to realize the full potential of formal verification for real-world low-level programs, one has to overcome several challenges, including: (1) dealing with the complexities of realistic models of real-world programming languages; (2) ensuring the trustworthiness of the verification, ideally by providing foundational proofs (i.e., proofs that can be checked by a general-purpose proof assistant); and (3) minimizing the manual effort required for verification by providing a high degree of automation. This dissertation presents multiple projects that advance formal verification along these three axes: RefinedC provides the first approach for verifying C code that combines foundational proofs with a high degree of automation via a novel refinement and ownership type system. Islaris shows how to scale verification of assembly code to realistic models of modern instruction set architectures, in particular Armv8-A and RISC-V. DimSum develops a decentralized approach for reasoning about programs that consist of components written in multiple different languages (e.g., assembly and C), as is common for low-level programs. RefinedC and Islaris rest on Lithium, a novel proof engine for separation logic that combines automation with foundational proofs. This research was supported in part by a Google PhD Fellowship, in part by awards from Android Security's ASPIRE program and from Google Research, and in part by a European Research Council (ERC) Consolidator Grant for the project "RustBelt", funded under the European Union's Horizon 2020 Framework Programme (grant agreement no. 683289)
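To give a flavour of the kind of specification such tools establish, here is a generic separation-logic Hoare triple for an in-place increment of a 32-bit integer; this is a textbook-style illustration, not RefinedC's actual annotation syntax:

```latex
\{\, x \mapsto n \;\ast\; n < 2^{31} - 1 \,\}
\quad \texttt{*x = *x + 1;} \quad
\{\, x \mapsto n + 1 \,\}
```

Here x ↦ n asserts exclusive ownership of the memory cell at x holding value n, and the precondition's side condition rules out signed overflow; a foundational tool would discharge such triples inside a proof assistant.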
Automated Testing of Software Upgrades for Android Systems
Apps' pervasive role in our society motivates researchers to develop automated techniques ensuring dependability through testing. However, although App updates are frequent and software engineers would like to prioritize the testing of updated features, automated testing techniques verify entire Apps and thus waste resources. Further, most testing techniques can detect only crashing failures, necessitating visual inspection of outputs to detect functional failures, which is a costly task. Despite efforts to automatically derive oracles for functional failures, the effectiveness of existing approaches is limited. Therefore, instead of automating human tasks, it seems preferable to minimize what engineers must visually inspect.
To address the problems above, in this dissertation, we propose approaches to maximize testing effectiveness while containing test execution time and human effort.
First, we present ATUA (Automated Testing of Updates for Apps), a model-based approach that synthesizes App models with static analysis, integrates a dynamically refined state abstraction function, and combines complementary testing strategies, thus enabling ATUA to generate a small set of inputs that exercise only the code affected by updates. A large empirical evaluation conducted with 72 App versions belonging to nine popular Android Apps has shown that ATUA is more effective and less effort-intensive than state-of-the-art approaches when testing App updates.
Second, we present CALM (Continuous Adaptation of Learned Models), an automated App testing approach that efficiently tests App updates by adapting App models learned when automatically testing previous App versions. CALM minimizes the number of App screens to be visualized by software testers while maximizing the percentage of updated methods and instructions exercised. Our empirical evaluation shows that CALM exercises a significantly higher proportion of updated methods and instructions than baselines for the same maximum number of App screens to be visually inspected. Further, in common update scenarios, where only a small fraction of methods are updated, CALM outperforms all competing approaches even more markedly, and does so more quickly.
Finally, we minimize test oracle cost by defining strategies for selecting, for visual inspection, a subset of the App outputs. We assessed 26 strategies, relying on either code coverage or action effect, on Apps affected by functional faults confirmed by their developers. Our empirical evaluation has shown that our strategies have the potential to identify a large proportion of faults. By combining code coverage with action effect, it is possible to reduce oracle cost by about 41.2% while enabling engineers to detect all the faults exercised by test automation approaches
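One way to picture a coverage-based selection strategy is as a greedy set cover: repeatedly pick the App screen that exercises the most not-yet-covered updated instructions, up to an inspection budget. This is a hypothetical sketch of that general idea, not the dissertation's actual 26 strategies; screen ids and instruction ids are made up.

```python
def select_screens_for_inspection(coverage: dict[str, set[str]],
                                  budget: int) -> list[str]:
    """coverage maps a screen id to the set of updated instructions it exercises."""
    remaining = dict(coverage)      # work on a copy; don't mutate the caller's dict
    selected: list[str] = []
    covered: set[str] = set()
    for _ in range(budget):
        # pick the screen adding the most uncovered instructions
        best = max(remaining, key=lambda s: len(remaining[s] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break                   # nothing left to gain
        selected.append(best)
        covered |= remaining.pop(best)
    return selected

screens = {
    "login":    {"i1", "i2"},
    "settings": {"i2", "i3", "i4"},
    "checkout": {"i4", "i5"},
}
print(select_screens_for_inspection(screens, budget=2))  # ['settings', 'login']
```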