Perception and Acceptance of an Autonomous Refactoring Bot
The use of autonomous bots for automatic support in software development
tasks is increasing. In the past, however, they have not always been perceived
positively and have sometimes faced a negative bias compared to their human
counterparts. We conducted a qualitative study in which we deployed an
autonomous refactoring bot for 41 days in a student software development
project. During and at the end of this period, we conducted semi-structured interviews to
find out how developers perceive the bot and whether they are more or less
critical when reviewing the contributions of a bot compared to human
contributions. Our findings show that the bot was perceived as a useful and
unobtrusive contributor, and developers were no more critical of it than they
were of their human colleagues, but only a few team members felt responsible
for the bot.
Comment: 8 pages, 2 figures. To be published at the 12th International Conference on Agents and Artificial Intelligence (ICAART 2020).
Security Assurance Cases -- State of the Art of an Emerging Approach
Security Assurance Cases (SAC) are a form of structured argumentation used to
reason about the security properties of a system. After the successful adoption
of assurance cases for safety, SACs have been gaining significant traction in recent
years, especially in safety-critical industries (e.g., automotive), where there
is an increasing pressure to be compliant with several security standards and
regulations. Accordingly, research in the field of SAC has flourished in the
past decade, with different approaches being investigated. In an effort to
systematize this active field of research, we conducted a systematic literature
review (SLR) of the existing academic studies on SAC. Our review resulted in an
in-depth analysis and comparison of 51 papers. Our results indicate that, while
there are numerous papers discussing the importance of security assurance cases
and their usage scenarios, the literature is still immature with respect to
concrete support for practitioners on how to build and maintain a SAC. More
importantly, even though some methodologies are available, their validation and
tool support are still lacking.
GitHub Considered Harmful? Analyzing Open-Source Projects for the Automatic Generation of Cryptographic API Call Sequences
GitHub is a popular data repository for code examples. It is continuously
used to train AI-based tools that automatically generate
code. However, the effectiveness of such tools in correctly demonstrating the
usage of cryptographic APIs has not been thoroughly assessed. In this paper, we
investigate the extent and severity of misuses caused specifically by
incorrect cryptographic API call sequences found on GitHub. We also analyze the
suitability of GitHub data to train a learning-based model to generate correct
cryptographic API call sequences. For this, we manually extracted and analyzed
the call sequences from GitHub. Using this data, we augmented an existing
learning-based model called DeepAPI to create two security-specific models that
generate cryptographic API call sequences for a given natural language (NL)
description. Our results indicate that misuses in API call sequences must not
be neglected when using data sources such as GitHub to train models that
generate code.
Comment: Accepted at QRS 202
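To illustrate the kind of call-sequence misuse the paper targets, the following Java (JCA) sketch contrasts one of the best-known insecure patterns with a correct sequence. The example is ours, not drawn from the paper's dataset; class and variable names are illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class CryptoCallSequences {

    // Misuse pattern frequently found in mined code: "AES" alone falls back
    // to the provider default (typically ECB with PKCS5 padding), which leaks
    // plaintext structure and provides no integrity protection.
    static byte[] insecureEncrypt(SecretKey key, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES"); // default mode/padding: misuse
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return cipher.doFinal(plaintext);
    }

    // Correct call sequence: authenticated encryption (AES-GCM) with a fresh
    // random IV per message, prepended so the receiver can decrypt.
    static byte[] secureEncrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();
        System.out.println(secureEncrypt(key, "hello".getBytes()).length);
    }
}
```

A model trained on unfiltered GitHub data will see both sequences; the paper's point is that the first one is common enough to contaminate what the model learns to generate.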
Secure Software Development in the Era of Fluid Multi-party Open Software and Services
Pushed by market forces, software development has become fast-paced. As a
consequence, modern development projects are assembled from 3rd-party
components. Security and privacy assurance techniques, once designed for large,
controlled updates rolled out over months or years, must now cope with small,
continuous changes taking place within a week, often in sub-components
controlled by third-party developers one might not even know exist. In
this paper, we aim to provide an overview of the current software security
approaches and evaluate their appropriateness in the face of the changed nature
of software development. Software security assurance could benefit by switching
from a process-based to an artefact-based approach. Further, security
evaluation might need to be more incremental, automated and decentralized. We
believe this can be achieved by supporting mechanisms for lightweight and
scalable screenings that are applicable to the entire population of software
components albeit there might be a price to pay.Comment: 7 pages, 1 figure, to be published in Proceedings of International
Conference on Software Engineering - New Ideas and Emerging Result
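As a hint of what such a lightweight, artefact-based screening could look like, here is a minimal Java sketch that flags an artefact's resolved dependencies against an advisory list. All names and data are hypothetical; a real screening would draw on vulnerability databases and richer artefact metadata.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DependencyScreening {

    // Hypothetical advisory data: component coordinate -> vulnerable versions.
    static final Map<String, Set<String>> ADVISORIES = Map.of(
            "org.example:parser", Set.of("1.2.0", "1.2.1"),
            "org.example:net", Set.of("0.9.0"));

    record Dependency(String coordinate, String version) {}

    // Screen an artefact's resolved dependency list. A check this cheap can
    // run on every update of every component in the population.
    static List<Dependency> screen(List<Dependency> resolved) {
        return resolved.stream()
                .filter(d -> ADVISORIES.getOrDefault(d.coordinate(), Set.of())
                        .contains(d.version()))
                .toList();
    }

    public static void main(String[] args) {
        List<Dependency> deps = List.of(
                new Dependency("org.example:parser", "1.2.1"),
                new Dependency("org.example:json", "3.0.0"));
        screen(deps).forEach(d ->
                System.out.println("flagged: " + d.coordinate() + ":" + d.version()));
    }
}
```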
Remote Trust with Aspect-Oriented Programming
Given a client/server application, how can the server
trust the integrity of the remote client, even though the latter
is running on an untrusted machine? To address this
research problem, we propose a novel approach based
on the client-side generation of an execution signature,
which is remotely checked by the server, wherein
signature generation is locked to the trusted software
by means of code integrity checking. Our approach
exploits the features of dynamic aspect-oriented
programming (AOP) to extend the power of code
integrity checkers in several ways. This paper both
presents our approach and describes a prototype
implementation for a messaging application.
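A minimal sketch of the client-side idea, assuming AspectJ's annotation style (the package name and pointcut are hypothetical): an aspect intercepts the client's method executions and folds each join point into a running execution signature that the server can later check remotely. This is our illustration of the mechanism, not the paper's prototype.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Requires the AspectJ runtime and a (load-time) weaver on the classpath.
@Aspect
public class ExecutionSignatureAspect {

    // Running hash chain over the method executions observed so far.
    private byte[] signature = new byte[32];

    // Intercept every public method of the (hypothetical) messaging client
    // and fold the executed method's signature into the running hash.
    @Around("execution(public * com.example.messaging..*.*(..))")
    public Object record(ProceedingJoinPoint jp) throws Throwable {
        chain(jp.getSignature().toLongString().getBytes(StandardCharsets.UTF_8));
        return jp.proceed();
    }

    private synchronized void chain(byte[] event) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(signature); // previous state
            md.update(event);     // new execution event
            signature = md.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Current execution signature, to be reported to the server's remote check.
    public synchronized byte[] current() {
        return signature.clone();
    }
}
```

Because the aspect is woven into the application rather than called explicitly, removing or tampering with it changes the reported signature, which is what the code integrity checking on the server side is meant to detect.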
Precise Analysis of Purpose Limitation in Data Flow Diagrams
Data Flow Diagrams (DFDs) are primarily used for modelling functional properties of a system. In recent work, it was shown that DFDs can also be used to model non-functional properties, such as security and privacy properties, if they are annotated with appropriate security- and privacy-related information. An important privacy principle one may wish to model in this way is purpose limitation. But previous work on privacy-aware DFDs (PA-DFDs) considers purpose limitation only superficially, without explaining how the purpose of DFD activators and flows ought to be specified, checked or inferred. In this paper, we define a rigorous formal framework for (1) annotating DFDs with purpose labels and privacy signatures, (2) checking the consistency of labels and signatures, and (3) inferring labels from signatures. We implement our theoretical framework in a proof-of-concept tool consisting of a domain-specific language (DSL) for specifying privacy signatures and algorithms for checking and inferring purpose labels from such signatures. Finally, we evaluate our framework and tool through a case study based on a DFD from the privacy literature.
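To make the label-checking idea concrete, here is a small Java sketch of one plausible consistency rule over purpose-labelled flows. The rule, enum values, and activator names are our assumptions for illustration, not the paper's exact definitions.

```java
import java.util.Map;
import java.util.Set;

public class PurposeLabels {

    enum Purpose { BILLING, ANALYTICS, MARKETING }

    // A data flow between two DFD activators, labelled with the purposes for
    // which the carried data may be processed.
    record Flow(String from, String to, Set<Purpose> label) {}

    // One plausible consistency rule (hypothetical, not necessarily the
    // paper's): a flow's label must be contained in the purposes declared by
    // the receiving activator's privacy signature.
    static boolean consistent(Flow f, Map<String, Set<Purpose>> signatures) {
        return signatures.getOrDefault(f.to(), Set.of()).containsAll(f.label());
    }

    public static void main(String[] args) {
        Map<String, Set<Purpose>> signatures = Map.of(
                "InvoiceService", Set.of(Purpose.BILLING),
                "AdEngine", Set.of(Purpose.MARKETING, Purpose.ANALYTICS));
        Flow ok = new Flow("Customer", "InvoiceService", Set.of(Purpose.BILLING));
        Flow bad = new Flow("Customer", "InvoiceService", Set.of(Purpose.ANALYTICS));
        System.out.println(consistent(ok, signatures));  // true
        System.out.println(consistent(bad, signatures)); // false
    }
}
```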
REMIND: A Framework for the Resilient Design of Automotive Systems
In recent years, great effort has been spent on enhancing the security and safety of vehicular systems. Current advances in information and communication technology have increased the complexity of these systems and led to extended functionality towards self-driving and more connectivity. Unfortunately, these advances open the door for diverse and newly emerging attacks that hamper the security and, thus, the safety of vehicular systems. In this paper, we contribute to supporting the design of resilient automotive systems. We review and analyze scientific literature on resilience techniques, fault tolerance, and dependability. As a result, we present the REMIND resilience framework, which provides techniques for attack detection, mitigation, recovery, and resilience endurance. Moreover, we provide guidelines on how the REMIND framework can be used against common security threats and attacks, and we further discuss the trade-offs of applying these guidelines.