Leveraging the heterogeneity of the internet of things devices to improve the security of smart environments
The growing number of devices that are being incorporated into the Internet of Things (IoT) environments leads to a wider presence of a variety of sensors, making these environments heterogeneous. However, the lack of standard input interfaces in such ecosystems poses a challenge in securing them. Among other existing vulnerabilities, the most prevalent are the lack of adequate access control mechanisms and the exploitation of cross-channel interactions between smart devices.
In order to tackle the first challenge, I propose a novel behavioral biometric system based on naturally occurring interactions with objects in smart environments. This system is designed to reduce the reliance on the app-based authentication mechanisms of current smart home platforms; it leverages existing heterogeneous IoT devices to both identify and authenticate users without requiring any hardware modifications to existing smart home devices.
To be able to collect the data and evaluate this system, I introduce an end-to-end framework for remote experiments. Such experiments play an important role across multiple fields of study, from medical science to engineering, as they allow for better representation of human participants and more realistic experimental environments, and ensure research continuity in exceptional circumstances, such as nationwide lockdowns. Yet cyber security has few standards for conducting experiments with human participants, let alone in a remote setting. This framework systematizes design and deployment practices while preserving realistic, reproducible data collection and the safety and privacy of participants.
Using this methodology, I conduct two experiments. The first is a multi-user study conducted in six households with a total of 25 participants. The second involves 13 participants in a company environment and is used to study mimicry attacks on the biometric system proposed in this thesis. I demonstrate that this system can identify users in multi-user environments with an accuracy of at least 98% for a single object interaction, without requiring any sensors on the object itself. I also show that it can provide seamless and unobtrusive authentication while remaining highly resistant to zero-effort, video-based, and in-person observation-based mimicry attacks. Even when at most 1% of the strongest type of mimicry attack is successful, this system does not require the user to take out their phone to approve legitimate transactions in more than 80% of cases for a single interaction. This increases to 92% of transactions when interactions with more objects are considered.
To mitigate the second vulnerability, where an attacker exploits multiple heterogeneous devices in a chain such that each one triggers the next, I propose a novel approach that uses only dynamic analysis to examine such interactions in smart ecosystems. I use real-time device data to generate a knowledge graph that models the interactions between devices and enables the system to identify attack chains and vulnerable automations. I evaluate this approach in a smart home environment with 8 devices and 10 automations, both with and without the presence of an active user. I demonstrate that such a system can accurately detect 10 cross-channel interactions that lead to 30 different cross-channel interaction chains in the unoccupied environment, and 6 such interactions that result in 13 interaction chains in the occupied environment.
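The idea of modelling device interactions as a graph and enumerating attack chains can be illustrated with a minimal sketch. All device names and edges below are invented for illustration and are not taken from the thesis; the actual system builds its graph from real-time device data.

```python
# Hypothetical sketch: cross-channel device interactions as a directed
# graph, with attack chains enumerated by depth-first traversal.
# Device names and edges are illustrative assumptions only.

def find_chains(edges, start, path=None):
    """Depth-first enumeration of interaction chains starting at `start`."""
    path = path or [start]
    chains = []
    for src, dst in edges:
        if src == start and dst not in path:
            new_path = path + [dst]
            chains.append(new_path)
            chains.extend(find_chains(edges, dst, new_path))
    return chains

# Each edge means: the first device's action triggers the second via a
# shared physical channel (e.g. a smart plug powering a heater raises
# the temperature, which in turn trips a thermostat automation).
edges = [
    ("smart_plug", "heater"),
    ("heater", "thermostat"),
    ("thermostat", "window_opener"),
]

for chain in find_chains(edges, "smart_plug"):
    print(" -> ".join(chain))
```

Each printed chain is a candidate cross-channel interaction chain; in the real system, chains touching safety-critical automations would be flagged as vulnerable.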
Ontology based Scene Creation for the Development of Automated Vehicles
The introduction of automated vehicles without permanent human supervision demands a functional system description, including functional system boundaries and a comprehensive safety analysis. These inputs to the technical development can be identified and analyzed by a scenario-based approach. Furthermore, to establish an economical test and release process, a large number of scenarios must be identified to obtain meaningful test results. Experts are adept at identifying scenarios that are difficult to handle or unlikely to happen. However, experts are unlikely to identify all possible scenarios based on the knowledge they have on hand. Expert knowledge modeled for computer-aided processing can help provide a wide range of scenarios. This contribution reviews ontologies as knowledge-based systems in the field of automated vehicles and proposes the generation of traffic scenes in natural language as a basis for scenario creation.
Comment: Accepted at the 2018 IEEE Intelligent Vehicles Symposium, 8 pages, 10 figures
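The combinatorial idea behind generating scenes from an ontology can be hinted at with a toy sketch. The concept and relation names below are assumptions for illustration, not the paper's ontology; a real system would draw on a far richer knowledge base.

```python
# Illustrative sketch only: a toy ontology of traffic-scene concepts,
# combined exhaustively into natural-language scene sentences.
# Class names and instances are invented, not from the paper.

ontology = {
    "Vehicle": {"instances": ["car", "truck"]},
    "RoadElement": {"instances": ["intersection", "crosswalk"]},
    "Action": {"instances": ["approaches", "stops before"]},
}

def generate_scenes(onto):
    """Combine ontology individuals into simple scene sentences."""
    scenes = []
    for vehicle in onto["Vehicle"]["instances"]:
        for action in onto["Action"]["instances"]:
            for element in onto["RoadElement"]["instances"]:
                scenes.append(f"A {vehicle} {action} the {element}.")
    return scenes

scenes = generate_scenes(ontology)
print(len(scenes))   # 2 vehicles x 2 actions x 2 elements = 8 scenes
print(scenes[0])
```

Even this toy version shows why ontology-driven generation scales beyond manual expert enumeration: adding one instance to any concept multiplies the number of candidate scenes.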
Understanding patient safety performance and educational needs using the ‘Safety-II’ approach for complex systems
Participation in projects to improve patient safety is a key component of general practice (GP) specialty training, appraisal and revalidation. Patient safety training priorities for GPs at all career stages are described in the Royal College of General Practitioners’ curriculum. Current methods that are taught and employed to improve safety often use a ‘find-and-fix’ approach to identify components of a system (including humans) where performance could be improved. However, the complex interactions and inter-dependence between components in healthcare systems mean that cause and effect are not always linked in a predictable manner. The Safety-II approach has been proposed as a new way to understand how safety is achieved in complex systems that may improve quality and safety initiatives and enhance GP and trainee curriculum coverage. Safety-II aims to maximise the number of events with a successful outcome by exploring everyday work. Work-as-done often differs from work-as-imagined in protocols and guidelines, and various ways to achieve success, depending on work conditions, may be possible. Traditional approaches to improving the quality and safety of care often aim to constrain variability, but understanding and managing variability may be a more beneficial approach. The application of a Safety-II approach to incident investigation, quality improvement projects, prospective analysis of risk in systems, and performance indicators may offer improved insight into system performance, leading to more effective change. The way forward may be to combine the Safety-II approach with ‘traditional’ methods to enhance patient safety training, outcomes and curriculum coverage.
Engineering simulations for cancer systems biology
Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models as a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resource. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support the development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures, modelling cell behaviour, interactions and response to therapeutic interventions.
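The scale-linking idea, cell-level models running concurrently while their outcomes aggregate into a tissue-level quantity, can be hinted at with a minimal sketch. Every name, threshold and value here is an illustrative assumption, not taken from the paper, which uses dedicated concurrent-programming technology rather than a thread pool.

```python
# Illustrative sketch only: cells simulated as concurrent tasks whose
# aggregate state feeds a coarser tissue-level summary, hinting at how
# concurrency can link scales without simplifying the cell model.
from concurrent.futures import ThreadPoolExecutor

def simulate_cell(signal_input):
    """Toy cell-level signalling step: threshold response to a ligand."""
    return 1 if signal_input > 0.5 else 0   # 1 = proliferative state

ligand_field = [0.2, 0.7, 0.9, 0.4, 0.6]    # per-cell input at tissue scale

# Run each cell model concurrently; a real multi-scale simulation would
# run far richer signalling models in each task.
with ThreadPoolExecutor() as pool:
    cell_states = list(pool.map(simulate_cell, ligand_field))

# Tissue-level summary derived from cell-level outcomes
proliferating_fraction = sum(cell_states) / len(cell_states)
print(cell_states)
print(proliferating_fraction)
```

The design point is that the tissue model consumes only aggregates, so the cell model can remain as detailed as the biology requires.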
An Assurance Framework for Independent Co-assurance of Safety and Security
Integrated safety and security assurance for complex systems is difficult for many technical and socio-technical reasons, such as mismatched processes, inadequate information, and differing use of language and philosophies. Many co-assurance techniques rely on disregarding some of these challenges in order to present a unified methodology. Even with this simplification, no methodology has been widely adopted, primarily because this approach is unrealistic when met with the complexity of real-world system development.
This paper presents an alternative approach by providing a Safety-Security Assurance Framework (SSAF) based on a core set of assurance principles. This is done so that safety and security can be co-assured independently, as opposed to unified co-assurance, which has been shown to have significant drawbacks. This also allows for separate processes and expertise from practitioners in each domain. With this structure, the focus is shifted from simplified unification to integration through exchanging the correct information at the right time using synchronisation activities.
Resilience markers for safer systems and organisations
If computer systems are to be designed to foster resilient performance, it is important to be able to identify contributors to resilience. The emerging practice of Resilience Engineering has identified that people are still a primary source of resilience, and that the design of distributed systems should provide ways of helping people and organisations to cope with complexity. Although resilience has been identified as a desired property, researchers and practitioners do not have a clear understanding of what manifestations of resilience look like. This paper discusses some examples of strategies that people can adopt that improve the resilience of a system. Critically, analysis reveals that the generation of these strategies is only possible if the system facilitates them. As an example, this paper discusses practices, such as reflection, that are known to encourage resilient behaviour in people. Reflection allows systems to better prepare for oncoming demands. We show that contributors to the practice of reflection manifest themselves at different levels of abstraction: from individual strategies to practices in, for example, control room environments. The analysis of interaction at these levels enables resilient properties of a system to be ‘seen’, so that systems can be designed to explicitly support them. We then present an analysis of resilience at an organisational level within the nuclear domain. This highlights some of the challenges facing the Resilience Engineering approach and the need for a collective language to articulate knowledge of resilient practices across domains.
Feature-Aware Verification
A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions, situations in which the combination of features leads to emergent and possibly critical behavior, are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only feature-local knowledge, and that variability encoding significantly improves verification performance when proving the absence of interactions.
Comment: 12 pages, 9 figures, 1 table
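The exponential blow-up that variability encoding avoids can be seen in a toy generate-and-check baseline. The features and the interaction property below are invented for illustration (they are not SPLverifier's specifications); variability encoding would instead treat the feature flags as variables of a single model checked once.

```python
# Toy brute-force baseline: enumerate all 2^n products of a product line
# and check each for a feature-interaction violation. Feature names and
# the property are illustrative assumptions, not from the paper.
from itertools import product

FEATURES = ["encrypt", "forward", "autoresponder"]

def violates(cfg):
    """Toy interaction property: forwarding an encrypted mail without
    re-encryption leaks plaintext (illustrative only)."""
    return cfg["encrypt"] and cfg["forward"]

# Generate and check every product: exponential in the number of features.
violations = [
    dict(zip(FEATURES, bits))
    for bits in product([False, True], repeat=len(FEATURES))
    if violates(dict(zip(FEATURES, bits)))
]
print(len(violations))   # 2 of the 8 products exhibit the interaction
```

With n features this loop checks 2^n products; the appeal of variability encoding is replacing it with a single model-checking run over one feature-parameterised model.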
Adaptation and implementation of a process of innovation and design within a SME
A design process is a sequence of design phases, starting with the design requirement and leading to a definition of one or several system architectures. For every design phase, various support tools and resolution methods are proposed in the literature. However, these tools are very difficult to implement in an SME, which often lacks resources. In this article we propose a complete design process for new manufacturing techniques, based on creativity and knowledge re-use in the search for technical solutions. Conscious of the difficulties of appropriation in SMEs, for every phase of our design process we propose resolution tools that are adapted to the context of a small firm. Design knowledge has been captured in a knowledge base. The knowledge structuring we propose is based on functional logic, and the design process likewise rests on the functional decomposition of the system, integrating simplification of the system architecture from the early phases of the process. For this purpose, aggregation and embodiment phases are proposed, guided by heuristics.