6 research outputs found

    COVID-19 & privacy: Enhancing of indoor localization architectures towards effective social distancing

    The way people access services in indoor environments has dramatically changed in the last year. The countermeasures to the COVID-19 pandemic imposed a disruptive requirement, namely preserving social distance among people in indoor environments. In this work we explore the possibility of adopting indoor localization technologies to measure the distance among users in indoor environments. We discuss how the collected information about people's contacts can be exploited during three stages: before, during, and after people access a service. We present a reference architecture for an Indoor Localization System (ILS), and we illustrate three representative use cases. We derive some architectural requirements, and we discuss issues that concretely arise when installing an ILS in real-world settings. In particular, we explore the privacy and trust reputation of an ILS, the discovery phase, and the deployment of the ILS in real-world settings. Finally, we present an evaluation framework for assessing the performance of the proposed architecture.
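    As an illustration of the distance-measurement idea described above, the sketch below computes pairwise distances from 2D position estimates and flags pairs closer than a threshold. The names (`Position`, `contacts_below_threshold`) and the 2 m threshold are assumptions made for the example, not part of the paper's ILS architecture.

```python
from dataclasses import dataclass
from itertools import combinations
from math import hypot

# Assumed threshold and types, for illustration only; the paper's ILS
# architecture does not prescribe these names or values.
SOCIAL_DISTANCE_M = 2.0

@dataclass
class Position:
    user_id: str
    x: float  # metres, in the venue's local coordinate frame
    y: float

def contacts_below_threshold(positions, threshold=SOCIAL_DISTANCE_M):
    """Return user pairs whose estimated distance is below the threshold."""
    violations = []
    for a, b in combinations(positions, 2):
        d = hypot(a.x - b.x, a.y - b.y)
        if d < threshold:
            violations.append((a.user_id, b.user_id, round(d, 2)))
    return violations

if __name__ == "__main__":
    estimates = [Position("u1", 0.0, 0.0),
                 Position("u2", 1.2, 0.5),
                 Position("u3", 5.0, 5.0)]
    print(contacts_below_threshold(estimates))  # [('u1', 'u2', 1.3)]
```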

    Model-connected safety cases

    Regulatory authorities require justification that safety-critical systems exhibit acceptable levels of safety. Safety cases are traditionally documents which allow the exchange of information between stakeholders and communicate the rationale of how safety is achieved via a clear, convincing and comprehensive argument and its supporting evidence. In the automotive and aviation industries, safety cases have a critical role in the certification process and their maintenance is required throughout a system’s lifecycle. Safety-case-based certification is typically handled manually and the increase in scale and complexity of modern systems renders it impractical and error prone. Several contemporary safety standards have adopted a safety-related framework that revolves around a concept of generic safety requirements, known as Safety Integrity Levels (SILs). Following these guidelines, safety can be justified through satisfaction of SILs. Careful examination of these standards suggests that despite the noticeable differences, there are converging aspects. This thesis elicits the common elements found in safety standards and defines a pattern for the development of safety cases for cross-sector application. It also establishes a metamodel that connects parts of the safety case with the target system architecture and model-based safety analysis methods. This enables the semi-automatic construction and maintenance of safety arguments that help mitigate problems related to manual approaches. Specifically, the proposed metamodel incorporates system modelling, failure information, model-based safety analysis and optimisation techniques to allocate requirements in the form of SILs. The system architecture and the allocated requirements along with a user-defined safety argument pattern, which describes the target argument structure, enable the instantiation algorithm to automatically generate the corresponding safety argument. The idea behind model-connected safety cases stemmed from a critical literature review on safety standards and practices related to safety cases. The thesis presents the method, and implemented framework, in detail and showcases the different phases and outcomes via a simple example. It then applies the method on a case study based on the Boeing 787’s brake system and evaluates the resulting argument against certain criteria, such as scalability. Finally, contributions compared to traditional approaches are laid out.
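    To make the instantiation idea concrete, here is a minimal, hypothetical sketch of how a simple goal-decomposition pattern could be instantiated over components with allocated SILs. The classes and the pattern are illustrative assumptions only; they are not the thesis's metamodel or instantiation algorithm.

```python
from dataclasses import dataclass, field

# Hypothetical, heavily simplified stand-ins for architecture and argument
# elements; the thesis's metamodel is far richer than this.
@dataclass
class Component:
    name: str
    sil: int  # allocated Safety Integrity Level

@dataclass
class Goal:
    claim: str
    subgoals: list = field(default_factory=list)

def instantiate_pattern(system_name, components):
    """Instantiate an 'argue safety over each component' goal pattern."""
    root = Goal(f"{system_name} is acceptably safe")
    for c in components:
        root.subgoals.append(
            Goal(f"{c.name} meets its allocated requirements (SIL {c.sil})"))
    return root

def render(goal, depth=0):
    """Print the generated argument as an indented goal tree."""
    print("  " * depth + "- " + goal.claim)
    for sub in goal.subgoals:
        render(sub, depth + 1)

if __name__ == "__main__":
    components = [Component("Wheel speed sensor", 2),
                  Component("Brake control unit", 4)]
    render(instantiate_pattern("Brake system", components))
```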

    Real Time Reasoning in OWL2 for GDPR Compliance

    This paper shows how knowledge representation and reasoning techniques can be used to support organizations in complying with the GDPR, that is, the new European data protection regulation. This work is carried out in a European H2020 project called SPECIAL. Data usage policies, the consent of data subjects, and selected fragments of the GDPR are encoded in a fragment of OWL2 called PL (policy language); compliance checking and policy validation are reduced to subsumption checking and concept consistency checking. This work proposes a satisfactory tradeoff between the expressiveness requirements on PL posed by the GDPR and the scalability requirements that arise from the use cases provided by SPECIAL's industrial partners. Real-time compliance checking is achieved by means of a specialized reasoner, called PLR, that leverages knowledge compilation and structural subsumption techniques. The performance of a prototype implementation of PLR is analyzed through systematic experiments and compared with the performance of other important reasoners. Moreover, we show how PL and PLR can be extended to support richer ontologies by means of import-by-query techniques. PL and its integration with OWL2's profiles constitute new tractable fragments of OWL2. We also prove some negative results concerning the intractability of unrestricted reasoning in PL and the limitations posed on ontology import.
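    The reduction of compliance checking to subsumption can be pictured as a containment test: a data-usage description complies with a consent policy if, attribute by attribute, its values fall within the permitted sets. The sketch below is only a toy illustration of that idea, with invented attribute names; it is not the PL language or the PLR reasoner from the paper.

```python
# Toy containment-based compliance check, illustrative only; not PL or PLR.

consent_policy = {                       # what the data subject permitted
    "data": {"location", "demographics"},
    "purpose": {"research", "service-improvement"},
    "storage": {"eu"},
}

usage = {                                # what the controller wants to do
    "data": {"location"},
    "purpose": {"research"},
    "storage": {"eu"},
}

def complies(usage, policy):
    """The usage is compliant if, for every attribute, its values are a
    subset of the values permitted by the consent policy."""
    return all(usage.get(attr, set()) <= allowed
               for attr, allowed in policy.items())

print(complies(usage, consent_policy))                  # True
print(complies({"data": {"health"}}, consent_policy))   # False: 'health' not consented
```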

    A Semantic Testing Approach for Deep Neural Networks Using Bayesian Network Abstraction

    The studies presented in this thesis are directed at investigating the internal decision process of Deep Neural Networks (DNNs) and testing their performance based on feature importance weights. Deep learning models have achieved state-of-the-art performance in a variety of machine learning tasks, which has led to their integration into safety-critical domains such as autonomous vehicles. The susceptibility of deep learning models to adversarial examples raises serious concerns about their application in safety-critical contexts. Most existing testing methodologies have failed to consider the interactions between neurons and the semantic representations formed in the DNN during the training process. This thesis designs weight-based semantic testing metrics that first model the internal behaviour of the DNN as a Bayesian network and the contribution of the hidden features to its decisions as importance weights. Moreover, it measures test data coverage according to the weights of the features. These approaches were followed to answer the main research question: "Is it a better measure of trustworthiness to measure the coverage of the semantic aspect of deep neural networks and treat each internal component according to its contribution value to the decision when testing these learning models' performance than to rely on traditional structural unweighted measures?" This thesis makes three main contributions to the field of machine learning. First, it proposes a novel technique for estimating the importance of a neural network's latent features through the abstraction of its behaviour into a Bayesian Network (BN). The algorithm analyses the sensitivity of each extracted feature to distributional shifts by observing changes in the BN distribution. The experimental results showed that computing the distance between two BN probability distributions, one clean and one perturbed by interval shifts or adversarial attacks, can detect the distribution shift wherever it exists. The hidden features were assigned weight scores according to the computed sensitivity distances. Secondly, to further justify the contribution of each latent feature to the classification decision, the abstract scheme of the BN was extended to perform prediction. The performance of the BN in predicting input classification labels was shown to be a decent approximation of the original DNN. Moreover, feature perturbation on the BN classifier demonstrated that each feature influences prediction accuracy differently, thereby validating the presented feature importance assumption. Lastly, the developed feature importance measure was used to assess the extent to which a given test dataset exercises the high-level features learned by the hidden layers of the DNN, treating significant representations as a priority when generating new test inputs. The evaluation compared the initial and final coverage of the proposed weighting approach with normal BN-based feature coverage. The testing coverage experiments indicated that the proposed weight metrics achieved higher coverage than the original feature metrics while maintaining the effectiveness of finding adversarial samples during the test case generation process. Furthermore, the weight metrics guaranteed that the achieved testing percentage covered the most crucial components, as the test generation algorithm was directed to synthesise new inputs targeting features with higher importance scores. Hence, this study furthers the evidence for DNNs' trustworthy behaviour.
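    The sensitivity-based weighting idea can be sketched as follows: summarise each hidden feature's activations as a discretised distribution on clean and on perturbed data, and score the feature by the distance between the two distributions. The histogram discretisation and the use of KL divergence below are assumptions made for the example, not the thesis's BN abstraction.

```python
import numpy as np

def feature_distribution(values, bins=10, value_range=(0.0, 1.0)):
    """Discretise one feature's activations into a smoothed histogram."""
    counts, _ = np.histogram(values, bins=bins, range=value_range)
    return (counts + 1e-6) / (counts.sum() + 1e-6 * bins)  # avoid zero probabilities

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def feature_importance(clean_acts, perturbed_acts):
    """Weight each hidden feature by how much its activation distribution
    shifts between clean and perturbed inputs (larger shift -> larger weight)."""
    scores = np.array([
        kl_divergence(feature_distribution(clean_acts[:, f]),
                      feature_distribution(perturbed_acts[:, f]))
        for f in range(clean_acts.shape[1])
    ])
    return scores / scores.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.uniform(0.0, 1.0, size=(500, 4))                # stand-in hidden activations
    perturbed = clean.copy()
    perturbed[:, 2] = np.clip(perturbed[:, 2] + 0.3, 0.0, 1.0)  # shift feature 2 only
    print(feature_importance(clean, perturbed).round(3))        # feature 2 dominates
```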

    Formal and Automatic Enforcement of Security Policies in Android Applications through Rewriting

    As much as they have positioned Android among the most widely used operating systems, Android applications have helped malware creators to break in and infect its devices. A long list of threats caused by downloaded applications targets the integrity of the system and the privacy of its users. While the Android system is constantly evolving to improve its security mechanism, the malware's sophistication level is skyrocketing and continuously adapting to the new measures. One of the main weaknesses threatening smartphone security is the abysmal lack of tools and environments that allow formal specification and verification of application behaviors before damage is done. In this regard, formal methods seem to be the most natural and secure way for rigorous and unambiguous specification and verification of such applications. Our ultimate goal is to formally enforce security policies on Android applications. The main idea is to establish a synergy between the aspect-oriented paradigm and formal methods such as the program rewriting technique. The approach consists of rewriting the application program by adding security tests at certain carefully selected points to ensure that the security policy is respected. The rewritten version of the program preserves all the good behaviors of the original one that comply with the security policy and acts against the bad ones.
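    A toy version of the rewriting idea: scan the program and insert a policy check immediately before each call to a sensitive operation, so that the rewritten program enforces the policy at runtime. The instruction list, the set of sensitive operations, and the check_policy guard are hypothetical simplifications, not the thesis's formal rewriting of Android applications.

```python
# Deliberately simplified model of rewriting for policy enforcement; the
# operation names and the check_policy guard are hypothetical, and this is
# not the thesis's formal rewriting of Android bytecode.

SENSITIVE_OPS = {"send_sms", "read_contacts"}   # assumed sensitive operations

def rewrite(program):
    """Insert a policy check immediately before every sensitive operation."""
    rewritten = []
    for instruction in program:
        if instruction in SENSITIVE_OPS:
            rewritten.append(f"check_policy({instruction!r})")
        rewritten.append(instruction)
    return rewritten

original = ["open_screen", "read_contacts", "send_sms", "close_screen"]
for line in rewrite(original):
    print(line)
# open_screen
# check_policy('read_contacts')
# read_contacts
# check_policy('send_sms')
# send_sms
# close_screen
```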