94 research outputs found
Mobility control via passports
Dpi is a simple distributed extension of the pi-calculus in which agents are explicitly located and may use an explicit migration construct to move between locations. In this paper we introduce passports to control those migrations: to gain access to a location, agents must now show credentials granted by the destination location. Passports are tied to the specific locations from which migration is permitted. We describe a type system for these passports, which includes a novel use of dependent types, and prove that well-typing enforces the desired behaviour in migrating processes. Passports allow locations to control incoming processes, which induces a major modification to the possible observations that can be made of agent-based systems. Using the type system, we characterise these observations and use them to build a loyal notion of observational equivalence for this setting. Finally, we provide a complete proof technique, in the form of a bisimilarity, for establishing equivalences between systems.
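The passport mechanism can be illustrated with a small sketch. This is a hypothetical model in Python, not Dpi syntax: the class names, the pair-shaped passports, and the `admit` check are all illustrative assumptions, standing in for the paper's typed credentials.

```python
# Hypothetical sketch (not Dpi syntax): a location grants passports tied to
# a specific origin location, and migration succeeds only with a matching one.

class Location:
    def __init__(self, name):
        self.name = name
        self.granted = set()   # passports this location has issued
        self.agents = set()

    def grant_passport(self, origin):
        """Issue a credential permitting migration from `origin` to here."""
        passport = (self.name, origin.name)
        self.granted.add(passport)
        return passport

    def admit(self, agent, passport, origin):
        """Admit `agent` only if it shows a passport granted for its origin."""
        if passport in self.granted and passport == (self.name, origin.name):
            origin.agents.discard(agent)
            self.agents.add(agent)
            return True
        return False

home, server = Location("home"), Location("server")
home.agents.add("agent1")
p = server.grant_passport(home)
assert server.admit("agent1", p, home)                            # accepted
assert not server.admit("agent2", ("server", "elsewhere"), home)  # rejected
```

In the paper this check is enforced statically by the type system rather than at runtime; the sketch only conveys why passports restrict which migrations an observer can provoke.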
Summary-based inference of quantitative bounds of live heap objects
This article presents a symbolic static analysis for computing parametric upper bounds on the number of simultaneously live objects in sequential Java-like programs. Inferring the peak amount of irreclaimable objects is the cornerstone of analyzing the potential heap-memory consumption of stand-alone applications or libraries. The analysis builds method-level summaries quantifying the peak number of live objects and the number of escaping objects; a method's summary is built from the summaries of its callees. The usability, scalability and precision of the technique are validated by successfully predicting the heap usage of a medium-size, real-life application that is significantly larger than previously reported case studies. Authors: Victor Adrian Braberman and Diego David Garbervetsky (Universidad de Buenos Aires, Departamento de Computación / CONICET, Argentina); Samuel Hym (Université Lille 3, France); Sergio Fabian Yovine (Universidad de Buenos Aires, Departamento de Computación / CONICET, Argentina).
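The compositional flavour of such summaries can be sketched as follows. This is a simplified hypothetical model, not the paper's actual equations: a summary records `peak` (maximum objects simultaneously live during a call) and `escape` (objects that outlive the call), and the final `escape` rule assumes, for illustration only, that all of the caller's local allocations escape.

```python
# Hypothetical sketch of summary composition (simplified, not the paper's
# analysis): escaped objects of earlier callees stay live across later
# calls, so they add to the transient peak of each subsequent callee.

from dataclasses import dataclass

@dataclass
class Summary:
    peak: int      # max objects simultaneously live while the method runs
    escape: int    # objects still live when the method returns

def compose(local_allocs, callee_summaries):
    """Build a caller's summary from its own allocations and its callees'."""
    peak = local_allocs
    escaped_so_far = 0
    for s in callee_summaries:
        # At this call site: locals + escapes accumulated so far + callee peak.
        peak = max(peak, local_allocs + escaped_so_far + s.peak)
        escaped_so_far += s.escape
    return Summary(peak=max(peak, local_allocs + escaped_so_far),
                   escape=local_allocs + escaped_so_far)  # simplifying assumption

helper = Summary(peak=5, escape=2)   # allocates up to 5 at once, 2 outlive it
main = compose(local_allocs=3, callee_summaries=[helper, helper])
assert main.peak == 10   # 3 local + 2 escaped from 1st call + 5 during 2nd
assert main.escape == 7
```

The key point the sketch shows is that callers never re-analyze callee bodies: only the two numbers in each callee's summary are needed, which is what makes the analysis scale.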
Lightweight verification of control flow policies on Java bytecode
This paper presents the enforcement of control flow policies for Java bytecode on open and constrained devices. On-device enforcement of security policies mostly relies on run-time monitoring or inlined checking code, which is not appropriate for strongly constrained devices such as mobile phones and smart cards. We present a proof-carrying-code approach with on-device lightweight static verification of control flow policies at loading time. Our approach is suitable for evolving, open and constrained Java-based systems: it is compositional, avoiding re-verification of already verified bytecode when new bytecode is loaded, and it is regressive, cleanly supporting bytecode unloading.
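The compositional and regressive properties can be conveyed with a toy model. This is a hypothetical sketch, not the paper's verifier: the policy is modelled as a flat set of allowed caller-to-callee edges, and the method names are invented.

```python
# Hypothetical sketch (not the paper's verifier): a control flow policy lists
# permitted inter-method call edges; loading new code only checks that code's
# own edges, never re-verifying previously loaded modules.

policy = {                       # allowed caller -> callee edges (illustrative)
    ("App.main", "Crypto.sign"),
    ("App.main", "Net.send"),
    ("Crypto.sign", "Crypto.hash"),
}

verified_modules = {}            # module name -> its checked call edges

def load_module(name, call_edges):
    """Compositional: verify only the incoming module's edges at load time."""
    violations = [e for e in call_edges if e not in policy]
    if violations:
        raise ValueError(f"policy violation in {name}: {violations}")
    verified_modules[name] = call_edges   # cached verdict; never re-checked

def unload_module(name):
    """Regressive: removing a module cannot invalidate other verdicts,
    since each module was checked only against the global policy."""
    verified_modules.pop(name, None)

load_module("app", {("App.main", "Crypto.sign"), ("App.main", "Net.send")})
load_module("crypto", {("Crypto.sign", "Crypto.hash")})
try:
    load_module("evil", {("Evil.run", "Net.send")})
except ValueError:
    pass                          # rejected at load time
```

In the real system the load-time check works on bytecode and proof annotations rather than a name-level edge set, but the cost structure is the same: verification work is proportional to the newly loaded code only.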
On the design of a context-switching service, and its proof, in the Pip protokernel
The Pip protokernel is a kernel whose trusted computing base is reduced to its bare bones. The goal of such minimisation is twofold: reducing the attack surface and reducing the cost of the formal proof of security. In particular, multiplexing is not implemented in the kernel but in a partition whose code is executed in user mode. This of course assumes that the kernel provides minimal services dedicated to signal sending. In this paper, we describe a streamlined service designed to allow inter-partition communication through userland structures that mimic the traditional Interrupt Descriptor Table.
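The shape of such a service can be sketched abstractly. This is a hypothetical model, not Pip's actual API: the class, vector numbers and handler signatures are illustrative assumptions, meant only to show a userland table driving a minimal kernel-side dispatch.

```python
# Hypothetical sketch (not Pip's API): a userland structure mimicking an
# Interrupt Descriptor Table maps vector numbers to partition handlers; the
# kernel's only job is to look up the entry and transfer control.

class PartitionIDT:
    """Per-partition table of signal handlers, held in userland memory."""
    def __init__(self):
        self.entries = {}          # vector number -> handler callable

    def register(self, vector, handler):
        self.entries[vector] = handler

def kernel_dispatch(idt, vector, payload):
    """Minimal kernel-side service: deliver a signal by invoking the handler
    the partition's own table names, or fault if none is registered."""
    handler = idt.entries.get(vector)
    if handler is None:
        return "fault: unhandled vector"
    return handler(payload)

idt = PartitionIDT()
idt.register(0x21, lambda msg: f"partition handled: {msg}")
```

The design point this illustrates is the one the abstract makes: because the table lives in the partition, the kernel service stays small enough to be formally proved, while multiplexing policy remains user-mode code.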
Encapsulation and Dynamic Modularity in the Pi-Calculus
We describe a process calculus featuring high-level constructs for component-oriented programming in a distributed setting. We propose an extension of the higher-order pi-calculus intended to capture several important mechanisms related to component-based programming, such as dynamic update, reconfiguration and code migration. In this paper, we are primarily concerned with the possibility of building a distributed implementation of our calculus. Accordingly, we define a low-level calculus that describes how the high-level constructs are implemented, as well as the details of the data structures manipulated at runtime. We also discuss current and future directions of research in relation to our analysis of component-based programming.
Semantically aware hierarchical Bayesian network model for knowledge discovery in data : an ontology-based framework
Several mining algorithms have been invented over the course of recent decades. However, many of the invented algorithms are confined to generating frequent patterns and do not illustrate how to act upon them. Hence, many researchers have argued that existing mining algorithms have some limitations with respect to performance and workability. Quantity and quality are the main limitations of the existing mining algorithms. While quantity states that the generated patterns are abundant, quality indicates that they cannot be integrated into the business domain seamlessly. Consequently, recent research has suggested that the limitations of the existing mining algorithms are the result of treating the mining process as an isolated and autonomous data-driven trial-and-error process and ignoring the domain knowledge. Accordingly, the integration of domain knowledge into the mining process has become the goal of recent data mining algorithms. Domain knowledge can be represented using various techniques. However, recent research has stated that ontology is the natural way to represent knowledge for data mining use. The structural nature of ontology makes it a very strong candidate for integrating domain knowledge with data mining algorithms. It has been claimed that ontology can play the following roles in the data mining process: •Bridging the semantic gap. •Providing prior knowledge and constraints. •Formally representing the DM results. Despite the fact that a variety of research has used ontology to enrich different tasks in the data mining process, recent research has revealed that the process of developing a framework that systematically consolidates ontology and the mining algorithms in an intelligent mining environment has not been realised. Hence, this thesis proposes an automatic, systematic and flexible framework that integrates the Hierarchical Bayesian Network (HBN) and domain ontology. 
The ultimate aim of this thesis is to propose a data mining framework that implicitly caters for the underpinning domain knowledge and eventually leads to a more intelligent and accurate mining process. To a certain extent the proposed mining model will simulate the cognitive system in the human being. The similarity between ontology, the Bayesian Network (BN) and bioinformatics applications establishes a strong connection between these research disciplines. This similarity can be summarised in the following points: •Both ontology and BN have a graphical-based structure. •Biomedical applications are known for their uncertainty. Likewise, BN is a powerful tool for reasoning under uncertainty. •The medical data involved in biomedical applications is comprehensive and ontology is the right model for representing comprehensive data. Hence, the proposed ontology-based Semantically Aware Hierarchical Bayesian Network (SAHBN) is applied to eight biomedical data sets in the field of predicting the effect of the DNA repair gene in the human ageing process and the identification of hub protein. Consequently, the performance of SAHBN was compared with existing Bayesian-based classification algorithms. Overall, SAHBN demonstrated a very competitive performance. The contribution of this thesis can be summarised in the following points. •Proposed an automatic, systematic and flexible framework to integrate ontology and the HBN. Based on the literature review, and to the best of our knowledge, no such framework has been proposed previously. •The complexity of learning HBN structure from observed data is significant. Hence, the proposed SAHBN model utilized the domain knowledge in the form of ontology to overcome this challenge. •The proposed SAHBN model preserves the advantages of both ontology and Bayesian theory. 
It integrates the concept of Bayesian uncertainty with the deterministic nature of ontology without extending ontology structure and adding probability-specific properties that violate the ontology standard structure. •The proposed SAHBN utilized the domain knowledge in the form of ontology to define the semantic relationships between the attributes involved in the mining process, guides the HBN structure construction procedure, checks the consistency of the training data set and facilitates the calculation of the associated conditional probability tables (CPTs). •The proposed SAHBN model lay out a solid foundation to integrate other semantic relations such as equivalent, disjoint, intersection and union
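The central idea, using an ontology hierarchy to fix the network structure instead of learning it from data, can be sketched briefly. This is a hypothetical illustration, not the thesis's SAHBN code: the concept names are invented, and real CPT estimation from data is elided.

```python
# Hypothetical sketch (not the thesis code): an ontology's subclass hierarchy
# directly supplies the directed edges of a hierarchical Bayesian network,
# so expensive structure learning is replaced by domain knowledge.

ontology = {                      # child concept -> parent concept (invented)
    "DNARepairGene": "Gene",
    "HubProtein": "Protein",
    "Gene": "BiologicalEntity",
    "Protein": "BiologicalEntity",
}

def hbn_edges(ontology):
    """Derive BN edges parent -> child from the subclass relation, so each
    node is conditioned only on its ontology-given parent."""
    return [(parent, child) for child, parent in ontology.items()]

edges = hbn_edges(ontology)
assert ("Gene", "DNARepairGene") in edges
# A CPT would then be estimated per node from training data, conditioned on
# the ontology-given parent alone, rather than on a searched-for parent set.
```

This is the sense in which the framework "uses domain knowledge to overcome" the structure-learning challenge: the search space of candidate structures collapses to one ontology-consistent graph.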
Validation of a measurement instrument for parental child feeding in a low and middle-income country
Background: Parental child feeding practices (PCFP) are a key factor influencing children's dietary intake, especially in the preschool years when eating behavior is being established. Instruments to measure PCFP have been developed and validated in high-income countries with a high prevalence of childhood obesity. The aim of this study was to test the appropriateness, content, and construct validity of selected measures of PCFP in a low and middle-income country (LMIC) in which there is both undernutrition and obesity in children. Methods: An expert panel selected subscales and items from measures of PCFP that have been well-tested in high-income countries to measure both "coercive" and "structural" behaviors. Two sequential cross-sectional studies (Study 1, n = 154; Study 2, n = 238) were conducted in two provinces in Indonesia. Findings of the first study were used to refine subscales used in Study 2. An additional qualitative study tested content validity from the perspective of mothers (the intended respondents). Factorial validation and reliability were also tested. Convergent validity was tested with child nutritional status. Results: In Study 1, a confirmatory factor analysis (CFA) model with 11 factors provided good fit (RMSEA = 0.045; CFI = 0.95 and TLI = 0.95) after two subscales were removed. Reliability was good among seven of the subscales. Following a decision to take out an additional subscale, the instrument was tested for factorial validity (Study 2). A CFA model with 10 subscales provided good fit (RMSEA = 0.03; CFI = 0.92 and TLI = 0.90). The reliability of subscales was lower than in Study 1. Convergent validity with nutrition status was found with two subscales. Conclusions: The two studies provide evidence of acceptable psychometric properties for 10 subscales from tested instruments to measure PCFP in Indonesia. This provides the first evidence of the validity of these measures in a LMIC setting. 
Some shortcomings, such as the lower reliability of some subscales, and further tests of predictive validity require further investigation.
End-to-end Mechanized Proof of an eBPF Virtual Machine for Micro-controllers
RIOT is a micro-kernel dedicated to IoT applications that adopts eBPF (extended Berkeley Packet Filters) to implement so-called femto-containers. As micro-controllers rarely feature hardware memory protection, the isolation of eBPF virtual machines (VMs) is critical to ensure system integrity against potentially malicious programs. This paper shows how to directly derive, within the Coq proof assistant, a verified C implementation of an eBPF virtual machine from a Gallina specification. Leveraging the formal semantics of the CompCert C compiler, we obtain an end-to-end theorem stating that the C code of our VM inherits the safety and security properties of the Gallina specification. Our refinement methodology ensures that the isolation property of the specification holds in the verified C implementation. Preliminary experiments demonstrate satisfactory performance.
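The isolation property at stake can be illustrated with a toy interpreter. This is a hypothetical sketch, not the verified Coq/Gallina development and not real eBPF: the three-field instruction tuples and opcode names are invented, and the point is only that every guest memory access is bounds-checked in software.

```python
# Hypothetical sketch (not the paper's verified VM): a toy register machine
# that, like an eBPF interpreter on an MMU-less micro-controller, bounds
# every memory access so a guest program cannot touch host memory.

def run(program, memory_size=16):
    """Interpret (op, a, b) tuples over registers r0..r9 and a private memory
    region; an out-of-bounds access aborts the VM instead of corrupting the
    host. Returns r0 on normal termination, None on an isolation fault."""
    regs = [0] * 10
    mem = [0] * memory_size
    for op, a, b in program:
        if op == "mov":            # regs[a] = immediate b
            regs[a] = b
        elif op == "add":          # regs[a] += regs[b]
            regs[a] += regs[b]
        elif op == "store":        # mem[regs[a]] = regs[b], bounds-checked
            if not 0 <= regs[a] < memory_size:
                return None        # isolation: reject the access
            mem[regs[a]] = regs[b]
        elif op == "load":         # regs[a] = mem[regs[b]], bounds-checked
            if not 0 <= regs[b] < memory_size:
                return None
            regs[a] = mem[regs[b]]
    return regs[0]

ok = run([("mov", 1, 3), ("mov", 2, 4), ("add", 1, 2),
          ("mov", 0, 0), ("store", 0, 1), ("load", 0, 0)])
assert ok == 7                    # 3 + 4 stored to mem[0] and reloaded
bad = run([("mov", 1, 999), ("store", 1, 1)])
assert bad is None                # out-of-bounds store rejected
```

The paper's contribution is precisely that such checks in the C implementation are proved, end to end through CompCert's semantics, to enforce the isolation stated in the Gallina specification, rather than being trusted by inspection as in this sketch.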