Probabilistic Methodology and Techniques for Artefact Conception and Development
The purpose of this paper is to present a state of the art of probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodologies and techniques. The review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the methodologies for learning, model acquisition and model comparison. We conclude with the perspectives of the BIBA project as they arise from this state of the art.
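To make the Bayesian Programming idea above concrete, here is a minimal sketch of its core move: specify a joint distribution as a product of simpler terms, then answer questions by conditioning and normalisation. The two-state "door" variable and the toy sensor model below are illustrative assumptions, not taken from the deliverable.

```python
# Minimal sketch of Bayesian-Programming-style inference: decompose the
# joint as P(S, O) = P(S) P(O | S), then compute P(S | O = o) by
# renormalising. State and observation names are illustrative only.

states = ["open", "closed"]
prior = {"open": 0.5, "closed": 0.5}                  # P(S)
likelihood = {                                        # P(O | S)
    "open":   {"bright": 0.8, "dark": 0.2},
    "closed": {"bright": 0.1, "dark": 0.9},
}

def posterior(observation):
    """P(S | O = observation) via the decomposition P(S, O) = P(S) P(O | S)."""
    unnorm = {s: prior[s] * likelihood[s][observation] for s in states}
    z = sum(unnorm.values())                          # normalisation constant
    return {s: p / z for s, p in unnorm.items()}

print(posterior("bright"))   # {'open': 0.888..., 'closed': 0.111...}
```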
A Design of MAC Model Based on the Separation of Duties and Data Coloring: DSDC-MAC
Among access control methods for database security, the Mandatory Access Control (MAC) model assigns a security level to both subjects and objects in order to strengthen security control. Legacy MAC models have focused on only one property, either confidentiality or integrity; this can cause collisions between security policies when confidentiality and integrity must be supported simultaneously. In addition, they do not provide a granular security-class policy for subjects and objects in terms of subjects' roles or tasks. In this paper, we present the security policies of the Bell-LaPadula (BLP) model and the Biba model as a single complementary policy. In addition, the Duties Separation and Data Coloring (DSDC)-MAC model, which applies a new data-coloring security method, is proposed to enable granular access control from the viewpoint of Segregation of Duty (SoD). A case study based on the design of a Human Resources Management System demonstrates the practicality of the proposed model. The model is suitable for organizations such as military forces or intelligence agencies, where confidential information must be handled carefully, and it is expected to protect systems against malicious insiders and to improve the confidentiality and integrity of data.
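For readers unfamiliar with the two legacy models being combined, a hedged sketch of their standard access rules follows. The integer levels and labels are simplified assumptions, and the DSDC-specific data-coloring rules of the paper are not reproduced here.

```python
# Sketch of the classic MAC rules the abstract contrasts: Bell-LaPadula
# (confidentiality) vs. Biba (integrity), over a simplified integer lattice.

from dataclasses import dataclass

@dataclass
class Label:
    conf: int   # confidentiality level (higher = more secret)
    integ: int  # integrity level (higher = more trusted)

def blp_allows(subj: Label, obj: Label, op: str) -> bool:
    # Bell-LaPadula: "no read up, no write down"
    return obj.conf <= subj.conf if op == "read" else subj.conf <= obj.conf

def biba_allows(subj: Label, obj: Label, op: str) -> bool:
    # Biba: "no read down, no write up"
    return subj.integ <= obj.integ if op == "read" else obj.integ <= subj.integ

# Enforcing both at once: BLP alone would allow the read, Biba alone
# would allow the write, yet the combined policy denies both. This is
# the kind of policy collision the paper sets out to resolve.
s, o = Label(conf=2, integ=2), Label(conf=1, integ=1)
for op in ("read", "write"):
    print(op, blp_allows(s, o, op) and biba_allows(s, o, op))
```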
On Properties of Policy-Based Specifications
The advent of large-scale, complex computing systems has dramatically increased the difficulty of securing access to systems' resources. To ensure confidentiality and integrity, the exploitation of access control mechanisms has thus become a crucial issue in the design of modern computing systems. Among the access control approaches proposed in recent decades, the policy-based approach makes it possible to capture, through the concept of attribute, all of a system's security-relevant information, while remaining sufficiently flexible and expressive to represent the other approaches. In this paper, we move a step further towards understanding the effectiveness of policy-based specifications by studying how they permit the enforcement of traditional security properties. To support system designers in developing and maintaining policy-based specifications, we also formalise some relevant properties regarding the structure of policies. By means of a case study from the banking domain, we present real instances of such properties and outline an approach towards their automated verification. A sketch of attribute-based policy evaluation follows.
Comment: In Proceedings WWV 2015, arXiv:1508.0338
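As a rough illustration of what an attribute-based, policy-based specification looks like operationally, here is a minimal policy-evaluation sketch. The banking-style rule, the attribute names, and the request format are assumptions for illustration, not the paper's formalism.

```python
# Minimal attribute-based access control sketch: a policy is a list of
# attribute conditions, and a request is granted iff all of them hold.

def evaluate(policy, request):
    """Return True iff every attribute condition in the policy holds."""
    return all(cond(request) for cond in policy)

# Illustrative banking rule: clerks may read accounts of their own
# branch during office hours. All attribute names are hypothetical.
policy = [
    lambda r: r["subject.role"] == "clerk",
    lambda r: r["action"] == "read",
    lambda r: r["resource.type"] == "account",
    lambda r: r["subject.branch"] == r["resource.branch"],
    lambda r: 9 <= r["env.hour"] < 17,
]

request = {
    "subject.role": "clerk", "subject.branch": "Pisa",
    "action": "read",
    "resource.type": "account", "resource.branch": "Pisa",
    "env.hour": 11,
}
print(evaluate(request and policy or policy, request) if False else evaluate(policy, request))   # True
```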
Expressing Bayesian Fusion as a Product of Distributions: Application to Randomized Hough Transform
Data fusion is a common issue in mobile robotics, computer-assisted medical diagnosis, and the behavioural control of simulated characters, for instance. However, data sources are often noisy, the opinions of experts are not known with absolute precision, and motor commands do not always act on the environment in exactly the same manner. In these cases, classical logic fails to manage the fusion process efficiently; confronting different pieces of knowledge in an uncertain environment can instead be adequately formalised in the Bayesian framework.
Besides, Bayesian fusion can be expensive in terms of memory usage and processing time. This paper aims precisely at expressing any Bayesian fusion process as a product of probability distributions in order to reduce its complexity. We first study both direct and inverse fusion schemes. We show that, contrary to direct models, inverse local models need a specific prior in order to allow the fusion to be computed as a product. We therefore propose to add a consistency variable to each local model, and we show that these additional variables allow the use of a product of the local distributions to compute the global probability distribution over the fused variable. Finally, we take the example of the Randomized Hough Transform. We rewrite it in the Bayesian framework, considering it as a fusion process that extracts lines from pairs of dots in a picture. As expected, with the appropriate assumptions we recover the expression of the Randomized Hough Transform from the literature.
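The fusion-as-product scheme the abstract describes can be sketched in a few lines: each local model supplies a distribution over the fused variable, and the global distribution is their normalised pointwise product. The toy discrete distributions below are assumptions; the paper's consistency variables, which justify this product form for inverse models, are not modelled here.

```python
# Fusion as a normalised pointwise product of local distributions over
# a discrete fused variable X.

import numpy as np

def fuse(distributions):
    """Normalised pointwise product of local distributions over X."""
    p = np.prod(np.asarray(distributions), axis=0)
    return p / p.sum()

# Two noisy sources, each a distribution over 5 discrete values of X.
p1 = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
p2 = np.array([0.05, 0.15, 0.30, 0.35, 0.15])
print(fuse([p1, p2]))   # more sharply peaked than either source alone
```

Note that the product concentrates probability mass where the sources agree, which is why expressing fusion this way both sharpens the estimate and keeps the computation to a single elementwise pass.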
Design-Time Quantification of Integrity in Cyber-Physical-Systems
In a software system it is possible to quantify the amount of information that is leaked or corrupted by analysing the information flows present in the source code. In a cyber-physical system, information flows are present not only at the digital level but also at the physical level, and between the two levels. In this work, we provide a methodology to formally analyse a cyber-physical system composite model (combining physics and control) using an information flow-theoretic approach. We use this approach to quantify the level of vulnerability of a system with respect to attackers with different capabilities. We illustrate our approach by means of a water distribution case study.
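One standard way to quantify such leakage, in the spirit of the information flow-theoretic approach above, is the mutual information between a secret and an attacker-visible observable. A minimal sketch follows; the tiny joint table is an assumed illustration, not the paper's water-distribution model.

```python
# Leakage as mutual information I(S; O), in bits, computed from a joint
# distribution over a secret S and an observable O. The joint table is
# a hypothetical example.

import math

# joint[s][o] = P(secret = s, observable = o)
joint = {
    "high_demand": {"pressure_low": 0.35, "pressure_ok": 0.15},
    "low_demand":  {"pressure_low": 0.10, "pressure_ok": 0.40},
}

def mutual_information(joint):
    ps = {s: sum(row.values()) for s, row in joint.items()}   # marginal P(S)
    po = {}                                                   # marginal P(O)
    for row in joint.values():
        for o, p in row.items():
            po[o] = po.get(o, 0.0) + p
    return sum(p * math.log2(p / (ps[s] * po[o]))
               for s, row in joint.items() for o, p in row.items() if p > 0)

print(f"leakage = {mutual_information(joint):.3f} bits")   # about 0.191 bits
```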