Toward Security Verification against Inference Attacks on Data Trees
This paper describes our ongoing work on security verification against inference attacks on data trees. We focus on infinite secrecy against inference attacks, which means that attackers cannot narrow down the candidates for the value of the sensitive information to a finite set using the information available to them. Our purpose is to propose a model under which infinite secrecy is decidable. To be specific, we first propose tree transducers which are expressive enough to represent practical queries. Then, in order to represent attackers' knowledge, we propose data tree types such that type inference and inverse type inference on those tree transducers are possible with respect to data tree types, and such that infiniteness of data tree types is decidable.

Comment: In Proceedings TTATT 2013, arXiv:1311.505
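To illustrate the secrecy notion with a toy example (a brute-force sketch, not the paper's transducer and type machinery; all names below are invented), the attacker's knowledge can be viewed as the set of databases consistent with a published query result; the secret is compromised when that candidate set collapses to a single value, and infinite secrecy asks that it never even become finite:

    # Toy illustration of inference attacks and candidate narrowing.
    # Hypothetical names; the paper's actual model uses tree transducers
    # and data tree types, not this brute-force enumeration.
    from itertools import product

    # Data "trees" reduced to flat records over a small toy domain.
    DOMAIN = ["flu", "cold", "ok"]
    PATIENTS = ["alice", "bob"]

    def query(db):
        # Published view: only the number of patients with "flu".
        return sum(1 for d in db.values() if d == "flu")

    def candidates(observed):
        # Attacker's knowledge: every database consistent with the
        # published result is still a candidate for the secret.
        return [dict(zip(PATIENTS, vals))
                for vals in product(DOMAIN, repeat=len(PATIENTS))
                if query(dict(zip(PATIENTS, vals))) == observed]

    cands = candidates(observed=2)
    secrets = {db["alice"] for db in cands}
    # If the candidate set for alice's value is a singleton, secrecy is
    # violated; the paper asks the analogous question (is the candidate
    # set infinite?) for tree-structured data and queries.
    print(secrets)   # {'flu'}: the view reveals alice's diagnosis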
Interoperability of DRM Systems
The study deals with the cutting-edge subject of electronic contracts, which have the potential to automatically process and control the access rights for (electronic) goods. It shows the design and the implementation of a rights expression exchange framework. The framework allows DRM systems to exchange electronic contracts, formulated in a standardized rights expression language, and thus provides DRM system interoperability. The work introduces a methodology for the standardized composition, exchange and processing of electronic contracts or rights expressions.
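As a minimal sketch of what exchanging a rights expression between two DRM systems might look like (the element names and vocabularies below are illustrative only, not the framework's actual rights expression language):

    # Minimal sketch of translating a rights expression between two DRM
    # vocabularies. Element names are invented for illustration.
    import xml.etree.ElementTree as ET

    CONTRACT = """
    <agreement>
      <asset id="urn:song:42"/>
      <permission action="play" count="5"/>
    </agreement>
    """

    def to_target_vocabulary(xml_text):
        # Parse the source contract and re-emit it in the (hypothetical)
        # target system's vocabulary: <licence>/<grant> instead of
        # <agreement>/<permission>.
        src = ET.fromstring(xml_text)
        licence = ET.Element("licence")
        ET.SubElement(licence, "work", ref=src.find("asset").get("id"))
        perm = src.find("permission")
        ET.SubElement(licence, "grant",
                      right=perm.get("action"),
                      uses=perm.get("count"))
        return ET.tostring(licence, encoding="unicode")

    print(to_target_vocabulary(CONTRACT))

A standardized rights expression language removes the need for such pairwise translators: each system only has to map between its internal model and the shared standard.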
Specification, validation and satisfiability of hybrid constraints by reduction to temporal logic
In recent years, many fields of computer science have been transformed by the introduction of a new vision of how a system is designed and used, called the declarative approach. Unlike the so-called imperative approach, which consists of describing, by means of a formal language, the operations to be performed in order to obtain a result, the declarative approach instead suggests describing the desired result, without specifying how this "goal" is to be reached. The declarative approach can be seen as the continuation of a trend that has run through computing since its beginnings: solving problems by manipulating concepts at ever higher levels of abstraction. Moving to a declarative paradigm nonetheless raises certain problems: current tools are ill-suited to declarative use. We identify three fundamental questions that must be resolved in order to adopt this new paradigm: expressing constraints in a formal language, validating these constraints on a structure, and finally constructing a structure that satisfies a given constraint. This thesis studies these three problems from the angle of mathematical logic. We will see that, by using a logic as the formal foundation of a language of "goals", the questions of validating and constructing a structure become two fundamental and widely studied mathematical questions: model checking and satisfiability. Using two concrete contexts as motivation, network management and service-oriented architectures, the work shows that it is possible to use mathematical logic to describe, verify and construct network configurations or web service compositions. The culmination of the research is the development of the logic CTL-FO+, which can express constraints on data, on the sequence of a system's operations, and also so-called "hybrid" constraints combining the two. A reduction of CTL-FO+ to the temporal logic CTL makes it possible to reuse existing verification tools efficiently.

AUTHOR KEYWORDS: Formal methods, Web services, Networks
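To give a flavour of the kind of hybrid constraint involved (a hedged illustration; the concrete syntax of CTL-FO+ may differ), a web service property mixing data and sequencing might be written as:

    \mathbf{AG}\, \forall x : \mathit{request} \;\Rightarrow\; \mathbf{AF}\, \exists y : \mathit{response} \wedge (x_{id} = y_{id})

that is, every request is eventually followed by a response carrying the same identifier, a constraint on both the data exchanged and the sequence of operations. When the quantifiers range over a finite set of values, each one can be unfolded into a finite conjunction or disjunction, which is the intuition behind reducing such formulas to plain CTL.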
From Relations to XML: Cleaning, Integrating and Securing Data
While relational databases are still the preferred approach for storing data, XML is emerging as the primary standard for representing and exchanging data. Consequently, it has become increasingly important to provide a uniform XML interface to various data sources (integration), and critical to protect sensitive and confidential information in XML data (access control). Moreover, it is preferable to first detect and repair inconsistencies in the data to avoid propagating errors to other data processing steps. In response to these challenges, this thesis presents an integrated framework for cleaning, integrating and securing data.
The framework contains three parts. First, the data cleaning sub-framework makes use of a new class of constraints specially designed for improving data quality, referred to as conditional functional dependencies (CFDs), to detect and remove inconsistencies in relational data. Both batch and incremental techniques are developed for efficiently detecting CFD violations in SQL and for repairing them based on a cost model. The cleaned relational data, together with other non-XML data, is then converted to XML format using widely deployed XML publishing facilities. Second, the data integration sub-framework uses a novel formalism, XML integration grammars (XIGs), to integrate multi-source XML data which is either native or published from traditional databases. XIGs automatically support conformance to a target DTD, and allow one to build a large, complex integration via composition of component XIGs. To efficiently materialize the integrated data, algorithms are developed for merging XML queries in XIGs and for scheduling them. Third, to protect sensitive information in the integrated XML data, the data security sub-framework allows users to access the data only through authorized views. User queries posed on these views need to be rewritten into equivalent queries on the underlying document to avoid the prohibitive cost of materializing and maintaining a large number of views. Two algorithms are proposed to support virtual XML views: a rewriting algorithm that characterizes the rewritten queries as a new form of automata, and an evaluation algorithm to execute the automata-represented queries. Together they allow the security sub-framework to answer queries on views in linear time.
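As a hedged sketch of what SQL-based detection of CFD violations can look like (the schema, tableau and query are illustrative, not the thesis's own detection queries), consider the textbook CFD ([cc, ac] -> [city]) with a pattern tableau stating that country code 44 with area code 131 must have city EDI:

    # Hedged sketch: detecting single-tuple CFD violations with SQL.
    # Schema, tableau and query are illustrative, not the thesis's own.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE cust(cc INT, ac INT, city TEXT);
    INSERT INTO cust VALUES (44, 131, 'EDI'), (44, 131, 'NYC'),
                            (1, 908, 'MH');
    -- Pattern tableau for the CFD ([cc, ac] -> [city]):
    -- '_' is a wildcard matching any value.
    CREATE TABLE tableau(cc TEXT, ac TEXT, city TEXT);
    INSERT INTO tableau VALUES ('44', '131', 'EDI');
    """)

    # A tuple violates the CFD if it matches the tableau's left-hand
    # side pattern but disagrees with its right-hand side pattern.
    violations = con.execute("""
        SELECT t.cc, t.ac, t.city
        FROM cust t, tableau p
        WHERE (p.cc = '_' OR t.cc = CAST(p.cc AS INT))
          AND (p.ac = '_' OR t.ac = CAST(p.ac AS INT))
          AND (p.city != '_' AND t.city != p.city)
    """).fetchall()

    print(violations)   # [(44, 131, 'NYC')] breaks the pattern

A second, self-joining query would catch violations involving pairs of tuples that agree on the left-hand side but disagree on the right; the thesis develops both batch and incremental variants of such detection.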
Using both relational and XML technologies, this framework provides a uniform approach to cleaning, integrating and securing data. The algorithms and techniques in the framework have been implemented, and an experimental study verifies their effectiveness and efficiency.
Semantic discovery and reuse of business process patterns
Patterns currently play an important role in modern information systems (IS) development, though their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalised models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.
Adding Privacy Protection to Policy Based Authorisation Systems
An authorisation system determines who is authorised to do what, i.e. it assigns privileges to users and provides a decision on whether someone is allowed to perform a requested action on a resource. A traditional authorisation decision system, which is simply called an authorisation system or system in the rest of the thesis, provides the decision based on a policy which is usually written by the system administrator. Such a traditional authorisation system is not sufficient to protect the privacy of personal data, since users (the data subjects) are usually given a take-it-or-leave-it choice to accept the controlling organisation's policy. Privacy is the ability of the owners or subjects of personal data to control the flow of data about themselves, according to their own preferences. This thesis describes the design of an authorisation system that will provide privacy for personal data by including sticky authorisation policies from the issuers and data subjects, to supplement the authorisation policy of the controlling organisation. As personal data moves from controlling system to controlling system, the sticky policies travel with the data.
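As a minimal sketch of the sticky policy idea, assuming a deliberately simplified model in which a policy merely lists the actions it permits (the thesis's policies are far richer, and conflicts are resolved dynamically rather than by the simple conjunction used here):

    # Minimal sketch of sticky policies: illustrative model only, not
    # the thesis's actual policy language.
    from dataclasses import dataclass, field

    @dataclass
    class Policy:
        authority: str    # e.g. "issuer", "data subject"
        permitted: set    # actions this authority allows

    @dataclass
    class PersonalData:
        payload: str
        sticky: list = field(default_factory=list)  # travels with data

    def authorise(action, data, controller_policy):
        # The controller's policy alone is not enough: every sticky
        # policy attached by the issuer or data subject must also
        # permit the action.
        all_policies = [controller_policy] + data.sticky
        return all(action in p.permitted for p in all_policies)

    cv = PersonalData("cv.pdf", sticky=[
        Policy("data subject", {"read"}),
        Policy("issuer", {"read", "forward"}),
    ])
    controller = Policy("controller", {"read", "forward", "sell"})

    print(authorise("read", cv, controller))   # True
    print(authorise("sell", cv, controller))   # False: subject objects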
A number of data protection laws and regulations have been formulated to protect the privacy of individuals. The rights and prohibitions provided by the law need to be enforced by the authorisation system. Hence, the designed authorisation system also includes the authorisation rules from the legislation. This thesis describes the conversion of rules from the EU Data Protection Directive into machine-executable rules. Due to the nature of the legislative rules, not all of them could be converted into deterministic machine-executable rules, as in several cases human intervention or human judgement is required. This is catered for by allowing the machine rules to be configurable.
Since the system includes independent policies from various authorities (law, issuer, data subject and controller), conflicts may arise among the decisions they provide. Consequently, this thesis describes a dynamic, automated conflict resolution mechanism in which different conflict resolution algorithms are chosen based on the request context.
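A hedged sketch of such context-dependent resolution, with invented contexts and strategy choices (the thesis's actual selection rules may differ):

    # Hedged sketch: choosing a conflict resolution algorithm from the
    # request context. Contexts and choices are illustrative only.

    def deny_overrides(decisions):
        return "deny" if "deny" in decisions else "permit"

    def permit_overrides(decisions):
        return "permit" if "permit" in decisions else "deny"

    # E.g. a clinical emergency ("break-glass") context might favour
    # availability, while a routine request favours safety.
    STRATEGY = {
        "emergency": permit_overrides,
        "routine":   deny_overrides,
    }

    def resolve(context, decisions):
        return STRATEGY[context](decisions)

    # Independent authorities disagree about the same request:
    decisions = {"law": "permit", "subject": "deny",
                 "controller": "permit"}
    print(resolve("routine", decisions.values()))    # deny
    print(resolve("emergency", decisions.values()))  # permit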
As the EU Data Protection Directive allows the processing of personal data based on contracts, we designed and implemented a component, the Contract Validation Service (ConVS), which can validate an XML-based digital contract and thereby permit such processing.
The authorisation system has been implemented as a web service and its performance measured, first by deploying it on a single computer and then on a cloud server. Finally, the validity of the design and implementation is tested against a number of use cases based on scenarios involving access to medical data in a health service provider's system, and access to personal data such as CVs and degree certificates in an employment service provider's system. The machine-computed authorisation decisions are compared to the theoretical decisions to ensure that the system returns the correct decisions.
An Approach for Managing Access to Personal Information Using Ontology-Based Chains
The importance of electronic healthcare has caused numerous changes in both substantive and procedural aspects of healthcare processes. These changes have produced new challenges to patient privacy and information secrecy. Traditional privacy policies cannot respond to the rapidly increasing privacy needs of patients in electronic healthcare. Technically enforceable privacy policies are needed in order to protect patient privacy in modern healthcare, with its cross-organisational information sharing and decision making.
This thesis proposes a personal information flow model that specifies a limited number of acts on this type of information. Ontology-classified chains of these acts can be used instead of the "intended/business purposes" used in privacy access control, seamlessly imbuing current healthcare applications and their supporting infrastructure with security and privacy functionality. In this thesis, we first introduce an integrated basic architecture, design principles, and implementation techniques for privacy-preserving data mining systems. We then discuss the key methods of privacy-preserving data mining systems, which include four main methods: role-based access control (RBAC), the Hippocratic database, the Chain method and the eXtensible Access Control Markup Language (XACML). We found that the traditional methods suffer from two main problems: the complexity of privacy policy design, and the lack of context flexibility that is needed when working in critical situations such as those found in hospitals. We present and compare strategies for realising these methods. Theoretical analysis and experimental evaluation show that our new method can generate accurate data mining models and safe data access management while protecting the privacy of the data being mined. The experiments were comparative in design: they first show the ease of policy design, and then follow real scenarios to show the context flexibility of our method in preserving the privacy of personal information.
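As a minimal sketch of the chain idea (act names and permitted chains are invented, and the ontology-based classification is not modelled here): access is decided by whether the sequence of acts performed on a piece of personal information extends some permitted chain, rather than by checking a declared business purpose:

    # Minimal sketch of act chains replacing "intended purpose" checks.
    # Act names and permitted chains are invented for illustration.

    # Chains of acts on personal information that the policy permits,
    # e.g. collect -> store -> read for treatment.
    PERMITTED_CHAINS = {
        ("collect", "store", "read"),
        ("collect", "store", "read", "disclose"),
    }

    def allowed(history, next_act):
        # The next act is allowed if the extended chain is a prefix of
        # (or equal to) some permitted chain.
        chain = tuple(history) + (next_act,)
        return any(c[:len(chain)] == chain for c in PERMITTED_CHAINS)

    record_history = ["collect", "store"]
    print(allowed(record_history, "read"))      # True
    print(allowed(record_history, "disclose"))  # False: must read first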
Ontology-based information standards development
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Standards may be argued to be important enablers for achieving interoperability, as they aim to provide unambiguous specifications for the error-free exchange of documents and information. By implication, therefore, it is important to model and represent the concept of a standard in a clear, precise and unambiguous way. Although standards development organisations usually provide guidelines for the process of developing and approving standards, these are usually more concerned with the administrative aspects of the process. As a consequence, the state of the art lacks practical support for developing the structure and content of a standard specification. In short, there is no systematic development method currently available: (a) for developing the conceptual model underpinning a standard; and/or (b) to guide a group of stakeholders in developing a standard specification.
Semantic interoperability is considered to be an essential factor for effective interoperation; indeed, some strongly equate quality with the ability to achieve semantic interoperability effectively and efficiently. Semantics require that the meaning of terms, their relationships, and the restrictions and rules in a standard be clearly defined in the early stages of standard development and act as a basis for the later stages. This research proposes that ontology can help standards developers and stakeholders to address these issues by improving conceptual models and providing a robust and shared understanding of the domain. This thesis presents OntoStanD, a comprehensive ontology-based standards development methodology, which utilises the best practices of the existing ontology creation methods.
The potential value of OntoStanD is in providing a comprehensive, clear and unambiguous method for developing robust information standards, which are more test-friendly and of higher quality. OntoStanD also facilitates standards conformance testing and change management, impacts interoperability, and assists in improved communication among the standards development team. Last, OntoStanD provides an approach that is repeatable, teachable and potentially general enough for creating any kind of information standard.

Funding: Fujitsu Laboratories of Europe Ltd, Google Anita Borg Memorial Scholarship
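As a minimal sketch of the underlying idea, using rdflib with invented terms (OntoStanD itself is a methodology, not code): the conceptual model underpinning a standard is captured as explicit classes, properties and documentation that every stakeholder shares:

    # Minimal sketch of capturing a standard's conceptual model as an
    # ontology (invented terms). Requires: pip install rdflib
    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    STD = Namespace("http://example.org/invoice-standard#")
    g = Graph()
    g.bind("std", STD)

    # Classes and relations of the (hypothetical) standard, defined
    # explicitly so every stakeholder shares the same reading of them.
    g.add((STD.Invoice, RDF.type, RDFS.Class))
    g.add((STD.LineItem, RDF.type, RDFS.Class))
    g.add((STD.hasLineItem, RDF.type, RDF.Property))
    g.add((STD.hasLineItem, RDFS.domain, STD.Invoice))
    g.add((STD.hasLineItem, RDFS.range, STD.LineItem))
    g.add((STD.Invoice, RDFS.comment,
           Literal("A request for payment covering one or more items.")))

    # The serialised ontology can then drive the specification text,
    # conformance tests, and change management.
    print(g.serialize(format="turtle"))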
A programming system for process coordination in virtual organisations
PhD thesis. Distributed business applications are increasingly being constructed by composing them from services provided by various online businesses. Typically, this leads to trading partners coming together to form virtual organisations (VOs). Each member of a VO maintains their autonomy, except with respect to their agreed goals. The structure of the virtual organisation may contain one dominant organisation that dictates the method of achieving the goals, or the members may be considered peers of equal importance. The goals of VOs can be defined by the shared global business processes they contain. To be able to execute these business processes, VOs require a flexible enactment model, as there may be no single 'owner' of the business process and therefore no natural place to enact the business processes. One solution is centralised enactment using a trusted third party, but in some cases this may not be acceptable (for instance, because of security reasons). This thesis will present a programming system that allows centralised as well as distributed enactment, where each organisation enacts part of the business process. To achieve distributed enactment we must address the problem of specifying the business process in a manner that is amenable to distribution.

The first contribution of this thesis is the presentation of the Task Model, a set of languages and notations for describing workflows that can be enacted in a centralised or decentralised manner. The business processes that we specify will coordinate the services that each organisation owns. The second contribution is a method of describing the observable behaviour of these services. The language we present, SSDL, provides a flexible and extensible way of describing the messaging behaviour of Web Services. We present a method for checking that a set of services described in SSDL are compatible with each other, and also that a workflow interacts with a service in the desired manner. The final contribution is an abstract architecture and prototype implementation of a decentralised workflow engine. The prototype is able to enact workflows described in the Task Model notation in either a centralised or decentralised scenario.
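As a hedged sketch of the compatibility check (the state machines and the lock-step walk are invented simplifications; SSDL's protocol frameworks are richer): each service's observable messaging behaviour is a small state machine over sent (!) and received (?) messages, and two behaviours are compatible when every message one side offers is matched by its dual on the other:

    # Hedged sketch of messaging-behaviour compatibility checking.
    # State machines are invented; SSDL's protocol frameworks differ.

    # Transitions: state -> {message: next_state}.
    # "!m" means send m, "?m" means receive m.
    BUYER = {0: {"!order": 1}, 1: {"?invoice": 2}, 2: {"!payment": 3}}
    SELLER = {0: {"?order": 1}, 1: {"!invoice": 2}, 2: {"?payment": 3}}

    def dual(msg):
        # A send on one side must be matched by a receive on the other.
        return ("?" if msg.startswith("!") else "!") + msg[1:]

    def compatible(a, b, sa=0, sb=0):
        # Walk both machines in lock-step; every step one side offers
        # must be matched by its dual on the other side.
        moves = a.get(sa, {})
        if not moves:
            return not b.get(sb)     # both must terminate together
        return all(
            dual(m) in b.get(sb, {}) and
            compatible(a, b, a[sa][m], b[sb][dual(m)])
            for m in moves
        )

    print(compatible(BUYER, SELLER))  # True: the behaviours are dual

The same duality check can be applied between a workflow and a single service to verify that the workflow interacts with the service in the desired manner.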