Contributions to the privacy provisioning for federated identity management platforms
Identity information, personal data and user profiles are key assets for organizations
and companies. The use of identity management (IdM) infrastructures has become a prerequisite
for most companies, since IdM systems allow them to perform their business
transactions by sharing information and customizing services for several purposes in more
efficient and effective ways.
Due to the importance of the identity management paradigm, a lot of work has been done
so far, resulting in a set of standards and specifications. According to them, under the
umbrella of the IdM paradigm a person's digital identity can be shared, linked and reused
across different domains, offering users simple session management, among other benefits. In this way,
users' information is widely collected and distributed to offer new value-added services
and to enhance availability. While these new services have a positive impact on users'
lives, they also bring privacy problems.
To manage users' personal data while protecting their privacy, IdM systems are the ideal
target for deploying privacy solutions, since they handle users' attribute exchange.
Nevertheless, current IdM models and specifications do not sufficiently address comprehensive
privacy mechanisms or guidelines that would give users better control over the
use, disclosure and revocation of their online identities. These are essential aspects, especially
in sensitive environments, where incorrect and insecure management of users' data
may lead to attacks, privacy breaches, identity misuse or fraud.
There are currently several approaches to IdM, each with benefits and shortcomings from
the privacy perspective.
The main goal of this thesis is to contribute to privacy provisioning for federated
identity management platforms. For this purpose, we propose a generic architecture
that extends current federated IdM systems. We have focused our contributions mainly
on health care environments, given their particularly sensitive nature. The two main
pillars of the proposed architecture are the introduction of a selective, privacy-enhanced
user profile management model and flexible consent revocation through an
event-based hybrid IdM approach, which replaces time constraints and explicit
revocation by activating and deactivating authorization rights according to events. The
combination of both models handles both online and offline scenarios and
empowers the user, letting her bring together identity information from
different sources.
Regarding consent revocation, we propose an implicit, event-based revocation mechanism
built around a new concept, the sleepyhead credential, which
is issued only once and can be used at any time. Moreover, we integrate this concept
into IdM systems supporting a delegation protocol, and we contribute the definition
of a mathematical model that determines how event arrivals reach the IdM system and how they are
dispatched to the corresponding entities, as well as its integration with the most widely
deployed specification, the Security Assertion Markup Language (SAML).
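The event-driven idea behind the sleepyhead credential can be sketched in a few lines. This is a toy illustration only, not the thesis's formal model or its SAML integration; all class and event names (`SleepyheadCredential`, `EventDrivenIdP`, "admission", "discharge") are invented for the example.

```python
# Toy sketch: a long-lived "sleepyhead" credential whose authorization
# rights are switched on and off by events, instead of by expiry
# timestamps or explicit revocation messages.
from dataclasses import dataclass

@dataclass
class SleepyheadCredential:
    subject: str
    rights: set
    active: bool = False        # issued once, dormant until an event wakes it

class EventDrivenIdP:
    """Hypothetical identity provider that maps events to (de)activations."""
    def __init__(self):
        self.credentials = {}
        self.rules = {}         # event name -> (subject, activate?)

    def issue(self, cred):
        self.credentials[cred.subject] = cred

    def on_event(self, event, subject, activate):
        self.rules[event] = (subject, activate)

    def notify(self, event):
        # An incoming event toggles authorization rights; no new credential
        # is issued and no revocation list is consulted.
        if event in self.rules:
            subject, activate = self.rules[event]
            self.credentials[subject].active = activate

    def authorize(self, subject, right):
        cred = self.credentials.get(subject)
        return bool(cred and cred.active and right in cred.rights)

idp = EventDrivenIdP()
idp.issue(SleepyheadCredential("alice", {"read_record"}))
idp.on_event("admission", "alice", True)    # opening event activates rights
idp.on_event("discharge", "alice", False)   # closing event deactivates them

idp.notify("admission")
print(idp.authorize("alice", "read_record"))   # True while the episode lasts
idp.notify("discharge")
print(idp.authorize("alice", "read_record"))   # False after the closing event
```

The point of the sketch is that the same credential flips between dormant and active states as events arrive, which is what lets it replace both expiry times and explicit revocation messages.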
Regarding user profile management, we define a privacy-aware user profile management
model that provides efficient selective information disclosure. With this contribution, a
service provider is able to access specific personal information without being
able to inspect any other details, while the user keeps control of her data by deciding
who can access it. The structure we consider for the user profile store is based on
extensions of Merkle trees that allow hash combining, minimizing the need for
individual verification of elements along a path. We also provide an algorithm for sorting the tree
so that frequently accessed attributes are closer to the root, minimizing access
time.
Formal validation of the above ideas has been carried out through simulations
and the development of prototypes. In addition, dissemination activities were performed in
projects, journals and conferences.
Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: María Celeste Campo Vázquez. Secretario: María Francisca Hinarejos Campos. Vocal: Óscar Esparza Martí
An Architecture for Provenance Systems
This document covers the logical and process architectures of provenance systems. The logical architecture identifies key roles and their interactions, whereas the process architecture discusses distribution and security. A fundamental aspect of our presentation is its technology-independent nature, which makes it reusable: the principles exposed in this document may be applied to different technologies.
Reasoning in Description Logic Ontologies for Privacy Management
A rise in the number of ontologies that are integrated and distributed in numerous application systems may allow users to access the ontologies with different privileges and purposes. In this situation, preserving confidential information from possible unauthorized disclosure becomes a critical requirement. For instance, in the clinical sciences, unauthorized disclosure of medical information threatens not only the system but also, most importantly, the patient data. Motivated by this situation, this thesis initially investigates a privacy problem, called the identity problem, which asks whether the identity of (anonymous) objects stored in Description Logic ontologies can be revealed. We then consider this problem in the context of role-based access control to ontologies and extend it to the problem of asking whether the identity belongs to a set of known individuals of cardinality smaller than a number k. If some confidential information about persons, such as their identity, their relationships or their other properties, can be deduced from an ontology, implying that some privacy policy is not fulfilled, then one needs to repair the ontology so that the modified version complies with the policies while preserving as much information from the original ontology as possible. The repair mechanism we provide is called gentle repair and is performed via axiom weakening instead of the axiom deletion commonly used in classical approaches to ontology repair. However, policy compliance by itself is not enough if a possible attacker can obtain relevant information from other sources which, together with the modified ontology, still violates the privacy policies. A safety property is proposed to alleviate this issue, and we investigate it in the context of privacy-preserving ontology publishing.
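The identity and k-hiding questions can be illustrated with a deliberately tiny example. Real DL reasoning requires a full reasoner; here we assume the equalities entailed by the ontology have already been computed, and we only close them and intersect with the known individuals. All names (`_:x`, `patient_17`) are invented for the illustration.

```python
# Toy illustration (not a DL reasoner): given equalities already derived
# from an ontology, ask whether an anonymous object is provably identical
# to known individuals, and whether fewer than k candidates remain.
from itertools import chain

def closure(pairs):
    """Naive symmetric/transitive closure of an equality relation."""
    eq = {x: {x} for x in set(chain.from_iterable(pairs))}
    for a, b in pairs:
        merged = eq[a] | eq[b]
        for x in merged:        # re-point every member at the merged class
            eq[x] = merged
    return eq

def revealed_as(eq, anonymous, known):
    """Identity problem: known individuals proven equal to the anonymous one."""
    return eq.get(anonymous, {anonymous}) & known

derived = [("_:x", "patient_17")]            # entailment supplied by a reasoner
known = {"patient_17", "patient_42"}
ids = revealed_as(closure(derived), "_:x", known)
print(ids)          # {'patient_17'}: the anonymous object is identified
print(len(ids) < 2)  # with k = 2, the identity hides among fewer than k: True
```

In the thesis the hard part is deciding which equalities are entailed at all, which is where the complexity results for ALC and EL come in; this sketch starts after that step.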
Inference procedures to solve those privacy problems, along with investigations of the complexity of the procedures and the worst-case complexity of the problems, are the main contributions of this thesis.
1. Introduction
1.1 Description Logics
1.2 Detecting Privacy Breaches in Information System
1.3 Repairing Information Systems
1.4 Privacy-Preserving Data Publishing
1.5 Outline and Contribution of the Thesis
2. Preliminaries
2.1 Description Logic ALC
2.1.1 Reasoning in ALC Ontologies
2.1.2 Relationship with First-Order Logic
2.1.3. Fragments of ALC
2.2 Description Logic EL
2.3 The Complexity of Reasoning Problems in DLs
3. The Identity Problem and Its Variants in Description Logic Ontologies
3.1 The Identity Problem
3.1.1 Description Logics with Equality Power
3.1.2 The Complexity of the Identity Problem
3.2 The View-Based Identity Problem
3.3 The k-Hiding Problem
3.3.1 Upper Bounds
3.3.2 Lower Bound
4. Repairing Description Logic Ontologies
4.1 Repairing Ontologies
4.2 Gentle Repairs
4.3 Weakening Relations
4.4 Weakening Relations for EL Axioms
4.4.1 Generalizing the Right-Hand Sides of GCIs
4.4.2 Syntactic Generalizations
4.5 Weakening Relations for ALC Axioms
4.5.1 Generalizations and Specializations in ALC w.r.t. Role Depth
4.5.2 Syntactical Generalizations and Specializations in ALC
5. Privacy-Preserving Ontology Publishing for EL Instance Stores
5.1 Formalizing Sensitive Information in EL Instance Stores
5.2 Computing Optimal Compliant Generalizations
5.3 Computing Optimal Safe^{\exists} Generalizations
5.4 Deciding Optimality^{\exists} in EL Instance Stores
5.5 Characterizing Safety^{\forall}
5.6 Optimal P-safe^{\forall} Generalizations
5.7 Characterizing Safety^{\forall\exists} and Optimality^{\forall\exists}
6. Privacy-Preserving Ontology Publishing for EL ABoxes
6.1 Logical Entailments in EL ABoxes with Anonymous Individuals
6.2 Anonymizing EL ABoxes
6.3 Formalizing Sensitive Information in EL ABoxes
6.4 Compliance and Safety for EL ABoxes
6.5 Optimal Anonymizers
7. Conclusion
7.1 Main Results
7.2 Future Work
Bibliography
Model Checking of Software Defined Networks using Header Space Analysis
This thesis investigates the verification of network state validity from a cyber security perspective.
The fields of interest are dynamic networks such as OpenFlow and Software Defined Networks (SDN), where these problems may have a larger attack surface and greater impact.
The framework under study, Header Space Analysis (HSA), is a formal, protocol-agnostic model that allows static policy checking both in classical TCP/IP networks and in modern dynamic SDNs.
The goal is to analyse some classes of network failure, declaring valid network states and recognizing invalid ones.
HSA has evolved into NetPlumber to face the problems caused by the high dynamics of SDN networks.
The main difference between HSA and NetPlumber is the incremental way in which the latter performs checks and keeps state updated, verifying the compliance of the actual state with the expected state defined in its model; the concept, however, is the same: declare what is allowed and recognize states violating that model.
The second and main contribution of this thesis expands this vision to increase the degree of network security, introducing model-checking-based networks through an abstraction layer that provides a security-focused model-checking service to SDN.
The developed system, called MCS (Model Checking Service), is implemented for an existing SDN solution called ONOS, using NetPlumber as the underlying model-checking technology, but its validity is general and uncoupled from any particular SDN implementation.
Finally, a demo shows how some well-known security attacks in modern networks can be prevented or mitigated using the reactive behavior of MCS.
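The central operation of Header Space Analysis can be sketched compactly. This is a drastically simplified, illustrative reading of the framework: headers are ternary strings over {0, 1, x} (x = wildcard), and a policy check intersects the header set a rule forwards with the header set a policy forbids; the 4-bit headers and the "untrusted zone" policy are invented for the example.

```python
# Back-of-the-envelope sketch of the HSA idea: a non-empty intersection
# between a forwarding rule's match and a forbidden header set means some
# forbidden packets can get through, i.e. an invalid network state.
def intersect(a, b):
    """Bitwise intersection of two wildcard headers; None if empty."""
    out = []
    for x, y in zip(a, b):
        if x == "x":
            out.append(y)
        elif y == "x" or x == y:
            out.append(x)
        else:
            return None          # contradictory bit: empty intersection
    return "".join(out)

# Rule: forward every header matching 10xx toward a protected segment.
forwarding_match = "10xx"
# Policy: headers 101x (say, traffic from an untrusted zone) must never
# reach that segment.
forbidden = "101x"

violation = intersect(forwarding_match, forbidden)
print(violation)                 # "101x": some forbidden headers get through
```

NetPlumber's contribution, per the abstract, is doing this kind of check incrementally as rules change, rather than recomputing everything from scratch.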
Trust negotiation policy management for service-oriented applications
Service-oriented architectures (SOA), and in particular Web services, have quickly become a popular technology to
connect applications both within and across enterprise boundaries. However, as services are increasingly used to
implement critical functionality, security has become an important concern impeding the widespread adoption of SOA.
Trust negotiation is an approach to access control that may be applied in scenarios where service requesters are often
unknown in advance, such as for services available via the public Internet. Rather than relying on requesters'
identities, trust negotiation makes access decisions based on the level of trust established between the requester and
the provider in a negotiation, during which the parties exchange credentials, which are signed assertions that describe
some attributes of the owner.
However, managing the evolution of trust negotiation policies is a difficult problem that has not been sufficiently
addressed to date. Access control policies have a lifecycle, and they are revised based on applicable business
policies. Additionally, because a trust relationship established in a trust negotiation may be long-lasting, its
evolution must also be managed. Simply allowing a negotiation to continue according to an old policy may be
undesirable, especially if important new constraints have been added.
In this thesis, we introduce a model-driven trust negotiation framework for service-oriented applications. The
framework employs a model for trust negotiation, based on state machines, that allows automated generation of the
control structures necessary to enforce trust negotiation policies from the visual model of the policy. Our policy
model also supports lifecycle management. We provide sets of operations to modify policies and to manage ongoing
negotiations, and operators for identifying and managing impacts of changes to trust negotiation policies on ongoing
trust negotiations.
The framework presented in the thesis has been implemented in the Trust-Serv prototype, which leverages industry
specifications such as WS-Security and WS-Trust to offer a container-centric mechanism for deploying trust negotiation
that is transparent to the services being protected.
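The state-machine view of a negotiation, including a mid-negotiation policy change, can be sketched as follows. This is a hedged toy, not Trust-Serv's actual model, which is far richer; the credential names and the single "required set" policy are invented for illustration.

```python
# Sketch: a trust negotiation as a small state machine. Credentials received
# from the requester move the negotiation toward "granted"; a policy update
# is applied to the ongoing negotiation rather than letting it finish under
# the old policy.
class Negotiation:
    def __init__(self, required):
        self.required = set(required)   # credentials the provider still needs
        self.state = "negotiating"

    def receive(self, credential):
        if self.state != "negotiating":
            return self.state
        self.required.discard(credential)
        if not self.required:
            self.state = "granted"      # enough trust established
        return self.state

    def update_policy(self, new_required):
        # Lifecycle management: new constraints bind ongoing negotiations.
        if self.state == "negotiating":
            self.required |= set(new_required)

n = Negotiation({"employee_id"})
n.update_policy({"clearance_level"})    # policy revised mid-negotiation
n.receive("employee_id")
print(n.receive("clearance_level"))     # "granted"
```

Without the `update_policy` step, the negotiation would have completed under the old policy after a single credential, which is exactly the situation the thesis flags as undesirable.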
Semantic data integration and knowledge graph creation at scale
Contrary to data, knowledge is often abstract. Concrete knowledge can be achieved by including semantics in data models, which highlights the role of data integration. The massive growth of data in recent years has increased the demand for scaling up data management techniques; materialized data integration, a.k.a. knowledge graph creation, falls into that category.
In this thesis, we investigate efficient methods and techniques for materializing data integration. We formalize the process of materializing data integration, and we formally define the characteristics of a materialized data integration system that merges data operators and sources. Owing to this formalism, both layers of data integration, data-level and schema-level, are formalized in the context of mapping assertions. We explore optimization opportunities for improving the materialization of data integration systems, identifying three angles, including intra- and inter-mapping assertions, from which the materialization can be improved. Accordingly, we propose source-based, mapping-based, and inter-mapping-assertion groups of optimization techniques. We apply the proposed techniques in three real-world projects and illustrate how these optimization techniques contribute to meeting the projects' objectives.
Furthermore, we study the parameters that impact the performance of materialized data integration. Relying on reported parameters and presumably impactful parameters, we build four groups of testbeds. We empirically study the performance of these testbeds in the presence and absence of our proposed techniques, in terms of execution time, and observe savings of up to 75%.
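One way to read the source-based optimization is that mapping assertions over the same source are grouped so each source is scanned once rather than once per assertion. The following is an assumed, minimal sketch of that idea; the file names and assertion fields are invented and do not reflect the thesis's actual mapping language.

```python
# Sketch of source-based grouping: naively, each mapping assertion triggers
# its own scan of its source; grouping by source reduces this to one scan
# per distinct source.
from collections import defaultdict

assertions = [
    {"source": "patients.csv", "subject": "Patient", "column": "id"},
    {"source": "patients.csv", "subject": "Patient", "column": "birth_year"},
    {"source": "mutations.csv", "subject": "Mutation", "column": "gene"},
]

def group_by_source(mapping_assertions):
    groups = defaultdict(list)
    for ma in mapping_assertions:
        groups[ma["source"]].append(ma)
    return groups

scans_naive = len(assertions)                     # one scan per assertion
scans_grouped = len(group_by_source(assertions))  # one scan per source
print(scans_naive, scans_grouped)                 # 3 2
```

The saving grows with the number of assertions sharing a source, which is consistent with the large execution-time reductions the thesis reports on real testbeds.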
Lastly, we contribute to facilitating the definition of declarative data integration systems. We propose two data operation function signatures in the Function Ontology (FnO). The first set of functions performs entity alignment by resorting to an entity and relation linking tool. The second library consists of domain-specific functions that align genomic entities by harmonizing their representations. Finally, we introduce a tool equipped with a user interface that facilitates the definition of declarative mapping rules by allowing users to explore the data sources and the unified schema while defining their correspondences.
Securing cloud service archives for function and data shipping in industrial environments
The Cloud Computing paradigm needs a standard for portability and for the automated deployment and management of cloud services, to eliminate vendor lock-in and minimize management effort, respectively. The Topology and Orchestration Specification for Cloud Applications (TOSCA) language provides such a standard by employing semantics for representing the components and business processes of a cloud application. Advancements in the fields of Cloud Computing and the Internet of Things (IoT) have opened new research areas supporting the fourth industrial revolution (Industry 4.0), which in turn has resulted in the emergence of smart services. One application of smart services is predictive maintenance, which enables the anticipation of future device states by implementing functions, for example analytics algorithms, and collecting huge amounts of data from sensors. Considering performance demands and runtime constraints, either the data can be shipped to the function site, called data shipping, or the functionality is provisioned close to the data site, called function shipping. However, since this data can contain confidential information, it has to be ensured that access to the data is strictly controlled. Although TOSCA already enables the definition of policies in general, a concrete data security policy approach is missing. Moreover, the constituents of TOSCA are packaged in a self-contained and portable archive, called a Cloud Service Archive (CSAR), which also needs to be secured and restricted to authorized personnel.
Taking the above facts into account, the goal of this thesis is to refine and extend the TOSCA standard for the field of smart services in production environments through the use of policies, for example to effectively define security aspects. Various available policy languages, along with the frameworks supporting them, are surveyed, and their applicability to the field of Industry 4.0 is analyzed. An approach is formulated with one selected language to define policies for TOSCA-compliant cloud applications. Furthermore, a prototype is developed to secure the content of a CSAR using the proposed approach.
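Since a CSAR is a zip archive, the kind of enforcement the thesis targets can be sketched as a guard that checks a per-entry policy before handing content to a requester. The policy format, the roles, and the entry names below are all invented for illustration; the thesis's prototype uses a proper policy language rather than a Python dictionary.

```python
# Sketch: role-based access control over the entries of a CSAR (a zip
# archive), so confidential payloads inside the archive stay restricted
# even though the archive itself is portable.
import io
import zipfile

# Hypothetical policy: which roles may read which archive entries.
policy = {
    "Definitions/service-template.xml": {"operator", "developer"},
    "Files/sensor-data.csv": {"operator"},      # confidential payload
}

def read_entry(csar_bytes, entry, role):
    """Return the entry's content only if the role is authorized for it."""
    if role not in policy.get(entry, set()):
        raise PermissionError(f"role {role!r} may not read {entry!r}")
    with zipfile.ZipFile(io.BytesIO(csar_bytes)) as csar:
        return csar.read(entry)

# Build a tiny in-memory CSAR for the demonstration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as csar:
    csar.writestr("Definitions/service-template.xml", "<tosca/>")
    csar.writestr("Files/sensor-data.csv", "t,value\n0,1.0\n")

print(read_entry(buf.getvalue(), "Definitions/service-template.xml",
                 "developer"))                  # b'<tosca/>'
try:
    read_entry(buf.getvalue(), "Files/sensor-data.csv", "developer")
except PermissionError as e:
    print(e)                                    # access denied for this role
```

In a real deployment the check would sit in the CSAR processing environment rather than in the reading client, so that it is enforced regardless of who obtains the archive.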