354 research outputs found

    On the Evaluation of RDF Distribution Algorithms Implemented over Apache Spark

    Querying very large RDF data sets efficiently requires a sophisticated distribution strategy. Several innovative solutions have recently been proposed for optimizing data distribution with predefined query workloads. This paper presents an in-depth analysis and experimental comparison of five representative and complementary distribution approaches. To obtain fair experimental results, we use Apache Spark as a common parallel computing framework and rewrite the concerned algorithms with the Spark API. Spark provides guarantees in terms of fault tolerance, high availability and scalability, which are essential in such systems. Our implementations aim to highlight the fundamental, implementation-independent characteristics of each approach in terms of data preparation, load balancing, data replication and, to some extent, query answering cost and performance. The presented measures are obtained by testing each system on one synthetic and one real-world data set over query workloads with differing characteristics and different partitioning constraints. (Comment: 16 pages, 3 figures)
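
    As a rough, illustrative sketch of the kind of decision such a distribution strategy makes, the snippet below hash-partitions RDF triples by subject so that all triples sharing a subject are co-located on one partition; the triples and the partition count are invented for the example, and the paper's actual implementations are written against the Spark API rather than plain Python.

        # Minimal sketch of subject-hash partitioning for RDF triples.
        # Plain Python stands in for a Spark job; triples and partition
        # count are made up for illustration.
        import zlib
        from collections import defaultdict

        triples = [
            ("ex:alice", "foaf:knows", "ex:bob"),
            ("ex:alice", "foaf:name", '"Alice"'),
            ("ex:bob",   "foaf:knows", "ex:carol"),
        ]

        NUM_PARTITIONS = 4  # hypothetical cluster size

        def partition_of(subject: str, num_partitions: int = NUM_PARTITIONS) -> int:
            # Deterministic hash placement: every triple with the same subject
            # lands on the same partition, keeping subject-star queries local.
            return zlib.crc32(subject.encode("utf-8")) % num_partitions

        partitions = defaultdict(list)
        for s, p, o in triples:
            partitions[partition_of(s)].append((s, p, o))

        for pid, part in sorted(partitions.items()):
            print(pid, part)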

    Security Aspects in Web of Data Based on Trust Principles. A brief of Literature Review

    Within the scientific community, there is a certain consensus that defines "Big Data" as a global set, built through a complex integration that embraces several dimensions, from research data to Open Data, Linked Data, Social Network Data, etc. These data are scattered across different sources, resulting in a mix that responds to diverse philosophies, a great diversity of structures, different denominations, and so on. Their management faces great technological and methodological challenges: the discovery and selection of data, their extraction and final processing, preservation, visualization, accessibility, and greater or lesser structuring, among other aspects, which reveal a huge domain of study at the level of analysis and implementation in different knowledge domains. However, given the availability of data and their possible opening, what problems does data opening face? This paper presents a literature review of these security aspects.

    Semantic-Based Access Control Mechanisms in Dynamic Environments

    The appearance of dynamic distributed networks in the early eighties of the last century spurred the development of technologies such as pervasive systems, ubiquitous computing, ambient intelligence and, more recently, the Internet of Things (IoT). Moreover, sensing capabilities embedded in computing devices offer users the ability to share, retrieve, and update resources on an anytime, anywhere basis. These resources (or data) constitute what is widely known as contextual information. In these systems, there is an association between a system and its environment, and the system should always adapt to its ever-changing environment. This situation makes Context-Based Access Control (CBAC) the method of choice for such environments. However, most traditional policy models do not address the dynamic nature of dynamic distributed systems and are limited in addressing issues like adaptability, extensibility, and reasoning over security policies. We propose a security framework for dynamic distributed network domains that is based on semantic technologies. This framework presents a flexible and adaptable context-based access control authorization model for protecting the resources of dynamic distributed networks. We extend our security model to incorporate context delegation in context-based access control environments. We show that the security mechanisms provided by the framework are sound and adhere to the least-privilege principle. We develop a prototype implementation of our framework and present results showing that it correctly derives context-based authorization decisions. Furthermore, we provide a complexity analysis of the authorization framework's response to requests and contrast this complexity against possible optimizations that can be applied to the framework. Finally, we incorporate semantic-based obligations into our security framework. In phase I of our research, we design two lightweight Web Ontology Language (OWL) ontologies, CTX-Lite and CBAC. The CTX-Lite ontology serves as a core ontology for context handling, while the CBAC ontology is used for modeling access control policy requirements. Based on the two OWL ontologies, we develop an access authorization approach in which the access decision is made solely on the basis of the context of the request. We separate context operations from access authorization operations to reduce processing time for the devices of distributed networks. In phase II, we present two novel ontology-based context delegation approaches: monotonic context delegation, which adopts the GRANT version of delegation, and non-monotonic context delegation for the TRANSFER version. Our goal is to present context delegation mechanisms that can be adopted by existing CBAC systems which do not provide delegation services. Phase III has two sub-phases: the first provides a complexity analysis of the authorization framework, and the second is dedicated to incorporating semantic-based obligations.
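
    As a purely illustrative sketch of the kind of decision a context-based access control check makes, the snippet below evaluates an invented policy against an invented request context; the thesis itself models contexts and policies with the CTX-Lite and CBAC OWL ontologies and semantic reasoning, none of which is reproduced here.

        # Illustrative context-based access control (CBAC) check.
        # Policy structure, context fields and values are hypothetical.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Context:
            location: str
            time_of_day: str      # e.g. "office-hours" / "after-hours"
            device_trusted: bool

        # A rule grants an action on a resource only when the request context
        # satisfies every listed condition.
        POLICY = {
            ("read", "sensor-data"): lambda ctx: ctx.device_trusted,
            ("write", "sensor-data"): lambda ctx: ctx.device_trusted
                                                  and ctx.location == "lab"
                                                  and ctx.time_of_day == "office-hours",
        }

        def authorize(action: str, resource: str, ctx: Context) -> bool:
            rule = POLICY.get((action, resource))
            return bool(rule and rule(ctx))   # deny by default (least privilege)

        ctx = Context(location="lab", time_of_day="after-hours", device_trusted=True)
        print(authorize("read", "sensor-data", ctx))    # True
        print(authorize("write", "sensor-data", ctx))   # False: outside office hours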

    A succinct data structure for self-indexing ternary relations

    The final publication is available via http://dx.doi.org/10.1016/j.jda.2016.10.002. [Abstract] The representation of binary relations has been intensively studied, and many different theoretical and practical representations have been proposed to answer the usual queries in multiple domains. However, ternary relations have not received as much attention, even though many real-world applications require the processing of ternary relations. In this paper we present a new compressed and self-indexed data structure, called the Interleaved K2-tree (IK2-tree), designed to compactly represent and efficiently query general ternary relations. The IK2-tree is an evolution of an existing data structure, the K2-tree [6], initially designed to represent Web graphs and later applied to other domains. The IK2-tree extends the K2-tree to represent a ternary relation, based on the idea of decomposing it into a collection of binary relations while providing indexing capabilities in all three dimensions. We present different ways to use the IK2-tree to model different types of ternary relations, using two typical domains as reference: RDF and temporal graphs. We also experimentally evaluate our representations, comparing them in space usage and performance with other state-of-the-art solutions. Funding: Ministerio de Economía y Competitividad, TIN2013-46238-C4-3-R; Xunta de Galicia, GRC2013/053; Chile, Núcleo Milenio Información y Coordinación en Redes, ICM/FIC RC130003; Chile, Fondo Nacional de Desarrollo Científico y Tecnológico, 1-14079.
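
    The decomposition idea can be sketched as follows, assuming a toy ternary relation: the relation is split into one binary relation per value of its middle component, and each binary relation is viewed as a boolean matrix, which is the structure a K2-tree would then compress recursively; this is only the conceptual starting point described above, not the IK2-tree encoding itself.

        # Sketch of the decomposition behind the IK2-tree idea: a ternary
        # relation (x, y, z) split into one binary relation per value of y.
        # Values are invented; no succinct encoding is performed here.
        from collections import defaultdict

        ternary = [           # e.g. RDF triples (subject, predicate, object)
            (0, "knows", 1),
            (0, "likes", 2),
            (1, "knows", 2),
        ]

        binary_by_y = defaultdict(set)
        for x, y, z in ternary:
            binary_by_y[y].add((x, z))

        def as_matrix(pairs, n):
            # Boolean adjacency matrix; a K2-tree stores this as a k^2-ary
            # tree of bits, pruning all-zero submatrices.
            m = [[0] * n for _ in range(n)]
            for x, z in pairs:
                m[x][z] = 1
            return m

        for y, pairs in sorted(binary_by_y.items()):
            print(y, as_matrix(pairs, 3))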

    DIN Spec 91345 RAMI 4.0 compliant data pipelining: An approach to support data understanding and data acquisition in smart manufacturing environments

    Today, data scientists in the manufacturing domain are confronted with a set of challenges associated with data acquisition as well as data processing, including the extraction of valuable information to support both the manufacturing equipment and the manufacturing processes behind it. One essential aspect related to data acquisition is the pipelining, including various communication standards, protocols and technologies to save and transfer heterogeneous data. These circumstances make it hard to understand, find, access and extract data from the sources, depending on use cases and applications. In order to support this data pipelining process, this thesis proposes the use of a semantic model. The selected semantic model should be able to describe smart manufacturing assets themselves as well as how to access their data along their life-cycle. As a matter of fact, many research contributions in smart manufacturing have already produced reference architectures or standards for semantic-based meta-data description or asset classification. This research builds upon these outcomes and introduces a novel semantic-model-based data pipelining approach, using the Reference Architecture Model for Industry 4.0 (RAMI 4.0) as a basis and smart manufacturing as an exemplary use case, to enable easy exploration, understanding, discovery, selection and extraction of data.
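
    As a toy illustration of a semantic model describing a manufacturing asset and its data endpoint, the sketch below uses the rdflib library (assumed installed); the namespace, property names and endpoint URL are invented for the example and are not taken from RAMI 4.0 or DIN SPEC 91345 vocabularies.

        # Toy semantic description of a manufacturing asset with rdflib.
        # The "ex" vocabulary and the endpoint below are hypothetical.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        EX = Namespace("http://example.org/asset#")   # invented namespace

        g = Graph()
        g.bind("ex", EX)

        drill = EX["drill-01"]
        g.add((drill, RDF.type, EX.Asset))
        g.add((drill, EX.lifeCyclePhase, Literal("operation")))
        g.add((drill, EX.hasDataEndpoint, Literal("opc.tcp://plant.local:4840")))

        # A data pipeline can now discover the endpoint by querying the model
        # instead of hard-coding protocol details per machine.
        print(g.serialize(format="turtle"))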

    A fast engineering approach to high efficiency power amplifier linearization for avionics applications

    This PhD thesis provides a fast engineering approach to the design of digital predistortion (DPD) linearizers from several perspectives: i) enhancing the off-line training performance of open-loop DPD, ii) providing robustness and reducing the computational complexity of the parameter identification subsystem and, iii) incorporating machine learning techniques to support the automatic tuning of power amplifiers (PAs) and DPD linearizers with several free parameters, in order to maximize power efficiency while meeting the linearity specifications. One of the essential parts of unmanned aerial vehicles (UAVs) is the avionics, radio control being one of the earliest avionics systems present in UAVs. Unlike the control signal, transferring user data (such as images and video) in real time from the drone to the ground station requires high transmission rates. The PA is a key element in the transmitter chain to guarantee data transmission (video, photos, etc.) over a long range from the ground station. The more linear output power, the better the coverage; alternatively, for the same coverage, a better SNR allows the use of higher-order modulation schemes and thus higher transmission rates. In the context of UAV wireless communications, the power consumption, size and weight of the payload are of significant importance. Therefore, the PA design has to take into account the compromise among bandwidth, output power, linearity and power efficiency (very critical in battery-supplied devices). The PA can be designed to maximize its power efficiency or its linearity, but not both. Therefore, a way to deal with this inherent trade-off is to design highly efficient amplification topologies and let the PA linearizer take care of the linearity requirements. Among the linearizers, DPD linearization is the preferred solution for both academia and industry, for its high flexibility and linearization performance. In order to save as many computational and power resources as possible, the implementation of an open-loop DPD is a very attractive solution for UAV applications. This thesis contributes to PA linearization, especially to off-line training for open-loop DPD, by presenting two different methods for reducing the design and operating costs of an open-loop DPD, based on the analysis of the DPD function. The first method focuses on the input-domain analysis, proposing mesh-selecting (MeS) methods to accurately select the proper samples for a computationally efficient DPD parameter estimation. Focusing on the MeS method with the best performance, the memory I-Q MeS method is combined with a feature-extraction dimensionality reduction technique to reduce the computational complexity of the identification subsystem by a factor of 65, in comparison to using the classical QR-LS solver and consecutive sample selection. In addition, the memory I-Q MeS method has proved to be of crucial interest when training artificial neural networks (ANNs) for DPD purposes, by significantly reducing the ANN training time. The second method involves the use of machine learning techniques in the DPD design procedure to enlarge the capacity of the DPD algorithm when a high number of free parameters has to be tuned. On the one hand, the adaLIPO global optimization algorithm is used to find the best parameter configuration of a generalized memory polynomial behavioral model for DPD.
    On the other hand, a methodology to conduct a global optimization search is proposed to find the optimum values of a set of key circuit- and system-level parameters that, properly combined with DPD linearization and crest factor reduction techniques, can best exploit dual-input PAs in terms of maximizing power efficiency over wide bandwidths while remaining compliant with the linearity specifications. The advantages of the proposed techniques have been validated through experimental tests, and the obtained results are analyzed and discussed throughout this thesis.
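
    As a hedged illustration of the kind of behavioral model and identification step discussed above, the sketch below fits a memory polynomial model by ordinary least squares on synthetic data; the model orders, the toy nonlinearity and the plain lstsq solver are assumptions for the example and do not reproduce the thesis' generalized memory polynomial, MeS sample selection or QR-LS solver.

        # Least-squares identification of a memory polynomial model, the kind
        # of behavioral model used for DPD. Data and orders are toy values.
        import numpy as np

        rng = np.random.default_rng(0)
        N, P, M = 2000, 5, 3          # samples, nonlinearity order, memory depth

        x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # PA input (toy)
        y = x + 0.05 * np.abs(x) ** 2 * x                          # "measured" output (toy nonlinearity)

        def basis(u, P, M):
            # Columns are u[n-m] * |u[n-m]|**p for p = 0..P-1, m = 0..M-1.
            cols = []
            for m in range(M):
                um = np.roll(u, m)
                um[:m] = 0
                for p in range(P):
                    cols.append(um * np.abs(um) ** p)
            return np.column_stack(cols)

        Phi = basis(x, P, M)
        coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # LS coefficient estimate
        nmse = np.mean(np.abs(y - Phi @ coeffs) ** 2) / np.mean(np.abs(y) ** 2)
        print("NMSE (dB):", 10 * np.log10(nmse))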

    Space-Efficient Representation of Semi-structured Document Formats Using Succinct Data Structures

    Doctoral dissertation -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, February 2021. Srinivasa Rao Satti. Enormous amounts of big data are generated from a plethora of sources. Most of the data stored as files do not follow a fixed schema, so the files are best maintained in semi-structured document formats. A number of such formats, including XML (eXtensible Markup Language), JSON (JavaScript Object Notation), and YAML (YAML Ain't Markup Language), have been proposed to preserve the hierarchy of the original corpora of data. Several data models structuring the gathered data, including RDF (Resource Description Framework), depend on semi-structured document formats to be serialized and transferred for future processing. Since semi-structured document formats favor readability and verbosity, redundant space is required to organize and maintain the documents. Even though general-purpose compression schemes are widely used to compact the documents, applying those algorithms hinders future handling of the corpora, owing to the loss of internal structure. Succinct data structures, which answer queries while the encoded data occupy space close to the information-theoretic lower bound, have been widely investigated in theory; bit vectors and trees are the notable examples. Nevertheless, there have been few attempts to apply the idea of succinct data structures to represent semi-structured documents in a space-efficient manner. In this dissertation we propose a unified, space-efficient representation of various semi-structured document formats. Its core features are compactness and queryability, derived from the rich operations of succinct data structures. The representation is engineered by incorporating (a) bit-indexed arrays, (b) succinct ordinal trees, and (c) compression techniques. We implement this representation in practice and show by experiments that its construction decreases disk usage by up to 60% while occupying 90% less RAM. We also allow a document to be processed in a partial manner, so that larger corpora of big data can be processed even in constrained environments. In parallel with establishing the aforementioned compact semi-structured document representation, this dissertation also provides and reinforces some existing compression schemes. We first suggest a scheme to encode an array of integers that is not necessarily sorted. This compaction scheme improves upon existing universal code systems with the assistance of a succinct bit vector structure. We show that the suggested algorithm reduces space usage by up to 44% while consuming 15% less time than the original code system, and it additionally supports random access to elements of the encoded array. We also reinforce the SBH bitmap index compression algorithm. The main strength of this scheme is the use of an intermediate super-bucket during operations, giving better performance when querying a combination of compressed bitmap indexes. Inspired by the splits performed during the intermediate stage of the SBH algorithm, we give an improved compression mechanism supporting parallelism that can be utilized on both CPUs and GPUs. We show by experiments that the CPU parallel processing optimization reduces compression and decompression times by up to 38% on a 4-core machine without modifying the compressed bitmap form. For GPUs, the new algorithm gives 48% faster query processing time in the experiments, compared to previously existing bitmap index compression schemes. Contents: Chapter 1 Introduction; Chapter 2 Background; Chapter 3 Space-efficient Representation of Integer Arrays; Chapter 4 Space-efficient Parallel Compressed Bitmap Index Processing; Chapter 5 Space-efficient Representation of Semi-structured Document Formats; Chapter 6 Conclusion.
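
    The integer-array encoding idea lends itself to a small sketch: each value is written with a universal (here Elias gamma) code, and a separate structure marking codeword boundaries enables random access; the boundary list and linear decoding below merely stand in for the dissertation's succinct rank/select bit vector and are not its actual scheme.

        # Random access over a universal-coded integer array via boundary marks.
        # Elias gamma coding and a plain position list stand in for the real
        # scheme's succinct bit vector.
        def gamma_encode(n: int) -> str:
            # Elias gamma code for n >= 1: (len(bin(n)) - 1) zeros, then bin(n).
            b = bin(n)[2:]
            return "0" * (len(b) - 1) + b

        def encode_array(values):
            bits, starts = "", []
            for v in values:
                starts.append(len(bits))    # mark the codeword boundary
                bits += gamma_encode(v)
            return bits, starts

        def access(bits, starts, i):
            # Jump to the i-th codeword; a succinct structure would find it
            # with a constant-time select_1 instead of a stored position.
            pos = starts[i]
            zeros = 0
            while bits[pos + zeros] == "0":
                zeros += 1
            return int(bits[pos + zeros: pos + 2 * zeros + 1], 2)

        values = [5, 1, 9, 300, 2]          # not necessarily sorted
        bits, starts = encode_array(values)
        assert all(access(bits, starts, i) == v for i, v in enumerate(values))
        print(len(bits), "bits for", values)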

    Hierarchical Group and Attribute-Based Access Control: Incorporating Hierarchical Groups and Delegation into Attribute-Based Access Control

    Attribute-Based Access Control (ABAC) is a promising alternative to traditional models of access control (i.e. Discretionary Access Control (DAC), Mandatory Access Control (MAC) and Role-Based Access Control (RBAC)) that has drawn attention in both recent academic literature and industry applications. However, the formalization of a foundational model of ABAC and its large-scale adoption are still in their infancy. The relatively recent popularity of ABAC still leaves a number of problems unexplored. Issues like delegation, administration, auditability, scalability, hierarchical representations, etc. have been largely ignored or left to future work. This thesis seeks to aid in the adoption of ABAC by filling in several of these gaps. The core contribution of this work is the Hierarchical Group and Attribute-Based Access Control (HGABAC) model, a novel formal model of ABAC which introduces the concept of hierarchical user and object attribute groups to ABAC. It is shown that HGABAC is capable of representing the traditional models of access control (MAC, DAC and RBAC) using this group hierarchy and that in many cases its use simplifies both attribute and policy administration. HGABAC serves as the basis upon which extensions are built to incorporate delegation into ABAC. Several potential strategies for introducing delegation into ABAC are proposed and categorized into families, and the trade-offs of each are examined. One such strategy is formalized into a new User-to-User Attribute Delegation model, built as an extension to the HGABAC model. Attribute delegation enables users to delegate a subset of their attributes to other users in an off-line manner (without requiring a connection to a third party). Finally, a supporting architecture for HGABAC is detailed, including descriptions of services, high-level communication protocols and a new low-level attribute certificate format for exchanging user and connection attributes between independent services. Particular emphasis is placed on ensuring support for federated and distributed systems. Critical components of the architecture are implemented and evaluated with promising preliminary results. It is hoped that the contributions of this research will further the acceptance of ABAC in both academia and industry by solving the problem of delegation as well as simplifying administration and policy authoring through the introduction of hierarchical user groups.
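
    As a schematic illustration of hierarchical attribute groups, the sketch below lets a group inherit the attributes of its ancestors and evaluates a policy over the resulting effective attribute set; the group names, attributes and policy are invented, and this is not the formal HGABAC semantics.

        # Hierarchical attribute groups in the spirit of HGABAC (toy example).
        GROUP_PARENT = {            # child -> parent (None marks the root)
            "staff": None,
            "faculty": "staff",
            "cs-faculty": "faculty",
        }

        GROUP_ATTRS = {
            "staff": {"employee": True},
            "faculty": {"can_grade": True},
            "cs-faculty": {"department": "CS"},
        }

        def effective_attributes(group: str) -> dict:
            # Collect the chain root -> ... -> group so that more specific
            # groups override inherited attribute values.
            chain = []
            while group is not None:
                chain.append(group)
                group = GROUP_PARENT.get(group)
            attrs = {}
            for g in reversed(chain):
                attrs.update(GROUP_ATTRS.get(g, {}))
            return attrs

        def policy_view_cs_grades(attrs: dict) -> bool:
            return attrs.get("can_grade") is True and attrs.get("department") == "CS"

        print(policy_view_cs_grades(effective_attributes("cs-faculty")))  # True
        print(policy_view_cs_grades(effective_attributes("faculty")))     # False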