154 research outputs found

    Towards adaptive multi-robot systems: self-organization and self-adaptation

    This publication is freely accessible with the permission of the rights owner under an Alliance licence or a national licence funded by the DFG (German Research Foundation). The development of complex system ensembles that operate in uncertain environments is a major challenge, because system designers cannot fully specify the system during specification and development, before it is deployed. Natural swarm systems share these characteristics, yet, being self-adaptive and able to self-organize, they show beneficial emergent behaviour. Similar concepts can be extremely helpful for artificial systems, especially in multi-robot scenarios, which require such solutions to be applicable in highly uncertain real-world applications. In this article, we present a comprehensive overview of state-of-the-art solutions in emergent systems, self-organization, self-adaptation, and robotics. We discuss these approaches in the light of a framework for multi-robot systems and identify similarities, differences, missing links and open gaps that must be addressed to make this framework possible.
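
    To make the notion of emergence concrete, here is a minimal C++ sketch of the kind of purely local rule from which swarm aggregation emerges; the rule, names and parameters are illustrative and are not taken from the article. Each robot perceives only neighbours within a fixed radius, yet the swarm coheres globally without any central controller.

        // Minimal sketch of a self-organizing local rule (illustrative only, not
        // taken from the article): each robot uses nothing but nearby neighbours,
        // yet the swarm aggregates globally -- behaviour no robot encodes explicitly.
        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Robot { double x, y; };

        // One decentralized step: steer toward the local centroid (cohesion) while
        // backing away from neighbours that come too close (separation).
        void step(std::vector<Robot>& swarm, double radius, double gain) {
            std::vector<Robot> next = swarm;
            for (std::size_t i = 0; i < swarm.size(); ++i) {
                double cx = 0, cy = 0, sx = 0, sy = 0;
                int n = 0;
                for (std::size_t j = 0; j < swarm.size(); ++j) {
                    if (i == j) continue;
                    double dx = swarm[j].x - swarm[i].x, dy = swarm[j].y - swarm[i].y;
                    double d = std::hypot(dx, dy);
                    if (d > radius || d == 0) continue;   // only local perception
                    cx += dx; cy += dy; ++n;              // cohesion contribution
                    if (d < radius * 0.3) { sx -= dx / d; sy -= dy / d; } // separation
                }
                if (n > 0) {
                    next[i].x += gain * (cx / n) + sx;
                    next[i].y += gain * (cy / n) + sy;
                }
            }
            swarm = next;
        }

        int main() {
            std::vector<Robot> swarm{{0, 0}, {10, 0}, {5, 8}};
            for (int t = 0; t < 100; ++t) step(swarm, 20.0, 0.1);  // swarm coheres
        }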

    A study of System Interface Sets (SIS) for the host, target and integration environments of the Space Station Program (SSP)

    System interface sets (SIS) for large, complex, non-stop, distributed systems are examined. The SIS of the Space Station Program (SSP) was selected as the focus of this study because an appropriate virtual interface specification of the SIS is believed to have the most potential to free the project from four life-cycle tyrannies, each rooted in a dependence on a proprietary or particular instance of: operating systems, data management systems, communications systems, and instruction set architectures. The static perspective of the common Ada programming support environment interface set (CAIS) and the portable common execution environment (PCEE) activities is discussed, as is the dynamic perspective of the PCEE.
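
    The core idea of a virtual interface specification can be illustrated with a short sketch. The following C++ fragment is purely illustrative (the SIS itself is specified at a very different scale): application code depends only on an abstract contract, so any particular operating system, data management system or instruction set architecture behind it can be replaced without touching the application.

        // Illustrative sketch (not from the study): application code programs
        // against an abstract "virtual interface", so any particular operating or
        // data management system behind it can be swapped without rewriting it.
        #include <map>
        #include <string>

        // The virtual interface: applications see only this contract.
        struct DataStore {
            virtual ~DataStore() = default;
            virtual void put(const std::string& key, const std::string& value) = 0;
            virtual std::string get(const std::string& key) = 0;
        };

        // One replaceable instance; changing vendors means writing a new adapter,
        // not rewriting the application.
        struct InMemoryStore : DataStore {
            std::map<std::string, std::string> data;
            void put(const std::string& k, const std::string& v) override { data[k] = v; }
            std::string get(const std::string& k) override { return data[k]; }
        };

        void application(DataStore& store) {   // depends only on the interface
            store.put("telemetry/0", "nominal");
        }

        int main() {
            InMemoryStore store;
            application(store);
            return store.get("telemetry/0") == "nominal" ? 0 : 1;
        }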

    The augmented reality framework: an approach to the rapid creation of mixed reality environments and testing scenarios

    Debugging errors during real-world testing of remote platforms can be time-consuming and expensive when the remote environment is inaccessible and hazardous, such as the deep sea. Pre-real-world testing facilities, such as Hardware-In-the-Loop (HIL), are often not available because of the time and expense necessary to create them. Testing facilities tend to be monolithic in structure and thus inflexible, making complete redesign necessary for slightly different uses. Redesign is simpler in the short term than creating the required architecture for a generic facility. This leads to expensive facilities, due to reinvention of the wheel, or worse, to no testing facilities at all. Without adequate pre-real-world testing, integration errors can go undetected until real-world testing, where they are more costly to diagnose and rectify, especially when developing Unmanned Underwater Vehicles (UUVs).

    This thesis introduces a novel framework, the Augmented Reality Framework (ARF), for the rapid construction of virtual environments for Augmented Reality tasks such as pure simulation, HIL, hybrid simulation and real-world testing. ARF’s architecture is based on JavaBeans and is therefore inherently generic, flexible and extendable. The aim is to speed up the construction, reconfiguration and extension of virtual environments, and consequently to enable more mature and stable systems to be developed in less time, because previously undetectable faults are diagnosed earlier, in the pre-real-world testing phase. This is only achievable if test harnesses can be created quickly and easily, which in turn allows the developer to visualise more system feedback, making faults easier to spot. Early fault detection and less wasted real-world testing lead to a more mature, stable and less expensive system. ARF provides guidance on how to connect and configure user-made components, allowing complex virtual environments to be prototyped quickly and easily. In essence, ARF aims to provide intuitive construction guidance, similar in nature to LEGO® pieces, which can be easily connected to form useful configurations.

    ARF is demonstrated through case studies which show its flexibility and applicability to testing techniques such as HIL for UUVs. In addition, an informal study was carried out to assess the performance increases attributable to ARF’s core concepts. In comparison to classical programming methods, ARF’s average performance increase was close to 200%. The study showed that ARF was highly intuitive, since the test subjects were novices in ARF but experts in programming. ARF provides key contributions in the field of HIL testing of remote systems by providing more accessible facilities that allow new or modified testing scenarios to be created where this was previously infeasible. In turn this leads to early detection of faults which in some cases would never have been detected before.
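
    ARF itself is built on JavaBeans; the following C++ sketch (hypothetical names, not ARF’s API) merely illustrates the LEGO-like composition idea: components expose connectable ports, and a test harness is assembled by snapping outputs to inputs rather than writing bespoke glue code.

        // Sketch of LEGO-like component composition (hypothetical API; ARF itself
        // is built on JavaBeans). A harness is assembled by connecting outputs to
        // inputs; no component knows the concrete type of its peers.
        #include <functional>
        #include <iostream>
        #include <string>
        #include <vector>

        struct Component {
            std::string name;
            std::vector<std::function<void(double)>> outputs;  // connected sinks
            void emit(double v) { for (auto& sink : outputs) sink(v); }
            void connect(const std::function<void(double)>& sink) {
                outputs.push_back(sink);
            }
        };

        int main() {
            Component sonar{"simulated-sonar"};   // pure-simulation source
            Component display{"range-display"};   // visualisation sink

            // "Snap" the pieces together instead of writing bespoke glue code.
            sonar.connect([&](double range) {
                std::cout << display.name << ": " << range << " m\n";
            });
            sonar.emit(42.0);  // in HIL testing this value would come from hardware
        }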

    Expressive policy based authorization model for resource-constrained device sensors.

    Chapters II, III and IV are subject to confidentiality at the author’s request. 92 p.

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that expose services which can adapt to user behavior or be managed with the goal of achieving higher productivity, often in multi-stakeholder applications. In such environments, smart things are cheap sensors (and actuators) and, therefore, constrained devices. Yet they are also critical components because of the importance of the information they provide. Given that, strong security in general, and access control in particular, is a must. However, the tightness, feasibility and usability of existing access control models do not cope well with the principle of least privilege; they lack both expressiveness and the ability to update the policy enforced in the sensors. In fact, (1) traditional access control solutions are not feasible in all constrained devices because of their large performance impact, although they provide the highest effectiveness in terms of tightness and flexibility; (2) recent access control solutions designed for constrained devices can be implemented only in less constrained ones and lack policy expressiveness in local authorization enforcement; and (3) the access control solutions currently feasible in the most severely constrained devices are based on authentication and very coarse-grained, static policies, scale badly, and offer no feasible policy-based access control aware of the sensors’ local context.

    Therefore, a suitable End-to-End (E2E) access control model is needed to provide fine-grained authorization services in service-oriented open scenarios, where operation and management access is dynamic by nature and which integrate massively deployed, constrained but manageable sensors. Precisely, the main contribution of this thesis is the specification of such a highly expressive E2E access control model suitable for all sensors, including the most severely constrained ones. The proposed model rests on three foundations. (1) A hybrid architecture, which combines the advantages of centralized and distributed architectures to enable multi-step authorization. Fine-grained enforcement is enabled by (2) an efficient policy language and codification, specifically defined to gain expressiveness in the authorization policies while remaining viable on very constrained devices; the policy language makes it possible both to base granting decisions on local context conditions and to react to requests by executing additional tasks defined as obligations. Policy evaluation and enforcement take place not only when the security association is established but also afterwards, while it is in use; the model thus also controls access behavior, since the policy is iteratively re-evaluated during each individual resource access. Finally, (3) an E2E security association between two mutually authenticated peers is established through a security protocol named Hidra. Hidra, based on symmetric-key cryptography, relies on the hybrid three-party architecture to enable multi-step authorization as well as instant provisioning of a dynamic security policy in the sensors, and also supports delegated accounting and audit trails.

    The proposed access control features address tightness, feasibility and both dimensions of usability, scalability and manageability, which are the key unsolved challenges in the foreseen open and dynamic IoT scenarios. Regarding efficiency, the high compression factor of the proposed policy codification and the optimized Hidra security protocol, which relies on a symmetric cryptographic scheme, make the model feasible, as demonstrated by the validation assessment. Specifically, the security evaluation and both the analytical and the experimental performance evaluations demonstrate the feasibility and adequacy of the proposed protocol and access control model. The security validation assesses that Hidra meets the security goals of strong mutual authentication, fine-grained authorization, confidentiality and integrity of secret data, and accounting. The security analysis of Hidra shows, on the one hand, how the design of the message exchange contributes to resilience against potential attacks; on the other hand, a formal security validation supported by the AVISPA tool confirms the absence of flaws and the correctness of Hidra’s design.

    The performance validation is based on an analytical performance evaluation and a test-bed implementation of the proposed access control model on the most severely constrained devices. The key performance factor is the length of the policy instance, since it impacts proportionally on the critical parameters of delay, energy consumption and memory footprint, and therefore on feasibility. The obtained measurements show that the proposed policy language keeps this balance, enabling expressive policy instances while always remaining within limited lengths. Moreover, the proposed policy codification notably improves the protocol’s performance, achieving the best policy-length compression factor compared with currently adopted standards. The assessed access control model is thus a first approach to bringing to severely constrained devices a level of expressiveness for enforcement and accounting similar to that of the current Internet. The positive performance evaluation confirms the feasibility and suitability of this access control model, which notably raises the security of severely constrained devices for the coming smart scenarios. In addition, there is no comparable impact assessment of policy expressiveness for any other access control model, so the presented analysis models and results may serve as a reference for further analysis and benchmarking.

    Many of the devices we use today embed microprocessors that take measurements of the process they act on and respond according to some logic. Both sensors and actuators serve this purpose (hereafter, following common usage in the community, we call them all sensors even though they play both roles). Until now they have mostly been connected individually or in local networks, where sensors cooperating with one another, or commanded by a central server, have been used to enable and improve an organization’s processes. The so-called Internet of Things (IoT) connects sensor-equipped devices through the Internet and enables broader and more effective processes. Smart ecosystems such as the smart city, smart grid and smart factory aim to enable new applications and amplify their impact by exploiting current and upcoming communication technologies. These ecosystems are broad, organizations from different domains take part in them, and the number of devices carrying special-purpose sensors is enormous. The sensors are therefore special-purpose, cheap and small, and their primary use so far has been to measure some physical magnitude and send the measurements to a centralized server: they measure what happens around them and transmit the data to a given server periodically or when a threshold condition is met, and the server applies the logic so that the system as a whole behaves intelligently. In this mode of operation, the concern of securing communications between previously known entities, even across the Internet, is by now resolved to an acceptable degree.

    Advanced smart ecosystems, however, also foresee a different behavior: sensors themselves offer services through which they can be interacted with. Within an organization’s processes, in collaboration with organizations of other origins, this behavior has two main uses. On the one hand, a user taking part in the process (who need not be the owner) may interact with the environment and may need the devices to adapt to his or her particularities. On the other hand, the technicians responsible for sensor operation and maintenance may need services for configuring them. Since the location of sensors and users is indeterminate, such interactions are in many cases expected to take place over the Internet and to be direct (end-to-end): many small sensors, scattered everywhere, enable the system’s intelligence and offer tiny services for direct interaction. A direct service is simpler and more effective, but it also raises challenges. Because the sensors are so small, they cannot run today’s standard protocols and mechanisms, so specific protocols are being created from the network layer up to the application layer. Unfortunately, these protocols aim above all to be lightweight, and security is not analyzed and implemented as it should be. Specific access control models do exist, but because of the scarcity of resources they are neither tight nor manageable. Moreover, according to Gartner, the main obstacle currently limiting investment in advanced applications is distrust of their security. This is the challenge this thesis addresses: given sensors so small and resources so scarce (10 kB of RAM, 100 kB of flash and batteries in the smallest sensors), and an Internet so vast and hazardous, to specify a new tight, lightweight and manageable model for controlling direct access that increases security, and to analyze its usability.
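
    To give a flavour of what a compact, locally evaluable policy can look like on a device with around 10 kB of RAM, here is a hedged C++ sketch; the field layout, names and semantics (including the battery-level condition) are invented for illustration and differ from Hidra’s actual policy codification.

        // Hedged sketch of a compact, locally evaluable authorization rule; the
        // field layout, names and semantics are invented for illustration and
        // differ from Hidra's actual policy codification.
        #include <cstdint>

        struct Rule {                   // fits in a handful of bytes of RAM
            std::uint8_t action;        // e.g. 0 = GET, 1 = PUT
            std::uint8_t resource;      // index into the sensor's resource table
            std::uint8_t minBattery;    // local-context condition: deny below this
            bool         logObligation; // obligation: record the access locally
        };

        // Local enforcement, re-evaluated on each individual resource access.
        bool authorize(const Rule& r, std::uint8_t action, std::uint8_t resource,
                       std::uint8_t batteryLevel, bool& mustLog) {
            if (action != r.action || resource != r.resource) return false;
            if (batteryLevel < r.minBattery) return false;  // context-aware denial
            mustLog = r.logObligation;                      // obligation to execute
            return true;
        }

        int main() {
            Rule r{0, 3, 20, true};   // allow GET on resource 3 if battery >= 20
            bool mustLog = false;
            return authorize(r, 0, 3, 55, mustLog) && mustLog ? 0 : 1;
        }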

    Doctor of Philosophy

    High Performance Computing (HPC) on-node parallelism is of extreme importance to guarantee and maintain scalability across large clusters of hundreds of thousands of multicore nodes. HPC programming is dominated by the hybrid model "MPI + X", with MPI exploiting the parallelism across nodes and "X" being some shared-memory parallel programming model that provides multicore parallelism across CPUs or GPUs. OpenMP has become the de facto "X" standard in HPC for exploiting the multicore architectures of modern CPUs. Data races are among the most common and insidious concurrent errors in shared-memory programming models, and OpenMP programs are not immune to them. The ease with which OpenMP parallelizes programs can often make them prone to data races, which become hard to find in large applications with thousands of lines of code. Unfortunately, prior tools have been unable to impact practice owing to their poor coverage or poor scalability.

    In this work, we develop several new approaches for low-overhead data race detection. Our approaches aim to guarantee high precision and accuracy of race checking while maintaining low runtime and memory overheads. We present two race checkers for C/C++ OpenMP programs that target two different classes of programs. The first, ARCHER, is fast but requires a large amount of memory, so it ideally targets applications that use only a small portion of the available on-node memory. The second, SWORD, combines fast data collection with zero memory overhead with an offline analysis that can take a long time, though it often reports most races quickly. Given that race checking was previously impossible for large OpenMP applications, our contributions are the best available advances in what is known to be a difficult NP-complete problem.

    We performed an extensive evaluation of the tools on existing OpenMP programs and HPC benchmarks. Results show that both tools identify all the races of a program in a given run without reporting any false alarms. The tools are user-friendly and hence serve as an important instrument in the daily work of programmers, helping them identify data races early during development and production testing. Furthermore, our demonstrated success on real-world applications puts these tools at the top of the list of debugging tools for scientists at large.
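
    A classic example of the kind of defect these checkers target (illustrative, not taken from the dissertation): the first loop below races on a shared accumulator, while the reduction clause in the second gives each thread a private copy and is race-free.

        // A classic OpenMP data race (illustrative, not from the dissertation).
        // Compile with, e.g.: g++ -fopenmp race.cpp
        #include <cstdio>

        int main() {
            const int N = 1000000;
            long sum = 0;

            // RACY: all threads read-modify-write 'sum' without synchronization,
            // so the result can vary from run to run.
            #pragma omp parallel for
            for (int i = 0; i < N; ++i)
                sum += i;               // <-- unsynchronized shared write

            // FIXED: a reduction gives each thread a private copy of 'safe',
            // combined deterministically at the end of the loop.
            long safe = 0;
            #pragma omp parallel for reduction(+ : safe)
            for (int i = 0; i < N; ++i)
                safe += i;

            std::printf("racy=%ld safe=%ld\n", sum, safe);
        }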

    Security for safety critical space borne systems

    The Space Station contains safety-critical computer software components in systems that can affect life and vital property. These components require a multilevel secure system that provides dynamic access control over the data and processes involved. A study is under way to define requirements for a security model providing access control through level B3 of the Orange Book. The model will be prototyped at NASA Johnson Space Center.
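
    Orange Book divisions at this level are built around lattice-based mandatory access control, classically formalized in the Bell-LaPadula style ("no read up, no write down"). The following minimal C++ sketch illustrates that rule only; it is not the NASA prototype described above.

        // Minimal sketch of lattice-based mandatory access control in the
        // Bell-LaPadula style underlying Orange Book MLS designs. Illustrative
        // only; not the NASA-JSC prototype described above.
        #include <cassert>

        enum class Level { Unclassified, Confidential, Secret, TopSecret };

        bool mayRead(Level subject, Level object)  { return subject >= object; } // no read up
        bool mayWrite(Level subject, Level object) { return subject <= object; } // no write down

        int main() {
            assert(mayRead(Level::Secret, Level::Confidential));   // read down: allowed
            assert(!mayRead(Level::Confidential, Level::Secret));  // read up: denied
            assert(!mayWrite(Level::Secret, Level::Confidential)); // write down: denied
        }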

    Towards Secure Collaboration in Federated Cloud Environments

    Public administrations across Europe have been actively following and adopting cloud paradigms to various degrees. By establishing modern data centers and consolidating their infrastructures, many organizations already benefit from a range of cloud advantages. However, there is a growing need to further support the consolidation and sharing of resources across different public entities. The ever-increasing volume of processed data and the diversity of organizational interactions stress this need even further, calling for integration at the levels of infrastructure, data and services. This is currently hindered by strict requirements in the field of data security and privacy. In this paper, we present ongoing work aimed at enabling secure private cloud federations for public administrations, performed in the scope of the SUNFISH H2020 project. We focus on architectural components and processes that establish cross-organizational enforcement of data security policies in mixed and heterogeneous environments. Our proposal introduces proactive restriction of data flows in federated environments by integrating real-time security policy enforcement with post-execution conformance verification. The goal of this framework is to enable secure service integration and data exchange in cross-entity contexts by inspecting data flows and ensuring their conformance with security policies at both the organizational and the federation level.
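
    The idea of proactively restricting cross-entity data flows can be sketched briefly. The following C++ fragment uses invented names and a deliberately simplified policy shape, not the actual SUNFISH components: a flow is checked against federation-level policy before it executes, and real transfers would additionally be logged for post-execution conformance verification.

        // Hedged sketch of proactive data-flow restriction in a cloud federation
        // (names and policy shape are invented; the SUNFISH components differ).
        #include <set>
        #include <string>
        #include <tuple>

        // A flow: source organization, destination organization, data class.
        using Flow = std::tuple<std::string, std::string, std::string>;

        struct FederationPolicy {
            std::set<Flow> allowed;   // explicitly permitted cross-entity flows

            // A flow is blocked *before* execution unless policy permits it.
            bool permits(const std::string& src, const std::string& dst,
                         const std::string& dataClass) const {
                return allowed.count({src, dst, dataClass}) > 0;
            }
        };

        int main() {
            FederationPolicy p;
            p.allowed.insert({"ministry-a", "ministry-b", "anonymised-statistics"});
            bool ok   = p.permits("ministry-a", "ministry-b", "anonymised-statistics");
            bool deny = p.permits("ministry-a", "ministry-b", "personal-records");
            return ok && !deny ? 0 : 1;   // statistics flow allowed, records blocked
        }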

    Bootstrapping ontology-based data access specifications from relational databases

    Ontology-based data access (OBDA) is a useful approach for accessing existing structured data in an easy and intuitive manner: it abstracts from the representation of the data and thus relieves users of the burden of dealing with the technical details of how the data is structured. Using ontology bootstrapping, an ontology reflecting the structure of the data can be created automatically by examining the data source. This makes it possible to benefit from ontology-based data access without giving up the performance and data-consistency advantages offered by relational database systems. Furthermore, by establishing a system that provides a so-called "virtual RDF view", data from relational sources can be accessed and even updated by querying or modifying the ontology, while the data itself remains in the database. However, creating an ontology from another structured data source is a complex task that involves several tools, formats and languages. When changing input data is also taken into account, ontology bootstrapping becomes a significant engineering task in its own right. The approach of introducing OBDA specifications proposed by Skjæveland et al. addresses this issue by concentrating all information relevant to the bootstrapping process in one place, allowing systems to drive the entire tool chain based on this information and thus rendering ad-hoc scripts unnecessary. A concrete format for serializing such an OBDA specification for storage or exchange, however, is not provided. The introduction of such a format is part of this thesis; the format was named the "OBDA Specification Language" (OSL) after the task it accomplishes. Furthermore, this thesis describes the design and implementation of db2osl, a piece of software that automatically bootstraps an OBDA specification from a relational database schema. Finally, it gives an explanatory description of both the ontology bootstrapping process and the OBDA specification bootstrapping process, and proposes a refined scheme for generating IRIs for the entities of the resulting ontology.
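
    The general flavour of IRI generation in such bootstrapping can be sketched as follows. This C++ fragment follows the spirit of the W3C Direct Mapping convention (table to class IRI, column to property IRI, with percent-encoding) and is explicitly not the refined scheme proposed in the thesis.

        // Illustrative sketch of direct-mapping-style IRI generation from a
        // relational schema, in the spirit of the W3C Direct Mapping convention.
        // This is NOT the refined scheme proposed in the thesis.
        #include <cctype>
        #include <cstdio>
        #include <string>

        // Percent-encode characters that are not IRI-safe (simplified).
        std::string encode(const std::string& s) {
            std::string out;
            for (unsigned char c : s) {
                if (std::isalnum(c) || c == '-' || c == '_') {
                    out += static_cast<char>(c);
                } else {
                    char buf[4];
                    std::snprintf(buf, sizeof buf, "%%%02X", c);
                    out += buf;
                }
            }
            return out;
        }

        std::string classIri(const std::string& base, const std::string& table) {
            return base + encode(table);                        // table  -> class IRI
        }

        std::string propertyIri(const std::string& base, const std::string& table,
                                const std::string& column) {
            return base + encode(table) + "#" + encode(column); // column -> property IRI
        }

        int main() {
            std::printf("%s\n", classIri("http://example.org/db/", "Employee").c_str());
            std::printf("%s\n",
                propertyIri("http://example.org/db/", "Employee", "last name").c_str());
            // prints: http://example.org/db/Employee
            //         http://example.org/db/Employee#last%20name
        }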