14 research outputs found

    Context-based confidentiality analysis in dynamic Industry 4.0 scenarios

    In Industry 4.0 environments, highly dynamic and flexible access control strategies are needed. State-of-the-art strategies are often not included in the modelling process but must be considered afterwards, which makes it very difficult to analyse the security properties of a system. Within the Trust 4.0 project, the confidentiality analysis tries to solve this problem using a context-based approach built around a security model named the context metamodel. Another important problem is that transforming an instance of a security model into a widespread access control standard is often not possible; this is also the case for the context metamodel. A further transformation worth considering is one to an ensemble-based component system, which is also presented in the Trust 4.0 project. This thesis introduces an extension to the aforementioned context metamodel in order to make it more extensible. Furthermore, the thesis covers the design and implementation of the transformations mentioned above: first, the transformation to the attribute-based access control standard XACML; thereafter, the transformation from XACML to an ensemble-based component system. The evaluation indicated that the model can be used for use cases in Industry 4.0 scenarios and that the transformations produce adequately accurate access policies. Furthermore, the scalability evaluation indicated linear runtime behaviour of both transformation implementations with respect to the number of input contexts and XACML rules, respectively.
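
    The abstract does not spell out how a context instance maps onto XACML, but the general shape of a one-context-to-one-rule transformation can be sketched. The following Python sketch is purely illustrative: the Context fields, the rule structure and the emitted elements are assumptions, not the thesis's actual metamodel or mapping.

# Minimal sketch of a context-to-XACML transformation. The Context fields and
# the emitted XML are hypothetical, not the thesis's metamodel or mapping.
from dataclasses import dataclass
from xml.etree import ElementTree as ET

@dataclass
class Context:
    """One security context: who may access what under which condition."""
    subject_role: str
    resource: str
    condition_attr: str
    condition_value: str

def context_to_xacml_rule(ctx: Context, rule_id: str) -> ET.Element:
    """Render a single context as a permit rule in XACML-like XML."""
    rule = ET.Element("Rule", RuleId=rule_id, Effect="Permit")
    target = ET.SubElement(rule, "Target")
    for category, value in (("Subject", ctx.subject_role), ("Resource", ctx.resource)):
        match = ET.SubElement(target, f"{category}Match")
        ET.SubElement(match, "AttributeValue").text = value
    condition = ET.SubElement(rule, "Condition")
    ET.SubElement(condition, "Attribute", Id=ctx.condition_attr).text = ctx.condition_value
    return rule

# One Industry 4.0-flavoured context: a maintenance engineer may read PLC logs
# only during the night shift.
ctx = Context("maintenance_engineer", "plc_7/logs", "shift", "night")
print(ET.tostring(context_to_xacml_rule(ctx, "rule-1"), encoding="unicode"))

    A full transformation would also have to emit policy sets, combining algorithms and obligations; the sketch only shows the per-rule pattern.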

    Evaluating & engineering

    On a regular basis, we learn about well-known online services that have been misused or compromised by data theft. As insecure applications pose a threat to the users' privacy as well as to the image of companies and organizations, it is absolutely essential to adequately secure them from the start of the development process. Often, reasons for vulnerable applications are related to the insufficient knowledge and experience of involved parties, such as software developers. Unfortunately, they rarely (a) have a comprehensive view of the security-related decisions that should be made, or (b) know how these decisions precisely affect the implementation. A vital decision is the selection of tools and methods that can best support a particular situation in order to shield an application from vulnerabilities. Despite the level of security that arises from complying with security standards, both reasons inadvertently lead to software that is not secured sufficiently. This thesis tackles both problems. Firstly, in order to know which decision should be made, it is crucial to be aware of security properties, vulnerabilities, threats, security engineering methods, notations, and tools (so-called knowledge objects). It is not only important to know which knowledge objects exist, but also how they are related to each other and which attributes they have. Secondly, security decisions made for web applications can have an effect on the source code of various components as well as on configuration files of web servers or external protection measures such as firewalls. The impact of chosen security measures (i.e., employed methods) can be documented using a modeling approach that provides web-specific modeling elements. Our approach aims to support the conscious construction of secure web applications. Therefore, we develop modeling techniques to represent knowledge objects and to design secure web applications. Our novel conceptual framework SecEval is the foundation of this dissertation. It provides an expandable structure for classifying vulnerabilities, threats, security properties, methods, notations and tools. This structure, called the Security Context model, can be instantiated to express attributes and relations, e.g., which tools exist to support a certain method. Compared with existing approaches, we provide a finer-grained structure that considers security and adapts to the phases of the software development process. In addition to the Security Context model, we define a documentation scheme for the collection and analysis of relevant data. Apart from this domain-independent framework, we focus on secure web applications. We use SecEval's Security Context model as a basis for a novel Secure Web Applications' Ontology (SecWAO), which serves as a knowledge map. By providing a systematic overview, SecWAO fosters a common understanding and supports web engineers who want to systematically specify security requirements or make security-related design decisions. Building on our experience with SecWAO, we further extend the modeling approach UML-based Web Engineering (UWE) with means to model security aspects of web applications. We develop UWE in a way that chosen methods, such as (re)authentication, secure connections, authorization or Cross-Site-Request-Forgery prevention, can be linked to the model of a concrete web application. In short, our approach supports software engineers throughout the software development process.
It comprises (1) the conceptual framework SecEval to ease method and tool evaluation, (2) the ontology SecWAO, which gives a systematic overview of web security, and (3) an extension of UWE that focuses on the development of secure web applications. Various case studies and tools are presented to demonstrate the applicability and extensibility of our approach.
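
    To make the idea of instantiating the Security Context model concrete, here is a hypothetical Python sketch; the class and attribute names are invented for illustration and are not taken from SecEval, and the example entries are merely placeholders.

# Hypothetical sketch of instantiating a Security Context model with knowledge
# objects and their relations; names and attributes are illustrative only.
from dataclasses import dataclass, field

@dataclass
class KnowledgeObject:
    name: str
    description: str = ""

@dataclass
class Method(KnowledgeObject):
    phase: str = "design"  # phase of the software development process

@dataclass
class Tool(KnowledgeObject):
    supports: list[Method] = field(default_factory=list)

@dataclass
class Vulnerability(KnowledgeObject):
    mitigated_by: list[Method] = field(default_factory=list)

pen_test = Method("Penetration testing", phase="testing")
zap = Tool("OWASP ZAP", supports=[pen_test])
xss = Vulnerability("Cross-site scripting", mitigated_by=[pen_test])

# Query the instantiated model: which tools exist to support a certain method?
print([tool.name for tool in (zap,) if pen_test in tool.supports])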

    DevOps for Trustworthy Smart IoT Systems

    ENACT is a research project funded by the European Commission under its H2020 program. The project consortium consists of twelve industry and research member organisations spread across the whole EU. The overall goal of the ENACT project was to provide a novel set of solutions to enable DevOps in the realm of trustworthy Smart IoT Systems. Smart IoT Systems (SIS) are complex systems involving not only sensors but also actuators with control loops distributed all across the IoT, Edge and Cloud infrastructure. Since smart IoT systems typically operate in a changing and often unpredictable environment, the ability of these systems to continuously evolve and adapt to their new environment is decisive to ensure and increase their trustworthiness, quality and user experience. DevOps has established itself as a software development life-cycle model that encourages developers to continuously bring new features to the system under operation without sacrificing quality. This book reports on the ENACT work to empower the development and operation as well as the continuous and agile evolution of SIS, which is necessary to adapt the system to changes in its environment, such as newly appearing trustworthiness threats


    An interoperability framework for security policy languages

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Security policies are widely used across the IT industry in order to secure environments. Firewalls, routers, enterprise applications and even operating systems like Windows and Unix all use security policies to some extent in order to secure certain components. In order to automate the enforcement of security policies, security policy languages have been introduced. Like many other kinds of computer software, security policy languages have been revolutionised during the last decade. A number of security policy languages have been introduced in the industry in order to tackle specific business requirements, and each of these languages has itself evolved and been enhanced during the last few years. That said, a brief survey of security policy languages shows that the industry suffers from the lack of an interoperability framework for them. Such a framework would facilitate the management of security policies from an abstract level. To achieve that goal, the framework utilises an abstract security policy language that is independent of existing security policy languages yet capable of expressing policies written in those languages. Using such an interoperability framework brings major benefits that fall into two categories: short- and long-term. In the short term, industry, and in particular multi-dimensional organisations that operate multiple domains for different purposes, would lower their security-related costs by centrally managing security policies that are stretched across their environment and often managed locally. In the long term, use of an abstract security policy language that is independent of any existing security policy language gradually paves the way for standardising security policy languages, a goal that seems unreachable at this moment in time. Taking the above into account, the aim of this research is to introduce and develop a novel framework for security policy languages. Using such a framework would allow multi-dimensional organisations to use an abstract policy language to orchestrate all security policies from a single point, from which they could then be propagated across their environment. In addition, using such a framework would help security administrators to learn and use only one single, common abstract language to describe and model their environment(s).
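
    The abstract policy language itself is not shown in the abstract; the following Python sketch, with a hypothetical rule shape and backends, only illustrates the architectural idea of writing one abstract rule and rendering it for several concrete policy targets.

# Illustrative sketch of the framework's core idea: one abstract rule, several
# concrete targets. The rule shape and both backends are hypothetical.
from dataclasses import dataclass

@dataclass
class AbstractRule:
    subject: str   # who (here a source network)
    action: str    # what they may do
    resource: str  # on what (here a destination host)
    effect: str    # "permit" or "deny"

def to_iptables(rule: AbstractRule) -> str:
    """Render the rule for a firewall-style backend."""
    target = "ACCEPT" if rule.effect == "permit" else "DROP"
    return f"iptables -A FORWARD -s {rule.subject} -d {rule.resource} -j {target}"

def to_xacml_like(rule: AbstractRule) -> str:
    """Render the rule in a compact XACML-style form."""
    return (f'<Rule Effect="{rule.effect.capitalize()}">'
            f"{rule.subject}:{rule.action}:{rule.resource}</Rule>")

rule = AbstractRule("10.0.0.0/24", "connect", "192.168.1.10", "deny")
print(to_iptables(rule))    # the same policy, propagated to two targets
print(to_xacml_like(rule))

    In the envisaged framework, the translation would go the other way as well, so that policies already written in existing languages can be lifted into the abstract form.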

    Security and Usability in the HeadREST Language

    Master's thesis, Informatics Engineering (Architecture, Systems and Computer Networks), Universidade de Lisboa, Faculdade de Ciências, 2020. Web services continue to grow, with no sign of slowing down. Information exchange with these services follows different patterns; among the many in use, REST (REpresentational State Transfer) stands out. REST is a widely used architectural style in which the operations and properties of HTTP, the protocol on which the World Wide Web runs, are exploited for the interactions of clients with web services. In REST, the basic element is the resource, a piece of information that can be referenced by an identifier. Each resource has one or more representations, possibly in different formats, which may change as operations are executed on it. A web service that adheres to the REST architectural style is called a REST service. To program clients of a REST service, good documentation of its API is essential, with clear specifications of its operations and of the data exchanged in these operations between clients and the service. In the development of this kind of service, interface description languages such as the OpenAPI Specification, RAML or API Blueprint are used. These languages make it possible to formally specify the operations supported by a REST service and offer the ability to document the data exchanged during interactions with the service. Despite their popularity, these specification languages have limited expressive power; one limitation is that they cannot precisely describe the behaviour of the different operations. The HeadREST language has been under development in an attempt to address these limitations. The language has a refinement type system that makes it possible to restrict the admissible values of a type and therefore to describe more rigorously the types of the data exchanged with a REST service. To allow the operations of a REST service to be specified precisely, HeadREST provides assertions. These assertions, similar to Hoare triples, consist of a precondition, the operation's URI template and a postcondition, and they specify that when the precondition is satisfied, executing the operation establishes the postcondition. Because of the refinement type system, subtyping relations cannot be resolved through syntactic rules; to address this, a semantic procedure was adopted: the subtyping relation is translated into first-order logic formulas, which are then handed to an SMT solver. Despite its great expressive power, HeadREST as a specification language is far from perfect. One of its most important problems concerns usability: although the language can describe operations with great rigour and detail, this comes at the cost of rather complex assertions that are difficult not only to write correctly but also to understand. Many REST specification languages offer, even if in a limited way, a means to express what the service requires in terms of authentication and/or authorisation.
There are several kinds of authentication and authorisation that can be used to restrict access to resources in REST services, for example API keys, tokens, HTTP Basic and Digest authentication, OAuth 2.0 and OpenID Connect. Moreover, each REST service may take a different approach to authorisation policies. This work addressed these two problems and set out to contribute solutions to them. For the usability problem, the solution consisted of language extensions with an emphasis on derived expressions. The language was extended with: (i) quantified iterators that allow properties over arrays to be expressed more naturally, (ii) interpolation for building strings from URIs in a simpler and more direct way, (iii) an extraction operator that gives access to the representation of a resource when it is unique and, finally, (iv) functions that abstract repeated expressions in a more flexible way (only the functions are not derived). The approach to specifying security policies in REST APIs rested on adding (i) a new type Principal, corresponding to authenticated entities, and (ii) an uninterpreted function principalof capturing the Principal authenticated by a value used in authentication. The language was also extended with the definition of uninterpreted functions, so that associations can be made between the Principal type and other data coming from different sources (representations, URI templates, request bodies, etc.), thus making it possible to specify the different kinds of security policies used in REST services. The proposed solutions were evaluated in several ways. A user study was conducted, based on a questionnaire about the HeadREST language before and after the extensions, and a quantitative study compared the impact of the extensions in terms of specification complexity metrics and validator performance. To evaluate the security extensions, several case studies were carried out, involving the partial specification of some real-world REST services. The impact of the extensions on the tools that currently make up the HeadREST ecosystem was also explored: (i) HeadREST-RTester, which automatically tests the conformance of a REST service implementation against a HeadREST specification of its API, (ii) HeadREST-Codegen, which performs code generation, and (iii) SafeRestScript, a scripting language in which calls to REST services whose APIs have been specified with HeadREST are statically validated. HeadREST has a validator, a plug-in for the Eclipse IDE and a headless version for use from the terminal.
The RESTful services are still today the most popular type of web services. Communication between these services and their clients happens through their RESTful APIs and, to correctly use the services, good documentation of their APIs is paramount. With the purpose of facilitating the specification of web APIs, different Interface Definition Languages (IDLs) have been developed. However, they tend to be quite limited and impose severe restrictions on what can be described. As a consequence, many important properties can only be written in natural language.
HeadREST is a specification language for RESTful APIs that poses itself as a solution to the limitations faced by other IDLs. The language has an expressive type system based on refinement types and supports the description of service endpoints through assertions that, among other things, make it possible to express relations between the information transmitted in a request and in the response. HeadREST, like other IDLs, is however not without its limitations and issues. This thesis addresses the problems that currently affect the usability of HeadREST and also its lack of expressiveness for specifying security properties of RESTful APIs. The proposed solution encompasses (i) an extension of HeadREST with new specification primitives that can improve the degree of usability of the language and (ii) an orthogonal extension of HeadREST with specification primitives that support the description of authentication and authorisation policies for RESTful APIs. The evaluation of the proposed solution, performed through a user study, a quantitative analysis and the development of case studies, indicates that the primitives targeting the usability issues indeed improve the usability of the language and that HeadREST became able to capture dynamic, state-based dependencies that exist in the access control policies found in RESTful APIs.
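
    To make the assertion mechanism concrete without guessing at HeadREST's actual syntax, the following Python sketch models a Hoare-triple-style assertion over a REST operation; the endpoint, field names and checks are hypothetical.

# Conceptual model of a HeadREST-style assertion: a precondition, the
# operation's URI template and a postcondition, read as a Hoare triple.
# This is Python, not HeadREST syntax; the endpoint and fields are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Assertion:
    precondition: Callable[[dict], bool]          # over the request
    operation: str                                # URI template of the operation
    postcondition: Callable[[dict, dict], bool]   # over request and response

# "For a positive user id, GET returns 200 and a body carrying that same id."
get_user = Assertion(
    precondition=lambda req: isinstance(req["id"], int) and req["id"] > 0,
    operation="GET /users/{id}",
    postcondition=lambda req, resp: resp["status"] == 200
    and resp["body"]["id"] == req["id"],
)

def holds(a: Assertion, req: dict, resp: dict) -> bool:
    """Vacuously true when the precondition fails, as in a Hoare triple."""
    return (not a.precondition(req)) or a.postcondition(req, resp)

print(holds(get_user, {"id": 7}, {"status": 200, "body": {"id": 7}}))  # True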

    Securing unikernels in cloud infrastructures

    PhD thesis. Cloud computing adoption has seen an increase during the last few years. However, cloud tenants are still concerned about the security that the Cloud Service Provider (CSP) offers. Recent security incidents in cloud infrastructures that exploit vulnerabilities in the software layer highlight the need to develop new protection mechanisms. A recent direction in cloud computing is toward massive consolidation of resources by using lightweight Virtual Machines (VMs) called unikernels. Unikernels are specialised VMs that eliminate the Operating System (OS) layer and offer the advantages of a small footprint, minimal attack surface, near-instant boot times and multi-platform deployment. Even though using unikernels has certain advantages, unikernels suffer from a number of shortcomings. First, unikernels do not employ context switching from user to kernel mode; a malicious user could exploit this shortcoming to escape the isolation boundaries that the hypervisor provides. Second, having a large number of unikernels in a single virtualised host creates complex security policies that are difficult to manage and can introduce exploitable misconfigurations. Third, malicious insiders, such as disgruntled system administrators, can use privileged software to exfiltrate data from unikernels. In this thesis, we divide our research into two parts, concerning the development of software- and hardware-based protection mechanisms for cloud infrastructures that focus on unikernels. In each part, we propose a new protection mechanism for cloud infrastructures where tenants develop their workloads using unikernels. In the first part, we propose a software-based protection mechanism that controls access to resources, which results in least-privileged unikernels. Current access-control mechanisms that reside in hypervisors do not confine unikernels to accepted behaviour and are susceptible to privilege escalation and virtual machine escape attacks. Therefore, current hypervisors need to take into account the possibility of having one or more malicious unikernels and rethink their access-control mechanisms. We designed and implemented VirtusCap, a capability-based access control mechanism that acts as a lower layer regulating access to resources in cloud infrastructures. Consequently, unikernels are only assigned the privileges required to perform their task. This ensures that the access-control mechanism that resides in the hypervisor will only grant access to resources specified with capabilities. In addition, capabilities are easier to delegate to other unikernels when needed, and the security policies are less complex. Our performance evaluation shows that up to a request rate of 7,000 requests per second our prototype's response time is identical to that of XSM-Flask. In the second part, we address the following problem: how to guarantee the confidentiality and integrity of computations executing in a unikernel even in the presence of privileged software used by malicious insiders? A research prototype called UniGuard was designed and implemented, which aims to protect unikernels from an untrusted cloud by executing the sensitive computations inside secure enclaves. This approach provides confidentiality and integrity guarantees for unikernels against software and certain physical attacks. We show how we integrated Intel SGX with unikernels and added the ability to spawn enclaves that execute the sensitive computations.
We conduct experiments to evaluate the performance of UniGuard, which show that UniGuard exhibits acceptable performance overhead in comparison to when the sensitive computations are not executed inside an enclave. To the best of our knowledge, UniGuard is the first solution that protects the confidentiality and integrity of computations that execute inside unikernels using Intel SGX. Currently, unikernels drive the next generation of virtualisation software, especially in cooperation with other virtualisation technologies such as containers to form hybrid virtualisation workloads. Thus, it is paramount to scrutinise the security of unikernels in cloud infrastructures and propose novel protection mechanisms that will drive the next cloud evolution.
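
    The following Python sketch illustrates the capability idea behind VirtusCap at a conceptual level; the API and names are hypothetical, since the real mechanism is enforced inside the hypervisor rather than by guest code.

# Conceptual sketch of capability-based access control in the spirit of
# VirtusCap; the API is hypothetical, and the real checks live in the hypervisor.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Capability:
    resource: str  # e.g. a device, an event channel, a shared-memory page
    right: str     # e.g. "read", "write", "map"

@dataclass
class Unikernel:
    name: str
    caps: set[Capability] = field(default_factory=set)

    def grant(self, cap: Capability, to: "Unikernel") -> None:
        """Delegate a capability, but only one this unikernel already holds."""
        if cap not in self.caps:
            raise PermissionError(f"{self.name} cannot delegate {cap}")
        to.caps.add(cap)

def access(vm: Unikernel, resource: str, right: str) -> bool:
    """Hypervisor-side check: allow access only with a matching capability."""
    return Capability(resource, right) in vm.caps

web = Unikernel("web", {Capability("net0", "write")})
db = Unikernel("db")
web.grant(Capability("net0", "write"), db)   # explicit delegation
print(access(db, "net0", "write"))           # True
print(access(db, "disk0", "read"))           # False: least privilege by default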

    Security techniques for sensor systems and the Internet of Things

    Sensor systems are becoming pervasive in many domains, and are recently being generalized by the Internet of Things (IoT). This wide deployment, however, presents significant security issues. We develop security techniques for sensor systems and the IoT, addressing all security management phases. Prior to deployment, the nodes need to be hardened. We develop nesCheck, a novel approach that combines static analysis and dynamic checking to efficiently enforce memory safety on TinyOS applications. As security guarantees come at a cost, determining which resources to protect becomes important. Our solution, OptAll, leverages game-theoretic techniques to determine the optimal allocation of security resources in IoT networks, taking into account fixed and variable costs, the criticality of different portions of the network, and risk metrics related to a specified security goal. Monitoring IoT devices and sensors during operation is necessary to detect incidents. We design Kalis, a knowledge-driven intrusion detection technique for the IoT that does not target a single protocol or application, and adapts the detection strategy to the network features. As the scale of the IoT makes the devices good targets for botnets, we design Heimdall, a whitelist-based anomaly detection technique for detecting and protecting against IoT-based denial-of-service attacks. Once our monitoring tools detect an attack, determining its actual cause is crucial to an effective reaction. We design a fine-grained analysis tool for sensor networks that leverages resident packet parameters to determine whether a packet loss attack is node- or link-related and, in the second case, to locate the attack source. Moreover, we design a statistical model for determining optimal system thresholds by exploiting the variances of packet parameters. With our techniques' diagnosis information, we develop Kinesis, a security incident response system for sensor networks designed to recover from attacks without significant interruption, dynamically selecting response actions while being lightweight in communication and energy overhead.
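
    As a conceptual illustration of the whitelist-based approach described for Heimdall, consider this hypothetical sketch, in which a device's allowed destinations are learned in a trusted profiling phase and enforced afterwards; the API and the learning/enforcement split are assumptions.

# Hypothetical sketch of whitelist-based anomaly detection: destinations seen
# during a trusted profiling phase are allowed; everything else is flagged.
from collections import defaultdict

class Whitelist:
    def __init__(self) -> None:
        self.allowed: dict[str, set[str]] = defaultdict(set)
        self.learning = True

    def observe(self, device: str, destination: str) -> bool:
        """Learn while profiling; afterwards, report whether traffic is allowed."""
        if self.learning:
            self.allowed[device].add(destination)
            return True
        return destination in self.allowed[device]

wl = Whitelist()
wl.observe("camera-1", "cloud.example.com")          # benign profiling traffic
wl.learning = False                                  # switch to enforcement
print(wl.observe("camera-1", "cloud.example.com"))   # True
print(wl.observe("camera-1", "victim.example.net"))  # False: likely bot traffic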

    The Internet of Things and The Web of Things

    The Internet of Things is creating a new world, a quantifiable and measurable world, where people and businesses can manage their assets in better informed ways, and can make more timely and better informed decisions about what they want or need to do. This new connected world brings with it fundamental changes to society and to consumers. This special issue of ERCIM News thus focuses on various relevant aspects of the Internet of Things and the Web of Things.

    Architectural Alignment of Access Control Requirements Extracted from Business Processes

    Business processes and IT systems are subject to constant evolution and strongly influence each other. This raises the challenge of aligning security aspects within business processes and Enterprise Application Architectures (EAAs). This applies in particular to access control requirements, which play a major role both in IT security and in data protection. Three business-level goals illustrate the importance of access control requirements: 1) identifying and protecting critical and sensitive data and assets; 2) establishing organisation-wide IT security to protect against cybercriminal attacks; 3) complying with the growing flood of laws concerning IT security and data protection. All three goals are strongly tied to business-level access control requirements. Owing to their abundance and complexity, implementing these access control requirements completely and correctly is a challenge for IT. To do so, knowledge must be transferred from the business level to IT, a process made harder by the differing terminologies of the domains involved. In addition, the size of companies, the complexity of EAAs and the interweaving of EAAs with business processes increase the error-proneness of the design process for access permissions and EAAs. This leads to a discrepancy between them and the business processes, which is reinforced by the ever-recurring adaptations caused by the evolution of business processes and IT systems. Previous work that relies on extending modelling languages demands considerable effort from companies to extend existing models and maintain the extensions. Other work relies on manual processes, which require much effort, do not scale and are error-prone for complex systems. The goal of my work is to investigate how access control requirements can be aligned between the business level and IT with as little additional effort for companies as possible. Specifically, I investigate how business-level access control requirements extracted from business processes can be automatically transformed into access permissions for role-based access control (RBAC) systems, and how the EAA can be checked at design time for compliance with the extracted access control requirements. This supports security experts in designing access permissions for RBAC systems and reduces complexity. Furthermore, enterprise architects are enabled to examine the EAA at design time for data flows of services that violate the business-side access control requirements, and to fix these errors. The core contributions of my work can be summarised as follows: I) an approach for the automated extraction of business-side access control requirements from business processes, with subsequent generation of an initial role model for RBAC; II) an approach for the automated creation of architectural data-flow constraints from access control requirements in order to identify forbidden data flows in services of the EAA's IT systems;
III) a process model for companies describing how the approaches can be applied in different evolution scenarios; IV) a model linking the access-control-relevant elements of business processes, RBAC and EAAs. This model is created automatically by the approaches and serves, among other things, to document design decisions, to improve the understanding of models from other domains and to support enterprise architects in resolving errors within the EAA. The applicability of the approaches was examined in two case studies. The first is a real-world study that arose from a cooperation with a state art gallery that is overhauling its IT systems. A further case study was conducted on the basis of the Common Component Modeling Example (CoCoME). CoCoME is a case study of a realistic supermarket chain, developed by the research community specifically for investigating software modelling and extended with evolution scenarios. Owing to various legal regulations concerning IT security and data protection, as well as the flow of sensitive data, both case studies are well suited to investigating access control requirements. Both case studies were conducted using the Goal Question Metric method: validation goals were defined, research questions were systematically derived from them, and metrics were then set up to investigate those questions. The following aspects were examined: the quality of the generated access permissions; the quality of the identification of erroneous data flows in services of the EAA; the completeness and correctness of the generated model for tracing access control requirements across models; and the suitability of the approaches in evolution scenarios of business processes and EAAs. The thesis closes with an outlook on how the presented approaches can be extended. Among other things, it discusses how the model linking access-control-relevant elements of business processes, RBAC and EAAs can be extended with elements from further IT and business-level models, how the approaches can be enriched with additional input information, and how the extracted access control requirements can be used in further IT and business-level domain models.
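
    The first contribution, extracting an initial RBAC role model from business processes, can be illustrated with a hypothetical sketch; real inputs would be full process models (e.g. BPMN lanes and data objects), and the thesis's extraction rules are richer than this.

# Hypothetical sketch: derive an initial RBAC role model from the tasks of a
# business process. Each tuple is (process role, task, data object, action).
from collections import defaultdict

process_tasks = [
    ("clerk",   "register order",  "order",   "write"),
    ("clerk",   "check order",     "order",   "read"),
    ("manager", "approve order",   "order",   "write"),
    ("manager", "view statistics", "reports", "read"),
]

def extract_rbac(tasks) -> dict[str, set[tuple[str, str]]]:
    """Group the permissions each process role needs into an initial role model."""
    roles: dict[str, set[tuple[str, str]]] = defaultdict(set)
    for role, _task, data, action in tasks:
        roles[role].add((data, action))
    return roles

for role, perms in extract_rbac(process_tasks).items():
    print(role, sorted(perms))
# clerk [('order', 'read'), ('order', 'write')]
# manager [('order', 'write'), ('reports', 'read')]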