
    Comparison of data integrity models

    Data integrity in computer-based information systems is a concern because of the damage that unauthorized manipulation or modification of data can cause. While a standard exists for data security, there is currently no accepted standard for integrity. A data integrity policy needs to be incorporated into the data security standard in order to produce a complete protection policy. Several existing models address data integrity: the Biba, Goguen and Meseguer, and Clark/Wilson data integrity models each offer a definition of data integrity and introduce their own mechanisms for preserving it. Acceptance of one of these models as a standard for data integrity will create a complete protection policy which addresses both security and integrity.
    http://archive.org/details/comparisonofdati1094543739
    US Marine Corps (USMC) author. Approved for public release; distribution is unlimited.
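
    For concreteness, here is a minimal sketch (in Python, with illustrative integrity levels) of the Biba strict-integrity rules referenced above: subjects may not read down to lower-integrity data, and may not write up to higher-integrity data.

```python
# A minimal sketch of the Biba strict integrity rules ("no read down,
# no write up"); the level names and values are illustrative only.
from enum import IntEnum

class Level(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

def can_read(subject: Level, obj: Level) -> bool:
    # Simple-integrity property: a subject may read an object only at
    # its own integrity level or higher ("no read down").
    return obj >= subject

def can_write(subject: Level, obj: Level) -> bool:
    # Integrity *-property: a subject may write an object only at its
    # own integrity level or lower ("no write up").
    return obj <= subject

assert can_read(Level.MEDIUM, Level.HIGH)       # reading up is allowed
assert not can_write(Level.MEDIUM, Level.HIGH)  # writing up is blocked
```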

    Composable Distributed Access Control and Integrity Policies for Query-Based Wireless Sensor Networks

    An expected requirement of wireless sensor networks (WSN) is the support of a vast number of users while granting them only limited access privileges. Although WSN nodes have severe resource constraints, WSNs will need to restrict access to data, enforcing security policies to protect data within the network. To date, WSN security has largely been based on encryption and authentication schemes. The WSN Authorization Specification Language (WASL) is specified and implemented using tools coded in Java. WASL is a mechanism-independent policy language that can specify arbitrary, composable security policies. The construction, hybridization, and composition of well-known security models are demonstrated and shown to preserve security while allowing modifications that permit inter-network accesses with no more impact on the WSN nodes than any other policy update. Using WASL and a naive data compression scheme, a multi-level security policy for a 1000-node network requires 66 bytes of memory per node, which can reasonably be distributed throughout a WSN. The compilation of a variety of policy compositions is shown to be feasible using a notebook-class computer like that expected to perform typical WSN management responsibilities.
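
    The abstract does not show WASL's concrete syntax, so the following is a hypothetical Python sketch of the underlying idea only: mechanism-independent policies expressed as predicates over access requests, composed conjunctively so that a hybrid policy allows a request only if every constituent model does.

```python
# Hypothetical sketch of composing mechanism-independent policies, in the
# spirit of WASL (the real WASL syntax and tooling are not shown here).
# A policy maps a (subject_level, object_level, action) request to a
# decision; composition is conjunctive: all composed policies must allow.
from typing import Callable, Tuple

Request = Tuple[int, int, str]   # (subject level, object level, action)
Policy = Callable[[Request], bool]

def mls_read_policy(req: Request) -> bool:
    subj, obj, action = req
    # Bell-LaPadula-style "no read up": reads only at or below clearance.
    return action != "read" or subj >= obj

def biba_write_policy(req: Request) -> bool:
    subj, obj, action = req
    # Biba-style "no write up": writes only at or below own integrity.
    return action != "write" or subj >= obj

def compose(*policies: Policy) -> Policy:
    return lambda req: all(p(req) for p in policies)

hybrid = compose(mls_read_policy, biba_write_policy)
print(hybrid((2, 1, "read")))   # True: clearance dominates object label
print(hybrid((1, 2, "write")))  # False: writing above own level is denied
```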

    Protection Models for Web Applications

    Early web applications were sets of static web pages linked to one another. In contrast, modern web applications are full-featured programs that are nearly equivalent to desktop applications in functionality. However, web servers and web browsers, which were initially designed for static web pages, have not updated their protection models to deal with the security consequences of these full-featured programs. This mismatch has been the source of several security problems in web applications. This dissertation proposes new protection models for web applications and describes the design and implementation of prototypes of these models in a web server and a web browser. Experiments demonstrate the improvements in security and performance from using these protection models. Finally, the dissertation also describes systematic design methods to support the security of web applications.

    Context-Aware Access Control Model for Cloud Computing

    To counter malicious insider attacks on cloud computing environments, a new Context-Aware Access Control Model for cloud computing (CAACM) is presented. Reflecting the characteristics of cloud computing, spatial state, temporal state, and platform trust level are taken as context. The model establishes mechanisms of authorization from cloud management roles to objects, enabling dynamic activation of role permissions by associating cloud management roles with context. It also achieves fine-grained access control on cloud objects by supervising the permissions of management roles over their full life cycle. Moreover, it introduces the concept of an exclusive managerial role, which extends access control from static protection of resources to dynamic authorization of managerial roles. Further, it describes the approach to role permission activation systematically. CAACM is formally proven to be safe, laying the groundwork for its deployment in cloud computing systems.
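
    A hypothetical sketch of the central CAACM mechanism, dynamic role activation gated on context: a management role's permissions activate only when the spatial state, temporal state, and platform trust level all satisfy the role's constraints. All names and thresholds below are illustrative, not taken from the paper.

```python
# Context-gated role activation, CAACM-style: a role is only usable
# while its spatial, temporal, and platform-trust conditions all hold.
from dataclasses import dataclass

@dataclass
class Context:
    location: str          # spatial state
    hour: int              # temporal state (0-23)
    platform_trust: float  # platform trust level in [0, 1]

@dataclass
class ManagementRole:
    name: str
    allowed_locations: set
    active_hours: range
    min_trust: float

    def activate(self, ctx: Context) -> bool:
        # Permissions are dynamically activated only in a valid context.
        return (ctx.location in self.allowed_locations
                and ctx.hour in self.active_hours
                and ctx.platform_trust >= self.min_trust)

admin = ManagementRole("cloud-admin", {"datacenter-A"}, range(8, 18), 0.8)
ctx = Context(location="datacenter-A", hour=10, platform_trust=0.9)
print(admin.activate(ctx))  # True: all three context conditions hold
```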

    Considerations towards the development of a forensic evidence management system

    The decentralized nature of the Internet forms its very foundation, yet it is this very nature that has opened networks and individual machines to a host of threats and attacks from malicious agents. Consequently, forensic specialists - tasked with the investigation of crimes committed through the use of computer systems, where evidence is digital in nature - are often unable to reach convincing conclusions in their investigations. Challenges to reliable forensic investigation include the lack of a global view of the investigation landscape and the complexity and obfuscated nature of the digital world. A perpetual challenge within the evidence analysis process is the reliability and integrity of digital evidence, particularly evidence from disparate sources. Given the ease with which digital evidence (such as metadata) can be created, altered, or destroyed, the integrity attributed to digital evidence is of paramount importance. This dissertation focuses on the challenges relating to the integrity of digital evidence within reliable forensic investigations and addresses them by proposing a model for the construction of a Forensic Evidence Management System (FEMS) that preserves the integrity of digital evidence within forensic investigations. The Biba Integrity Model is utilized to maintain the integrity of digital evidence within the FEMS, and Casey's Certainty Scale is employed as the integrity classification scheme for assigning integrity labels to digital evidence within the system. The FEMS model consists of a client layer, a logic layer, and a data layer, with eight system components distributed amongst these layers. In addition to describing the FEMS system components, a finite state automaton is utilized to describe the system component interactions. In so doing, we reason about the FEMS's behaviour and demonstrate how rules within the FEMS can be developed to recognize and profile various cyber crimes. Furthermore, we design fundamental algorithms for the processing of information by the FEMS's core system components; this provides further insight into the system component interdependencies and into the input and output parameters for the system transitions and decision points influencing the value of inferences derived within the FEMS. Lastly, the completeness of the FEMS is assessed by comparing its constructs and operation against the published work of Brian D. Carrier. This approach provides a mechanism for critically analyzing the FEMS model, identifying similarities or impactful considerations within the solution approach and, more importantly, shortcomings within the model. Ultimately, the greatest value of the FEMS is its ability to serve as a decision support or enhancement system for digital forensic investigators.
    Dissertation (MSc), University of Pretoria, 2010.
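
    The following is a minimal sketch, under stated assumptions, of how Casey's Certainty Scale labels (C0 through C6, roughly from erroneous/contradicted up to tamper-proof; the level semantics here are paraphrased) could feed a Biba-style rule in a system like the FEMS: an inference is labeled with the integrity of its weakest supporting evidence, a low-watermark variant of Biba.

```python
# Evidence items carry an integrity label taken from a certainty scale;
# a Biba-style low-watermark rule caps every inference at the integrity
# of its least trustworthy input, so conclusions never look stronger
# than the evidence behind them.
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    certainty: int  # Casey-scale index: 0 (lowest) .. 6 (highest)

def inference_integrity(evidence: list[Evidence]) -> int:
    # Low-watermark: an inference is only as trustworthy as its
    # least trustworthy supporting item.
    return min(e.certainty for e in evidence)

log = Evidence("web server access log", certainty=4)
meta = Evidence("file metadata from suspect machine", certainty=2)
print(inference_integrity([log, meta]))  # 2: capped by the weaker source
```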

    Trusted data path protecting shared data in virtualized distributed systems

    When sharing data across multiple sites, service applications should not be trusted automatically. Services that are suspected of faulty, erroneous, or malicious behaviour, or that run on systems that may be compromised, should not be able to gain access to protected data or be entrusted with the same data access rights as others. This thesis proposes a context flow model that controls the information flow in a distributed system. Each service application, along with its surrounding context, is treated as a controllable principal, and a trust-based access control model is defined to control the information exchange between these principals. An online monitoring framework is used to evaluate the trustworthiness of the service applications and the underlying systems, and an external communication interception runtime framework enforces trust-based access control transparently for the entire system.
    Ph.D. Committee Chair: Karsten Schwan; Committee Members: Douglas M. Blough, Greg Eisenhauer, Mustaque Ahamad, Wenke Lee.
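
    A hypothetical sketch of trust-based access control in this spirit: an online monitor maintains a trust score per service principal, and an interception layer releases a data item only to principals whose current score meets the item's threshold. The monitor interface and scores below are illustrative, not the thesis's actual framework.

```python
# Trust scores come from runtime monitoring of a service and its host
# system; here they are set directly for illustration. The release
# check stands in for the communication interception layer.
class TrustMonitor:
    def __init__(self):
        self._scores = {}

    def report(self, principal: str, score: float):
        # Record the latest trust evaluation for a principal.
        self._scores[principal] = score

    def trust(self, principal: str) -> float:
        # Unknown principals get zero trust by default.
        return self._scores.get(principal, 0.0)

def release(data_threshold: float, principal: str, monitor: TrustMonitor) -> bool:
    # Data flows to a principal only if its trust meets the threshold.
    return monitor.trust(principal) >= data_threshold

monitor = TrustMonitor()
monitor.report("analytics-service", 0.6)
print(release(0.5, "analytics-service", monitor))  # True
print(release(0.9, "analytics-service", monitor))  # False: not trusted enough
```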

    Refactoring of Security Antipatterns in Distributed Java Components

    The importance of JAVA as a programming and execution environment has grown steadily over the past decade. Furthermore, the IT industry has adopted JAVA as a major building block for the creation of new middleware as well as a technology facilitating the migration of existing applications towards web-driven environments. In parallel, the role of security in distributed environments has gained attention, as a large number of middleware applications have replaced enterprise-level mainframe systems. The protection of confidentiality, integrity, and availability is therefore critical for the market success of a product. The vulnerability level of every product is determined by its weakest embedded component, and selling vulnerable products can cause enormous economic damage to software vendors. An important goal of this work is to create the awareness that using a programming language designed to be secure is not sufficient to create secure and trustworthy distributed applications. Moreover, incorporating the threat model of the programming language improves risk analysis by allowing a better definition of the application's attack surface. The evolution of a programming language leads towards common patterns for solutions to recurring quality aspects. Suboptimal solutions, also known as 'antipatterns', are typical causes of quality weaknesses such as security vulnerabilities. Moreover, the exposure to a specific environment is an important parameter for threat analysis, as code considered secure in one scenario can cause unexpected risks when the environment changes. Antipatterns are a well-established means, at the abstraction level of system modeling, of informing about the effects of incomplete solutions, which are also important in the later stages of the software development process. Especially at the implementation level, we see a deficit of helpful examples that would give programmers a better and more holistic understanding. Our basic assumption links programmers' missing experience regarding the security properties of patterns within their code to the creation of software vulnerabilities. Traditional software development models address security properties only on the meta layer. To transfer these efficiently to the practical level, we provide a three-stage approach: First, we focus on typical security problems within JAVA applications and develop a standardized catalogue of 'antipatterns' with examples from standard software products; detecting and avoiding these antipatterns positively influences software quality. Second, we focus on possible enhancements to common models of the software development process; these help to control and identify the occurrence of antipatterns during development activities, i.e. during the coding phase and during the phase of component assembly, integrating one's own and third-party code. Third, emphasizing the practical focus of this research, we implement prototypical tools to support the software development phase. The practical findings of this research helped to enhance the security of the standard JAVA platforms and JEE frameworks. We verified the relevance of our methods and tools by applying them to standard software products, leading to a measurable reduction of vulnerabilities and an information exchange with middleware vendors (Sun Microsystems, JBoss) targeting runtime security.
Our goal is to enable software architects and developers of end-user applications to apply our findings to embedded standard components in their own environments. From a high-level perspective, software architects profit from this work through the projection of quality-of-service goals onto protection details; this supports their task of deriving security requirements when selecting standard components. To give implementation-oriented practitioners a helpful starting point, we provide tools and case studies for achieving security improvements within their own code base.
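
    As an illustration of what a catalogue-driven antipattern check might look like, here is a deliberately naive Python sketch that flags one classic JAVA security antipattern, reflective class loading from non-constant input; the dissertation's actual catalogue and tools are not reproduced here.

```python
# Naive line-based scan for string-built reflective class loading
# (Class.forName on a non-literal argument), a classic antipattern
# because attacker-controlled data may then choose the loaded class.
import re

# Flags Class.forName( followed by anything other than a string literal.
ANTIPATTERN = re.compile(r"Class\.forName\(\s*[^\")]")

def scan(java_source: str):
    for lineno, line in enumerate(java_source.splitlines(), start=1):
        if ANTIPATTERN.search(line):
            yield lineno, line.strip()

sample = '''
Class ok = Class.forName("com.example.Fixed");        // literal: not flagged
Class bad = Class.forName(request.getParameter("c")); // flagged
'''
for lineno, line in scan(sample):
    print(f"line {lineno}: {line}")
```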

    Consistency and protection of data in collaborative systems using formal methods

    Complex software and Web content development nowadays involves multiple programmers in different locations working together on the same project through collaborative editing systems, with the aim of improving productivity and reducing development time and cost. A key challenge for such systems is ensuring the convergence and reliability of the shared data or content. The literature offers several approaches to this challenge, among them the multi-version (MV) approach, commutative replicated data types (CRDT), and operational transformation (OT). The first is based on the "copy, modify and merge" principle and uses a central server to merge the updates from the different sites participating in the collaboration; the local updates of a given site are merged only on demand. Its main drawbacks are the cost of storing the various versions on the server and the overhead of generating timestamps to order operations during merging, which make it difficult to use in a distributed collaborative environment. The second approach assumes that all operations are commutative, so they can be executed in any order. The third transforms each operation received from a remote site against its concurrent operations before applying it; an inclusive transformation (IT) algorithm is used to ensure that the copies converge, but most algorithms proposed in the literature do not satisfy the convergence criteria. Besides convergence, data reliability remains another challenge in collaborative systems. To address it, many programs encapsulate crosscutting concerns (e.g., security, logging) to meet data confidentiality and integrity requirements. In the literature, aspect-oriented programming (AOP) is one of the approaches used to modularize such concerns and so ease the maintenance and reuse of software components. A challenge of this paradigm, however, is the assurance that a given property, such as a security property, remains satisfied after the base program is woven with the aspects; this calls for automated techniques to verify such security properties in the woven program. Concerning data reliability, access control is a major piece of the puzzle: for multimedia content published on the Web, one challenge is to collaborate in order to produce the content, and the other key challenge is to make it reliable.
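
    A minimal sketch of the OT idea for the textbook case of two concurrent character insertions: each remote operation is transformed (inclusive transformation) against the concurrent local one before being applied, so both replicas converge. The site-id tie-break below is one common convention, not the thesis's specific algorithm.

```python
# Inclusive transformation (IT) for concurrent single-character inserts.
from dataclasses import dataclass

@dataclass
class Insert:
    pos: int   # insertion position in the document
    char: str  # inserted character
    site: int  # originating site id, used to break position ties

def it_insert(op: Insert, other: Insert) -> Insert:
    # Transform `op` against a concurrent `other` already applied locally:
    # shift right if `other` inserted at or before `op`'s position.
    if other.pos < op.pos or (other.pos == op.pos and other.site < op.site):
        return Insert(op.pos + 1, op.char, op.site)
    return op

def apply(doc: str, op: Insert) -> str:
    return doc[:op.pos] + op.char + doc[op.pos:]

# Site 1 and site 2 both edit "abc" concurrently at the same position:
o1 = Insert(pos=1, char="x", site=1)   # "abc" -> "axbc"
o2 = Insert(pos=1, char="y", site=2)   # "abc" -> "aybc"

# Each site applies its own op, then the transformed remote op; both
# replicas converge to the same state.
site1 = apply(apply("abc", o1), it_insert(o2, o1))
site2 = apply(apply("abc", o2), it_insert(o1, o2))
assert site1 == site2 == "axybc"
```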

    Cyber physical security of avionic systems

    Cyber-physical security is a significant concern for critical infrastructures. The exponential growth of cyber-physical systems (CPSs) and the strong inter-dependency between the cyber and physical components introduce integrity issues such as vulnerability to the injection of malicious data and fake sensor measurements. Traditional security models partition a CPS, from a security perspective, into just two domains: high and low. This absolute partition, however, is not adequate for current CPSs, which are composed of multiple overlapping partitions. Information flow properties are one of the significant classes of cyber-physical security methods; they model how the inputs of a system affect its outputs across the security partition. Information flow supports traceability, which helps in detecting vulnerabilities and anomalous sources as well as in devising mitigation measures. To address the challenges associated with securing CPSs, two novel approaches are introduced, both representing a CPS as a graph structure. The first is an automated graph-based information flow model that identifies information flow paths in the avionics system and partitions them into security domains; it is applied to selected aspects of avionic systems to identify vulnerabilities in case of a system failure or an attack and to provide possible mitigation measures. The second approach uses graph neural networks (GNN) to classify the graphs into different security domains. Using these two approaches, the CPS can be successfully partitioned into different security domains and their optimal coverage identified. These approaches enable designers and engineers to ensure the integrity of the CPS, and engineers and operators can use this process at design time and in real time to identify failures or attacks on the system.
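
    A hypothetical sketch of the graph-based view used by the first approach: components are nodes tagged with a security domain, directed edges are information flows, and a simple search reports flow paths that cross domain boundaries as candidate points for inspection or mitigation. The component names and domains below are invented for illustration, not taken from the thesis.

```python
# Components as domain-tagged nodes, information flows as directed
# edges; enumerate flow paths that cross a security-domain boundary.
from collections import defaultdict

edges = defaultdict(list)
domain = {}

def add_component(name: str, dom: str):
    domain[name] = dom

def add_flow(src: str, dst: str):
    edges[src].append(dst)

def cross_domain_paths(start: str, path=None):
    path = (path or []) + [start]
    for nxt in edges[start]:
        if nxt in path:
            continue  # avoid cycles
        if domain[nxt] != domain[start]:
            yield path + [nxt]  # flow crosses a security-domain boundary
        yield from cross_domain_paths(nxt, path)

add_component("gps_sensor", "low")
add_component("nav_computer", "high")
add_component("display", "low")
add_flow("gps_sensor", "nav_computer")
add_flow("nav_computer", "display")

for p in cross_domain_paths("gps_sensor"):
    print(" -> ".join(p))
# gps_sensor -> nav_computer
# gps_sensor -> nav_computer -> display
```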