
    Survivability modeling for cyber-physical systems subject to data corruption

    Cyber-physical critical infrastructures are created when traditional physical infrastructure is supplemented with advanced monitoring, control, computing, and communication capability. More intelligent decision support and improved efficacy, dependability, and security are expected. Quantitative models and evaluation methods are required for determining the extent to which a cyber-physical infrastructure improves on its physical predecessors. It is essential that these models reflect both the cyber and physical aspects of operation and failure. In this dissertation, we propose quantitative models for dependability attributes, in particular survivability, of cyber-physical systems. Any malfunction or security breach, whether cyber or physical, that causes the system operation to depart from specifications will affect these dependability attributes. Our focus is on data corruption, which compromises decision support -- the fundamental role played by cyber infrastructure. The first research contribution of this work is a Petri net model for information exchange in cyber-physical systems, which i) facilitates evaluation of the extent of data corruption at a given time, and ii) illuminates the service degradation caused by propagation of corrupt data through the cyber infrastructure. In the second research contribution, we propose metrics and an evaluation method for survivability, which captures the extent of functionality retained by a system after a disruptive event. We illustrate the application of our methods through case studies on smart grids, intelligent water distribution networks, and intelligent transportation systems. Data, cyber infrastructure, and intelligent control are part and parcel of nearly every critical infrastructure that underpins daily life in developed countries. Our work provides means for quantifying and predicting the service degradation caused when cyber infrastructure fails to serve its intended purpose. It can also serve as the foundation for efforts to fortify critical systems and mitigate inevitable failures --Abstract, page iii
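
    A minimal sketch of how a Petri-net-style model can expose the extent of corruption at a given time: places hold clean and corrupt data tokens, and transitions move them through the infrastructure. The place names, transition rule, and firing loop are illustrative assumptions, not the dissertation's actual model.

```python
# Places hold token counts: clean and corrupt data items at each node.
marking = {
    "sensor_clean": 5, "sensor_corrupt": 1,
    "controller_clean": 0, "controller_corrupt": 0,
}

def fire_transfer(marking, src, dst):
    """Fire one transition: move a token from src to dst if src is non-empty."""
    if marking[src] > 0:
        marking[src] -= 1
        marking[dst] += 1
        return True
    return False

# Corrupt tokens flow to the controller unchanged, so corruption
# propagates through the cyber infrastructure alongside clean data.
while fire_transfer(marking, "sensor_corrupt", "controller_corrupt"):
    pass
while fire_transfer(marking, "sensor_clean", "controller_clean"):
    pass

# Extent of corruption at the controller at this instant:
total = marking["controller_clean"] + marking["controller_corrupt"]
print(marking["controller_corrupt"] / total)  # 1/6 of delivered data is corrupt
```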

    Identity and Access Management System: a Web-Based Approach for an Enterprise

    Managing digital identities and access control for enterprise users and applications remains one of the greatest challenges facing computing today. One attempt to address this issue is the proposed security paradigm of an Identity and Access Management (IAM) service based on IAM standards. Current approaches such as the Lightweight Directory Access Protocol (LDAP), the Central Authentication Service (CAS), and the Security Assertion Markup Language (SAML) lack comprehensive analysis from conception to physical implementation, resulting in impractical and fragmented solutions. In this paper, we implement an Identity and Access Management System (IAMSys) using the Lightweight Directory Access Protocol (LDAP), focusing on authentication, authorization, administration of identities, and audit reporting. Its primary concern is verifying the identity of an entity and granting the correct level of access to resources protected either in the cloud or in on-premises systems. The research followed a phased methodology: any enterprise or organization willing to adopt IAMSys must carry out careful planning and demonstrate a good understanding of the technologies involved. The results of the experimental evaluation indicated an average rating score of 72.0% among the participants in this study. This implies that IAMSys, if properly deployed, is a way to mitigate security challenges associated with authentication, authorization, data protection, and accountability
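
    The authentication and authorization steps of an LDAP-backed IAM flow like the one described above can be sketched with the ldap3 Python library. The server URL, directory DNs, and group names below are hypothetical placeholders, not values from the paper.

```python
from ldap3 import Server, Connection, ALL

def authenticate(username: str, password: str) -> bool:
    """Verify identity by binding to the directory as the user."""
    server = Server("ldap://directory.example.com", get_info=ALL)
    user_dn = f"uid={username},ou=people,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    return conn.bind()  # True only if the directory accepts the credentials

def authorize(username: str, group: str) -> bool:
    """Grant access only if the user is a member of the required group."""
    server = Server("ldap://directory.example.com")
    conn = Connection(server)  # anonymous bind for the lookup, for simplicity
    conn.bind()
    conn.search(
        f"cn={group},ou=groups,dc=example,dc=com",
        f"(member=uid={username},ou=people,dc=example,dc=com)",
        attributes=["cn"],
    )
    return len(conn.entries) > 0
```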

    Providing trusted execution environments using FPGA

    Trusted Execution Environments (TEEs) drastically reduce the trusted computing base (TCB) of systems by providing a secure execution environment for security-critical applications, isolated from the operating system and the hypervisor. TEEs are often assumed to be highly secure; however, over the last few years TEEs have proven weak: both TEEs built upon security-oriented hardware extensions (e.g., Arm TrustZone and Intel SGX) and those resorting to dedicated secure elements have been exploited multiple times. In this paper, we propose a novel TEE design, named Trusted Execution Environments On-Demand (TEEOD), which leverages the reconfigurable logic of FPGA-SoCs to dynamically provide secure execution environments for security-critical applications. Unlike other TEE designs, ours can provide high-bandwidth connections and physical on-chip isolation while providing configurable hardware and software TCBs. We implemented a proof-of-concept (PoC) targeting an Ultra96-V2 platform. Our evaluation demonstrates that TEEOD can host up to 6 simultaneous enclaves with a resource usage per enclave of 7.0%, 3.8%, and 15.3% of the total LUTs, FFs, and BRAMs, respectively
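
    The reported per-enclave utilization implies that BRAM is the binding resource, which is consistent with the six-enclave ceiling. A quick check, assuming enclaves are identical and their resource usage simply adds up:

```python
# Per-enclave utilization reported in the abstract, in percent of device total.
per_enclave = {"LUT": 7.0, "FF": 3.8, "BRAM": 15.3}

# Count how many enclaves fit before any resource class is exhausted.
n = 0
while all(per_enclave[r] * (n + 1) <= 100.0 for r in per_enclave):
    n += 1

print(n)  # 6 -- BRAM binds: 6 * 15.3% = 91.8%, while 7 would need 107.1%
```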

    A policy language definition for provenance in pervasive computing

    Recent advances in computing technology have led to the paradigm of pervasive computing, which provides a means of simplifying daily life by integrating information processing into the everyday physical world. Pervasive computing draws its power from knowing the surroundings and creates an environment which combines computing and communication capabilities. Sensors that provide high-resolution spatial and instantaneous measurements are most commonly used for forecasting, monitoring, and real-time environmental modelling. Sensor data generated by a sensor network depends on several influences, such as the configuration and location of the sensors or the processing performed on the raw measurements. Storing sufficient metadata that gives meaning to the recorded observations is important in order to draw accurate conclusions or to enhance the reliability of the result dataset that uses this automatically collected data. This kind of metadata is called provenance data, as the origin of the data and the process by which it arrived from its origin are recorded. Provenance is still an exploratory field in pervasive computing and many open research questions are yet to emerge. The context information and the particular characteristics of the pervasive environment call for different approaches to a provenance support system. This work implements a policy language definition that specifies the collection model for provenance management systems and addresses the challenges that arise with stream data and sensor environments. The structure graph of the proposed model is mapped to the Open Provenance Model in order to facilitate the sharing of provenance data and interoperability with other systems. As provenance security has been recognized as one of the most important components of any provenance system, an access control language has been developed that is tailored to support the special requirements of provenance: fine-grained policies, privacy policies, and preferences. Experimental evaluation shows a reasonable overhead for provenance collection and reasonable provenance query times, while a numerical analysis was used to evaluate the storage overhead
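
    A toy illustration of an Open Provenance Model-style record for one sensor reading, together with a fine-grained access check of the kind such an access control language supports. The node names, policy structure, and check are invented for illustration; the thesis defines its own language.

```python
# Open Provenance Model core node kinds: Artifact, Process, Agent.
provenance = {
    "artifacts": {"reading_42": {"value": 21.7, "unit": "degC"}},
    "processes": {"calibrate_03": {"algorithm": "linear-offset"}},
    "agents": {"sensor_17": {"location": "riverbank-A"}},
    "edges": [
        ("reading_42", "wasGeneratedBy", "calibrate_03"),   # artifact <- process
        ("calibrate_03", "wasControlledBy", "sensor_17"),   # process <- agent
    ],
}

# A fine-grained policy: which provenance sections each role may read.
policy = {
    "analyst": {"artifacts", "processes", "edges"},   # agent locations withheld (privacy)
    "administrator": {"artifacts", "processes", "agents", "edges"},
}

def query(role: str, section: str):
    """Return a provenance section only if the role's policy permits it."""
    if section not in policy.get(role, set()):
        raise PermissionError(f"{role} may not read {section}")
    return provenance[section]

print(query("analyst", "edges"))
# query("analyst", "agents") would raise PermissionError
```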

    Design of secure and trustworthy system-on-chip architectures using hardware-based root-of-trust techniques

    Cyber-security is now a critical concern in a wide range of embedded computing modules, communications systems, and connected devices. These devices are used in medical electronics, automotive systems, power grid systems, robotics, and avionics. The general consensus today is that conventional approaches and software-only schemes are not sufficient to provide the desired security protections and trustworthiness. Comprehensive hardware-software security solutions have so far remained elusive. One major challenge is that in current system-on-chip (SoC) designs, processing elements (PEs) and executable codes with varying levels of trust are all integrated on the same computing platform to share resources. This interdependency of modules creates a fertile attack ground and represents the Achilles’ heel of heterogeneous SoC architectures. The salient research question addressed in this dissertation is “can one design a secure computer system out of non-secure or untrusted computing IP components and cores?”. In response to this question, we establish a generalized, user/designer-centric set of design principles intended to advance the construction of secure heterogeneous multi-core computing systems. We develop algorithms, models of computation, and hardware security primitives to integrate secure and non-secure processing elements into the same chip design while aiming for: (a) maintaining each individual core’s security; (b) preventing data leakage and corruption; (c) promoting data and resource sharing among the cores; and (d) tolerating malicious behaviors from untrusted processing elements and software applications. The key contributions of this thesis are:
    1. The introduction of a new architectural model for integrating processing elements with different security and trust levels, i.e., secure and non-secure cores with trusted and untrusted provenances;
    2. A generalized process isolation design methodology for the new architecture model that covers both the software and hardware layers to (i) create hardware-assisted virtual logical zones, and (ii) perform both static and runtime security, privilege level, and trust authentication checks;
    3. A set of secure protocols and hardware root-of-trust (RoT) primitives to support the process isolation design and to provide the following functionalities: (i) hardware immutable identities – using physical unclonable functions; (ii) core hijacking and impersonation resistance – through a blind signature scheme; (iii) threshold-based data access control – with a robust and adaptive secure secret sharing algorithm; (iv) privacy-preserving authorization verification – by proposing a group anonymous authentication algorithm; and (v) denial-of-resource or denial-of-service attack avoidance – by developing an interconnect network routing algorithm and a memory access mechanism according to user-defined security policies;
    4. An evaluation of the security of the proposed hardware primitives in the post-quantum era, and possible extensions and algorithmic modifications for their post-quantum resistance.
    In this dissertation, we advance the practicality of secure-by-construction methodologies in SoC architecture design. The methodology allows for the use of unsecured or untrusted processing elements in the construction of these secure architectures and tries to extend their effectiveness into the post-quantum computing era
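
    As an illustration of contribution 3(iii), threshold-based data access control, the sketch below uses textbook Shamir secret sharing over a prime field. It stands in for the thesis's robust, adaptive scheme; the parameters and field size are illustrative.

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field for polynomial arithmetic

def share(secret: int, k: int, n: int):
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, k=3, n=5)   # any 3 of 5 cores suffice
assert reconstruct(shares[:3]) == 123456789  # a quorum reconstructs the secret
assert reconstruct(shares[2:]) == 123456789  # any quorum works
```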

    Quality assessment technique for ubiquitous software and middleware

    The new paradigm of computing or information systems is ubiquitous computing systems. The technology-oriented issues of ubiquitous computing systems have led researchers to pay more attention to feasibility studies of the technologies than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted, and enhanced over the years; for example, the recognised standard quality model ISO/IEC 9126 is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is unlikely that this scheme is directly applicable to ubiquitous computing environments, which differ considerably from conventional software; this raises a major concern with reformulating existing methods and, especially, elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as quality targets for both the evaluation and the development processes of ubiquitous computing products. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, and in some cases with measures. In addition, the quality model has been applied in an industrial setting of the ubiquitous computing environment. This revealed that while the approach is sound, some parts need further development in the future
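
    A toy aggregation over the three ISO/IEC 9126 levels (characteristics, sub-characteristics, metrics) of the kind such a quality target implies. The characteristic names, metrics, and scores are invented for illustration; the paper defines its own ubiquitous-computing-specific set.

```python
# Three-level quality model: characteristic -> sub-characteristic -> metric score.
quality_model = {
    "reliability": {
        "fault tolerance": {"recovery-time-metric": 0.8, "failover-metric": 0.9},
        "maturity": {"defect-density-metric": 0.7},
    },
    "usability": {
        "context awareness": {"adaptation-accuracy-metric": 0.6},
    },
}

def characteristic_score(char: str) -> float:
    """Average metric scores up through sub-characteristics to a characteristic."""
    subs = quality_model[char]
    sub_scores = [sum(metrics.values()) / len(metrics) for metrics in subs.values()]
    return sum(sub_scores) / len(sub_scores)

# Print one aggregate score per characteristic as a simple quality profile.
for char in quality_model:
    print(f"{char}: {characteristic_score(char):.2f}")
```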

    Trustee: A Trust Management System for Fog-enabled Cyber Physical Systems

    In this paper, we propose a lightweight trust management system (TMS) for fog-enabled cyber physical systems (Fog-CPS). Trust computation is based on multi-factor and multi-dimensional parameters, and formulated as a statistical regression problem which is solved by employing a random forest regression model. Additionally, as Fog-CPS systems can be deployed in open and unprotected environments, the CPS devices and fog nodes are vulnerable to numerous attacks, namely collusion, self-promotion, badmouthing, ballot-stuffing, and opportunistic service. Compromised entities can impact the accuracy of the trust computation model by increasing or decreasing the trust of other nodes. These challenges are addressed by designing a generic trust credibility model which can counter the compromise of both CPS devices and fog nodes. The credibility of each newly computed trust value is evaluated and subsequently adjusted by correlating it with a standard deviation threshold. The standard deviation is quantified by computing the trust in two configurations of hostile environments and subsequently comparing it with the trust value in a legitimate/normal environment. Our results demonstrate that the credibility model successfully counters the malicious behaviour of all Fog-CPS entities, i.e., CPS devices and fog nodes. The multi-factor trust assessment and credibility evaluation enable accurate and precise trust computation and guarantee a dependable Fog-CPS system
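
    A hedged sketch of the two stages described above: trust computed as a regression over multi-factor evidence using a random forest, and a credibility check that rejects newly computed trust values deviating beyond a standard-deviation threshold. The feature set, synthetic data, and threshold rule are assumptions for illustration, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Evidence per interaction: e.g. packet delivery ratio, response latency,
# recommendation consistency (invented factors for illustration).
X = rng.uniform(0, 1, size=(200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]  # synthetic ground-truth trust

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

def credible(new_trust: float, history: np.ndarray, k: float = 2.0) -> bool:
    """Accept a new trust value only if it lies within k std-devs of history."""
    return abs(new_trust - history.mean()) <= k * history.std()

history = model.predict(X[:50])
reported = model.predict(X[50:51])[0]
print(credible(reported, history))  # in-distribution value: accepted
print(credible(0.99, history))      # inflated (ballot-stuffing-like): likely rejected
```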