28 research outputs found

    CENSOR: Privacy-preserving Obfuscation for Outsourcing SAT formulas

    We propose a novel obfuscation technique that can be used to outsource hard satisfiability (SAT) formulas to the cloud. Servers with large computational power are typically used to solve SAT instances that model real-life problems in task scheduling, AI planning, circuit verification, and more. However, outsourcing data to the cloud may lead to privacy and information breaches, since satisfying assignments may reveal considerable information about the underlying problem modeled by SAT. In this work, we develop CENSOR (privaCy prEserviNg obfuScation for Outsourcing foRmulas), a novel SAT obfuscation framework that resembles Indistinguishability Obfuscation. At the core of the framework lies a mechanism that transforms any formula into a random one with the same number of satisfying assignments. As a result, obfuscated formulas are indistinguishable from each other, thus preserving the input-output privacy of the original SAT instance. Contrary to prior solutions, which are rather ad hoc in nature, we formally prove the security of our scheme. Additionally, we show that obfuscated formulas are within a polynomial factor of the size of the original ones, thus achieving polynomial slowdown. Finally, the whole process is efficient in practice, allowing solutions to original instances to be easily recovered from obfuscated ones. A byproduct of our method is that all NP problems can potentially be outsourced to the cloud by means of reduction to SAT.
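    The heart of the construction is a count-preserving transformation. A much simpler transformation than the paper's illustrates why a model count can survive obfuscation: renaming variables and flipping literal polarities are both bijections on assignments, so the number of satisfying assignments is preserved exactly. Below is a minimal Python sketch of that weaker idea (illustrative only, not the actual CENSOR construction); clauses are lists of signed integers in DIMACS style, where 3 means x3 and -3 means NOT x3.

        import random

        def obfuscate(cnf, num_vars, seed=None):
            # Randomly permute variable names and flip polarities. Both are
            # bijections on assignments, so the model count is unchanged.
            rng = random.Random(seed)
            perm = list(range(1, num_vars + 1))
            rng.shuffle(perm)                          # variable i -> perm[i-1]
            flip = [rng.choice([1, -1]) for _ in range(num_vars)]
            key = (perm, flip)                         # kept secret by the client
            out = [[flip[abs(l) - 1] * perm[abs(l) - 1] * (1 if l > 0 else -1)
                    for l in clause] for clause in cnf]
            return out, key

        def recover(assignment, key):
            # Map a satisfying assignment of the obfuscated formula (dict:
            # obfuscated var -> bool) back to one of the original formula.
            perm, flip = key
            return {i: assignment[p] if flip[i - 1] == 1 else not assignment[p]
                    for i, p in enumerate(perm, start=1)}

        cnf = [[1, -2], [2, 3]]            # (x1 OR NOT x2) AND (x2 OR x3)
        obf, key = obfuscate(cnf, 3, seed=7)
        print(obf)                         # same model count, different appearance

    The real scheme goes much further (the output is meant to look like a random formula with that model count), but recovery has the same shape: the client keeps a secret key and inverts the transformation on the returned assignment.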

    From Isomorphism-Based Security for Graphs to Semantics-Preserving Security for the Resource Description Framework (RDF)

    Resource Description Framework (RDF) is a graph-like data model designed for the web. One of its compelling features is a precise, model-theoretic semantics. We address security in the context of RDF. We first observe that the problem of securing RDF is closely related to the more traditional problem of securing graphs. Consequently, before we address security for RDF, we make broader contributions to security in the context of graphs. Specifically, we reconcile four different notions of security that have been proposed in prior work, and compare them from the standpoint of strength of security, i.e., whether satisfaction of one implies satisfaction of another. We then ask whether strength of security is correlated with computational complexity, and make the somewhat surprising observation that the answer is "no." We then extend the two strongest notions of security in a natural way to RDF. We establish results on RDF's semantics that give us a way of meaningfully quantifying the loss of information due to security. Thus, for RDF, we are able to pose the natural trade-off between information quality and security in a precise way. We show that a corresponding decision problem is NP-complete, and, with a reduction to CNF-SAT that we have designed and implemented, present empirical results on realistic data. Our empirical results show interesting relationships between the various parameters, such as the size of the graph versus the information loss incurred to achieve a particular level of security.
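    The paper's own CNF-SAT encoding is not reproduced in this abstract, but the shape of such reductions is standard: introduce a Boolean variable per atomic choice, emit clauses for the constraints, and hand the result to an off-the-shelf solver in DIMACS format. As a stand-in, the Python sketch below encodes a different, classical graph decision problem, 3-colorability, to show that shape.

        def three_coloring_cnf(n, edges):
            # Variable v(i, c) is true iff vertex i gets color c (c in 0..2).
            var = lambda i, c: 3 * i + c + 1              # 1-based DIMACS ids
            clauses = []
            for i in range(n):
                clauses.append([var(i, c) for c in range(3)])       # >= 1 color
                for c1 in range(3):
                    for c2 in range(c1 + 1, 3):
                        clauses.append([-var(i, c1), -var(i, c2)])  # <= 1 color
            for i, j in edges:
                for c in range(3):
                    clauses.append([-var(i, c), -var(j, c)])  # endpoints differ
            return clauses

        def to_dimacs(clauses, num_vars):
            lines = ["p cnf %d %d" % (num_vars, len(clauses))]
            lines += [" ".join(map(str, cl)) + " 0" for cl in clauses]
            return "\n".join(lines)

        # A triangle is 3-colorable, so a SAT solver reports SAT on this input.
        print(to_dimacs(three_coloring_cnf(3, [(0, 1), (1, 2), (0, 2)]), 9))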

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Security and trust in cloud computing and IoT through applying obfuscation, diversification, and trusted computing technologies

    Cloud computing and the Internet of Things (IoT) are widely deployed and commonly used technologies today. The advanced services offered by cloud computing have made it a highly demanded technology, and enterprises and businesses increasingly rely on the cloud to deliver services to their customers. The prevalent use of the cloud means that more data is stored outside the organization's premises, which raises concerns about the security and privacy of the stored and processed data and highlights the significance of effective security practices for the cloud infrastructure. The number of IoT devices is growing rapidly, and the technology is being employed in a wide range of sectors, including smart healthcare, industry automation, and smart environments. These devices collect and exchange a great deal of information, some of which may contain critical and personal data of their users. Hence, it is highly important to protect the data collected and shared over the network; nevertheless, studies show that attacks on these devices are increasing, while a high percentage of IoT devices lack proper security measures to protect the devices, the data, and the privacy of the users. In this dissertation, we study the security of cloud computing and IoT and propose software-based security approaches, supported by hardware-based technologies, to provide robust measures for enhancing the security of these environments. To achieve this goal, we use obfuscation and diversification as the software security techniques: code obfuscation protects software from malicious reverse engineering, and diversification mitigates the risk of large-scale exploits. We study trusted computing and Trusted Execution Environments (TEE) as the hardware-based security solutions. The Trusted Platform Module (TPM) provides security and trust through a hardware root of trust and assures the integrity of a platform. We also study Intel SGX, a TEE solution that guarantees the integrity and confidentiality of the code and data loaded into its protected container, the enclave. More precisely, through obfuscation and diversification of the operating systems and APIs of IoT devices, we secure them at the application level, and through obfuscation and diversification of the communication protocols, we protect the communication of data between them at the network level. To secure cloud computing, we employ obfuscation and diversification techniques on the cloud computing software at the client side. For an enhanced level of security, we employ the hardware-based security solutions TPM and SGX; in addition to security, these solutions ensure layered trust from the hardware up to the application. Through the delivered publications, this dissertation addresses a number of security risks targeting IoT and cloud computing and presents a brief outlook on future research directions.
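    As a toy illustration of the diversification idea (an assumed simplification; the dissertation applies it to operating system and IoT interfaces and to communication protocols, not to Python objects), the sketch below gives each device instance a unique random renaming of an internal API. An exploit hard-coded against the stock symbol names then fails on every diversified instance, while legitimate code shipped with the matching rename table keeps working.

        import random
        import string

        STOCK_API = {"read_sensor": lambda: 42,
                     "send_report": lambda x: "sent %s" % x}

        def diversify(api, seed):
            # The seed plays the role of a per-device secret.
            rng = random.Random(seed)
            rename = {name: "_" + "".join(rng.choices(string.ascii_lowercase, k=12))
                      for name in api}
            return {rename[name]: fn for name, fn in api.items()}, rename

        device_api, table = diversify(STOCK_API, seed=2024)

        # A legitimate, co-diversified caller resolves names through the table.
        value = device_api[table["read_sensor"]]()
        print(device_api[table["send_report"]](value))

        # An attacker's payload calling device_api["read_sensor"] directly
        # raises KeyError on this instance and on every other one.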

    Emerging Opportunities and Challenges in Hardware Security

    Recent years have seen the rapid development of many emerging technologies in various areas of computer engineering, such as new devices, new fabrication techniques for integrated circuits (IC), and new computation frameworks. In this dissertation, we study the security challenges these emerging technologies face as well as the security opportunities they bring. Specifically, we investigate the security opportunities in double patterning lithography, the security challenges in physical unclonable functions, and security issues in machine learning. Double patterning lithography (DPL) is an emerging fabrication technique for ICs. We study the security opportunities that DPL brings at the layout level. DPL can be used to set up two independent mask development lines that need not share any information. Under this setup, we consider an attack model in which untrusted employees with access to only one mask may try to infer the entire circuit design or insert additional malicious circuitry into it. As a countermeasure, we customize DPL to decompose the layout into two sub-layouts in such a way that each sub-layout individually exposes minimal information about the other, and hence protects the entire layout from any untrusted personnel. Physical unclonable functions (PUF) are a type of circuit for which each copy (of the same circuit structure) has a unique and unpredictable functionality. The unpredictable behavior is caused by manufacturing variations of electronic devices. However, for many state-of-the-art PUF designs, we show that the device variations can be estimated using an optimization-theoretic formulation, and hence the PUF's input-output behavior becomes predictable. Simulations show a substantial reduction in attack complexity compared to previously proposed machine-learning-based attacks. The neural network (NN) is an emerging computation framework for machine learning (ML). It is increasingly popular for system developers to use pre-trained NN models instead of training their own, because training is painstaking and sometimes requires private data. We call these pre-trained neural models neural intellectual properties (IP). Neural IPs raise multiple security concerns. On the one hand, as the IP user does not know about the training process, it is crucial to ensure the integrity of the neural IP. To this end, we investigate hidden malicious functionality, i.e., neural Trojans, that can be embedded into neural IPs, and propose effective mitigation techniques. On the other hand, the neural IP owner may want to protect the NN model from reverse engineering attacks. However, it has been shown that hardware side channels can be exploited to decipher the structure of neural networks. We propose both a novel attack based on a cache timing side channel and a defensive memory access mechanism. NNs also raise challenges for conventional hardware security techniques. Specifically, we focus on the challenge they pose to logic locking, a strong key-based protection of hardware IP against untrusted foundries that injects incorrect behavior into the digital functionality when the key is incorrect. We formally prove a trade-off between the amount of injected error and the complexity of Boolean satisfiability (SAT)-based attacks that find the correct key. Due to the inherent error resiliency of NNs, state-of-the-art logic locking schemes fail to inject enough error to derail NN-based applications while maintaining exponential SAT attack complexity. To fix this issue, we propose a novel secure and effective logic locking scheme, called Strong Anti-SAT (SAS), to lock the hardware and ensure that NN models undergo significant accuracy loss when any wrong key is applied.
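    The optimization-based PUF attack can be illustrated on the textbook additive-delay model of an n-stage arbiter PUF (a standard model in the literature, not necessarily the dissertation's exact formulation): the response to a challenge c is sign(w · φ(c)) for a device-specific weight vector w and a parity feature map φ, so fitting a linear model to observed challenge-response pairs (CRPs) makes all further responses predictable. A minimal numpy sketch:

        import numpy as np

        rng = np.random.default_rng(0)
        n, train, test = 32, 4000, 1000

        w_true = rng.normal(size=n + 1)        # secret manufacturing delays

        def features(challenges):
            # Parity transform: phi_i = prod_{j >= i} (1 - 2*c_j), plus a bias.
            signs = 1 - 2 * challenges                      # {0,1} -> {+1,-1}
            phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
            return np.hstack([phi, np.ones((len(challenges), 1))])

        C = rng.integers(0, 2, size=(train + test, n))
        X = features(C)
        y = np.sign(X @ w_true)                # observed PUF responses, +/-1

        # "Attack": plain logistic regression on the collected CRPs.
        w = np.zeros(n + 1)
        for _ in range(300):
            p = 1 / (1 + np.exp(-X[:train] @ w))
            w -= 0.05 * X[:train].T @ (p - (y[:train] + 1) / 2) / train

        acc = np.mean(np.sign(X[train:] @ w) == y[train:])
        print("accuracy on unseen challenges: %.3f" % acc)

    The dissertation's point is that an optimization-theoretic estimate of the device variations (the role played by w here) suffices; the simulation above merely shows why the linearity of the underlying delay model makes such estimation tractable.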

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

    In the last few years, Artificial Intelligence (AI) has achieved notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to occur in Machine Learning, the whole community stands before the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g., ensembles or Deep Neural Networks) that were not present in the previous hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect of what is yet to be reached. For this purpose, we summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
