
    On Provable Security for Complex Systems

    We investigate the contribution of cryptographic proofs of security to a systematic security engineering process. To this end, we study how to model and prove security for concrete applications in three practical domains: computer networks, data outsourcing, and electronic voting. We conclude that cryptographic proofs of security can benefit a security engineering process in formulating requirements, influencing design, and identifying constraints for the implementation.

    Towards Applying Cryptographic Security Models to Real-World Systems

    The cryptographic methodology of formal security analysis usually proceeds in three steps: choosing a security model, describing a system and its intended security properties, and creating a formal proof of security. For basic cryptographic primitives and simple protocols this is a well-understood process that is performed regularly. For the more complex systems used in real-world settings, however, it is rarely applied. In practice, this often leads to missing or incomplete descriptions of the security properties and requirements of such systems, which in turn can lead to insecure implementations and consequent security breaches. One of the main reasons formal models see so little use in practice is that they are difficult to apply and to adapt to new use cases. With this work, we therefore investigate how cryptographic security models can be used to argue about the security of real-world systems. To this end, we perform case studies of three important types of real-world systems: data outsourcing, computer networks, and electronic payment.

    First, we give a unified framework to express and analyze the security of data outsourcing schemes. Within this framework, we define three privacy objectives: data privacy, query privacy, and result privacy. We show that data privacy and query privacy are independent concepts, while result privacy follows from them. We then extend the framework to model integrity for the specific use case of file systems. To validate our model, we show that existing security notions can be expressed within our framework, and we prove the security of CryFS, a cryptographic cloud file system.

    Second, we introduce a model, based on the Universal Composability (UC) framework, in which computer networks and their security properties can be described. We extend it to incorporate time, which cannot be expressed in the basic UC framework, and give formal tools to facilitate its application. For validation, we use this model to reason about the security of architectures composed of multiple firewalls in the presence of an active adversary. We show that a parallel composition of firewalls exhibits strictly better security properties than the other variants.

    Finally, we introduce a formal model for the security of electronic payment protocols within the UC framework. Using this model, we prove a set of necessary requirements for secure electronic payment. Based on these findings, we discuss the security of current payment protocols and find that most are insecure. We then give a simple payment protocol inspired by chipTAN and photoTAN and prove its security within our model.

    We conclude that cryptographic security models can indeed be used to describe the security of real-world systems. They are, however, difficult to apply and always need to be adapted to the specific use case.
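
    To make the chipTAN/photoTAN idea concrete, here is a minimal sketch in Python of the transaction-confirmation step: a trusted device derives a short code bound to the displayed transaction details, so a manipulated recipient or amount invalidates the code. This is an illustration only; the shared device secret, the digest truncation, and the message format are assumptions, not the protocol proven secure in the thesis.

        import hmac, hashlib

        def generate_tan(device_secret: bytes, recipient: str, amount: str) -> str:
            # On the trusted device: bind the TAN to the transaction details
            # the user sees, so tampering with them invalidates the TAN.
            msg = f"{recipient}|{amount}".encode()
            return hmac.new(device_secret, msg, hashlib.sha256).hexdigest()[:6]

        def verify_tan(device_secret: bytes, recipient: str, amount: str,
                       tan: str) -> bool:
            # Bank side: recompute over the details it will actually execute.
            return hmac.compare_digest(
                generate_tan(device_secret, recipient, amount), tan)

        secret = b"per-device shared secret"  # assumed provisioned at setup
        tan = generate_tan(secret, "DE89370400440532013000", "100.00 EUR")
        assert verify_tan(secret, "DE89370400440532013000", "100.00 EUR", tan)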

    Preserving privacy in edge computing

    Edge computing, or fog computing, enables real-time services for smart application users by storing data and services at the edge of the network. Edge devices handle data storage and service provisioning, which has made edge computing the new norm for delay-sensitive smart applications such as automated vehicles, ambient-assisted living, emergency response services, precision agriculture, and smart electricity grids. Despite this great potential, privacy threats are the main barrier to the success of edge computing: attackers can leak private or sensitive information of data owners and modify service-related data to hamper service provisioning in edge computing-based smart applications. This research addresses the privacy of heterogeneous smart application data stored in edge data centers. It focuses on developing privacy-preserving models both for user-generated smart application data and for edge service-related data, such as Quality-of-Service (QoS) data, to ensure unbiased service provisioning.

    We begin by developing privacy-preserving techniques for user data generated by smart applications using steganography, a data-hiding technique. In steganography, sensitive user information is hidden within the non-sensitive portion of the data before the smart application data are outsourced, and the resulting stego data are stored in the edge data center. To be useful for privacy preservation, a steganography approach must be reversible, i.e., lossless. This research focuses on steganography for numerical data (sensor readings) and textual data (DNA sequences and text). Existing steganography approaches for numerical data are irreversible, so we introduce a lossless numerical data steganography approach based on Error Correcting Codes (ECC). Modern lossless approaches to text steganography are mainly application-specific and lack imperceptibility, and DNA steganography requires a reference DNA sequence to reconstruct the original sequence. We therefore present the first blind and lossless DNA sequence steganography approach, based on the nucleotide substitution method, and propose a text steganography method that uses invisible characters and compression-based encoding to ensure reversibility and high imperceptibility. Various experiments validate the proposed methods.

    Searching the stored stego data in the edge data center without disclosing sensitive information is a further challenge. We present a privacy-preserving search framework for stego data that comprises two methods. The first is a keyword-based privacy-preserving search method that lets a user send a search query as a hash string; since it does not support range queries, we also develop a range search method over stego data using an order-preserving encryption (OPE) scheme. In both cases, the search service provider retrieves the corresponding stego data without learning any sensitive information. Several experiments evaluate the performance of the framework.

    Finally, we present a privacy-preserving service computation framework that uses a Fully Homomorphic Encryption (FHE) based cryptosystem to protect the service provider's privacy during service selection and composition.
    Our contributions here are twofold. First, we introduce a privacy-preserving service selection model based on encrypted Quality-of-Service (QoS) values of edge services; the QoS values are encrypted using FHE, and a distributed computation model for service selection using MapReduce improves efficiency. Second, we develop a composition model for edge services based on the functional relationships among them, optimizing the service selection process. Various experiments in both centralized and distributed computing environments evaluate the performance of the proposed framework on a synthetic QoS dataset.
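
    As an illustration of the first (keyword) search method, here is a minimal Python sketch: the client hashes the keyword locally and the edge data center matches the digest against a pre-built index, so the provider never sees the keyword itself. The index layout and digest construction are assumptions for illustration; note also that deterministic hashing leaks repeated queries, and range queries require the OPE-based method instead.

        import hashlib

        def h(keyword: str) -> str:
            # Hypothetical keyword digest; the thesis's construction may differ.
            return hashlib.sha256(keyword.lower().encode()).hexdigest()

        # Edge data center: index maps hashed keywords to stego-object IDs.
        index = {h("glucose"): ["stego-17", "stego-42"],
                 h("heartrate"): ["stego-08"]}

        def search(query_hash: str) -> list:
            # Server-side lookup: sees only the digest, never the keyword.
            return index.get(query_hash, [])

        print(search(h("glucose")))  # client sends only the digest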

    Data Service Outsourcing and Privacy Protection in Mobile Internet

    Mobile Internet data are characterized by large scale, diverse patterns, and complex associations. On the one hand, efficient data processing models are needed to support data services; on the other hand, substantial computing resources are needed to provide data security services. Because the resources of mobile terminals are limited, they cannot perform large-scale data computation and storage themselves, yet outsourcing to third parties introduces risks for user privacy. This monograph focuses on key technologies for data service outsourcing and privacy protection, including existing methods of data analysis and processing, fine-grained data access control through effective user privacy protection mechanisms, and data sharing in the mobile Internet.

    MATCOS-10


    A abordagem POESIA para a integração de dados e serviços na Web semantica

    Advisor: Claudia Bauzer Medeiros. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. POESIA (Processes for Open-Ended Systems for Information Analysis), the approach proposed in this work, supports the construction of complex processes that involve the integration and analysis of data from several sources, particularly in scientific applications. The approach is centered on two types of semantic Web mechanisms: scientific workflows, to specify and compose Web services; and domain ontologies, to enable semantic interoperability and management of data and processes. The main contributions of this thesis are: (i) a theoretical framework to describe, discover, and compose data and services on the Web, including rules to check the semantic consistency of resource compositions; (ii) ontology-based methods to help integrate data and estimate data provenance in cooperative processes on the Web; (iii) partial implementation and validation of the proposal in a real application in the domain of agricultural planning, analyzing the benefits and the efficiency and scalability limitations of current semantic Web technology when faced with large volumes of data. Doctorate in Computer Science.
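
    As a toy illustration of what such consistency rules can look like, the following Python sketch checks that each service's output type is subsumed, under a small hypothetical type hierarchy, by the next service's declared input type. The hierarchy, the service signatures, and the subsumption rule are invented for illustration and are not taken from the thesis.

        # Hypothetical type hierarchy: child type -> parent type.
        subclass_of = {"RainfallMap": "Map", "Map": "Data"}

        def subsumes(general, specific):
            # Walk up the hierarchy from the specific type.
            while specific is not None:
                if specific == general:
                    return True
                specific = subclass_of.get(specific)
            return False

        # (service name, input type, output type)
        chain = [("FetchRainfall", "Region", "RainfallMap"),
                 ("RenderMap", "Map", "Image")]

        def consistent(chain):
            # Each output must fit the next service's expected input.
            return all(subsumes(nxt_in, out)
                       for (_, _, out), (_, nxt_in, _) in zip(chain, chain[1:]))

        print(consistent(chain))  # True: a RainfallMap is a Map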

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data,” which ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need for novel methods and approaches for big data analysis, and innovative changes in algorithm theory are being pursued to meet it. For example, polynomial-time algorithms have thus far been regarded as “fast,” but a quadratic-time algorithm applied to a petabyte-scale or larger data set runs into serious problems in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has produced high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I is a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; and Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
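
    As a generic illustration of the paradigm (not an example taken from the book), the following Python sketch estimates the mean of a huge dataset by random sampling in time independent of the input size; by standard concentration bounds, the error shrinks on the order of 1/sqrt(sample_size).

        import random

        def approx_mean(data, sample_size=1000):
            # O(sample_size) work regardless of len(data): constant time
            # in the size of the input.
            total = sum(data[random.randrange(len(data))]
                        for _ in range(sample_size))
            return total / sample_size

        population = [i % 100 for i in range(10_000_000)]  # stand-in "big data"
        print(approx_mean(population))  # close to the true mean of 49.5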

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several kinds of disconnectedness: applications partition data into segregated islands; small notes do not fit into traditional application categories; navigating the data works differently for each kind of data; and data is available either on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these kinds of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means of organization such as tagging; the heterogeneity of the data is accommodated by specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF as follows: wiki text is stored as RDF and can both contain structured text and be combined with structured data; files are supported by the CoIM model but kept externally; notes can be captured quickly and annotated with metadata. Generic means of organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse, and all data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
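
    To make the RDF-plus-tagging idea concrete, here is a minimal Python sketch using the rdflib library: a note is stored as RDF triples and retrieved by tag with a single generic query. The namespace, property names, and query are invented for illustration; they are not HYENA's actual data model or the REMM.

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/coim/")  # hypothetical namespace
        g = Graph()

        note = EX["note42"]
        g.add((note, EX.content, Literal("Call Alice about the draft")))
        g.add((note, EX.tag, Literal("todo")))
        g.add((note, EX.tag, Literal("work")))

        # Tag-based navigation is generic: the same query pattern works
        # for notes, files, and wiki text alike.
        query = """SELECT ?item WHERE { ?item <http://example.org/coim/tag> "todo" }"""
        for row in g.query(query):
            print(row.item)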

    A patient agent controlled customized blockchain based framework for internet of things

    Although Blockchain implementations have emerged as revolutionary technologies for various industrial applications, including cryptocurrencies, they have not been widely deployed to store data streaming from sensors to remote servers in architectures known as the Internet of Things (IoT). New Blockchain-for-IoT models promise secure solutions for eHealth, smart cities, and other applications, paving the way for continuous monitoring of patients' physiological signs with wearable sensors to augment traditional medical practice without recourse to storing data with a trusted authority. However, existing Blockchain algorithms cannot accommodate the huge volumes, security, and privacy requirements of health data.

    In this thesis, our first contribution is an end-to-end secure eHealth architecture that introduces an intelligent Patient Centric Agent. The Patient Centric Agent, executing on dedicated hardware, manages the storage of, and access to, streams of sensor-generated health data in a customized Blockchain and other, less secure repositories. Because IoT devices cannot host Blockchain technology given their limited memory, power, and computational resources, the Patient Centric Agent coordinates and communicates with a private customized Blockchain on behalf of the wearable devices. While the adoption of a Patient Centric Agent addresses continuous monitoring of patients' health as well as storage, data privacy, and network security issues, the architecture remains vulnerable to Denial of Service (DoS) and single-point-of-failure attacks. To address this, our second contribution is a decentralised eHealth system in which the Patient Centric Agent is replicated at three levels: the Sensing Layer, the NEAR Processing Layer, and the FAR Processing Layer, with the agent's functionality customized to the tasks of each level. Simulations confirm that the architecture is protected against DoS attacks.

    Few patients require all of their health data to be stored in Blockchain repositories; instead, an appropriate storage medium should be selected for each chunk of data by matching personal needs and preferences with the features of candidate storage mediums. Motivated by this context, our third contribution is a recommendation model for health data storage that accommodates patient preferences and makes storage decisions rapidly, in real time, even for streamed data; the mapping between health data features and the characteristics of each repository is learned using machine learning.

    The Blockchain's capacity to make transactions and store records without central oversight also enables IoT applications outside health, such as underwater IoT networks, where the unattended nature of the nodes threatens their security and privacy. Underwater IoT differs from ground IoT in that acoustic signals are the communication medium, leading to high propagation delays and high error rates, exacerbated by turbulent water currents. Our fourth contribution is therefore a customized Blockchain-leveraged framework, with the Patient Centric Agent model renamed the Smart Agent, for securely monitoring underwater IoT. Finally, the Smart Agent is investigated in an IoT smart home and city monitoring framework. The key algorithms underpinning each contribution have been implemented and analysed using simulators. Doctor of Philosophy.
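
    As a minimal illustration of the tamper-evidence property the customized Blockchain relies on (a generic sketch, not the thesis's consensus or storage design), the following Python code chains blocks of sensor readings by hash, so modifying any stored reading breaks every later link.

        import hashlib, json, time

        def make_block(data, prev_hash):
            # Each block commits to its predecessor's hash before being hashed
            # itself, creating an append-only, tamper-evident chain.
            block = {"time": time.time(), "data": data, "prev": prev_hash}
            block["hash"] = hashlib.sha256(
                json.dumps(block, sort_keys=True).encode()).hexdigest()
            return block

        chain = [make_block({"genesis": True}, "0" * 64)]
        chain.append(make_block({"patient": "p01", "heart_rate": 72},
                                chain[-1]["hash"]))

        def valid(chain):
            return all(b["prev"] == a["hash"] for a, b in zip(chain, chain[1:]))

        print(valid(chain))  # True until any stored reading is modified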