    Hidden in the Cloud: Advanced Cryptographic Techniques for Untrusted Cloud Environments

    In the contemporary digital age, the ability to search and perform operations on encrypted data has become increasingly important. This significance is primarily due to the exponential growth of data, often referred to as the "new oil," and the corresponding rise in data privacy concerns. As more and more data is stored in the cloud, the need for robust security measures to protect this data from unauthorized access and misuse has become paramount. One of the key challenges in this context is the ability to perform meaningful operations on the data while it remains encrypted. Traditional encryption techniques, while providing a high level of security, render the data unusable for any practical purpose other than storage. This is where advanced cryptographic protocols like Symmetric Searchable Encryption (SSE), Functional Encryption (FE), Homomorphic Encryption (HE), and Hybrid Homomorphic Encryption (HHE) come into play. These protocols not only ensure the confidentiality of data but also allow computations on encrypted data, thereby offering a higher level of security and privacy.

    The ability to search and perform operations on encrypted data has several practical implications. For instance, it enables efficient Boolean queries on encrypted databases, which is crucial for many "big data" applications. It also allows for the execution of phrase searches, which are important for many machine learning applications, such as intelligent medical data analytics. These capabilities are particularly relevant for sensitive data, such as health records or financial information, where the privacy and security of user data are of utmost importance. Furthermore, they can help build trust in digital systems. Trust is a critical factor in the adoption and use of digital services. By ensuring the confidentiality, integrity, and availability of data, these protocols can help build user trust in cloud services. This trust, in turn, can drive the wider adoption of digital services, leading to a more inclusive digital society.

    However, while these capabilities offer significant advantages, they also present certain challenges. For instance, the computational overhead of these protocols can be substantial, making them less suitable for scenarios where efficiency is a critical requirement. Moreover, these protocols often require sophisticated key management mechanisms, which can be challenging to implement in practice. There is therefore a need for ongoing research to make these protocols more efficient and practical for real-world applications.

    The research publications included in this thesis offer a deep dive into the intricacies and advancements of cryptographic protocols in the context of the challenges and needs highlighted above. Publication I presents a novel approach to hybrid encryption, combining the strengths of Attribute-Based Encryption (ABE) and SSE. This fusion aims to overcome the inherent limitations of both techniques, offering a more secure and efficient solution for key sharing and access control in cloud-based systems. Publication II further expands on SSE, showcasing a dynamic scheme that emphasizes forward and backward privacy, crucial for ensuring data integrity and confidentiality. Publications III and IV delve into the potential of Multi-Input Functional Encryption (MIFE), demonstrating its applicability in real-world scenarios such as designing encrypted private databases and additive reputation systems. These publications highlight the potential of MIFE in bridging the gap between theoretical cryptographic concepts and practical applications. Lastly, Publication V underscores the significance of HE and HHE as foundational elements for secure protocols, emphasizing their potential on devices with limited computational capabilities. In essence, these publications not only validate the importance of searching and performing operations on encrypted data but also provide innovative solutions to the challenges mentioned above. They collectively underscore the transformative potential of advanced cryptographic protocols in enhancing data security and privacy, paving the way for a more secure digital future.
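
    The "compute on encrypted data" property that HE provides is easy to see in miniature. Below is a minimal Python sketch of Paillier-style additive homomorphic encryption with deliberately tiny, insecure parameters; it is our own illustration, not code from the thesis or its publications.

```python
# Toy Paillier-style additive homomorphic encryption.
# Illustrative only: tiny primes, no padding, not constant-time.
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 1789, 1931          # toy primes; real deployments use ~2048-bit moduli
n = p * q
n2 = n * n
g = n + 1                  # the standard simple choice of generator
lam = lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds plaintexts.
a, b = 123, 456
assert decrypt(encrypt(a) * encrypt(b) % n2) == a + b
```

    Multiplying two ciphertexts modulo n^2 yields an encryption of the sum of the plaintexts, so an untrusted server can aggregate values it cannot read.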

    A novel smart contract based blockchain with sidechain for electronic voting

    Several countries have been researching digital voting methods in order to overcome the challenges of paper balloting and physical voting. The recent coronavirus disease 2019 (COVID-19) pandemic has compelled the remote implementation of existing systems and procedures. Online voting will ultimately become the norm, just like unified payments interface (UPI) payments and online banking. With digital voting or electronic voting (e-voting), a small bug can cause massive vote rigging. E-voting must be honest, exact, safe, and simple. E-voting is vulnerable to malware, which can disrupt servers. Blockchain's end-to-end validation solves these problems. Three smart contracts are employed: voter, candidate, and voting. The problem of fraudulent actions is addressed using vote coins, which indicate voter status. Sidechain technology complements blockchain: sidechains improve blockchain functionality by performing operations outside the mainchain and delivering the results to it. Thus, storing the encrypted vote on the sidechain and using the decrypted result on the mainchain reduces cost. This authorization paradigm also simplifies building access control policies that grant only authorized users access to the votes for counting. Results show that the proposed e-voting system improves security against replay attacks and reduces processing cost as well as processing time.
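
    The abstract does not include the contracts themselves, so as a rough illustration of the vote-coin mechanism it describes, here is a hypothetical Python sketch: each registered voter holds exactly one vote coin, casting a ballot spends it, and previously seen ballot identifiers are rejected. All names and structure are our assumptions, not the paper's code.

```python
# Hypothetical sketch of the "vote coin" idea: spending the coin marks the
# voter as having voted, and recorded ballot IDs block replayed ballots.

class VotingContract:
    def __init__(self, voters, candidates):
        self.vote_coins = {v: 1 for v in voters}   # voter-contract state
        self.tally = {c: 0 for c in candidates}    # candidate-contract state
        self.seen_ballots = set()                  # replay protection

    def cast_vote(self, voter, candidate, ballot_id):
        if ballot_id in self.seen_ballots:
            raise ValueError("replayed ballot rejected")
        if self.vote_coins.get(voter, 0) != 1:
            raise ValueError("unauthorized voter or coin already spent")
        if candidate not in self.tally:
            raise ValueError("unknown candidate")
        self.vote_coins[voter] = 0                 # spend the vote coin
        self.seen_ballots.add(ballot_id)
        self.tally[candidate] += 1                 # in the paper, votes stay
                                                   # encrypted on the sidechain

election = VotingContract(voters=["alice", "bob"], candidates=["X", "Y"])
election.cast_vote("alice", "X", ballot_id="b1")
assert election.tally["X"] == 1
```

    In the paper's design the encrypted votes would live on the sidechain and only the decrypted result would reach the mainchain; this sketch collapses both into one in-memory object.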

    DeVoS: Deniable Yet Verifiable Vote Updating

    Internet voting systems are supposed to meet the same high standards as traditional paper-based systems when used in real political elections: freedom of choice, universal and equal suffrage, secrecy of the ballot, and independent verifiability of the election result. Although numerous Internet voting systems have been proposed to achieve these challenging goals simultaneously, few come close in reality. We propose a novel publicly verifiable and practically efficient Internet voting system, DeVoS, that advances the state of the art. The main feature of DeVoS is its ability to protect voters' freedom of choice in several dimensions. First, voters in DeVoS can intuitively update their votes in a way that is deniable to observers but verifiable by the voters; in this way voters can secretly overwrite potentially coerced votes. Second, in addition to basic vote privacy, DeVoS also guarantees strong participation privacy by hiding, end to end, which voters have submitted ballots and which have not. Finally, DeVoS is fully compatible with Perfectly Private Audit Trail, a state-of-the-art Internet voting protocol with practical everlasting privacy. In combination, DeVoS offers a new way to secure free Internet elections with strong and long-term privacy properties.
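
    A core ingredient behind deniable vote updating is that a re-randomized ciphertext is indistinguishable from a fresh one. The toy Python sketch below shows this for textbook multiplicative ElGamal over a small group; it is only an assumed illustration of the underlying primitive, not DeVoS's actual construction.

```python
# Toy ElGamal re-randomization: multiplying a ciphertext by an encryption
# of 1 changes its appearance but not its plaintext, so an updated ballot
# looks like any fresh one. Parameters are tiny and insecure on purpose.
import random

p = 2039                      # safe prime: p = 2q + 1 with q = 1019 prime
q = 1019
g = 4                         # generates the order-q subgroup

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(ct):
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def dec(ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - sk, p) % p

vote = enc(pow(g, 1, p))      # encode choice "1" as g^1
updated = rerandomize(vote)   # indistinguishable from a fresh ciphertext
assert dec(updated) == dec(vote)
```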

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Formal Verification of Verifiability in E-Voting Protocols

    Election verifiability is one of the main security properties of e-voting protocols, referring to the ability of independent entities, such as voters or election observers, to validate the outcome of the voting process. It can be ensured by means of formal verification, which applies mathematical logic to verify the considered protocols under well-defined assumptions, specifications, and corruption scenarios. Automated tools offer an efficient and accurate way to perform formal verification, enabling comprehensive analysis of all execution scenarios and eliminating the human error of manual verification. However, the existing formal verification frameworks that are suitable for automation are not general enough to cover a broad class of e-voting protocols: they do not cover revoting and cannot be tuned to the weaker or stronger levels of security that may be achievable in practice. We therefore propose a general formal framework that allows automated verification of verifiability in e-voting protocols. Our framework is easily applicable to many protocols and corruption scenarios, and it allows refined specifications of election procedures, for example accounting for revote policies. We apply our framework to the analysis of several real-world case studies, where we capture both known and new attacks and provide new security guarantees. First, we consider Helios, a prominent web-based e-voting protocol that aims to provide end-to-end verifiability; it is, however, vulnerable to ballot stuffing when the voting server is corrupt. Second, we consider Belenios, which builds upon Helios and aims to achieve stronger verifiability, preventing ballot stuffing by splitting the trust between a registrar and the server. Both of these systems have been used in many real-world elections. Our third case study is Selene, which aims to simplify the individual verification procedure for voters by providing them with trackers for verifying their votes in the clear at the end of the election. Finally, we consider the Estonian e-voting protocol, which has been deployed for national elections since 2005 and has continuously evolved to offer better verifiability guarantees, but had no formal analysis. We apply our framework to realistic models of all these protocols, deriving the first automated formal analysis in each case. As a result, we find several new attacks, improve the corresponding protocols to address their weaknesses, and prove that verifiability holds for the new versions.
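
    As a concrete, if simplistic, reading of the property being analyzed, the Python sketch below spells out individual and universal verifiability as bulletin-board checks. The thesis performs this analysis with symbolic models and automated provers, not code like this; the sketch and its names are ours.

```python
# Minimal sketch of verifiability checks: a voter confirms their ballot is
# on the public bulletin board (individual verifiability), and any observer
# confirms the announced tally counts exactly the board's ballots
# (universal verifiability); ballot stuffing would break that equality.
from collections import Counter
from hashlib import sha256

def ballot_hash(ballot: bytes) -> str:
    return sha256(ballot).hexdigest()

def individual_verify(my_ballot: bytes, bulletin_board: list) -> bool:
    # Recorded-as-cast: my ballot appears on the board.
    return ballot_hash(my_ballot) in bulletin_board

def universal_verify(opened_votes: list, announced_tally: dict) -> bool:
    # Counted-as-recorded: the result matches the opened ballots.
    return Counter(opened_votes) == Counter(announced_tally)

board = [ballot_hash(b"alice:X"), ballot_hash(b"bob:Y")]
assert individual_verify(b"alice:X", board)
assert universal_verify(["X", "Y"], {"X": 1, "Y": 1})
```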

    Breaking the t < n/3 Consensus Bound: Asynchronous Dynamic Proactive Secret Sharing under Honest Majority

    A proactive secret sharing scheme (PSS), expressed in the dynamic-membership setting, enables a committee of n holders of secret shares, dubbed players, to securely hand over new shares of the same secret to a new committee. We dub such a sub-protocol a Refresh. All existing PSS under an honest majority require the use of a broadcast channel (BC) in each refresh. BC is costly to implement, and its security relies on timing assumptions on the network, so the privacy of the secret and/or its guaranteed delivery depends either on network assumptions or on the reliability of a public ledger. By contrast, PSS over asynchronous channels do not have these constraints. However, all of them (but one, with exponential complexity) use asynchronous verifiable secret sharing (AVSS) and consensus (MVBA and/or ACS), which are impossible under asynchrony beyond t < n/3 corruptions, whatever the setup. We present a PSS, named asynchronous proactive secret sharing (APSS), which is the first PSS under honest majority with guaranteed output delivery in a completely asynchronous network. More generally, APSS allows any flexible threshold t < n, such that privacy and correctness are guaranteed up to t corruptions, and liveness as soon as t + 1 players behave honestly. Correctness can be lifted to any number of corruptions, provided a linearly homomorphic commitment scheme. Moreover, each refresh completes at the record speed of 2δ, where δ is the actual message delivery delay. APSS demonstrates that proactive refreshes are possible as long as players of the initial committee only have a common view on a set of (publicly committed or encrypted) shares. Despite not providing consensus on a unique set of shares, APSS surprisingly enables the opening of any linear map over secrets, non-interactively and without consensus. This, in turn, applies to threshold signing, decryption, and randomness generation. APSS can also be directly integrated into the asynchronous Schnorr threshold signing scheme Roast [CCS '22]. Of independent interest, we:
    - provide the first UC formalization (and proof) of proactive AVSS, furthermore for arbitrary thresholds;
    - provide additional mechanisms enabling players of a committee to start a refresh then erase their old shares, synchronously up to δ from each other;
    - improve by 50x the verification speed of the NIZKs of encrypted re-sharing of [Cascudo et al., Asiacrypt '22], by using novel optimizations of batch Schnorr proofs of knowledge.
    We demonstrate the efficiency of APSS with an implementation which uses this optimization as a baseline.
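
    The refresh idea is easiest to see for plain Shamir sharing. In the Python sketch below, each old shareholder re-shares its share to a new committee, which recombines the pieces with Lagrange coefficients: the secret is preserved while every individual share changes. This is our simplification and omits the commitments, encryption, proofs, and asynchrony that APSS actually requires.

```python
# Toy Shamir sharing with a proactive refresh over a prime field.
import random

P = 2**31 - 1                      # a Mersenne prime as the field modulus

def share(secret, t, n):
    # Degree-t polynomial with the secret as constant term; n evaluations.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def lagrange_at_zero(xs):
    # Lagrange coefficients for interpolating a polynomial at x = 0.
    coeffs = {}
    for i in xs:
        num, den = 1, 1
        for j in xs:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coeffs[i] = num * pow(den, -1, P) % P
    return coeffs

def reconstruct(shares):
    lam = lagrange_at_zero(list(shares))
    return sum(lam[i] * s for i, s in shares.items()) % P

def refresh(old_shares, t, n_new):
    # Each old player re-shares its share; new player j sums the pieces
    # weighted by the old players' Lagrange coefficients.
    lam = lagrange_at_zero(list(old_shares))
    resharings = {i: share(s, t, n_new) for i, s in old_shares.items()}
    return {j: sum(lam[i] * resharings[i][j] for i in old_shares) % P
            for j in range(1, n_new + 1)}

secret = 42
old = share(secret, t=1, n=3)                    # threshold t = 1, 3 players
new = refresh({i: old[i] for i in (1, 2)}, t=1, n_new=3)
assert reconstruct({j: new[j] for j in (1, 3)}) == secret
```

    Any t + 1 of the new shares still open the secret, yet they are independent of the old shares, which is exactly why a refresh defeats an adversary who corrupts shares gradually over time.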

    A Generative Framework for Low-Cost Result Validation of Outsourced Machine Learning Tasks

    The growing popularity of Machine Learning (ML) has led to its deployment in various sensitive domains, which has resulted in significant research focused on ML security and privacy. However, in some applications, such as autonomous driving, integrity verification of the outsourced ML workload is more critical, a facet that has not received much attention. Existing solutions, such as multi-party computation and proof-based systems, impose significant computation overhead, which makes them unfit for real-time applications. We propose Fides, a novel framework for real-time validation of outsourced ML workloads. Fides features a novel and efficient distillation technique, Greedy Distillation Transfer Learning, that dynamically distills and fine-tunes a space- and compute-efficient verification model for verifying the corresponding service model while running inside a trusted execution environment. Fides features a client-side attack detection model that uses statistical analysis and divergence measurements to identify, with high likelihood, whether the service model is under attack. Fides also offers a re-classification functionality that predicts the original class whenever an attack is identified. We devised a generative adversarial network framework for training the attack detection and re-classification models. The evaluation shows that Fides achieves an accuracy of up to 98% for attack detection and 94% for re-classification.
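
    As a rough sketch of the divergence-based detection step described above (our reading, not the paper's code): the client compares the service model's output distribution with the verification model's, and a large divergence flags a likely attack, with the verifier's own prediction serving as the re-classification fallback. The threshold and toy distributions below are placeholder assumptions.

```python
# Divergence-based result validation, in miniature: a small verification
# model re-scores the input, and a large KL divergence between its output
# and the service model's output flags a possible attack.
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-9) -> float:
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def validate(service_probs: np.ndarray,
             verifier_probs: np.ndarray,
             threshold: float = 0.5):
    # Returns (is_attack, predicted_class). On a suspected attack, the
    # verifier's prediction stands in as the "re-classification".
    d = kl_divergence(service_probs, verifier_probs)
    if d > threshold:
        return True, int(np.argmax(verifier_probs))
    return False, int(np.argmax(service_probs))

honest   = validate(np.array([0.9, 0.1]), np.array([0.85, 0.15]))
tampered = validate(np.array([0.1, 0.9]), np.array([0.9, 0.1]))
assert honest[0] is False and tampered[0] is True
```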

    Current issues of the management of socio-economic systems in terms of globalization challenges

    The authors of the scientific monograph have come to the conclusion that the management of socio-economic systems in terms of globalization challenges requires the use of mechanisms to ensure security, optimise the use of resource potential, increase competitiveness, and provide state support to economic entities. The basic research focuses on the assessment of economic entities in terms of global challenges, analysis of the financial system, migration flows, logistics and product exports, and territorial development. The research results have been implemented in different decision-making models in the context of global challenges, strategic planning, financial and food security, education management, information technology, and innovation. The results of the study can be used in developing directions, programmes, and strategies for the sustainable development of economic entities and regions, increasing the competitiveness of products and services, and decision-making at the level of the ministries and agencies that regulate the management of socio-economic systems. The results can also be used by students and young scientists in the educational process and in conducting scientific research on the management of socio-economic systems in terms of global challenges.

    Futures of Data Ownership: Defining Data Policies in Canadian Context

    The importance of data is increasing along with its proliferation in today's world. Data is becoming the primary source of innovation, knowledge, insight, and competitive and financial advantage in the race for information procurement. This interest in acquiring and exploiting data, together with current concerns regarding the privacy and security of information, raises the question of who should own data and how policies can preserve data ownership. There is a growing awareness that companies benefit disproportionately from collecting and selling personal information, driving the desire for greater individual control of personal data. As technology progresses exponentially, there is a dire need to regulate tech organizations. With the increasing use of personal data by tech companies, data privacy and ownership concerns have become more significant in today's society. Although governments worldwide have introduced privacy regulations to protect citizens' data, there is still a need for policies and legislation that safeguard citizens' rights, allow consumers to control their data, and impose strict measures in case of data breaches or violations of data rights. The research project "Futures of Data Ownership - Informing Data Policies in Canadian Context" aims to explore emerging technological shifts and promote ethical use and data protection by developing data policies that consider the Canadian context. The research employs primary and secondary research methods, including horizon scanning, semi-structured interviews, and a literature review, to inform policy and strategy development. In conclusion, the research project informs potential policies and legislation that regulate tech organizations and protect data ownership, ensuring a secure and trustworthy digital future for all.

    Current Challenges in the Application of Algorithms in Multi-institutional Clinical Settings

    The coronavirus disease pandemic has highlighted the importance of artificial intelligence in multi-institutional clinical settings. Particularly in situations where the healthcare system is overloaded and large amounts of data are generated, artificial intelligence has great potential to provide automated solutions and to unlock the untapped potential of acquired data, including in the areas of care, logistics, and diagnosis. For example, automated decision support applications could tremendously help physicians in their daily clinical routine. Especially in radiology and oncology, the exponential growth of imaging data, triggered by a rising number of patients, leads to a permanent overload of the healthcare system, making the use of artificial intelligence inevitable. However, the efficient and advantageous application of artificial intelligence in multi-institutional clinical settings faces several challenges, such as accountability and regulation hurdles, implementation challenges, and fairness considerations. This work focuses on the implementation challenges, which include the following questions: how to ensure well-curated and standardized data, how algorithms from other domains perform on multi-institutional medical datasets, and how to train more robust and generalizable models. Questions of how to interpret results, and whether correlations exist between the performance of the models and the characteristics of the underlying data, are also part of the work. Therefore, besides presenting a technical solution for manual data annotation and tagging of medical images, a real-world federated learning implementation for image segmentation is introduced. Experiments on a multi-institutional prostate magnetic resonance imaging dataset show that models trained by federated learning can achieve performance similar to training on pooled data. Furthermore, natural language processing algorithms for the tasks of semantic textual similarity, text classification, and text summarization are applied to multi-institutional, structured and free-text oncology reports. The results show that performance gains are achieved by customizing state-of-the-art algorithms to the peculiarities of the medical datasets, such as the occurrence of medications, numbers, or dates. In addition, performance influences are observed depending on characteristics of the data, such as lexical complexity. The generated results, human baselines, and retrospective human evaluations demonstrate that artificial intelligence algorithms have great potential for use in clinical settings. However, due to the difficulty of processing domain-specific data, a performance gap still exists between the algorithms and the medical experts. In the future, it is therefore essential to improve the interoperability and standardization of data, as well as to continue working on algorithms that perform well on medical, possibly domain-shifted, data from multiple clinical centers.
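
    A bare-bones sketch of the federated learning pattern used above: each institution trains locally and only model weights are exchanged and averaged (FedAvg-style). The linear model and synthetic data are illustrative assumptions; the thesis applies the idea to MRI segmentation models.

```python
# Minimal federated averaging: sites never share raw data, only weights.
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=20):
    # Plain gradient descent on squared error, run entirely on-site.
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fedavg(sites, dim, rounds=10):
    w = np.zeros(dim)
    for _ in range(rounds):
        local = [local_step(w.copy(), X, y) for X, y in sites]
        sizes = np.array([len(y) for _, y in sites], dtype=float)
        w = np.average(local, axis=0, weights=sizes)  # only weights leave a site
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):                       # three "institutions"
    X = rng.normal(size=(50, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.01, size=50)))
print(fedavg(sites, dim=2))              # approaches [2, -1]
```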