83 research outputs found

    Size-efficient interval time stamps

    Get PDF
    http://www.ester.ee/record=b4338625~S1*es

    Stamp & Extend -- Instant but Undeniable Timestamping based on Lazy Trees

    Get PDF
    We present a Stamp&Extend time-stamping scheme based on linking via modified creation of Schnorr signatures. The scheme is based on lazy construction of a tree of signatures. Stamp&Extend returns a timestamp immediately after the request, unlike schemes based on the concept of timestamping rounds. Despite the fact that all timestamps are linearly linked, verification of a timestamp requires a logarithmic number of steps with respect to the chain length. An extra feature of the scheme is that any attempt to forge a timestamp by the Time Stamping Authority (TSA) results in revealing its secret key, providing undeniable cryptographic evidence of misbehavior of the TSA. Breaking Stamp&Extend requires not only breaking Schnorr signatures, but to some extent also breaking Pedersen commitments.
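
    As a rough illustration of how a linearly linked log can still admit logarithmic verification, the following Python sketch links each new timestamp to predecessors at power-of-two distances (a skip-list-style layout). This is only a toy hash-linked chain, not the paper's lazy tree of Schnorr signatures, and all names in it are invented.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

class Chain:
    """Timestamps are appended linearly; each entry also stores hash links to
    predecessors at power-of-two distances, so verifying that entry j precedes
    entry i follows O(log n) links instead of walking the whole chain."""

    def __init__(self):
        self.entries = []          # entries[i] = (digest_of_request, links)

    def stamp(self, request: bytes) -> int:
        i = len(self.entries)
        links = {}                 # distance -> hash of the linked entry
        d = 1
        while d <= i:
            links[d] = self.entry_hash(i - d)
            d *= 2
        self.entries.append((h(request), links))
        return i                   # the position acts as the timestamp

    def entry_hash(self, i: int) -> bytes:
        digest, links = self.entries[i]
        return h(digest, *(links[d] for d in sorted(links)))

    def verify_precedes(self, j: int, i: int) -> bool:
        """Follow power-of-two links from i back to j; O(log(i - j)) steps."""
        while i > j:
            _, links = self.entries[i]
            d = max(d for d in links if i - d >= j)
            # check that the stored link really matches the linked entry
            if links[d] != self.entry_hash(i - d):
                return False
            i -= d
        return i == j

chain = Chain()
ids = [chain.stamp(f"document {k}".encode()) for k in range(100)]
assert chain.verify_precedes(ids[3], ids[97])
```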

    Practical Forward Secure Signatures using Minimal Security Assumptions

    Get PDF
    Digital signatures are one of the most important cryptographic primitives in practice. They are an enabling technology for eCommerce and eGovernment applications and they are used to distribute software updates over the Internet in a secure way. In this work we introduce two new digital signature schemes: XMSS and its extension XMSS^MT. We present security proofs for both schemes in the standard model, analyze their performance, and discuss parameter selection. Both our schemes have certain properties that make them favorable compared to today's signature schemes. Our schemes are forward secure, meaning that even in case of a key compromise, previously generated signatures can be trusted. This is an important property whenever a signature has to be verifiable in the mid- or long term. Moreover, our signature schemes are generic constructions that can be instantiated using any hash function. Thereby, if a used hash function becomes insecure for some reason, we can simply replace it by a secure one to obtain a new secure instantiation. The properties we require the hash function to provide are minimal. This implies that as long as there exists any complexity-based cryptography, there exists a secure instantiation for our schemes. In addition, our schemes are secure against quantum computer aided attacks, as long as the used hash functions are. We analyze the performance of our schemes from a theoretical and a practical point of view. On the one hand, we show that given an efficient hash function, we can obtain an efficient instantiation for our schemes. On the other hand, we provide experimental data showing that the performance of our schemes is comparable to that of today's signature schemes. Furthermore, we show how to select optimal parameters for a given use case that provably reach a given level of security. On the way to constructing XMSS and XMSS^MT, we introduce two new one-time signature schemes (OTS): WOTS+ and WOTS^PRF. One-time signature schemes are signature schemes where a key pair may only be used once. WOTS+ is currently the most efficient hash-based OTS and WOTS^PRF the most efficient hash-based OTS with minimal security assumptions. One-time signature schemes have many more applications besides constructing full-fledged signature schemes, including authentication in sensor networks and the construction of chosen-ciphertext secure encryption schemes. Hence, WOTS+ and WOTS^PRF are contributions in their own right. Altogether, this work shows the practicality and usability of forward secure signatures on the one hand and hash-based signatures on the other.
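
    To make the hash-chain idea behind these one-time signature schemes concrete, here is a minimal textbook Winternitz OTS in Python. It is not the thesis' WOTS+ or WOTS^PRF construction (those add bitmasks and PRF-based key generation, respectively), and the parameters are chosen for readability rather than security.

```python
import hashlib, os

W = 16                      # Winternitz parameter: 4 bits per chain
N = 32                      # hash output length in bytes
L1 = 2 * N                  # 64 base-16 digits cover a 256-bit digest
L2 = 3                      # checksum digits (max checksum 64*15 = 960 < 16**3)
L = L1 + L2

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

def digits(msg: bytes) -> list[int]:
    """Message digest as base-16 digits plus the Winternitz checksum."""
    d = []
    for byte in H(msg):
        d += [byte >> 4, byte & 0x0F]
    csum = sum(W - 1 - x for x in d)
    for _ in range(L2):                    # checksum in base 16, little-endian
        d.append(csum % W)
        csum //= W
    return d

def keygen():
    sk = [os.urandom(N) for _ in range(L)]
    pk = [chain(s, W - 1) for s in sk]     # public key = end of each hash chain
    return sk, pk

def sign(sk, msg: bytes):
    return [chain(s, x) for s, x in zip(sk, digits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(chain(s, W - 1 - x) == p
               for s, x, p in zip(sig, digits(msg), pk))

sk, pk = keygen()
sig = sign(sk, b"firmware v1.2")           # a key pair must be used only once
assert verify(pk, b"firmware v1.2", sig)
assert not verify(pk, b"firmware v1.3", sig)
```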

    A Study of Scalability and Cost-effectiveness of Large-scale Scientific Applications over Heterogeneous Computing Environment

    Get PDF
    Recent advances in large-scale experimental facilities ushered in an era of data-driven science. These large-scale data increase the opportunity to answer many fundamental questions in basic science. However, these data pose new challenges to the scientific community in terms of their optimal processing and transfer. Consequently, scientists are in dire need of robust high performance computing (HPC) solutions that can scale with terabytes of data. In this thesis, I address the challenges in three major aspects of scientific big data processing: 1) developing scalable software and algorithms for data- and compute-intensive scientific applications; 2) proposing new cluster architectures that these software tools need for good performance; and 3) transferring big scientific datasets among clusters situated at geographically disparate locations. In the first part, I develop scalable algorithms to process huge amounts of scientific big data using the power of recent analytic tools such as Hadoop, Giraph, and NoSQL. At a broader level, these algorithms take advantage of locality-based computing that can scale with increasing amounts of data. The thesis mainly addresses the challenges involved in large-scale genome analysis applications such as genomic error correction and genome assembly, which have recently made their way to the forefront of big data challenges. In the second part of the thesis, I perform a systematic benchmark study using the above-mentioned algorithms on different distributed cyberinfrastructures to pinpoint the limitations of a traditional HPC cluster in processing big data. I then propose a solution to those limitations by balancing the I/O bandwidth of the solid state drive (SSD) with the computational speed of high-performance CPUs. A theoretical model has also been proposed to help HPC system designers who are striving for system balance. In the third part of the thesis, I develop a high-throughput architecture for transferring these big scientific datasets among geographically disparate clusters. The architecture leverages Ethereum's blockchain technology and Swarm's peer-to-peer (P2P) storage technology to transfer the data in a secure, tamper-proof fashion. Instead of optimizing the computation in a single cluster, in this part my major motivation is to foster translational research and data interoperability in collaboration with multiple institutions.
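
    The tamper-evidence argument in the third part rests on content addressing: if only a dataset's Merkle root is recorded in a trusted place (e.g., on chain), any modification of the chunks delivered over untrusted P2P storage becomes detectable. The sketch below illustrates that principle only; the dictionary standing in for the on-chain record and the dataset names are invented for illustration.

```python
import hashlib

def sha(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Root hash over the dataset's chunks; changing any chunk changes the root."""
    level = [sha(c) for c in chunks] or [sha(b"")]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the last node on odd levels
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The sender publishes only the root (e.g., in a smart contract); the dataset
# itself travels over untrusted P2P storage.
published_root = {}

def publish(name: str, chunks: list[bytes]) -> None:
    published_root[name] = merkle_root(chunks)

def verify_received(name: str, chunks: list[bytes]) -> bool:
    return merkle_root(chunks) == published_root[name]

dataset = [f"reads batch {i}".encode() for i in range(8)]
publish("genome-run-42", dataset)
assert verify_received("genome-run-42", dataset)

tampered = dataset[:3] + [b"malicious batch"] + dataset[4:]
assert not verify_received("genome-run-42", tampered)
```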

    Query processing in temporal object-oriented databases

    Get PDF
    This PhD thesis is concerned with historical data management in the context of object-oriented databases. An extensible approach has been explored to processing temporal object queries within a uniform query framework. By the uniform framework, we mean that temporal queries can be processed within the existing object-oriented framework that is extended from the relational framework, by extending the existing query processing techniques and strategies developed for OODBs and RDBs. The unified model of OODBs and RDBs in UmSQL/X has been adopted as a basis for this purpose. A temporal object data model is thereby defined by incorporating a time dimension into this unified model of OODBs and RDBs to form temporal relational-like cubes, but with the addition of aggregation and inheritance hierarchies. A query algebra, which accesses objects through these associations of aggregation, inheritance and time-reference, is then defined as a general query model/language. Due to the extensive features of our data model and the reducibility of the algebra, a layered structure of the query processor is presented that provides a uniform framework for processing temporal object queries. Within the uniform framework, query transformation is carried out based on a set of identified transformation rules that includes the known relational and object rules plus those pertaining to the time dimension. To evaluate a temporal query involving a path with time-reference, a strategy of decomposition is proposed. That is, evaluation of an enhanced path, which is defined to extend a path with time-reference, is decomposed by initially dividing the path into two sub-paths: one containing the time-stamped class, which can be optimized by making use of the ordering information of temporal data, and another an ordinary sub-path (without time-stamped classes), which can be further decomposed and evaluated using different algorithms. The intermediate results of traversing the two sub-paths are then joined together to create the query output. Algorithms for processing the decomposed query components, i.e., time-related operation algorithms, four join algorithms (nested-loop forward join, sort-merge forward join, nested-loop reverse join and sort-merge reverse join) and their modifications, have been presented with cost analysis and implemented with stream processing techniques using C++. Simulation results are also provided. Both the cost analysis and the simulation show the effects of time on the query processing algorithms: the join time cost increases linearly with the number of time-epochs (the time dimension in the case of a regular TS). It is also shown that using heuristics that make use of time information can lead to a significant time cost saving. Query processing with incomplete temporal data has also been discussed.
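
    The decomposition strategy can be pictured with a toy example: the ordinary sub-path yields object references, and the time-stamped sub-path is probed per reference, using the sort order on valid time to avoid scanning whole histories. The Python sketch below uses invented data and is only a schematic stand-in for the thesis' join algorithms.

```python
from bisect import bisect_right

# Hypothetical data: the ordinary sub-path yields (oid, ref) pairs; the
# time-stamped class holds, per referenced object, a history of
# (valid_from, valid_to, value) triples kept sorted by valid_from.
ordinary = [("e1", "salary:e1"), ("e2", "salary:e2")]
history = {
    "salary:e1": [(0, 10, 3000), (10, 20, 3200), (20, None, 3500)],
    "salary:e2": [(0, 15, 2800), (15, None, 2900)],
}

def versions_overlapping(hist, t1, t2):
    """Exploit the sort order on valid_from: binary-search to the first version
    that might overlap [t1, t2) instead of scanning the whole history."""
    starts = [v[0] for v in hist]
    lo = max(bisect_right(starts, t1) - 1, 0)
    for valid_from, valid_to, value in hist[lo:]:
        if valid_from >= t2:
            break                              # sortedness allows early exit
        if valid_to is None or valid_to > t1:
            yield valid_from, valid_to, value

def temporal_join(t1, t2):
    """Join the ordinary sub-path result with the time-stamped sub-path,
    restricted to the query interval [t1, t2)."""
    for oid, ref in ordinary:
        for version in versions_overlapping(history[ref], t1, t2):
            yield oid, version

print(list(temporal_join(12, 22)))
# [('e1', (10, 20, 3200)), ('e1', (20, None, 3500)),
#  ('e2', (0, 15, 2800)), ('e2', (15, None, 2900))]
```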

    A composable approach to design of newer techniques for large-scale denial-of-service attack attribution

    Get PDF
    Since its early days, the Internet has witnessed not only phenomenal growth, but also a large number of security attacks, and in recent years denial-of-service (DoS) attacks have emerged as one of the top threats. Stateless, destination-oriented Internet routing, combined with the ability to harness a large number of compromised machines and the relative ease and low cost of launching such attacks, has made this a hard problem to address. Additionally, the myriad requirements of scalability, incremental deployment, adequate user privacy protections, and appropriate economic incentives have further complicated the design of DDoS defense mechanisms. While the many research proposals to date have focused differently on prevention, mitigation, or traceback of DDoS attacks, the lack of a comprehensive approach satisfying the different design criteria for successful attack attribution is indeed disturbing. Our first contribution here has been the design of a composable data model that has helped us represent the various dimensions of the attack attribution problem, particularly the performance attributes of accuracy, effectiveness, speed and overhead, as orthogonal and mutually independent design considerations. We have then designed custom optimizations along each of these dimensions, and have further integrated them into a single composite model, to provide strong performance guarantees. Thus, the proposed model has given us a single framework that can not only address the individual shortcomings of the various known attack attribution techniques, but also provide a more holistic countermeasure against DDoS attacks. Our second contribution here has been a concrete implementation based on the proposed composable data model, having adopted a graph-theoretic approach to identify and subsequently stitch together individual edge fragments in the Internet graph to reveal the true routing path of any network data packet. The proposed approach has been analyzed through theoretical and experimental evaluation across multiple metrics, including scalability, incremental deployment, speed and efficiency of the distributed algorithm, and finally the total overhead associated with its deployment. We have thereby shown that it is realistically feasible to provide strong performance and scalability guarantees for Internet-wide attack attribution. Our third contribution here has further advanced the state of the art by directly identifying individual path fragments in the Internet graph, having adopted a distributed divide-and-conquer approach employing simple recurrence relations as individual building blocks. A detailed analysis of the proposed approach on real-life Internet topologies with respect to network storage and traffic overhead has provided a more realistic characterization. Thus, the proposed approach not only lends itself well to simplified operations at scale but can also provide robust network-wide performance and security guarantees for Internet-wide attack attribution. Our final contribution here has introduced the notion of anonymity into the overall attack attribution process to significantly broaden its scope. The highly invasive nature of widespread data gathering for network traceback continues to violate one of the key principles of Internet use today: the ability to stay anonymous and operate freely without retribution. In this regard, we have successfully reconciled these mutually divergent requirements to make attack attribution not only economically feasible and politically viable but also socially acceptable. This work opens up several directions for future research: analysis of existing attack attribution techniques to identify further scope for improvements, incorporation of newer attributes into the design framework of the composable data model abstraction, and finally the design of newer attack attribution techniques that comprehensively integrate the various attack prevention, mitigation and traceback techniques in an efficient manner.
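
    The edge-fragment idea in the second contribution can be pictured with a toy reconstruction: given directed hop fragments collected at the victim, the path is recovered by repeatedly joining the fragment whose downstream end matches the node reached so far. The sketch below assumes a single attack path and invented router names; it is not the dissertation's distributed algorithm.

```python
# Hypothetical edge fragments collected at the victim: each fragment records
# one directed hop (upstream_router, downstream_router) reported during the
# attack window. Stitching them together reveals the forwarding path.
fragments = {
    ("R4", "victim"),
    ("R3", "R4"),
    ("R2", "R3"),
    ("R1", "R2"),
    ("attacker_edge", "R1"),
}

def stitch_path(fragments, sink="victim"):
    """Walk upstream from the sink, joining the edge fragment whose downstream
    endpoint matches the node reached so far."""
    upstream_of = {dst: src for src, dst in fragments}   # assumes a single path
    path = [sink]
    while path[-1] in upstream_of:
        path.append(upstream_of[path[-1]])
    return list(reversed(path))

print(stitch_path(fragments))
# ['attacker_edge', 'R1', 'R2', 'R3', 'R4', 'victim']
```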

    Replica Placement and Location using Distributed Hash Tables

    Full text link
    Interest in distributed storage is fueled by demand for reliability and resilience combined with decreasing hardware costs. Peer-to-peer storage networks based on distributed hash tables are attractive for their efficient use of resources and resulting performance. The placement and subsequent efficient location of replicas in such systems remain open problems, especially (1) the requirement to update replicated content, (2) working in the absence of global information, and (3) determination of the locations in a dynamic system without introducing single points of failure. We present and evaluate a novel and versatile technique, replica enumeration, which allows for controlled replication and replica access. The possibility of enumerating and addressing individual replicas allows dynamic updates as well as superior performance without burdening the network with state information, yet taking advantage of locality information when available. We simulate, analyze, and prove properties of the system, and discuss some applications.
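
    A minimal sketch of the general idea behind replica enumeration, assuming a toy consistent-hash ring: replica i of a key gets a deterministically derived name, so any client can compute where replicas 1..r live without a directory or extra state. Node and key names are invented, and this is not the paper's exact placement rule.

```python
import hashlib
from bisect import bisect_left

def hid(x: str) -> int:
    """Map names and keys onto a 32-bit identifier ring."""
    return int.from_bytes(hashlib.sha256(x.encode()).digest()[:4], "big")

class Ring:
    """Toy DHT: each key (or replica name) is stored on the first node whose
    identifier follows it clockwise on the ring."""
    def __init__(self, nodes):
        self.ids = sorted(hid(n) for n in nodes)
        self.by_id = {hid(n): n for n in nodes}

    def responsible(self, key: str) -> str:
        i = bisect_left(self.ids, hid(key)) % len(self.ids)
        return self.by_id[self.ids[i]]

def replica_name(key: str, i: int) -> str:
    # Replica enumeration: replica i of a key has a deterministic derived name,
    # so clients can address replicas 1, 2, ..., r without any global directory.
    return f"{key}#replica{i}"

ring = Ring([f"node{j}" for j in range(20)])
key, r = "video:lecture-07", 4
placement = {i: ring.responsible(replica_name(key, i)) for i in range(1, r + 1)}
print(placement)        # e.g. {1: 'node13', 2: 'node4', 3: 'node18', 4: 'node9'}
```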

    Concepts for the Representation, Storage, and Retrieval of Spatio-Temporal Objects in 3D/4D Geo-Information Systems

    Get PDF
    The quickly increasing number of spatio-temporal applications in fields like environmental management or geology is a new challenge to the development of database systems. This thesis addresses three areas of the problem of integrating spatio-temporal objects into databases. First, a new representational model for continuously changing, spatial 3D objects is introduced and transferred into a small system of classes within an object-oriented database framework. The model extends simplicial cell complexes to the spatio-temporal setting. The problem of closure under certain operations is investigated. Second, internal data structures are introduced that represent instances of the (user-level) spatio-temporal classes. A new technique provides a compromise between compact storage and efficient retrieval of spatio-temporal objects. These structures correspond to temporal graphs and support updates as well as the maintenance of connected components over time. Third, it is shown how to realise further operations on the new type of objects. Among these operations are range queries, intersection tests, and the Euclidean distance function.
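
    As a small illustration of operations on continuously changing objects, the sketch below stores a moving 3D vertex as time-stamped snapshots with linear interpolation in between, and answers a Euclidean range query at a given instant. The representation and names are invented for illustration and are far simpler than the thesis' simplicial-complex structures.

```python
from math import dist

# Hypothetical representation: a moving 3D vertex stored as time-stamped
# snapshots; positions between snapshots are linearly interpolated, in the
# spirit of continuously changing spatio-temporal objects.
snapshots = {
    "v1": [(0.0, (0.0, 0.0, 0.0)), (10.0, (5.0, 0.0, 0.0))],
    "v2": [(0.0, (8.0, 1.0, 0.0)), (10.0, (8.0, 9.0, 0.0))],
}

def position(track, t):
    """Linear interpolation between the two snapshots enclosing time t."""
    for (t0, p0), (t1, p1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(x0 + a * (x1 - x0) for x0, x1 in zip(p0, p1))
    raise ValueError("time outside the object's lifespan")

def range_query(center, radius, t):
    """All vertices within Euclidean distance `radius` of `center` at time t."""
    return [vid for vid, track in snapshots.items()
            if dist(position(track, t), center) <= radius]

print(position(snapshots["v1"], 4.0))          # (2.0, 0.0, 0.0)
print(range_query((3.0, 0.0, 0.0), 2.0, 4.0))  # ['v1']
```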