
    Managing contamination delay to improve Timing Speculation architectures

    Timing Speculation (TS) is a widely known method for realizing better-than-worst-case systems. Aggressive clocking, made possible by TS, enables systems to operate beyond their specified safe frequency limits and effectively exploit data-dependent circuit delay. However, the range of aggressive clocking available for performance enhancement under TS is restricted by short paths. In this paper, we show that increasing the lengths of the circuit's short paths increases the effectiveness of TS, leading to performance improvement. We also propose an algorithm that efficiently adds delay buffers to selected short paths while keeping the area penalty down. We present results of our algorithm on the ISCAS-85 benchmark suite and show that the circuit contamination delay can be increased by up to 30% without affecting the propagation delay. We also explore increasing short-path delays further by relaxing the constraint on propagation delay and analyze the performance impact.
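    The buffer-insertion idea lends itself to a short sketch. The following is a hypothetical greedy heuristic in Python (function names and delay figures are illustrative, not the paper's algorithm): pad each path whose delay falls below a target contamination delay, and refuse any insertion that would push the path past the propagation (critical path) limit.

        # Hypothetical greedy sketch of short-path padding for timing speculation.
        # Paths are modeled as (name, delay) pairs; a real tool would work on the
        # gate-level netlist and share buffers across overlapping paths.

        def pad_short_paths(paths, target_contamination, propagation_limit, buffer_delay):
            """Return (path, buffers added, new delay) without exceeding the critical delay."""
            plan = []
            for name, delay in paths:
                added = 0
                while delay < target_contamination and delay + buffer_delay <= propagation_limit:
                    delay += buffer_delay
                    added += 1
                plan.append((name, added, delay))
            return plan

        if __name__ == "__main__":
            paths = [("p0", 1.2), ("p1", 3.5), ("p2", 0.8)]
            # Raise every short path toward a 2.0 ns contamination target, never
            # exceeding the 4.0 ns propagation limit; each buffer adds 0.3 ns.
            for name, n, d in pad_short_paths(paths, 2.0, 4.0, 0.3):
                print(f"{name}: +{n} buffers -> {d:.1f} ns")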

    An Efficient Way to Allocate and Read Directory Entries in the Ext4 File System

    The aim of this thesis is to improve the performance of sequential directory traversal in the ext4 file system. The HTree data structure currently used to store directories in ext4 works very well for random accesses, but it is not optimal for traversing a directory sequentially. This thesis investigates the issue: it explores the implementation of ext4 and the associated Linux kernel subsystems. To assess the performance of the current directory index, a set of test cases and benchmarks was implemented. Based on the analysis, an optimization was designed and implemented in the ext4 driver within the Linux kernel. The implementation was tested, evaluated, and compared to other native Linux file systems in the last chapter of this document.
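    The access-pattern problem can be illustrated from user space (this is not the thesis' in-kernel change, merely a sketch of the idea): when a large HTree directory is scanned in readdir order, inode lookups arrive in effectively random order, whereas stat-ing the entries sorted by inode number turns them into a mostly sequential pass over the inode tables.

        # User-space illustration only; the thesis implements its optimization
        # inside the ext4 driver in the Linux kernel.
        import os

        def scan_sorted_by_inode(path):
            entries = list(os.scandir(path))          # readdir order (hash order under HTree)
            entries.sort(key=lambda e: e.inode())     # re-sort by inode number
            sizes = {}
            for e in entries:
                sizes[e.name] = e.stat(follow_symlinks=False).st_size
            return sizes

        if __name__ == "__main__":
            print(len(scan_sorted_by_inode("/tmp")))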

    A shared-disk parallel cluster file system

    Dissertation presented for the degree of Doctor in Informatics at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
    Today, clusters are the de facto cost-effective platform both for high performance computing (HPC) and for IT environments. HPC and IT are quite different environments, and their differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, which are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems, either general purpose or shared-disk cluster file systems (CFSs). These specialised file systems perform very well in their target environments provided that applications do not require features they lack, e.g., file locking on parallel file systems, or high-performance writes over cluster-wide shared files on CFSs. In brief, none of the above approaches provides high levels of reliability and performance to both worlds. Our pCFS proposal contributes to changing this situation: the rationale is to take advantage of the best of both – the reliability of cluster file systems and the high performance of parallel file systems. We do not claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough for broad usage – e.g., traditional as well as HPC applications, support of clustered DBMS engines that may run over regular files, and video streaming. pCFS' main ideas include:
    · Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, has never been used either in SAN-based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.
    · Fine-grain locking, whereby processes running across distinct nodes may define non-overlapping byte-range regions in a file (instead of the whole file) and access them in parallel, reading and writing over those regions at the infrastructure's full speed (provided that no major metadata changes are required).
    A prototype was built on top of GFS (a Red Hat shared-disk CFS): GFS' kernel code was slightly modified, and two kernel modules and a user-level daemon were added. In the prototype, fine-grain locking is fully implemented and a cluster-wide coherent cache is maintained through movement of data (page fragments) over the LAN. Our benchmarks for non-overlapping writers over a single file shared among processes running on different nodes show that pCFS' bandwidth is 2 times greater than NFS' while being comparable to that of the Parallel Virtual File System (PVFS), both requiring about 10 times more CPU. pCFS' bandwidth also surpasses GFS' (600 times for small record sizes, e.g., 4 KB, decreasing to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.
    Lusitania, Companhia de Seguros S.A.; Programa IBM Shared University Research (SUR).
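    The fine-grain locking idea can be pictured with plain POSIX byte-range locks (a minimal user-space sketch; pCFS implements its own cluster-wide locking rather than relying on fcntl): each writer locks only its own non-overlapping region of the shared file, so disjoint regions can be written in parallel.

        import fcntl, os

        def write_region(path, offset, data):
            """Lock and write only [offset, offset + len(data)) of the shared file."""
            fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
            try:
                fcntl.lockf(fd, fcntl.LOCK_EX, len(data), offset, os.SEEK_SET)
                os.pwrite(fd, data, offset)
                fcntl.lockf(fd, fcntl.LOCK_UN, len(data), offset, os.SEEK_SET)
            finally:
                os.close(fd)

        if __name__ == "__main__":
            write_region("shared.dat", 0, b"A" * 4096)        # e.g. writer on node 1
            write_region("shared.dat", 4096, b"B" * 4096)     # e.g. writer on node 2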

    Caching, crashing & concurrency - verification under adverse conditions

    The formal development of large-scale software systems is a complex and time-consuming effort. Generally, its main goal is to prove the functional correctness of the resulting system. This goal becomes significantly harder to reach when the verification must be performed under adverse conditions. When aiming for a realistic system, the implementation must be compatible with the “real world”: it must work with existing system interfaces, cope with uncontrollable events such as power cuts, and offer competitive performance by using mechanisms like caching or concurrency. The Flashix project is an example of such a development, in which a fully verified file system for flash memory has been developed. The project is a long-term team effort and resulted in a sequential, functionally correct and crash-safe implementation after its first project phase. This thesis continues the work by performing modular extensions to the file system with performance-oriented mechanisms that mainly involve caching and concurrency, always considering crash-safety. As a first contribution, this thesis presents a modular verification methodology for destructive heap algorithms. The approach simplifies the verification by separating reasoning about specifics of heap implementations, like pointer aliasing, from reasoning about conceptual correctness arguments. The second contribution of this thesis is a novel correctness criterion for crash-safe, cached, and concurrent file systems. A natural criterion for crash-safety is defined in terms of system histories, matching the behavior of fine-grained caches using complex synchronization mechanisms that reorder operations. The third contribution comprises methods for verifying functional correctness and crash-safety of caching mechanisms and concurrency in file systems. A reference implementation for crash-safe caches of high-level data structures is given, and a strategy for proving crash-safety is demonstrated and applied. A compatible concurrent implementation of the top layer of file systems is presented, using a mechanism for the efficient management of fine-grained file locking, and a concurrent version of garbage collection is realized. Both concurrency extensions are proven to be correct by applying atomicity refinement, a methodology for proving linearizability. Finally, this thesis contributes a new iteration of executable code for the Flashix file system. With the efficiency extensions introduced in this thesis, Flashix covers all performance-oriented concepts of realistic file system implementations and achieves competitiveness with state-of-the-art flash file systems.
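    The kind of ordering argument a crash-safety criterion has to capture can be sketched with the classic write-then-commit pattern (a generic Python illustration, not Flashix's verified code): data is first made durable out of place, and only then atomically committed, so a crash leaves either the old or the new state.

        import os

        def crash_safe_replace(path, data):
            tmp = path + ".tmp"
            fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
            try:
                os.write(fd, data)
                os.fsync(fd)          # data must be durable before the commit point
            finally:
                os.close(fd)
            os.rename(tmp, path)      # commit point: old or new content, never a mix
            dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
            try:
                os.fsync(dirfd)       # persist the directory entry as well
            finally:
                os.close(dirfd)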

    File system metadata virtualization

    The advance of computing systems has brought new ways to use and access stored data that push the architecture of traditional file systems to its limits, making them inadequate for the new needs. Current challenges affect both the performance of high-end computing systems and their usability from the applications' perspective. On one side, high-performance computing equipment is rapidly developing into large-scale aggregations of computing elements in the form of clusters, grids or clouds. On the other side, there is a widening range of scientific and commercial applications that seek to exploit these new computing facilities. The requirements of such applications are also heterogeneous, leading to dissimilar patterns of use of the underlying file systems. Data centres have tried to compensate for this situation by providing several file systems to fulfil distinct requirements. Typically, the different file systems are mounted on different branches of a directory tree, and the preferred use of each branch is publicised to users. A similar approach is used in personal computing devices. Typically, in a personal computer, there is a visible and clear distinction between the portion of the file system name space dedicated to local storage, the part corresponding to remote file systems and, recently, the areas linked to cloud services, for example, directories that keep data synchronized across devices, that are shared with other users, or that are remotely backed up. In practice, this approach compromises the usability of the file systems and the possibility of exploiting all their potential benefits. We consider that this burden can be alleviated by determining applicable features on a per-file basis, rather than associating them with a location in a static, rigid name space. Moreover, usability would be further increased by providing multiple dynamic name spaces that could be adapted to specific application needs. This thesis contributes to this goal by proposing a mechanism to decouple the user view of the storage from its underlying structure. The mechanism consists in the virtualization of file system metadata (including both the name space and the object attributes) and the interposition of a sensible layer that decides where and how files should be stored in order to benefit from the underlying file system features, without incurring usability or performance penalties due to inadequate usage. This technique makes it possible to present multiple, simultaneous virtual views of the name space and the file system object attributes, which can be adapted to specific application needs without altering the underlying storage configuration. The first contribution of the thesis introduces the design of a metadata virtualization framework that makes the above-mentioned decoupling possible; the second contribution consists in a method to improve file system performance in large-scale systems by using such a metadata virtualization framework; finally, the third contribution consists in a technique to improve the usability of cloud-based storage systems in personal computing devices.
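    The decoupling can be sketched in a few lines (hypothetical backends and placement rules, purely illustrative of the idea, not the thesis' framework): a virtual path is resolved to a backend chosen by per-file criteria rather than by its position in a fixed directory tree.

        import os

        BACKENDS = {                 # hypothetical mount points
            "fast":    "/mnt/parallel_fs",
            "shared":  "/mnt/nfs",
            "archive": "/mnt/object_store",
        }

        def place(virtual_path, size_hint=0):
            """Pick a backend for a file independently of where it sits in the name space."""
            if virtual_path.endswith((".h5", ".nc")) or size_hint > 1 << 30:
                backend = "fast"     # large scientific outputs go to the parallel file system
            elif "/shared/" in virtual_path:
                backend = "shared"
            else:
                backend = "archive"
            return os.path.join(BACKENDS[backend], virtual_path.lstrip("/"))

        if __name__ == "__main__":
            print(place("/projects/sim/run1/output.h5", size_hint=2 << 30))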

    GPUs as Storage System Accelerators

    Massively multicore processors, such as Graphics Processing Units (GPUs), provide, at a comparable price, peak performance one order of magnitude higher than traditional CPUs. This drop in the cost of computation, as with any order-of-magnitude drop in the cost per unit of performance for a class of system components, creates the opportunity to redesign systems and to explore new ways to engineer them to recalibrate the cost-to-performance relation. This project explores the feasibility of harnessing GPUs' computational power to improve the performance, reliability, or security of distributed storage systems. In this context, we present the design of a storage system prototype that uses GPU offloading to accelerate a number of computationally intensive primitives based on hashing, and we introduce techniques to efficiently leverage the processing power of GPUs. We evaluate the performance of this prototype under two configurations: as a content-addressable storage system that facilitates online similarity detection between successive versions of the same file, and as a traditional system that uses hashing to preserve data integrity. Further, we evaluate the impact of offloading to the GPU on the performance of competing applications. Our results show that this technique can bring tangible performance gains without negatively impacting the performance of concurrently running applications.
    Comment: IEEE Transactions on Parallel and Distributed Systems, 201
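    The hashing primitive being offloaded can be shown on the CPU side with a few lines of Python (a sketch of content-addressable chunk storage in general, not the prototype's GPU code): a file is split into fixed-size chunks, each addressed by its hash, so successive versions of a file share storage for the chunks that did not change.

        import hashlib

        CHUNK = 64 * 1024
        store = {}                    # hash -> chunk bytes (stand-in for the backend)

        def put(path):
            """Store a file chunk by chunk; return the list of hashes that rebuilds it."""
            recipe = []
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK)
                    if not chunk:
                        break
                    h = hashlib.sha256(chunk).hexdigest()
                    store.setdefault(h, chunk)     # identical chunks are stored once
                    recipe.append(h)
            return recipe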

    Evaluation and Improvement of Internet Voting Schemes Based on Legally-Founded Security Requirements

    In recent years, several nations and private associations have introduced Internet voting as an additional means to conduct elections. To date, a variety of voting schemes for conducting Internet-based elections have been constructed, both by the scientific community and by industry. Because of its fundamental importance to democratic societies, Internet voting – as any other voting method – is bound to high legal standards, particularly those imposing security requirements on the voting method. However, these legal standards, and the security requirements derived from them, partially oppose each other. As a consequence, Internet voting schemes cannot enforce these legally-founded security requirements to their full extent, but rather build upon specific assumptions. The criticality of these assumptions depends on the target election setting, particularly the adversary expected within that setting. Given the lack of an election-specific evaluation framework for these assumptions, or more generally for Internet voting schemes, the adequacy of Internet voting schemes for specific elections cannot readily be determined. Hence, selecting the Internet voting scheme that most appropriately satisfies legally-founded security requirements within a specific election setting is a challenging task. To support election officials in the selection process, the first goal of this dissertation is the construction of an evaluation framework for Internet voting schemes based on legally-founded security requirements. To that end, building on previous interdisciplinary research, legally-founded security requirements for Internet voting schemes are derived. To provide election officials with improved decision alternatives, the second goal of this dissertation is the improvement of two established Internet voting schemes with regard to legally-founded security requirements, namely the Polyas Internet voting scheme and the Estonian Internet voting scheme. Our research results in five (partially opposing) security requirements for Internet voting schemes. On the basis of these security requirements, we construct a capability-based risk assessment approach for the security evaluation of Internet voting schemes in specific election settings. The evaluation of the Polyas scheme reveals that compromised voting devices can alter votes undetectably. Considering the surrounding circumstances, we eliminate this shortcoming by incorporating out-of-band codes to acknowledge voters' votes. It turns out that in the Estonian scheme, four out of five security requirements rely on the correct behaviour of voting devices. We improve the Estonian scheme in that regard by incorporating out-of-band voting and acknowledgment codes. Thereby, we maintain four out of five security requirements against adversaries capable of compromising voting devices.
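    The acknowledgment-code idea can be sketched abstractly (a conceptual illustration, not the Polyas or Estonian protocol): the voter holds a pre-printed code sheet, and the server derives the code for the recorded choice from a secret bound to that sheet, returning it over a second channel so that a manipulated voting device cannot forge a matching code.

        import hmac, hashlib

        def ack_code(sheet_secret, choice):
            """Short, human-comparable code bound to the recorded choice."""
            digest = hmac.new(sheet_secret, choice.encode(), hashlib.sha256).hexdigest()
            return digest[:6].upper()

        # The election authority prints ack_code(secret, c) for every choice c on the
        # voter's code sheet; after voting, the voter checks that the code received
        # out of band matches the sheet entry for the intended choice.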

    Developing an In-kernel File Sharing Server Solution Based on Server Message Block protocol

    Multi-device and multi-service smart environments make heavy use of the Internet and intranets, constantly transferring and saving large amounts of digital data and leading to exponential data growth. This has driven the development of network storage systems such as Storage Area Networks and Network Attached Storage. Network Attached Storage provides file-system-level access to data on storage elements that are connected to the network. One of the most widely used protocols in network storage systems is the Server Message Block (SMB) protocol, which interconnects users across operating systems such as Windows, Linux and Mac OS. Samba is a popular open-source user-space server that implements the SMB protocol. There have been many discussions about moving traditional user-space applications like web servers into the kernel space in order to improve various aspects of the server, such as CPU utilization, memory utilization, memory footprint and context switching. In this thesis, we have designed and implemented an SMB server in the Linux kernel space. We discuss in detail the features and functionalities of the newly implemented server. We provide insight into why certain design decisions were made in order to improve the efficiency of protocol handling by the in-kernel file sharing server. We compare the performance of the user-space Samba solution with the in-kernel file sharing solution implemented and discussed in this thesis, against different workloads, to identify the competitiveness of the developed solution. We conclude by discussing what we learned during the implementation process, along with some ideas for further improving the feature set and performance of the in-kernel server solution.
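    A comparison of the two servers can be approached with a simple throughput probe (hypothetical mount points, not the thesis' benchmark suite): both shares are assumed to be mounted already, and the same sequential write is timed against each.

        import os, time

        MOUNTS = {
            "samba (user space)": "/mnt/smb_samba",     # hypothetical mount points
            "in-kernel server":   "/mnt/smb_kernel",
        }

        def write_throughput(mount, size=256 * 1024 * 1024, block=1 << 20):
            """Sequentially write `size` bytes to the share and return MB/s."""
            path = os.path.join(mount, "bench.tmp")
            buf = b"\0" * block
            start = time.monotonic()
            with open(path, "wb") as f:
                for _ in range(size // block):
                    f.write(buf)
                f.flush()
                os.fsync(f.fileno())
            os.remove(path)
            return size / (time.monotonic() - start) / 1e6

        if __name__ == "__main__":
            for name, mount in MOUNTS.items():
                print(f"{name}: {write_throughput(mount):.1f} MB/s")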