2 research outputs found

    A fast and slippery slope for file systems

    There is a vast number and variety of file systems currently available, each optimizing for an ever-growing number of storage devices and workloads. Users have an unprecedented, and somewhat overwhelming, number of data management options. At the same time, the fastest storage devices are only getting faster, and it is unclear how well existing file systems will adapt. Using emulation techniques, we evaluate five popular Linux file systems across a range of storage device latencies typical of low-end hard drives, the latest high-performance persistent memory block devices, and everything in between. Our findings are often surprising. Depending on the workload, we find that some file systems scale with faster storage devices much better than others. Further, as storage device latency decreases, we find unexpected performance inversions across file systems. Finally, file system scalability in the higher device latency range is not representative of scalability in the lower, sub-millisecond, latency range. We then focus on Nilfs2 as an especially alarming example of unexpectedly poor scalability and present detailed instructions for identifying bottlenecks in the I/O stack.
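
    The kind of latency emulation described in this abstract can be approximated on a stock Linux system. Below is a minimal, hypothetical sketch, not the paper's actual harness: it creates an emulated block device with a fixed per-I/O completion latency via the null_blk driver, formats it with each file system, and times a fio workload. The module parameters (nr_devices, gb, memory_backed, irqmode, completion_nsec) follow the upstream null_blk documentation, and the chosen workload, file systems, sizes, and paths are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Hypothetical latency-sweep harness (illustrative, not the paper's setup).

Requires root, the null_blk kernel module, and fio.
"""
import json
import subprocess

LATENCIES_US = [5000, 500, 50, 5]        # emulated device latencies (microseconds)
FILESYSTEMS = ["ext4", "xfs", "btrfs"]   # illustrative subset of file systems
MOUNTPOINT = "/mnt/nullb_sweep"          # hypothetical mount point


def run(cmd):
    subprocess.run(cmd, shell=True, check=True)


def bench(fs, latency_us):
    # Memory-backed emulated device; every I/O completes after latency_us.
    run("modprobe null_blk nr_devices=1 gb=4 memory_backed=1 "
        f"irqmode=2 completion_nsec={latency_us * 1000}")
    try:
        run(f"mkfs.{fs} /dev/nullb0 > /dev/null")
        run(f"mkdir -p {MOUNTPOINT} && mount /dev/nullb0 {MOUNTPOINT}")
        out = subprocess.run(
            f"fio --name=sweep --directory={MOUNTPOINT} --rw=randwrite "
            "--bs=4k --size=256M --time_based --runtime=30 "
            "--output-format=json",
            shell=True, check=True, capture_output=True, text=True).stdout
        iops = json.loads(out)["jobs"][0]["write"]["iops"]
        print(f"{fs:8s} latency={latency_us:>5} us  write IOPS={iops:,.0f}")
    finally:
        run(f"umount {MOUNTPOINT} || true")
        run("rmmod null_blk")


for latency_us in LATENCIES_US:
    for fs in FILESYSTEMS:
        bench(fs, latency_us)
```

    Plotting IOPS against emulated latency per file system is one way to see the scalability differences and performance inversions the abstract reports.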

    Efficient and predictable high-speed storage access for real-time embedded systems

    As the speed, size, reliability and power efficiency of non-volatile storage media increase, and the data demands of many application domains grow, operating systems are being put under escalating pressure to provide high-speed access to storage. Traditional models of storage access assume that devices are slow, that there is plenty of slack time in which to process data between requests being serviced, and that all significant variation in timing comes from the storage device itself. Modern high-speed storage devices break these assumptions, causing storage applications to become processor-bound, rather than I/O-bound, in an increasing number of situations. This is especially an issue in real-time embedded systems, where limited processing resources and strict timing and predictability requirements amplify any problems caused by the complexity of the software storage stack. This thesis explores the issues involved in accessing high-speed storage from real-time embedded systems, providing a thorough analysis of storage operations based on metrics relevant to the area. From this analysis, a number of alternative storage architectures are proposed and explored, showing that a simpler, more direct path from applications to storage can have a positive impact on efficiency and predictability in such systems.
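
    The processor-bound versus I/O-bound distinction raised here can be probed directly by comparing CPU time with wall-clock time over a storage-bound loop. The sketch below is an illustrative assumption of how one might do this on Linux, not a method from the thesis; the file path, block size, and cache-eviction step are invented for the example.

```python
#!/usr/bin/env python3
"""Illustrative probe of the processor-bound vs. I/O-bound distinction.

Reads a file block by block and compares CPU time to wall-clock time: a CPU
share near 100% means the software stack, not the device, dominates.
"""
import os
import time

PATH = "/tmp/storage_probe.bin"   # hypothetical test file
BLOCK = 4096                      # read size per request (bytes)
TOTAL = 256 * 1024 * 1024         # 256 MiB test file

# Create the test file, flush it to stable storage, then ask the kernel to
# drop it from the page cache so the read loop actually touches the device.
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK) * (TOTAL // BLOCK))
    f.flush()
    os.fsync(f.fileno())
    os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)  # Linux only

wall_start = time.perf_counter()
cpu_start = time.process_time()

with open(PATH, "rb", buffering=0) as f:
    while f.read(BLOCK):
        pass

wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

print(f"wall time: {wall:.3f} s   cpu time: {cpu:.3f} s   "
      f"cpu share: {cpu / wall:.0%}")
# On a fast NVMe or persistent-memory device the CPU share climbs toward 100%,
# which is the processor-bound regime the thesis is concerned with.
```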