
    Holographic Data Storage Technology- The future of Data Storage

    Get PDF
    In the present times, technological advancement has grown at a rapid rate. Today most people use smart devices that comprise various kinds of technologies. One of the most important aspects of using technology is storing digital data. Most work is now done on digital devices such as computers and mobile phones, and people need to store their data on these devices, but each device has a limited amount of storage. A need therefore arises to store more data in less space, which calls for storage technologies that help people store larger amounts of data. To meet the demand for greater storage, various technologies exist, such as different types of ROM, optical storage discs, and USB flash drives, each of which uses a different technique to store data. This paper focuses on holographic data storage technology, which helps people store large amounts of data

    I/O Schedulers for Proportionality and Stability on Flash-Based SSDs in Multi-Tenant Environments

    Get PDF
    The use of flash based Solid State Drives (SSDs) has expanded rapidly into the cloud computing environment. In cloud computing, ensuring the service level objective (SLO) of each server is the major criterion in designing a system. In particular, eliminating performance interference among virtual machines (VMs) on shared storage is a key challenge. However, studies on SSD performance to guarantee SLO in such environments are limited. In this paper, we present analysis of I/O behavior for a shared SSD as storage in terms of proportionality and stability. We show that performance SLOs of SSD based storage systems being shared by VMs or tasks are not satisfactory. We present and analyze the reasons behind the unexpected behavior through examining the components of SSDs such as channels, DRAM buffer, and Native Command Queuing (NCQ). We introduce two novel SSD-aware host level I/O schedulers on Linux, called A+CFQ and H+BFQ, based on our analysis and findings. Through experiments on Linux, we analyze I/O proportionality and stability in multi-tenant environments. In addition, through experiments using real workloads, we analyze the performance interference between workloads on a shared SSD. We then show that the proposed I/O schedulers almost eliminate the interference effect seen in CFQ and BFQ, while still providing I/O proportionality and stability for various I/O weighted scenarios
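
    A minimal sketch (not from the paper) of how I/O proportionality can be quantified: compare each tenant's share of the achieved bandwidth with its share of the assigned I/O weight, so that a ratio near 1.0 means proportional service. The metric below is an illustrative assumption, written in Python rather than as a kernel scheduler.

        # Illustrative proportionality metric for weighted tenants sharing an SSD.
        # Assumption: proportionality is judged by comparing each tenant's share of
        # total achieved bandwidth with its share of the total assigned weight.
        def proportionality(weights, bandwidths):
            """Return {tenant: achieved_share / entitled_share}; 1.0 is ideal."""
            total_w = sum(weights.values())
            total_bw = sum(bandwidths.values())
            return {t: (bandwidths[t] / total_bw) / (w / total_w)
                    for t, w in weights.items()}

        # Example: VM "a" holds twice the weight of VM "b" but does not receive
        # twice the bandwidth -- the kind of SLO violation observed on shared SSDs.
        print(proportionality({"a": 200, "b": 100}, {"a": 450.0, "b": 350.0}))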

    Episode 3.12 – Run Length Limited Coding

    Get PDF
    By examining Run Length Limited (RLL) coding, we discover a way to compress the ones and zeros of our binary data by using differential coding. We also chat a bit about magnetic storage media
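
    As a rough illustration of differential coding on magnetic media (a sketch under our own assumptions, not necessarily the episode's exact scheme): in NRZI-style recording, a 1 bit is written as a transition in the magnetization level and a 0 bit as no transition, so the stored signal encodes changes rather than absolute values.

        # Sketch of NRZI-style differential coding: a 1 is a transition in the
        # recorded level, a 0 is no transition. Illustrative only.
        def nrzi_encode(bits):
            level, out = 0, []
            for b in bits:
                if b == 1:
                    level ^= 1        # flip the level on every 1 bit
                out.append(level)
            return out

        def nrzi_decode(levels, initial=0):
            prev, bits = initial, []
            for lv in levels:
                bits.append(1 if lv != prev else 0)
                prev = lv
            return bits

        data = [1, 0, 1, 1, 0, 0, 1]
        assert nrzi_decode(nrzi_encode(data)) == data   # round trip is lossless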

    An Accessible Infrastructure

    Get PDF
    Professors Carol Tenopir and Suzie Allard, and Mike Frame elucidate how their studies into perceptions of data sharing among scientists are enhancing research collaboration and progress

    Nanoscale heat transfer at contact between a hot tip and a substrate

    Full text link
    Hot tips are used either for characterizing nanostructures with scanning thermal microscopes or for local heating to assist data writing. The tip-sample thermal interaction involves conduction at the solid-solid contact as well as conduction through the ambient gas and through the water meniscus. We analyze these three heat transfer modes with experimental data and modeling. We conclude that the three modes contribute comparably to the thermal contact conductance but have distinct contact radii, ranging from 30 nm to 1 micron. We also show that any scanning thermal microscope has a resolution of 1-3 microns when used in ambient air
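
    A compact way to read that conclusion (the notation here is ours, not the authors'): the three paths act roughly as thermal conductances in parallel between tip and sample,

        G_{\mathrm{tip\text{-}sample}} \approx G_{\mathrm{solid}} + G_{\mathrm{gas}} + G_{\mathrm{meniscus}},

    with the three terms of comparable magnitude but associated with different effective contact radii, from roughly 30 nm for the solid-solid contact up to about 1 micron.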

    CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless Compression of High-Color DICOM Medical Images

    Full text link
    Medical image compression is a widely studied field of data processing due to its prevalence in modern digital databases. This domain requires a high color depth of 12 bits per pixel component for accurate analysis by physicians, primarily in the DICOM format. Standard raster-based compression of images via filtering is well-known; however, it remains suboptimal in the medical domain due to non-specialized implementations. This study proposes a lossless medical image compression algorithm, CompaCT, that aims to target spatial features and patterns of pixel concentration for dynamically enhanced data processing. The algorithm employs fractal pixel traversal coupled with a novel approach of segmentation and meshing between pixel blocks for preprocessing. Furthermore, delta and entropy coding are applied to this concept for a complete compression pipeline. The proposal demonstrates that the data compression achieved via fractal segmentation preprocessing yields enhanced image compression results while remaining lossless in its reconstruction accuracy. CompaCT is evaluated in its compression ratios on 3954 high-color CT scans against the efficiency of industry-standard compression techniques (i.e., JPEG2000, RLE, ZIP, PNG). Its reconstruction performance is assessed with error metrics to verify lossless image recovery after decompression. The results demonstrate that CompaCT can compress and losslessly reconstruct medical images, being 37% more space-efficient than industry-standard compression systems
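
    A minimal sketch of only the delta and entropy coding stage described above (the fractal traversal and block segmentation are omitted, and zlib stands in for the entropy coder; all names and values are illustrative assumptions, not the CompaCT implementation):

        # Delta-code a row of 12-bit CT samples, then entropy-code the differences.
        # zlib is a stand-in entropy coder; CompaCT's actual pipeline differs.
        import zlib

        def delta_encode(samples):
            prev, out = 0, []
            for s in samples:
                out.append((s - prev) & 0xFFFF)   # 16-bit wrapped differences
                prev = s
            return out

        def delta_decode(deltas):
            prev, out = 0, []
            for d in deltas:
                prev = (prev + d) & 0xFFFF
                out.append(prev)
            return out

        row = [2048 + (i % 5) for i in range(256)]         # synthetic 12-bit samples
        raw = b"".join(s.to_bytes(2, "little") for s in row)
        packed = zlib.compress(b"".join(d.to_bytes(2, "little")
                                        for d in delta_encode(row)), 9)
        assert delta_decode(delta_encode(row)) == row       # lossless round trip
        print("compression ratio:", len(raw) / len(packed))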

    Archive - A Data Management Program

    Get PDF
    To meet funding agency requirements, a portable data management solution is presented for small research groups. The database created is simple, searchable, robust, and can reside across multiple hard drives. Employing a standard metadata schema for all data, the database ensures a high level of standardization, findability, and organization. The software is written in Perl, runs on UNIX, and presents a web-based user interface. It uses a fast, portable log-in scheme, making it easy to export to other locations. As research continues to move towards more open data sharing and reproducibility, this database solution is agile enough to accommodate external participants, while satisfying the unique needs of the internal research group
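
    As a rough illustration of the kind of record-and-search layer such a database provides (a sketch under our own assumptions; the field names are not the program's actual schema, and Archive itself is written in Perl):

        # Hypothetical metadata records and a keyword search across them.
        # Field names and example entries are illustrative assumptions only.
        records = [
            {"title": "Raman spectra, batch 12", "creator": "J. Doe",
             "date": "2021-03-04", "path": "/drive1/raman/batch12",
             "keywords": ["raman", "batch12"]},
            {"title": "SEM images, sample A", "creator": "R. Roe",
             "date": "2021-05-17", "path": "/drive2/sem/sampleA",
             "keywords": ["sem", "sampleA"]},
        ]

        def search(term):
            term = term.lower()
            return [r for r in records
                    if term in r["title"].lower()
                    or any(term in k.lower() for k in r["keywords"])]

        print(search("raman"))   # finds records regardless of which drive holds them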

    Reliable Memory Storage by Natural Redundancy

    Get PDF
    Non-volatile memories are becoming the dominant type of storage devices in modern computers because of their fast speed, physical robustness and high data density. However, there still exist many challenges, such as the data reliability issues due to noise. An important example is the memristor, which uses programmable resistance to store data. Memristor memories use the crossbar architecture and suffer from the sneak-path problem: when a memristor cell of high resistance is read, it can be mistakenly read as a low-resistance cell due to low-resistance sneak-paths in the crossbar that are parallel to the cell. In this work, we study new ways to correct errors using the inherent redundancy in stored data (called Natural Redundancy), and combine them with conventional error-correcting codes. In particular, we define a Huffman encoding for the English language based on a repository of books. In addition, we study data stored using convolutional codes and use natural redundancy to verify if decoded codewords are valid or invalid. We present statistics over the Viterbi Algorithm and its ability to decode convolutional codewords, then discuss Yen's Algorithm, an augmentation of the Viterbi Algorithm. Finally, we present an efficient algorithm to search for a list of the most likely codewords, and choose a codeword that meets the criteria of both natural redundancy and the ECC as the decoding solution. We find that this algorithm is no more powerful than Yen's Algorithm in terms of decoding noisy convolutional codewords, but does present some interesting ideas for further exploration across multiple fields of study
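
    A minimal sketch of the selection step only (the Viterbi/Yen's-algorithm list search itself is omitted; the dictionary and candidate strings are illustrative assumptions): given candidate decodings ranked by likelihood, accept the first one that also satisfies a natural-redundancy check such as "every decoded word is a known English word".

        # Pick the most likely candidate decoding that also passes a
        # natural-redundancy check. Dictionary and candidates are illustrative.
        ENGLISH_WORDS = {"the", "data", "is", "stored", "safely"}

        def passes_natural_redundancy(text):
            return all(w in ENGLISH_WORDS for w in text.lower().split())

        def select_codeword(ranked_candidates):
            for text in ranked_candidates:            # most likely first
                if passes_natural_redundancy(text):
                    return text
            return ranked_candidates[0] if ranked_candidates else None

        candidates = ["the dwta is stored safely", "the data is stored safely"]
        print(select_codeword(candidates))            # -> "the data is stored safely"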