On Benchmarking Embedded Linux Flash File Systems
Due to its attractive characteristics in terms of performance, weight, and power consumption, NAND flash memory has become the main non-volatile memory (NVM) in embedded systems. These NVMs also present specific characteristics and constraints: good but asymmetric I/O performance, a limited lifetime, a write/erase granularity asymmetry, etc. These peculiarities are managed either in hardware for flash disks (SSDs, SD cards, USB sticks, etc.) or in software for raw embedded flash chips. When managed in software, flash algorithms and structures are implemented in a dedicated flash file system (FFS). In this paper, we present a performance study of the most widely used FFSs in embedded Linux: JFFS2, UBIFS, and YAFFS. We show some very particular behaviors and large performance disparities for tested FFS operations such as mounting, copying and searching file trees, compression, etc.
Comment: Embed With Linux, Lorient, France (2012)
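Operations like the ones benchmarked above (copying and searching a file tree) are easy to time from user space. The following is a minimal sketch of such a measurement, not the authors' benchmark; the tree shape and sizes are arbitrary, and on a real target you would point `src`/`dst` at a JFFS2, UBIFS, or YAFFS mount rather than a temp directory:

```python
import os, shutil, tempfile, time

def build_tree(root, dirs=5, files_per_dir=20, size=1024):
    """Create a small file tree to exercise the file system."""
    for d in range(dirs):
        path = os.path.join(root, f"dir{d}")
        os.makedirs(path, exist_ok=True)
        for f in range(files_per_dir):
            with open(os.path.join(path, f"file{f}.dat"), "wb") as fh:
                fh.write(os.urandom(size))

def timed(label, fn):
    """Run fn once and report its wall-clock duration."""
    start = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - start:.4f}s")
    return result

src = tempfile.mkdtemp()
dst = os.path.join(tempfile.mkdtemp(), "copy")
build_tree(src)

# Copy the whole tree, then walk the copy counting files (a "search").
timed("copy tree", lambda: shutil.copytree(src, dst))
count = timed("search tree",
              lambda: sum(len(fs) for _, _, fs in os.walk(dst)))
```

On raw flash the same script would expose the FFS-specific effects the paper measures, e.g. compression in JFFS2 changing copy times.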
FFSMark: A Benchmark for File Systems Dedicated to Flash Memories
NAND flash memory is the main storage medium in embedded systems. One way to integrate this memory into computer systems is to use file systems dedicated to flash memories (Flash File Systems, FFS). In this domain, benchmarks are programs used to carry out performance studies and to compare different systems against one another. We show in this article that a benchmark for FFSs must take the specific characteristics of flash memories into account, both in its behavior and in the performance metrics it provides as output. In this context, we propose FFSMark, a benchmark targeting FFSs that is aware of the specificities of flash memories. FFSMark is intended to be run under Linux, an operating system supporting the most popular FFSs. We also present a case study using FFSMark to compare the performance of the JFFS2, YAFFS2, and UBIFS FFSs on an embedded platform.
Elevating commodity storage with the SALSA host translation layer
To satisfy increasing storage demands in both capacity and performance,
industry has turned to multiple storage technologies, including Flash SSDs and
SMR disks. These devices employ a translation layer that conceals the idiosyncrasies of their media and enables random access. Device translation layers are, however, inherently constrained: resources on the drive are scarce, they cannot be adapted to application requirements, and they lack visibility across multiple devices. As a result, the performance and durability of many storage devices are severely degraded.
In this paper, we present SALSA: a translation layer that executes on the
host and allows unmodified applications to better utilize commodity storage.
SALSA supports a wide range of single- and multi-device optimizations and, because it is implemented in software, can adapt to specific workloads. We describe SALSA's design, and demonstrate its significant benefits using microbenchmarks and case studies based on three applications: MySQL, the Swift object store, and a video server.
Comment: Presented at the 2018 IEEE 26th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)
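The core mechanism of any translation layer can be shown in a few lines. The toy sketch below is an illustration of the general principle, not SALSA's design: logical writes are redirected to sequentially allocated physical blocks through a mapping table, which is how such layers hide a medium's aversion to in-place random writes:

```python
class TranslationLayer:
    """Toy log-structured translation layer: logical block addresses
    are remapped to sequentially allocated physical blocks, so random
    logical writes become sequential physical writes."""

    def __init__(self, num_blocks):
        self.physical = [None] * num_blocks   # backing store
        self.mapping = {}                     # logical -> physical
        self.next_free = 0                    # append-only write head

    def write(self, lba, data):
        pba = self.next_free                  # always write sequentially
        self.next_free += 1
        self.physical[pba] = data
        self.mapping[lba] = pba               # old copy becomes garbage

    def read(self, lba):
        return self.physical[self.mapping[lba]]

tl = TranslationLayer(16)
tl.write(7, "A")
tl.write(2, "B")
tl.write(7, "A2")   # overwrite lands in a fresh physical block
```

A real layer adds what this omits: garbage collection of superseded blocks, wear leveling, and crash-consistent persistence of the mapping table.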
Performance Evaluation of Flash File Systems
Today, flash memories are widely used in the embedded systems domain. NAND flash memories are the building blocks of most secondary storage systems there. Such memories present many benefits in terms of data density, I/O performance, shock resistance, and power consumption. Nevertheless, flash does not come without constraints: the write/erase granularity asymmetry and the limited lifetime bring the need for specific management. This can be done through the operating system using dedicated Flash File Systems (FFSs). In this document, we present general concepts about FFSs, along with example implementations, namely JFFS2, YAFFS2, and UBIFS, the most commonly used flash file systems. We then give performance evaluation results for these FFSs.
Comment: Colloque du GDR SoC-SiP, Paris, France (2012)
FIT A Fog Computing Device for Speech TeleTreatments
There is an increasing demand for smart fog computing gateways as the size of cloud data grows. This paper presents a Fog computing interface (FIT) for processing clinical speech data. FIT builds upon our previous work on EchoWear, a wearable technology that validated the use of smartwatches for collecting clinical speech data from patients with Parkinson's disease (PD). The fog interface is a low-power embedded system that acts as a smart interface between the smartwatch and the cloud. It collects, stores, and processes the speech data before sending speech features to secure cloud storage. We developed and validated a working prototype of FIT that enabled remote processing of clinical speech data to extract clinical speech features such as loudness, short-time energy, zero-crossing rate, and spectral centroid. We used speech data from six patients with PD in their homes to validate FIT. Our results showed the efficacy of FIT as a Fog interface to translate the clinical speech processing chain (CLIP) from a cloud-based backend to a fog-based smart gateway.
Comment: 3 pages, 5 figures, 1 table, 2nd IEEE International Conference on Smart Computing (SMARTCOMP 2016), Missouri, USA, 2016
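Two of the features named above have simple closed forms. The sketch below is a minimal pure-Python illustration (not the FIT implementation): short-time energy is the mean squared amplitude of a frame, and zero-crossing rate is the fraction of adjacent sample pairs whose signs differ:

```python
import math

def short_time_energy(frame):
    """Mean squared amplitude of one analysis frame."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose sign differs."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

# A 100 Hz sine sampled at 8 kHz: energy of a sine is ~0.5 of its
# peak squared, and ZCR is low because the signal changes sign slowly.
sr, f = 8000, 100
frame = [math.sin(2 * math.pi * f * n / sr) for n in range(sr // 10)]
energy = short_time_energy(frame)
zcr = zero_crossing_rate(frame)
```

Voiced speech tends to show high energy and low ZCR; unvoiced fricatives show the opposite, which is why both features are routinely extracted per frame.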
I/O Schedulers for Proportionality and Stability on Flash-Based SSDs in Multi-Tenant Environments
The use of flash-based Solid State Drives (SSDs) has expanded rapidly into the cloud computing environment. In cloud computing, ensuring the service level objective (SLO) of each server is the major criterion in designing a system. In particular, eliminating performance interference among virtual machines (VMs) on shared storage is a key challenge. However, studies on SSD performance to guarantee SLOs in such environments are limited. In this paper, we present an analysis of I/O behavior for a shared SSD as storage in terms of proportionality and stability. We show that the performance SLOs of SSD-based storage systems being shared by VMs or tasks are not satisfactory. We present and analyze the reasons behind the unexpected behavior by examining the components of SSDs such as channels, the DRAM buffer, and Native Command Queuing (NCQ). We introduce two novel SSD-aware host-level I/O schedulers on Linux, called A+CFQ and H+BFQ, based on our analysis and findings. Through experiments on Linux, we analyze I/O proportionality and stability in multi-tenant environments. In addition, through experiments using real workloads, we analyze the performance interference between workloads on a shared SSD. We then show that the proposed I/O schedulers almost eliminate the interference effect seen in CFQ and BFQ, while still providing I/O proportionality and stability for various I/O weighted scenarios
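The proportionality goal can be made concrete with a toy fair-queuing model. This is an illustration of the general start-time fair queuing idea behind proportional I/O sharing, not the schedulers proposed in the paper: each tenant tracks a virtual time equal to service received divided by weight, and the scheduler always serves the tenant with the smallest virtual time:

```python
import heapq

def weighted_schedule(weights, total_ios):
    """Dispatch total_ios requests among tenants, always picking the
    tenant with the smallest virtual time (service / weight)."""
    served = {t: 0 for t in weights}
    heap = [(0.0, t) for t in weights]
    heapq.heapify(heap)
    for _ in range(total_ios):
        vtime, t = heapq.heappop(heap)
        served[t] += 1
        # Advancing vtime by 1/weight makes service converge to the
        # weight ratio: heavier tenants age more slowly.
        heapq.heappush(heap, (served[t] / weights[t], t))
    return served

# Tenant A weighted 2x tenant B: expect roughly a 2:1 split.
out = weighted_schedule({"A": 2, "B": 1}, 300)
```

The paper's point is that this property, easy to guarantee in such a model, breaks down on real SSDs because channels, the DRAM buffer, and NCQ reorder and batch requests beneath the host scheduler.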
IoT single board computer to replace a home server
Home servers are popular among computer enthusiasts for hosting various applications, including Linux OS with web servers, database solutions, and private cloud services, as well as for VPN, torrent, file-sharing, and streaming. Single Board Computers (SBCs), once used only for small projects, have evolved and can now control multiple devices in the IoT space. SBCs have become more powerful and can run many of the same applications as traditional home servers. In light of the energy crisis, this study examines the feasibility of replacing a conventional home server with an SBC while maintaining service quality, and evaluates performance and availability. The power consumption of both solutions is also compared.
CPC: programming with a massive number of lightweight threads
Threads are a convenient and modular abstraction for writing concurrent
programs, but often fairly expensive. The standard alternative to threads,
event-loop programming, allows much lighter units of concurrency, but leads to
code that is difficult to write and even harder to understand. Continuation
Passing C (CPC) is a translator that converts a program written in threaded
style into a program written with events and native system threads, at the
programmer's choice. Together with two undergraduate students, we taught
ourselves how to program in CPC by writing Hekate, a massively concurrent
network server designed to efficiently handle tens of thousands of
simultaneously connected peers. In this paper, we describe a number of
programming idioms that we learnt while writing Hekate; while some of these
idioms are specific to CPC, many should be applicable to other programming
systems with sufficiently cheap threads.Comment: To appear in PLACES'1
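The closing point about "sufficiently cheap threads" generalizes beyond CPC: cooperative tasks in most modern runtimes are cheap enough to spawn by the tens of thousands, which is exactly what a server like Hekate needs. A sketch in Python's asyncio (unrelated to CPC itself, which translates C):

```python
import asyncio

async def peer(i, results):
    """Stand-in for one connected peer: yield to the scheduler
    once, then record its work."""
    await asyncio.sleep(0)
    results.append(i)

async def main(n):
    results = []
    # Spawning tens of thousands of tasks is cheap: each is a small
    # heap object, not an OS thread with its own stack.
    await asyncio.gather(*(peer(i, results) for i in range(n)))
    return len(results)

count = asyncio.run(main(20_000))
```

Each task keeps the readable sequential structure of a thread, while the event loop underneath multiplexes them the way hand-written event code would; CPC performs an analogous transformation at compile time for C.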