Towards Reliable Benchmarks of Timed Automata
The verification of the time-dependent behavior of safety-critical systems is important, as design problems often arise from complex timing conditions. One of the most common formalisms for modeling timed systems is the timed automaton, which introduces clock variables to represent the elapse of time. Various tools and algorithms have been developed for the verification of timed automata. However, it is hard to decide which one to use for a given problem, as no exhaustive benchmark of their effectiveness and efficiency can be found in the literature. Moreover, no public set of models exists that could serve as an appropriate benchmark suite. In our work, we have collected publicly available timed automaton models and industrial case studies, and we used them to compare the efficiency of the algorithms implemented in the Theta model checker. In this paper, we present our preliminary benchmark suite and the results of the measurements performed.
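To make the formalism concrete, the following is a minimal sketch of how a timed-automaton transition with a clock guard and a clock reset can be evaluated. The representation (tuples for guards, a dict of clock valuations) is purely illustrative and is not how the Theta model checker encodes timed automata.

```python
# Minimal sketch of a timed-automaton transition check (illustrative only;
# real model checkers use symbolic clock representations such as zones/DBMs).
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    src: str
    dst: str
    guard: tuple                 # (clock, op, bound), e.g. ("x", "<=", 2.0)
    resets: frozenset = frozenset()

def guard_holds(clocks, guard):
    clock, op, bound = guard
    v = clocks[clock]
    return v <= bound if op == "<=" else v >= bound

def step(state, clocks, delay, transition):
    """Let `delay` time units elapse, then take `transition` if enabled."""
    clocks = {c: v + delay for c, v in clocks.items()}   # all clocks advance uniformly
    if state != transition.src or not guard_holds(clocks, transition.guard):
        return None                                      # transition not enabled
    clocks = {c: (0.0 if c in transition.resets else v) for c, v in clocks.items()}
    return transition.dst, clocks

t = Transition("idle", "busy", ("x", "<=", 2.0), frozenset({"x"}))
print(step("idle", {"x": 0.0}, 1.5, t))   # enabled: after 1.5 time units, x <= 2 holds
print(step("idle", {"x": 0.0}, 3.0, t))   # None: guard violated after the delay
```

The key property the sketch captures is that delays advance all clocks at the same rate, while resets apply only to the clocks named on the transition.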
Understanding metadata latency with MDWorkbench
While parallel file systems often satisfy the needs of applications with bulk synchronous I/O, they lack capabilities for dealing with metadata-intense workloads. Typically, in procurements, the focus lies on the aggregated metadata throughput using the MDTest benchmark. However, metadata performance is crucial for interactive use. Metadata benchmarks involve even more parameters compared to I/O benchmarks. There are several aspects that are currently uncovered and, therefore, not in the focus of vendors to investigate, particularly response latency and interactive workloads operating on a working set of data. The lack of capabilities from file systems can be observed when looking at the IO-500 list, where metadata performance between the best and worst systems does not differ significantly. In this paper, we introduce a new benchmark called MDWorkbench which generates a reproducible workload emulating many concurrent users or, in an alternative view, queuing systems. This benchmark provides a detailed latency profile, overcomes caching issues, and provides a method to assess the quality of the observed throughput. We evaluate the benchmark on state-of-the-art parallel file systems with GPFS (IBM Spectrum Scale), Lustre, Cray's DataWarp, and DDN IME, and conclude that we can reveal characteristics that could not be identified before.
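The distinction the abstract draws between aggregate throughput and per-operation latency can be illustrated with a toy, single-process metadata latency profile. This is only a sketch of the idea; MDWorkbench itself runs many concurrent (MPI) processes against a persistent working set.

```python
# Toy single-process sketch of a per-operation metadata latency profile,
# in the spirit of MDWorkbench (which runs many concurrent processes).
import os
import statistics
import tempfile
import time

def latency_profile(n_files=200):
    lat = {"create": [], "stat": [], "delete": []}
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"f{i}") for i in range(n_files)]
        ops = [("create", lambda p: open(p, "w").close()),
               ("stat",   os.stat),
               ("delete", os.remove)]
        for name, fn in ops:
            for p in paths:
                t0 = time.perf_counter()
                fn(p)                                  # one metadata operation
                lat[name].append(time.perf_counter() - t0)
    # Report the worst case alongside the mean: interactive use suffers from
    # tail latency even when aggregate throughput looks fine.
    return {name: (statistics.mean(v), max(v)) for name, v in lat.items()}

for name, (mean, worst) in latency_profile().items():
    print(f"{name:6s} mean={mean * 1e6:8.1f}us  max={worst * 1e6:8.1f}us")
```

A throughput-only number (operations per second) would average away exactly the tail-latency column this sketch reports.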
Towards a Benchmark for Fog Data Processing
Fog data processing systems provide key abstractions to manage data and event
processing in the geo-distributed and heterogeneous fog environment. The lack
of standardized benchmarks for such systems, however, hinders their development
and deployment, as different approaches cannot be compared quantitatively.
Existing cloud data benchmarks are inadequate for fog computing, as their focus
on workload specification ignores the tight integration of application and
infrastructure inherent in fog computing.
In this paper, we outline an approach to a fog-native data processing
benchmark that combines workload specifications with infrastructure
specifications. This holistic approach allows researchers and engineers to
quantify how a software approach performs for a given workload on given
infrastructure. Further, by basing our benchmark in a realistic IoT sensor
network scenario, we can combine paradigms such as low-latency event
processing, machine learning inference, and offline data analytics, and analyze
the performance impact of their interplay in a fog data processing system.
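The core idea of pairing a workload specification with an infrastructure specification can be sketched as a small data model. The field names, tiers, and numbers below are hypothetical and not taken from the paper; they only illustrate that a result is meaningful for a (workload, infrastructure) pair rather than for a workload alone.

```python
# Hypothetical sketch of a fog benchmark scenario that couples workload and
# infrastructure specifications (names and values are illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    paradigm: str        # "event-processing" | "ml-inference" | "analytics"
    events_per_s: int

@dataclass(frozen=True)
class Node:
    tier: str            # "sensor" | "fog" | "cloud"
    cores: int
    uplink_ms: float     # network latency to the next tier up

@dataclass(frozen=True)
class Scenario:
    workloads: tuple     # the mixed-paradigm workload specification
    infrastructure: tuple  # the heterogeneous infrastructure specification

scenario = Scenario(
    workloads=(Workload("anomaly-alerts", "event-processing", 5000),
               Workload("model-scoring", "ml-inference", 200),
               Workload("nightly-report", "analytics", 1)),
    infrastructure=(Node("sensor", 1, 20.0),
                    Node("fog", 4, 10.0),
                    Node("cloud", 64, 0.0)),
)
# Total offered load across all paradigms in the mixed scenario:
print(sum(w.events_per_s for w in scenario.workloads))  # 5201
```

Fixing both halves of the scenario is what lets two fog data processing systems be compared quantitatively on the same footing.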
Container-based Cloud Virtual Machine benchmarking
This research was pursued under the EPSRC grant EP/K015745/1, "Working Together: Constraint Programming and Cloud Computing," an Erasmus Mundus Master's scholarship, and an Amazon Web Services Education Research grant. With the availability of a wide range of cloud Virtual Machines (VMs) it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end for capturing the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed. In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite (Docker Container-based Lightweight Benchmarking). DocLite is built on the Docker container technology, which allows a user-defined portion (such as memory size and the number of CPU cores) of the VM to be benchmarked. DocLite operates in two modes: in the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks; in the second mode, historic benchmark data is used along with the first mode as a hybrid to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique which benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
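The two ranking modes described above can be sketched as follows. The scoring model, VM names, and the equal-weight blend are assumptions for illustration; the paper's actual scoring of benchmark results may differ.

```python
# Illustrative sketch of DocLite's two ranking modes (weights and scores
# here are assumptions, not the paper's actual model).
def rank(scores):
    """Higher benchmark score -> better rank (1 = best)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {vm: i + 1 for i, vm in enumerate(ordered)}

def hybrid_rank(container_scores, historic_scores, w=0.5):
    # Mode 2: blend the lightweight container-benchmark scores with
    # historic benchmark data before ranking.
    combined = {vm: w * container_scores[vm] + (1 - w) * historic_scores[vm]
                for vm in container_scores}
    return rank(combined)

# Mode 1: scores obtained by benchmarking only a user-defined portion of
# each VM inside a container (values are made up for the example).
container = {"m3.large": 7.2, "c4.xlarge": 9.1, "t2.medium": 4.3}
historic  = {"m3.large": 6.8, "c4.xlarge": 8.7, "t2.medium": 5.1}

print(rank(container))                    # mode 1 ranks
print(hybrid_rank(container, historic))   # mode 2 (hybrid) ranks
```

The point of mode 1 is that the container benchmark touches only a small slice of the VM, so the ranks are available in near real time; mode 2 then refines them with historic data.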
Do Not Be Fooled: Toward a Holistic Comparison of Distributed Ledger Technology Designs
Distributed Ledger Technology (DLT) enables a new way of inter-organizational collaboration via a shared and distributed infrastructure. Meanwhile, there are many DLT designs (e.g., Ethereum, IOTA) that differ in their capabilities to meet use-case requirements. A structured comparison of DLT designs is required to support the decision for an appropriate DLT design. However, existing criteria and processes are abstract or not suitable for an in-depth comparison of DLT designs. We select and operationalize DLT characteristics relevant for a comprehensive comparison of DLT designs. Furthermore, we propose a comparison process that enables the structured comparison of a set of DLT designs according to application requirements. The proposed process is validated with an analysis of three use cases. We contribute to research and practice by introducing ways to operationalize DLT characteristics and by proposing a process to compare DLT designs according to their suitability for a use case.
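One plausible reading of such a comparison process is a weighted scoring of operationalized characteristics against use-case requirements. The characteristics, scores, and weights below are invented for illustration and are not the paper's actual criteria.

```python
# Hedged sketch of a requirement-weighted DLT comparison (characteristics,
# designs, scores, and weights are illustrative, not from the paper).
def compare(designs, weights):
    """Score each design as the requirement-weighted sum of its characteristics."""
    totals = {name: sum(weights[c] * score for c, score in chars.items())
              for name, chars in designs.items()}
    best = max(totals, key=totals.get)
    return best, totals

designs = {
    "Ethereum": {"throughput": 2, "decentralization": 5, "smart_contracts": 5},
    "IOTA":     {"throughput": 4, "decentralization": 3, "smart_contracts": 2},
}
# A hypothetical use case that prioritizes throughput over programmability:
weights = {"throughput": 0.6, "decentralization": 0.2, "smart_contracts": 0.2}

best, totals = compare(designs, weights)
print(best, totals)
```

Changing the weight vector to reflect a different use case can flip the outcome, which is exactly why the comparison must be driven by application requirements rather than by a fixed ranking of designs.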
TPC-H Analyzed: Hidden Messages and Lessons Learned from an Influential Benchmark
The TPC-D benchmark was developed almost 20 years ago, and even though its current existence as TPC-H could be considered superseded by TPC-DS, one can still learn from it. We focus on the technical level, summarizing the challenges posed by the TPC-H workload as we now understand them, which w