609 research outputs found
Incremental Calibration of Architectural Performance Models with Parametric Dependencies
Architecture-based Performance Prediction (AbPP) allows evaluating the
performance of systems and answering what-if questions without measurements for
all alternatives. A difficulty when creating models is that Performance Model
Parameters (PMPs, such as resource demands, loop iteration numbers and branch
probabilities) depend on various influencing factors like input data, used
hardware and the applied workload. To enable a broad range of what-if
questions, Performance Models (PMs) need to have predictive power beyond what
has been measured to calibrate the models. Thus, PMPs need to be parametrized
over the influencing factors that may vary.
Existing approaches allow for the estimation of parametrized PMPs by
measuring the complete system. Thus, they are too costly to be applied
frequently, e.g., after every code change. Moreover, they do not preserve
manual changes to the model when recalibrating.
In this work, we present the Continuous Integration of Performance Models
(CIPM), which incrementally extracts and calibrates the performance model,
including parametric dependencies. CIPM responds to source code changes by
updating the PM and adaptively instrumenting the changed parts. To allow AbPP,
CIPM estimates the parametrized PMPs using the measurements (generated by
performance tests or executing the system in production) and statistical
analysis, e.g., regression analysis and decision trees.
Additionally, our approach responds to production changes (e.g., load or
deployment changes) and calibrates the usage and deployment parts of PMs
accordingly.
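To make the parametrization idea concrete, the following minimal Python sketch
fits a decision-tree regressor to hypothetical monitoring data, relating a
resource demand to an input-size factor. The data, names, and model choice are
illustrative assumptions, not the CIPM implementation.

```python
# Hypothetical sketch: estimating a parametrized PMP (resource demand as a
# function of an influencing factor) from measurements via a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Simulated measurements: input size (influencing factor) and the observed
# resource demand in ms for one service call. Entirely made-up data.
input_size = rng.uniform(1, 1000, size=500).reshape(-1, 1)
resource_demand = 0.05 * input_size.ravel() + rng.normal(0, 2, size=500)

# Fit a tree so the PMP stays parametrized over the factor instead of
# being collapsed into a single averaged constant.
model = DecisionTreeRegressor(max_depth=4)
model.fit(input_size, resource_demand)

# The calibrated PMP can now answer what-if questions for unmeasured sizes.
print(model.predict([[250.0], [800.0]]))
```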
For the evaluation, we used two case studies. Evaluation results show that we
were able to calibrate the PM incrementally and accurately.
On Experimentation in Software-Intensive Systems
Context: Delivering software that has value to customers is a primary concern of every software company. Prevalent in web-facing companies, controlled experiments are used to validate and deliver value in incremental deployments. While web-facing companies are aiming to automate and reduce the cost of each experiment iteration, embedded-systems companies are starting to adopt experimentation practices and to leverage the automation developments made in the online domain.

Objective: This thesis has two main objectives. The first is to analyze how software companies can run and optimize their systems through automated experiments. This objective is investigated from the perspectives of the software architecture, the algorithms for experiment execution, and the experimentation process. The second is to analyze how non-web-facing companies can adopt experimentation as part of their development process to validate and deliver value to their customers continuously. This objective is investigated from the perspective of the software development process and focuses on the experimentation aspects that are distinct from web-facing companies.

Method: To achieve these objectives, we conducted research in close collaboration with industry and used a combination of different empirical research methods: case studies, literature reviews, simulations, and empirical evaluations.

Results: This thesis provides six main results. First, it proposes an architecture framework for automated experimentation that can be used with different types of experimental designs in both embedded systems and web-facing systems. Second, it proposes a new experimentation process that captures the details of a trustworthy experimentation process and can serve as the basis for an automated one. Third, it identifies the restrictions and pitfalls of different multi-armed bandit algorithms for automating experiments in industry, and proposes a set of guidelines to help practitioners select a technique that minimizes the occurrence of these pitfalls. Fourth, it proposes statistical models to analyze optimization algorithms that can be used in automated experimentation. Fifth, it identifies the key challenges faced by embedded-systems companies when adopting controlled experimentation and proposes a set of strategies to address them. Sixth, it identifies experimentation techniques and proposes a new continuous experimentation model for mission-critical and business-to-business systems.

Conclusion: The results presented in this thesis indicate that the trustworthiness of the experimentation process and the selection of algorithms still need to be addressed before automated experimentation can be used at scale in industry. The embedded-systems industry faces challenges in adopting experimentation as part of its development process, in part because of the low number of users and devices available for experiments and the diversity of experimental designs required for each new situation. This limitation increases both the complexity of the experimentation process and the number of techniques needed to address the constraint.
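Since the thesis's third result concerns multi-armed bandit algorithms, here is
a minimal epsilon-greedy sketch of how such an algorithm might assign traffic
in an automated experiment. The variant reward rates and parameters are
invented for illustration, and the thesis analyzes more algorithms than this
one.

```python
# Minimal epsilon-greedy multi-armed bandit sketch for automated experiments.
import random

true_conversion = [0.10, 0.12, 0.08]   # hidden per-variant reward rates
counts = [0] * 3
values = [0.0] * 3                     # running mean reward per variant
epsilon = 0.1

for _ in range(10_000):
    # Explore with probability epsilon, otherwise exploit the best estimate.
    if random.random() < epsilon:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: values[a])
    reward = 1 if random.random() < true_conversion[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(counts, [round(v, 3) for v in values])
```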
FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning
Finance is a particularly difficult playground for deep reinforcement
learning: establishing high-quality market environments and benchmarks
for financial reinforcement learning is challenging due to three major factors,
namely, low signal-to-noise ratio of financial data, survivorship bias of
historical data, and model overfitting in the backtesting stage. In this paper,
we present an openly accessible FinRL-Meta library that has been actively
maintained by the AI4Finance community. First, following a DataOps paradigm, we
provide hundreds of market environments through an automatic pipeline that
collects dynamic datasets from real-world markets and processes them into
gym-style market environments. Second, we reproduce popular papers as stepping
stones for users to design new trading strategies. We also deploy the library
on cloud platforms so that users can visualize their own results and assess the
relative performance via community-wise competitions. Third, FinRL-Meta
provides tens of Jupyter/Python demos organized into a curriculum and a
documentation website to serve the rapidly growing community. FinRL-Meta is
available at: https://github.com/AI4Finance-Foundation/FinRL-Meta
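For readers unfamiliar with gym-style environments, the following
self-contained sketch shows the reset/step interaction loop such environments
expose. The toy environment, its constructor arguments, and the placeholder
policy are assumptions for illustration, not FinRL-Meta's actual classes.

```python
# Sketch of the gym-style interaction loop that market environments follow.
import numpy as np

class ToyTradingEnv:
    """Minimal stand-in exposing the standard gym-style API."""
    def __init__(self, prices):
        self.prices, self.t = prices, 0

    def reset(self):
        self.t = 0
        return np.array([self.prices[0]])

    def step(self, action):            # action: -1 sell, 0 hold, 1 buy
        self.t += 1
        reward = action * (self.prices[self.t] - self.prices[self.t - 1])
        done = self.t == len(self.prices) - 1
        return np.array([self.prices[self.t]]), reward, done, {}

env = ToyTradingEnv(prices=[100.0, 101.5, 99.8, 102.2])
state, done, total = env.reset(), False, 0.0
while not done:
    action = 1                          # placeholder policy: always long
    state, reward, done, info = env.step(action)
    total += reward
print(total)
```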
An Automated Procedure for Simulating Complex Arrival Processes: A Web-Based Approach
In industry, simulation is one of the most widely used probabilistic modeling tools for modeling highly complex systems. Major sources of complexity include the inputs that drive the logic of the model. Effective simulation input modeling requires accurate and efficient input-modeling procedures. This research focuses on nonstationary arrival processes. The fundamental stochastic model on which this study is based is the nonhomogeneous Poisson process (NHPP), which has successfully been used to characterize arrival processes whose arrival rate changes over time. Although a number of methods exist for modeling the rate and mean value functions that define the behavior of NHPPs, one of the most flexible is a multiresolution procedure used to model the mean value function for processes possessing long-term trends over time or asymmetric, multiple cyclic behavior.

In this research, a statistical-estimation procedure for automating the multiresolution procedure is developed that involves the following steps at each resolution level corresponding to a basic cycle: (a) transforming the cumulative relative frequency of arrivals within the cycle to obtain a linear statistical model having normal residuals with homogeneous variance; (b) fitting specially formulated polynomials to the transformed arrival data; (c) performing a likelihood ratio test to determine the degree of the fitted polynomial; and (d) fitting a polynomial of the degree determined in (c) to the original (untransformed) arrival data. Next, an experimental performance evaluation is conducted to test the effectiveness of the estimation method. A web-based application for modeling NHPPs with the automated multiresolution procedure and generating realizations of the NHPP is developed. Finally, a web-based simulation infrastructure that integrates modeling, input analysis, verification, validation, and output analysis is discussed.
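As one concrete piece of the picture, a standard way to generate realizations
of an NHPP (independent of the paper's multiresolution estimator) is
Lewis-Shedler thinning. The sketch below assumes a made-up sinusoidal rate
function with a daily cycle.

```python
# Generating one realization of an NHPP by thinning (Lewis-Shedler).
import math
import random

def rate(t):
    """Assumed arrival rate (arrivals/hour) with a 24 h cycle."""
    return 5.0 + 3.0 * math.sin(2 * math.pi * t / 24.0)

def nhpp_thinning(horizon, rate_fn, rate_max):
    """Simulate arrival times on [0, horizon] by thinning a homogeneous
    Poisson process with rate rate_max (an upper bound on rate_fn)."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate_max)        # candidate from HPP(rate_max)
        if t > horizon:
            return arrivals
        if random.random() < rate_fn(t) / rate_max:  # accept w.p. rate/max
            arrivals.append(t)

times = nhpp_thinning(horizon=48.0, rate_fn=rate, rate_max=8.0)
print(len(times), times[:5])
```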
Dynamic Datasets and Market Environments for Financial Reinforcement Learning
The financial market is a particularly challenging playground for deep
reinforcement learning due to its unique feature of dynamic datasets. Building
high-quality market environments for training financial reinforcement learning
(FinRL) agents is difficult due to major factors such as the low
signal-to-noise ratio of financial data, survivorship bias of historical data,
and model overfitting. In this paper, we present FinRL-Meta, a data-centric and
openly accessible library that processes dynamic datasets from real-world
markets into gym-style market environments and has been actively maintained by
the AI4Finance community. First, following a DataOps paradigm, we provide
hundreds of market environments through an automatic data curation pipeline.
Second, we provide homegrown examples and reproduce popular research papers as
stepping stones for users to design new trading strategies. We also deploy the
library on cloud platforms so that users can visualize their own results and
assess the relative performance via community-wise competitions. Third, we
provide dozens of Jupyter/Python demos organized into a curriculum and a
documentation website to serve the rapidly growing community. The open-source
codes for the data curation pipeline are available at
https://github.com/AI4Finance-Foundation/FinRL-Meta
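To illustrate the kind of step a data curation pipeline performs, the sketch
below cleans a tiny hand-made OHLCV table and normalizes it into state arrays
an environment could consume. The column names and normalization choice are
assumptions, not the library's actual schema.

```python
# Hedged sketch of a data-cleaning and normalization step.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "date":   pd.date_range("2024-01-01", periods=5, freq="D"),
    "close":  [100.0, None, 101.2, 103.0, 102.5],
    "volume": [1_000, 1_100, None, 900, 950],
})

clean = (raw.sort_values("date")
            .ffill()      # fill gaps from the prior observation
            .dropna())    # drop rows that had no prior value at all

# Normalize features so an agent sees comparable scales across assets.
states = np.column_stack([
    clean["close"] / clean["close"].iloc[0],
    clean["volume"] / clean["volume"].max(),
])
print(states)
```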
Automatic System Testing of Programs without Test Oracles
Metamorphic testing has been shown to be a simple yet effective technique for addressing the quality assurance of applications that do not have test oracles, i.e., for which it is difficult or impossible to know what the correct output should be for arbitrary input. In metamorphic testing, existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed based on the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application does not produce the expected output, then a defect must exist, and either f(x) or f(x') (or both) is wrong.

In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, or practically impossible for input that is not in human-readable format. Similarly, comparing the outputs can be error-prone for large result sets, especially when slight variations in the results are not actually indicative of errors (i.e., are false positives), for instance when there is non-determinism in the application and multiple outputs can be considered correct.

In this paper, we present an approach called Automated Metamorphic System Testing. It automates metamorphic testing at the system level by checking that the metamorphic properties of the entire application hold after its execution. The tester can easily set up and conduct metamorphic tests with little manual intervention, and testing can continue in the field with minimal impact on the user. Additionally, we present an approach called Heuristic Metamorphic Testing, which seeks to reduce false positives and address some cases of non-determinism. We also describe an implementation framework called Amsterdam and present the results of empirical studies in which we demonstrate the effectiveness of the technique on real-world programs without test oracles.
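To make the f(x)/f(x') idea concrete, the following minimal sketch (not the
paper's Amsterdam framework) checks a metamorphic scaling property of an
oracle-less function, with a tolerance standing in for benign output
variation. The function and property are chosen for illustration.

```python
# Minimal metamorphic-testing sketch: no oracle says what average(xs) should
# be for arbitrary xs, but avg(c * x) == c * avg(x) must hold; a violation
# reveals a defect in the implementation.
import math
import random

def average(xs):                        # system under test (no oracle)
    return sum(xs) / len(xs)

def metamorphic_scale_test(trials=100, c=3.0):
    for _ in range(trials):
        xs = [random.uniform(-1e6, 1e6) for _ in range(1000)]
        f_x = average(xs)
        f_x2 = average([c * v for v in xs])      # follow-up test case
        # f(x') must be predictable from f(x), even though neither value
        # is known to be "correct" in isolation. Tolerances absorb benign
        # floating-point variation (the paper's false-positive concern).
        assert math.isclose(f_x2, c * f_x, rel_tol=1e-9, abs_tol=1e-6), \
            "metamorphic property violated: defect revealed"

metamorphic_scale_test()
print("metamorphic property held on all trials")
```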
Development of a wireless-based low-cost current-controlled stimulator for patients with spinal cord injuries
A spinal cord injury (SCI) has a severe impact on human life in general as well as on physical status and condition. The use of electrical signals to restore the function of paralyzed muscles is called functional electrical stimulation (FES). FES is a promising way to restore mobility after SCI by applying low-level electrical current to the paralyzed muscles so as to enhance the person's ability to function and live independently. However, due to the limited number of commercially available FES-assisted exercise systems and their rather high cost, conventional devices are unaffordable for most people. Wired systems are also inconvenient because they restrict the exercises that can be performed. This project therefore concerns the development of a low-cost current-controlled stimulator, mainly for paraplegic subjects. The device is based on a microcontroller, a wireless link using a Zigbee module, and a voltage-to-current converter circuit, and should produce proper monophasic and biphasic current pulses, pulse trains, arbitrary current waveforms, and a trigger output for FES applications. The performance of the device will be assessed through simulation studies and validated through experimental work. The device is intended as a new, low-cost approach to stimulator development and as a contribution to Rehabilitation Engineering for patients with SCI.
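As a rough illustration of the pulse shapes mentioned above, the sketch below
constructs a charge-balanced biphasic pulse-train waveform as a sample array,
as one might before feeding a DAC and voltage-to-current converter. All
amplitudes and timings are invented, not clinical or device parameters.

```python
# Building a charge-balanced biphasic pulse-train waveform as an array.
import numpy as np

def biphasic_pulse_train(amplitude_ma, pulse_width_us, frequency_hz,
                         duration_s, sample_rate_hz=100_000):
    """Biphasic train: positive phase, equal negative phase, then rest."""
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    period = 1.0 / frequency_hz
    phase = t % period                  # time elapsed within each period
    width = pulse_width_us * 1e-6
    wave = np.zeros_like(t)
    wave[phase < width] = amplitude_ma                    # cathodic phase
    wave[(phase >= width) & (phase < 2 * width)] = -amplitude_ma  # anodic
    return t, wave

t, wave = biphasic_pulse_train(amplitude_ma=20, pulse_width_us=300,
                               frequency_hz=40, duration_s=0.1)
print(wave.min(), wave.max())           # -20.0 20.0: charge-balanced phases
```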
API for test procedures in embedded systems
The proposed case study takes a satellite control application as the system under test and entails extending a simple simulator for the system component this application interacts with. On top of these, test procedures shall be developed to study whether there are practical limitations to using simple procedures to test complex interactions between the satellite control application and its simulated environment.
The objective of the proposed work is to define a test frontend API that enables simple test procedures while providing all the means required to test complex machine-to-machine (M2M) interactions, given that the system under test (SUT) has hard real-time characteristics.
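A minimal sketch of what such a test frontend API could look like, assuming
an invented SUT stub and deadline-based polling to reflect the hard real-time
requirement; none of these names come from the proposed work.

```python
# Hypothetical test frontend: send a command to the SUT, then assert that a
# telemetry condition holds before a deadline expires.
import time

class TestFrontend:
    def __init__(self, sut):
        self.sut = sut

    def send(self, command):
        self.sut.handle(command)

    def expect(self, predicate, deadline_s):
        """Poll the SUT until predicate holds or the deadline expires."""
        end = time.monotonic() + deadline_s
        while time.monotonic() < end:
            if predicate(self.sut.telemetry()):
                return True
            time.sleep(0.001)
        return False

class FakeSatelliteController:          # stand-in for the real SUT
    def __init__(self):
        self.mode = "IDLE"

    def handle(self, command):
        if command == "ENTER_SAFE_MODE":
            self.mode = "SAFE"

    def telemetry(self):
        return {"mode": self.mode}

frontend = TestFrontend(FakeSatelliteController())
frontend.send("ENTER_SAFE_MODE")
assert frontend.expect(lambda tm: tm["mode"] == "SAFE", deadline_s=0.1)
print("test passed")
```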