Evaluating Model Testing and Model Checking for Finding Requirements Violations in Simulink Models
Matlab/Simulink is a development and simulation language that is widely used
by the Cyber-Physical System (CPS) industry to model dynamical systems. There
are two mainstream approaches to verify CPS Simulink models: model testing that
attempts to identify failures in models by executing them for a number of
sampled test inputs, and model checking that attempts to exhaustively check the
correctness of models against some given formal properties. In this paper, we
present an industrial Simulink model benchmark, provide a categorization of
different model types in the benchmark, describe the recurring logical patterns
in the model requirements, and discuss the results of applying model checking
and model testing approaches to identify requirements violations in the
benchmarked models. Based on the results, we discuss the strengths and
weaknesses of model testing and model checking. Our results further suggest
that model checking and model testing are complementary and by combining them,
we can significantly enhance the capabilities of each of these approaches
individually. We conclude by providing guidelines on how the two approaches
can best be applied together.
Comment: 10 pages + 2 pages of references
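The contrast between the two verification approaches can be illustrated with a toy sketch. The model below (a saturating counter with a "never exceed 10" requirement) is purely illustrative and is not taken from the benchmark; model testing samples random input sequences, while bounded model checking exhaustively enumerates reachable states.

```python
import random

# Hypothetical discrete model: a saturating counter whose requirement is
# that the state must never exceed 10. Illustrative only.
def model_step(state, inp):
    return min(state + inp, 10)

def violates_requirement(state):
    return state > 10

# Model testing: execute the model on randomly sampled input sequences
# and report a failure if any sampled run violates the requirement.
def model_test(trials=100, seq_len=5, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        state = 0
        for _ in range(seq_len):
            state = model_step(state, rng.randint(0, 3))
            if violates_requirement(state):
                return "violation found"
    return "no violation found"

# Model checking (bounded): exhaustively enumerate every reachable state
# up to a step bound, so no violation within the bound can be missed.
def model_check(max_steps=5):
    frontier = {0}
    for _ in range(max_steps):
        frontier = {model_step(s, i) for s in frontier for i in range(4)}
        if any(violates_requirement(s) for s in frontier):
            return "violation found"
    return "property holds (bounded)"
```

Testing can only sample the input space, whereas checking covers it exhaustively within the bound, which mirrors the complementary strengths discussed in the abstract.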
An Evaluation of Effectiveness of Fuzzy Logic Model in Predicting the Business Bankruptcy
Amid the current global financial crisis, the future existence of firms is uncertain. The characteristics and dynamics of the current world, and the interdependences between the financial and economic markets within it, demand continuous research into new methods of bankruptcy prediction. The purpose of this article is to present a fuzzy logic-based system that predicts bankruptcy one, two and three years before the possible failure of companies. The proposed fuzzy model uses financial ratios, or rather the dynamics of those financial ratios, as inputs. To design and implement the model, the authors used the financial statements of 132 stock equity companies (25 bankrupt and 107 non-bankrupt). The paper also presents the testing and validation of the created fuzzy logic models.
Keywords: bankruptcy, crisis, prediction, fuzzy logic, ratings
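A minimal sketch of such a fuzzy-inference step, assuming triangular membership functions over two hypothetical financial ratios and two illustrative rules; the membership functions, rules and thresholds are assumptions, not the authors' actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def bankruptcy_risk(liquidity, profitability):
    # Fuzzify the two (hypothetical) input ratios.
    low_liq = tri(liquidity, -0.5, 0.0, 1.0)
    low_prof = tri(profitability, -0.5, 0.0, 0.1)
    high_liq = tri(liquidity, 1.0, 2.0, 3.5)
    # Rule 1: low liquidity AND low profitability -> high risk (singleton 1.0)
    # Rule 2: high liquidity -> low risk (singleton 0.0)
    r1 = min(low_liq, low_prof)
    r2 = high_liq
    if r1 + r2 == 0:
        return 0.5  # neutral output when no rule fires
    # Weighted-average defuzzification over the rule consequents.
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2)
```

A firm with weak liquidity and near-zero profitability fires rule 1 and scores high risk, while a highly liquid firm fires rule 2 and scores low risk.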
Generating Effective Test Suites for Model Transformations Using Classifying Terms
Generating sample models for testing a model transformation is no easy task. This paper explores the use of classifying terms and stratified sampling for developing richer test cases for model transformations. Classifying terms are used to define the equivalence classes that characterize the relevant subgroups for the test cases. From each equivalence class of object models, several representative models are chosen depending on the required sample size. We compare our results with test suites developed using random sampling, and conclude that an ordered, stratified approach can significantly improve the coverage and effectiveness of the test suite.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
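The sampling scheme described above can be sketched as follows. Here a "classifying term" is modeled as a predicate over toy object models whose value induces the equivalence classes; the dict-based model representation and the particular terms are illustrative assumptions.

```python
import random

def stratified_sample(models, classifying_term, per_class, seed=0):
    """Partition models by the classifying term, then draw per_class
    representatives from each resulting equivalence class."""
    rng = random.Random(seed)
    classes = {}
    for m in models:
        classes.setdefault(classifying_term(m), []).append(m)
    sample = []
    for key in sorted(classes):
        group = classes[key]
        sample.extend(rng.sample(group, min(per_class, len(group))))
    return sample

# Example: toy object models classified by a cyclicity flag and a size
# bucket (two classifying terms combined into one partition key).
models = [{"size": s, "cyclic": c} for s in range(1, 21) for c in (False, True)]
term = lambda m: (m["cyclic"], m["size"] > 10)
suite = stratified_sample(models, term, per_class=2)
```

Unlike random sampling over the whole population, every equivalence class is guaranteed to be represented in the suite, which is the source of the coverage improvement the paper reports.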
Watershed rainfall forecasting using neuro-fuzzy networks with the assimilation of multi-sensor information
The complex temporal heterogeneity of rainfall coupled with a mountainous physiographic context poses a great challenge to the development of accurate short-term rainfall forecasts. This study aims to explore the effectiveness of multiple rainfall sources (gauge measurements, radar products and satellite products) for assimilation-based multi-sensor precipitation estimates, and to make multi-step-ahead rainfall forecasts based on the assimilated precipitation. Bias correction procedures for both radar and satellite precipitation products were first built, and the radar and satellite precipitation products were generated through the Quantitative Precipitation Estimation and Segregation Using Multiple Sensors (QPESUMS) and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), respectively. Next, the synthesized assimilated precipitation was obtained by merging the three precipitation sources (gauges, radars and satellites) according to their individual weighting factors, optimized by nonlinear search methods. Finally, the multi-step-ahead rainfall forecasting was carried out using the adaptive network-based fuzzy inference system (ANFIS). The Shihmen Reservoir watershed in northern Taiwan was the study area, where 641 hourly data sets from thirteen historical typhoon events were collected. Results revealed that the bias adjustments in the QPESUMS and PERSIANN-CCS products did improve the accuracy of these precipitation products (in particular, 30-60% improvement rates for QPESUMS, in terms of RMSE), and the adjusted PERSIANN-CCS and QPESUMS contributed about 10% and 24%, respectively, to the assimilated precipitation.
As far as rainfall forecasting is concerned, the results demonstrated that the ANFIS fed with the assimilated precipitation provided reliable and stable forecasts, with correlation coefficients higher than 0.85 and 0.72 for one- and two-hour-ahead rainfall forecasting, respectively. The obtained forecasting results are very valuable information for flood warning in the study watershed during typhoon periods. © 2013 Elsevier B.V.
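The merging step can be sketched as a weighted combination of the three sources, with weights chosen to minimize RMSE against a reference series. The simple grid search below stands in for the paper's nonlinear search methods, and all data in the example are made up.

```python
def rmse(pred, obs):
    """Root-mean-square error between two equal-length series."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def merge(gauge, radar, sat, w):
    """Assimilated precipitation as a weighted sum of the three sources."""
    wg, wr, ws = w
    return [wg * g + wr * r + ws * s for g, r, s in zip(gauge, radar, sat)]

def fit_weights(gauge, radar, sat, obs, step=0.1):
    """Grid search over weight triples summing to 1, minimizing RMSE
    against the observed series (a stand-in for nonlinear search)."""
    best, best_err = (1.0, 0.0, 0.0), float("inf")
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            w = (i * step, j * step, 1 - (i + j) * step)
            err = rmse(merge(gauge, radar, sat, w), obs)
            if err < best_err:
                best, best_err = w, err
    return best
```

If the observations happen to coincide with one source, the search assigns that source all the weight, illustrating how the weighting factors reflect each sensor's contribution.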
Automated generation of computationally hard feature models using evolutionary algorithms
This is the post-print version of the final paper published in Expert Systems with Applications. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2014 Elsevier B.V.
A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we propose to model the problem of finding computationally hard feature models as an optimization problem, and we solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size that maximize aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.
Funding: European Commission (FEDER), the Spanish Government and the Andalusian Government.
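A minimal evolutionary loop in the spirit of the approach described above: individuals are fixed-size encodings (here just bit strings standing in for feature models) and the fitness is a made-up cost function standing in for the analysis tool's execution time or memory use. The encoding, cost function and GA parameters are all illustrative assumptions, not ETHOM's actual design.

```python
import random

def cost(bits):
    # Illustrative hard-instance proxy: the more adjacent bits differ,
    # the more "expensive" the decoded model is assumed to be.
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def evolve(size=20, pop_size=30, generations=40, seed=1):
    """Evolve fixed-size encodings that maximize the stand-in cost."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(size)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection of two parents (tournament size 3).
            p1 = max(rng.sample(pop, 3), key=cost)
            p2 = max(rng.sample(pop, 3), key=cost)
            cut = rng.randrange(1, size)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:              # occasional point mutation
                i = rng.randrange(size)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=cost)

best = evolve()
```

The evolved individuals score far above random encodings on the cost proxy, mirroring how ETHOM finds models that stress a tool much harder than random models of the same size.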