
    A methodology for producing reliable software, volume 1

    An investigation into the areas that have an impact on producing reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of this investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.

    Metaheuristic optimization of power and energy systems: underlying principles and main issues of the 'rush to heuristics'

    In the power and energy systems area, a progressive increase in literature contributions containing applications of metaheuristic algorithms is occurring. In many cases, these applications are merely aimed at testing an existing metaheuristic algorithm on a specific problem, claiming that the proposed method is better than other methods on the basis of weak comparisons. This 'rush to heuristics' does not happen in the evolutionary computation domain, where the rules for setting up rigorous comparisons are stricter, but it is typical of the domains of application of the metaheuristics. This paper considers the applications to power and energy systems and aims at providing a comprehensive view of the main issues concerning the use of metaheuristics for global optimization problems. A set of underlying principles that characterize metaheuristic algorithms is presented. The customization of metaheuristic algorithms to fit the constraints of specific problems is discussed. Some weaknesses and pitfalls found in literature contributions are identified, and specific guidelines are provided on how to prepare sound contributions on the application of metaheuristic algorithms to specific problems.
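    As an illustration of the kind of algorithm the paper surveys, below is a minimal, hypothetical sketch of one common metaheuristic (simulated annealing) applied to a toy two-unit quadratic generation-cost problem; the objective, bounds, and cooling schedule are illustrative assumptions and are not taken from the paper.

```python
import math
import random

def simulated_annealing(objective, lower, upper, iterations=5000, temp0=1.0, seed=0):
    """Minimize `objective` over the box [lower, upper]^d with a basic simulated-annealing loop."""
    rng = random.Random(seed)
    dim = len(lower)
    current = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
    best, f_current = current[:], objective(current)
    f_best = f_current
    for k in range(1, iterations + 1):
        temp = temp0 / k                                      # simple cooling schedule
        candidate = [min(max(current[i] + rng.gauss(0, 0.1 * (upper[i] - lower[i])), lower[i]), upper[i])
                     for i in range(dim)]                     # perturb and clip to the box constraints
        f_candidate = objective(candidate)
        # accept downhill moves always, uphill moves with a temperature-dependent probability
        if f_candidate < f_current or rng.random() < math.exp(-(f_candidate - f_current) / temp):
            current, f_current = candidate, f_candidate
            if f_current < f_best:
                best, f_best = current[:], f_current
    return best, f_best

# toy quadratic generation-cost objective for two generating units (illustrative only)
cost = lambda p: 0.01 * p[0]**2 + 2 * p[0] + 0.02 * p[1]**2 + 1.5 * p[1]
print(simulated_annealing(cost, lower=[10, 10], upper=[100, 100]))
```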

    Proceedings of the 1994 Monterey Workshop, Increasing the Practical Impact of Formal Methods for Computer-Aided Software Development: Evolution Control for Large Software Systems; Techniques for Integrating Software Development Environments

    Office of Naval Research, Advanced Research Projects Agency, Air Force Office of Scientific Research, Army Research Office, Naval Postgraduate School, National Science Foundation

    Inference for High-Dimensional Sparse Econometric Models

    This article is about estimation and inference methods for high-dimensional sparse (HDS) regression models in econometrics. High-dimensional sparse models arise in situations where many regressors (or series terms) are available and the regression function is well approximated by a parsimonious, yet unknown, set of regressors. The latter condition makes it possible to estimate the entire regression function effectively by searching for approximately the right set of regressors. We discuss methods for identifying this set of regressors and estimating their coefficients based on $\ell_1$-penalization and describe key theoretical results. In order to capture realistic practical situations, we expressly allow for imperfect selection of regressors and study the impact of this imperfect selection on estimation and inference results. We focus the main part of the article on the use of HDS models and methods in the instrumental variables model and the partially linear model. We present a set of novel inference results for these models and illustrate their use with applications to returns to schooling and growth regression.
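    To make the $\ell_1$-penalization step concrete, here is a minimal sketch (not the authors' estimator) of Lasso-based regressor selection followed by a least-squares refit on the selected set, using scikit-learn and synthetic data; it deliberately omits the corrections for imperfect selection that the article develops.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                      # many regressors, few of them relevant (sparse model)
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = [1.0, -0.8, 0.6, 0.5, -0.4]
y = X @ beta + rng.standard_normal(n)

# Step 1: l1-penalized regression (Lasso) with a cross-validated penalty selects a candidate set.
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)

# Step 2: refit ordinary least squares on the selected regressors ("post-Lasso")
# to reduce the shrinkage bias of the penalized coefficients.
ols = LinearRegression().fit(X[:, selected], y)
print(selected[:10], ols.coef_[:5])
```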

    Probabilistic data-driven methods for forecasting, identification and control

    This dissertation presents contributions mainly in three different fields: system identification, probabilistic forecasting and stochastic control. Thanks to the concept of dissimilarity, and by defining an appropriate dissimilarity function, it is shown that a family of predictors can be obtained. First, a predictor to compute nominal forecasts of a time series or a dynamical system is presented. The effectiveness of the predictor is shown by means of a numerical example, where daily predictions of a stock index are computed. The obtained results turn out to be better than those obtained with popular machine learning techniques such as neural networks. Similarly, the aforementioned dissimilarity function can be used to compute conditional probability distributions. By means of the obtained distributions, interval predictions can be made by using the concept of quantiles. However, in order to do that, it is necessary to integrate the distribution over all the possible values of the output. As this numerical integration process is computationally expensive, an alternative method that bypasses the computation of the probability distribution is also proposed. Not only is it computationally cheaper, but it also allows the computation of prediction regions, which are the multivariate version of interval predictions. Both methods present better results than other baseline approaches in a set of examples, including a stock forecasting example and the prediction of the Lorenz attractor. Furthermore, new methods to obtain models of nonlinear systems by means of input-output data are proposed. Two different model approaches are presented: a local data approach and a kernel-based approach. A Kalman filter can be added to improve the quality of the predictions. It is shown that the forecasting performance of the proposed models is better than that of other machine learning methods in several examples, such as the forecasting of the sunspot number and the Rössler attractor. Also, as these models are suitable for Model Predictive Control (MPC), new MPC formulations are proposed. Thanks to the distinctive features of the proposed models, the nonlinear MPC problem can be posed as a simple quadratic programming problem. Finally, by means of a simulation example and a real experiment, it is shown that the controller performs adequately. On the other hand, in the field of stochastic control, several methods to bound the constraint violation rate of any controller under the presence of bounded or unbounded disturbances are presented. These can be used, for example, to tune some hyperparameters of the controller. Some simulation examples are proposed in order to show the functioning of the algorithms. One of these examples considers the management of a data center, where an energy-efficient MPC-inspired policy is developed in order to reduce electricity consumption while keeping the quality of service at acceptable levels.
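    As a rough illustration of a dissimilarity-based predictor with quantile-style interval predictions, here is a minimal, hypothetical sketch: past lag vectors are weighted by a Gaussian kernel of their Euclidean distance to the current lag vector, and weighted empirical quantiles give an interval. The kernel, bandwidth, and lag choices are assumptions for illustration, not the dissertation's formulation.

```python
import numpy as np

def dissimilarity_forecast(series, lags=3, bandwidth=0.5, quantiles=(0.1, 0.9)):
    """One-step-ahead forecast weighting past transitions by a Gaussian kernel of a
    Euclidean dissimilarity between the current lag vector and historical lag vectors."""
    y = np.asarray(series, dtype=float)
    X = np.stack([y[i:i + lags] for i in range(len(y) - lags)])   # historical lag vectors
    targets = y[lags:]                                            # value that followed each lag vector
    query = y[-lags:]                                             # current lag vector
    d = np.linalg.norm(X - query, axis=1)                         # dissimilarity to the present situation
    w = np.exp(-(d / bandwidth) ** 2)
    w /= w.sum()
    point = float(w @ targets)                                    # nominal (point) forecast
    order = np.argsort(targets)
    cdf = np.cumsum(w[order])
    lo, hi = (float(targets[order][np.searchsorted(cdf, q)]) for q in quantiles)  # weighted quantiles
    return point, (lo, hi)

t = np.linspace(0, 20, 400)
noisy_wave = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
print(dissimilarity_forecast(noisy_wave))
```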

    Quality Assessment of Ambulatory Electrocardiogram Signals by Noise Detection using Optimal Binary Classification

    In order to improve the diagnostic capability of ambulatory electrocardiogram signals and to reduce the impact of noise, more robust models are needed. To improve on existing solutions, this article explores a novel binary classifier that learns from features optimized by a fusion of diversity assessment measures and performs Quality Assessment of Ambulatory Electrocardiogram Signals (QAAES) by noise detection. The performance of the proposed QAAES model has been evaluated by comparing it with contemporary models. For the performance analysis, 10-fold cross-validation has been carried out on a benchmark dataset. The results obtained from experiments on the proposed and other contemporary models have been compared in terms of sensitivity, specificity, and noise detection accuracy.
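    A minimal sketch of the evaluation protocol described (10-fold cross-validation of a binary noise classifier reporting sensitivity, specificity, and accuracy) is shown below; the random-forest classifier and the synthetic stand-in features are assumptions, since the article's feature-fusion step and benchmark dataset are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

# Stand-in for ECG-segment features labelled clean (0) vs. noisy (1); the real work
# would extract signal-quality features from an annotated ambulatory ECG benchmark.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3], random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
sens, spec, acc = [], [], []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    pred = clf.fit(X[train], y[train]).predict(X[test])
    tp = np.sum((pred == 1) & (y[test] == 1)); fn = np.sum((pred == 0) & (y[test] == 1))
    tn = np.sum((pred == 0) & (y[test] == 0)); fp = np.sum((pred == 1) & (y[test] == 0))
    sens.append(tp / (tp + fn)); spec.append(tn / (tn + fp)); acc.append((tp + tn) / len(test))
print(f"sensitivity={np.mean(sens):.3f} specificity={np.mean(spec):.3f} accuracy={np.mean(acc):.3f}")
```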

    Hiding Outliers in High-Dimensional Data Spaces

    Detecting outliers in high-dimensional data is crucial in many domains. Due to the curse of dimensionality, one typically does not detect outliers in the full space, but in subspaces of it. More specifically, since the number of subspaces is huge, the detection takes place in only some subspaces. In consequence, one might miss hidden outliers, i.e., outliers only detectable in certain subspaces. In this paper, we take the opposite perspective, which is of practical relevance as well, and study how to hide outliers in high-dimensional data spaces. We formally prove characteristics of hidden outliers. We also propose an algorithm to place them in the data. It focuses on the regions close to existing data objects and is more efficient than an exhaustive approach. In experiments, we both evaluate our formal results and show the usefulness of our algorithm using different subspace selection schemes, outlier detection methods and data sets.
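    As a simplified illustration of the hiding idea (not the paper's algorithm), the sketch below perturbs existing data objects along dimensions a detector does not monitor and keeps candidates that look normal in the monitored subspaces but are outliers in the full space; the LOF detectors, the monitored subspaces, and the perturbation scheme are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
data = rng.standard_normal((300, 6))          # hypothetical benign high-dimensional data set
monitored = [(0, 1), (2, 3)]                  # subspaces an (assumed) detector actually inspects

# One LOF detector in novelty mode per monitored subspace, plus one on the full space.
sub_detectors = [LocalOutlierFactor(n_neighbors=20, novelty=True).fit(data[:, list(dims)])
                 for dims in monitored]
full_detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(data)

hidden = []
for _ in range(5000):
    cand = data[rng.integers(len(data))].copy()       # start close to an existing data object
    cand[4:] += rng.normal(scale=2.0, size=2)         # push it out only along unmonitored dimensions
    normal_in_views = all(det.predict(cand[list(dims)].reshape(1, -1))[0] == 1
                          for det, dims in zip(sub_detectors, monitored))
    if normal_in_views and full_detector.predict(cand.reshape(1, -1))[0] == -1:
        hidden.append(cand)                           # inlier in every monitored view, outlier overall
print(f"placed {len(hidden)} hidden outliers out of 5000 candidates")
```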

    Quantum Nescimus: Improving the characterization of quantum systems from limited information

    We are currently approaching the point where quantum systems with 15 or more qubits will be controllable with high levels of coherence over long timescales. One of the fundamental problems that has been identified is that, as the number of qubits increases to these levels, there is currently no clear way to efficiently use the information that can be obtained from such a system to make diagnostic inferences and to enable improvements in the underlying quantum gates. Even with systems of only a few qubits, the exponential scaling in resources required by techniques such as quantum tomography or gate-set tomography will render these techniques impractical. Randomized benchmarking (RB) is a technique that will scale in a practical way with these increased system sizes. Although RB provides only a partial characterization of the quantum system, recent advances in the protocol and in the interpretation of the results of such experiments confirm that the information obtained is helpful in improving the control and verification of such processes. This thesis examines and extends the techniques of RB, including practical analysis of systems affected by low-frequency noise, extending techniques to allow the anisotropy of noise to be isolated, and showing how additional gates required for universal computation can be added to the protocol and thus benchmarked. Finally, it begins to explore the use of machine learning to aid in the ability to characterize, verify and validate noise in such systems, demonstrating by way of example how machine learning can be used to explore the edge between quantum non-locality and realism.
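    For context, randomized benchmarking results are usually analysed by fitting an exponential decay of the average survival probability, F(m) = A p^m + B, and converting the decay constant p into an average error rate. The sketch below fits synthetic data with SciPy and shows only this generic RB analysis step; it is not code from the thesis, and the sequence lengths and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard randomized-benchmarking model: survival probability A * p**m + B."""
    return A * p**m + B

# Synthetic sequence-length data standing in for averaged RB survival probabilities.
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
true_A, true_p, true_B = 0.48, 0.995, 0.5
rng = np.random.default_rng(0)
survival = rb_decay(lengths, true_A, true_p, true_B) + rng.normal(0, 0.005, lengths.size)

(A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.99, 0.5])
d = 2                                   # single-qubit Hilbert-space dimension
r = (1 - p) * (d - 1) / d               # average error rate per gate from the decay constant
print(f"fitted p = {p:.4f}, average gate error r = {r:.2e}")
```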

    Programmiersprachen und Rechenkonzepte

    Since 1984, the GI special interest group "Programmiersprachen und Rechenkonzepte", which emerged from the former special interest groups 2.1.3 "Implementierung von Programmiersprachen" and 2.1.4 "Alternative Konzepte für Sprachen und Rechner", has regularly held a spring workshop at the Physikzentrum Bad Honnef. The meeting serves first and foremost to let participants get to know one another, to exchange experience, to foster discussion, and to deepen mutual contacts.