350 research outputs found

    Software development metrics prediction using time series methods

    Software development is an intricate process: the growing complexity of software solutions and the inflating line count erode code coherence and readability, which in turn contributes to software faults and declining quality. Debugging software during development is significantly less expensive than attempting damage control after the software’s release, so an automated quality-oriented analysis of the developed code, combining code analysis with correlation of development data, appears to be an ideal solution. In this paper the ability to predict software faults and software quality is scrutinized. We investigate four models that can be used to analyze time-series data and predict trends observed in the software development process: Exponential Smoothing, the Holt-Winters model, Autoregressive Integrated Moving Average (ARIMA) and Recurrent Neural Networks (RNN). Time-series analysis methods prove a good fit for predicting software-related data. Such methods and tools can assist Product Owners in their daily decision making, e.g. in task assignment, time estimates, bug predictions and time to release. Results of the research are presented.
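
    As a rough illustration of the kind of comparison described here, the sketch below fits two of the four model families (Holt-Winters and ARIMA, via statsmodels) to a synthetic weekly bug-count series and compares forecast MAE. The data, model orders and seasonal period are assumptions for illustration, not the paper's setup.

        # Hypothetical sketch: comparing Holt-Winters and ARIMA forecasts
        # on a synthetic weekly bug-count series (not the paper's data).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        weeks = pd.date_range("2023-01-01", periods=104, freq="W")
        bugs = pd.Series(20 + 0.1 * np.arange(104)
                         + 5 * np.sin(np.arange(104) * 2 * np.pi / 13)
                         + rng.normal(0, 2, 104), index=weeks)
        train, test = bugs[:-12], bugs[-12:]

        hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                                  seasonal_periods=13).fit()
        arima = ARIMA(train, order=(1, 1, 1)).fit()

        for name, fc in [("Holt-Winters", hw.forecast(12)),
                         ("ARIMA(1,1,1)", arima.forecast(12))]:
            mae = (fc - test).abs().mean()
            print(f"{name}: MAE = {mae:.2f} bugs/week")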

    Estimating an individual's oxygen uptake during cycling exercise with a recurrent neural network trained from easy-to-obtain inputs: A pilot study

    Measurement of oxygen uptake (V̇O2) during exercise is currently not accessible to most individuals without expensive and invasive equipment. The goal of this pilot study was to estimate cycling V̇O2 from easy-to-obtain inputs, such as heart rate, mechanical power output, cadence and respiratory frequency. To this end, a recurrent neural network was trained on laboratory cycling data to predict V̇O2 values. Data were collected on 7 amateur cyclists during a graded exercise test, two arbitrary protocols (Prot-1 and -2) and an "all-out" Wingate test. In Trial-1, a neural network was trained with data from the graded exercise test, Prot-1 and the Wingate test, before being tested against Prot-2. In Trial-2, a neural network was trained using data from the graded exercise test, Prot-1 and Prot-2, before being tested against the Wingate test. Two analytical models (Models 1 and 2) were used to compare the predictive performance of the neural network. Predictive performance of the neural network was high during both Trial-1 (MAE = 229(35) mL O2·min−1, r = 0.94) and Trial-2 (MAE = 304(150) mL O2·min−1, r = 0.89). As expected, the predictive ability of Models 1 and 2 deteriorated from Trial-1 to Trial-2. Results suggest that recurrent neural networks have the potential to predict an individual's V̇O2 response from easy-to-obtain inputs across a wide range of cycling intensities.
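
    A minimal sketch of the kind of recurrent estimator described here, assuming a plain LSTM in PyTorch over the four named input signals; the architecture, tensor shapes and (synthetic) data are illustrative assumptions, not the authors' exact model.

        # Hypothetical LSTM regressor mapping sequences of easy-to-obtain
        # signals (heart rate, power, cadence, respiratory frequency) to
        # an instantaneous VO2 trace.
        import torch
        import torch.nn as nn

        class VO2Net(nn.Module):
            def __init__(self, n_inputs=4, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)   # VO2 in mL/min

            def forward(self, x):                  # x: (batch, time, 4)
                out, _ = self.lstm(x)
                return self.head(out).squeeze(-1)  # (batch, time)

        model = VO2Net()
        x = torch.randn(8, 120, 4)   # 8 rides, 120 time steps, 4 signals
        y = torch.randn(8, 120)      # target VO2 trace (synthetic here)
        loss = nn.L1Loss()(model(x), y)  # L1 loss matches the reported MAE
        loss.backward()
        print(loss.item())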

    Towards using intelligent techniques to assist software specialists in their tasks

    Automation and intelligence are major preoccupations in the field of software engineering. With the rapid evolution of Artificial Intelligence, researchers and industry have turned to machine learning and deep learning models to optimize tasks, automate pipelines, and build intelligent systems. The capabilities of Artificial Intelligence make it possible to imitate, and in some cases even outperform, human intelligence, as well as to automate manual tasks while raising accuracy, quality, and efficiency. Accomplishing software-related tasks requires specific domain knowledge, expertise and skills. Thanks to the powerful capabilities of Artificial Intelligence, we can infer that expertise by applying machine learning and deep learning techniques to historical data representing past experience. This would alleviate the burden on software specialists and allow them to focus on more valuable tasks. In particular, Model-Driven Engineering is an evolving field that aims to raise the abstraction level of languages, automate the production of applications and focus more on domain specificities. It shifts the effort put into implementation and low-level programming to a higher point of view focused on design, architecture, and decision making, thereby increasing the quality, efficiency and productivity of creating applications. The design of metamodels is a substantial task in Model-Driven Engineering, and it is important to maintain high-quality metamodels because they constitute a primary and fundamental artifact. However, bad design choices, as well as repetitive design modifications due to constantly evolving requirements, can deteriorate the quality of a metamodel, and the accumulation of bad design choices and quality degradation can have negative outcomes in the long term. Refactoring metamodels is therefore a very important task: it aims to improve and maintain good quality characteristics of metamodels, such as maintainability, reusability and extendibility. Moreover, metamodel refactoring is complex, especially when dealing with large designs, so automating it, or assisting architects with it, is advantageous, since architects can then focus on more valuable tasks that require creativity, intuition and human intelligence. In this thesis, we propose a cartography of the tasks that could be automated or improved using Artificial Intelligence techniques. We then select the metamodeling task and tackle the problem of metamodel refactoring, proposing two different approaches: a first approach that uses a genetic algorithm to optimize a set of quality attributes and recommend candidate metamodel refactoring solutions, and a second approach, based on mathematical logic, that defines the specification of an input metamodel, encodes the quality attributes and the absence of design smells as a set of constraints, and satisfies these constraints using Alloy.
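
    A minimal sketch of the first approach, assuming a toy genetic algorithm over sequences of refactoring operations; the operation names, fitness function and GA parameters below are placeholders, not the thesis's actual encoding.

        # Hypothetical GA searching sequences of metamodel refactorings
        # that maximize a quality score. Real work would apply each
        # operation to the metamodel and measure maintainability,
        # reusability, extendibility, etc.
        import random

        REFACTORINGS = ["extract_class", "pull_up_attribute",
                        "inline_class", "rename"]

        def quality(solution):
            # Placeholder fitness standing in for real quality metrics.
            return (-abs(len(solution) - 3)
                    + sum(op == "pull_up_attribute" for op in solution))

        def mutate(solution):
            s = list(solution)
            s[random.randrange(len(s))] = random.choice(REFACTORINGS)
            return s

        def crossover(a, b):
            cut = random.randrange(1, min(len(a), len(b)))
            return a[:cut] + b[cut:]

        population = [[random.choice(REFACTORINGS) for _ in range(4)]
                      for _ in range(20)]
        for generation in range(50):
            population.sort(key=quality, reverse=True)
            parents = population[:10]    # elitist selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(10)]
            population = parents + children

        print(quality(population[0]), population[0])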

    Forecasting Network Traffic: A Survey and Tutorial with Open-Source Comparative Evaluation

    This paper presents a review of the literature on network traffic prediction, while also serving as a tutorial on the topic. We examine works based on autoregressive moving average models, such as ARMA, ARIMA and SARIMA, as well as works based on Artificial Neural Network approaches, such as RNN, LSTM, GRU, and CNN. In all cases, we provide a complete and self-contained presentation of the mathematical foundations of each technique, which allows the reader to gain a full understanding of how the different proposed methods operate. Further, we perform numerical experiments based on real data sets, which allows comparing the various approaches directly in terms of fitting quality and computational cost. We make our code publicly available, so that readers can readily access a wide range of forecasting tools, and possibly use them as benchmarks for more advanced solutions.
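
    A rough sketch of the kind of open-source comparison the survey provides: the snippet below benchmarks a seasonal-naive baseline against a SARIMA model (via statsmodels) on a synthetic hourly traffic series, reporting both fit quality (RMSE) and fitting time. The models, orders and data are stand-ins, not the paper's released code.

        # Hypothetical benchmark loop: score forecasters on the same
        # synthetic traffic series and time the model fitting.
        import time
        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(1)
        t = np.arange(24 * 28)               # four weeks of hourly samples
        traffic = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)
        train, test = traffic[:-24], traffic[-24:]

        def naive_daily(history, horizon):   # seasonal-naive baseline
            return history[-24:][:horizon]

        start = time.perf_counter()
        sarima = SARIMAX(train, order=(1, 0, 1),
                         seasonal_order=(1, 0, 1, 24)).fit(disp=False)
        fit_seconds = time.perf_counter() - start

        for name, pred, cost in [
            ("seasonal-naive", naive_daily(train, 24), 0.0),
            ("SARIMA(1,0,1)(1,0,1,24)", sarima.forecast(24), fit_seconds),
        ]:
            rmse = np.sqrt(np.mean((pred - test) ** 2))
            print(f"{name}: RMSE={rmse:.2f}, fit time={cost:.2f}s")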

    New Perspectives in the Development of the Artificial Sport Trainer

    The rapid development of computer science and telecommunications has brought new ways and practices to sport training. The Artificial Sport Trainer, founded on computational intelligence algorithms, has gained momentum in recent years. However, the Artificial Sport Trainer usually suffers from a lack of automation in the realization and control phases of training. In this study, a Digital Twin is proposed as a framework for helping athletes, during training sessions, make the proper decisions in the situations they encounter. The digital twin for the Artificial Sport Trainer is based on a cognitive model of humans. The concept has been applied to cycling, where a version of the system already runs on a Raspberry Pi. The results of porting the digital twin to this platform show promising potential for extension to other sport disciplines. Akemi Galvez and Andres Iglesias have received funding from the project PDE-GIR of the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 778035, and from the project TIN2017-89275-R funded by MCIN/AEI/10.13039/501100011033/FEDER “Una manera de hacer Europa”.

    USING MACHINE LEARNING TO OPTIMIZE PREDICTIVE MODELS USED FOR BIG DATA ANALYTICS IN VARIOUS SPORTS EVENTS

    In today’s world, data grows in volume and variety day by day, so historical data can be leveraged to predict the likelihood of future events. This process of using statistical or other forms of data to predict future outcomes is commonly termed predictive modelling. Predictive modelling is becoming more and more important for several reasons, but mainly because it enables businesses and individual users to gain accurate insights and decide on suitable actions for a profitable outcome. Machine learning techniques are generally used to build these predictive models. Examples range from time-series regression models, which can be used to predict the volume of airline traffic, to linear regression models, which can be used to predict fuel efficiency. Many domains can gain a competitive advantage from predictive modelling with machine learning, including, but not limited to, banking and financial services, retail, insurance, fraud detection, stock market analysis and sentiment analysis. In this research project, predictive analysis is applied to the sports domain, an emerging area where machine learning can help make better predictions. Numerous sports events happen around the globe every day, and the data gathered from them can be used both to predict and to improve future events. In this project, machine learning and statistics are used to perform quantitative and predictive analysis of a dataset related to soccer. A comparison of the models, showing how effective they are, is also presented, and a few big data tools and techniques are used to optimize the predictive models and increase their accuracy to over 90%.
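
    A minimal sketch of the kind of predictive model this abstract refers to, assuming a scikit-learn pipeline that classifies match outcomes from simple team statistics; the features and data below are invented for illustration (random data yields chance-level accuracy, unlike the real datasets used in the project).

        # Hypothetical outcome classifier: home win / draw / away win
        # from simple team statistics, evaluated by cross-validation.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(42)
        # Columns: home goal avg, away goal avg, home win rate, away win rate
        X = rng.normal(size=(500, 4))
        y = rng.integers(0, 3, size=500)   # 0=home win, 1=draw, 2=away win

        model = make_pipeline(StandardScaler(),
                              RandomForestClassifier(n_estimators=200,
                                                     random_state=0))
        scores = cross_val_score(model, X, y, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")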

    MAVEN Deliverable 6.4: Integration Final Report

    This document presents the work performed in WP6 after D6.3, and therefore focuses on integration sprints 3-6. It describes which parts of the system are implemented and how they are put together. To do so, it builds upon the deliverables created so far, especially D6.3 and all other deliverables of the underlying work packages 3, 4 and 5. Also important for understanding the content of this deliverable are D2.1 [4], for the scenario definition of the whole MAVEN project, and the deliverables D6.1 [5] and D6.2 [6], which give an overview of the existing infrastructure and vehicles used in MAVEN.

    Automated Anomaly Detection and Localization System for a Microservices Based Cloud System

    Context: With an increasing number of applications running on microservices-based cloud systems (such as AWS, GCP and IBM Cloud), it is challenging for cloud providers to offer uninterrupted services with guaranteed Quality of Service (QoS). Problem Statement: Existing monitoring frameworks often fail to single out critical defects among the large volume of issues generated, which hurts recovery response times and the use of maintenance personnel; moreover, manually tracing the root causes of issues takes a significant amount of time. Objective: The objective of this work is to (i) detect performance anomalies in real time by monitoring KPIs (Key Performance Indicators) using distributed tracing events, and (ii) identify their root causes. Proposed Solution: This thesis proposes an automated prediction-based anomaly detection and localization system, capable of detecting performance anomalies of a microservice using machine learning techniques and of determining their root causes through a localization process. Novelty: The originality of this work lies in the detection process, which uses a novel ensemble of a time-series forecasting model and three different unsupervised learning techniques; instead of defining static error thresholds to detect an anomaly, it follows a dynamic approach. Experimental Results: The proposed detection system was evaluated with different ensemble variants on a real-world production dataset; two of the proposed ensembles outperformed the existing static rule-based approach, with average F1-scores of 86% and 84%, average precision of 82% and 77%, and average recall of 91% and 93%, respectively, across 6 experiments. The proposed detection ensembles were also evaluated on the Numenta Anomaly Benchmark (NAB) datasets, where they scored better than Numenta’s standard HTM model. Research Methodology: We adopted an agile methodology to conduct our research in an incremental and iterative fashion. Conclusion: The two proposed ensembles for anomaly detection perform better than the existing static rule-based approach.
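
    A minimal sketch of the dynamic-threshold idea described in the Novelty paragraph, assuming one forecaster (Holt-Winters) plus one unsupervised detector (IsolationForest) over forecast residuals; the thesis's actual ensemble combines a forecasting model with three unsupervised techniques, and every model and parameter named here is an illustrative stand-in.

        # Hypothetical detection pipeline: forecast a KPI, then let an
        # unsupervised model flag anomalous residuals instead of using
        # a hand-tuned static threshold.
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(7)
        latency = 50 + 10 * np.sin(np.arange(600) / 20) + rng.normal(0, 1, 600)
        latency[550:555] += 25                   # injected anomaly

        forecaster = ExponentialSmoothing(latency[:500], trend="add").fit()
        residuals = latency[500:] - forecaster.forecast(100)

        # Unsupervised detection on residuals: no static threshold needed.
        detector = IsolationForest(contamination=0.05, random_state=0)
        flags = detector.fit_predict(residuals.reshape(-1, 1))  # -1 = anomaly
        print("anomalous steps:", np.where(flags == -1)[0] + 500)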