
    Towards Knowledge Based Risk Management Approach in Software Projects

    All projects involve risk; a zero-risk project is not worth pursuing. Furthermore, because every software project is unique, uncertainty about the final result will always accompany software development. Since risks cannot be removed from software development, software engineers should instead learn to manage them better (Arshad et al., 2009; Batista Webster et al., 2005; Gilliam, 2004). Risk management and planning require organizational experience, as they are strongly grounded in the experience and knowledge acquired in former projects. The greater the project manager's experience, the better their ability to identify risks, estimate their likelihood and impact, and define an appropriate risk response plan. Risk knowledge therefore cannot remain an individual asset; it must be made available to the organization, which needs it to learn and to improve its performance in facing risks. If this does not occur, project managers may inadvertently repeat past mistakes, simply because they do not know or do not remember the mitigation actions successfully applied in the past, or because they are unable to foresee the risks caused by certain project constraints and characteristics. Risk knowledge has to be packaged and stored throughout project execution for future reuse. Risk management methodologies are usually based on questionnaires for risk identification and templates for investigating critical issues. Such artefacts are often unrelated to each other, so there is usually no documented cause-effect relation between issues, risks, and mitigation actions. Furthermore, today's methodologies do not explicitly take into account the need to collect experience systematically in order to reuse it in future projects. To address these problems, this work proposes a framework based on the Experience Factory Organization (EFO) model (Basili et al., 1994; Basili et al., 2007; Schneider & Hunnius, 2003) and on the Quality Improvement Paradigm (QIP) (Basili, 1989). The framework is also specialized within one of the largest firms in the current Italian software market; for privacy reasons, we will refer to it from here on as "FIRM". Finally, in order to evaluate the proposal quantitatively, two empirical investigations were carried out: a post-mortem analysis and a case study. Both were conducted in the FIRM context and involve legacy system transformation projects. The first investigation involved 7 already executed projects, while the second involved 5 in itinere (ongoing) projects. The research questions we ask are: Does the proposed knowledge-based framework lead to more effective risk management than risk management performed without it? Does it lead to more precise risk management? The rest of the paper is organized as follows: Section 2 provides a brief overview of the main research activities in the literature dealing with the same topics; Section 3 presents the proposed framework, while Section 4 describes its specialization in the FIRM context; Section 5 describes the empirical studies we executed; results and discussion are presented in Section 6. Finally, conclusions are drawn in Section 7.
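
    As a purely illustrative aside, the following Python sketch (all names and fields are assumptions, not the paper's actual schema) shows one way such risk knowledge could be packaged for reuse, linking issues, risks, and mitigation actions with explicit cause-effect relations, as the framework advocates.

    # Hypothetical sketch of a reusable risk-knowledge record, linking issues,
    # risks, and mitigation actions with explicit cause-effect relations.
    # Names and fields are illustrative assumptions, not the paper's schema.
    from dataclasses import dataclass, field

    @dataclass
    class MitigationAction:
        description: str
        applied_in_project: str      # project where the action was used
        was_successful: bool         # outcome observed after application

    @dataclass
    class RiskRecord:
        risk: str                    # e.g. "legacy data migration failure"
        caused_by_issues: list[str]  # critical issues that can trigger the risk
        likelihood: float            # estimated probability, 0.0-1.0
        impact: int                  # e.g. 1 (negligible) to 5 (critical)
        mitigations: list[MitigationAction] = field(default_factory=list)

        def exposure(self) -> float:
            """Classic risk exposure = likelihood x impact."""
            return self.likelihood * self.impact

    # A knowledge base is then a queryable collection of such records,
    # packaged project after project for reuse in future risk identification.
    def riskiest_first(records: list[RiskRecord]) -> list[RiskRecord]:
        return sorted(records, key=lambda r: r.exposure(), reverse=True)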

    Effective Anomaly Detection Using Deep Learning in IoT Systems

    Anomaly detection in network traffic is an active and ongoing research theme, especially for IoT devices, which are quickly spreading through many areas of everyday life and are, at the same time, vulnerable to attack through different weak points. In this paper, we tackle the emerging anomaly detection problem in IoT by integrating five different datasets of abnormal IoT traffic and evaluating them with a deep learning approach capable of identifying both normal and malicious IoT traffic, as well as different types of anomalies. The large integrated dataset is aimed at providing a realistic and still missing benchmark for normal and abnormal IoT traffic, with data coming from different IoT scenarios. Moreover, the deep learning approach has been enriched with a hyperparameter optimization phase, a feature reduction phase using an autoencoder neural network, and a study of the robustness of the best deep neural networks considered in situations where some of the features are affected by Gaussian noise. The obtained results demonstrate the effectiveness of the created IoT dataset for anomaly detection using deep learning techniques, also in a noisy scenario.
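
    A minimal sketch of two of the ideas mentioned above, assuming a simple Keras setup (layer sizes, noise level, and placeholder data are illustrative, not the paper's actual architecture): an autoencoder used for feature reduction, and a robustness check that perturbs some features with Gaussian noise.

    # Sketch, not the paper's model: autoencoder-based feature reduction plus
    # a robustness check under Gaussian noise; all sizes are assumptions.
    import numpy as np
    from tensorflow.keras import layers, Model

    n_features, n_latent = 64, 16          # assumed original/reduced dimensions

    inp = layers.Input(shape=(n_features,))
    encoded = layers.Dense(n_latent, activation="relu")(inp)
    decoded = layers.Dense(n_features, activation="linear")(encoded)

    autoencoder = Model(inp, decoded)
    encoder = Model(inp, encoded)          # used alone for feature reduction
    autoencoder.compile(optimizer="adam", loss="mse")

    X = np.random.rand(1000, n_features).astype("float32")  # placeholder traffic features
    autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

    # Robustness study: perturb a subset of features with Gaussian noise and
    # compare encoder outputs (a downstream classifier would be used in full).
    X_noisy = X.copy()
    X_noisy[:, :8] += np.random.normal(0.0, 0.1, size=(len(X), 8))
    drift = np.mean(np.abs(encoder.predict(X) - encoder.predict(X_noisy)))
    print(f"mean latent drift under noise: {drift:.4f}")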

    Gait recognition using FMCW radar and temporal convolutional deep neural networks

    The capability to identify humans in specific scenarios, quickly and accurately, is a critical aspect of various surveillance applications. Classical surveillance systems in this context are based on video cameras, which require substantial computational and storage resources and are very sensitive to light and weather conditions. In this paper, an efficient deep learning classifier is used to identify individuals by their micro-Doppler features, extracted from low-power frequency-modulated continuous-wave radar measurements. Results obtained through the application of deep temporal convolutional neural networks confirm the applicability of deep learning to the problem at hand. The best identification accuracy obtained is 0.949, with an F-measure of 0.88, using a temporal window of four seconds.
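
    A minimal sketch of a temporal convolutional classifier over micro-Doppler sequences, in the spirit of the approach described; the shapes, layer counts, and number of subjects below are assumptions, not the paper's configuration.

    # Illustrative temporal convolutional classifier for micro-Doppler data;
    # all dimensions are assumed, not taken from the paper.
    import torch
    import torch.nn as nn

    class TCNClassifier(nn.Module):
        def __init__(self, n_channels: int, n_subjects: int):
            super().__init__()
            # Dilated 1-D convolutions grow the receptive field over time,
            # covering a multi-second window of radar frames.
            self.tcn = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=3, padding=1, dilation=1),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, padding=2, dilation=2),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, padding=4, dilation=4),
                nn.ReLU(),
            )
            self.head = nn.Linear(64, n_subjects)

        def forward(self, x):                 # x: (batch, channels, time)
            h = self.tcn(x).mean(dim=-1)      # global average pooling over time
            return self.head(h)

    # Example: 128 Doppler bins per frame, a 4-second window of 100 frames.
    model = TCNClassifier(n_channels=128, n_subjects=10)
    dummy = torch.randn(8, 128, 100)
    print(model(dummy).shape)                 # torch.Size([8, 10])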

    Software Analytics to Support Students in Object-Oriented Programming Tasks: An Empirical Study

    The computing education community has shown a long-standing interest in how to analyze the Object-Oriented (OO) source code developed by students in order to provide them with useful formative tips. Instructors need to understand students' difficulties to provide precise feedback on the most frequent mistakes and to shape, design, and effectively drive the course. This paper proposes and evaluates an approach to analyze students' source code and automatically generate feedback about the most common violations in the produced code. The approach is implemented in a cloud-based tool that monitors how students use language constructs, based on the analysis of the most common violations of the Object-Oriented paradigm in the students' source code. Moreover, the tool supports the generation of reports about students' mistakes and misconceptions that can be used to improve their education. The paper reports the results of a quasi-experiment performed in a class of a CS1 course to investigate the effects of the provided reports on coding ability (concerning the correctness and quality of the produced source code). Results show that, after the course, the treatment group obtained higher scores and produced better source code than the control group, following the feedback provided by the teachers.
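
    The paper's cloud-based tool is not reproduced here; as a loose analogy only, the sketch below counts two simple (assumed) violations in student source code using Python's standard ast module.

    # Loose analogy to the tool's analysis: count assumed OO violations in
    # student code with the standard ast module; rules are illustrative only.
    import ast
    from collections import Counter

    def common_violations(source: str) -> Counter:
        tree = ast.parse(source)
        violations = Counter()
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                # Rule 1 (assumed): class attributes exposed directly instead
                # of being encapsulated behind methods/properties.
                for stmt in node.body:
                    if isinstance(stmt, ast.Assign):
                        violations["public class attribute"] += 1
            if isinstance(node, ast.FunctionDef) and len(node.body) > 30:
                # Rule 2 (assumed): overly long method, a common student smell.
                violations["long method"] += 1
        return violations

    student_code = """
    class Account:
        balance = 0        # public attribute, flagged by rule 1
        def deposit(self, amount):
            self.balance += amount
    """
    print(common_violations(student_code.replace("\n    ", "\n")))
    # Counter({'public class attribute': 1})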

    Thyroid disease treatment prediction with machine learning approaches

    The thyroid is an endocrine gland located in the anterior region of the neck; its main task is to produce thyroid hormones, which are functional to our entire body. Its dysfunction can lead to the production of an insufficient or excessive amount of thyroid hormone. The thyroid can also become inflamed or swollen due to one or more nodules forming inside it, and some of these nodules can be the site of malignant tumors. One of the most used treatments is sodium levothyroxine, also known as LT4, a synthetic thyroid hormone used in the treatment of thyroid disorders and diseases. Predictions about the treatment can be important for supporting endocrinologists' activities and improving patients' quality of life. To date, numerous studies in the literature focus on predicting thyroid diseases from the trend of patients' hormonal parameters. This work, differently, aims to predict the LT4 treatment trend for patients suffering from hypothyroidism. To this end, a dedicated dataset was built that includes medical information on patients treated at the "AOU Federico II" hospital of Naples. For each patient, the clinical history is available over time; therefore, on the basis of the trend of the hormonal parameters and the other attributes considered, it was possible to predict the course of each patient's treatment, in order to understand whether the dosage should be increased or decreased. To conduct this study, we used different machine learning algorithms; in particular, we compared the results of 10 different classifiers. The performances of the different algorithms show good results, especially in the case of the Extra-Trees classifier, whose accuracy reaches 84%.
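
    A hedged sketch of the kind of classifier comparison described above, using scikit-learn; the synthetic features stand in for the private hospital data, and the model list is abbreviated from the study's 10 classifiers.

    # Sketch of a classifier comparison in scikit-learn; the synthetic data
    # is a placeholder for hormonal-parameter features (TSH, FT4, ...) and a
    # target such as "increase" vs "decrease" the LT4 dose.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=8, random_state=42)

    models = {
        "extra_trees": ExtraTreesClassifier(n_estimators=200, random_state=42),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: {acc:.3f}")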

    Model Checking to Improve Precision of Design Pattern Instances Identification in OO Systems

    In the last two decades, several methods and tools have been proposed to identify the Design Pattern (DP) instances implemented in an existing Object-Oriented (OO) software system. This makes it possible to know which OO components are involved in each DP instance; such knowledge is useful for better understanding the system, thus reducing the effort needed to modify and evolve it. The results obtained by existing methods and tools can suffer from a lack of completeness or precision due to the presence of false positives/negatives. Model Checking (MC) algorithms can be used to improve the precision of the DP instances detected by a tool, by automatically refining the results it produces. In this paper, an MC-based technique is defined and applied to the results of an existing DP mining tool, called Design Pattern Finder (DPF), to improve precision by automatically verifying the DP instances it detects. To assess the feasibility and effectiveness of the proposed technique, we carried out a case study applying it to some open source OO systems. The results showed that the proposed technique improved the precision of the DP instances detected by the DPF tool.
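
    The paper's refinement step uses model checking proper; as a much cruder stand-in for that step, the sketch below re-verifies (assumed) structural constraints of a Singleton candidate, so that false positives reported by a mining tool can be filtered out.

    # Crude stand-in for the MC refinement step: re-check structural
    # constraints of a Singleton candidate; the constraint set is an
    # illustrative assumption, not the paper's verification logic.
    import ast

    def looks_like_singleton(class_source: str) -> bool:
        cls = ast.parse(class_source).body[0]
        assert isinstance(cls, ast.ClassDef)
        has_instance_holder = any(isinstance(s, ast.Assign) for s in cls.body)
        has_accessor = any(
            isinstance(s, ast.FunctionDef) and s.name in ("instance", "get_instance")
            for s in cls.body
        )
        # Both constraints must hold; a reported class lacking an accessor
        # would be a false positive and is filtered out here.
        return has_instance_holder and has_accessor

    candidate = """
class Logger:
    _instance = None
    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
"""
    print(looks_like_singleton(candidate))   # True: the candidate is confirmed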

    Super-resolution of synthetic aperture radar complex data by deep-learning

    One of the greatest limitations of Synthetic Aperture Radar (SAR) imagery is the difficulty of obtaining an arbitrarily high spatial resolution. Indeed, unlike optical sensors, this capability is not limited only by the sensor technology: improving the SAR spatial resolution requires a large transmitted bandwidth and relatively long synthetic apertures, which for regulatory and practical reasons are often impossible to obtain. This issue becomes particularly relevant when dealing with Stripmap mode acquisitions and with relatively low carrier frequency sensors (where relatively large bandwidth signals are more difficult to transmit). To overcome this limitation, in this paper a deep learning based framework is proposed to enhance the SAR image spatial resolution while retaining the accuracy of the complex image. Results on simulated and real SAR data demonstrate the effectiveness of the proposed framework.
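
    A minimal sketch of the core idea, not the paper's network: treat the complex SAR image as two channels (real and imaginary) and learn an upsampling CNN, so that phase information is carried through. The architecture and scale factor below are assumptions.

    # Sketch of complex-valued SAR super-resolution via a two-channel CNN;
    # layer sizes and the scale factor are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ComplexSRNet(nn.Module):
        def __init__(self, scale: int = 2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(2, 64, kernel_size=3, padding=1),   # 2 = real + imag
                nn.ReLU(),
                nn.Conv2d(64, 2 * scale * scale, kernel_size=3, padding=1),
                nn.PixelShuffle(scale),                        # sub-pixel upsampling
            )

        def forward(self, x):          # x: (batch, 2, H, W)
            return self.body(x)

    img = torch.randn(1, 2, 64, 64)    # placeholder complex SAR patch
    sr = ComplexSRNet(scale=2)(img)
    print(sr.shape)                     # torch.Size([1, 2, 128, 128])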

    Assessing the impact of global variables on program dependence and dependence clusters

    This paper presents the results of a study of the effect of global variables on the quantity of dependence in general and on the presence of dependence clusters in particular. The paper introduces a simple transformation-based analysis algorithm for measuring the impact of globals on dependence. It reports on the application of this approach to a detailed assessment of dependence in an empirical study of 21 programs consisting of just over 50K lines of code. The technique is used to identify global variables that have a significant impact on program dependence, and to identify and characterize the ways in which global variable dependence may lead to dependence clusters. In the study, over half of the programs include such a global variable, and a quarter have one that is solely responsible for a dependence cluster.
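
    A toy illustration of the transformation idea behind the analysis: a global variable couples otherwise unrelated functions, and localizing it (passing it as a parameter) removes that shared dependence. The naive reference count below is an assumption standing in for the paper's slice-based measurement.

    # Toy sketch: find which functions a global name glues together; after
    # parameterizing it, the shared dependence edge would disappear.
    import ast
    from collections import defaultdict

    WITH_GLOBAL = """
mode = 0
def parse(data):
    return data if mode == 0 else data[::-1]
def render(items):
    return ", ".join(items) if mode == 0 else "\\n".join(items)
"""

    def functions_using(source: str) -> dict:
        tree = ast.parse(source)
        uses = defaultdict(set)
        for fn in (n for n in tree.body if isinstance(n, ast.FunctionDef)):
            for node in ast.walk(fn):
                if isinstance(node, ast.Name):
                    uses[node.id].add(fn.name)
        return uses

    # 'mode' is referenced by both functions: a single global coupling them
    # into one cluster, in the spirit of the paper's dependence clusters.
    print(sorted(functions_using(WITH_GLOBAL)["mode"]))   # ['parse', 'render']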