
    Towards Knowledge Based Risk Management Approach in Software Projects

    All projects involve risk; a zero-risk project is not worth pursuing. Furthermore, due to the uniqueness of software projects, uncertainty about final results will always accompany software development. While risks cannot be removed from software development, software engineers should instead learn to manage them better (Arshad et al., 2009; Batista Webster et al., 2005; Gilliam, 2004). Risk management and planning require organizational experience, as they are strongly grounded in the experience and knowledge acquired in former projects. The greater the project manager's experience, the better his or her ability to identify risks, estimate their likelihood and impact, and define an appropriate risk response plan. Thus risk knowledge cannot remain at an individual level; rather, it must be made available to the organization, which needs it to learn and to improve its performance in facing risks. If this does not occur, project managers can inadvertently repeat past mistakes simply because they do not know or do not remember the mitigation actions successfully applied in the past, or because they are unable to foresee the risks caused by certain project constraints and characteristics. Risk knowledge has to be packaged and stored throughout project execution for future reuse. Risk management methodologies are usually based on the use of questionnaires for risk identification and templates for investigating critical issues. Such artefacts are often not related to each other, so there is usually no documented cause-effect relation between issues, risks and mitigation actions. Furthermore, today's methodologies do not explicitly take into account the need to collect experience systematically in order to reuse it in future projects. To address these problems, this work proposes a framework based on the Experience Factory Organization (EFO) model (Basili et al., 1994; Basili et al., 2007; Schneider & Hunnius, 2003) and on the use of the Quality Improvement Paradigm (QIP) (Basili, 1989). The framework is also specialized within one of the largest firms in the current Italian software market; for privacy reasons, from here on we will refer to it as "FIRM". Finally, in order to quantitatively evaluate the proposal, two empirical investigations were carried out: a post-mortem analysis and a case study. Both were carried out in the FIRM context and involved legacy system transformation projects. The first investigation involved 7 already executed projects, the second one 5 in itinere (ongoing) projects. The research questions we ask are: Does the proposed knowledge-based framework lead to more effective risk management than that obtained without using it? Does the proposed knowledge-based framework lead to more precise risk management than that obtained without using it? The rest of the paper is organized as follows: Section 2 provides a brief overview of the main research activities in the literature dealing with the same topics; Section 3 presents the proposed framework, while Section 4 describes its specialization in the FIRM context; Section 5 describes the empirical studies we executed; results and discussions are presented in Section 6. Finally, conclusions are drawn in Section 7.
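    The framework hinges on making the cause-effect links between issues, risks and mitigation actions explicit and reusable across projects. The following minimal sketch shows one way such risk knowledge could be packaged in an experience base; the schema, class names and example record are hypothetical illustrations, not the framework's actual artefacts.

```python
# Minimal sketch (hypothetical schema): packaging reusable risk knowledge by linking
# project issues, the risks they caused, and the mitigation actions applied.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MitigationAction:
    description: str
    was_effective: bool            # outcome observed in the past project

@dataclass
class RiskRecord:
    issue: str                     # critical issue observed in the project
    risk: str                      # risk caused by the issue
    likelihood: float              # estimated probability of occurrence (0..1)
    impact: int                    # estimated impact, e.g. on a 1..5 scale
    mitigations: List[MitigationAction] = field(default_factory=list)

# A hypothetical experience base collected from past legacy-transformation projects
experience_base = [
    RiskRecord(
        issue="Incomplete legacy documentation",
        risk="Underestimated migration effort",
        likelihood=0.6,
        impact=4,
        mitigations=[MitigationAction("Schedule code-reading workshops", True)],
    )
]

def lookup(issue_keyword: str) -> List[RiskRecord]:
    """Reuse step: retrieve past records matching a similar issue in a new project."""
    return [r for r in experience_base if issue_keyword.lower() in r.issue.lower()]

print(lookup("documentation"))
```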

    Effective Anomaly Detection Using Deep Learning in IoT Systems

    Anomaly detection in network traffic is a hot and ongoing research theme, especially concerning IoT devices, which are quickly spreading throughout many aspects of people's lives and are, at the same time, prone to attack through different weak points. In this paper, we tackle the emerging anomaly detection problem in IoT by integrating five different datasets of abnormal IoT traffic and evaluating them with a deep learning approach capable of identifying both normal and malicious IoT traffic, as well as different types of anomalies. The large integrated dataset is aimed at providing a realistic and still missing benchmark for normal and abnormal IoT traffic, with data coming from different IoT scenarios. Moreover, the deep learning approach has been enriched through a proper hyperparameter optimization phase, a feature reduction phase using an autoencoder neural network, and a study of the robustness of the best deep neural networks considered in situations where some of the features are affected by Gaussian noise. The obtained results demonstrate the effectiveness of the created IoT dataset for anomaly detection using deep learning techniques, also in a noisy scenario.
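    A minimal sketch of the pipeline described above (autoencoder-based feature reduction, a deep classifier, and a Gaussian-noise robustness check) is given below; the feature count, layer sizes, class count and the random placeholder data are assumptions for illustration, not the paper's actual dataset or hyperparameters.

```python
# Minimal sketch: feature reduction with an autoencoder, classification on the
# reduced features, and a robustness check with Gaussian noise on some features.
import numpy as np
from tensorflow.keras import layers, Model

n_features, n_classes = 40, 6          # e.g. normal traffic + several anomaly types (assumed)
X = np.random.rand(1000, n_features).astype("float32")   # placeholder traffic features
y = np.random.randint(0, n_classes, size=1000)           # placeholder labels

# 1) Autoencoder used only for feature reduction
inp = layers.Input(shape=(n_features,))
code = layers.Dense(10, activation="relu")(inp)           # compressed representation
out = layers.Dense(n_features, activation="linear")(code)
autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)
encoder = Model(inp, code)

# 2) Deep classifier trained on the reduced features
Z = encoder.predict(X, verbose=0)
clf_in = layers.Input(shape=(10,))
h = layers.Dense(32, activation="relu")(clf_in)
clf_out = layers.Dense(n_classes, activation="softmax")(h)
clf = Model(clf_in, clf_out)
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(Z, y, epochs=5, batch_size=64, verbose=0)

# 3) Robustness check: add Gaussian noise to a subset of features and re-evaluate
X_noisy = X.copy()
X_noisy[:, :5] += np.random.normal(0.0, 0.1, size=(len(X), 5))
print(clf.evaluate(encoder.predict(X_noisy, verbose=0), y, verbose=0))
```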

    Software Analytics to Support Students in Object-Oriented Programming Tasks: An Empirical Study

    The computing education community has shown a long-standing interest in how to analyze the Object-Oriented (OO) source code developed by students in order to provide them with useful formative tips. Instructors need to understand students' difficulties to provide precise feedback on the most frequent mistakes and to shape, design and effectively drive the course. This paper proposes and evaluates an approach for analyzing students' source code and automatically generating feedback about the most common violations in the produced code. The approach is implemented through a cloud-based tool that monitors how students use language constructs, based on the analysis of the most common violations of the Object-Oriented paradigm in the students' source code. Moreover, the tool supports the generation of reports about students' mistakes and misconceptions that can be used to improve students' education. The paper reports the results of a quasi-experiment performed in a class of a CS1 course to investigate the effects of the provided reports on coding ability (concerning the correctness and quality of the produced source code). Results show that, after the course, the treatment group obtained higher scores and produced better source code than the control group, following the feedback provided by the teachers.
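    To make the idea of violation-based feedback concrete, here is a minimal sketch of a checker that counts common OO violations in a student submission; the rules, rule names and example class are illustrative assumptions, not the cloud-based tool's actual rule set or report format.

```python
# Minimal sketch: count occurrences of a few "common violations" in Java source.
import re
from collections import Counter

RULES = {
    "public_field": re.compile(r"\bpublic\s+(?!class|interface|enum|static final)\w+\s+\w+\s*;"),
    "empty_catch": re.compile(r"catch\s*\([^)]*\)\s*\{\s*\}"),
    "system_exit": re.compile(r"System\.exit\("),
}

def analyse(java_source: str) -> Counter:
    """Count occurrences of each violation in one student submission."""
    report = Counter()
    for rule_name, pattern in RULES.items():
        report[rule_name] += len(pattern.findall(java_source))
    return report

submission = """
public class Account {
    public double balance;               // field should be private (encapsulation)
    void withdraw(double amount) {
        try { balance -= amount; } catch (Exception e) { }   // swallowed exception
    }
}
"""
# Aggregated over the whole class, counts like these would feed the instructor report.
print(analyse(submission))
```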

    Gait recognition using FMCW radar and temporal convolutional deep neural networks

    The capability to identify humans in specific scenarios, quickly and accurately, is a critical aspect of various surveillance applications. In this context, classical surveillance systems are based on video cameras, which require high computational/storage resources and are very sensitive to light and weather conditions. In this paper, an efficient classifier based on deep learning is used to identify individuals by resorting to micro-Doppler data extracted from low-power frequency-modulated continuous-wave (FMCW) radar measurements. The results obtained through the application of a deep temporal convolutional neural network confirm the applicability of deep learning to the problem at hand. The best obtained identification accuracy is 0.949, with an F-measure of 0.88, using a temporal window of four seconds.
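    A minimal sketch of a temporal convolutional classifier over a window of micro-Doppler frames follows; the window length, spectrogram size, subject count, layer configuration and random placeholder data are assumptions for illustration, not the paper's exact architecture or dataset.

```python
# Minimal sketch: dilated 1D (temporal) convolutions over micro-Doppler sequences,
# one output class per individual to be recognized.
import numpy as np
from tensorflow.keras import layers, models

n_frames, n_doppler_bins, n_subjects = 128, 64, 10   # ~4 s window of micro-Doppler frames (assumed)
X = np.random.rand(200, n_frames, n_doppler_bins).astype("float32")  # placeholder spectrogram windows
y = np.random.randint(0, n_subjects, size=200)                       # placeholder subject labels

model = models.Sequential([
    layers.Input(shape=(n_frames, n_doppler_bins)),
    # Temporal (causal, dilated) convolutions over the micro-Doppler sequence
    layers.Conv1D(64, kernel_size=3, padding="causal", dilation_rate=1, activation="relu"),
    layers.Conv1D(64, kernel_size=3, padding="causal", dilation_rate=2, activation="relu"),
    layers.Conv1D(64, kernel_size=3, padding="causal", dilation_rate=4, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(n_subjects, activation="softmax"),   # one class per individual
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```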

    Model Checking to Improve Precision of Design Pattern Instances Identification in OO Systems

    In the last two decades, several methods and tools have been proposed to identify the Design Pattern (DP) instances implemented in an existing Object-Oriented (OO) software system. This makes it possible to know which OO components are involved in each DP instance. Such knowledge is useful to better understand the system, thus reducing the effort needed to modify and evolve it. The results obtained by existing methods and tools can suffer from a lack of completeness or precision due to the presence of false positives/negatives. Model Checking (MC) algorithms can be used to improve the precision of the DP instances detected by a tool by automatically refining the results it produces. In this paper, an MC-based technique is defined and applied to the results of an existing DP mining tool, called Design Pattern Finder (DPF), to improve precision by automatically verifying the DP instances it detects. To assess the feasibility and effectiveness of the proposed technique, we carried out a case study in which it was applied to some open source OO systems. The results showed that the proposed technique improved the precision of the DP instances detected by the DPF tool.
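    The refinement step can be pictured as re-checking each candidate instance against the structural properties the pattern must satisfy and discarding those that fail. The following simplified sketch stands in for the model-checking step; the candidate format, property names and Singleton example are hypothetical, not DPF's actual output.

```python
# Minimal sketch: keep a candidate Design Pattern instance only if all of its
# required structural properties can be verified, discarding false positives.
candidate_singletons = [
    {"class": "Logger", "private_constructor": True,  "static_instance": True},
    {"class": "Config", "private_constructor": False, "static_instance": True},  # false positive
]

def verify_singleton(candidate: dict) -> bool:
    """A candidate is kept only if every required structural property holds."""
    required = ("private_constructor", "static_instance")
    return all(candidate[prop] for prop in required)

# Refinement: keep only the candidates whose properties are verified
verified = [c for c in candidate_singletons if verify_singleton(c)]
print([c["class"] for c in verified])   # ['Logger'] -> improved precision
```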

    Super-resolution of synthetic aperture radar complex data by deep-learning

    One of the greatest limitations of Synthetic Aperture Radar (SAR) imagery is the capability to obtain an arbitrarily high spatial resolution. Indeed, unlike optical sensors, this capability is not limited only by the sensor technology. Instead, improving the SAR spatial resolution requires a large transmitted bandwidth and relatively long synthetic apertures, which for regulatory and practical reasons cannot be met. This issue is particularly relevant when dealing with Stripmap mode acquisitions and with relatively low carrier frequency sensors (where relatively large bandwidth signals are more difficult to transmit). To overcome this limitation, in this paper a deep learning based framework is proposed to enhance the SAR image spatial resolution while retaining the accuracy of the complex image. Results on simulated and real SAR data demonstrate the effectiveness of the proposed framework.
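    A minimal sketch of super-resolving complex-valued SAR data with a convolutional network follows; the real and imaginary parts are stacked as input channels so the complex information is preserved, and the 2x upsampling factor, toy network and random patches are assumptions for illustration, not the paper's actual framework.

```python
# Minimal sketch: a toy CNN that upsamples complex SAR patches, with the real and
# imaginary parts handled as two image channels.
import numpy as np
from tensorflow.keras import layers, models

h, w = 64, 64
slc = np.random.randn(8, h, w) + 1j * np.random.randn(8, h, w)   # placeholder complex SAR patches
x = np.stack([slc.real, slc.imag], axis=-1).astype("float32")    # shape (8, 64, 64, 2)

model = models.Sequential([
    layers.Input(shape=(h, w, 2)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.UpSampling2D(size=2),                                  # spatial-resolution enhancement (2x assumed)
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(2, 3, padding="same"),                          # real + imaginary output channels
])
hr = model(x)                                                     # (8, 128, 128, 2)
hr_complex = hr[..., 0].numpy() + 1j * hr[..., 1].numpy()         # back to a complex image
print(hr_complex.shape)
```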

    Water stress classification using Convolutional Deep Neural Networks

    In agriculture, given the global water scarcity, optimizing irrigation systems has become a key requisite of any semi-automatic irrigation scheduling system. Using efficient assessment methods for crop water stress allows reduced water consumption as well as improved quality and quantity of production. The adoption of neural networks can support automatic in situ continuous monitoring and irrigation through the real-time classification of plant water stress. This study proposes an end-to-end automatic irrigation system based on the adoption of Deep Neural Networks for the multinomial classification of tomato plants' water stress from thermal and optical aerial images. The paper proposes a novel approach that covers three important aspects: (i) joint usage of optical and thermal cameras carried by unmanned aerial vehicles (UAVs); (ii) image segmentation strategies for both optical and thermal imaging, used to obtain images free of noise and of parts not useful for classifying water stress; (iii) the adoption of deep pre-trained neural ensembles to perform effective classification of field water stress. Firstly, we used a multi-channel approach based on both thermal and optical images gathered by a drone to obtain a more robust and broad image extraction. Moreover, regarding image processing, a segmentation and background removal step is performed to improve image quality. Then, the proposed VGG-based architecture is designed as a combination of two different VGG instances (one for each channel). To validate the proposed approach, a large real dataset was built. It is composed of 6000 images covering the whole lifecycle of the tomato crops, captured with a drone-mounted thermal and optical camera. Specifically, our approach, looking mainly at leaf and fruit status and patterns, is designed to be applied after the plants have been transplanted and have reached at least the early growth stage (covering the vegetative, flowering, fruit-formation and mature fruiting stages).
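    A minimal sketch of the two-branch, VGG-style architecture (one branch per image channel, features concatenated before the final softmax) follows; the input size, branch depth, number of stress classes and the use of small untrained branches instead of full pre-trained VGG instances are simplifying assumptions for illustration.

```python
# Minimal sketch: two VGG-style convolutional branches (optical + thermal) whose
# features are merged for multinomial water-stress classification.
from tensorflow.keras import layers, Model

def vgg_branch(name: str):
    """A reduced VGG-like convolutional branch for one image channel (optical or thermal)."""
    inp = layers.Input(shape=(128, 128, 3), name=f"{name}_input")
    x = inp
    for filters in (32, 64, 128):                    # conv-conv-pool blocks, VGG style
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    return inp, layers.GlobalAveragePooling2D()(x)

optical_in, optical_feat = vgg_branch("optical")     # segmented RGB image from the UAV
thermal_in, thermal_feat = vgg_branch("thermal")     # segmented thermal image from the UAV

merged = layers.Concatenate()([optical_feat, thermal_feat])
merged = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(4, activation="softmax")(merged)  # multinomial water-stress classes (4 assumed)
model = Model([optical_in, thermal_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```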

    Empirical Investigation of the Efficacy and Efficiency of Tools for Transferring Software Engineering Knowledge

    Continuous pressure on enterprises leads to a constant need for innovation. This involves exchanging knowledge and innovation results among research groups and enterprises in accordance with the Open Innovation paradigm. The technologies that appear most attractive for exchanging knowledge are the Internet and its search engines. The literature provides many discordant opinions on their efficacy, and no empirical evidence on the topic. This work starts from the definition of a Knowledge Acquisition Process and presents a rigorous empirical investigation that evaluates the efficacy of these technologies for the Exploratory Search of Knowledge and of Relevant Knowledge according to specific knowledge requirements. The investigation pointed out that these technologies are not effective for Exploratory Search. The paper concludes with a brief analysis of other technologies to develop and analyse in order to overcome the weaknesses that this investigation has pointed out within the Knowledge Acquisition Process.
    Keywords: Internet, search engine, knowledge transfer, knowledge acquisition, knowledge acquisition process