
    Supporting Defect Causal Analysis in Practice with Cross-Company Data on Causes of Requirements Engineering Problems

    [Context] Defect Causal Analysis (DCA) represents an efficient practice to improve software processes. While knowledge on cause-effect relations is helpful to support DCA, collecting cause-effect data may require significant effort and time. [Goal] We propose and evaluate a new DCA approach that uses cross-company data to support the practical application of DCA. [Method] We collected cross-company data on causes of requirements engineering problems from 74 Brazilian organizations and built a Bayesian network. Our DCA approach uses the diagnostic inference of the Bayesian network to support DCA sessions. We evaluated our approach by applying a model for technology transfer to industry and conducted three consecutive evaluations: (i) in academia, (ii) with industry representatives of the Fraunhofer Project Center at UFBA, and (iii) in an industrial case study at the Brazilian National Development Bank (BNDES). [Results] We received positive feedback in all three evaluations, and the cross-company data was considered helpful for determining main causes. [Conclusions] Our results strengthen our confidence that supporting DCA with cross-company data is promising and should be further investigated.

    Comment: 10 pages, 8 figures, accepted for the 39th International Conference on Software Engineering (ICSE'17)
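    The core mechanism here is diagnostic inference: observing an effect (a requirements engineering problem) and querying the network for the posterior probabilities of its candidate causes. Below is a minimal sketch of that inference step using the open-source pgmpy library; the two-node structure, variable names, and probabilities are hypothetical placeholders, not the paper's actual model.

```python
# Diagnostic inference over a toy Bayesian network with pgmpy.
# Structure, variable names, and CPD values are illustrative only.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# One hypothetical cause influencing one observed problem.
model = BayesianNetwork([("LackOfDomainKnowledge", "IncorrectRequirements")])

cpd_cause = TabularCPD("LackOfDomainKnowledge", 2, [[0.7], [0.3]])
cpd_effect = TabularCPD(
    "IncorrectRequirements", 2,
    [[0.9, 0.2],   # P(problem absent  | cause = absent, present)
     [0.1, 0.8]],  # P(problem present | cause = absent, present)
    evidence=["LackOfDomainKnowledge"], evidence_card=[2],
)
model.add_cpds(cpd_cause, cpd_effect)

# Diagnostic direction: condition on the observed problem,
# then query the posterior probability of the cause.
posterior = VariableElimination(model).query(
    variables=["LackOfDomainKnowledge"],
    evidence={"IncorrectRequirements": 1},
)
print(posterior)  # P(cause present | problem observed) ~ 0.77
```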

    User's operating procedures. Volume 3: Projects directorate information programs

    A review of the user's operating procedures for the Scout Project Automatic Data System (SPADS) is presented. SPADS is the result of the past seven years of software development on a Prime mini-computer. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, three of three, provides the instructions to operate the projects directorate information programs in data retrieval and file maintenance via the user-friendly menu drivers.

    User's operating procedures. Volume 2: Scout project financial analysis program

    A review is presented of the user's operating procedures for the Scout Project Automatic Data System (SPADS). SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.
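    The "menu drivers" mentioned in both volumes are interactive, text-based front ends that route the operator to retrieval or maintenance routines. Purely as illustration (SPADS itself ran on a Prime mini-computer and is not reproduced here), such a driver reduces to a dispatch loop like the following; the menu labels and handlers are hypothetical.

```python
# Illustrative text-based menu driver: a dispatch loop routing the
# operator to data-retrieval or file-maintenance routines.
# Labels and handlers are hypothetical, not taken from SPADS.

def retrieve_data():
    print("... run a cross-referenced query against the project records ...")

def maintain_files():
    print("... add, update, or delete project records ...")

MENU = {
    "1": ("Data retrieval", retrieve_data),
    "2": ("File maintenance", maintain_files),
}

def main():
    while True:
        for key, (label, _) in sorted(MENU.items()):
            print(f"{key}. {label}")
        print("0. Exit")
        choice = input("Select an option: ").strip()
        if choice == "0":
            break
        if choice in MENU:
            MENU[choice][1]()  # dispatch to the selected routine

if __name__ == "__main__":
    main()
```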

    Class Imbalance Reduction and Centroid based Relevant Project Selection for Cross Project Defect Prediction

    Cross-Project Defect Prediction (CPDP) is the process of predicting defects in a target project using information from other projects. This can assist developers in prioritizing their testing efforts and finding flaws. Transfer Learning (TL) has been frequently used in CPDP to improve prediction performance by reducing the disparity in data distribution between the source and target projects. Software Defect Prediction (SDP) is a common research topic in software engineering that plays a critical role in software quality assurance. To address the cross-project class imbalance problem, Centroid-based PF-SMOTE for imbalanced data is used. In this paper, we used Centroid-based PF-SMOTE to balance the datasets and centroid-based relevant data selection for cross-project defect prediction. These methods use the mean of all attributes in a dataset and calculate the difference between the means of the datasets. For experimentation, the open-source software defect datasets, namely AEEM, Re-Link, and NASA, are considered.
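    The centroid-based selection step described above can be made concrete: each project is summarized by the mean of its attributes, and the source project whose centroid lies closest to the target's is selected. The following is a minimal sketch under those assumptions; the project names and feature matrices are synthetic stand-ins, and the PF-SMOTE balancing step is not reproduced.

```python
# Centroid-based relevant source-project selection (sketch).
# Each project: a feature matrix, rows = modules, columns = metrics.
import numpy as np

def centroid(project: np.ndarray) -> np.ndarray:
    """Mean of every attribute: one summary vector per project."""
    return project.mean(axis=0)

def select_relevant_source(target: np.ndarray, sources: dict) -> str:
    """Pick the source whose centroid is closest to the target's."""
    target_c = centroid(target)
    return min(sources,
               key=lambda name: np.linalg.norm(centroid(sources[name]) - target_c))

# Synthetic stand-ins for real defect datasets.
rng = np.random.default_rng(0)
sources = {"source_A": rng.normal(0.0, 1.0, size=(200, 5)),
           "source_B": rng.normal(0.5, 1.0, size=(300, 5))}
target = rng.normal(0.4, 1.0, size=(150, 5))
print(select_relevant_source(target, sources))  # likely "source_B"
```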

    Measuring volume of stockpile using imaging station

    Knowing cut and fill volumes is crucial in many surveying, mining, quarrying, and engineering works, such as dredging and embankment projects. Volume calculation is generally completed using conventional surveying methods; the trapezoidal method and classical cross-sectioning have been presented in the literature. However, with conventional surveying methods, volume calculation requires considerable time and labor and is risky when large machinery is operating around the work area. Digital close-range photogrammetry has likewise been insufficient for calculating material volumes in hazardous areas or within short time frames. In such cases, long-range surveying and scanning is an alternative method for volume calculation. With the development of scanning and imaging technologies, the Topcon Imaging Station (IS), used for three-dimensional (3D) surveying of objects in fields such as topographic, mining, construction, and as-built surveys, has become a productive, fast, and accurate method. This study is concerned with obtaining stockpile volume using the Topcon IS, an advanced instrument that provides both scanning and long-range surveying. The instrument is paired with Image Master, distinctive software that provides the capability to reconstruct 3D models after the volume data is processed. Three-dimensional surfaces are created through the Triangulated Irregular Network (TIN) method, which supports time saving and more accurate volume calculation. The volume calculated by Image Master (IM) is then compared with the volume calculated by 12D software from data obtained using a total station and prism. The results have been analyzed with respect to the different volumes, the density factor, the 3D models of the stockpile, and the time taken for data acquisition and processing.
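    The TIN volume computation mentioned above has a straightforward core: triangulate the surveyed points in plan view, then sum, over all facets, the plan area of each triangle times the mean height of its vertices above a base plane. Here is a minimal sketch of that prism method, assuming SciPy for the Delaunay triangulation; the survey points are synthetic, whereas a real workflow would use the point cloud captured by the imaging station.

```python
# TIN-based stockpile volume estimate (prism method, sketch).
import numpy as np
from scipy.spatial import Delaunay

def tin_volume(points: np.ndarray, base_level: float = 0.0) -> float:
    """Sum of triangular prisms: plan-view area of each TIN facet
    times the mean height of its three vertices above the base plane."""
    xy, z = points[:, :2], points[:, 2] - base_level
    tri = Delaunay(xy)  # build the TIN from the (x, y) coordinates
    volume = 0.0
    for simplex in tri.simplices:
        a, b, c = xy[simplex]
        # Plan-view area of the triangle via the 2D cross product.
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        volume += area * z[simplex].mean()  # prism volume above the base
    return volume

# Synthetic stockpile: a rounded mound on a 10 m x 10 m base.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(500, 2))
z = np.maximum(0.0, 4.0 - 0.2 * ((xy[:, 0] - 5) ** 2 + (xy[:, 1] - 5) ** 2))
print(f"Estimated volume: {tin_volume(np.column_stack([xy, z])):.1f} m^3")
```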

    Towards an Energy-Aware Framework for Application Development and Execution in Heterogeneous Parallel Architectures

    The Transparent heterogeneous hardware Architecture deployment for eNergy Gain in Operation (TANGO) project’s goal is to characterise the factors which affect power consumption in software development and operation for Heterogeneous Parallel Architecture (HPA) environments. Its main contribution is the combination of requirements engineering and design modelling for self-adaptive software systems with power-consumption awareness in relation to these environments. Energy efficiency and application quality factors are integrated into the application lifecycle (design, implementation, and operation). To support this, the key novelty of the project is a reference architecture and its implementation. Moreover, a programming model with built-in support for various hardware architectures, including heterogeneous clusters, heterogeneous chips, and programmable logic devices, is provided. This leads to a new cross-layer programming approach for heterogeneous parallel hardware architectures featuring both software and hardware modelling. The architecture supports application power consumption and performance, data-location and time-criticality optimization, as well as security and dependability requirements on the target hardware architecture.

    Simulating an engineering workplace: a new approach to prototype-based team project

    This paper documents the remote management of a first-year foundations of engineering course, with special focus on students’ learning by completing a prototype-based project in an online course. The COVID-19 pandemic brought unprecedented challenges to teaching and learning communities around the world. Educators made purposeful changes to their teaching approaches, shifting rapidly from in-person to online instruction. This study documents a project-based course that adopted an asynchronous mode of instruction as part of the general engineering curriculum at a large southeastern university in the United States during the pandemic. This asynchronous course, through implementing necessary changes and adaptations, simulated the experience of a cross-border engineering workplace. The course content focuses on engineering design and problem-solving, physical prototyping, simulated data collection and analysis, contemporary software tools, and professional practices and expectations (e.g., communication, teamwork, and ethics). Learning activities are designed to introduce students to the types of work that engineers do daily and to challenge students’ knowledge and abilities as they explore the different elements of engineering by completing an aesthetic wind turbine project. Our paper reports on the development of the course site as informed by recent national developments in scholarship and practice for online teaching and learning. The principles of course design alignment, as well as instructor presence and learner interaction, as suggested by these national standards, are discussed. Further, the study records strategies adopted to enable students to complete a successful prototype-based project while working in geographically distributed, virtual, international teams.

    Improving the Robustness to Data Inconsistency between Training and Testing for Code Completion by Hierarchical Language Model

    In the field of software engineering, applying language models to the token sequence of source code is the state-of-the-art approach to building a code recommendation system. The syntax tree of source code has a hierarchical structure, and ignoring the characteristics of tree structures decreases model performance. The standard LSTM model handles sequential data, and its performance decreases sharply if noisy unseen data is distributed throughout the test suite. As code has free naming conventions, it is common for a model trained on one project to encounter many unknown words on another project. If we mark many unseen words as UNK, as is the usual solution in natural language processing, the number of UNK tokens will be much greater than the count of the most frequently appearing words. In an extreme case, simply predicting UNK everywhere may achieve very high prediction accuracy, so such a solution cannot reflect the true performance of a model that encounters noisy unseen data. In this paper, we mark only a small number of rare words as UNK and report the prediction performance of models under in-project and cross-project evaluation. We propose a novel Hierarchical Language Model (HLM) that improves the robustness of the LSTM model, giving it the capacity to deal with the inconsistency of data distribution between training and testing. The newly proposed HLM takes the hierarchical structure of the code tree into consideration when predicting code: it uses a BiLSTM to generate embeddings for sub-trees according to their hierarchy and collects the embeddings of the sub-trees in context to predict the next code token. Experiments on in-project and cross-project data sets indicate that the newly proposed HLM performs better than the state-of-the-art LSTM model in dealing with the data inconsistency between training and testing, achieving an average improvement of 11.2% in prediction accuracy.
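    The UNK policy discussed above (keeping UNK rare rather than letting it dominate) amounts to thresholding on training-set frequency. A minimal sketch, with a hypothetical threshold and toy corpus:

```python
# Rare-token handling sketch: keep every token seen at least `min_count`
# times in training; map only the remaining rare tokens to UNK, so UNK
# stays a small fraction of the data. Threshold and corpus are toy values.
from collections import Counter

UNK = "<UNK>"

def build_vocab(token_sequences, min_count=2):
    counts = Counter(tok for seq in token_sequences for tok in seq)
    return {tok for tok, n in counts.items() if n >= min_count}

def encode(seq, vocab):
    return [tok if tok in vocab else UNK for tok in seq]

train = [["for", "i", "in", "range", "(", "n", ")"],
         ["for", "j", "in", "range", "(", "m", ")"]]
vocab = build_vocab(train)  # keeps: for, in, range, (, )
print(encode(["for", "k", "in", "range", "(", "n", ")"], vocab))
# -> ['for', '<UNK>', 'in', 'range', '(', '<UNK>', ')']
```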