
    Investigating Automatic Static Analysis Results to Identify Quality Problems: an Inductive Study

    Background: Automatic static analysis (ASA) tools examine source code to discover "issues", i.e., code patterns that are symptoms of bad programming practices and that can lead to defective behavior. Studies in the literature have shown that these tools find defects earlier than other verification activities, but they produce a substantial number of false positive warnings. For this reason, an alternative approach is to use the set of ASA issues to identify defect-prone files and components rather than focusing on the individual issues. Aim: We conducted an exploratory study to investigate whether ASA issues can be used as early indicators of faulty files and components and, for the first time, whether they point to a decay of specific software quality attributes, such as maintainability or functionality. Our aim is to understand the critical parameters and feasibility of such an approach to feed into future research on more specific quality and defect prediction models. Method: We analyzed an industrial C# web application using the Resharper ASA tool and explored whether significant correlations exist in such a data set. Results: We found promising results when predicting defect-prone files. A set of specific Resharper categories are better indicators of faulty files than common software metrics or the collection of issues of all issue categories, and these categories correlate to different software quality attributes. Conclusions: Our advice for future research is to perform the analysis at the file rather than the component level and to evaluate the generalizability of categories. We also recommend using larger datasets, as we learned that data sparseness can lead to challenges in the proposed analysis process.
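    The file-level correlation step such a study describes can be sketched as follows; this is a minimal illustration, not the authors' actual pipeline, and the CSV name, the issues_* column convention, the fault_count column, and the 0.05 threshold are all assumptions.

```python
# Hedged sketch: correlate per-file ASA issue counts with fault counts.
# All file and column names below are hypothetical, not the study's data.
import pandas as pd
from scipy.stats import spearmanr

# One row per file: issue counts per ASA category plus recorded faults.
df = pd.read_csv("issues_per_file.csv")
categories = [c for c in df.columns if c.startswith("issues_")]

for cat in categories:
    rho, p = spearmanr(df[cat], df["fault_count"])
    if p < 0.05:  # illustrative significance threshold
        print(f"{cat}: rho={rho:.2f} (p={p:.3f})")
```

    Spearman correlation is used here because issue and fault counts are typically skewed; rank-based statistics are a common, though not the only, choice for such data.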

    Agile and Lean Systems Engineering: Kanban in Systems Engineering

    This is the second of two reports created for research on this topic funded through the SERC. The first report, SERC-TR-032-1 dated March 13, 2012, constituted the 2011-2012 Annual Technical Report and the Final Technical Report of the SERC Research Task RT-6: Software Intensive Systems Data Quality and Estimation Research In Support of Future Defense Cost Analysis. The overall objectives of RT-6 were to use data submitted to DoD in the Software Resources Data Report (SRDR) forms to provide guidance for DoD projects in estimating software costs for future DoD projects. In analyzing the data, the project found variances in productivity data that made such SRDR-based estimates highly variable. The project then performed additional analyses that provided better bases of estimate, but also identified ambiguities in the SRDR data definitions that enabled the project to help the DoD DCARC organization develop better SRDR data definitions. In SERC-TR-2012-032-1, the resulting Manual provided the guidance elements for software cost estimation performers and users. Several appendices provide further related information on acronyms, sizing, nomograms, work breakdown structures, and references. The current report, SERC-TR-2013-032-2, includes the "Software Cost Estimation Metrics Manual" and constitutes the 2012-2013 Annual Technical Report and the Final Technical Report of the SERC Research Task Order 0024, RT-6: Software Intensive Systems Cost and Schedule Estimation.
    Estimating the cost to develop a software application is different from almost any other manufacturing process. In other manufacturing disciplines, the product is developed once and replicated many times using physical processes. Replication improves physical process productivity (duplicate machines produce more items faster), reduces learning curve effects on people, and spreads unit cost over many items. A software application, in contrast, is a single production item: every application is unique. The only physical processes are the documentation of ideas, their translation into computer instructions, and their validation and verification. Production productivity decreases, not increases, when more people are employed to develop the software application. Savings through replication are only realized in the development processes and in the learning curve effects on the management and technical staff. Unit cost is not reduced by creating the software application over and over again.
    This manual helps analysts and decision makers develop accurate, easy, and quick software cost estimates for different operating environments such as ground, shipboard, air, and space. It was developed by the Air Force Cost Analysis Agency (AFCAA) in conjunction with DoD Service Cost Agencies, and assisted by the SERC through involving the University of Southern California and the Naval Postgraduate School. The intent is to improve the quality and consistency of estimating methods across cost agencies and program offices through guidance, standardization, and knowledge sharing. The manual consists of chapters on metric definitions (e.g., what is meant by equivalent lines of code), examples of metric definitions from commercially available cost models, the data collection and repository form, guidelines for preparing the data for analysis, analysis results, cost estimating relationships found in the data, productivity benchmarks, future cost estimation challenges, and a very large appendix.
    Funding: U.S. Department of Defense; Systems Engineering Research Center (SERC), Contract H98230-08-D-0171
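    As one concrete illustration of the kind of cost estimating relationship (CER) such a manual tabulates, a power-law model effort = a * size^b can be fit by log-log regression. The sketch below is generic and hedged: the sample numbers are fabricated for illustration and do not come from SRDR data or the manual itself.

```python
# Illustrative power-law CER, effort = a * size^b, fit by log-log regression.
# The data points below are made up; real coefficients require calibrated data.
import numpy as np

def fit_cer(kesloc: np.ndarray, effort_pm: np.ndarray) -> tuple[float, float]:
    """Fit log(effort) = log(a) + b*log(size) by least squares."""
    b, log_a = np.polyfit(np.log(kesloc), np.log(effort_pm), 1)
    return float(np.exp(log_a)), float(b)

def estimate_effort(kesloc: float, a: float, b: float) -> float:
    return a * kesloc ** b

# Fabricated example points (thousands of equivalent SLOC -> person-months):
size = np.array([10.0, 25.0, 60.0, 120.0])
effort = np.array([35.0, 95.0, 260.0, 560.0])
a, b = fit_cer(size, effort)
print(f"effort ~= {a:.2f} * KESLOC^{b:.2f}")
```

    An exponent b greater than 1 captures the diseconomy of scale the abstract describes: adding people to a single software product raises, rather than lowers, the unit cost.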

    Challenges Facing Sustainable Real Estate Marketing and Practice in Emerging Economy: Case Study of Nigeria

    The challenges facing estate surveying and valuation practice across the world are enormous, and the future of the profession is being questioned, especially in Nigeria. There are pressures for practitioners to secure instructions and at the same time meet increasingly complex and stringent standards of professional practice. This study provides a perspective on issues confronting the profession across the globe, relying on a review of the literature, while data on the Nigerian situation were obtained from primary sources. Three thousand Estate Surveyors and Valuers across the country were surveyed using the internet-based SurveyMonkey software. The analysis indicated that "topping up", "gazumping", "gazundering", low salaries, and misdemeanors by the ever-increasing number of charlatans are major challenges facing the profession. In addition, the majority of practitioners confessed to involvement in mounting multiple signboards, collecting double fees, and soliciting for jobs with financial inducements. It was therefore recommended that erring members be prosecuted, non-professionals be encouraged to attend formal training, a college be established for such training, professional standards be enforced, and a proactive stance be adopted toward laws that are inimical to sustainable real estate practice, so as to ensure an enduring professional practice.

    Enhanced Productivity Using the Cray Performance Analysis Toolset

    The purpose of an application performance analysis tool is to help the user identify whether or not their application is running efficiently on the computing resources available. However, the scale of current and future high-end systems, as well as increasing system software and architecture complexity, brings a new set of challenges to today's performance tools. In order to achieve high performance on these petascale computing systems, users need a new infrastructure for performance analysis that can handle the challenges associated with multiple levels of parallelism, hundreds of thousands of computing elements, and novel programming paradigms that result in the collection of massive sets of performance data. In this paper we present the Cray Performance Analysis Toolset, which is set on an evolutionary path to address the application performance analysis challenges associated with these massive computing systems by highlighting relevant data and by bringing Cray optimization knowledge to a wider set of users.

    Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey

    In graph machine learning, data collection, sharing, and analysis often involve multiple parties, each of which may require varying levels of data security and privacy. To this end, preserving privacy is of great importance in protecting sensitive information. In the era of big data, the relationships among data entities have become unprecedentedly complex, and more applications utilize advanced data structures (i.e., graphs) that can support network structures and relevant attribute information. To date, many graph-based AI models have been proposed (e.g., graph neural networks) for various domain tasks, like computer vision and natural language processing. In this paper, we focus on reviewing privacy-preserving techniques of graph machine learning. We systematically review related works from the data to the computational aspects. We first review methods for generating privacy-preserving graph data. Then we describe methods for transmitting privacy-preserved information (e.g., graph model parameters) to realize the optimization-based computation when data sharing among multiple parties is risky or impossible. In addition to discussing relevant theoretical methodology and software tools, we also discuss current challenges and highlight several possible future research opportunities for privacy-preserving graph machine learning. Finally, we envision a unified and comprehensive secure graph machine learning system.
    Comment: Accepted by SIGKDD Explorations 2023, Volume 25, Issue
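    As a minimal sketch of one family of techniques such a survey covers, parties can share model parameters instead of raw graph data, with a coordinator averaging them and optionally adding Gaussian noise in the spirit of differential privacy. Every name below is ours for illustration, not an API from the paper.

```python
# Hedged sketch: federated averaging of locally trained graph-model parameters,
# with optional Gaussian noise. Not the survey's method, just one technique it reviews.
import numpy as np

def federated_average(client_params: list,
                      noise_std: float = 0.0,
                      seed: int = 0) -> np.ndarray:
    """Average parameter vectors from clients; optionally add Gaussian noise."""
    rng = np.random.default_rng(seed)
    avg = np.mean(client_params, axis=0)
    if noise_std > 0.0:
        # Noise on the aggregate limits what any party learns about the others.
        avg = avg + rng.normal(0.0, noise_std, size=avg.shape)
    return avg

# Three parties train on their own graphs, then share only parameters.
clients = [np.random.default_rng(i).standard_normal(8) for i in range(3)]
global_params = federated_average(clients, noise_std=0.1)
print(global_params)
```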

    Developmental Flight Test Lessons Learned from Open Architecture Software in the Mission Computer of the U.S. Navy E-2C Group II Aircraft

    The Naval Air Systems Command commissioned the E-2C Hawkeye Group II Mission Computer Replacement Program and tasked Air Test and Evaluation Squadron Two-Zero and the E-2C Integrated Test Team to evaluate the integration of the form, fit, and function of the OL-698/ASQ Mission Computer Replacement (MCR) for replacement of the Litton L-304 Mission Computer in the E-2C Group II configured aircraft. As part of the life cycle support of the E-2C aircraft, the MCR configuration fields a new, more reliable Commercial-off-the-Shelf (COTS) hardware system and preserves the original software investment by emulating the existing Litton Instructional Set Architecture (LISA) legacy code. The investment in the LISA software is maintained by incorporating Northrop Grumman Space Technology's Reconfigurable Processor for Legacy Applications Code Execution (RePLACE) software re-hosting technique. Conducting developmental test of robust software systems, such as the MCR and its associated software, presented dramatically different challenges from traditional developmental testing. A series of lessons was learned from particular discrepancies and deficiencies discovered over the six-month flight test period. Specific deficiencies illustrate where proper planning could ease the difficulties encountered in software testing. Keys to successful developmental software tests include having the appropriate personnel on the test team with the proper equipment and capability. Equally important, inadequate configuration management creates more problems than it fixes. Software re-programming can provide faster fixes than traditional developmental test. The flexibility of software programming makes configuration management a challenge as multiple versions become available in a short amount of time. Multiple versions of software heighten the risk of configuration management breakdown during the limited number of available flight tests. Each re-programmed version potentially fixes targeted deficiencies, but can also cause new issues in functional areas already tested. Inherently, regression testing impacts the schedule. Software testing requires a realistic schedule that the author believes should compensate for anticipated problems. Data collection, reduction, and analysis always prove to be valuable in developmental testing. A solid instrumentation plan for data collection from all parties involved in flight tests, especially data link network tests, is critical for troubleshooting discovered deficiencies. Software testing is relatively new to the developmental test world and can be seen as the way of the future. Software upgrades offer program managers a potentially cost-effective option in the face of aging avionics systems. With realistic planning and configuration management, the cost and performance effectiveness of software upgrades and development is more likely to be realized.

    Development of Rainfall Model using Meteorological Data for Hydrological Use

    At present, forecasting unpredictable weather such as heavy rainfall is one of the most important challenges for any equipped meteorological center. In addition, the incidence of significant weather events is expected to rise in the near future due to climate change, which motivates further studies. This study introduces a rainfall model developed using selected rainfall parameters with the aim of determining rainfall depth in a catchment area. The proposed model utilizes rainfall amount, temperature, humidity, and pressure records taken from selected stations in Peninsular Malaysia, analyzed using a multiple regression model in SPSS. Seven meteorological stations in Peninsular Malaysia were selected for data collection from 1997 to 2007: Senai, Kuantan, Melaka, Subang, Ipoh, Bayan Lepas, and Chuping. Multiple regression analysis in the Statistical Package for the Social Sciences (SPSS) software was used to analyze the eleven years (1997-2007) of meteorological data. The Senai rainfall model gives accurate results compared with observed rainfall data, and the model was validated with data from the Kota Tinggi station. The analysis shows that the selected meteorological parameters influence rainfall development. As a result, the rainfall model developed for Senai can be used in the Kota Tinggi catchment area within its limiting boundaries, as the two stations are close to one another. The amounts of rainfall at the Senai and Kota Tinggi stations were then compared, and the calibration analysis shows that the proposed rainfall model can be used in both areas.
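    The regression step described above can be reproduced outside SPSS. The sketch below shows an equivalent ordinary least squares fit of rainfall on temperature, humidity, and pressure; the CSV file and column names are hypothetical placeholders for the station records.

```python
# Hedged sketch of the SPSS-style multiple regression behind the rainfall model.
# File and column names are assumptions for illustration, not the study's data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("senai_1997_2007.csv")  # hypothetical station records
X = sm.add_constant(df[["temperature", "humidity", "pressure"]])
model = sm.OLS(df["rainfall"], X).fit()
print(model.summary())  # coefficients, R^2, and p-values, as SPSS reports

# Validation against another station (as done with Kota Tinggi in the study)
# would apply the fitted coefficients to that station's predictors and compare
# the predictions against its observed rainfall.
```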

    Concept drift from 1980 to 2020: a comprehensive bibliometric analysis with future research insight

    In nonstationary environments, high-dimensional data streams are generated unceasingly, and the underlying distribution of the training and target data may change over time. Such drifts are labeled concept drift in the literature. Learning from evolving data streams demands adaptive or evolving approaches to handle concept drifts, which is a relatively new research area. In this effort, a wide-ranging comparative analysis of concept drift is presented to highlight state-of-the-art approaches, covering the last four decades, from 1980 to 2020. Considering the scope and discipline, the Web of Science Core Collection is taken as the basis of this study, and 1,564 publications related to concept drift are retrieved. Through classification and feature analysis of the valid literature data, bibliometric indicators are revealed at the levels of countries/regions, institutions, and authors. The overall analyses of publications, citations, and cooperation networks unveil not only the most authoritative publications but also the most prolific institutions, influential authors, dynamic networks, and more. Furthermore, deeper text mining analyses, such as burst detection, co-occurrence analysis, timeline view analysis, and bibliographic coupling analysis, are conducted to disclose the current challenges and future research directions. This paper serves as a reference for further research on concept drift, highlighting emerging and trending topics and possible research directions through several graphs visualized using the VOSviewer and CiteSpace software.
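    As an illustration of the co-occurrence analysis mentioned above, keyword pairs can be counted across bibliographic records; tools such as VOSviewer build their maps from this kind of matrix. The record format below is an assumption for illustration, not the study's dataset.

```python
# Hedged sketch: build a keyword co-occurrence count from bibliographic records.
# The sample records are fabricated; real input would be exported WoS metadata.
from collections import Counter
from itertools import combinations

records = [
    ["concept drift", "data stream", "ensemble learning"],
    ["concept drift", "online learning"],
    ["data stream", "concept drift", "adaptation"],
]

pairs = Counter()
for keywords in records:
    # Sort and deduplicate so each unordered pair is counted once per record.
    for a, b in combinations(sorted(set(keywords)), 2):
        pairs[(a, b)] += 1

for (a, b), n in pairs.most_common(5):
    print(f"{a} <-> {b}: {n}")
```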