24,441 research outputs found

    Implementation of Software Process Improvement Through TSPi in Very Small Enterprises

    Get PDF
    This article reports an experience in a very small enterprise aimed at improving software quality in terms of testing and process productivity. A process customized from the organization's current process and based on TSPi was defined, and the team was trained in it. The pilot project had schedule and budget constraints. The process began by gathering historical data from previous projects in order to build a measurement repository. The project was then launched and metrics were collected. Finally, the results were analyzed and the improvements were verified.

    Large-Scale Detection of Non-Technical Losses in Imbalanced Data Sets

    Get PDF
    Non-technical losses (NTL) such as electricity theft cause significant harm to our economies; in some countries they may amount to up to 40% of the total electricity distributed. Detecting NTL requires costly on-site inspections, so accurate prediction of NTL for customers using machine learning is crucial. To date, related research largely ignores that the two classes of regular and non-regular customers are highly imbalanced and that NTL proportions may change, and it mostly considers small data sets, often making it impossible to deploy the results in production. In this paper, we present a comprehensive approach to assess three NTL detection models for different NTL proportions in large real-world data sets of hundreds of thousands of customers: Boolean rules, fuzzy logic and Support Vector Machine. This work has yielded promising results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart meter research in order to report effectiveness on imbalanced and large real-world data sets.
    Comment: Proceedings of the Seventh IEEE Conference on Innovative Smart Grid Technologies (ISGT 2016)
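
    A minimal sketch of the kind of imbalance-aware classifier assessed in the paper, here a Support Vector Machine with class weighting; the features, NTL proportion and synthetic data below are illustrative assumptions, not the authors' pipeline.

        # Train one of the three detector types (an SVM) on an imbalanced NTL-style
        # data set; class_weight="balanced" counteracts the regular/non-regular skew.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import matthews_corrcoef

        rng = np.random.default_rng(0)
        n = 10_000                                 # stand-in for 100Ks of customers
        X = rng.normal(size=(n, 12))               # e.g. monthly consumption features
        y = (rng.random(n) < 0.05).astype(int)     # ~5% assumed NTL proportion

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        clf = LinearSVC(class_weight="balanced", C=1.0)
        clf.fit(X_tr, y_tr)

        # Matthews correlation stays informative when one class dominates
        print(matthews_corrcoef(y_te, clf.predict(X_te)))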

    Understanding object-oriented source code from the behavioural perspective

    Get PDF
    Comprehension is a key activity that underpins a variety of software maintenance and engineering tasks. Understanding object-oriented systems is hampered by the fact that the code segments related to a user-level function tend to be distributed across the system. We introduce a tool-supported code extraction technique that addresses this issue. Given a minimal amount of information about a behavioural element of the system that is of interest (such as a use case), it extracts a trail of the methods (and method invocations) through the system that are needed to understand the implementation of that element. We demonstrate the feasibility of our approach by implementing it in a code extraction tool, presenting a case study and evaluating the approach and tool against a set of established criteria for program comprehension tools.
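
    As a rough illustration of the trail idea (not the authors' tool), the sketch below walks a static call graph from a seed method associated with a use case and collects the method invocations reachable from it; the graph and method names are invented.

        from collections import deque

        # Hypothetical call graph: method -> methods it invokes
        call_graph = {
            "Checkout.submitOrder": ["Cart.total", "Payment.charge"],
            "Cart.total": ["Item.price"],
            "Payment.charge": ["Gateway.send"],
            "Item.price": [],
            "Gateway.send": [],
        }

        def extract_trail(graph, seed):
            """Breadth-first walk returning the method invocations on the trail from seed."""
            seen, trail, queue = {seed}, [], deque([seed])
            while queue:
                method = queue.popleft()
                for callee in graph.get(method, []):
                    trail.append((method, callee))      # one method invocation
                    if callee not in seen:
                        seen.add(callee)
                        queue.append(callee)
            return trail

        for caller, callee in extract_trail(call_graph, "Checkout.submitOrder"):
            print(f"{caller} -> {callee}")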

    Supporting Defect Causal Analysis in Practice with Cross-Company Data on Causes of Requirements Engineering Problems

    Full text link
    [Context] Defect Causal Analysis (DCA) represents an efficient practice to improve software processes. While knowledge of cause-effect relations is helpful to support DCA, collecting cause-effect data may require significant effort and time. [Goal] We propose and evaluate a new DCA approach that uses cross-company data to support the practical application of DCA. [Method] We collected cross-company data on causes of requirements engineering problems from 74 Brazilian organizations and built a Bayesian network. Our DCA approach uses the diagnostic inference of the Bayesian network to support DCA sessions. We evaluated our approach by applying a model for technology transfer to industry and conducted three consecutive evaluations: (i) in academia, (ii) with industry representatives of the Fraunhofer Project Center at UFBA, and (iii) in an industrial case study at the Brazilian National Development Bank (BNDES). [Results] We received positive feedback in all three evaluations, and the cross-company data was considered helpful for determining main causes. [Conclusions] Our results strengthen our confidence that supporting DCA with cross-company data is promising and should be further investigated.
    Comment: 10 pages, 8 figures, accepted for the 39th International Conference on Software Engineering (ICSE'17)
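
    The toy sketch below shows diagnostic inference in a small Bayesian network, in the spirit of asking "given an observed requirements problem, which cause is most probable?"; the variables, probabilities and the choice of the pgmpy library are illustrative assumptions, whereas the paper's network is built from cross-company data.

        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        # One hypothetical cause -> effect edge; states are 0 = absent, 1 = present
        model = BayesianNetwork([("LackOfDomainKnowledge", "IncompleteRequirements")])
        cpd_cause = TabularCPD("LackOfDomainKnowledge", 2, [[0.7], [0.3]])
        cpd_effect = TabularCPD(
            "IncompleteRequirements", 2,
            [[0.9, 0.3],   # P(effect absent | cause absent / present)
             [0.1, 0.7]],  # P(effect present | cause absent / present)
            evidence=["LackOfDomainKnowledge"], evidence_card=[2],
        )
        model.add_cpds(cpd_cause, cpd_effect)

        # Diagnostic direction: observe the problem, infer the probable cause
        infer = VariableElimination(model)
        print(infer.query(["LackOfDomainKnowledge"],
                          evidence={"IncompleteRequirements": 1}))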

    Improving Knowledge-Based Systems with statistical techniques, text mining, and neural networks for non-technical loss detection

    Get PDF
    Power distribution companies currently face several problems related to energy losses. For example, energy used might not be billed due to illegal manipulation or a breakdown in the customer's measurement equipment. These types of losses are called non-technical losses (NTLs), and they are usually greater than the losses due to the distribution infrastructure (technical losses). Traditionally, a large number of studies have used data mining to detect NTLs, but to the best of our knowledge, none involve a Knowledge-Based System (KBS) created from the knowledge and expertise of the inspectors. In the present study, such a KBS was built and combined with text mining, neural networks, and statistical techniques for the detection of NTLs. These techniques were used to extract information from samples, and this information was translated into rules, which were joined to the rules generated from the inspectors' knowledge. The system was tested with real samples extracted from Endesa databases. Endesa is one of the most important distribution companies in Spain; it plays an important role in international markets in both Europe and South America and has more than 73 million customers.
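
    A rough sketch of the rule-joining idea: rules derived from the models (statistics, text mining, a neural network score) are added to the inspectors' expert rules, and a customer is flagged when enough rules fire. The field names, thresholds and voting scheme are hypothetical, not Endesa's system.

        # Expert rules come from inspectors; learned rules are distilled from models
        expert_rules = [
            ("tampering_mentioned", lambda c: "manipul" in c["inspector_notes"].lower()),
            ("long_zero_consumption", lambda c: c["months_zero_consumption"] >= 3),
        ]
        learned_rules = [
            ("consumption_drop", lambda c: c["consumption_drop_pct"] > 0.5),  # statistical
            ("nn_score_high",    lambda c: c["nn_score"] > 0.8),              # neural network
        ]

        def flag_for_inspection(customer, min_rules_fired=2):
            fired = [name for name, rule in expert_rules + learned_rules if rule(customer)]
            return len(fired) >= min_rules_fired, fired

        customer = {"inspector_notes": "possible manipulation of meter",
                    "months_zero_consumption": 4,
                    "consumption_drop_pct": 0.6,
                    "nn_score": 0.4}
        # -> (True, ['tampering_mentioned', 'long_zero_consumption', 'consumption_drop'])
        print(flag_for_inspection(customer))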

    Software development: A paradigm for the future

    Get PDF
    A new paradigm for software development that treats software development as an experimental activity is presented. It provides built-in mechanisms for learning how to develop software better and for reusing previous experience in the form of knowledge, processes, and products. It uses models and measures to aid in the tasks of characterization, evaluation and motivation. An organizational scheme is proposed for separating the project-specific focus from the organization's learning and reuse focus in software development. The implications of this approach for corporations, research and education are discussed, and some research activities currently underway at the University of Maryland that support this approach are presented.

    Quality and standards in primary initial teacher training: inspected 1998/2002

    Get PDF

    Moving forward with teacher professional learning

    Get PDF