
    Deep Incremental Learning of Imbalanced Data for Just-In-Time Software Defect Prediction

    This work stems from three observations on prior Just-In-Time Software Defect Prediction (JIT-SDP) models. First, prior studies treat the JIT-SDP problem solely as a classification problem. Second, prior JIT-SDP studies do not consider that class-balancing preprocessing may change the underlying characteristics of software changeset data. Third, prior JIT-SDP incremental learning models address only a single source of concept drift, the evolution of class imbalance. We propose an incremental learning framework for JIT-SDP called CPI-JIT. First, in addition to a classification modeling component, the framework includes a time-series forecast modeling component in order to learn the temporal interdependence among changesets. Second, the framework features a purpose-built over-sampling technique based on SMOTE and principal curves, called SMOTE-PC, which preserves the underlying distribution of software changeset data. Within this framework, we propose an incremental deep neural network model called DeepICP. In an evaluation using \numprojs software projects, we show that: 1) SMOTE-PC improves the model's predictive performance; 2) for some software projects, harnessing the temporal interdependence of software changesets benefits defect prediction; and 3) principal curves summarize the underlying distribution of changeset data and reveal a new source of concept drift that the DeepICP model is designed to adapt to.
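
    The abstract does not spell out the SMOTE-PC algorithm itself. As a rough, hypothetical sketch of the interpolation-based over-sampling that SMOTE performs (and on which SMOTE-PC is said to build), the following uses only NumPy; the projection onto principal curves that distinguishes SMOTE-PC is not described in the abstract and is therefore omitted.

        # Minimal SMOTE-style over-sampling sketch (hypothetical; the
        # principal-curve projection step of SMOTE-PC is omitted).
        import numpy as np

        def smote_like_oversample(X_min, n_new, k=5, rng=None):
            """Generate n_new synthetic minority samples by interpolating between
            a randomly chosen minority sample and one of its k nearest
            minority-class neighbours (plain SMOTE behaviour)."""
            rng = np.random.default_rng(rng)
            n = len(X_min)
            # Pairwise distances within the minority class.
            d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            neighbours = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours per sample
            synth = np.empty((n_new, X_min.shape[1]))
            for i in range(n_new):
                a = rng.integers(n)                        # random minority sample
                b = neighbours[a, rng.integers(k)]         # one of its neighbours
                lam = rng.random()                         # interpolation weight
                synth[i] = X_min[a] + lam * (X_min[b] - X_min[a])
            return synth

        # Example: augment a toy minority class of 20 samples with 30 synthetic points.
        X_minority = np.random.default_rng(0).normal(size=(20, 4))
        X_synthetic = smote_like_oversample(X_minority, n_new=30, k=5, rng=1)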

    Joint production, quality control and maintenance policies subject to quality-dependent demand

    This thesis seeks a solution, by means of stochastic optimal control, for an unreliable production system with product quality control and quality-dependent demand. The system consists of a single machine producing a single product type (M1P1), subject to breakdowns and random repairs, and must satisfy a non-constant customer demand rate that responds to the quality of the parts received. Since the machine produces a fraction of non-compliant products, the products are inspected to reduce the number of bad parts delivered to the customer. The inspection is performed continuously on a fraction of the production; approved products are returned to the production line, while bad products are discarded. The objective of this study is to provide an optimal quality control and production policy that maximizes the net revenue, consisting of the gross revenue less the costs of inventory, shortage, inspection, maintenance and no-quality parts. The main decision variables are the sampling rate of the quality control system and the finished-product inventory threshold. The demand function reacts to the average outgoing quality (AOQ) level of finished products. In the third chapter of this study, preventive maintenance and dynamic pricing policies are added to the optimal policy described above. To find the policy parameters that maximize the net production revenue, a simulation approach is implemented as an experimental design and its results are used in a response surface methodology. To implement an experimental design that reflects the model's characteristics, such as its continuous nature, first, the probability of defectiveness is modelled as a continuous variable that grows with the machine's age up until its next maintenance. Second, to reflect the effect of the quality control process, which yields an average outgoing quality rather than a raw defectiveness probability, the AOQ is built as a function whose independent variable is the instantaneous value of this defectiveness probability. Third, using prospect-theory assumptions, a continuous demand function is created that responds to the level of defectiveness delivered to the client (AOQ) by determining the required demand per unit of time. Finally, to represent the machine's hedging-point manufacturing policy, the finished-product inventory level is introduced as a variable in the experimental design. In a nutshell, as the system's age (At) increases, the probability of defectiveness rises and the demand per unit of time falls; this continues until the next maintenance action, which restores all factors to their initial conditions. Using this simulation-based optimization approach, an experiment is designed and implemented to tune the decision variables of the policy and maximize the objective function, the average net revenue (ANR). The decision variables, which are of both statistical and practical interest, are the finished-product inventory threshold (Z), the proportion of inspection (F) and the preventive maintenance thresholds (Mk or Pk).
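
    As a minimal, hypothetical sketch of the hedging-point logic and quality-dependent demand described above, the following simulation uses only the Python standard library; the functional forms, costs and parameter values are illustrative assumptions, not those of the thesis.

        # Hypothetical hedging-point production policy with quality-dependent
        # demand; all functional forms and numbers are illustrative assumptions.
        import random

        def simulate(Z=50.0, F=0.3, horizon=10_000, seed=0):
            """Simulate one trajectory and return the average net revenue per period.
            Z : finished-product inventory threshold (hedging point)
            F : fraction of production that is inspected"""
            rng = random.Random(seed)
            inventory, age, revenue = 0.0, 0.0, 0.0
            u_max, price = 12.0, 10.0                 # max production rate, unit price
            for _ in range(horizon):
                p_defect = min(0.3, 0.002 * age)      # defectiveness grows with machine age
                aoq = p_defect * (1.0 - F)            # defects that escape inspection
                demand = 10.0 * (1.0 - 2.0 * aoq)     # demand shrinks as AOQ worsens
                # Hedging-point rule: produce at full speed below Z, track demand at Z.
                rate = u_max if inventory < Z else min(u_max, demand)
                good = rate * (1.0 - p_defect * F)    # inspected defects are scrapped
                sold = min(inventory + good, demand)
                inventory = max(0.0, inventory + good - demand)
                revenue += price * sold - 0.5 * inventory - 1.0 * F * rate
                age += 1.0
                if rng.random() < 0.01:               # random breakdown and repair
                    age = 0.0                         # maintenance restores initial conditions
            return revenue / horizon

        print(simulate(Z=50.0, F=0.3))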

    Towards quality programming in the automated testing of distributed applications

    PhD Thesis. Software testing is a very time-consuming and tedious activity and accounts for over 25% of the cost of software development. In addition to its high cost, manual testing is unpopular and often inconsistently executed. Software Testing Environments (STEs) overcome the deficiencies of manual testing by automating the test process and integrating testing tools to support a wide range of test capabilities. Most prior work on testing concerns single-threaded applications; this thesis is a contribution to the testing of distributed applications, which has not been well explored. To address two crucial issues in testing, namely when to stop testing and how good the software is after testing, a statistics-based integrated test environment, which extends the testing concept of Quality Programming to distributed applications, is presented. It provides automated support for test execution by the Test Driver, test development by the SMAD Tree Editor and the Test Data Generator, test failure analysis by the Test Results Validator and the Test Paths Tracer, test measurement by the Quality Analyst, test management by the Test Manager, and test planning by the Modeller. These tools are integrated around a public, shared data model describing the data entities and relationships they manipulate. This allows the test process to enter the life cycle early, because quality planning and message-flow routings are defined during modelling. Once modelling and requirements specification are complete, the test process can proceed concurrently with software design and implementation. A simple banking application written using Java Remote Method Invocation (RMI) and Java Database Connectivity (JDBC) illustrates how an application is fitted into the integrated test environment. Automated test execution through mobile agents across multiple platforms is also illustrated on this 3-tier client/server application. Sponsors: The National Science Council, Taiwan; the Ministry of National Defense, Taiwan.
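
    As a hypothetical sketch only, the snippet below shows the general shape of a test driver that records results in a shared data model and applies a simple statistics-based "when to stop testing" rule; the class names, the stopping rule and the thresholds are invented for illustration and are not taken from the thesis.

        # Hypothetical test driver writing into a shared data model with a crude
        # statistics-based stopping rule; names and thresholds are illustrative.
        from dataclasses import dataclass, field

        @dataclass
        class SharedDataModel:
            """Minimal stand-in for the public data model the tools share."""
            results: list = field(default_factory=list)   # (test_id, passed) pairs

            def failure_rate(self, window=50):
                recent = self.results[-window:]
                return sum(1 for _, ok in recent if not ok) / max(1, len(recent))

        def run_tests(test_cases, model, target_failure_rate=0.02):
            """Execute test cases until the recent failure rate drops below a target."""
            for test_id, test_fn in test_cases:
                try:
                    test_fn()
                    model.results.append((test_id, True))
                except AssertionError:
                    model.results.append((test_id, False))
                if len(model.results) >= 50 and model.failure_rate() < target_failure_rate:
                    break
            return model.failure_rate()

        # Example usage with two trivial test cases.
        model = SharedDataModel()
        cases = [("deposit", lambda: None), ("withdraw", lambda: None)]
        print(run_tests(cases, model))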

    Pattern formation in convective media

    Several models of convection in a thin layer of liquid (gas) with poorly heat-conducting boundaries are considered. These models demonstrate rich dynamics of pattern formation and structural phase transitions. The primary analysis of pattern formation in such a system is performed using the well-studied Swift-Hohenberg model. The more advanced Proctor-Sivashinsky model is examined in order to study second-order structural phase transitions, both between patterns with translational invariance and between structures with broken translational invariance that nevertheless keep long-range order. The spatial spectrum of the arising structures and a visual estimate of the number of defects are analyzed, and a relation between the density of defects and the spectral characteristics of the structure is found. We also discuss the effect of noise on the formation of structural defects. It is shown that, within the framework of the Proctor-Sivashinsky model with an additional term that takes inertial effects into account, large-scale vortex structures arise as a result of a secondary modulation instability.
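
    For reference, the Swift-Hohenberg model used for the primary pattern-formation analysis is commonly written in the standard form below; the paper may use a rescaled variant, and the additional inertial term of the extended Proctor-Sivashinsky model is not shown.

        % Standard form of the Swift-Hohenberg equation; u is the order parameter
        % and \varepsilon the supercriticality (distance above the convection
        % threshold). The paper's specific scaling may differ.
        \begin{equation}
          \partial_t u = \varepsilon\, u - \left(1 + \nabla^{2}\right)^{2} u - u^{3}
        \end{equation}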

    Liability Rules for the Digital Age

    With legislative proposals for two directives published in September 2022, the European Commission aims to adapt the existing liability system to the challenges posed by digitalisation. One of the proposals is related, and limited, to liability for artificial intelligence (AI) systems, but the other contains nothing less than a full revision of the 1985 Product Liability Directive, which lies at the heart of European tort law. Whereas the current Product Liability Directive largely followed the model of US law, the revised version breaks new ground. It does not limit itself to expanding the concept of product to include intangible digital goods such as software and data as well as related services, important enough in itself, but also targets the new intermediaries of e-commerce as liable parties. As such, the proposal for a new Product Liability Directive is a great leap forward and has the potential to grow into a worldwide benchmark in the field. In comparison, the proposal for a directive on AI liability is much harder to assess, and it remains questionable whether a second directive is actually needed at this stage of technological development. Peer reviewed.

    Development of yttria-stabilized zirconia and graphene coatings obtained by suspension plasma spraying: Thermal stability and influence on mechanical properties

    This study investigated the feasibility of depositing graphene nanoplatelet (GNP)-reinforced yttria-stabilized zirconia (YSZ) composite coatings. The coatings were deposited from an ethanol-based mixed YSZ and GNP suspension using suspension plasma spraying (SPS). Raman spectroscopy confirmed the presence of GNPs in the YSZ matrix, and scanning electron microscopy (SEM) analysis revealed a desired columnar microstructure with GNPs distributed predominantly in the inter-columnar spacing of the YSZ matrix. The as-deposited YSZ-GNP coatings were subjected to different isothermal treatments (400, 500, and 600 °C for 8 h) to study the thermal stability of the GNPs in the composite coatings. Raman analysis showed the retention of GNPs in specimens exposed to temperatures up to 500 °C, although the defect concentration in the graphitic structure increased with increasing temperature. Only a marginal effect on the mechanical properties (i.e., hardness and fracture toughness) was observed for the isothermally treated coatings.

    Snoring: A Noise in Defect Prediction Datasets

    Defect prediction aims at identifying software artifacts that are likely to exhibit a defect. Its main purpose is to reduce the cost of testing and code review by letting developers focus on specific artifacts. Several researchers have worked on improving the accuracy of defect estimation models using techniques such as tuning, re-balancing, or feature selection. Ultimately, the reliability of a prediction model depends on the quality of the dataset. Therefore, effort has been spent on identifying sources of noise in the datasets and on how to deal with them, including defect misclassification and defect origin. A key component of defect prediction approaches is the attribution of a defect to a project's release. Although developers might be able to attribute a defect to a specific release, in most cases a defect is attributed to the release after which it was discovered. However, in many circumstances a defect is only discovered several releases after its introduction. This can introduce a bias in the dataset, i.e., the intermediate releases are treated as defect-free and only the later release as defect-prone. We call this phenomenon a "sleeping defect". We call "snoring" the phenomenon in which classes are affected only by sleeping defects, and are therefore treated as defect-free until the defect is discovered. In this work, we analyze data from more than 4,000 bugs and 600 releases of 20 open-source projects from the Apache ecosystem to investigate: 1) the magnitude of sleeping defects, 2) the magnitude of snoring classes, 3) whether snoring impacts the evaluation of classifiers, 4) whether snoring impacts classifier accuracy, and 5) whether removing the last releases of data is beneficial in reducing the negative impact of snoring noise on classifier accuracy. Our results show that, on average across projects: 1) most of the defects in a project sleep for more than 19% of the existing releases; 2) the missing rate is more than 50% unless we remove more than 20% of the releases; 3) the relative error in measuring classifier accuracy on a dataset with snoring is about 100% in all accuracy metrics other than AUC; 4) the presence of snoring decreases the accuracy of each of the 15 classifiers on each of the 6 accuracy metrics (for instance, Recall, F1, Kappa and Matthews decrease by about 80%); and 5) removing one release of data is better than removing no data in all accuracy metrics (for instance, Recall, F1, Kappa and Matthews increase by about 30%).
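
    As a hypothetical illustration of the labelling bias described above, the sketch below shows how attributing a defect only to the release in which it was discovered marks the intermediate releases of the affected class as defect-free; the data structure and values are invented for illustration.

        # Hypothetical illustration of the "snoring" bias: a defect introduced in
        # an early release but discovered only later leaves the intermediate
        # releases of the affected class labelled defect-free.

        def label_releases(releases, defects):
            """Return (realistic, snoring) labels for one class. Each defect is a
            dict giving the release it was introduced in and discovered in."""
            realistic, snoring = {}, {}
            for r in releases:
                # Ground-truth label: defective from introduction until discovery.
                realistic[r] = any(d["introduced"] <= r <= d["discovered"] for d in defects)
                # Biased label: defective only in the discovery release.
                snoring[r] = any(d["discovered"] == r for d in defects)
            return realistic, snoring

        releases = [1, 2, 3, 4, 5]
        defects = [{"introduced": 1, "discovered": 4}]   # slept through releases 1-3
        realistic, snoring = label_releases(releases, defects)
        print(realistic)  # {1: True, 2: True, 3: True, 4: True, 5: False}
        print(snoring)    # {1: False, 2: False, 3: False, 4: True, 5: False}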

    Automatizuotų transporto priemonių valdymas: civilinės atsakomybės reglamentavimas Europos Sąjungoje (Operation of Automated Vehicles: Regulation of Civil Liability in the European Union)

    The aim of this article is to propose an option for the regulation of civil liability for connected autonomous vehicles (CAVs) and autonomous vehicles (AVs) at the European Union level, in light of the introduction of Connected Automated Driving (CAD) on the common market.