
    Towards Effective Bug Triage with Software Data Reduction Techniques

    Software companies spend over 45 percent of their cost on dealing with software bugs. An inevitable step in fixing bugs is bug triage, which aims to assign the right developer to a new bug. To reduce the time spent on this manual work, text classification techniques are applied to automate bug triage. In this paper, we address the problem of data reduction for bug triage, i.e., how to reduce the scale and improve the quality of bug data. We combine instance selection with feature selection to simultaneously reduce the data scale along the bug dimension and the word dimension. To determine the order in which instance selection and feature selection are applied, we extract attributes from historical bug data sets and build a predictive model for a new bug data set. We empirically investigate the performance of data reduction on a total of 600,000 bug reports from two large open-source projects, Eclipse and Mozilla. The results show that our data reduction effectively reduces the data scale and improves the accuracy of bug triage. Our work provides an approach to leveraging data-processing techniques to form reduced, high-quality bug data in software development and maintenance.
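The combination of instance selection and feature selection described above can be sketched in a few lines. The word lists, developer names, and the chosen ordering below are invented for illustration; the paper's actual techniques (and its predictive model for choosing the ordering) are more elaborate.

```python
from collections import Counter

# Toy bug reports: (word list, assigned developer). Data and names are
# illustrative only; the paper uses real Eclipse/Mozilla reports.
reports = [
    (["crash", "startup", "npe"], "alice"),
    (["crash", "startup", "npe"], "alice"),      # duplicate report
    (["ui", "button", "misaligned"], "bob"),
    (["crash", "render", "npe"], "alice"),
    (["typo", "docs"], "carol"),
]

def select_features(reports, k):
    """Keep only the k most frequent words (word-dimension reduction)."""
    freq = Counter(w for words, _ in reports for w in words)
    keep = {w for w, _ in freq.most_common(k)}
    return [([w for w in words if w in keep], dev) for words, dev in reports]

def select_instances(reports):
    """Drop reports that are empty or duplicates (bug-dimension reduction)."""
    seen, out = set(), []
    for words, dev in reports:
        key = (tuple(sorted(words)), dev)
        if words and key not in seen:
            seen.add(key)
            out.append((words, dev))
    return out

# One possible ordering (feature selection first, then instance selection);
# the paper predicts the better ordering per data set.
reduced = select_instances(select_features(reports, 3))
print(len(reduced), "of", len(reports), "reports kept")
```

Applying feature selection first can empty out or collapse some reports, which is exactly why the ordering of the two reduction steps matters.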

    Monitoring bioinspired fibrillar grippers by contact observation and machine learning

    The remarkable properties of bio-inspired microstructures make them attractive for a wide range of applications, including industrial, medical, and space applications. However, their implementation, especially as grippers for pick-and-place robotics, can be compromised by multiple factors. The most common are alignment imperfections with the target object, unbalanced stress distribution, contamination, defects, and roughness at the gripping interface. In the present work, three different approaches to assessing the contact phenomena between patterned structures and a target object are presented. First, in-situ observation and machine learning are combined to realize accurate real-time predictions of adhesion performance; the trained supervised learning models successfully predict adhesion performance from the contact signature. Second, two newly developed optical systems are compared for verifying the correct grasping of various target objects (rough or transparent) by looking through the microstructures. Last, model experiments are provided for direct comparison with simulation efforts aimed at predicting the contact signature and at analyzing the rate and preload dependency of the adhesion strength of a soft polymer film in contact with a roughness-like surface topography. The results of this thesis open new perspectives for improving the reliability of handling systems using bioinspired microstructures.
    Leibniz Competition Grant MUSIGAND (No. K279/2019) awarded to Eduard Arz.
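As a toy illustration of predicting adhesion performance from a contact signature, here is a minimal nearest-neighbour sketch. The two features (contact-area fraction and intensity uniformity), the labels, and all numbers are assumptions for illustration only, not the thesis's actual models or data, which are trained on in-situ microscopy images.

```python
import math

# Toy "contact signatures": (contact-area fraction, intensity uniformity).
train = [
    ((0.95, 0.90), "high"),   # full, uniform contact -> strong adhesion
    ((0.90, 0.85), "high"),
    ((0.40, 0.30), "low"),    # partial, patchy contact -> weak adhesion
    ((0.35, 0.50), "low"),
]

def predict(sig, k=3):
    """k-NN classifier: majority label of the k nearest training signatures."""
    nearest = sorted(train, key=lambda t: math.dist(sig, t[0]))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

print(predict((0.92, 0.88)))  # near the full-contact examples
```

The point of the sketch is only the mapping direction: from an observed contact signature to a predicted adhesion class, before the gripper is actually loaded.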

    ADVANCES IN SYSTEM RELIABILITY-BASED DESIGN AND PROGNOSTICS AND HEALTH MANAGEMENT (PHM) FOR SYSTEM RESILIENCE ANALYSIS AND DESIGN

    Failures of engineered systems can lead to significant economic and societal losses. Despite tremendous efforts (e.g., $200 billion annually) devoted to reliability and maintenance, unexpected catastrophic failures still occur. To minimize losses, the reliability of engineered systems must be ensured throughout their life-cycle amidst uncertain operational conditions and manufacturing variability. In most engineered systems, the required system reliability level under adverse events is achieved by adding system redundancies and/or conducting system reliability-based design optimization (RBDO). However, a high level of system redundancy increases a system's life-cycle cost (LCC), and system RBDO cannot ensure system reliability when unexpected loading/environmental conditions arise and unexpected system failures develop. In contrast, a new design paradigm, referred to as resilience-driven system design, can ensure highly reliable system designs under any loading/environmental conditions and system failures while considerably reducing systems' LCC. To facilitate the development of formal methodologies for this design paradigm, this research aims at advancing two essential and co-related research areas: Research Thrust 1, system RBDO, and Research Thrust 2, system prognostics and health management (PHM). In Research Thrust 1, reliability analyses under uncertainty will be carried out at both the component and system levels against critical failure mechanisms. In Research Thrust 2, highly accurate and robust PHM systems will be designed for engineered systems with a single time-scale or multiple time-scales. To demonstrate the effectiveness of the proposed system RBDO and PHM techniques, multiple engineering case studies will be presented and discussed.
    Following the development of Research Thrusts 1 and 2, Research Thrust 3, resilience-driven system design, will establish a theoretical basis and design framework for engineering resilience in a mathematical and statistical context, where engineering resilience will be formulated in terms of system reliability and restoration, and the proposed design framework will be demonstrated with a simplified aircraft control actuator design problem.
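A reliability analysis under uncertainty of the kind mentioned in Research Thrust 1 can be illustrated with a minimal Monte Carlo estimate of a failure probability. The limit state (failure when demand exceeds capacity) and the Gaussian distributions below are assumed toy values, not the dissertation's formulation.

```python
import random

# Monte Carlo reliability sketch: estimate P(failure) = P(demand > capacity)
# for assumed Gaussian capacity and demand. Seeded for reproducibility.
random.seed(42)

def prob_failure(n=100_000, cap_mu=10.0, cap_sd=1.0, dem_mu=7.0, dem_sd=1.5):
    fails = 0
    for _ in range(n):
        capacity = random.gauss(cap_mu, cap_sd)
        demand = random.gauss(dem_mu, dem_sd)
        if demand > capacity:
            fails += 1
    return fails / n

pf = prob_failure()
print(f"estimated failure probability: {pf:.4f}")
```

For these parameters the safety margin is Gaussian with mean 3.0 and standard deviation sqrt(1.0^2 + 1.5^2), so the estimate should land near the analytic value of about 0.048; RBDO then adjusts design variables until such an estimate meets a target reliability.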

    AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD)

    This book is a collection of the accepted papers presented at the Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD), held in conjunction with the 36th AAAI Conference on Artificial Intelligence in 2022. During AIBSD 2022, the attendees addressed the existing issues of data bias and scarcity in artificial intelligence and discussed potential solutions in real-world scenarios. A set of papers presented at AIBSD 2022 was selected for further publication and is included in this book.

    Deep learning in food category recognition

    Integrating artificial intelligence with food category recognition has been a field of research interest for the past few decades. It is potentially one of the next steps in revolutionizing human interaction with food. The modern advent of big data and the development of data-oriented fields like deep learning have driven advances in food category recognition. With increasing computational power and ever-larger food datasets, the approach's full potential has yet to be realized. This survey provides an overview of methods that can be applied to various food category recognition tasks, including detecting type, ingredients, quality, and quantity. We survey the core components for constructing a machine learning system for food category recognition, including datasets, data augmentation, hand-crafted feature extraction, and machine learning algorithms. We place a particular focus on the field of deep learning, including the utilization of convolutional neural networks, transfer learning, and semi-supervised learning. We provide an overview of relevant studies to promote further developments in food category recognition for research and industrial applications.
    Funding: MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8)
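As a minimal illustration of the data augmentation step surveyed above, here is a sketch of two classic transforms (horizontal flip and crop) applied to a tiny grayscale "image" stored as nested lists. Real food-recognition pipelines apply such transforms to large labelled photo datasets before CNN training; this toy image is an assumption for illustration.

```python
# Horizontal flip: mirror each row of the image.
def hflip(img):
    return [row[::-1] for row in img]

# Crop: take an h x w patch starting at (top, left).
def crop(img, top, left, h, w):
    return [row[left:left + w] for row in img[top:top + h]]

img = [
    [0, 1, 2],
    [3, 4, 5],
    [6, 7, 8],
]
print(hflip(img))             # each row reversed
print(crop(img, 0, 1, 2, 2))  # 2x2 patch from the top-right corner
```

Each transform yields a new labelled training example at negligible cost, which is why augmentation is a standard ingredient when food datasets are scarce or imbalanced.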

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations and reviews

    Improving Bug Triaging Using Software Analytics

    Bug fixing is a major activity in software development and maintenance. In this process, bug triaging plays an important role: it assists software managers in allocating their limited resources and allows developers to focus their efforts more efficiently on defects with high severity. Current bug triaging techniques applied in many software organisations may lead to the misclassification of bugs and thus to delays in bug resolution, resulting in degraded software quality and user frustration. An improved bug triaging strategy would help software managers make better decisions by assigning the right priority and severity to bugs, allowing developers to address critical bugs as soon as possible and to set aside trivial ones.
    In this thesis, we leverage analytic approaches to conduct three empirical studies aimed at improving bug triaging techniques. The first study investigates the relation between bug fixes that need supplementary fixes and bugs that have been re-opened. We found that re-opened bugs account for 21.6% to 33.8% of all supplementary bug fixes. A considerable number of re-opened bugs (from 33.0% to 57.5%) had only one associated commit: their original bug reports were prematurely closed. The second study focuses on bugs that yield frequent crashes and impact large numbers of users. We found that these bugs were not prioritised by software managers although they can seriously decrease user-perceived quality and even the reputation of a software organisation. Our third study examines commits that lead to crashes. We found that these commits are often submitted by less experienced developers and that they contain more added and deleted lines of code than other commits. If software organisations can detect the aforementioned problems early in the bug triaging phase, they can increase their development productivity and users' satisfaction while decreasing software maintenance overhead. Using multiple regression and machine learning algorithms, we built statistical models to predict re-opened bugs among bugs that required supplementary fixes (with a precision up to 97.0% and a recall up to 65.3%), bugs with high crash impact (with a precision up to 64.2% and a recall up to 98.3%), and commits inducing future crashes (with a precision up to 61.4% and a recall up to 95.0%). Software organisations can apply our models to improve their bug triaging strategy by assigning bugs to the right developers, avoiding misclassification of bugs, reducing the negative impact of crash-related bugs, and addressing fault-prone code early on, before it impacts a large user base.
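Precision and recall figures like those quoted above can be computed for any predictor from its predicted versus actual labels. The toy labels below are illustrative only, not the thesis's Eclipse/Mozilla data.

```python
# Precision = TP / (TP + FP): of the bugs flagged, how many really were positive.
# Recall    = TP / (TP + FN): of the real positives, how many were flagged.
def precision_recall(actual, predicted, positive=True):
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# e.g. predicting which bugs will be re-opened
actual    = [True, True, True, False, False]
predicted = [True, True, False, False, True]
print(precision_recall(actual, predicted))  # both 2/3 here
```

The asymmetries reported above (e.g. 97.0% precision but 65.3% recall for re-opened bugs) reflect the usual trade-off: a conservative model flags fewer bugs but is right more often when it does.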