467 research outputs found

    Dimensionality Reduction of Quality Objectives for Web Services Design Modularization

    Full text link
    With the increasing use of Service-Oriented Architecture (SOA) in new software development, there is a growing and urgent need to improve current practice in service-oriented design. To improve the design of Web services, the search for Web service interface modularization solutions must, in general, deal with a large set of conflicting quality metrics. Deciding which quality metrics to use, and how, when evaluating generated solutions is always left to the designer, and some of these objectives can be correlated or conflicting. In this paper, we propose a dimensionality reduction approach based on the Non-dominated Sorting Genetic Algorithm (NSGA-II) to address the Web service re-modularization problem. Our approach aims at finding the best reduced set of objectives (i.e., quality metrics) that can generate near-optimal Web service modularization solutions to fix quality issues in Web service interfaces. The algorithm starts with a large number of interface design quality metrics as objectives (e.g., coupling, cohesion, number of ports, number of port types, and number of antipatterns), which are then reduced based on the nonlinear correlation information entropy (NCIE). The statistical analysis of our results, based on a set of 22 real-world Web services provided by Amazon and Yahoo, confirms that our dimensionality reduction approach significantly reduced the number of objectives in several case studies, to a minimum of two objectives, and performed significantly better than state-of-the-art modularization techniques in terms of generating well-designed Web service interfaces for users.
    Master of Science, Software Engineering, College of Engineering & Computer Science, University of Michigan-Dearborn
    https://deepblue.lib.umich.edu/bitstream/2027.42/145687/1/Thesis Report_Hussein Skaf.pdf
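    As a rough illustration of the reduction step, the sketch below prunes correlated objectives from a sampled population. It substitutes Spearman rank correlation for the paper's NCIE measure, and the metric names, threshold, and toy data are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative metric names (assumed, matching the abstract's examples).
METRICS = ["coupling", "cohesion", "num_ports", "num_port_types", "num_antipatterns"]

def reduce_objectives(F, names, threshold=0.9):
    """Greedily keep an objective unless it is strongly correlated
    (|rho| >= threshold) with one already kept.
    F: (n_solutions, n_objectives) objective values from a population."""
    corr, _ = spearmanr(F)          # pairwise rank-correlation matrix
    corr = np.abs(corr)
    kept = []
    for j in range(F.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return [names[j] for j in kept]

# Toy population: 22 candidate modularizations scored on 5 metrics,
# with "num_port_types" almost duplicating "num_ports".
rng = np.random.default_rng(0)
F = rng.random((22, 5))
F[:, 3] = 0.95 * F[:, 2] + 0.05 * rng.random(22)
print(reduce_objectives(F, METRICS))  # the near-duplicate objective is dropped
```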

    Intelligent Web Services Architecture Evolution Via An Automated Learning-Based Refactoring Framework

    Full text link
    Architecture degradation can have a fundamental impact on software quality and productivity, resulting in an inability to support new features, increasing technical debt, and leading to significant losses. While code-level refactoring is widely studied and well supported by tools, architecture-level refactorings, such as repackaging to group related features into one component, or retrofitting files into patterns, remain expensive and risky. Several domains, such as Web services, heavily depend on complex architectures to design and implement interface-level operations, provided by companies such as FedEx, eBay, Google, Yahoo, and PayPal, to end-users. The objectives of this work are: (1) to advance our ability to support complex architecture refactoring by explicitly defining Web service anti-patterns at various levels of abstraction, (2) to enable complex refactorings by learning from user feedback and creating reusable/personalized refactoring strategies that augment designers' interaction, guiding low-level refactoring automation with high-level abstractions, and (3) to enable intelligent architecture evolution by detecting, quantifying, prioritizing, fixing, and predicting design technical debt. We proposed various approaches and tools based on intelligent computational search techniques for (a) predicting and detecting multi-level Web service antipatterns, (b) creating an interactive refactoring framework that integrates refactoring path recommendation, design-level human abstraction, and code-level refactoring automation with user feedback using interactive multi-objective search, and (c) automatically learning reusable and personalized refactoring strategies for Web services by abstracting recurring refactoring patterns from Web service releases. Based on empirical validations performed on both large open-source and industrial services from multiple providers (eBay, Amazon, FedEx, and Yahoo), we found that the proposed approaches advance our understanding of the correlation and mutual impact between service antipatterns at different levels, revealing when, where, and how architecture-level anti-patterns affect the quality of services. The interactive refactoring framework enables, based on several controlled experiments, human-based, domain-specific abstraction and high-level design to guide automated code-level atomic refactoring steps for service decompositions. The reusable refactoring strategy packages recurring refactoring activities into automatable units, improving refactoring path recommendation and further reducing time-consuming and error-prone human intervention.
    Ph.D., College of Engineering & Computer Science, University of Michigan-Dearborn
    https://deepblue.lib.umich.edu/bitstream/2027.42/142810/1/Wang Final Dissertation.pdf
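    The interactive part of the framework can be pictured as a loop in which designer feedback re-weights the search objectives. The following sketch is a deliberately simplified stand-in for the dissertation's interactive multi-objective search; the weighting rule, objective names, and toy data are all assumptions.

```python
import numpy as np

def rank(solutions, weights):
    # Lower weighted sum is better (all objectives are minimized).
    return np.argsort(solutions @ weights)

def update_weights(weights, solution, accepted, lr=0.1):
    # Accepted recommendations pull weight toward the objectives this
    # solution does well on (low values); rejections push it away.
    direction = -1.0 if accepted else 1.0
    weights = weights * np.exp(direction * lr * (solution - solution.mean()))
    return weights / weights.sum()

# Toy run: 6 candidate refactorings scored on 3 objectives
# (coupling, cohesion loss, number of antipatterns), all minimized.
rng = np.random.default_rng(1)
candidates = rng.random((6, 3))
weights = np.full(3, 1 / 3)
for step in range(3):
    best = rank(candidates, weights)[0]
    accepted = bool(rng.integers(2))      # placeholder for real designer input
    weights = update_weights(weights, candidates[best], accepted)
    print(step, best, np.round(weights, 3))
```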

    Mining, Understanding and Integrating User Preferences in Software Refactoring Using Computational Search, Machine Learning, and Dimensionality Reduction

    Full text link
    Search-Based Software Engineering (SBSE) is a software development practice that focuses on couching software engineering problems as optimization problems, using metaheuristic techniques to automate the search for near-optimal solutions. While SBSE has been successfully applied to a wide variety of software engineering problems, our understanding of the extent and nature of how software engineering problems can be formulated as automated or semi-automated search is still lacking. The majority of software engineering solutions are very subjective, which makes it difficult to formally define fitness functions to evaluate them. Current studies focus on guiding the search for optimal solutions rather than performing it. The degree of interaction required with software engineers during the optimization process, and how to reduce it, is still unclear. In this work, we focus on search-based software maintenance and evolution problems, including software refactoring and software remodularization, to improve the quality of systems. We propose to address the following challenges:
    • A major challenge in adapting a search-based technique to a software engineering problem is the definition of the fitness function. In most cases, fitness functions are ill-defined or subjective.
    • Most existing refactoring studies do not include the developer in the loop to analyze suggested refactoring solutions and give feedback during the optimization process. In addition, some quality metrics are expensive to compute, leading to expensive fitness functions. Moreover, while quality metrics evaluate the structural improvements of the refactored system, it is impossible to evaluate the semantic coherence of the design without user interaction.
    • Finally, several metrics can be dependent and correlated, so it may be possible to reduce the number of objectives/dimensions when addressing refactoring problems.
    To address these challenges, this work provides new techniques and tools that formulate software refactoring as a scalable, learning-based search problem. We proposed novel interactive, learning-based techniques that use machine learning to incorporate developers' knowledge and preferences in the search, resulting in more efficient and cost-effective search-based refactoring recommendation systems. We designed and implemented novel objective-reduction SBSE methodologies to support a scalable number of objectives. The proposed solutions were empirically evaluated in academic (open-source systems) and industrial settings.
    Ph.D., College of Engineering and Computer Science, University of Michigan-Dearborn
    https://deepblue.lib.umich.edu/bitstream/2027.42/138970/1/Dea Final Dissertation.pdf
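    One way to read the "learning-based" idea is that past developer ratings of refactoring solutions can train a surrogate preference model, which then serves as an otherwise ill-defined fitness function during the search. The sketch below shows that pattern with a random-forest regressor; the metrics, ratings, and model choice are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
# 40 previously reviewed solutions, 4 metrics each
# (coupling, cohesion, complexity, size), plus the rating each received.
X = rng.random((40, 4))
ratings = 5 * (1 - X[:, 0]) + 3 * X[:, 1] + rng.normal(0, 0.2, 40)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, ratings)

def learned_fitness(candidate_metrics):
    """Predicted developer preference, usable as one search objective."""
    return model.predict(np.asarray(candidate_metrics).reshape(1, -1))[0]

# A low-coupling, high-cohesion candidate should score well.
print(round(learned_fitness([0.2, 0.9, 0.4, 0.5]), 2))
```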

    Towards using intelligent techniques to assist software specialists in their tasks

    Full text link
    Automation and intelligence are major preoccupations in the field of software engineering. With the rapid evolution of Artificial Intelligence, researchers and industry have turned to machine learning and deep learning models to optimize tasks, automate pipelines, and build intelligent systems. The capabilities of Artificial Intelligence make it possible to imitate, and in some cases even outperform, human intelligence, as well as to automate manual tasks while raising accuracy, quality, and efficiency. Accomplishing software-related tasks requires specific knowledge and skills; thanks to the powerful capabilities of Artificial Intelligence, we can infer that expertise from historical experience using machine learning techniques. This would alleviate the burden on software specialists and allow them to focus on valuable tasks. In particular, Model-Driven Engineering is an evolving field that aims to raise the abstraction level of languages and to focus more on domain specificities. This allows shifting the effort put into implementation and low-level programming to a higher point of view focused on design, architecture, and decision making, thereby increasing the efficiency and productivity of creating applications. The design of metamodels is a substantial task in Model-Driven Engineering. Accordingly, it is important to maintain high-quality metamodels because they constitute a primary and fundamental artifact. However, bad design choices, as well as repetitive design modifications due to the evolution of requirements, can deteriorate the quality of a metamodel, and the accumulation of bad design choices and quality degradation can imply negative outcomes in the long term. Thus, refactoring metamodels is a very important task: it aims to improve and maintain good quality characteristics of metamodels such as maintainability, reusability, and extensibility. Moreover, metamodel refactoring is complex, especially when dealing with large designs, so automating this task and assisting architects with it is advantageous, since they can then focus on more valuable tasks that require human intuition. In this thesis, we propose a cartography of the tasks that could be automated or improved using Artificial Intelligence techniques. We then select the metamodeling task and tackle the problem of metamodel refactoring. We suggest two different approaches: a first approach that uses a genetic algorithm to optimize a set of quality attributes and recommend candidate metamodel refactoring solutions, and a second approach, based on mathematical logic, that defines the specification of an input metamodel, encodes the quality attributes and the absence of design smells as a set of constraints, and satisfies these constraints using Alloy.
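    To make the second, constraint-based approach concrete, the sketch below encodes a couple of toy quality rules and one smell-freedom constraint, then asks a solver for a satisfying configuration. It uses the Z3 solver (pip install z3-solver) as a stand-in for Alloy, and the metaclass names and rules are invented for illustration.

```python
from z3 import Bool, Int, Solver, Implies, Not, sat

s = Solver()
# Toy metamodel question: may class "Order" keep attributes duplicated
# from "Invoice", and how many attributes may one metaclass hold?
duplicated_attrs = Bool("duplicated_attrs")   # smell: duplicated features
order_attr_count = Int("order_attr_count")

s.add(Not(duplicated_attrs))       # quality rule: absence of the smell
s.add(order_attr_count <= 7)       # quality rule: keep metaclasses small
s.add(order_attr_count >= 3)       # domain rule: Order needs core fields
# If attributes are deduplicated, Order loses two inherited copies.
s.add(Implies(Not(duplicated_attrs), order_attr_count <= 5))

if s.check() == sat:
    print(s.model())               # one smell-free configuration
```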

    Improving Web Services Design Quality Via Dimensionality Reduction

    Full text link
    https://deepblue.lib.umich.edu/bitstream/2027.42/153329/1/icsoc2017fshortpaper.pd

    Study of variational autoencoders in machine learning

    Get PDF
    Autoencoders are essential in the field of machine learning because of their wide range of applications and distinctive capabilities. The ability of autoencoders to learn condensed and effective representations of complicated input data is one of the main factors in their significance. Autoencoders offer effective data compression by encoding the input data into a lower-dimensional latent space, which is useful in situations with constrained storage or bandwidth. Autoencoders are also frequently employed for unsupervised learning tasks such as data generation, dimensionality reduction, and anomaly detection. Without relying on explicit labels or supervision, they enable us to find underlying patterns and structures in the data. Overall, the versatility and utility of autoencoders make them a fundamental tool in the machine learning toolbox, empowering researchers and practitioners to tackle a wide range of problems across diverse domains. Generative models, such as autoencoders, play a fundamental role in machine learning by enabling the creation of new, synthetic data that closely resembles the original input distribution. These models have revolutionised various domains, including image generation, text synthesis, and music composition, among many others. By capturing the underlying patterns and structures of the training data, generative models provide a powerful framework for creative applications, data augmentation, and simulation studies. This project aimed to explore the capabilities and applications of autoencoders, a type of neural network architecture, in the field of machine learning. The main focus of the project was to refactor legacy code used for image identification and transform it into an autoencoder capable of generating MNIST images. Through extensive experimentation and analysis, the project demonstrated the effectiveness of autoencoders in learning representations of input data and generating high-quality synthetic images. The findings of this study contribute to our understanding of autoencoders and their potential for various tasks, including image generation. The project also highlighted the importance of clean code practices, code refactoring, and neural network architectural design principles in adapting existing models for new purposes.
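    A minimal version of the project's central artifact might look like the following: a fully connected autoencoder trained to reconstruct MNIST digits, whose decoder can then be sampled to generate new images. Layer sizes, hyperparameters, and the plain (non-variational) architecture are assumptions for illustration; a variational autoencoder would additionally regularize the latent space toward a known prior, which makes such sampling better behaved.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class AutoEncoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # Encoder compresses 28x28 images into a small latent vector.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))
        # Decoder reconstructs pixel intensities in [0, 1].
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 28, 28)

data = datasets.MNIST("data", train=True, download=True,
                      transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=128, shuffle=True)
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(3):
    for x, _ in loader:              # labels are ignored: unsupervised
        opt.zero_grad()
        loss = loss_fn(model(x), x)  # reconstruction error
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# Decoding random latent vectors yields new, MNIST-like images.
with torch.no_grad():
    samples = model.decoder(torch.randn(16, 32)).view(-1, 1, 28, 28)
```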

    Applications of Multi-view Learning Approaches for Software Comprehension

    Full text link
    Program comprehension concerns the ability of an individual to develop an understanding of an existing software system in order to extend or transform it. Software systems comprise data that are noisy and incomplete, which makes program understanding even more difficult. A software system consists of various views, including the module dependency graph, execution logs, evolutionary information, and the vocabulary used in the source code, which collectively define the system. Each of these views contains unique and complementary information; together, they describe the data more accurately. In this paper, we investigate various techniques for combining different sources of information to improve the performance of a program comprehension task. We employ state-of-the-art techniques from machine learning to 1) find a suitable similarity function for each view, and 2) compare different multi-view learning techniques to decompose a software system into high-level units and give component-level recommendations for refactoring the system, as well as cross-view source code search. The experiments conducted on 10 relatively large Java software systems show that by fusing knowledge from different views, we can guarantee a lower bound on the quality of the modularization and even improve upon it. We proceed by integrating different sources of information to give a set of high-level recommendations as to how to refactor the software system. Furthermore, we demonstrate how learning a joint subspace allows for performing cross-modal retrieval across views, yielding results that are more aligned with what the user intends by the query. The multi-view approaches outlined in this paper can be employed to address problems in software engineering that can be encoded as a learning problem, such as software bug prediction and feature location.
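    The fusion idea can be sketched in a few lines: build one similarity matrix per view, combine them, and cluster on the fused affinity to obtain a modularization. The toy corpus, dependency counts, and simple average fusion below are assumptions; the paper's actual similarity learning and fusion techniques are more sophisticated.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Lexical view: identifiers/comments per module (toy corpus).
docs = ["order invoice payment", "payment gateway charge",
        "user login session", "session token auth"]
lexical = cosine_similarity(TfidfVectorizer().fit_transform(docs))

# Structural view: symmetric dependency counts between the 4 modules.
deps = np.array([[0, 3, 0, 0],
                 [3, 0, 0, 1],
                 [0, 0, 0, 4],
                 [0, 1, 4, 0]], dtype=float)
structural = deps / deps.max()

fused = 0.5 * lexical + 0.5 * structural        # simple average fusion
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(fused)
print(labels)  # modules grouped into two high-level components
```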

    Nautilus: An Interactive Plug-and-Play Search-Based Software Engineering (SBSE) Framework

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/163514/1/IEEE_Software_2020____Nautilus_FV.pdf