
    Dynamic decision networks for decision-making in self-adaptive systems: a case study

    Bayesian decision theory is increasingly applied to support decision-making processes under environmental variability and uncertainty. Researchers from application areas such as psychology and biomedicine have applied these techniques successfully. However, in software engineering, and specifically in the area of self-adaptive systems (SASs), little progress has been made in applying Bayesian decision theory. We believe that techniques based on Bayesian Networks (BNs) are useful for systems that dynamically adapt themselves at runtime to a changing, and usually uncertain, environment. In this paper, we discuss the case for the use of BNs, specifically Dynamic Decision Networks (DDNs), to support the decision-making of self-adaptive systems. We present how such a probabilistic model can support decision-making in SASs and justify its applicability. We have applied our DDN-based approach to the case of an adaptive remote data mirroring system. We discuss results, implications, and potential benefits of DDNs for enhancing the development and operation of self-adaptive systems, by providing mechanisms to cope with uncertainty and automatically make the best decision.
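    The decision mechanism the abstract describes reduces, at each adaptation step, to choosing the action with maximum expected utility under a belief over environment states. A minimal sketch in Python; the states, probabilities, and utilities below are purely illustrative assumptions, not values from the paper:

```python
# Illustrative sketch (all values hypothetical): maximum-expected-utility
# action selection, the core computation a dynamic decision network
# performs at each adaptation step.

def expected_utility(belief, utility):
    """Expected utility of one action given a belief over environment states."""
    return sum(belief[s] * utility[s] for s in belief)

def best_action(belief, utilities):
    """Choose the action whose expected utility is highest."""
    return max(utilities, key=lambda a: expected_utility(belief, utilities[a]))

# Assumed belief over link conditions for a remote data-mirroring system.
belief = {"link_up": 0.8, "link_degraded": 0.2}

# Assumed utility of each mirroring strategy under each condition.
utilities = {
    "synchronous":  {"link_up": 10.0, "link_degraded": -5.0},
    "asynchronous": {"link_up": 6.0,  "link_degraded": 4.0},
}

choice = best_action(belief, utilities)  # -> "synchronous" (EU 7.0 vs 5.6)
```

    In a full DDN the belief itself would be updated from sensor evidence at each time slice; here it is fixed for brevity.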

    Performance evaluation metrics for multi-objective evolutionary algorithms in search-based software engineering: Systematic literature review

    Many recent studies have shown that multi-objective evolutionary algorithms are widely applied in the field of search-based software engineering (SBSE) to find optimal solutions. Most of them either focus on solving newly re-formulated problems or on proposing new approaches, while a number of studies perform reviews and comparative studies of the performance of the proposed algorithms. To evaluate such performance, it is necessary to consider a number of performance metrics that play important roles in the evaluation and comparison of the investigated algorithms based on their best simulated results. While there are hundreds of performance metrics in the literature that can quantify performance in such tasks, there is a lack of systematic reviews providing evidence on the use of these metrics, particularly in the software engineering problem domain. In this paper, we aim to review and quantify the types of performance metrics, the number of objectives, and the applied areas in software engineering reported in primary studies, with the aim of inspiring the SBSE community to explore such approaches in more depth. To perform this task, a formal systematic review protocol was applied for planning, searching, and extracting the desired elements from the studies. After applying all the relevant inclusion and exclusion criteria to the search process, 105 relevant articles were identified from the targeted online databases as scientific evidence to answer the eight research questions. The preliminary results show that a remarkable number of studies were reported without considering any performance metrics for algorithm evaluation.
Based on the 27 performance metrics that were identified, hypervolume, inverted generational distance, generational distance, and hypercube-based diversity metrics appear to be the most widely adopted across studies in software requirements engineering, software design, software project management, software testing, and software verification. Additionally, there is increasing interest in the community in re-formulating many-objective problems with more than three objectives, yet current work remains dominated by formulations with two to three objectives.
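One of the metrics named above, generational distance (GD), can be sketched compactly. Under one common formulation (assumed here; variants exist), GD averages the Euclidean distance from each point of an approximation front to its nearest point on a reference front. The fronts below are illustrative only:

```python
import math

def generational_distance(front, reference):
    """Mean Euclidean distance from each front point to its nearest reference point."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(p, r) for r in reference) for p in front) / len(front)

# Illustrative two-objective fronts (minimisation assumed).
front = [(0.1, 0.9), (0.5, 0.5)]
reference = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
gd = generational_distance(front, reference)  # ~0.0707: one point is 0.1414 away, one coincides
```

A lower GD indicates a front closer to the reference; note GD measures convergence only, which is why diversity metrics such as hypervolume are typically reported alongside it.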

    A systematic literature review on source code similarity measurement and clone detection: techniques, applications, and challenges

    Measuring and evaluating source code similarity is a fundamental software engineering activity that embraces a broad range of applications, including but not limited to code recommendation, duplicate-code detection, plagiarism detection, malware detection, and smell detection. This paper presents a systematic literature review and meta-analysis of code similarity measurement and evaluation techniques to shed light on the existing approaches and their characteristics in different applications. We initially found over 10,000 articles by querying four digital libraries and ended up with 136 primary studies in the field. The studies were classified according to their methodology, programming languages, datasets, tools, and applications. A deep investigation reveals 80 software tools, working with eight different techniques on five application domains. Nearly 49% of the tools work on Java programs and 37% support C and C++, while many programming languages have no support at all. A noteworthy point was the existence of 12 datasets related to source code similarity measurement and duplicate code, of which only eight were publicly accessible. The lack of reliable datasets, empirical evaluations, and hybrid methods, and the limited focus on multi-paradigm languages, are the main challenges in the field. Emerging applications of code similarity measurement concentrate on the development phase in addition to maintenance.
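    As a toy illustration of one family of techniques covered by such reviews, token-based similarity compares the token sets of two fragments, for example with the Jaccard index. The naive lexer below is an assumption for illustration, not a tool from the reviewed studies:

```python
import re

def tokens(code):
    """Very naive lexer (illustrative): identifiers, numbers, punctuation."""
    return set(re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code))

def jaccard(a, b):
    """Jaccard index over the two fragments' token sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

# Renaming one variable barely lowers the score (a Type-2-style clone).
sim = jaccard("int sum = a + b;", "int total = a + b;")  # -> 0.75
```

    Real clone detectors normalise identifiers, compare token sequences or ASTs, and scale to whole repositories; this sketch only conveys the basic idea.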

    Extended Skin: Designing Interactive Content for Ubiquitous Computing Materials

    Current research is inspired by the impact of digital media on disciplinary division. Simultaneously, it recognizes the difficulty engineering (applied science) has in considering the humanities as fundamental contributors to the process of making. Stemming from a design perspective, at the intersection between art (design) and science, it questions whether these relations can open perspectives on designing within a ubiquitous computing (U.C.) context and, fundamentally, introduces the question of how this can be done. Furthermore, the motivation for this research arises from the observation that technological innovation happens in the fields typically identified as engineering. Despite this, the incorporation of these inventions into life, with some discussed exceptions, has not typically been present in the concerns, actions, and methods of design. Therefore, the challenge of this research is to contribute to the realm of ubiquitous computing, guided by design. A deeper analysis of U.C. reveals a minority presence of the humanities in its discussion (Dourish and Bell, 2011). Technological disruption offers continuous inspiration for design innovation within U.C. Furthermore, the line of inquiry labelled the "material turn" contextualizes a dialogue between nanotechnology and traditional materials. Nanotechnology is applied to project development while considering a human-centred design approach, a focus present throughout this dissertation. The research proposal describes SuberSkin, a responsive surface that works as a screen. The exploration of aesthetic effects focuses on visual properties, using high contrast between natural cork colours, dark and light brown. The proposal is highly experimental and ultimately aims to explore potential routes for cork research linked to U.C.
Thus, it recreates and transforms this material into an intelligent surface. In sum, this thesis discusses the displacement of disciplines as having a positive impact on interdisciplinary thought and on future design. Therefore, a methodology, "research through techne", is presented that illustrates this intention.

    Synergies between machine learning and reasoning - An introduction by the Kay R. Amel group

    This paper proposes a tentative and original survey of meeting points between Knowledge Representation and Reasoning (KRR) and Machine Learning (ML), two areas which have developed quite separately over the last four decades. First, some common concerns are identified and discussed, such as the types of representation used, the roles of knowledge and data, the lack or excess of information, and the need for explanations and causal understanding. Then, the survey is organised into seven sections covering most of the territory where KRR and ML meet. We start with a section dealing with prototypical approaches from the literature on learning and reasoning: Inductive Logic Programming, Statistical Relational Learning, and Neurosymbolic AI, where ideas from rule-based reasoning are combined with ML. We then focus on the use of various forms of background knowledge in learning, ranging from additional regularisation terms in loss functions, to the problem of aligning symbolic and vector-space representations, to the use of knowledge graphs for learning. The next section describes how KRR notions may benefit learning tasks. For instance, constraints can be used, as in declarative data mining, to influence the learned patterns; semantic features can be exploited in low-shot learning to compensate for the lack of data; and analogies can be taken advantage of for learning purposes. Conversely, another section investigates how ML methods may serve KRR goals. For instance, one may learn special kinds of rules such as default rules, fuzzy rules, or threshold rules, or special types of information such as constraints or preferences. The section also covers formal concept analysis and rough-set-based methods. Yet another section reviews various interactions between Automated Reasoning and ML, such as the use of ML methods in SAT solving to make reasoning faster.
A further section deals with works related to model accountability, including explainability and interpretability, fairness, and robustness. Finally, a section covers works on handling imperfect or incomplete data, including the problem of learning from uncertain or coarse data, the use of belief functions for regression, a revision-based view of the EM algorithm, the use of possibility theory in statistics, and the learning of imprecise models. This paper thus aims at a better mutual understanding of research in KRR and ML, and of how they can cooperate. The paper is completed by an abundant bibliography.
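One of the meeting points listed above, background knowledge injected as an extra regularisation term in a loss function, can be sketched in a few lines. The model, data, and monotonicity constraint below are all illustrative assumptions, not examples from the survey:

```python
def loss(w, data, penalty_weight=1.0):
    """Squared error of a linear model y = w * x, plus a knowledge penalty."""
    mse = sum((w * x - y) ** 2 for x, y in data) / len(data)
    # Background knowledge: y is known to grow with x, so a negative slope
    # is penalised via an additional regularisation term.
    knowledge_penalty = max(0.0, -w) ** 2
    return mse + penalty_weight * knowledge_penalty

# Illustrative data consistent with the monotonicity knowledge.
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2)]
good, bad = loss(1.0, data), loss(-1.0, data)  # the knowledge term penalises w < 0
```

Any optimiser minimising this loss is steered toward slopes that respect the stated knowledge, which is the general pattern behind knowledge-regularised learning.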

    Big Data and Analytics as a new frontier of enterprise data management

    Big Data and Analytics (BDA) promises significant value generation opportunities across industries. Even though companies increase their investments, their BDA initiatives fall short of expectations and they struggle to guarantee a return on investment. In order to create business value from BDA, companies must build and extend their data-related capabilities. While the BDA literature has emphasized the capabilities needed to analyze the increasing volumes of data from heterogeneous sources, researchers in enterprise data management (EDM) have suggested organizational capabilities to improve data quality. However, to date, little is known about how companies actually orchestrate the allocated resources, especially regarding the quality and use of data, to create value from BDA. Considering these gaps, this thesis – through five interrelated essays – investigates how companies adapt their EDM capabilities to create additional business value from BDA. The first essay lays the foundation of the thesis by investigating how companies extend their Business Intelligence and Analytics (BI&A) capabilities to build more comprehensive enterprise analytics platforms. The second and third essays contribute to fundamental reflections on how organizations are changing and designing data governance in the context of BDA. The fourth and fifth essays look at how companies provide high-quality data to an increasing number of users with innovative EDM tools, namely machine learning (ML) and enterprise data catalogs (EDCs). The thesis outcomes show that BDA has profound implications for EDM practices. In the past, operational data processing and analytical data processing were two "worlds" that were managed separately from each other. With BDA, these "worlds" are becoming increasingly interdependent and organizations must manage the lifecycles of data and analytics products in close coordination. Also, with BDA, data have become the long-expected, strategically relevant resource.
As such, data must now be viewed as a distinct value driver, separate from IT, as it requires specific mechanisms to foster value creation from BDA. BDA thus extends data governance goals: in addition to data quality and regulatory compliance, governance should facilitate data use by broadening data availability and enabling data monetization. Accordingly, companies establish comprehensive data governance designs, including structural, procedural, and relational mechanisms, to enable a broad network of employees to work with data. Existing EDM practices therefore need to be rethought to meet the emerging BDA requirements. While ML is a promising solution to improve data quality in a scalable and adaptable way, EDCs help companies democratize data to a broader range of employees.

    Digitalization and Development

    This book examines the diffusion of digitalization and Industry 4.0 technologies in Malaysia by focusing on the ecosystem critical for their expansion. The chapters examine digital proliferation in the major sectors of agriculture, manufacturing, e-commerce, and services, as well as the intermediary organizations essential for the orderly performance of socioeconomic agents. The book incisively reviews the policy instruments critical for the effective and orderly development of the embedding organizations, and the regulatory framework needed to quicken the appropriation of socioeconomic synergies from digitalization and Industry 4.0 technologies. It highlights the importance of collaboration between government, academic, and industry partners, and makes key recommendations on how to encourage the adoption of IR4.0 technologies in the short and long term. This book bridges the concepts and applications of digitalization and Industry 4.0 and will be a must-read for policy makers seeking to quicken the adoption of these technologies.

    An online corpus of UML Design Models: construction and empirical studies

    We address two problems in software engineering. The first is how to assess the severity of software defects; the second is how to study software designs. Automated support for assessing the severity of software defects helps human developers perform this task more efficiently and more accurately. We present an approach, MAPDESO, for assessing the severity of software defects based on the IEEE Standard Classification for Software Anomalies. The novelty of the approach lies in its use of ontologies and ontology-based reasoning, which links defects to system-level quality properties. One of the main reasons that makes studying software designs challenging is their lack of availability. We decided to collect software designs represented by UML models stored in image formats and use image processing techniques to convert them to models. We present the 'UML Repository', which contains UML diagrams (in image and XMI format) and design metrics. We conducted a series of empirical studies using the UML Repository. These empirical studies are a drop in the ocean of the studies that could be conducted using the repository, yet they show the versatility of useful studies that can be based on this novel repository of UML designs.