
    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). So far, however, the research community has treated software engineering, performance engineering, and cloud computing mostly as separate research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Others instead heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather the large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media, and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of reusing Web Data Extraction techniques originally designed for one domain in other domains. Comment: Knowledge-Based Systems

    Multi-agent opportunism

    The real world is a complex place, rife with uncertainty and prone to rapid change. Agents operating in a real-world domain need to be capable of dealing with the unexpected events that will occur as they carry out their tasks. While unexpected events are often related to failures in an agent's plan, or inaccurate knowledge in an agent's memory, they can also be opportunities for the agent. For example, an unexpected event may present the opportunity to achieve a goal that was previously unattainable. Similarly, real-world multi-agent systems (MASs) can benefit from the ability to exploit opportunities. These benefits include the ability of the MAS itself to better adapt to its changing environment, the ability to ensure agents obtain critical information in a timely fashion, and improvements in the overall performance of the system. In this dissertation we present a framework for multi-agent opportunism that is applicable to open systems of heterogeneous planning agents. The contributions of our research are both theoretical and practical. On the theoretical side, we provide an analysis of the critical issues that must be addressed in order to successfully exploit opportunities in a multi-agent system. This analysis can give MAS designers and developers important guidance for incorporating multi-agent opportunism into their own systems. It also provides the fundamental underpinnings of our own specific approach to multi-agent opportunism. On the practical side, we have developed, implemented, and evaluated a specific approach to multi-agent opportunism for a particular class of multi-agent system. Our evaluation demonstrates that multi-agent opportunism can indeed be effective in systems of heterogeneous agents even when the amount of knowledge the agents share is severely limited.
    Our evaluation also demonstrates that agents capable of exploiting opportunities for their own goals are also able, using the same mechanisms, to recognize and respond to potential opportunities for the goals of other agents. Further, and perhaps more interestingly, we show that under some circumstances multi-agent opportunism can be effective even when the agents are not themselves capable of single-agent opportunism.

    Lost in translation: Exposing hidden compiler optimization opportunities

    Existing iterative compilation and machine-learning-based optimization techniques have proven very successful at achieving better optimizations than the standard optimization levels of a compiler. However, they were not engineered to support the tuning of a compiler's optimizer as part of the compiler's daily development cycle. In this paper, we first establish the properties a technique must exhibit to enable such tuning. We then introduce an enhancement to the classic nightly routine testing of compilers which exhibits all the required properties and is thus capable of driving the improvement and tuning of the compiler's common optimizer. This is achieved by leveraging resource usage and compilation information collected while systematically exploring prefixes of the transformations applied at standard optimization levels. Experimental evaluation using the LLVM v6.0.1 compiler demonstrated that the new approach was able to reveal hidden cross-architecture and architecture-dependent potential optimizations on two popular processors: the Intel i5-6300U and the Arm Cortex-A53-based Broadcom BCM2837 used in the Raspberry Pi 3B+. As a case study, we demonstrate how the insights from our approach enabled us to identify and remove a significant shortcoming of the CFG simplification pass of the LLVM v6.0.1 compiler. Comment: 31 pages, 7 figures, 2 tables. arXiv admin note: text overlap with arXiv:1802.0984
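    The core idea in this abstract, compiling a program once per prefix of the standard optimization pipeline and measuring each result, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the pass names are illustrative LLVM pass names, not the exact -O2 pipeline.

    ```python
    def prefix_pipelines(passes):
        """Return all non-empty prefixes of an ordered pass list.

        Compiling the same program once per prefix and comparing the
        resulting binaries' size or runtime shows what the final pass of
        each prefix contributed on a given target: an improvement, a
        regression, or nothing at all."""
        return [passes[: i + 1] for i in range(len(passes))]

    if __name__ == "__main__":
        # Illustrative stand-in for the pass sequence of a standard level.
        o2_like = ["simplifycfg", "sroa", "instcombine", "licm", "gvn"]
        for prefix in prefix_pipelines(o2_like):
            # Each line could be fed to a pipeline-accepting driver,
            # e.g. LLVM's opt, and the result compiled and measured.
            print(",".join(prefix))
    ```

    A nightly-testing harness in this style would record compile time, code size, and runtime per prefix, flagging passes whose removal improves a metric as candidate optimizer shortcomings.
    
    
    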

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and presented along with introductory material. Comment: 72 pages

    Carving out new business models in a small company through contextual ambidexterity: the case of a sustainable company

    Business model innovation (BMI) and organizational ambidexterity have been identified as mechanisms by which companies can achieve sustainability. However, especially for small and medium enterprises (SMEs), there is a lack of studies demonstrating how to combine these mechanisms. Tackling this gap, this study seeks to understand how SMEs can ambidextrously manage BMI. Our aim is to provide a practical artifact, accessible to SMEs, to operationalize BMI through organizational ambidexterity. To this end, we conducted our study under the design science research paradigm to first build an artifact for operationalizing contextual ambidexterity for business model innovation. We then used an in-depth case study of a small vegan fashion e-commerce company to evaluate the practical outcomes of the artifact. Our findings show that the company improves its business model while, at the same time, designing a new business model and monetizing it. Thus, our approach takes the first steps towards operationalizing contextual ambidexterity for business model innovation in small and medium enterprises, democratizing the concept. We contribute to theory by connecting different literature strands and to practice by creating an artifact to assist management.