
    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs or junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    An agile information flow consolidator for delivery of quality software projects: technological perspective from a South African start-up

    In today’s knowledge-based economy, modern organisations understand the importance of technology in their quest to be considered global leaders. South African markets, like others worldwide, are regularly flooded with the latest technology trends, which can complicate the acquisition, use, management and maintenance of software. To achieve a competitive edge, companies tend to leverage agile methods with the best possible combination of innovative supporting tools as a key differentiator. Software technology firms are in this light faced with determining how to leverage technology and efficient development processes in order to consistently deliver quality software projects and solutions to their customer base. Previous studies have discussed the importance of software development processes from a project management perspective. African academia has contributed immensely to software development and project management research, with a focus on modern frameworks, methodologies and project management techniques. While the current research continues this tradition by presenting the pertinence of modern agile methodologies, it further describes modern agile development processes tailored to a sub-Saharan context. The study also aims at novelty by showing how innovative, sometimes disruptive, technology tools can contribute to producing African software solutions to African problems. To this end, the thesis contains an experimental case study in which a web portal is prototyped to assist firms with the management of agile project management and engineering-related activities. Literature review, semi-structured interviews and direct observations from the industry use case are used as data sources. Underpinned by an Activity Theory analytical framework, the qualitative data is analysed by leveraging content- and thematic-oriented techniques. This study aims to contribute to software engineering as well as the information systems body of knowledge in general. The research hence aims to propose a practical framework to promote the delivery of quality software projects and products. For this thesis, such a framework was designed around an information system which helps organisations better manage agile project management and engineering-related activities.

    A pragmatic approach to semantic repositories benchmarking

    The aim of this paper is to benchmark various semantic repositories in order to evaluate their deployment in a commercial image retrieval and browsing application. We adopt a two-phase approach for evaluating the target semantic repositories: analytical parameters such as query language and reasoning support are used to select the pool of target repositories, and practical parameters such as load and query response times are used to select the best match to application requirements. In addition to utilising a widely accepted benchmark for OWL repositories (UOBM), we also use a real-life dataset from the target application, which provides us with the opportunity to consolidate our findings. A distinctive advantage of this benchmarking study is that the essential requirements for the target system, such as semantic expressivity and data scalability, are clearly defined, which allows us to claim a contribution to the benchmarking methodology for this class of applications.
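
    As a rough illustration of the second, practical phase, the sketch below times a query against a SPARQL endpoint over HTTP. The endpoint URL and query are hypothetical placeholders, not the repositories or UOBM queries used in the paper, and a real load-time measurement would additionally cover bulk data import.

```python
# Minimal sketch of measuring query response time against a SPARQL
# endpoint, using only Python's standard library. The endpoint and
# query below are hypothetical placeholders for illustration.
import time
import urllib.parse
import urllib.request

def mean_query_time(endpoint: str, sparql: str, runs: int = 5) -> float:
    """Return the mean response time in seconds over `runs` executions."""
    data = urllib.parse.urlencode({"query": sparql}).encode()
    headers = {"Accept": "application/sparql-results+json"}
    timings = []
    for _ in range(runs):
        request = urllib.request.Request(endpoint, data=data, headers=headers)
        start = time.perf_counter()
        with urllib.request.urlopen(request) as response:
            response.read()  # drain the body so transfer time is included
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    endpoint = "http://localhost:3030/dataset/sparql"  # hypothetical
    query = "SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }"
    print(f"mean response time: {mean_query_time(endpoint, query):.3f} s")
```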

    Modular Normalization with Types

    With the increasing use of software in today’s digital world, software is becoming more and more complex and the cost of developing and maintaining software has skyrocketed. It has become pressing to develop software using effective tools that reduce this cost. Programming language research aims to develop such tools using mathematically rigorous foundations. A recurring and central concept in programming language research is normalization: the process of transforming a complex expression in a language to a canonical form while preserving its meaning. Normalization has compelling benefits in theory and practice, but is extremely difficult to achieve. Several program transformations that are used to optimise programs, prove properties of languages and check program equivalence, for instance, are after all instances of normalization, but they are seldom viewed as such. Viewed through the lens of current methods, normalization lacks the ability to be broken into sub-problems and solved independently, i.e., it lacks modularity. To make matters worse, such methods rely excessively on the syntax of the language, making the resulting normalization algorithms brittle and sensitive to changes in the syntax. When the syntax of the language evolves due to modification or extension, as it almost always does in practice, the normalization algorithm may need to be revisited entirely. To circumvent these problems, normalization is currently either abandoned entirely or concrete instances of normalization are achieved using ad hoc means specific to a particular language. Continuing this trend in programming language research poses the risk of building on a weak foundation where languages either lack fundamental properties that follow from normalization or several concrete instances end up repeated in an ad hoc manner that lacks reusability. This thesis advocates for the use of type-directed Normalization by Evaluation (NbE) to develop normalization algorithms. NbE is a technique that provides an opportunity for a modular implementation of normalization algorithms by allowing us to disentangle the syntax of a language from its semantics. Types further this opportunity by allowing us to dissect a language into isolated fragments, such as functions and products, with an individual specification of syntax and semantics. To illustrate type-directed NbE in context, we develop NbE algorithms and show their applicability for typed programming language calculi in three different domains (modal types, static information-flow control and categorical combinators) and for a family of embedded domain-specific languages in Haskell.
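
    To make the eval/reify split concrete, here is a toy NbE sketch for the untyped lambda calculus with de Bruijn indices. It is our own minimal illustration of the general technique, not the type-directed algorithms or the calculi developed in the thesis: terms are evaluated into a semantic domain of closures and neutral values, then read back into normal-form syntax.

```python
# Toy Normalization by Evaluation (NbE) for the untyped lambda calculus.
# Syntax uses de Bruijn indices; the semantic domain uses Python closures
# for functions and "neutral" values for stuck variables. An illustrative
# sketch of the technique, not code from the thesis.
from dataclasses import dataclass

# --- Syntax ---
@dataclass
class Var: idx: int                 # de Bruijn index
@dataclass
class Lam: body: object
@dataclass
class App: fn: object; arg: object

# --- Semantic values ---
@dataclass
class VLam: fn: object              # a Python function on values
@dataclass
class VNe: head: int; args: list    # variable (as a level) applied to values

def evaluate(term, env):
    """Interpret syntax in the semantic domain, under environment `env`."""
    if isinstance(term, Var):
        return env[term.idx]
    if isinstance(term, Lam):
        return VLam(lambda v: evaluate(term.body, [v] + env))
    fn, arg = evaluate(term.fn, env), evaluate(term.arg, env)
    return fn.fn(arg) if isinstance(fn, VLam) else VNe(fn.head, fn.args + [arg])

def reify(value, depth):
    """Read a semantic value back into beta-normal syntax."""
    if isinstance(value, VLam):
        fresh = VNe(depth, [])            # apply the closure to a fresh variable
        return Lam(reify(value.fn(fresh), depth + 1))
    term = Var(depth - value.head - 1)    # convert level back to index
    for arg in value.args:
        term = App(term, reify(arg, depth))
    return term

def normalize(term):
    return reify(evaluate(term, []), 0)

# (\x. x) (\y. y) normalizes to \y. y
identity = Lam(Var(0))
print(normalize(App(identity, identity)))  # Lam(body=Var(idx=0))
```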

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential of cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.
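
    As a small, self-contained illustration of the basic idea, turning semi-structured HTML into structured records, the sketch below pulls (text, href) pairs out of anchor tags using only Python's standard library. The techniques covered by the survey (wrapper induction, tree matching, machine learning, visual cues, and so on) are of course far richer than this.

```python
# Minimal wrapper-style extraction: collect (text, href) records from
# anchor tags. A toy sketch of the general idea, not any specific
# system from the survey.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.records = []   # extracted (text, href) pairs
        self._href = None   # href of the anchor we are currently inside
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.records.append(("".join(self._text).strip(), self._href))
            self._href = None

html = '<ul><li><a href="/a">First item</a></li><li><a href="/b">Second</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.records)  # [('First item', '/a'), ('Second', '/b')]
```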

    E-services in e-business engineering
