    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Ensuring sample quality for biomarker discovery studies - Use of ICT tools to trace biosample life-cycle

    The growing demand for personalized medicine has marked the transition from an empirical medicine to a molecular one, aimed at predicting safer and more effective medical treatment for every patient while minimizing adverse effects. This transition has emphasized the importance of biomarker discovery studies and has led sample availability to assume a crucial role in biomedical research. Accordingly, a great interest in Biological Bank science has grown concomitantly. In biobanks, biological material and its accompanying data are collected, handled and stored in accordance with standard operating procedures (SOPs) and existing legislation. Sample quality is ensured by adherence to SOPs, and the sample's whole life-cycle can be recorded by innovative tracking systems employing information technology (IT) tools for monitoring storage conditions and characterizing vast amounts of data. All the above will ensure proper sample exchangeability among research facilities and will represent the starting point of all future personalized medicine-based clinical trials.

    Vertical Integration, Exclusivity and Game Sales Performance in the U.S. Video Game Industry

    This paper empirically investigates the relationship between vertical integration and video game performance in the U.S. video game industry. For this purpose, we use a widely used data set from NPD on monthly video game sales from October 2000 to October 2007. We complement these data with hand-collected information on the video game developers for all games in the sample and the timing of all mergers and acquisitions during that period. By doing this, we are able to separate vertically integrated games from those that are merely exclusive to a platform. First, we show that vertically integrated games produce higher revenues, sell more units and sell at higher prices than independent games. Second, we explore the causal effect of vertical integration and find that, for the average integrated game, most of the difference in performance comes from better release timing and marketing strategies that soften competition. In contrast, vertical integration does not seem to have an effect on the quality of video game production. We also find that exclusivity is associated with lower demand.

    JISC Programme Synthesis Study: Supporting Digital Preservation & Asset Management in Institutions

    In mid-2006, JISC requested that the Digital Curation Centre (DCC), in its capacity as a centre of excellence on digital preservation and digital curation, undertake a small-scale study to synthesise and help disseminate the results of projects funded under the Supporting Digital Preservation and Asset Management in Institutions (DPAM) programme. This report is the final outcome of that exercise.

    A Geographic Approach to Racial Profiling: The Microanalysis and Macroanalysis of Racial Disparity in Traffic Stops

    Despite numerous studies explaining racial disparity in traffic stops, the effects of spatial characteristics in patrolling areas have not been widely examined. In this article, the authors analyzed traffic stop data at both the micro- and macro-levels. The micro-level analysis of individual stops confirmed racial disparity in the frequency of traffic stops as well as in subsequent police treatment. Blacks were overrepresented and other racial/ethnic groups were underrepresented in traffic stops, with a greater disparity in investigatory stops. The macro-level analysis found that the likelihood of being stopped and being subjected to unfavorable police treatment (e.g., arrest, search, and felony charge) was greater in beats where more blacks or Hispanics resided and/or more police force was deployed, consistent with the “racial threat” or “minority threat” hypothesis. These findings imply that racial disparity at the level of individual stops may be substantially explained by differential policing strategies adopted for different areas based on who resides in those areas.

    On Improving (Non)Functional Testing

    Software testing is commonly classified into two categories: nonfunctional testing and functional testing. The goal of nonfunctional testing is to test nonfunctional requirements, such as performance and reliability. Performance testing is one of the most important types of nonfunctional testing, one goal of which is to detect cases in which an Application Under Test (AUT) exhibits unexpectedly poor performance (e.g., lower throughput) for some input data. During performance testing, a critical challenge is to understand the AUT’s behavior under large numbers of combinations of input data and to find the particular subset of inputs leading to performance bottlenecks. However, enumerating those particular inputs and identifying those bottlenecks are laborious and intellectually intensive tasks. In addition, for an evolving software system, some code changes may accidentally degrade performance between two software versions, and it is even more challenging to find the problematic changes (out of a large number of committed changes) that may lead to performance regressions under certain test inputs. This dissertation presents a set of approaches to automatically find specific combinations of input data that expose performance bottlenecks and to further analyze execution traces to identify those bottlenecks. In addition, this dissertation provides an approach that automatically estimates the impact of code changes on performance degradation between two released software versions, to identify the problematic ones likely to lead to performance regressions. Functional testing is used to test the functional correctness of AUTs. Developers commonly write test suites for AUTs to test different functionalities and locate functional faults. During functional testing, developers rely on strategies to order test cases to achieve certain objectives, such as exposing faults faster, a practice known as Test Case Prioritization (TCP). TCP techniques are commonly classified into two categories, dynamic and static techniques. A set of empirical studies has been conducted to examine and understand different TCP techniques, but there is a clear gap in existing studies: no study has compared static techniques against dynamic techniques and comprehensively examined the impact of test granularity, program size, fault characteristics, and the similarity of detected faults on TCP techniques. Thus, this dissertation presents an empirical study that thoroughly compares static and dynamic TCP techniques in terms of effectiveness, efficiency, and similarity of uncovered faults at different granularities on a large set of real-world programs, and further analyzes the potential impact of program size and fault characteristics on TCP evaluation. Moreover, in prior work, TCP techniques have typically been evaluated against synthetic software defects, called mutants. For this reason, it is currently unclear whether TCP performance on mutants is representative of the performance achieved on real faults. To answer this fundamental question, this dissertation presents the first empirical study that investigates TCP performance when applied to both real-world faults and mutation faults, in order to understand the representativeness of mutants.
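
    As a concrete illustration of the TCP idea summarized above, the following sketch shows a greedy "additional coverage" prioritization, one classic static strategy: tests are ordered so that each pick covers as many not-yet-covered statements as possible. The data model (test name mapped to a set of covered statement ids) and the function name are illustrative assumptions, not the dissertation's actual tooling.

        # Minimal sketch of greedy "additional coverage" test case prioritization.
        # The coverage map and names below are hypothetical, for illustration only.
        def prioritize_additional(coverage: dict[str, set[int]]) -> list[str]:
            """Order tests so each pick adds the most not-yet-covered statements."""
            remaining = dict(coverage)
            covered: set[int] = set()
            order: list[str] = []
            while remaining:
                # Pick the test that adds the most new coverage.
                best = max(remaining, key=lambda t: len(remaining[t] - covered))
                if not remaining[best] - covered and covered:
                    covered.clear()  # no test adds new coverage: reset and re-rank
                    continue
                order.append(best)
                covered |= remaining.pop(best)
            return order

        if __name__ == "__main__":
            cov = {"test_login": {1, 2, 3}, "test_checkout": {3, 4, 5, 6}, "test_search": {2, 3}}
            print(prioritize_additional(cov))  # ['test_checkout', 'test_login', 'test_search']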

    E-Debitum: managing software energy debt

    35th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW ’20) - International Workshop on Sustainable Software Engineering (SUSTAIN-SE). This paper extends previous work on the concept of a new software energy metric: energy debt. This metric reflects the implied cost, in terms of energy consumption over time, of choosing an energy-flawed software implementation over a more robust and efficient, yet more time-consuming, approach. This paper presents the implementation of a SonarQube plugin called E-Debitum, which calculates the energy debt of Android applications throughout their versions. The plugin uses a robust, well-defined, and extendable smell catalogue based on current green software literature, with each smell defining its potential energy savings. To conclude, an experimental validation of E-Debitum was executed on three popular Android applications with various releases, showing how their energy debt fluctuated across those releases. This work is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020.
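
    To make the metric concrete, the sketch below shows one way an energy-debt-style score could be aggregated: the debt of a release is the sum of the estimated savings of every energy smell detected in it, and tracking that sum across versions shows how the debt fluctuates. The smell names, per-smell savings, and function names are invented for illustration; they are not E-Debitum's actual catalogue or API.

        # Hypothetical energy-debt aggregation (not E-Debitum's real catalogue or API).
        SMELL_SAVINGS_J = {
            "wakelock_not_released": 5.0,        # estimated savings (J) per occurrence
            "gps_polling_too_frequent": 3.5,
            "uncompressed_network_payload": 2.0,
        }

        def energy_debt(smell_counts: dict[str, int]) -> float:
            """Total estimated energy debt (J) of one release from smell occurrence counts."""
            return sum(SMELL_SAVINGS_J.get(s, 0.0) * n for s, n in smell_counts.items())

        def debt_over_releases(history: dict[str, dict[str, int]]) -> dict[str, float]:
            """Energy debt per release, to follow its evolution across versions."""
            return {version: energy_debt(counts) for version, counts in history.items()}

        if __name__ == "__main__":
            history = {
                "v1.0": {"wakelock_not_released": 2, "gps_polling_too_frequent": 1},
                "v1.1": {"wakelock_not_released": 1},
            }
            print(debt_over_releases(history))  # {'v1.0': 13.5, 'v1.1': 5.0}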

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of Chorus and establishing the existing landscape of multimedia search engines, we identified and analyzed gaps within the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps: core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.