
    A situational approach for the definition and tailoring of a data-driven software evolution method

    Successful software evolution heavily depends on selecting the right features to include in the next release. Such selection is difficult, and companies often report bad experiences related to user acceptance. To overcome this challenge, a growing number of approaches propose the intensive use of data to drive evolution. This trend has motivated the SUPERSEDE method, which proposes the collection and analysis of user feedback and monitoring data as the baseline to elicit and prioritize requirements, which are then used to plan the next release. However, every company may be interested in tailoring this method depending on factors such as project size and scope. In order to provide a systematic approach, we propose the use of Situational Method Engineering to describe SUPERSEDE and guide its tailoring to a particular context.
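
    The abstract does not describe how feedback and monitoring data are combined into a requirement ranking, so the following is only a minimal sketch, assuming a simple weighted score over hypothetical feedback and monitoring signals; the field names and weights are illustrative and not part of SUPERSEDE.

        from dataclasses import dataclass

        @dataclass
        class Candidate:
            """A candidate requirement for the next release (fields are illustrative)."""
            name: str
            feedback_votes: int       # user feedback items requesting this feature
            monitored_incidents: int  # runtime problems observed in the related feature area
            effort: float             # rough implementation effort estimate

        def priority(c: Candidate, w_feedback: float = 0.6, w_monitoring: float = 0.4) -> float:
            """Blend the two data sources into one score; higher means more urgent per unit of effort."""
            signal = w_feedback * c.feedback_votes + w_monitoring * c.monitored_incidents
            return signal / max(c.effort, 1e-6)

        candidates = [
            Candidate("dark mode", feedback_votes=120, monitored_incidents=0, effort=5),
            Candidate("fix login timeout", feedback_votes=40, monitored_incidents=35, effort=2),
        ]
        for c in sorted(candidates, key=priority, reverse=True):
            print(f"{c.name}: {priority(c):.1f}")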

    Business Value Is not only Dollars - Results from Case Study Research on Agile Software Projects

    Business value is a key concept in agile software development. This paper presents the results of a case study on how business value and its creation are perceived in the context of agile projects. Our overall conclusion is that project participants almost never use an explicit and structured approach to guide value creation throughout the project. Still, the application of agile methods in the studied cases leads to satisfied clients. An interesting finding of the study is that the agile process of many projects differs significantly from what is described in agile practitioners' books as best practice. The key implication for research and practice is that there is an incentive to pursue the study of value creation in agile projects and to complement it by providing guidelines for better client involvement, as well as by developing structured methods that enhance value creation in a project.

    Decision-making through sustainability

    From time immemorial, dams have contributed significantly to the progress of civilizations, and as a result there is now a vast engineering heritage. Over the years, these infrastructures can present ordinary maintenance issues associated with their normal operation or with ageing processes. Normally, these problems do not represent an important risk for the structure, but they have to be attended to. To do so, dam owners have to finance many ordinary interventions. As it is impossible to carry out all of them at the same time, managers have to select the most "important" ones. This is not easy, because interventions usually have very different natures (for example, repairing a bottom outlet, changing gates, or sealing a crack) and classical risk analysis cannot be applied to this type of intervention. The authors, aware of this problem, present in this paper a multi-criteria decision-making system to prioritize these interventions, with the aim of providing engineers with a useful tool to rank interventions from the most important to the least. To this end, the authors have used MIVES. This tool defines the Prioritization Index for the Management of Hydraulic Structures (PIMHS), which assesses, in two phases, the contribution of each intervention to sustainability. The first phase measures the damage to the dam, and the second measures the social, environmental and economic impacts. At the end of the paper, a case study is presented in which several interventions are evaluated with PIMHS.
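
    The abstract does not give the PIMHS formulas; the sketch below shows a generic two-phase weighted aggregation of normalized criteria in the spirit of MIVES-style multi-criteria analysis. The criterion names, weights, and the simple linear value functions are assumptions for illustration only.

        def linear_value(x: float, worst: float, best: float) -> float:
            """Map a raw criterion score onto [0, 1]; MIVES normally uses richer value functions."""
            return max(0.0, min(1.0, (x - worst) / (best - worst)))

        def pimhs_like_index(damage: float,
                             social: float, environmental: float, economic: float,
                             w_damage: float = 0.5,
                             w_social: float = 0.2, w_env: float = 0.15, w_econ: float = 0.15) -> float:
            """Illustrative two-phase index: dam damage first, then sustainability impacts."""
            phase1 = linear_value(damage, worst=0, best=10)  # severity of the damage (0-10 scale)
            phase2 = (w_social * linear_value(social, 0, 10)
                      + w_env * linear_value(environmental, 0, 10)
                      + w_econ * linear_value(economic, 0, 10)) / (w_social + w_env + w_econ)
            return w_damage * phase1 + (1 - w_damage) * phase2

        # Rank three hypothetical interventions from most to least important.
        interventions = {"repair bottom outlet": (8, 6, 4, 7),
                         "change gates": (5, 3, 2, 6),
                         "seal crack": (3, 2, 1, 2)}
        ranked = sorted(interventions.items(), key=lambda kv: pimhs_like_index(*kv[1]), reverse=True)
        for name, scores in ranked:
            print(name, round(pimhs_like_index(*scores), 2))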

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs or junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Visualizing test diversity to support test optimisation

    Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (also referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: (i) the trade-offs in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and (ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions for improvement. We conclude that the visualisation of diversity information can assist testers in their maintenance and optimisation activities.
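
    As a rough illustration of pair-wise diversity calculations (the paper's exact distance measure and data sources are not stated in the abstract), the sketch below computes Jaccard distances between tokenised test scripts and greedily selects a diverse subset; the test names and the whitespace tokenisation are assumptions.

        def jaccard_distance(a: set, b: set) -> float:
            """1 - |A intersect B| / |A union B|: 0 for identical token sets, 1 for disjoint ones."""
            union = a | b
            if not union:
                return 0.0
            return 1.0 - len(a & b) / len(union)

        def greedy_diverse_subset(tests: dict, k: int) -> list:
            """Pick k tests, each maximising its minimum distance to the tests already chosen."""
            tokens = {name: set(body.split()) for name, body in tests.items()}
            chosen = [next(iter(tests))]  # seed with an arbitrary test
            while len(chosen) < k:
                best = max((t for t in tests if t not in chosen),
                           key=lambda t: min(jaccard_distance(tokens[t], tokens[c]) for c in chosen))
                chosen.append(best)
            return chosen

        tests = {
            "t1": "open login page enter user enter password submit assert dashboard",
            "t2": "open login page enter user enter wrong password submit assert error",
            "t3": "open catalog add item to cart checkout assert order confirmation",
        }
        print(greedy_diverse_subset(tests, 2))  # picks the seed t1 plus the most dissimilar test, t3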

    The Unfulfilled Potential of Data-Driven Decision Making in Agile Software Development

    With the general trend towards data-driven decision making (DDDM), organizations are looking for ways to use DDDM to improve their decisions. However, few studies have looked into the practitioners' view of DDDM, in particular in agile organizations. In this paper we investigated the experiences of using DDDM and how data can improve decision making. A questionnaire was emailed to 124 industry practitioners in agile software development companies, of which 84 answered. The results show that few practitioners indicated a widespread use of DDDM in their current decision-making practices. The practitioners were more positive about its future use for higher-level and more general decision making, fairly positive about its use for requirements elicitation and prioritization decisions, and less positive about its future use at the team level. The practitioners do see a lot of potential for DDDM in an agile context; however, that potential is currently unfulfilled.

    PRIORITY MANAGEMENT – A DIRECTION TOWARDS COMPETITIVENESS

    In a time when most of us have to cope with globalization, the key to overcoming the negative effects it produces lies in choosing the right strategy. This necessarily involves performance management. In this paper we propose priority management as an efficient way of thinking about gaining a vital competitive advantage. Keywords: priority management, the Pareto law, the 1-3-6 method, competitiveness, efficiency

    Video Game Development in a Rush: A Survey of the Global Game Jam Participants

    Video game development is a complex endeavor, often involving complex software, large organizations, and aggressive release deadlines. Several studies have reported that periods of "crunch time" are prevalent in the video game industry, but there are few studies on the effects of time pressure. We conducted a survey with participants of the Global Game Jam (GGJ), a 48-hour hackathon. Based on 198 responses, the results suggest that: (1) iterative brainstorming is the most popular method for conceptualizing initial requirements; (2) continuous integration, minimum viable product, scope management, version control, and stand-up meetings are frequently applied development practices; (3) regular communication, internal playtesting, and dynamic and proactive planning are the most common quality assurance activities; and (4) familiarity with agile development has a weak correlation with perception of success in GGJ. We conclude that GGJ teams rely on ad hoc approaches to development and face-to-face communication, and recommend some complementary practices with limited overhead. Furthermore, as our findings are similar to recommendations for software startups, we posit that game jams and the startup scene share contextual similarities. Finally, we discuss the drawbacks of systemic "crunch time" and argue that game jam organizers are in a good position to problematize the phenomenon.

    Risk Assessment of a Wind Turbine: A New FMECA-Based Tool With RPN Threshold Estimation

    A wind turbine is a complex system used to convert the kinetic energy of the wind into electrical energy. During the turbine design phase, a risk assessment is mandatory to reduce machine downtime and Operation & Maintenance costs and to ensure service continuity. This paper proposes a procedure based on Failure Modes, Effects, and Criticality Analysis (FMECA) to take into account every possible criticality that could lead to a turbine shutdown. Currently, no standard procedure is available for evaluating the risk priority number threshold. To fill this gap, this paper proposes a new approach for Risk Priority Number (RPN) prioritization based on statistical analysis and compares the proposed method with the only three quantitative prioritization techniques found in the literature. The proposed procedure was applied to the electrical and electronic components of a Spanish 2 MW on-shore wind turbine.
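
    The abstract does not state the threshold formula. As background, the RPN in FMECA is conventionally the product of severity, occurrence, and detection ratings; the sketch below flags failure modes whose RPN exceeds a simple statistical cutoff (mean plus one standard deviation), which is an illustrative assumption rather than the paper's method, and the listed failure modes are hypothetical.

        from statistics import mean, stdev

        def rpn(severity: int, occurrence: int, detection: int) -> int:
            """Conventional FMECA Risk Priority Number: S x O x D, each rated 1-10."""
            return severity * occurrence * detection

        # Hypothetical failure modes of wind-turbine electrical/electronic components.
        failure_modes = {
            "converter IGBT open circuit": (8, 4, 5),
            "pitch motor encoder drift":   (6, 5, 6),
            "slip-ring brush wear":        (4, 7, 3),
            "controller firmware hang":    (7, 2, 7),
        }
        rpns = {name: rpn(*sod) for name, sod in failure_modes.items()}

        # Illustrative statistical threshold: flag RPNs above mean + 1 standard deviation.
        threshold = mean(rpns.values()) + stdev(rpns.values())
        for name, value in sorted(rpns.items(), key=lambda kv: kv[1], reverse=True):
            flag = "CRITICAL" if value > threshold else "ok"
            print(f"{name}: RPN={value} ({flag})")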