
    Agile Testing: Improving the Process: Case Descom

    The thesis was assigned by Descom, a marketing and technology company based in Jyväskylä. The aim of the thesis was to research the current state of testing inside the organization and to improve the existing processes and practices. The thesis was carried out as design research (applied action research), because the focus was on improving processes that already existed inside the company. The theory base covers a wide range of subjects, from agile development models, the testing process, and process improvement models to agile testing. Without a solid grounding in all of these aspects, it would have been impossible to understand how testing works as a process and how it could be improved. As Descom uses agile development, it was necessary to follow the same principles throughout the writing of the thesis and in its results. As a result, the company received information about the current state of its testing procedures and about how to improve testing and its processes in the future. The existing testing documentation, such as the test plan and test report, was updated. New documents were also created: a process improvement plan based on Critical Testing Processes, a test strategy, and a testing policy. Figures of the testing process, and of the processes for all test types in use, were created as visual aids for understanding testing as a whole at Descom.

    A Scenario-Based Parametric Analysis of Stable Marriage Approaches to the Army Officer Assignment Problem

    This paper compares linear programming and stable marriage approaches to the assignment problem under conditions of uncertainty. Robust solutions should exhibit reduced variability in the presence of one or more additional constraints. Several variations of each approach are compared with respect to solution quality, as measured by the overall social welfare among Officers and Assignments, and robustness, as measured by the number of changes after a number of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified assignments. Additional constraints are modeled after realistic scenarios faced by Army assignment managers, with parameters randomized. The Pareto-efficient approaches, relative to these measures of quality and robustness, are identified and subjected to a regression analysis. The coefficients of these models provide insight into the impact of the different scenarios under study, as well as inform trade-off decisions between Pareto-optimal approaches.
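
    The stable marriage side of this comparison is presumably a deferred-acceptance procedure in the Gale-Shapley family. The sketch below shows a minimal officer-proposing version of that classic algorithm; the officer and assignment names, preference lists, and simple one-to-one setup are illustrative assumptions, not the paper's actual model, which adds scenario-specific constraints and randomized parameters.

```python
# Minimal Gale-Shapley deferred acceptance for a one-to-one
# officer-to-assignment matching. Preference data are hypothetical;
# assumes equal numbers of officers and assignments and complete
# preference lists, which the paper's richer formulation relaxes.

def stable_match(officer_prefs, assignment_prefs):
    """officer_prefs: {officer: [assignments, best first]}
       assignment_prefs: {assignment: [officers, best first]}
       Returns a stable matching {assignment: officer}."""
    # Rank lookup so each assignment can compare two proposers in O(1).
    rank = {a: {o: r for r, o in enumerate(prefs)}
            for a, prefs in assignment_prefs.items()}
    free = list(officer_prefs)                   # officers not yet matched
    next_choice = {o: 0 for o in officer_prefs}  # next index to propose to
    matched = {}                                 # assignment -> officer

    while free:
        officer = free.pop()
        a = officer_prefs[officer][next_choice[officer]]
        next_choice[officer] += 1
        if a not in matched:
            matched[a] = officer        # assignment was unfilled
        elif rank[a][officer] < rank[a][matched[a]]:
            free.append(matched[a])     # displace the weaker match
            matched[a] = officer
        else:
            free.append(officer)        # rejected; propose again later

    return matched

officers = {"O1": ["A1", "A2"], "O2": ["A1", "A2"]}
assignments = {"A1": ["O2", "O1"], "A2": ["O1", "O2"]}
print(stable_match(officers, assignments))  # {'A1': 'O2', 'A2': 'O1'}
```

    A matching produced this way is stable in the textbook sense: no officer and assignment both prefer each other over their current partners. That structural property, rather than optimal total welfare, is the kind of behavior the paper's robustness comparison against linear programming probes.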

    Rethinking Productivity in Software Engineering

    Get the most out of this foundational reference and improve the productivity of your software teams. This open access book collects the wisdom of the 2017 Dagstuhl seminar on productivity in software engineering, a meeting of community leaders who came together with the goal of rethinking traditional definitions and measures of productivity. The result of their work, Rethinking Productivity in Software Engineering, includes chapters covering definitions and core concepts related to productivity, guidelines for measuring productivity in specific contexts, best practices and pitfalls, and theories and open questions on productivity. You'll benefit from the many short chapters, each offering a focused discussion of one aspect of productivity in software engineering. Readers in many fields and industries will benefit from the collected work. Developers wanting to improve their personal productivity will learn effective strategies for overcoming common issues that interfere with progress. Organizations thinking about building internal programs for measuring the productivity of programmers and teams will learn best practices from industry and researchers. And researchers can leverage the conceptual frameworks and rich body of literature in the book to effectively pursue new research directions. What You'll Learn: review the definitions and dimensions of software productivity; see how time management can have the opposite of the intended effect; develop valuable dashboards; understand the impact of sensors on productivity; avoid software development waste; work with human-centered methods to measure productivity; look at the intersection of neuroscience and productivity; manage interruptions and context switching. Who This Book Is For: industry developers and those responsible for seminar-style courses that include a segment on software developer productivity. Chapters are written for a generalist audience, without excessive use of technical terminology. Collects the wisdom of software engineering thought leaders in a form digestible for any developer; shares hard-won best practices and pitfalls to avoid; offers an up-to-date look at current practices in software engineering productivity.

    Data Analytics 2016: Proceedings of the Fifth International Conference on Data Analytics


    Static Analysis in Practice

    Static analysis tools search software for defects that may cause an application to deviate from its intended behavior. These include defects that compute incorrect values, cause runtime exceptions or crashes, expose applications to security vulnerabilities, or lead to performance degradation. In an ideal world, the analysis would precisely identify all possible defects. In reality, it is not always possible to infer the intent of a software component or code fragment, and static analysis tools sometimes output spurious warnings or miss important bugs. As a result, tool makers and researchers focus on developing heuristics and techniques to improve speed and accuracy. But, in practice, speed and accuracy are not sufficient to maximize the value received by software makers using static analysis. Software engineering teams need to make static analysis an effective part of their regular process. In this dissertation, I examine the ways static analysis is used in practice by commercial and open source users. I observe that effectiveness is hampered not only by false warnings, but also by true defects that do not affect software behavior in practice. Indeed, mature production systems are often littered with true defects that do not prevent them from functioning mostly correctly. To understand why this occurs, consider that developers inadvertently create both important and unimportant defects when they write software, but most quality assurance activities are directed at finding the important ones. By the time the system is mature, there may still be a few consequential defects that can be found by static analysis, but they are drowned out by the many true but low-impact defects that were never fixed. An exception to this rule is certain classes of subtle security, performance, or concurrency defects that are hard to detect without static analysis. Software teams can use static analysis to find defects very early in the process, when they are cheapest to fix, and in so doing increase the effectiveness of later quality assurance activities. But this effort comes with costs that must be managed to ensure static analysis is worthwhile. The cost effectiveness of static analysis also depends on the nature of the defect being sought, the nature of the application, the infrastructure supporting the tools, and the policies governing their use. Through this research, I interact with real users through surveys, interviews, lab studies, and community-wide reviews to discover their perspectives and experiences and to understand the costs and challenges incurred when adopting static analysis tools. I also analyze the defects found in real systems and make observations about which ones are fixed, why some seemingly serious defects persist, and what considerations static analysis tools and software teams should make to increase effectiveness. Ultimately, my interaction with real users confirms that static analysis is well received and useful in practice, but the right environment is needed to maximize its return on investment.
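
    As a concrete illustration of what such a tool does, the sketch below implements a toy static check in Python: it parses source code into an abstract syntax tree and flags mutable default arguments, a well-known defect pattern, without ever executing the program. The check and the sample code are illustrative assumptions, not taken from the dissertation, which studies industrial-strength tools.

```python
# Illustrative toy static check (not from the dissertation): flag
# mutable default arguments, a classic Python defect that analyzers
# such as pylint also report, without running the code under test.
import ast

SOURCE = '''
def append_item(item, bucket=[]):   # shared list across calls: a real bug
    bucket.append(item)
    return bucket

def safe_append(item, bucket=None): # idiomatic fix: sentinel default
    return (bucket or []) + [item]
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        for default in node.args.defaults:
            # A list/dict/set literal as a default is created once and
            # shared by every call, so mutations persist between calls.
            if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                print(f"line {default.lineno}: mutable default "
                      f"in '{node.name}' may persist between calls")
```

    Note that the warning here is a true positive, yet in a mature codebase such a defect might never cause a visible failure; this is exactly the class of true but low-impact findings the dissertation identifies as a drag on effectiveness.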

    Clinical decision-making: midwifery students' recognition of, and response to, post partum haemorrhage in the simulation environment

    Background: This paper reports the findings of a study of how midwifery students responded to a simulated post partum haemorrhage (PPH). Internationally, 25% of maternal deaths are attributed to severe haemorrhage. Although this figure is far higher in developing countries, the risk to maternal wellbeing and child health means that all midwives need to remain vigilant and respond appropriately to early signs of maternal deterioration.

    Methods: Simulation using a patient actress enabled the research team to investigate the way in which 35 midwifery students made decisions in a dynamic high-fidelity PPH scenario. The actress wore a birthing suit that simulated blood loss and a flaccid uterus on palpation. The scenario provided low levels of uncertainty and high levels of relevant information. The students' responses to the scenario were videoed. Immediately afterwards, they were invited to review the video, reflect on their performance, and give a commentary on what affected their decisions. The data were analysed using Dimensional Analysis.

    Results: The students' clinical management of the situation varied considerably. Students struggled to prioritise their actions where more than one response was required to a clinical cue and did not necessarily use mnemonics as heuristic devices to guide their actions. Driven by responses to single cues, they also showed a reluctance to formulate a diagnosis based on inductive and deductive reasoning cycles. This meant they did not necessarily introduce new hypothetical ideas against which they might refute or confirm a diagnosis and thereby eliminate fixation error.

    Conclusions: The students' responses demonstrated that a number of clinical skills require updating on a regular basis, including fundal massage technique, the use of emergency standing-order drugs, communication and delegation of tasks to others in an emergency, and working independently until help arrives. Heuristic devices helped the students to evaluate their interventions and to illuminate what else could be done whilst they awaited the emergency team. They did not necessarily serve to prompt the students or help them plan care prospectively. The limitations of the study are critically explored along with the pedagogic implications for initial training and continuing professional development.

    How Much Method-in-Use Matters? A Case Study of Agile and Waterfall Software Projects and their Design Routine Variation

    Development methods are rarely followed to the letter, and, consequently, their effects are often in doubt. At the same time, information systems scholars know little about the extent to which a given method truly influences software design and its outcomes. In this paper, we approach this gap by adopting a routine lens and a novel methodological approach. Theoretically, we treat methods as (organizational) ostensive routine specifications and deploy the routine construct as a feasible unit of analysis for examining the effects of a method on actual, “performed” design routines. We formulated a research framework that identifies method, situation fitness, agency, and random noise as the main sources of software design routine variation. Empirically, we applied the framework to examine the extent to which waterfall and agile methods induce variation in software design routines. We traced enacted design activities in three software projects in a large IT organization that followed an object-oriented waterfall method and in three software projects that followed an agile method, and then analyzed these traces using a mixed-methods approach involving gene-sequencing methods, Markov models, and qualitative content analysis. Our analysis shows that, under both the agile and the waterfall method, method-induced variation accounts for about 40% of all activities, while the remaining 60% can be explained by a designer's personal habits, the project's fitness conditions, and environmental noise. Generally, the effect of a method on software design activities is smaller than assumed, and the impact of designer and project conditions on software processes and outcomes should thus not be underestimated.
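
    To make the Markov-model step concrete, the sketch below estimates first-order transition probabilities between coded design activities from event traces, one standard way to compare routine variation across projects. The activity codes and traces are invented for illustration; the paper's actual coding scheme, trace data, and models may differ.

```python
# Hypothetical sketch of one analysis step: estimating a first-order
# Markov transition matrix over coded design activities from project
# traces. Activity codes and traces are invented for illustration.
from collections import Counter, defaultdict

# Each trace is a sequence of coded design activities for one project.
traces = [
    ["spec", "design", "code", "test", "code", "test"],   # waterfall-like
    ["design", "code", "test", "design", "code", "test"], # agile-like
]

# Count observed transitions between consecutive activities.
counts = defaultdict(Counter)
for trace in traces:
    for src, dst in zip(trace, trace[1:]):
        counts[src][dst] += 1

# Normalize each row of counts into transition probabilities.
transitions = {
    src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
    for src, dsts in counts.items()
}
for src, dsts in sorted(transitions.items()):
    for dst, p in sorted(dsts.items()):
        print(f"P({dst} | {src}) = {p:.2f}")
```

    Comparing the transition matrices estimated separately for agile and waterfall projects would show where the method actually changes the flow of design activities and where designers' habits and project conditions dominate.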