9,167 research outputs found

    Designing IS service strategy: an information acceleration approach

    Information technology-based innovation involves considerable risk that requires insight and foresight. Yet, our understanding of how managers develop the insight to support new breakthrough applications is limited and remains obscured by high levels of technical and market uncertainty. This paper applies a new experimental method based on “discrete choice analysis” and “information acceleration” to directly examine how decisions are made in a way that is behaviourally sound. The method is highly applicable to information systems researchers because it provides relative importance measures on a common scale, greater control over alternative explanations, and stronger evidence of causality. The practical implications are that information acceleration reduces the levels of uncertainty and generates a more accurate rationale for IS service strategy decisions.
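
    A minimal sketch of the discrete-choice side of such a method, in Python: a multinomial logit model turns attribute weights (the "relative importance measures on a common scale") into choice probabilities over competing service alternatives. The attribute names and weight values here are illustrative assumptions, not figures from the paper.

        import math

        def choice_probabilities(alternatives, weights):
            # Linear utility per alternative: U_j = sum_k beta_k * x_jk
            utilities = [sum(b * x for b, x in zip(weights, attrs))
                         for attrs in alternatives]
            # Multinomial logit: P_j = exp(U_j) / sum_i exp(U_i)
            exps = [math.exp(u) for u in utilities]
            total = sum(exps)
            return [e / total for e in exps]

        # Two hypothetical IS service options described by (cost, reliability, novelty)
        options = [(0.8, 0.9, 0.3), (0.5, 0.6, 0.9)]
        betas = (-1.2, 2.0, 0.7)  # assumed importance weights estimated from choice data
        print(choice_probabilities(options, betas))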

    Maintenance of Automated Test Suites in Industry: An Empirical study on Visual GUI Testing

    Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research provides only limited empirical data from industrial practice about the maintenance costs of automated tests and the factors that affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported in which interviews about, and empirical work with, Visual GUI Testing were performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower the overall software development costs of a project whilst also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before a positive economic ROI is reached after automation.
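
    The paper's cost model is not reproduced here, but the break-even logic the abstract describes (time to positive ROI of automation versus manual testing) can be sketched as follows; the function name and the figures are assumptions for illustration only.

        import math

        def cycles_to_positive_roi(dev_cost, maint_cost_per_cycle, manual_cost_per_cycle):
            # Each test cycle saves (manual - maintenance); automation pays off once
            # the accumulated savings cover the initial script-development cost.
            savings_per_cycle = manual_cost_per_cycle - maint_cost_per_cycle
            if savings_per_cycle <= 0:
                return None  # automation never breaks even under these assumptions
            return math.ceil(dev_cost / savings_per_cycle)

        # E.g. 400 h to develop the suite, 20 h maintenance vs 60 h manual per cycle
        print(cycles_to_positive_roi(400, 20, 60))  # -> 10 cycles

    Note how this mirrors the abstract's conclusion: the less time currently spent on manual testing, the smaller the savings per cycle and the longer the payback period.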

    ERIGrid Holistic Test Description for Validating Cyber-Physical Energy Systems

    Smart energy solutions aim to modify and optimise the operation of existing energy infrastructure. Such cyber-physical technology must be mature before deployment to the actual infrastructure, and competitive solutions will have to be compliant with standards that are still under development. Achieving this technology readiness and harmonisation requires reproducible experiments and appropriately realistic testing environments. Such testbeds for multi-domain cyber-physical experiments are complex in and of themselves. This work addresses a method for the scoping and design of experiments in which both the testbed and the solution each require detailed expertise. This empirical work first revisited present test description approaches, developed a new description method for cyber-physical energy systems testing, and matured it by means of user involvement. The new Holistic Test Description (HTD) method facilitates the conception, deconstruction and reproduction of complex experimental designs in the domains of cyber-physical energy systems. This work develops the background and motivation, offers a guideline and examples for the proposed approach, and summarises experience from three years of its application. This work received funding from the European Community’s Horizon 2020 Programme (H2020/2014–2020) under the project “ERIGrid” (Grant Agreement No. 654113).
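
    As a rough illustration of what a holistic test description might capture, the sketch below models a testbed-independent test case and its experiment-specific realisation as Python dataclasses. The class and field names are our assumptions for illustration; they do not reproduce the published HTD templates.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class TestCase:                      # testbed-independent description
            name: str
            object_under_investigation: str
            purpose: str
            target_metrics: List[str]

        @dataclass
        class ExperimentSpecification:       # binding of a test case to one testbed
            test_case: TestCase
            testbed: str
            components_used: List[str] = field(default_factory=list)

        tc = TestCase("LVGridVoltageControl", "PV inverter controller",
                      "validate voltage regulation under high PV feed-in",
                      ["voltage deviation", "tap changer operations"])
        exp = ExperimentSpecification(tc, "power-hardware-in-the-loop rig",
                                      ["grid emulator", "real inverter"])

    Separating the two levels is what makes an experiment reproducible on a different testbed: only the ExperimentSpecification needs to change.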

    Negotiating disciplinary boundaries in engineering problem-solving practice

    The impetus for this research is the well-documented current inability of Higher Education to facilitate the level of problem solving required in 21st-century engineering practice. The research contends that there is insufficient understanding of the nature of, and relationship between, the significantly different forms of disciplinary knowledge underpinning engineering practice. Situated in the Sociology of Education, and drawing on the social realist concepts of knowledge structures (Bernstein, 2000) and epistemic relations (Maton, 2014), the research maps the topology of engineering problem-solving practice in order to illuminate how novice problem solvers engage in epistemic code shifting in different industrial contexts. The aim in mapping problem-solving practices from an epistemological perspective is to make an empirical contribution to rethinking the theory/practice relationship in multidisciplinary engineering curricula and pedagogy, particularly at the level of technician. A novel and pragmatic problem-solving model, integrated from a range of disciplines, forms the organising framework for a methodologically pluralist case-study approach. The research design draws on a metaphor from the empirical site (modular automation systems) and comprises the analysis of twelve matched cases in three categories. Case-study data consist of questionnaire texts, re-enactment interviews, expert verification interviews, and industry literature. The problem-solving model components (problem solver, problem environment, problem structure and problem-solving process) were analysed using, primarily, the Legitimation Code Theory concept of epistemic relations. This is a Cartesian plane-based instrument describing the nature of, and relations between, a phenomenon (what) and ways of approaching the phenomenon (how). Data analyses are presented as graphical relational maps of different practitioner knowledge practices in different contexts across three problem-solving stages: approach, analysis and synthesis. Key findings demonstrate a symbiotic, structuring relationship between the 'what' and the 'how' of the problem in relation to the problem-solving components. Successful problem solving relies on the recognition of these relationships and the realisation of appropriate practice code conventions, as held to be legitimate both epistemologically and contextually. Successful practitioners engage in explicit code-shifting, generally drawing on a priori physics- and mathematics-based knowledge while acquiring a posteriori context-specific logic-based knowledge. High-achieving practitioners across these disciplinary domains demonstrate iterative code-shifting practices and discursive sensitivity. Recommendations for engineering education include the valuing of disciplinary differences and the acknowledgement of contextual complexity. It is suggested that the nature of engineering mathematics as currently taught, and the role of mathematical thinking in enabling successful engineering problem-solving practice, be investigated.

    Using Bad Learners to find Good Configurations

    Finding the optimally performing configuration of a software system for a given setting is often challenging. Recent approaches address this challenge by learning performance models based on a sample set of configurations. However, building an accurate performance model can be very expensive (and is often infeasible in practice). The central insight of this paper is that exact performance values (e.g. the response time of a software system) are not required to rank configurations and to identify the optimal one. As shown by our experiments, models that are cheap to learn but inaccurate (with respect to the difference between actual and predicted performance) can still be used to rank configurations and hence find the optimal configuration. This novel rank-based approach allows us to significantly reduce the cost (in terms of the number of sample configurations measured) as well as the time required to build models. We evaluate our approach with 21 scenarios based on 9 software systems and demonstrate that it is beneficial in 16 scenarios; for the remaining 5 scenarios, an accurate model can be built from very few samples anyway, without the need for a rank-based approach.
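
    The core idea lends itself to a short sketch: learn a deliberately cheap model from a few measured configurations, then use its predictions only for ordering. Assumed here: configurations are tuples of numeric option values, measure() is the expensive benchmark, and lower performance values are better; scikit-learn's CART regressor stands in as the "bad learner".

        import random
        from sklearn.tree import DecisionTreeRegressor

        def find_good_configuration(configs, measure, train_size=20):
            # Benchmark only a small random sample (the expensive step).
            sample = random.sample(configs, train_size)
            X = [list(c) for c in sample]
            y = [measure(c) for c in sample]
            model = DecisionTreeRegressor().fit(X, y)  # cheap, possibly inaccurate
            # Predicted values may be far off, but only their *ranking* is used.
            preds = model.predict([list(c) for c in configs])
            ranked = sorted(zip(configs, preds), key=lambda cp: cp[1])
            return ranked[0][0]  # predicted-best configuration

    Because only the ordering matters, the model can tolerate large absolute prediction errors as long as it tends to place good configurations ahead of bad ones.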

    Model-based specification of safety compliance needs for critical systems : A holistic generic metamodel

    Context: Many critical systems must comply with safety standards as a way of providing assurance that they do not pose undue risks to people, property, or the environment. Safety compliance is a very demanding activity, as the standards can consist of hundreds of pages and practitioners typically have to show the fulfilment of thousands of safety-related criteria. Furthermore, the text of the standards can be ambiguous, inconsistent, and hard to understand, making it difficult to determine how to effectively structure and manage safety compliance information. These issues become even more challenging when a system is intended to be reused in another application domain with different applicable standards. Objective: This paper aims to resolve these issues by providing a metamodel for the specification of safety compliance needs for critical systems. Method: The metamodel is holistic and generic, and abstracts common concepts for demonstrating safety compliance from different standards and application domains. Its application results in the specification of “reference assurance frameworks” for safety-critical systems, each corresponding to a model of the safety criteria of a given standard. To validate the metamodel against safety standards, parts of several standards were modelled by both academic and industry personnel, and further standards were analysed. This was augmented with feedback from practitioners, including feedback gathered during a workshop. Results: The validation shows that the metamodel can be used to specify safety compliance needs in the aerospace, automotive, avionics, defence, healthcare, machinery, maritime, oil and gas, process industry, railway, and robotics domains. Practitioners consider that the metamodel can meet their needs and find benefits in its use. Conclusion: The metamodel supports the specification of safety compliance needs for most critical computer-based and software-intensive systems. The resulting models can provide an effective means of structuring and managing safety compliance information.
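
    A metamodel of this kind is naturally expressed as a small class model; the sketch below gives a hedged Python rendering in which a reference assurance framework groups the criteria of one standard. Class and attribute names, and the placeholder criterion, are illustrative assumptions, not the paper's actual metamodel elements.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class SafetyCriterion:
            identifier: str          # placeholder ID; real models would cite a clause
            text: str
            applicability: str       # domain or integrity level it applies to

        @dataclass
        class ReferenceAssuranceFramework:
            standard: str            # e.g. an automotive or avionics safety standard
            domain: str
            criteria: List[SafetyCriterion] = field(default_factory=list)

        raf = ReferenceAssuranceFramework("Hypothetical Standard X", "automotive")
        raf.criteria.append(SafetyCriterion(
            "SW-REQ-01",
            "software safety requirements shall be specified",
            "high integrity level"))

    Keeping the framework per standard is what supports cross-domain reuse: moving a system to a new domain means instantiating a new ReferenceAssuranceFramework rather than restructuring the compliance information.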

    Increasing information feed in the process of structural steel design

    Research has repeatedly shown that, when making decisions, a designer typically draws associations and references from a vast amount of experience-based knowledge. With the increasing use of information systems in our everyday lives, one might imagine an information system that, during the design process, gives designers access to the ‘architectural memories’ of other architectural designers in addition to their own. In this paper, we discuss how the increased adoption of semantic web technologies might advance this idea. We investigate to what extent information can be described with these technologies in the context of structural steel design. This investigation indicates significant possibilities for information reuse in the process of structural steel design and, by extension, in other design contexts as well. However, important obstacles and open questions remain.
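
    To make the idea concrete: with RDF and SPARQL, the core semantic web technologies, earlier design decisions can be stored as triples and queried later by other designers. A minimal sketch using Python's rdflib, with a hypothetical vocabulary and properties not taken from the paper:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        STEEL = Namespace("http://example.org/steel#")  # hypothetical vocabulary
        g = Graph()
        g.add((STEEL.Beam1, RDF.type, STEEL.SteelBeam))
        g.add((STEEL.Beam1, STEEL.profile, Literal("HEA200")))
        g.add((STEEL.Beam1, STEEL.chosenBecause, Literal("deflection limit governed")))

        # Query another designer's 'architectural memory' for comparable decisions
        q = """
            PREFIX steel: <http://example.org/steel#>
            SELECT ?beam ?profile ?reason WHERE {
                ?beam a steel:SteelBeam ;
                      steel:profile ?profile ;
                      steel:chosenBecause ?reason .
            }
        """
        for beam, profile, reason in g.query(q):
            print(beam, profile, reason)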