Designing IS service strategy: an information acceleration approach
Information technology-based innovation involves considerable risk that requires insight and foresight. Yet, our understanding of how managers develop the insight to support new breakthrough applications is limited and remains obscured by high levels of technical and market uncertainty. This paper applies a new experimental method based on 'discrete choice analysis' and 'information acceleration' to directly examine how decisions are made in a way that is behaviourally sound. The method is highly applicable to information systems researchers because it provides relative importance measures on a common scale, greater control over alternate explanations and stronger evidence of causality. The practical implications are that information acceleration reduces the levels of uncertainty and generates a more accurate rationale for IS service strategy decisions.
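As an illustration of how a discrete choice model places attribute effects on a common utility scale, the following minimal sketch fits a binary logit to simulated choices between two hypothetical IS service concepts. The attributes, data and coefficients are illustrative assumptions, not figures from the study.

```python
# A minimal sketch of how a discrete choice (logit) model puts attribute
# effects on a common utility scale. The attributes, levels and data below
# are hypothetical illustrations, not taken from the study.
import numpy as np

rng = np.random.default_rng(0)

# Each row describes the difference between two IS service concepts on three
# hypothetical attributes: cost saving, deployment risk, vendor support.
n = 500
X = rng.normal(size=(n, 3))

# "True" part-worth utilities used only to simulate respondents' choices.
beta_true = np.array([1.2, -0.8, 0.4])
p_choose_a = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, p_choose_a)

# Fit a binary logit by Newton's method (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

# Coefficients share one utility scale, so their normalised magnitudes can be
# read as the relative importance of each attribute to the choice.
importance = np.abs(beta) / np.abs(beta).sum()
for name, b, imp in zip(["cost saving", "deployment risk", "vendor support"],
                        beta, importance):
    print(f"{name:15s}  coef={b:+.2f}  relative importance={imp:.0%}")
```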
Practitioner Track Proceedings of the 6th International Learning Analytics & Knowledge Conference (LAK16)
Practitioners spearhead a significant portion of learning analytics, relying on implementation and experimentation rather than on traditional academic research. Both approaches help to improve the state of the art. The LAK conference has created a practitioner track for submissions, which first ran in 2015 as an alternative to the researcher track.
The primary goal of the practitioner track is to share thoughts and findings that stem from learning analytics project implementations. While both large and small implementations are considered, all practitioner track submissions are required to relate to initiatives that are designed for large-scale and/or long-term use (as opposed to research-focused initiatives). Other guidelines include:
• Implementation track record: The project should have been used by an institution or have been deployed on a learning site. There are no hard guidelines about user numbers or how long the project has been running.
• Learning/education related: Submissions have to describe work that addresses learning/academic analytics, either at an educational institution or in an area (such as corporate training, health care or informal learning) where the goal is to improve the learning environment or learning outcomes.
• Institutional involvement: Neither submissions nor presentations have to include a named person from an academic institution. However, all submissions have to include information collected from people who have used the tool or initiative in a learning environment (such as faculty, students, administrators and trainees).
• No sales pitches: While submissions from commercial suppliers are welcome, reviewers do not accept overt (or covert) sales pitches. Reviewers look for evidence that a presentation will take into account challenges faced, problems that have arisen, and/or user feedback that needs to be addressed.
Submissions are limited to 1,200 words, including an abstract, a summary of deployment with end users, and a full description. Most papers in the proceedings are therefore short, and often informal, although some authors chose to extend their papers once they had been accepted.
Papers accepted in 2016 fell into two categories.
• Practitioner Presentations: Presentation sessions are designed to focus on the deployment of a single learning analytics tool or initiative.
• Technology Showcase: The Technology Showcase event enables practitioners to demonstrate new and emerging learning analytics technologies that they are piloting or deploying.
Both types of paper are included in these proceedings.
Maintenance of Automated Test Suites in Industry: An Empirical study on Visual GUI Testing
Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research provides only limited empirical data from industrial practice about the maintenance costs of automated tests and the factors that affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing.
Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice.
Method: An empirical study at two companies, Siemens and Saab, is reported, where interviews about, and empirical work with, Visual GUI Testing were performed to acquire data about the technique's maintenance costs and feasibility.
Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing.
Conclusions: It is concluded that test automation can lower the overall software development costs of a project while also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before positive economic ROI is reached after automation.
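The paper's own cost model is not reproduced in the abstract; the sketch below only illustrates the kind of break-even reasoning it describes, using assumed, hypothetical cost figures for script development, per-cycle maintenance and per-cycle manual testing.

```python
# Hedged, illustrative break-even calculation for test automation ROI.
# The cost figures and the simple linear model are assumptions for the sake
# of the example; the paper's own cost model is not reproduced here.

def cycles_to_positive_roi(dev_cost_h, auto_maint_h, manual_test_h):
    """Return the number of test cycles until cumulative automated-testing
    cost (development + per-cycle maintenance) drops below cumulative
    manual-testing cost, or None if automation never pays off."""
    saving_per_cycle = manual_test_h - auto_maint_h
    if saving_per_cycle <= 0:
        return None  # manual testing is cheaper in every cycle
    return int(dev_cost_h / saving_per_cycle) + 1  # first cycle with positive ROI

# Hypothetical figures (hours): script development, maintenance per cycle,
# and manual execution per cycle. A team that spends little time on manual
# testing (small last argument) needs many more cycles before ROI is reached.
print(cycles_to_positive_roi(dev_cost_h=400, auto_maint_h=10, manual_test_h=50))  # 11
print(cycles_to_positive_roi(dev_cost_h=400, auto_maint_h=10, manual_test_h=15))  # 81
```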
ERIGrid Holistic Test Description for Validating Cyber-Physical Energy Systems
Smart energy solutions aim to modify and optimise the operation of existing energy infrastructure. Such cyber-physical technology must be mature before deployment to the actual infrastructure, and competitive solutions will have to be compliant with standards still under development. Achieving this technology readiness and harmonisation requires reproducible experiments and appropriately realistic testing environments. Such testbeds for multi-domain cyber-physical experiments are complex in and of themselves. This work addresses a method for the scoping and design of experiments where both testbed and solution each require detailed expertise. This empirical work first revisited present test description approaches, developed a new description method for cyber-physical energy systems testing, and matured it by means of user involvement. The new Holistic Test Description (HTD) method facilitates the conception, deconstruction and reproduction of complex experimental designs in the domains of cyber-physical energy systems. This work develops the background and motivation, offers a guideline and examples for the proposed approach, and summarises experience from three years of its application. This work received funding in the European Community's Horizon 2020 Program (H2020/2014–2020) under project 'ERIGrid' (Grant Agreement No. 654113).
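The abstract does not spell out the HTD template itself; the sketch below merely illustrates the general idea of a structured, machine-readable test description for a cyber-physical energy experiment. All class and field names are assumptions for illustration, not the ERIGrid HTD schema.

```python
# Illustrative sketch only: a structured, machine-readable test description in
# the spirit of a holistic test description. The class and field names are
# assumptions for illustration, not the template defined by ERIGrid.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    test_objective: str                 # what the experiment is meant to show
    system_under_test: str              # the cyber-physical (sub)system examined
    domains: list[str] = field(default_factory=list)        # e.g. electrical, ICT
    test_criteria: list[str] = field(default_factory=list)  # pass/fail conditions

@dataclass
class ExperimentSpecification:
    test_case: TestCase
    testbed: str                        # lab or simulation environment used
    parameters: dict[str, float] = field(default_factory=dict)

tc = TestCase(
    name="Voltage control interoperability",
    test_objective="Validate controller response to voltage deviations",
    system_under_test="LV feeder with PV inverters and controller",
    domains=["electrical", "ICT"],
    test_criteria=["voltage stays within +/-10% of nominal"],
)
exp = ExperimentSpecification(test_case=tc,
                              testbed="hardware-in-the-loop lab",
                              parameters={"nominal_voltage_v": 400.0})
print(exp.test_case.name, "->", exp.testbed)
```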
Negotiating disciplinary boundaries in engineering problem-solving practice
The impetus for this research is the well-documented current inability of Higher Education to facilitate the level of problem solving required in 21st century engineering practice. The research contends that there is insufficient understanding of the nature of and relationship between the significantly different forms of disciplinary knowledge underpinning engineering practice. Situated in the Sociology of Education, and drawing on the social realist concepts of knowledge structures (Bernstein, 2000) and epistemic relations (Maton, 2014), the research maps the topology of engineering problem-solving practice in order to illuminate how novice problem solvers engage in epistemic code shifting in different industrial contexts. The aim in mapping problem-solving practices from an epistemological perspective is to make an empirical contribution to rethinking the theory/practice relationship in multidisciplinary engineering curricula and pedagogy, particularly at the level of technician. A novel and pragmatic problem-solving model, integrated from a range of disciplines, forms the organising framework for a methodologically pluralist case-study approach. The research design draws on a metaphor from the empirical site (modular automation systems) and sees the analysis of twelve matched cases in three categories. Case-study data consist of questionnaire texts, re-enactment interviews, expert verification interviews, and industry literature. The problem-solving model components (problem solver, problem environment, problem structure and problem-solving process) were analysed using, primarily, the Legitimation Code Theory concept of epistemic relations. This is a Cartesian plane-based instrument describing the nature of and relations between a phenomenon (what) and ways of approaching the phenomenon (how). Data analyses are presented as graphical relational maps of different practitioner knowledge practices in different contexts across three problem-solving stages: approach, analysis and synthesis. Key findings demonstrate a symbiotic, structuring relationship between the 'what' and the 'how' of the problem in relation to the problem-solving components. Successful problem solving relies on the recognition of these relationships and the realisation of appropriate practice code conventions, as held to be legitimate both epistemologically and contextually. Successful practitioners engage in explicit code-shifting, generally drawing on a priori physics and mathematics-based knowledge, while acquiring a posteriori context-specific logic-based knowledge. High-achieving practitioners across these disciplinary domains demonstrate iterative code-shifting practices and discursive sensitivity. Recommendations for engineering education include the valuing of disciplinary differences and the acknowledgement of contextual complexity. It is suggested that the nature of engineering mathematics as currently taught and the role of mathematical thinking in enabling successful engineering problem-solving practice be investigated.
Using Bad Learners to find Good Configurations
Finding the optimally performing configuration of a software system for a given setting is often challenging. Recent approaches address this challenge by learning performance models based on a sample set of configurations. However, building an accurate performance model can be very expensive (and is often infeasible in practice). The central insight of this paper is that exact performance values (e.g. the response time of a software system) are not required to rank configurations and to identify the optimal one. As shown by our experiments, models that are cheap to learn but inaccurate (with respect to the difference between actual and predicted performance) can still be used to rank configurations and hence find the optimal configuration. This novel rank-based approach allows us to significantly reduce the cost (in terms of the number of measurements of sample configurations) as well as the time required to build models. We evaluate our approach with 21 scenarios based on 9 software systems and demonstrate that our approach is beneficial in 16 scenarios; for the remaining 5 scenarios, an accurate model can be built with very few samples anyway, without the need for a rank-based approach.
Comment: 11 pages, 11 figures
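A minimal sketch of the rank-based idea follows: a cheap, deliberately shallow learner trained on a handful of measured configurations is used only to rank a synthetic configuration space and pick a candidate optimum. The configuration space, the weights and the choice of a CART regressor are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the rank-based idea: a model trained on very few measured
# configurations may predict performance poorly yet still rank configurations
# well enough to locate a near-optimal one. The synthetic configuration space,
# the weights and the CART learner are illustrative assumptions only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# 1,000 configurations with 6 binary options; lower "performance" is better
# (think response time). The true performance function is unknown in practice.
configs = rng.integers(0, 2, size=(1000, 6))
true_perf = configs @ np.array([5.0, -3.0, 2.0, 8.0, -1.0, 0.5])
true_perf = true_perf + rng.normal(0, 0.5, size=1000)

# Measure only a small random sample and fit a cheap, shallow (inaccurate) model.
sample = rng.choice(len(configs), size=30, replace=False)
model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(configs[sample], true_perf[sample])

# Use the inaccurate predictions only to *rank* configurations.
predicted = model.predict(configs)
chosen = int(np.argmin(predicted))      # configuration ranked best by the model
optimum = int(np.argmin(true_perf))     # true optimum, known here for comparison

better = int(np.sum(true_perf < true_perf[chosen]))
print(f"configs truly better than the chosen one: {better} of {len(configs)}")
print(f"performance gap to the true optimum: {true_perf[chosen] - true_perf[optimum]:.2f}")
```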
Comparative study of design: application to Engineering Design
A recent exploratory study examines design processes across domains and compares them. This is achieved through a series of interdisciplinary, participative workshops. A systematic framework is used to collect data from expert witnesses who are practising designers across domains from engineering through architecture to product design and fashion, including film production, pharmaceutical drugs, food, packaging, graphics and multimedia and software. Similarities and differences across domains are described, which indicate the types of comparative analysis we have been able to do from our data. The paper goes further and speculates on possible lessons for selected areas of engineering design which can be drawn from comparison with processes in other domains. As such, this comparative design study offers the potential for improving engineering design processes. More generally, it is a first step in creating a discipline of comparative design, which aims to provide a new, rich picture of design processes.
Model-based specification of safety compliance needs for critical systems: A holistic generic metamodel
Context: Many critical systems must comply with safety standards as a way of providing assurance that they do not pose undue risks to people, property, or the environment. Safety compliance is a very demanding activity, as the standards can consist of hundreds of pages and practitioners typically have to show the fulfilment of thousands of safety-related criteria. Furthermore, the text of the standards can be ambiguous, inconsistent, and hard to understand, making it difficult to determine how to effectively structure and manage safety compliance information. These issues become even more challenging when a system is intended to be reused in another application domain with different applicable standards. Objective: This paper aims to resolve these issues by providing a metamodel for the specification of safety compliance needs for critical systems. Method: The metamodel is holistic and generic, and abstracts common concepts for demonstrating safety compliance from different standards and application domains. Its application results in the specification of 'reference assurance frameworks' for safety-critical systems, which correspond to a model of the safety criteria of a given standard. For validating the metamodel with safety standards, parts of several standards have been modelled by both academic and industry personnel, and other standards have been analysed. We further augment this with feedback from practitioners, including feedback during a workshop. Results: The results from the validation show that the metamodel can be used to specify safety compliance needs for aerospace, automotive, avionics, defence, healthcare, machinery, maritime, oil and gas, process industry, railway, and robotics. Practitioners consider that the metamodel can meet their needs and find benefits in its use. Conclusion: The metamodel supports the specification of safety compliance needs for most critical computer-based and software-intensive systems. The resulting models can provide an effective means of structuring and managing safety compliance information.
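The metamodel itself is not reproduced in the abstract; the following sketch only illustrates how compliance needs from a standard might be captured as a small 'reference assurance framework' model and checked against available evidence. Class names, attributes and criteria are hypothetical.

```python
# Illustrative sketch only: a tiny model of safety compliance needs, in the
# spirit of a "reference assurance framework" derived from a standard. The
# class and attribute names are hypothetical, not the paper's metamodel.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    identifier: str                   # clause or requirement ID in the standard
    text: str                         # what must be demonstrated
    required_evidence: list[str] = field(default_factory=list)

@dataclass
class ReferenceAssuranceFramework:
    standard: str                     # e.g. a functional-safety standard
    criteria: list[Criterion] = field(default_factory=list)

    def unmet(self, evidence_available: set[str]) -> list[str]:
        """Return IDs of criteria whose required evidence is not all present."""
        return [c.identifier for c in self.criteria
                if not set(c.required_evidence) <= evidence_available]

raf = ReferenceAssuranceFramework(
    standard="Hypothetical safety standard",
    criteria=[
        Criterion("REQ-1", "Hazard analysis performed", ["hazard log"]),
        Criterion("REQ-2", "Unit test coverage justified",
                  ["test report", "coverage report"]),
    ],
)
print(raf.unmet({"hazard log", "test report"}))   # ['REQ-2']
```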
Increasing information feed in the process of structural steel design
Research initiatives throughout history have shown how a designer typically makes associations and references to a vast amount of knowledge based on experiences to make decisions. With the increasing usage of information systems in our everyday lives, one might imagine an information system that provides designers access to the 'architectural memories' of other architectural designers during the design process, in addition to their own physical architectural memory. In this paper, we discuss how the increased adoption of semantic web technologies might advance this idea. We investigate to what extent information can be described with these technologies in the context of structural steel design. This investigation indicates significant possibilities regarding information reuse in the process of structural steel design and, by extension, in other design contexts as well. However, important obstacles and open questions can still be outlined as well.
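As a hedged illustration of the kind of machine-readable description the abstract discusses, the sketch below records a few structural steel design facts as RDF triples with rdflib and queries them back; the namespace, properties and values are made up for the example, not an established steel ontology.

```python
# Illustrative sketch only: describing structural steel design information as
# RDF triples with rdflib and querying it back. The namespace, properties and
# values are invented for this example, not an established steel ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/steel#")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

beam = EX["beam_B1"]
g.add((beam, RDF.type, EX.SteelBeam))
g.add((beam, EX.profile, Literal("IPE 300")))
g.add((beam, EX.spanMetres, Literal(6.0, datatype=XSD.double)))
g.add((beam, EX.designedBy, EX["engineer_42"]))

# Another designer (or an information system) could later query this shared
# "architectural memory" for reusable design decisions.
query = """
    SELECT ?beam ?profile WHERE {
        ?beam a ex:SteelBeam ;
              ex:profile ?profile .
    }
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.beam, row.profile)
```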