
    Boundary Objects and their Use in Agile Systems Engineering

    Agile methods are increasingly introduced in automotive companies in an attempt to become more efficient and flexible in system development. The adoption of agile practices influences communication between stakeholders, but also makes companies rethink the management of artifacts and documentation such as requirements, safety compliance documents, and architecture models. Practitioners aim to reduce irrelevant documentation, but lack guidance on which artifacts are needed and how they should be managed. This paper presents artifacts, challenges, guidelines, and practices for the continuous management of systems engineering artifacts in the automotive domain, based on a theoretical and empirical understanding of the topic. In collaboration with 53 practitioners from six automotive companies, we conducted a design-science study involving interviews, a questionnaire, focus groups, and practical data analysis of a systems engineering tool. The guidelines suggest distinguishing between artifacts that are shared among different actors in a company (boundary objects) and those that are used within a team (locally relevant artifacts). We propose an analysis approach to identify boundary objects and three practices to manage systems engineering artifacts in industry.

    User-Developer Communication in Large-Scale IT Projects

    User participation and involvement in software development has been studied for a long time and is considered essential for a successful software system. The positive effects of involving users in software development include improving quality in light of information about precise requirements, avoiding unnecessarily expensive features through enhanced alignment between developers and users, creating a positive attitude toward the system among users, and enabling effective use of the system. However, large-scale IT (LSI) projects that use traditional development methods tend to involve users only at the beginning of the development process (i.e., in the specification phase) and at the end (i.e., in the verification and validation phases), or not at all. Even if developers involve users at the beginning and the end, important decisions that affect users are made in the phases in between (i.e., design and implementation), and these are rarely communicated to the users. This lack of communication between users and developers in the design and implementation phases leaves users feeling that they are not integrated into the project, with little motivation to participate, and unable to see their requirements manifested in the resulting system. Therefore, it is important to study how user-developer communication (UDC) in the design and implementation phases can be enhanced in LSI projects in order to increase system success. The thesis follows the technical action research (TAR) approach with the four phases of problem investigation, treatment design, design validation, and implementation evaluation. In the problem investigation phase, we conducted a systematic mapping study and assessed the state of UDC practice with experts. In the treatment design phase, we designed the UDC–LSI method with experts, and we validated its design with experts in the design validation phase.
Finally, in the implementation evaluation phase, we evaluated the implementation of the method using a case study. This thesis first presents a meta-analysis of evidence of the effects of UPI on system success in general and explores the methods in the literature that aim to increase UPI in software development. Second, we investigate the state of UDC practice with experts, analyzing current practices and obstacles of UDC in LSI projects. Third, we propose the UDC–LSI method, which supports the enhancement of UDC in LSI projects, and present a descriptive classification containing user-relevant decisions (and, therefore, trigger points) that can be used with our method to start UDC. We also show the validity of the method through an assessment by experts, who see potential in the UDC–LSI method. Fourth, we demonstrate the results of a retrospective validation of the method in the real-life context of a large-scale IT project. The evaluation showed that the method is feasible to implement, has a positive effect on system success, and is efficient to implement from the perspective of project participants. Furthermore, project participants consider the UDC–LSI method to be usable and are likely to use it in future projects.

    Understanding citizen science and environmental monitoring: final report on behalf of UK Environmental Observation Framework

    Citizen science can broadly be defined as the involvement of volunteers in science. Over the past decade there has been a rapid increase in the number of citizen science initiatives. The breadth of environmentally based citizen science is immense. Citizen scientists have surveyed for and monitored a broad range of taxa, and have also contributed data on weather and habitats, reflecting an increase in engagement with a diverse range of observational science. Citizen science has taken many varied approaches, from citizen-led (co-created) projects with local community groups to, more commonly, scientist-led mass participation initiatives that are open to all sectors of society. Citizen science provides an indispensable means of combining environmental research with environmental education and wildlife recording. Here we provide a synthesis of extant citizen science projects using a novel cross-cutting approach to objectively assess understanding of citizen science and environmental monitoring, including: 1. a brief overview of knowledge on the motivations of volunteers; 2. a semi-systematic review of environmental citizen science projects in order to understand the variety of extant citizen science projects; 3. collation of detailed case studies on a selection of projects to complement the semi-systematic review; 4. structured interviews with users of citizen science and environmental monitoring data, focussing on policy, in order to more fully understand how citizen science can fit into policy needs; and 5. a review of technology in citizen science and an exploration of future opportunities.

    Simple identification tools in FishBase

    Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters such as fin ray meristics. Soon pictures and drawings were added as a further help, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy.
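The computerized dichotomous key mentioned above can be sketched as a small tree of yes/no couplets that a user walks until reaching a species. This is a minimal illustration of the data structure only; the characters and species names below are invented placeholders, not FishBase content.

```python
# Minimal sketch of a computerized dichotomous key.
# The couplets and species names are invented examples, not FishBase data.

from dataclasses import dataclass


@dataclass
class Couplet:
    question: str          # character the user evaluates (e.g., fin ray count)
    yes: "Couplet | str"   # next couplet, or a species name (a leaf)
    no: "Couplet | str"


# A toy key with two couplets.
KEY = Couplet(
    question="Dorsal fin with more than 10 soft rays?",
    yes=Couplet(
        question="Body laterally compressed?",
        yes="Species A",
        no="Species B",
    ),
    no="Species C",
)


def identify(key, answers):
    """Walk the key, following a sequence of yes/no answers until a leaf."""
    node = key
    for answer in answers:
        if isinstance(node, str):
            break
        node = node.yes if answer else node.no
    return node


print(identify(KEY, [True, False]))  # follows 'yes' then 'no' -> Species B
```

In a real system, the "restrict by country, area, or taxonomic group" feature described above would filter the candidate species list before or during the key traversal.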

    Accounts from developers of generic health state utility instruments explain why they produce different QALYs: a qualitative study

    Purpose and setting: Despite the label generic health state utility instruments (HSUIs), empirical evidence shows that different HSUIs generate different estimates of Health-Related Quality of Life (HRQoL) in the same person. Once an HSUI is used to generate a QALY, the difference between HSUIs is often ignored, and decision-makers act as if 'a QALY is a QALY is a QALY'. Complementing evidence that different generic HSUIs produce different empirical values, this study addresses an important gap by exploring how HSUIs differ and the processes that produced these differences. Fifteen developers of six generic HSUIs used for estimating the quality-of-life component of QALYs: the Quality of Well-Being (QWB) scale; the 15 Dimension instrument (15D); the Health Utilities Index (HUI); the EuroQol EQ-5D; the Short Form-6 Dimension (SF-6D); and the Assessment of Quality of Life (AQoL), were interviewed in 2012-2013. Principal findings: We identified key factors involved in shaping each instrument, and the rationale for similarities and differences across measures. While HSUIs have a common purpose, they are distinctly discrete constructs. Developers recalled complex developmental processes, grounded in unique histories, and these backgrounds help to explain the different pathways taken at key decision points during HSUI development. The bases of the HSUIs were commonly not conceptually equivalent: differently valued concepts and goals drove instrument design and development, according to each HSUI's defined purpose. Developers drew from different sources of knowledge to develop their measures, depending on their conceptualisation of HRQoL. Major conclusions/contribution to knowledge: We generated and analysed first-hand accounts of the development of the HSUIs to provide insight, beyond face value, into how and why such instruments differ. The findings enhance our understanding of why the six instruments developed the way they did, from the perspective of key developers of those instruments.
Importantly, we provide additional, original explanation for why a QALY is not a QALY is not a QALY.
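The closing point, that a QALY is not a QALY is not a QALY, can be illustrated with a toy calculation: if two instruments assign different utility weights to the same health state, they yield different QALY totals over the same duration. The weights below are invented for illustration and do not come from any published HSUI tariff.

```python
# Illustrative only: the utility weights below are invented, not values from
# any real HSUI value set. QALYs accrue as utility weight x time in the state.

def qalys(utility_by_year):
    """Sum utility weights over years (1.0 = full health, 0.0 = dead)."""
    return sum(utility_by_year)


# The same person, in the same health state for 5 years, as rated by two
# hypothetical instruments that conceptualise HRQoL differently:
weight_instrument_a = 0.71   # e.g. an instrument emphasising function
weight_instrument_b = 0.58   # e.g. an instrument emphasising symptoms

years = 5
qaly_a = qalys([weight_instrument_a] * years)   # 3.55 QALYs
qaly_b = qalys([weight_instrument_b] * years)   # 2.90 QALYs

# One person, one health trajectory, two different QALY totals.
print(round(qaly_a, 2), round(qaly_b, 2))
```

The gap of 0.65 QALYs here is an artifact of the instruments' differing constructs, which is precisely the kind of divergence the study's developer accounts help to explain.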

    Using Ontologies for the Design of Data Warehouses

    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it a failure-prone task. Based on our experience, we have identified a set of situations faced in real-world projects in which we believe the use of ontologies would improve several aspects of data warehouse design. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefit of using ontologies to overcome them. This work is a starting point for discussing the convenience of using ontologies in data warehouse design.

    An Empirical Study on Decision making for Quality Requirements

    [Context] Quality requirements are important for product success, yet they are often handled poorly. Problems with scope decisions lead to delayed handling and an unbalanced scope. [Objective] This study characterizes the scope decision process to understand the influencing factors and properties affecting scope decisions for quality requirements. [Method] We studied one company's scope decision process over a period of five years. We analyzed the decision artifacts and interviewed experienced engineers involved in the scope decision process. [Results] Features addressing quality aspects explicitly are a minor part (4.41%) of all features handled. The phase of the product line seems to influence the prevalence and acceptance rate of quality features. Lastly, relying on external stakeholders and upfront analysis seems to lead to long lead times and an insufficient quality requirements scope. [Conclusions] There is a need to make quality more explicit in the scope decision process. We propose a scope decision process at a strategic level and a tactical level: the former to address long-term planning, and the latter to cater for a speedy process. Furthermore, we believe it is key to balance stakeholder input with feedback from usage and the market in a more direct way than through a long plan-driven process.

    Towards a theoretical foundation of IT governance: the COBIT 5 case

    Abstract: COBIT (Control Objectives for Information and Related Technologies), as an IT governance framework, is well known in IS practitioner communities. It would impair the virtues of COBIT to present it only as an IT governance framework: COBIT analyses the complete IS function and offers descriptive and normative support to manage, govern, and audit IT in organizations. Although the framework is well accepted in a broad range of IS communities, it was created by practitioners and therefore holds only a minor amount of theoretically supported claims. Thus criticism arises from the academic community. This work contains research focusing on the theoretical fundamentals of the ISACA framework COBIT 5, released in 2012. We performed a reverse engineering exercise and tried to elicit as many propositions as possible from COBIT 5, treating it as empirical material. We followed a qualitative research method to develop inductively derived theoretical statements. However, our approach differs from the original work on grounded theory by Glaser and Strauss (1967), since we started from a general idea of where to begin and we made conceptual descriptions of the empirical statements. Our data was thus only restructured to reveal theoretical findings. We looked at three candidate theories: 1) Stakeholder Theory (SHT), 2) Principal Agent Theory (PAT), and 3) the Technology Acceptance Model (TAM). These three theories were categorized, and from each theory several testable propositions were deduced. We considered the five COBIT 5 principles, five processes (APO13, BAI06, DSS05, MEA03 and EDM03) mainly situated in the area of IS security, and four IT-related goals (IT01, IT07, IT10 and IT16). The choice of processes and IT-related goals is based on experienced knowledge of COBIT as well as of the theories. We constructed a mapping table to find matching patterns. The mapping was done separately by several individuals to increase the internal validity.
Our findings indicate that COBIT 5 does hold theoretically supported claims. The lower-level theory types, such as PAT and SHT, contribute the most. The presence and contribution of a theory is constituted significantly more by the IT-related goals than by the processes. We also make some suggestions for further research. First of all, the work has to be extended to all COBIT 5 processes and IT-related goals; this effort is currently ongoing. Next, we ponder which other theories could be considered as candidates for this theoretical reverse engineering labour; during our work we already listed some theories with good potential. The pattern matching process we used can also be refined by bringing in other assessment models. Finally, an alternative and more theoretical framework could be designed by using design science research methods, starting with the most relevant IS theories. That could lead to a new IT artefact that could eventually be reconciled with COBIT 5.
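The mapping-table exercise described above can be sketched as a simple boolean match between candidate theories and COBIT 5 elements, from which per-theory contribution can be summarised. All entries below are invented placeholders to show the mechanics, not the study's actual mapping results.

```python
# Sketch of the mapping-table idea: which candidate theories' propositions
# match which COBIT 5 elements. The boolean entries are invented placeholders,
# not the study's results.

mapping = {
    # (theory, COBIT element): did a reviewer judge the proposition to match?
    ("SHT", "EDM03"): True,
    ("SHT", "IT07"): True,
    ("PAT", "APO13"): True,
    ("PAT", "IT01"): False,
    ("TAM", "BAI06"): False,
    ("TAM", "IT16"): True,
}


def coverage(theory):
    """Fraction of examined COBIT elements matched by a theory's propositions."""
    hits = [match for (t, _), match in mapping.items() if t == theory]
    return sum(hits) / len(hits)


for theory in ("SHT", "PAT", "TAM"):
    print(theory, coverage(theory))
```

In the study, such a table was filled in independently by several individuals; comparing their tables is what supports the internal-validity claim, and refining the match criterion is the "other assessment models" extension suggested above.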