12 research outputs found

    INVESTIGATING FEATURE CYCLES OF INFORMATION SYSTEM PRODUCTS

    The theory of attractive quality describes how product features cause user satisfaction in fundamentally different ways. Some feature types can only cause satisfaction when implemented into the product but no dissatisfaction when left out; others cause only dissatisfaction when left out but no satisfaction when implemented; and still others cause both satisfaction on implementation and dissatisfaction on non-implementation. The theory also suggests that feature types are not static: the same feature can change type over time. Two multi-year studies were conducted to empirically investigate whether this change in feature type can be observed for information systems (IS) product features. The results of both studies show that IS product features do transition from one type to another over time. These findings have implications for product feature selection and can help IS product managers make strategic product feature upgrade decisions. This article describes the design and results of one of the studies.
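    The three feature types described above correspond to the attractive, must-be, and one-dimensional categories of the Kano model of attractive quality. As an illustration (not taken from the paper), the standard Kano evaluation table maps a respondent's paired answers to a "functional" question (feature present) and a "dysfunctional" question (feature absent) onto a feature type:

```python
# Standard Kano evaluation table. A = attractive (satisfies if present,
# no dissatisfaction if absent), M = must-be (dissatisfies if absent,
# no extra satisfaction if present), O = one-dimensional (both),
# I = indifferent, R = reverse, Q = questionable. The five-option
# answer scale is the common survey wording, assumed for illustration.

KANO_TABLE = {
    "like":      {"like": "Q", "must-be": "A", "neutral": "A", "live-with": "A", "dislike": "O"},
    "must-be":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "live-with": {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "must-be": "R", "neutral": "R", "live-with": "R", "dislike": "Q"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Classify one survey response pair into a Kano feature type."""
    return KANO_TABLE[functional][dysfunctional]

# A feature users would like but can live without -> attractive.
print(classify("like", "live-with"))   # A
# A feature users expect and would dislike losing -> must-be.
print(classify("must-be", "dislike"))  # M
```

    A feature-type transition over time, as studied in the paper, would appear as the same feature's survey responses shifting cell in this table between survey rounds (e.g. from A to O to M).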

    An Experiment For Estimating User Satisfaction

    User satisfaction has been found to be an adequate proxy for the contribution of information systems (IS) to organizational performance (Gelderman, 1998). However, although IS projects plan and estimate cost and schedule, quality, the third leg of the iron triangle, defined as the degree to which a system, component, or process meets specified requirements and customer/user needs and expectations (IEEE Standard 610.12-1990), is seldom estimated. Assessing user satisfaction after the IS product has been developed has limited value: the situation at that stage is often non-remediable, resulting in wasted effort and loss of scarce resources. This paper investigates the feasibility of estimating user satisfaction before IS development has commenced. The method developed in the study was tested empirically and can be used in practice to estimate user satisfaction for a given requirement set and to obtain revised estimates when requirement subsets are included in or excluded from it.
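    The abstract does not detail the estimation method, but the idea of re-estimating satisfaction as requirement subsets are included or excluded can be sketched with a deliberately simple additive model. The per-requirement weights and the mean aggregation rule below are illustrative assumptions, not the method developed in the paper:

```python
# Illustrative only: pre-development satisfaction estimation as the
# mean of assumed per-requirement satisfaction weights (e.g. elicited
# from a user survey before development starts).

def estimate_satisfaction(weights, included):
    """Mean satisfaction weight over the included requirement ids."""
    chosen = [weights[r] for r in included]
    return sum(chosen) / len(chosen)

weights = {"R1": 0.9, "R2": 0.7, "R3": 0.2, "R4": 0.4}

full = estimate_satisfaction(weights, ["R1", "R2", "R3", "R4"])
trimmed = estimate_satisfaction(weights, ["R1", "R2"])  # exclude low-value R3, R4

print(round(full, 2), round(trimmed, 2))  # 0.55 0.8
```

    The point of such an estimator is exactly what the abstract describes: the estimate can be recomputed for any candidate requirement set before any code is written.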

    OF THE USER, BY THE USER, FOR THE USER: ENGAGING USERS IN INFORMATION SYSTEMS PRODUCT EVOLUTION

    Collectively, users constitute a source of massive amounts of product innovation (von Hippel, Ogawa and de Jong, 2012). When users are viewed merely as recipients of innovation, the firm has no access to the user knowledge and experience developed through product use (Sawhney, Verona and Prandelli, 2005). Additionally, it has been suggested that product evolution should be innovative from the users’ frame of mind, not the developers’ (Fellows and Hooks, 1998). New product features that do not resonate with users create wasted development effort, delays in time-to-market, and increased complexity and operational costs for the product. With this context in view, this empirical study assesses existing promising methods for selecting new product features through the involvement of users. The results show that the Kano survey method demonstrated potential in identifying not only the product features that add value for users but also those that do not.

    Lean Software Development: Evaluating Techniques for Parsimonious Feature Selection of Evolving Information Systems Products

    Lean software development is a product development paradigm focused on creating value for the customer and eliminating waste from all phases of the development life cycle. Applying lean principles, empirical studies were conducted to identify and assess methods that parsimoniously select features from a given set of user feature requests. The results show that the Kano survey method has potential: it demonstrated efficacy not only in identifying the feature subset, from a given set of feature requests, that maximizes value to users, but also in eliminating waste by identifying the subset of features that does not provide significant value when implemented into the software product. The design and results of one study are elaborated in this article. The findings have useful implications for practice and open up new avenues of research for evolving market-driven software products.
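    The parsimonious selection step can be sketched as a simple partition of Kano survey outcomes: features classified as attractive (A), one-dimensional (O) or must-be (M) are kept, while indifferent (I) and reverse (R) features are treated as waste. The feature names and categories below are illustrative, not data from the study:

```python
# Sketch of parsimonious feature selection from Kano survey results.
# Keep value-adding categories; flag the rest as waste (lean sense:
# effort that would not add value for users).

VALUE_ADDING = {"A", "O", "M"}  # attractive, one-dimensional, must-be

def select_features(kano_results):
    keep = {f for f, cat in kano_results.items() if cat in VALUE_ADDING}
    waste = set(kano_results) - keep
    return keep, waste

results = {"dark mode": "A", "export to CSV": "O",
           "login": "M", "animated cursor": "I"}

keep, waste = select_features(results)
print(sorted(keep))   # ['dark mode', 'export to CSV', 'login']
print(sorted(waste))  # ['animated cursor']
```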

    Can we Take User Responses at Face Value? Exploring Users’ “Self-stated” and “Derived” Importance of Utilitarian versus Hedonic Software Features

    Empirical studies in the product development literature have shown that users’ self-reported importance of product attributes differs from the derived importance obtained through the attributes’ correlation with an external criterion such as user satisfaction. However, no study has examined this phenomenon in the context of software products. This investigation is important because present-day software requirement-prioritization techniques are based on capturing users’ self-reported importance of new software product features. I therefore develop a method in this study to capture the derived user importance of new features. The findings show that the implicitly derived importance of software attributes differs from the importance rankings assigned to them by requirement-prioritization techniques. Further, I found that the implicitly derived user importance identified the determinants of user satisfaction more accurately than the prioritization techniques based on self-stated user importance. I discuss the implications of this promising new approach for practice and future research in requirements prioritization.

    Impact estimation: IT priority decisions

    Given resource constraints, prioritization is a fundamental process within systems engineering for deciding what to implement. However, there is little guidance about this process, and existing IT prioritization methods have several problems, including failing to adequately cater for stakeholder value. In response, this research proposes an extension to an existing prioritization method, Impact Estimation (IE), to create Value Impact Estimation (VIE). VIE extends IE to cater for multiple stakeholder viewpoints and to move towards better capture of explicit stakeholder value. The use of metrics gives VIE a means of expressing stakeholder value that relates directly to real-world data and so is informative to stakeholders and decision makers. Derived from prioritization factors found in the literature, stakeholder value is developed into a multi-dimensional, composite concept associated with other fundamental system concepts: objectives, requirements, designs, increment plans, increment deliverables and system contexts. VIE supports the prioritization process by showing where the stakeholder value resides for the proposed system changes. The method was validated by applying it to three live projects, which served as case studies for this research, and its use was seen as very beneficial. Based on the three case studies, the method produces two major benefits: the calculation of stakeholder value to cost ratios (a form of ROI), and the system understanding gained through creating the VIE table.
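    The value-to-cost ratio at the heart of a VIE table can be sketched as follows. The objectives, viewpoint scores and costs are invented for illustration and are not from the case studies:

```python
# Sketch of a VIE-style table: each proposed design is scored for its
# estimated impact on stakeholder objectives, per stakeholder viewpoint;
# impacts are summed and divided by cost to give a stakeholder value to
# cost ratio (a form of ROI).

def value_to_cost(impacts, cost):
    """Total estimated impact across objectives and viewpoints, per unit cost."""
    total = sum(sum(viewpoints) for viewpoints in impacts.values())
    return total / cost

designs = {
    # design: ({objective: [impact per stakeholder viewpoint]}, cost)
    "caching layer": ({"response time": [60, 40], "uptime": [10, 5]}, 20),
    "UI redesign":   ({"response time": [5, 10],  "uptime": [0, 0]}, 30),
}

ratios = {d: value_to_cost(*spec) for d, spec in designs.items()}
best = max(ratios, key=ratios.get)
print(best, round(ratios[best], 2))  # caching layer 5.75
```

    Listing every design's ratio side by side is what produces the second benefit the abstract notes: the system understanding gained from building the table itself.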

    Prioritisation of requests, bugs and enhancements pertaining to apps for remedial actions. Towards solving the problem of which app concerns to address initially for app developers

    Useful app reviews contain information about the bugs reported by an app’s end-users along with requests or enhancements (i.e., suggestions for improvement) pertaining to the app. App developers expend exhaustive manual effort identifying the useful reviews in a vast pool of reviews and converting them into actionable knowledge by means of prioritisation. By doing so, app developers can resolve critical bugs and simultaneously address prominent requests or enhancements within the short intervals of apps’ maintenance and evolution cycles. However, manual identification and prioritisation of useful reviews have limitations, most commonly: the high cognitive load required for manual analysis, the lack of scalability when limited human resources must process voluminous reviews, extensive time requirements, and the error-proneness of manual effort. While prior work in the app domain has proposed prioritisation approaches for converting an app’s reviews into actionable knowledge, these studies have limitations and lack benchmarking of prioritisation performance. Thus, the problem of prioritising numerous useful reviews persists. In this study, we first conducted a systematic mapping study of the requirements-prioritisation domain to explore the existing knowledge on prioritisation and to seek inspiration from eminent empirical studies for solving this problem. The findings of the systematic mapping study inspired us to develop automated approaches for filtering useful reviews and then facilitating their subsequent prioritisation. To filter useful reviews, this work developed six variants of the Multinomial Naïve Bayes method.
    Next, to prioritise the order in which useful reviews should be addressed, we proposed a group-based prioritisation method that first classified the useful reviews into specific groups using an automatically generated taxonomy and then prioritised these reviews using a multi-criteria heuristic function. Subsequently, we developed an individual prioritisation method that directly prioritised the filtered useful reviews using the same multi-criteria heuristic function. The systematic mapping study not only provided the inspiration for the automated filtering and prioritisation approaches but also revealed crucial dimensions, such as accuracy and time, that could be used to benchmark the performance of a prioritisation method. For the automated filtering approach, we observed that the performance of the Multinomial Naïve Bayes variants varied with their algorithmic structure and the nature of the labelled reviews (i.e., balanced or imbalanced) available for training. The automated taxonomy-generation approach for classifying useful reviews into specific groups showed a substantial match with the manual taxonomy generated from domain knowledge. Finally, we validated the performance of the group-based and individual prioritisation methods, finding that the individual prioritisation method was superior when outcomes were assessed on the accuracy and time dimensions. In addition, we performed a full-scale evaluation of the individual prioritisation method, which showed promising results. Given these outcomes, we anticipate that our individual prioritisation method could assist app developers in filtering and prioritising numerous useful reviews to support app maintenance and evolution cycles.
    Beyond app reviews, the utility of our proposed prioritisation solution can be evaluated on software repositories that track bugs and requests, such as Jira and GitHub.
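    The multi-criteria heuristic step can be sketched as a weighted scoring function over normalised criteria. The criteria (severity, votes, recency) and their weights below are illustrative assumptions, not the function proposed in the thesis:

```python
# Sketch of multi-criteria heuristic prioritisation of useful reviews.
# Each review is scored as a weighted sum of criteria in [0, 1];
# reviews are then addressed in descending score order.

WEIGHTS = {"severity": 0.5, "votes": 0.3, "recency": 0.2}  # assumed weights

def priority(review):
    """Weighted sum of the review's criterion values."""
    return sum(WEIGHTS[c] * review[c] for c in WEIGHTS)

reviews = [  # illustrative, pre-filtered "useful" reviews
    {"id": "r1", "severity": 0.9, "votes": 0.4, "recency": 0.8},
    {"id": "r2", "severity": 0.3, "votes": 0.9, "recency": 0.5},
    {"id": "r3", "severity": 0.7, "votes": 0.7, "recency": 0.9},
]

ordered = sorted(reviews, key=priority, reverse=True)
print([r["id"] for r in ordered])  # ['r3', 'r1', 'r2']
```

    In the thesis's pipeline this scoring is applied either per group (after taxonomy-based classification) or directly to each filtered review; the same heuristic function serves both methods.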