
    Feature Performance Metrics for Software as a Service Offering: The Case of HubSpot

    This paper presents an industry case study on measuring the performance of software-as-a-service (SaaS) product features in order to prioritize development efforts. The case is based on empirical data from HubSpot and is generalized into a framework applicable to other companies with large-scale software offerings and distributed development. First, the relative value of each feature is measured by its impact on customer acquisition and retention. Second, feature value is compared to feature cost, specifically development investment, to determine feature profitability. Third, feature sensitivity is measured, defined as the effect a fixed amount of development investment has on feature value within a given time. Fourth, features are segmented by their location relative to the value-to-cost trend line into most valuable features, outperforming features, underperforming features, and fledglings. Finally, the results are analyzed to determine future action: maintenance and bug fixes are prioritized by feature value; product enhancements are prioritized by sensitivity, with special attention to fledglings; and underperforming features are either put on "life support", terminated, or overhauled.
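
    The segmentation step described in this abstract lends itself to a small illustration. The following is a minimal Python sketch of that idea; the field names, the least-squares trend line, and the thresholds are assumptions made for this sketch, not HubSpot's published method.

```python
# Minimal sketch of the feature segmentation described above.
# Field names, the least-squares trend line, and the thresholds
# are illustrative assumptions, not HubSpot's published method.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    value: float  # impact on customer acquisition and retention
    cost: float   # cumulative development investment

def segment(features: list[Feature]) -> dict[str, str]:
    """Label features by their position relative to a value-to-cost trend line."""
    n = len(features)
    mean_c = sum(f.cost for f in features) / n
    mean_v = sum(f.value for f in features) / n
    # Least-squares fit of value against cost (the trend line).
    var = sum((f.cost - mean_c) ** 2 for f in features)
    cov = sum((f.cost - mean_c) * (f.value - mean_v) for f in features)
    slope = cov / var if var else 0.0
    intercept = mean_v - slope * mean_c

    # Treat roughly the top fifth by value as "most valuable".
    top = {f.name for f in sorted(features, key=lambda f: f.value, reverse=True)[: max(1, n // 5)]}
    labels = {}
    for f in features:
        expected = slope * f.cost + intercept
        if f.name in top:
            labels[f.name] = "most valuable"    # prioritize maintenance and bug fixes here
        elif f.cost < mean_c / 2 and f.value < mean_v / 2:
            labels[f.name] = "fledgling"        # little invested yet; watch sensitivity
        elif f.value >= expected:
            labels[f.name] = "outperforming"
        else:
            labels[f.name] = "underperforming"  # life support, terminate, or overhaul
    return labels

print(segment([
    Feature("email", value=9.0, cost=4.0),
    Feature("crm", value=6.0, cost=6.0),
    Feature("chat", value=1.0, cost=0.5),
]))
# {'email': 'most valuable', 'crm': 'underperforming', 'chat': 'fledgling'}
```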

    Building an Expert System for Evaluation of Commercial Cloud Services

    Commercial Cloud services have been increasingly supplied to customers in industry. To facilitate customers' decision making, such as cost-benefit analysis or Cloud provider selection, evaluation of those Cloud services is becoming more and more crucial. However, compared with the evaluation of traditional computing systems, more challenges inevitably appear when evaluating rapidly changing and user-uncontrollable commercial Cloud services. This paper proposes an expert system for Cloud evaluation that addresses emerging evaluation challenges in the context of Cloud Computing. Built on the knowledge and data accumulated by exploring the existing evaluation work, this expert system has been conceptually validated to be able to give suggestions and guidelines for implementing new evaluation experiments. As such, users can conveniently obtain evaluation experience by using this expert system, which essentially makes existing efforts in Cloud services evaluation reusable and sustainable. (8 pages; Proceedings of the 2012 International Conference on Cloud and Service Computing (CSC 2012), pp. 168-175, Shanghai, China, November 22-24, 2012)
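
    To make the "suggestions and guidelines" idea concrete, here is a minimal rule-based sketch in Python. The knowledge-base entries and the matching scheme are illustrative assumptions only; the paper's actual system is built from its authors' accumulated evaluation knowledge and data.

```python
# Minimal rule-based sketch of an evaluation-advice lookup.
# The knowledge-base entries and matching scheme are illustrative
# assumptions, not the expert system described in the paper.
KNOWLEDGE_BASE = [
    {"property": "elasticity", "service": "IaaS",
     "advice": "Apply a stepwise load pattern and measure the provisioning lag of new instances."},
    {"property": "cost", "service": "any",
     "advice": "Combine published per-hour prices with measured throughput into cost per unit of work."},
    {"property": "performance", "service": "any",
     "advice": "Repeat benchmarks at different times of day and report variance, not only means."},
]

def suggest(prop: str, service: str) -> list[str]:
    """Return stored guidelines that match the given evaluation scenario."""
    return [rule["advice"] for rule in KNOWLEDGE_BASE
            if rule["property"] == prop and rule["service"] in (service, "any")]

print(suggest("elasticity", "IaaS"))
```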

    On Evaluating Commercial Cloud Services: A Systematic Review

    Background: Cloud Computing is booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is tremendous confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve this chaos, this work synthesizes the existing evaluation implementations to outline the state of the practice and identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The data collected from these studies represent the current practical landscape of Cloud services evaluation and can in turn be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some findings of this SLR identify research gaps (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while others suggest trends in applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    Revealing the Vicious Circle of Disengaged User Acceptance: A SaaS Provider's Perspective

    User acceptance tests (UATs) are an integral part of many software engineering methodologies. In this paper, we examine the influence of UATs on the relationship between users and Software-as-a-Service (SaaS) applications, which are delivered continuously rather than rolled out in a one-off sign-off process. Based on an exploratory qualitative field study at a multinational SaaS provider in Denmark, we show that UATs often address the wrong problem, in that positive user acceptance may actually indicate a negative user experience. Hence, SaaS providers should be careful not to rest on what we term disengaged user acceptance. Instead, we outline an approach that purposefully queries users for ambivalent emotions that evoke constructive criticism, in order to facilitate a discourse that favors the continuous innovation of a SaaS system. We discuss the theoretical and practical implications of our approach for the study of user engagement in testing SaaS applications.