
    A Generalized Norton–Bass Model for Multigeneration Diffusion

    The Norton–Bass (NB) model is often credited as the pioneering multigeneration diffusion model in marketing. However, as acknowledged by the authors, when counting the number of adopters who substitute an old product generation with a new generation, the NB model does not differentiate those who have already adopted the old generation from those who have not. In this study, we develop a generalized Norton–Bass (GNB) model that separates these two types of substitution. The GNB model provides closed-form expressions for both the number of units in use and the adoption rate, and offers greater flexibility in parameter estimation, forecasting, and revenue projection. An appealing aspect of the GNB model is that it uses exactly the same set of parameters as the NB model and is mathematically consistent with the latter. Empirical results show that the GNB model delivers better overall performance than previous models in terms of both model fit and forecasting performance. The analyses also show that differentiating leapfrogging and switching adoptions based on the GNB model can help gain additional insights into the process of multigeneration diffusion. Furthermore, we demonstrate that the GNB model can incorporate the effect of marketing-mix variables on the speed of diffusion for all product generations.
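
    For readers who want the mechanics, the Python sketch below implements the original two-generation Norton–Bass formulation that the GNB model generalizes (with a common p and q across generations); parameter values are illustrative assumptions, not estimates from the paper.

import numpy as np

def bass_F(t, p, q):
    """Cumulative Bass adoption fraction F(t); zero before launch (t <= 0)."""
    t = np.asarray(t, dtype=float)
    F = (1.0 - np.exp(-(p + q) * t)) / (1.0 + (q / p) * np.exp(-(p + q) * t))
    return np.where(t > 0.0, F, 0.0)

def norton_bass_two_gen(t, p, q, m1, m2, tau2):
    """Units in use for two generations under the original Norton-Bass model.

    Generation 2 is introduced at time tau2 and progressively substitutes
    for generation 1 while also attracting its own incremental market m2.
    """
    F1 = bass_F(t, p, q)
    F2 = bass_F(t - tau2, p, q)
    in_use_1 = m1 * F1 * (1.0 - F2)        # gen-1 adopters who have not switched
    in_use_2 = F2 * (m2 + m1 * F1)         # gen-2 units: new market plus substitution
    return in_use_1, in_use_2

# Illustrative parameters only (not estimates from the paper).
t = np.linspace(0.0, 20.0, 5)
g1, g2 = norton_bass_two_gen(t, p=0.03, q=0.38, m1=100.0, m2=60.0, tau2=8.0)
for ti, a, b in zip(t, g1, g2):
    print(f"t = {ti:5.1f}   gen 1 in use = {a:7.2f}   gen 2 in use = {b:7.2f}")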

    How to Deal with Liars? Designing Intelligent Rule-Based Expert Systems to Increase Accuracy or Reduce Cost

    Input distortion is a common problem faced by expert systems, particularly those deployed with a Web interface. In this study, we develop novel methods to distinguish liars from truth-tellers, and we redesign rule-based expert systems to address this problem. The four proposed methods are termed split tree (ST), consolidated tree (CT), value-based split tree (VST), and value-based consolidated tree (VCT). Among them, ST and CT aim to increase an expert system’s accuracy of recommendations, while VST and VCT attempt to reduce the misclassification cost resulting from incorrect recommendations. We observe that ST and VST are less efficient than CT and VCT, in that ST and VST always require selected attribute values to be verified, whereas CT and VCT do not require value verification under certain input scenarios. We conduct experiments to compare the performance of the four proposed methods with two existing methods: the traditional true tree (TT) method, which ignores input distortion, and the knowledge modification (KM) method proposed in prior research. The results show that CT and ST consistently rank first and second, respectively, in maximizing recommendation accuracy, and VCT and VST always lead to the lowest and second-lowest misclassification costs. Therefore, CT and VCT should be the methods of choice in dealing with users’ lying behaviors. Furthermore, we find that KM is outperformed not only by the four proposed methods but sometimes even by the TT method. This result further confirms the necessity of differentiating liars from truth-tellers when both types of users exist in the population.
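
    The ST/CT/VST/VCT redesigns are specific to this work, but the accuracy-versus-cost distinction they exploit can be illustrated generically. The Python sketch below, with made-up probabilities and costs, shows how the recommendation that maximizes the probability of being correct can differ from the one that minimizes expected misclassification cost.

import numpy as np

# Posterior probability of each true condition, given (possibly distorted) user inputs.
# Values are purely illustrative.
p_true = np.array([0.45, 0.35, 0.20])          # conditions A, B, C

# cost[i, j] = cost of recommending i when the true condition is j (0 on the diagonal).
cost = np.array([
    [0.0,  5.0, 50.0],
    [2.0,  0.0, 10.0],
    [8.0,  3.0,  0.0],
])

most_accurate = int(np.argmax(p_true))                 # maximizes hit probability
expected_cost = cost @ p_true                          # expected cost of each recommendation
most_economical = int(np.argmin(expected_cost))        # minimizes expected misclassification cost

print("most accurate recommendation  :", most_accurate, " P(correct) =", p_true[most_accurate])
print("most economical recommendation:", most_economical, " E[cost] =", expected_cost[most_economical])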

    Reconciling Continuous Attribute Values from Multiple Data Sources

    Because of the heterogeneous nature of different data sources, data integration is often one of the most challenging tasks in managing modern information systems. The challenges exist at three different levels: schema heterogeneity, entity heterogeneity, and data heterogeneity. The existing literature has largely focused on schema heterogeneity and entity heterogeneity, and the very limited work on data heterogeneity either avoids attribute value conflicts or resolves them in an ad hoc manner. The focus of this research is on data heterogeneity. We propose a decision-theoretic framework that enables attribute value conflicts to be resolved in a cost-efficient manner. The framework takes into consideration the consequences of incorrect data values and selects the value that minimizes the total expected error costs for all application problems. Numerical results show that significant savings can be achieved by adopting the proposed framework instead of other ad hoc approaches.
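
    As a generic illustration of the decision-theoretic idea (not the paper's full framework), the Python sketch below resolves a conflict among three source-reported values by choosing the candidate that minimizes expected error cost under an assumed, asymmetric cost function; all numbers are hypothetical.

import numpy as np

# Conflicting values reported for the same continuous attribute (e.g., annual income)
# by three sources, with assumed probabilities that each source is correct.
candidates = np.array([48_000.0, 52_000.0, 61_000.0])
p_correct  = np.array([0.5, 0.3, 0.2])               # illustrative source reliabilities

def error_cost(chosen, true_value):
    """Asymmetric cost: overstating the value is assumed twice as costly as understating."""
    diff = chosen - true_value
    return 2.0 * diff if diff > 0 else -diff

# Expected cost of committing to each candidate value, assuming one of them is correct.
expected = [sum(p * error_cost(c, v) for p, v in zip(p_correct, candidates)) for c in candidates]

best = candidates[int(np.argmin(expected))]
print("expected error costs:", np.round(expected, 1))
print("value minimizing expected error cost:", best)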

    A Markov-Based Update Policy for Constantly Changing Database Systems

    In order to maximize the value of an organization's data assets, it is important to keep the data in its databases up-to-date. In the era of big data, however, constantly changing data sources make it a challenging task to assure data timeliness in enterprise systems. For instance, due to the high frequency of purchase transactions, purchase data stored in an enterprise resource planning system can easily become outdated, affecting the accuracy of inventory data and the quality of inventory replenishment decisions. Despite the importance of data timeliness, updating a database as soon as new data arrives is typically not optimal because of the high update cost. Therefore, a critical problem in this context is to determine the optimal update policy for database systems. In this study, we develop a Markov decision process model, solved via dynamic programming, to derive the optimal update policy that minimizes the sum of data staleness cost and update cost. Based on real-world enterprise data, we conduct experiments to evaluate the performance of the proposed update policy in relation to benchmark policies analyzed in the prior literature. The experimental results show that the proposed policy outperforms fixed-interval update policies and can lead to significant cost savings.
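
    The sketch below conveys the flavor of such a policy with a deliberately simplified Markov decision process, not the model from the paper: the state is the number of pending source changes, the actions are "wait" or "update", and value iteration yields a threshold-style rule. All costs and probabilities are assumed for illustration.

import numpy as np

# Simplified update-policy MDP (illustrative values only).
# State s = number of source changes not yet applied to the database, capped at MAX_S.
MAX_S       = 10      # cap on tracked pending changes
P_CHANGE    = 0.6     # assumed probability that a new change arrives in a period
UPDATE_COST = 5.0     # cost of refreshing the database
STALE_COST  = 1.0     # per-period cost of each pending (stale) change
GAMMA       = 0.95    # discount factor

def next_state_dist(s):
    """Distribution of next period's pending changes if the database is not refreshed."""
    s_up = min(s + 1, MAX_S)
    return {s: 1.0} if s_up == s else {s: 1.0 - P_CHANGE, s_up: P_CHANGE}

def q_values(s, V):
    """Expected discounted cost of 'wait' and 'update' in state s."""
    wait   = STALE_COST * s + GAMMA * sum(p * V[s2] for s2, p in next_state_dist(s).items())
    update = UPDATE_COST + GAMMA * sum(p * V[s2] for s2, p in next_state_dist(0).items())
    return wait, update

V = np.zeros(MAX_S + 1)
for _ in range(1000):                          # value iteration
    V_new = np.array([min(q_values(s, V)) for s in range(MAX_S + 1)])
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = {s: ("wait" if q_values(s, V)[0] <= q_values(s, V)[1] else "update")
          for s in range(MAX_S + 1)}
print("optimal action by number of pending changes:", policy)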

    The Economics of Public Beta Testing

    A growing number of software firms now rely on public beta testing to improve the quality of their products before commercial release. While the benefits resulting from improved software reliability are well recognized, some important market-related benefits have not been studied. Through word-of-mouth, public beta testers can accelerate the diffusion of a software product after its release. Additionally, because of network effects, public beta testers can increase users’ valuation of a product. In this study, we consider both reliability-related and market-related benefits, and develop models to determine the optimal number of public beta testers and the optimal duration of testing. Our analyses show that public beta testing can be profitable even if word-of-mouth and network effects are the only benefits. Furthermore, when both benefits are considered, there are significant “economies of scope”: the net profit increases at a faster rate when both word-of-mouth and network effects are significant than when only one benefit is present. Finally, our sensitivity analyses demonstrate that public beta testing remains highly valuable to software firms over a wide range of testing and market conditions. In particular, firms will realize greater profits when recruiting public beta testers who are interested in the software but unable to afford it.
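
    To illustrate just the reliability-related trade-off (not the paper's model, which also captures word-of-mouth and network effects), the Python sketch below balances an assumed per-tester cost against the expected cost of bugs that escape detection, under a simple independent-detection assumption; all numbers are hypothetical.

import numpy as np

# Stylized reliability trade-off: each latent bug is assumed to be detected before
# release with probability 1 - (1 - d)**n when n testers participate.
B               = 200        # assumed number of latent bugs entering public beta
d               = 0.01       # assumed chance that one tester exposes a given bug
COST_PER_BUG    = 1_000.0    # assumed cost of a bug that escapes into the release
COST_PER_TESTER = 30.0       # assumed recruiting/support cost per beta tester

def expected_total_cost(n):
    residual_bugs = B * (1.0 - d) ** n           # bugs expected to survive beta testing
    return residual_bugs * COST_PER_BUG + COST_PER_TESTER * n

candidates = np.arange(0, 2001, 50)
costs = np.array([expected_total_cost(n) for n in candidates])
best = int(candidates[np.argmin(costs)])
print(f"expected cost with no beta test: {expected_total_cost(0):12,.0f}")
print(f"best tester count on the grid  : {best}  (expected cost {expected_total_cost(best):,.0f})")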

    Free Software Offer and Software Diffusion: The Monopolist Case

    An interesting phenomenon often observed is the availability of free software. The benefits resulting from network externality have been discussed in the related literature. However, the effect of a free software offer on new software diffusion has not been formally analyzed. We show in this study that even if other benefits do not exist, a software firm can still benefit from giving away fully functional software in the initial period of the marketing process. This is due to the accelerated diffusion process and, consequently, the increased net present value (NPV) of future cash flows. The analysis is based on the well-known Bass diffusion model.
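
    A minimal Python sketch of the underlying intuition follows: free copies seeded at launch act as early adopters in a discrete-time Bass process, and the question is whether the discounting gain from faster diffusion outweighs the revenue given up. Parameter values are illustrative assumptions, not those of the paper.

# Discrete-time Bass diffusion with free copies seeded at launch (illustrative values only).
p, q   = 0.005, 0.20         # innovation / imitation coefficients
M      = 1_000_000           # market potential (copies)
price  = 30.0                # revenue per paid adopter
r      = 0.05                # per-period discount rate
T      = 60                  # planning horizon (periods)

def npv_of_paid_sales(free_copies):
    """NPV of paid adoptions when `free_copies` are given away in period 0."""
    cum = float(free_copies)                 # free users still generate word of mouth
    npv = 0.0
    for t in range(1, T + 1):
        new = (p + q * cum / M) * (M - cum)  # new adopters this period (Bass hazard)
        cum += new
        npv += price * new / (1.0 + r) ** t  # only post-launch adopters pay
    return npv

for g in (0, 5_000, 20_000, 50_000):
    print(f"give away {g:6d} copies: NPV of paid sales = {npv_of_paid_sales(g):13,.0f}")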

    Now or Never Revisited: An Analysis of Market Entry Timing for Successive Product Generation

    Determining the optimal market entry timing for successive technological innovations is a critical decision for firms. Pioneering studies dealing with this issue have focused on one-time sales (e.g., HDTVs) and concluded that a new product should be introduced to the market either now or never, or now or at maturity. However, these prior studies do not examine another common business practice, in which revenue is generated from continuous services (e.g., Office 365). In this research, we derive the optimal market entry timing under both one-time sale and continuous service, and check whether the prior findings remain valid under today’s diverse market landscape. We find that under one-time sale, the optimal entry timing is not limited to now, maturity, or never; it can also lie between now and maturity. More interestingly, our results show that the now-or-never rule holds only under a scenario not considered in the prior studies.

    Designing Intelligent Expert Systems to Cope with Liars

    To cope with the problem of input distortion by users of Web-based expert systems, we develop methods that distinguish liars from truth-tellers based on verifiable attributes, and we redesign the expert systems to control the impact of input distortion. The four methods we propose are termed split tree, consolidated tree, value-based split tree, and value-based consolidated tree. They improve the performance of expert systems by increasing accuracy or reducing misclassification cost. Numerical examples confirm that the most accurate recommendation is not always the most economical one. Recommendations based on minimizing misclassification cost are more moderate than those based on maximizing accuracy. In addition, the consolidated tree methods are more efficient than the split tree methods, since they do not always require the verification of attribute values.

    Micro-Fulfillment Center Inventory Policies for Digital Grocery Ecosystem

    As a recent development in the digital transformation of the grocery business, the micro-fulfillment center (MFC) raises management issues that require further exploration. This study addresses the MFC assortment and inventory decision problem for the digital grocery ecosystem. With the goal of maximizing profit, we first propose an MFC inventory decision framework based on the Markov decision process. Under this framework, we analyze various inventory decision scenarios, including single-period, multi-period, deterministic demand, stationary demand distribution, and varying demand distribution cases. To solve the problem under these scenarios, we propose several effective heuristics and algorithms. Experimental results show that the proposed heuristic policies significantly outperform the benchmark. Based on these findings, we also provide managerial insights for the MFC inventory problem. This study contributes to research and practice in the field of grocery business digital transformation and digital ecosystems.
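
    As a small, self-contained illustration of one of the simplest scenarios (single-period stocking of one item under stationary stochastic demand), the Python sketch below computes a newsvendor-style stock level from a critical fractile; it is a textbook special case, not the proposed MDP-based policy, and all numbers are hypothetical.

import math

# Newsvendor-style single-period stocking for one SKU under assumed Poisson demand.
price, unit_cost, salvage = 6.0, 4.0, 1.0    # sell price, purchase cost, end-of-period salvage
mean_demand = 40                             # assumed demand rate for the period

cu = price - unit_cost       # underage cost: margin lost per unit of unmet demand
co = unit_cost - salvage     # overage cost: loss per unsold unit
critical_fractile = cu / (cu + co)

def poisson_cdf(k, lam):
    """P(demand <= k) for Poisson demand with mean lam."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

# Smallest stock level whose demand CDF reaches the critical fractile.
Q = 0
while poisson_cdf(Q, mean_demand) < critical_fractile:
    Q += 1
print(f"critical fractile = {critical_fractile:.2f}, suggested stock level = {Q}")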