31 research outputs found

    Essays on value creation in online marketplaces

    This dissertation consists of three essays that study the transformative impact of new information technologies in three specific contexts, using both empirical and theoretical approaches. Chapter 2 examines the online review system, a type of information technology that replaces traditional word-of-mouth communication. In particular, we study the practice of platform owners using monetary incentives to attract reviewers. The problem is important because firms seeking to strengthen their online review platforms have considered various forms of incentives, including extrinsic rewards, to encourage users to write reviews. We exploit a natural experiment in which one review platform suddenly started offering monetary incentives for writing reviews. Combining these data with data from Amazon.com and using a difference-in-differences approach, we compare the quantity and quality of reviews before and after rewards were introduced on the treated platform. We find that reviews become significantly more positive but their quality decreases. Taking advantage of the panel structure of the data, we also evaluate the effect of rewards on existing reviewers: their level of participation decreases after monetary incentives are introduced, but the quality of their participation does not. Lastly, even though the platform enjoys an increase in the number of new reviewers, disproportionately more reviews appear to be written for highly rated products. In Chapter 3, we investigate the economic implications of a new online communication system that has become increasingly popular in recent years. This system allows consumers to ask and answer questions about the products available on the platform, and it typically co-exists with the standard online review system, where consumers share their own experience with the products. 
Although several websites have adopted this Q&A system, or even replaced the standard review system with it, its economic implications have not been studied in the prior literature. We collected data from two online shopping platforms and employed a difference-in-differences approach to empirically examine the effect of question and answer elements, which exist on only one platform, on product sales. Interestingly, we find that, controlling for other factors, question elements negatively affect product sales while answer elements, particularly the depth of the answers, have a positive impact on sales. However, when we focus on initial sales, the number of questions and the fraction of questions with at least one answer positively influence sales. We also find an interaction between Q&A elements and review elements: an increase in the number of questions appears to be positively correlated with an increase in the number of reviews in the following period, while an increase in the number of answers appears to reduce the average review length in the subsequent period. Our findings suggest that incorporating a question-and-answer system could be a viable approach to driving sales; however, to capitalize on such a system, managers must develop appropriate policies to gather the answers needed for questions asked on the platform. In Chapter 4, we provide an analysis of a two-sided platform, which has become a dominant framework adopted by new information technology platforms such as Uber and Airbnb. We develop a game-theoretic model featuring a platform owner who acts as an intermediary serving two types of users, and we examine the influence of the incentive policies the platform owner enforces. Specifically, our main interest is in the implications of the incentive policy for user behavior and welfare metrics. 
We find that although seller welfare always increases with the amount of incentives given by the platform, an adjustment of the incentive allocation policy can yield similar results in many scenarios. In addition, there exists a case in which the platform can increase both seller welfare and its own welfare without increasing the amount of incentives
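The before/after, treated/control comparison described in Chapter 2 can be illustrated with a minimal difference-in-differences sketch. All data and variable names below are synthetic and purely illustrative; they do not reproduce the dissertation's actual dataset or model:

```python
# Minimal difference-in-differences (DiD) sketch on synthetic data.
# "treated" marks the platform that introduced rewards; "post" marks the
# period after rewards were introduced. The DiD estimate is the change in
# the treated group's mean minus the change in the control group's mean.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n)   # 1 = platform that introduced rewards
post = rng.integers(0, 2, n)      # 1 = observation after rewards began
# Simulated outcome (e.g., review rating) with a built-in effect of 0.4
rating = (3.5 + 0.1 * treated + 0.05 * post
          + 0.4 * treated * post          # the treatment effect of interest
          + rng.normal(0, 0.5, n))

def group_mean(t, p):
    """Mean outcome for one treated/post cell."""
    mask = (treated == t) & (post == p)
    return rating[mask].mean()

did = (group_mean(1, 1) - group_mean(1, 0)) - (group_mean(0, 1) - group_mean(0, 0))
print(round(did, 2))  # recovers an estimate close to the true effect of 0.4
```

In practice such comparisons are run as regressions with controls and fixed effects, but the interaction logic is the same: the estimate isolates the change attributable to the treated platform's reward introduction.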

    Using Platform-Generated Content to Stimulate User-Generated Content

    This work studies the implications of an editorial review program in which a review platform supplements the user-generated reviews on its website with editorial review articles written by the platform itself. Our research question is whether platform-generated content (i.e., editorial reviews) influences subsequent user-generated content (i.e., online reviews) in terms of both the quantity and quality of those reviews. We obtained the dataset through a partnership with a restaurant review platform in Asia. Our preliminary analysis suggests that platform-generated content has a positive net effect on subsequent user-generated content. Specifically, users post more reviews for restaurants that have editorial reviews, and these reviews tend to be longer on average

    Status Regain and Validator Performance: Evidence from Blockchain Platform

    This paper examines the effects of providing validators with status on transaction verification performance in blockchains (e.g., Delegated Proof-of-Stake). In particular, it focuses on how status regain (i.e., validators losing status and later regaining it) affects their performance, and whether the effect of status regain diminishes over time. It is argued that losing status may induce behavioral changes once validators regain status. However, it is not clear how losing and regaining status might affect, positively or negatively, validators' performance in transaction verification. The results indicate that status regain positively impacts performance in transaction verification. Further, we find that the length of the status loss negatively moderates the impact of status regain on performance, whereas the status position held before the loss positively moderates it. Implications of the results for practice and research are discussed

    Using Context-Based Password Strength Meter to Nudge Users' Password Generating Behavior: A Randomized Experiment

    Encouraging users to create stronger passwords is one of the key issues in password-based authentication. It is particularly important because prior work has highlighted that most passwords are weak, yet passwords remain the most commonly used authentication method. This paper seeks to mitigate the issue of weak passwords by proposing a context-based password strength meter. We conduct a randomized experiment on Amazon MTurk and observe the change in users' behavior. The results show that our proposed method is effective: users exposed to our password strength meter are more likely to change their passwords after seeing the warning message, and those new passwords are stronger. Furthermore, users are willing to invest time in learning how to create a stronger password, even in a traditional password strength meter setting. Our findings suggest that simply incorporating contextual information into password strength meters could be an effective way to promote more secure behavior among end users
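The idea of a context-based strength meter can be sketched as follows. This is not the authors' implementation, which the abstract does not specify; the scoring rules, function name, and context terms are all illustrative assumptions:

```python
# Illustrative sketch (not the paper's actual meter): a password strength
# score that incorporates contextual information, penalizing passwords that
# reuse user-specific terms such as the username or the site name.
def context_strength(password, context_terms):
    # Crude base score from length (capped) plus character-class variety
    score = min(len(password) // 4, 3)
    classes = sum([
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ])
    score += classes - 1
    # Context-based penalty: the password contains a user-specific term
    lowered = password.lower()
    if any(term.lower() in lowered for term in context_terms if term):
        score -= 2
    return max(score, 0)

ctx = ["alice", "example.com"]
print(context_strength("alice2024!", ctx))   # penalized: contains the username
print(context_strength("T7#qv9!mKd", ctx))   # no contextual overlap
```

A traditional meter would stop at the length and character-class heuristics; the contextual penalty is what lets the meter flag passwords that look strong in isolation but are guessable given what an attacker knows about the user.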

    Enhancing security behaviour by supporting the user

    Although the role of users in maintaining security is regularly emphasized, this is often not matched by an accompanying level of support. Indeed, users are frequently given insufficient guidance to enable effective security choices and decisions, which can in turn lead to what is perceived as bad behaviour. This paper discusses the forms of support that are possible and investigates the effect of providing them in practice. Specifically, it presents findings from two experimental studies that investigate how variations in password meter usage and feedback can positively affect the resulting password choices. The first experiment examines the difference between passwords selected by unguided users versus those receiving guidance and alternative forms of feedback (ranging from a traditional password meter through to an emoji-based approach). The findings reveal a 30% drop in weak password choices between unguided and guided usage, with the varying meters then delivering up to 10% further improvement. The second experiment then considers variations in the form of feedback message that users may receive in addition to a meter-based rating. It is shown that by providing richer information (e.g. based upon the time required to crack a password, its relative ranking against other choices, or the probability of it being cracked), users are more motivated towards making strong choices and changing initially weak ones. While the specifics of the experimental findings were focused upon passwords, the discussion also considers the benefits that may be gained by applying the same principles of nudging and guidance to other areas of security in which users are often found to have weak behaviours

    The Value of Editorial Reviews for UGC Platform

    No full text
    We investigate an editorial review program in which a review platform supplements its user reviews with editorial reviews written by professional writers. Specifically, we examine whether and how editorial reviews influence subsequent user reviews (reviews written by non-editor reviewers). Empirical evidence from a quasi-experiment on a leading review platform in Asia, analyzed with several econometric and natural language processing techniques, shows an overall positive effect of editorial reviews on subsequent user reviews from the platform's perspective. For restaurants that receive editorial reviews, reviewers not only post more frequently but also write longer and more neutral feedback. Further analysis of the mechanism reveals that the subsequent reviews of restaurants that receive editorial reviews become more similar to their editorial reviews in topics, sentiment, and readability, indicating a herding effect as the main driver of the change in the subsequent reviews. The findings suggest that review platforms could use an editorial review program not only to boost review quantity but also to manage content quality. By supplementing user reviews with high-quality editorial reviews, the platform can improve the overall content quality of user reviews through a herding effect
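The abstract's herding analysis rests on measuring how similar user reviews are to the editorial review. One common way to quantify topical similarity, sketched below on toy text, is cosine similarity over bag-of-words vectors; the paper's actual pipeline is not specified here, and the example texts are invented:

```python
# Illustrative sketch: cosine similarity between two reviews using simple
# bag-of-words term counts. Higher values mean more overlapping vocabulary.
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

editorial = "the noodles are springy and the broth is rich"
user_review = "rich broth and springy noodles"
print(round(cosine_similarity(editorial, user_review), 2))
```

A herding analysis would track this similarity for reviews posted before versus after the editorial review appears: rising similarity afterwards is consistent with users echoing the editorial content. Real pipelines typically use topic models or embeddings rather than raw term counts, but the comparison logic is the same.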

    Are Review Helpfulness Score and Review Unhelpfulness Score Two Sides of The Same Coin or Different Coins?

    Online review platforms have increasingly incorporated review evaluation systems (i.e., systems that allow users to mark reviews as helpful or unhelpful) to assist review readers and encourage review contributors. However, although we have extensive knowledge about the review helpfulness score, our insights regarding its counterpart, the review unhelpfulness score, are lacking. Addressing this limitation is important because many researchers have adopted the review unhelpfulness score assuming that it is driven by intrinsic review characteristics, while practitioners implicitly assume that the unhelpfulness score can identify low-quality reviews. The primary objective of this work is to verify whether the review unhelpfulness score is influenced by the intrinsic review characteristics that drive the review helpfulness score. We find that, unlike the helpfulness score, the unhelpfulness score is not driven by intrinsic review characteristics, and that helpfulness voters behave significantly differently from unhelpfulness voters. Further implications and future directions are also discussed

    The impact of performance-contingent monetary incentives on user-generated content contribution

    No full text
    User-generated content (UGC) has become increasingly important in both individuals' daily lives and business applications. To encourage contributions, online platforms have used completion-contingent monetary incentives, wherein financial rewards are offered equally to each contributor who successfully completes a specific task. However, recent studies find that completion-contingent monetary rewards increase the volume of UGC at the expense of quality. In this study, we use a natural experiment research design to investigate the effect of an alternative reward structure, performance-contingent monetary incentives, on UGC generation. Since performance-contingent incentives are awarded only to owners of high-quality content, this design may crowd in individuals' intrinsic motivation by enhancing their perceived competence, and therefore stimulate them to contribute more content without compromising on quality. This research will advance our understanding of how different monetary incentive policies influence UGC contribution in online communities
