
    Humanizing Digital Transformation Across People and Things: An Empirical Investigation of the Impact of Real-time Feedback Types on Performance

    Companies worldwide are adopting real-time feedback applications as part of their digital transformation strategies. The emphasis on real-time feedback stems from many organizations' perception that the human factor is the missing link to successful outcomes, specifically listening to associates' operational insights – both challenges and solutions. Moreover, feedback that employees direct at an entity, such as an operational insight or working process, may operate through different effects and mechanisms than feedback directed at a person. This research aims to understand whether and how feedback toward an entity (nonperson) versus a person affects the resulting feedback ratings and comment quality. Leveraging field data from two large, global companies, we find that feedback toward an entity receives lower ratings and is shorter, more negative, less subjective, and more specific than feedback toward a person. Additionally, we find that managerial position moderates these main effects. This research has both theoretical and practical implications.

    Human versus AI? Investigating the Heterogeneous Effects of Live Streaming E-commerce

    Live Streaming E-commerce (LSE) refers to a technology-enabled business model that embeds live streaming into e-commerce, where streamers sell products and interact with viewers in real time. When stores use human streamers, they benefit from high Synchronicity Interaction (SI), which drives user engagement. However, when stores replace human streamers with artificial intelligence (AI) streamers, it is unclear whether high-SI human streamers are more effective at selling products than low-SI AI streamers. This study examines the drivers of whether AI streamers are more or less effective at selling products than human streamers. We find that human and AI streamers perform differently and that product category moderates this effect. Our results contribute to the LSE and business-value-of-AI literatures and offer insights to platforms and stores seeking to better leverage AI technology, as well as to technology designers interested in developing more effective AI streamers.

    How Does Anonymizing Crowdsourced Users' Identity Affect Fact-checking on Social Media Platforms? A Regression Discontinuity Analysis

    The rapid spread of misinformation on social media platforms has affected many facets of society, including presidential elections, public health, the global economy, and human well-being. Crowdsourced fact-checking is an effective method for mitigating the spread of misinformation on social media. A key factor that affects user behavior on crowdsourcing platforms is users' anonymity or identity disclosure, yet in the crowdsourced fact-checking context it remains unknown whether and how identity anonymity affects users' fact-checking contribution performance. Leveraging a natural experiment created by a policy change on Twitter, we adopt a regression discontinuity design to investigate two research questions: whether and how identity anonymity affects crowdsourced fact-checking quantity and quality, and how the characteristics of crowdsourced users moderate this main effect. We find that the identity anonymization policy may not increase fact-checking users' contribution quantity, but it does increase fact-checking quality. Our research has both theoretical and practical implications.
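    The regression discontinuity logic described in this abstract can be sketched as follows. This is a hedged illustration only: the function, variable names, and toy data are invented here and are not the authors' actual model or data. The idea is to fit a local trend on each side of the policy cutoff and read off the jump in the outcome at the cutoff.

    ```python
    # Hedged sketch of a sharp regression discontinuity (RD) estimate.
    # All names and data below are illustrative, not from the study.

    def ols(xs, ys):
        """Ordinary least squares fit; returns (slope, intercept)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sxy / sxx
        return slope, my - slope * mx

    def rd_jump(xs, ys, cutoff, bandwidth):
        """Fit a local line on each side of the cutoff and return the
        estimated jump in the outcome at the cutoff."""
        left = [(x, y) for x, y in zip(xs, ys) if cutoff - bandwidth <= x < cutoff]
        right = [(x, y) for x, y in zip(xs, ys) if cutoff <= x <= cutoff + bandwidth]
        sl, il = ols(*zip(*left))
        sr, ir = ols(*zip(*right))
        return (sr * cutoff + ir) - (sl * cutoff + il)

    # Toy data: fact-checking "quality" trends upward over time and jumps
    # by 2.0 on the day (x = 50) a hypothetical anonymization policy starts.
    days = list(range(100))
    quality = [0.1 * d + (2.0 if d >= 50 else 0.0) for d in days]
    jump = rd_jump(days, quality, cutoff=50, bandwidth=10)  # ~2.0
    ```

    Fitting separate local lines rather than comparing raw means on each side removes the bias a pre-existing time trend would otherwise introduce into the jump estimate.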

    Stimulating Feedback Contributions Using Digital Nudges: A Field Experiment in a Real-time Mobile Feedback Platform

    In the contemporary remote work environment, the demand for effective and timely feedback has grown significantly. Despite the adoption of feedback systems, many employees still find these platforms lacking in delivering meaningful insights. This study examines the potential of digital nudges (reminder notifications sent to users) as a strategy to enhance feedback contributions on mobile platforms. A randomized field experiment was conducted in collaboration with a prominent organization, varying nudge send times and the emphasis on task significance. Spanning five weeks, the experiment evaluated the efficacy of these nudges in fostering feedback engagement among employees. Our findings indicate that the timing of nudges, their content (i.e., a task-significance message), and the combination of the two can significantly influence feedback behavior. The study's findings have potential implications for organizations aiming to bolster their feedback systems, making them more responsive and effective in the digital age.
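    The 2×2 design implied by this abstract (nudge send time × task-significance message) can be sketched as a balanced random assignment. The arm labels, employee identifiers, and counts below are illustrative assumptions, not the study's actual conditions:

    ```python
    import random

    def randomize(units, arms, seed=0):
        """Balanced random assignment: shuffle units, then deal them
        round-robin across arms so each arm gets an equal share."""
        rng = random.Random(seed)
        shuffled = units[:]
        rng.shuffle(shuffled)
        return {u: arms[i % len(arms)] for i, u in enumerate(shuffled)}

    # Hypothetical 2x2 factorial arms: send time x message content.
    arms = [(t, msg) for t in ("morning", "afternoon")
                     for msg in ("task-significance", "plain reminder")]
    employees = [f"emp{i:03d}" for i in range(40)]
    assignment = randomize(employees, arms, seed=7)  # 10 employees per arm
    ```

    Dealing shuffled units round-robin (rather than drawing an arm independently per unit) guarantees equal arm sizes, which keeps the comparison between conditions as precise as possible for a fixed sample.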

    Mobile or Desktop? That is the Question: An Empirical Study of the Role of Device Type on Real-time Employee Feedback Quality

    Many workplaces have replaced annual reviews with ongoing performance evaluations, often through real-time feedback applications. These applications allow users to provide feedback from different device types. However, organizational leaders are not primarily concerned with convenience: feedback structure and quality are paramount to an application's effectiveness, and the distinct technological affordances of mobile and desktop devices may yield different feedback comments. This study investigates the impact of device type (desktop vs. mobile) on textual feedback according to four helpfulness indicators: comment length, subjectivity, specificity, and valence. Analyzing two years of proprietary data from an enterprise real-time feedback application via regression analysis, we find that device type affects each of the four helpfulness indicators differently. Our study underscores the impact of device type on evaluation outcomes, and the derived managerial insights will help organizational leaders looking to optimize real-time feedback app usage.

    Inconsistency Investigation between Online Review Content and Ratings

    Despite the tremendous role of online consumer reviews (OCRs) in facilitating consumer purchase decisions, potential inconsistency between product ratings and review content can cause uncertainty and confusion among prospective consumers. This research investigates such inconsistency in order to better assist potential consumers in making purchase decisions. First, this study extracted a reviewer's sentiments from review text via sentiment analysis. Then, it examined the correlation and inconsistency between product ratings and review sentiments via Pearson correlation coefficients (PCC) and box plots. Next, we compared such inconsistency patterns between fake and authentic reviews. Based on an analysis of 24,539 Yelp reviews, we find that although ratings and sentiments are highly correlated, the inconsistency between the two is more salient in fake reviews than in authentic reviews. The comparison also reveals different inconsistency patterns between the two types of reviews.
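    The rating–sentiment consistency check described in this abstract can be sketched minimally as below. The sentiment scores and the inconsistency rule are synthetic stand-ins; the study used 24,539 real Yelp reviews and a sentiment-analysis model:

    ```python
    # Sketch: Pearson correlation between star ratings and text sentiment,
    # plus a simple directional-disagreement flag. Data is synthetic.

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    ratings = [5, 4, 1, 2, 5, 3, 1, 4]                        # star ratings
    sentiments = [0.9, 0.6, -0.8, -0.4, 0.8, 0.1, 0.7, 0.5]   # text sentiment in [-1, 1]

    pcc = pearson(ratings, sentiments)

    # Flag reviews whose rating and text sentiment disagree in direction,
    # e.g. a 1-star review whose text reads positively (index 6 here).
    inconsistent = [i for i, (r, s) in enumerate(zip(ratings, sentiments))
                    if (r >= 4 and s < 0) or (r <= 2 and s > 0)]
    ```

    Even with one strongly inconsistent review, the overall correlation stays high and positive, which mirrors the abstract's finding that ratings and sentiments are highly correlated while a minority of reviews (disproportionately fake ones) diverge.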