12 research outputs found

    Predicting Problem-Solving Performance Using Concept Map

    A growing community of researchers applies concept maps to elicit and represent an individual's knowledge structure, especially within knowledge-intensive processes in organizations. Extending prior work on concept maps, this study explores a new indicator of the structural properties of a concept map, formulated from an information-entropy perspective, to predict an individual's problem-solving performance. From the information-processing view of problem solving, information theory provides the framework to formulate the new indicator, called EntropyAvg. A controlled experiment was carried out to validate the predictive ability of the new indicator. The results demonstrate that EntropyAvg estimates an individual's problem-solving performance beyond two other widely adopted indicators, i.e., complexity and integration. The theoretical and practical contributions of this study are also discussed.
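The abstract does not give the exact formula for EntropyAvg, so the sketch below is only a plausible illustration of an entropy-based structural indicator for a concept map: it averages the Shannon entropy of each node's outgoing-link distribution (the node names and the uniform-link assumption are invented for this example).

```python
import math
from collections import defaultdict

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_avg(edges):
    """Average per-node entropy of a concept map given as (source, target) links.

    Each node's local distribution is taken as uniform over its outgoing
    links, since plain concept-map links are unweighted; nodes with no
    outgoing links are skipped.
    """
    out_degree = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        out_degree[src] += 1
        nodes.update((src, dst))
    entropies = [
        shannon_entropy([1.0 / out_degree[n]] * out_degree[n])
        for n in nodes
        if out_degree[n] > 0
    ]
    return sum(entropies) / len(entropies) if entropies else 0.0

# A node linking uniformly to two concepts contributes entropy log2(2) = 1.
avg = entropy_avg([("energy", "heat"), ("energy", "work")])
```

Under this toy definition, a map whose concepts branch more evenly scores higher, which is one way a structural-entropy indicator could separate richer knowledge structures from linear ones.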

    Consumers’ Sentiments and Popularity of Brand Posts in Social Media: The Moderating Role of Up-votes

    User-generated content (UGC) on online social media plays an important role in the branding and marketing of firms' products and services. In this study, we examine the effect of consumers' sentiments embedded in UGC on the popularity of brand posts. We retrieved real-world data from a social media platform and applied a rigorous data-analysis method built on a state-of-the-art semi-supervised sentiment-analysis technique. Our empirical findings confirm that positive and negative sentiments are associated with post popularity to some extent. Moreover, customers' up-votes for negative comments moderate the effect of those comments on post popularity. To the best of our knowledge, this is the first study to demonstrate the specific role of up-votes in enhancing the popularity of brand posts on online social media. Our findings provide a promising theoretical contribution to the literature. The managerial implication is that firms can apply our findings to develop more effective strategies for marketing through social media brand communities.
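The abstract does not specify the estimation model, but a common way to test such a moderating effect is a regression with an interaction term between the sentiment variable and the moderator. The sketch below fits that specification by least squares on simulated data (all variable names, coefficients, and data are invented for illustration):

```python
import numpy as np

# Hypothetical toy data: negative-sentiment intensity of a post's comments,
# up-votes on those negative comments, and the post's popularity.
rng = np.random.default_rng(0)
n = 200
neg = rng.uniform(0, 1, n)           # negative-sentiment intensity
up = rng.integers(0, 50, n)          # up-votes on negative comments
# Simulated outcome in which up-votes dampen the negative effect:
pop = 100 - 30 * neg + 0.2 * up + 0.5 * neg * up + rng.normal(0, 1, n)

# Design matrix with the interaction term that captures moderation.
X = np.column_stack([np.ones(n), neg, up, neg * up])
coef, *_ = np.linalg.lstsq(X, pop, rcond=None)
b0, b_neg, b_up, b_inter = coef
```

A negative `b_neg` alongside a positive `b_inter` is the pattern the abstract describes: negative sentiment hurts popularity, but up-votes on negative comments weaken that penalty.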

    Belief revision for adaptive information agents

    As the richness and diversity of the information available to us in our everyday lives has expanded, so the need to manage this information grows. The lack of effective information management tools has given rise to what is colloquially known as the information overload problem. Intelligent agent technologies have been explored to develop personalised tools for autonomous information retrieval (IR). However, these so-called adaptive information agents are still primitive in terms of their learning autonomy, inference power, and explanatory capabilities. For instance, users often need to provide large amounts of direct relevance feedback to train the agents before these agents can acquire the users' specific information requirements. Existing information agents are also weak in dealing with the serendipity issue in IR because they cannot infer document relevance with respect to possibly related IR contexts. This thesis exploits theories and technologies from the fields of information retrieval (IR), symbolic artificial intelligence, and intelligent agents to develop the next generation of adaptive information agents and thereby alleviate the problem of information overload. In particular, fundamental issues such as representation, learning, and classification (e.g., classifying documents as relevant or not) pertaining to these agents are examined. The design of the adaptive information agent model stems from a basic intuition in IR. By way of illustration, given a retrieval context involving a science student and the query "Java", what information items should an intelligent information agent recommend to its user? The agent should recommend documents about "Computer Programming" if it believes that its user is a computer science student and that every computer science student needs to learn programming.
However, if the agent later discovers that its user is studying "volcanology", and the agent also believes that volcanologists are interested in the volcanoes of Java, the agent may recommend documents about "Merapi" (a volcano in Java with a recent eruption in 1994). This scenario illustrates that a retrieval context is not only a set of terms and their frequencies but also the relationships among terms (e.g., java → science → computer, computer → programming, java → science → volcanology → merapi, etc.). In addition, retrieval contexts represented in information agents should be revised in accordance with the changing information requirements of the users. Therefore, to enhance the adaptive and proactive IR behaviour of information agents, an expressive representation language is needed to represent complex retrieval contexts, and an effective learning mechanism is required to revise the agents' beliefs about the changing retrieval contexts. Moreover, a sound reasoning mechanism is essential for information agents to infer document relevance with respect to a given retrieval context, enhancing their proactiveness and learning autonomy. The theory of belief revision advocated by Alchourrón, Gärdenfors, and Makinson (AGM) provides a rigorous formal foundation for modelling evolving retrieval contexts in terms of changing epistemic states in adaptive information agents. The expressive power of the AGM framework allows sufficient details of retrieval contexts to be captured. Moreover, the AGM framework enforces the principles of minimal and consistent belief changes, and these principles coincide with the requirements of modelling changing information retrieval contexts. The AGM belief revision logic also has a close connection with the Logical Uncertainty Principle, which describes the fundamental approach underlying logic-based IR models. Accordingly, the AGM belief functions are applied to develop the learning components of adaptive information agents.
Expectation inference, which is characterised by axioms leading to conservatively monotonic IR behaviour, plays a significant role in developing the agents' classification components. Because of the direct connection between the AGM belief functions and the expectation inference relations, seamless integration of the information agents' learning and classification components is made possible. Essentially, the learning functions and the classification functions of adaptive information agents are conceptualised as K ∗ q and q |∼ d respectively. This conceptualisation can be interpreted as: (1) learning is the process of revising the representation K of a retrieval context with respect to a user's relevance feedback q, which can be seen as a refined query; (2) classification is the process of determining the degree of relevance of a document d with respect to the refined query q, given the agent's expectations (i.e., beliefs) K about the retrieval context. At the computational level, how to induce the epistemic entrenchment ordering which defines the AGM belief functions, and how to implement the AGM belief functions by means of an effective and efficient computational algorithm, are among the core research issues addressed. Automated methods of discovering context-sensitive term associations such as (computer → programming) and preclusion relations such as (volcanology ↛ programming) are explored. In addition, an effective classification method underpinned by expectation inference is developed for adaptive information agents. Last but not least, quantitative evaluations based on well-known IR benchmarking processes are applied to examine the performance of the prototype agent system. The performance of the belief-revision-based information agent system is compared with that of a vector-space-based agent system and the other adaptive information filtering systems that participated in TREC-7. As a whole, encouraging results are obtained from our initial experiments.
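As a purely illustrative sketch of the minimal-change intuition behind AGM revision (not the thesis's actual mechanism, which induces epistemic entrenchment orderings over full belief sets), revision K ∗ q can be mimicked over flat sets of propositional literals:

```python
def negate(literal):
    """Return the complementary literal, e.g. 'p' <-> '~p'."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs, evidence):
    """Toy AGM-style revision K * q over sets of literals.

    Minimal change: retract only the literals contradicted by the new
    evidence, then add the evidence, so the result stays consistent.
    """
    contradicted = {negate(lit) for lit in evidence}
    return (set(beliefs) - contradicted) | set(evidence)

def classify(beliefs, query):
    """Crude stand-in for classification q |~ d: treat a document's
    descriptors as relevant when K contains all of them."""
    return set(query) <= set(beliefs)

# The running example: the agent first believes its user is a
# computer-science student, then learns otherwise.
K = {"cs_student", "interest_programming"}
K = revise(K, {"~cs_student", "volcanology_student"})
```

Real AGM revision operates over logically closed theories, with an entrenchment ordering deciding which beliefs to give up; this literal-set version only shows why consistent, minimal retraction matters when retrieval contexts change.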

    Is There a Relationship between Social Executives and Firms' Mergers and Acquisition Decisions: An Empirical Study Based on Twitter

    No full text
    This study examines the relationship between senior executives' presence on online social media and mergers and acquisitions (M&A) decisions. Our empirical results, derived from a difference-in-differences analysis, suggest that the presence of social executives on social media improves M&A decisions (both likelihood and frequency). Interestingly, we find that executive age negatively moderates this relationship, such that younger social executives are more likely to undertake M&As than their older peers.
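The abstract names a difference-in-differences design without giving details. In its canonical two-period form, the estimator is simply the treated group's pre-to-post change minus the control group's; the sketch below shows that form (the variable names and the framing, e.g. yearly M&A counts for executives who did versus did not join social media, are invented for illustration):

```python
import numpy as np

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-period difference-in-differences estimator:

        (mean treated post - mean treated pre)
      - (mean control post - mean control pre)

    Under the parallel-trends assumption, this difference of differences
    identifies the effect of the treatment (here, joining social media).
    """
    return (np.mean(treat_post) - np.mean(treat_pre)) - (
        np.mean(ctrl_post) - np.mean(ctrl_pre)
    )

# Toy numbers: treated executives' M&A counts rise by 3 on average,
# controls' by 1, so the estimated treatment effect is 2.
effect = did_estimate([1, 2], [4, 5], [1, 2], [2, 3])
```

The subtraction of the control group's change is what nets out economy-wide shifts in M&A activity that would have occurred regardless of social media presence.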

    ExeAnalyzer: A Deep Generative Adversarial Network for Multimodal Online Impression Analysis and Startup Funding Prediction

    With the rise of equity crowdfunding platforms, entrepreneurs' online impressions are of great importance to startups' initial funding success. Guided by the design science research methodology, one contribution of our research is the design of a novel generative adversarial network, ExeAnalyzer, that analyzes CEOs' online impressions using multimodal data collected from social media platforms. More specifically, ExeAnalyzer can detect CEOs' first impressions, personalities, and other sociometric attributes. Based on a dataset of 7,806 startups extracted from AngelList, another contribution of our research is an empirical analysis of the relationship between CEOs' online impressions and startups' funding success. Our empirical analysis shows that CEOs' impression of dominance is negatively related to startups' funding performance, while CEOs' social desirability is positively associated with startups' funding success. Our empirical study also confirms that the impression features extracted by ExeAnalyzer have significant predictive power for startups' funding performance.

    Web 2.0 Environmental Scanning and Adaptive Decision Support for Business Mergers and Acquisitions

    No full text
    Globalization has triggered a rapid increase in cross-border mergers and acquisitions (M&As). However, research shows that only 17 percent of cross-border M&As create shareholder value. One of the main reasons for this poor track record is top management's lack of attention to the nonfinancial aspects (e.g., sociocultural aspects) of M&As. With the rapid growth of Web 2.0 applications, online environmental scanning provides top executives with unprecedented opportunities to tap into collective web intelligence and develop better insights into the sociocultural and political–economic factors that cross-border M&As face. Grounded in Porter's five forces model, one major contribution of our research is the design of a novel due diligence scorecard model that leverages collective web intelligence to enhance M&A decision making. Another important contribution of our work is the design and development of an adaptive business intelligence (BI) 2.0 system, underpinned by an evolutionary learning approach, domain-specific sentiment analysis, and business relation mining, to operationalize the aforementioned scorecard model for adaptive M&A decision support. With Chinese companies' cross-border M&As as the business context, our experimental results confirm that the proposed adaptive BI 2.0 system can significantly aid decision makers under different M&A scenarios. The managerial implication of our findings is that firms can apply the proposed BI 2.0 technology to enhance their strategic decision making, particularly when making cross-border investments in targeted markets for which private information may not be readily available.

    Enhancing Binary Classification by Modeling Uncertain Boundary in Three-Way Decisions

    No full text

    The Role of Media Coverage on Pandemic Containment: Empirical Analysis of the COVID-19 Case

    No full text
    Since December 29, 2019, the world has been suffering from COVID-19, a serious pandemic disease. Given their universal availability, social media platforms such as Weibo provide the public with frequently updated health information to support virus-containment work. The health information posted by health authorities (governments, hospitals, and medical experts) is expected to urge individuals to take protective actions. To investigate whether media coverage has a significant impact on protective behaviors and, further, on pandemic transmission, we collected a panel dataset and conducted an empirical analysis. Our preliminary results show that the volume of media coverage has a significant containment effect on pandemic transmission. In particular, verified publishers have a greater containment effect than unverified publishers. In future work, we will use instrumental variables or matching methods to examine the causal effects.