10,682 research outputs found

    A Rude Awakening: Internet Shakeout in 2000

    This study explores the major value-drivers of business-to-consumer ("B2C") Internet companies' share prices both before and after the "bursting of the Internet bubble" in the spring of 2000. Although many market observers had predicted that the bubble would eventually burst (e.g., Perkins and Perkins 1999), the ultimate and previously unanswered challenge lay in identifying which stocks would fall and which would survive the shakeout. We develop an empirical valuation model and provide evidence that the Internet stocks that this model suggests were relatively over-valued prior to the Internet stock market correction experienced relatively larger drops in their price-to-sales ratios when the bubble burst. This result is robust to the inclusion of competing explanatory variables suggested by the economics literature related to industry rationalizations. We also investigate a number of additional issues related to the rapidly changing Internet world. First, we provide descriptive evidence of the correlation between monthly stock returns and contemporaneous and lagged Nielsen/Netratings web traffic metrics (both levels and changes). We then undertake a factor analysis on the set of Nielsen/Netratings raw web metrics with a view to synthesizing the data into a parsimonious set of orthogonal web performance measures. Our factor analysis results in the extraction of three factors that capture the most relevant dimensions of website performance: (1) reach, (2) "stickiness", and (3) customer loyalty. Our findings suggest that all three web performance measures are value-relevant to the share prices of Internet companies in each of 1999 and 2000. Our findings of significance for the year 2000 contradict the recent claims of some analysts that web traffic measures are no longer important.
We also explore the valuation role of our proxy for B2C companies' ability to sustain their current rate of "cash burn" and find that this proxy is a significant value-driver in each of 1999 and 2000. Finally, our results suggest that investors adopted a more skeptical attitude towards expenditures on intangible investments as the Internet sector began to mature. Consistent with the results of prior studies in other knowledge asset based industries, we find that investors appear to implicitly capitalize product development (R&D) and advertising expenses (customer acquisition costs) during the "bubble" period when the market was more optimistic about the prospects of B2C companies. However, neither marketing expenses nor product development costs are implicitly capitalized into value, on average, subsequent to the shakeout in the spring of 2000. Overall, our study provides a preliminary view of the shakeout and maturation of one of the most important New Economy industries to emerge to date - the Internet.
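The factor-extraction step described in this abstract can be sketched in a few lines. This is a minimal stand-in that uses principal component analysis via SVD rather than a full factor-analysis model with rotation, and it assumes a hypothetical firms-by-raw-web-metrics input matrix; the paper's actual procedure and metric set are not reproduced here:

```python
import numpy as np

def extract_web_factors(metrics, n_factors=3):
    """Reduce a firms x raw-web-metrics matrix to a few orthogonal
    performance factors. PCA via SVD is used as a simple stand-in for
    factor analysis; each metric column is standardized first."""
    X = np.asarray(metrics, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each metric
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # principal axes in Vt
    scores = U[:, :n_factors] * s[:n_factors]         # factor scores per firm
    loadings = Vt[:n_factors].T                       # metric loadings
    return scores, loadings
```

The resulting score columns are mutually orthogonal by construction, mirroring the "parsimonious set of orthogonal web performance measures" the study describes; in practice one would inspect the loadings to label factors such as reach, stickiness, or loyalty.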

    A Trust Management Framework for Decision Support Systems

    In the era of information explosion, it is critical to develop a framework which can extract useful information and help people to make "educated" decisions. In our lives, whether or not we are aware of it, trust has turned out to be very helpful in making decisions. At the same time, cognitive trust, especially in large systems such as Facebook, Twitter, and so on, needs support from computer systems. Therefore, we need a framework that can effectively, but also intuitively, let people express their trust, and enable the system to automatically and securely summarize the massive amounts of trust information, so that a user of the system can make "educated" decisions, or at least not blind decisions. Inspired by the similarities between human trust and physical measurements, this dissertation proposes a measurement theory based trust management framework. It consists of three phases: trust modeling, trust inference, and decision making. Instead of proposing specific trust inference formulas, this dissertation proposes a fundamental framework which is flexible and can accommodate many different inference formulas. Validation experiments are done on two data sets: the Epinions.com data set and the Twitter data set. This dissertation also adapts the measurement theory based trust management framework for two decision support applications. In the first application, real stock market data are used as ground truth for the measurement theory based trust management framework. Basically, the correlation between the sentiment expressed on Twitter and stock market data is measured. Compared with existing works which do not differentiate tweets' authors, this dissertation analyzes trust among stock investors on Twitter and uses the trust network to differentiate tweets' authors.
The results show that by using the measurement theory based trust framework, Twitter sentiment valence is able to reflect abnormal stock returns better than treating all the authors as equally important or weighting them by their number of followers. In the second application, the measurement theory based trust management framework is used to help detect and prevent attacks in cloud computing scenarios. In this application, each single flow is treated as a measurement. The simulation results show that the measurement theory based trust management framework is able to provide guidance for cloud administrators and customers to make decisions, e.g. migrating tasks from suspect nodes to trustworthy nodes, dynamically allocating resources according to trust information, and managing the trade-off between the degree of redundancy and the cost of resources.
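The core weighting idea above, differentiating tweet authors by trust instead of treating them as equally important, can be sketched as a simple weighted average. The valence and trust numbers below are illustrative only, and the dissertation's measurement-theory inference formulas are considerably richer than this:

```python
import numpy as np

def trust_weighted_valence(valences, trust):
    """Aggregate per-author sentiment valences (e.g. +1 bullish, -1
    bearish) into a single signal, weighting each author by a trust
    score in [0, 1] rather than counting all authors equally."""
    v = np.asarray(valences, dtype=float)
    w = np.asarray(trust, dtype=float)
    return float(np.sum(w * v) / np.sum(w))
```

With hypothetical inputs, one highly trusted bullish author (trust 0.9) outweighs two barely trusted bearish authors (trust 0.05 each), whereas an unweighted mean of the same valences would be negative.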

    Challenges in Complex Systems Science

    The foundations of FuturICT are social science, complex systems science, and ICT. The main concerns and challenges in the science of complex systems in the context of FuturICT are laid out in this paper with special emphasis on the complex systems route to social sciences. These include complex systems having: many heterogeneous interacting parts; multiple scales; complicated transition laws; unexpected or unpredicted emergence; sensitive dependence on initial conditions; path-dependent dynamics; networked hierarchical connectivities; interaction of autonomous agents; self-organisation; non-equilibrium dynamics; combinatorial explosion; adaptivity to changing environments; co-evolving subsystems; ill-defined boundaries; and multilevel dynamics. In this context, science is seen as the process of abstracting the dynamics of systems from data. This presents many challenges, including: data gathering by large-scale experiment, participatory sensing and social computation; managing huge distributed dynamic and heterogeneous databases; moving from data to dynamical models, going beyond correlations to cause-effect relationships, understanding the relationship between simple and comprehensive models with appropriate choices of variables, ensemble modeling and data assimilation, modeling systems of systems of systems with many levels between micro and macro; and formulating new approaches to prediction, forecasting, and risk, especially in systems that can reflect on and change their behaviour in response to predictions, and systems whose apparently predictable behaviour is disrupted by apparently unpredictable rare or extreme events. These challenges are part of the FuturICT agenda.

    Can Wikipedia Article Traffic Statistics be Used to Verify a Technical Indicator? An Exploration into the Correlation Between Wikipedia Article Traffic Statistics and the Coppock Technical Indicator.

    Recent studies have shown that, through the quantification of Wikipedia usage patterns resulting from information gathering, stock market moves can be predicted (Moat et al. 2013). Research has also examined the ability of Wikipedia data to predict movie box office success (Mestyan et al. 2013). The goal of any investor, in order to maximize the return of their investments, is to have an edge over other participants in the markets. Several tools and techniques have been used over the years to this end, some proving to generate a consistent stream of income (Gillen 2012). With the improvement of technology and communication links, what was once considered a closed-door, gentleman's club operation can now be tapped into by anybody who has access to a PC and a communications link. It is said that only approximately 20% of investors are consistently successful in their investments (Terzo 2013). In order to be successful, there needs to be a strategy in place that is strictly adhered to. The objective of these trading systems is to minimize, or ideally cut out, the human emotion factor and, as a natural consequence, allow the strategy to operate at its optimum. An example of this is the use of a technical analysis indicator which, when used correctly, can net the investor considerable, consistent returns (Gillen 2012). Technical indicators, such as Coppock, are widely used in the field of stock market investment to provide traders and investors with an insight into which direction a stock or index is moving, so as to facilitate the optimum time to enter or exit the market. This project investigates whether Wikipedia article traffic statistics can be used to verify trading signals given by the Coppock technical indicator through the use of a suitable correlation technique.
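The Coppock indicator itself has a standard textbook definition: a 10-period front-weighted moving average of the sum of the 14-period and 11-period rates of change of monthly closing prices. A minimal sketch of that computation (not the project's own implementation, whose parameters and data handling are not given in the abstract):

```python
import numpy as np

def coppock(closes, roc1=14, roc2=11, wma=10):
    """Coppock curve on monthly closes: a front-weighted moving
    average (most recent month weighted highest) of the sum of two
    rates of change. Returns NaN until enough history accumulates."""
    c = np.asarray(closes, dtype=float)
    n = len(c)
    roc_sum = np.full(n, np.nan)
    k = max(roc1, roc2)
    for i in range(k, n):
        roc_sum[i] = (100 * (c[i] - c[i - roc1]) / c[i - roc1]
                      + 100 * (c[i] - c[i - roc2]) / c[i - roc2])
    weights = np.arange(1, wma + 1)        # weight 1 = oldest, wma = newest
    out = np.full(n, np.nan)
    for i in range(k + wma - 1, n):
        out[i] = np.dot(roc_sum[i - wma + 1:i + 1], weights) / weights.sum()
    return out
```

Verifying the indicator against Wikipedia article traffic, as the project proposes, would then amount to aligning the two time series and applying a correlation measure such as Pearson or Spearman to the overlapping, non-NaN portion.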

    Detecting and Tracking the Spread of Astroturf Memes in Microblog Streams

    Full text link
    Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.

    Citizens and Institutions as Information Prosumers. The Case Study of Italian Municipalities on Twitter

    The aim of this paper is to address changes in public communication following the advent of Internet social networking tools and the emerging web 2.0 technologies, which are providing new ways of sharing information and knowledge. In particular, public administrations are called upon to reinvent the governance of public affairs and to update the means for interacting with their communities. The paper develops an analysis of the distribution, diffusion and performance of the official profiles on Twitter adopted by the Italian municipalities (comuni) up to November 2013. It aims to identify the patterns of spatial distribution and the drivers of the diffusion of Twitter profiles, and to assess the performance of the profiles through an aggregated index, called the Twitter performance index (Twiperindex), which evaluates the profiles' activity with reference to the gravitational areas of the municipalities in order to enable comparisons of the activity of municipalities with different demographic sizes and functional roles. The results show that only a small portion of innovative municipalities have adopted Twitter to enhance e-participation and e-governance, and that the drivers of the diffusion seem to be related either to past experiences and existing conditions (i.e. civic networks, digital infrastructures) developed over time or to strong local community awareness. The best performances are achieved mainly by small and medium-sized municipalities. Of course, the phenomenon is very new and fluid; therefore this analysis should be considered a first step in ongoing research which aims to grasp the dynamics of these new means of public communication.

    Twitter and the US stock market: the influence of micro‑bloggers on share prices

    With the increased interest in social media over recent years, the role of information disseminated through avenues such as Twitter has become more widely recognized. This paper examines mentions of stocks on the US markets (NYSE and NASDAQ) by a number of financial micro-bloggers to establish whether their posts are reflected in price movements. The Twitter feeds are selected from syndicated and non-syndicated authors. A substantial number of tweets were linked to the price movements of the mentioned assets, and an event study methodology was used to ascertain whether these mentions carry any significant information or whether they are merely noise.
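An event study of the kind this abstract mentions typically fits a market model over a pre-event estimation window and then measures abnormal returns (and their cumulative sum, the CAR) around the event. A minimal sketch, where the window choices in the usage below are illustrative and not taken from the paper:

```python
import numpy as np

def abnormal_returns(stock_ret, market_ret, est_slice, event_slice):
    """Market-model event study: fit r_stock = a + b * r_market over
    an estimation window by OLS, then return abnormal returns (actual
    minus predicted) over the event window and their cumulative sum."""
    rs = np.asarray(stock_ret, dtype=float)
    rm = np.asarray(market_ret, dtype=float)
    b, a = np.polyfit(rm[est_slice], rs[est_slice], 1)   # slope, intercept
    ar = rs[event_slice] - (a + b * rm[event_slice])     # abnormal returns
    return ar, float(ar.sum())                           # AR series and CAR
```

A significant CAR around a tweet mention would suggest the mention carries information beyond normal market co-movement; a CAR indistinguishable from zero would point to noise, which is the distinction the paper's methodology is designed to test.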