
    A real time leading economic indicator based on text mining for the Spanish economy. Fractional cointegration VAR and Continuous Wavelet Transform analysis.

    The main aim of this paper is to build a Real Time Leading Economic Indicator (RT-LEI) that improves on the performance of Composite Leading Indicators (CLI) in anticipating GDP trends and turning points for the Spanish economy. The indicator is constructed using factor analysis and is composed of 21 variables covering motor vehicle activity, financial activity, real estate activity, economic sentiment, and the industrial sector. The data sources used are Google Trends and Thomson Reuters Eikon-Datastream. This work contributes to the literature by studying the dynamics of GDP, the CLI and the RT-LEI using a Fractional Cointegration VAR (FCVAR) model and the Continuous Wavelet Transform (CWT). The results show that the model does not exhibit mean reversion, and the RT-LEI is expected to reveal a bear trend over the next two years, in line with the IMF and Consensus FUNCAS forecasts. The reasons are mostly associated with escalating global protectionism, uncertainty related to Catalonia, and faster monetary policy normalization.
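    As a hedged illustration of the construction step, the sketch below extracts a single common factor from a set of standardized monthly series, in the spirit of the factor-analysis approach described above; the file name and column layout are assumptions for illustration, not the paper's actual data.

```python
# Minimal sketch: composite indicator from a common factor, assuming a
# monthly CSV of candidate series (e.g., Google Trends and Datastream
# variables). All names here are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# Rows = dates, columns = the candidate indicator series.
df = pd.read_csv("indicators.csv", index_col="date", parse_dates=True)

# Standardize so series with large scales do not dominate the factor.
X = StandardScaler().fit_transform(df.values)

# Extract one common factor and use it as the composite indicator.
fa = FactorAnalysis(n_components=1, random_state=0)
rt_lei = pd.Series(fa.fit_transform(X)[:, 0], index=df.index, name="RT-LEI")
print(rt_lei.tail())
```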

    Social Media Analytics in Food Innovation and Production: a Review

    Until recently, social media and social media analytics (SMA) were used mainly for communication and marketing purposes. However, thanks to advances in digital technologies and big data analytics, potential applications of SMA now extend to production processes and overall business management. As a result, SMA has become an important tool for gaining and sustaining competitive advantage across various sectors, industries and end-markets. Yet the food industry still lags behind when it comes to the use of digital technologies and advanced data analytics. Part of the explanation lies in limited knowledge of the potential applications of SMA in food innovation and production. The aim of this paper is to review the literature on possible uses of SMA in the food industry and to discuss the benefits, risks, and limitations of SMA in food innovation and production. Based on the literature review, it is concluded that mining social media data for insights can create significant business value for food industry enterprises and food service organizations. On the other hand, many proposals for using SMA in the food domain still await direct experimental tests. More research and insights concerning the risks and limitations of SMA in the food sector are also needed. The issue of responsible data analytics, as part of the Corporate Digital Responsibility and Corporate Social Responsibility of enterprises using social media data for food innovation and production, also requires greater attention.

    The Impact of Positive Online Review Tags on Snacks Sales: A Case of Bestore in Tmall

    Customers’ reviews on e-commerce sites play a significant role in influencing potential customers’ purchasing decisions, which ultimately affect product sales. Chinese e-commerce sites like Tmall, Taobao and JD.com display collections of aspect tags that group reviews with similar comments, helping customers browse reviews and evaluate products more conveniently. To validate whether these tags are useful and actually play a role in promoting future sales, we collected data, including product information and review tags, on a regular basis for eight consecutive weeks from Bestore, a snack seller on Tmall. We classified the collected review tags into 9 types based on their semantic meanings. Finally, we analyzed and performed generalized estimating equations (GEE) modeling on a data set consisting of 234 products with a total of 734 tags. The results show that most of the aspect tags are related to the current period's sales volume, and that certain tags are more capable of nowcasting sales in the immediately following period.
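    For readers unfamiliar with GEE, the sketch below shows one way such a model could be set up with statsmodels for repeated weekly observations per product; the file name, variable coding, and the Poisson family with an exchangeable working correlation are illustrative assumptions, not the study's actual specification.

```python
# Hedged GEE sketch: relate weekly sales to review-tag counts, with
# repeated observations grouped by product. Variable names are
# illustrative (sales, tag_count, week, product_id).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

panel = pd.read_csv("bestore_weekly.csv")  # one row per product-week

# Exchangeable working correlation accounts for repeated measures on the
# same product; a Poisson family suits non-negative sales counts.
model = smf.gee(
    "sales ~ tag_count + week",
    groups="product_id",
    data=panel,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```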

    Essays on Panel Data Prediction Models

    Forward-looking analysis is valuable for policymakers, as they need effective strategies to mitigate imminent risks and potential challenges. Panel data sets contain time series information over a number of cross-sectional units and are known to have superior predictive abilities in comparison to time-series-only models. This PhD thesis develops novel panel data methods to contribute to the advancement of short-term forecasting and nowcasting of macroeconomic and environmental variables. The two most important highlights of this thesis are the use of cross-sectional dependence in panel data forecasting and the provision of timely predictions and ‘nowcasts’.

    Although panel data models have been found to provide better predictions in many empirical scenarios, forecasting applications so far have not included cross-sectional dependence. On the other hand, cross-sectional dependence is well recognised in large panels and has been explicitly modelled in previous causal studies. A substantial portion of this thesis is devoted to developing cross-sectional dependence in panel models suited to diverse empirical scenarios. The second important aspect of this work is to integrate the asynchronous release schedules of data within and across panel units into the panel models. Most of the thesis emphasises pseudo-real-time predictions, estimating the models only on the data that had been released at the time of prediction, thus replicating the realistic circumstances of delayed data releases.

    Linear, quantile and non-linear panel models are developed to predict a range of targets that differ both in meaning and in method of measurement. Linear models include panel mixed-frequency vector autoregression and bridge equation set-ups, which predict GDP growth, inflation and CO2 emissions. Panel quantile regressions and latent-variable discrete choice models predict growth-at-risk and extreme episodes of cross-border capital flows, respectively. The data sets include both international cross-country panels and regional subnational panels. Depending on the nature of the model and the prediction targets, different precision criteria evaluate the accuracy of the models in out-of-sample settings. The generated predictions beat their respective standard benchmarks in a more timely fashion.
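    To make the pseudo-real-time idea concrete, here is a deliberately simplified, single-series sketch of an expanding-window nowcasting loop in which the model is re-estimated at each origin using only data that would already have been released; the file name, the one-quarter GDP publication lag and the single bridge regressor are assumptions for illustration, not the thesis's actual specifications.

```python
# Simplified pseudo-real-time nowcasting loop (illustrative only).
# At each forecast origin, the regression is re-estimated on data that
# would already have been published at that point in time.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical quarterly file with columns: quarter, gdp_growth, indicator.
data = pd.read_csv("panel.csv", parse_dates=["quarter"]).sort_values("quarter")

nowcasts = []
for i in range(8, len(data)):  # start after an initial training window
    # Assume GDP is released with a one-quarter lag, while the bridge
    # indicator is already observed for the current quarter.
    train, current = data.iloc[:i], data.iloc[[i]]
    model = LinearRegression().fit(train[["indicator"]], train["gdp_growth"])
    nowcasts.append((current["quarter"].iloc[0],
                     model.predict(current[["indicator"]])[0]))

print(pd.DataFrame(nowcasts, columns=["quarter", "nowcast"]).tail())
```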

    Maximizing Insight from Modern Economic Analysis

    The last decade has seen a growing trend of economists exploring how to extract new economic insights from "big data" sources such as the Web. As economists move towards this model of analysis, their traditional workflow starts to become infeasible. The amount of noisy data from which to draw insights presents data management challenges for economists and limits their ability to discover meaningful information. Economists therefore need to invest a great deal of energy in training to be data scientists (a catch-all role that has grown to describe the use of statistics, data mining, and data management in the big data age), with little time left for applying their domain knowledge to the problem at hand. We envision an ideal workflow that generates accurate and reliable results, where results are produced in near-interactive time and systems handle the "heavy lifting" required for working with big data. This dissertation presents several systems and methodologies that bring economists closer to this ideal workflow, helping them address many of the challenges faced in transitioning to working with big data sources like the Web. To help users generate accurate and reliable results, we present approaches to identifying relevant predictors in nowcasting applications, as well as methods for identifying potentially invalid nowcasting models and their inputs. We show how a streamlined workflow, combined with pruning and shared computation, can help handle the heavy lifting of big data analysis, allowing users to generate results in near-interactive time. We also present a novel user model and architecture for helping users avoid undesirable bias when doing data preparation: users interactively define constraints for transformation code and the data that the code produces, and an explain-and-repair system satisfies these constraints as best it can, providing an explanation for any problems along the way. Together, these systems represent a unified effort to streamline the transition for economists to this new big data workflow.

    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144007/1/dol_1.pd
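    The abstract does not spell out the dissertation's own predictor-selection methods; as a generic illustration of the underlying problem, the sketch below screens a large set of Web-derived signals for a nowcasting target with an L1-penalized regression, one common way to identify relevant predictors. The file name and column layout are hypothetical.

```python
# Hedged sketch: screen many candidate "big data" predictors for a
# nowcasting target via cross-validated LASSO. Not the dissertation's
# actual method; all names are placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

df = pd.read_csv("web_signals.csv")  # one target column plus many signals
X = StandardScaler().fit_transform(df.drop(columns="target"))
y = df["target"].values

# LASSO shrinks irrelevant coefficients exactly to zero, so the surviving
# columns form a candidate set of relevant predictors.
lasso = LassoCV(cv=5).fit(X, y)
selected = [c for c, w in zip(df.columns.drop("target"), lasso.coef_) if w != 0]
print("retained predictors:", selected)
```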

    Measuring Social Well Being in The Big Data Era: Asking or Listening?

    The literature on well-being measurement seems to suggest that "asking" for a self-evaluation is the only way to estimate a complete and reliable measure of well-being, while "not asking" is the only way to avoid evaluations biased by self-reporting. Here we propose a method for estimating the welfare perception of a community simply by "listening" to conversations on social network sites. The Social Well Being Index (SWBI) and its components are estimated through an innovative technique of supervised sentiment analysis called iSA, which scales to any language and to big data. The main methodological advantages are that this approach can estimate several aspects of social well-being directly from self-declared perceptions, instead of approximating them through objective (but partial) quantitative variables such as GDP, and that these self-perceptions of welfare are spontaneous rather than answers to explicit questions, which have been shown to bias the results. As an application, we evaluate the SWBI in Italy over the period 2012-2015 by analysing more than 143 million tweets.
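    iSA belongs to a family of supervised methods that estimate the aggregate distribution of opinion categories rather than classifying each post individually. The sketch below illustrates that aggregate idea in heavily simplified form, solving P(S) = P(S|D)·P(D) over word-pattern frequencies; it is not the published iSA algorithm, and all names are illustrative.

```python
# Simplified illustration of aggregate sentiment estimation (the general
# idea behind methods like iSA, not the iSA algorithm itself).
import numpy as np
from scipy.optimize import nnls

def aggregate_sentiment(train_patterns, train_labels, corpus_patterns, k):
    """Estimate category shares P(D) for an unlabeled corpus.

    train_patterns / corpus_patterns: non-negative integer word-pattern
    ids, one per document; train_labels: category ids 0..k-1 for the
    hand-coded training documents.
    """
    n_patterns = max(train_patterns.max(), corpus_patterns.max()) + 1

    # P(S|D): pattern distribution within each hand-coded category.
    p_s_given_d = np.zeros((n_patterns, k))
    for d in range(k):
        pats = train_patterns[train_labels == d]
        p_s_given_d[:, d] = np.bincount(pats, minlength=n_patterns) / len(pats)

    # P(S): pattern distribution in the unlabeled corpus.
    p_s = np.bincount(corpus_patterns, minlength=n_patterns) / len(corpus_patterns)

    # Solve P(S) = P(S|D) P(D) with non-negative shares, then normalize.
    p_d, _ = nnls(p_s_given_d, p_s)
    return p_d / p_d.sum()
```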