19 research outputs found

    Business Value of Big Data Analytics: A Systems-Theoretic Approach and Empirical Test

    Although big data analytics have been widely considered a key driver of marketing and innovation processes, whether and how big data analytics create business value has not been fully understood or empirically validated at a large scale. Taking social media analytics as an example, this paper is among the first attempts to theoretically explain and empirically test the market performance impact of big data analytics. Drawing on systems theory, we explain how and why social media analytics create super-additive value through synergies in functional complementarity between social media diversity, for gathering big data from diverse social media channels, and big data analytics, for analyzing the gathered big data. Furthermore, we deepen our theorizing by considering the difference between small and medium enterprises (SMEs) and large firms in the integration effort required to enable the synergies of social media diversity and big data analytics. In line with this theorizing, we empirically test the synergistic effect of social media diversity and big data analytics using a recent large-scale survey data set from 18,816 firms in Italy. We find that social media diversity and big data analytics have a positive interaction effect on market performance, and that this effect is more salient for SMEs than for large firms.
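
    The headline result is a moderated regression in which social media diversity and big data analytics enter through an interaction term. The minimal Python sketch below illustrates that kind of specification on synthetic data; the variable names, the size grouping, and the generated outcome are illustrative assumptions, not the paper's actual data or model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical firm-level data; the real study uses survey data from 18,816 Italian firms.
    rng = np.random.default_rng(0)
    n = 400
    firms = pd.DataFrame({
        "sm_diversity": rng.integers(0, 6, n),          # number of social media channels used
        "bda_use": rng.integers(0, 2, n),               # adopts big data analytics (0/1)
        "size_class": rng.choice(["SME", "large"], n),  # firm size group
    })
    # Synthetic outcome with a built-in positive interaction, so the sketch runs end to end.
    firms["market_performance"] = (
        0.2 * firms["sm_diversity"]
        + 0.3 * firms["bda_use"]
        + 0.4 * firms["sm_diversity"] * firms["bda_use"]
        + rng.normal(0, 1, n)
    )

    # Market performance regressed on diversity, analytics use, and their interaction.
    model = smf.ols(
        "market_performance ~ sm_diversity * bda_use + C(size_class)",
        data=firms,
    ).fit()
    print(model.summary())
    # A positive, significant coefficient on sm_diversity:bda_use is the
    # synergistic (super-additive) pattern the paper reports.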

    Analysis of Users' Behavior in Structured e-Commerce Websites

    Online shopping is becoming more and more common in our daily lives. Understanding users' interests and behavior is essential to adapt e-commerce websites to customers' requirements. The information about users' behavior is stored in the Web server logs. The analysis of such information has focused on applying data mining techniques, where a rather static characterization is used to model users' behavior, and the sequence of the actions performed by them is not usually considered. Therefore, incorporating a view of the process followed by users during a session can be of great interest for identifying more complex behavioral patterns. To address this issue, this paper proposes a linear-temporal logic model checking approach for the analysis of structured e-commerce Web logs. By defining a common way of mapping log records according to the e-commerce structure, Web logs can be easily converted into event logs where the behavior of users is captured. Then, different predefined queries can be performed to identify behavioral patterns that take into account the different actions performed by a user during a session. Finally, the usefulness of the proposed approach has been studied by applying it to a real case study of a Spanish e-commerce website. The results have identified interesting findings that made it possible to propose some improvements in the website design with the aim of increasing its efficiency.
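
    To make the log-to-event-log mapping and the temporal queries concrete, the sketch below groups hypothetical Web log records into per-session traces and checks one "eventually" style property by hand. The paper relies on a proper linear-temporal logic model checker; this simplified Python check and its field names are only illustrative assumptions.

    from collections import defaultdict

    # Hypothetical Web server log records; field names are illustrative only.
    log_records = [
        {"session": "s1", "time": 1, "action": "view_product"},
        {"session": "s1", "time": 2, "action": "add_to_cart"},
        {"session": "s1", "time": 3, "action": "checkout"},
        {"session": "s2", "time": 1, "action": "view_product"},
        {"session": "s2", "time": 2, "action": "exit"},
    ]

    # Map log records to an event log: one ordered trace of actions per session.
    traces = defaultdict(list)
    for rec in sorted(log_records, key=lambda r: (r["session"], r["time"])):
        traces[rec["session"]].append(rec["action"])

    def eventually_after(trace, trigger, target):
        """Crude stand-in for the LTL property G(trigger -> F target):
        every occurrence of `trigger` is eventually followed by `target`."""
        return all(target in trace[i + 1:]
                   for i, action in enumerate(trace) if action == trigger)

    # One predefined query: which sessions view a product but never check out?
    abandoned = [s for s, t in traces.items()
                 if not eventually_after(t, "view_product", "checkout")]
    print(abandoned)  # ['s2']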

    Advanced technologies and international business: A multidisciplinary analysis of the literature

    Advanced digital technologies, such as the Internet of Things, blockchain, big data analytics and augmented reality, are gradually transforming the way multinational firms do business. Due to the extent of this transformation, many scholars argue that the integration of these technologies marks the commencement of the fourth industrial revolution (Industry 4.0). However, the question of how these advanced technologies impact international business activities needs further attention. To this end, we adopt a multidisciplinary approach to review the related literature in international business (IB), general management, information systems, and operations research. We include the latter two fields because advanced technologies have received more attention in these bodies of literature. Based on our analysis, we discuss the implications of these technologies for international business. Further, we highlight the drivers of technology utilisation by multinational firms and likely outcomes. We also provide future research avenues.

    Metric for selecting the number of topics in the LDA Model

    The latest technological trends are driving a vast and growing amount of textual data. Topic modeling is a useful tool for extracting information from large corpora of text. A topic model is built from a corpus of documents, discovers the topics that permeate the corpus, and assigns documents to those topics. The Latent Dirichlet Allocation (LDA) model is the most popular of the probabilistic topic models. The LDA model is conditioned by three parameters: two Dirichlet hyperparameters (α and β) and the number of topics (K). Determining the parameter K is extremely important and not extensively explored in the literature, mainly due to the intensive computation and long processing time involved. Most topic modeling methods implicitly assume that the number of topics is known in advance, treating it as an exogenous parameter, which leaves the technique prone to subjectivity. The quality of the insights offered by LDA is quite sensitive to the value of the parameter K, and an excess of subjectivity in its choice might undermine the confidence managers place in the technique's results, and thus its usage by firms. This dissertation's main objective is to develop a metric for identifying the ideal value of the parameter K of the LDA model that allows an adequate representation of the corpus within a tolerable processing time. We apply the proposed metric alongside existing metrics to two datasets. Experiments show that the proposed method selects a number of topics similar to that of other metrics, but with better performance in terms of processing time. Although each metric has its own method for determining the number of topics, some results are similar for the same database, as evidenced in the study. Our metric is superior when considering the processing time, and experiments show that the method is effective.
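
    The dissertation's metric itself is not reproduced in this abstract, but the kind of sweep such metrics rely on looks like the minimal Python sketch below, which fits LDA for several candidate values of K and records a quality score together with the elapsed time. The toy corpus and the use of a standard coherence score as the selection criterion are assumptions, not the dissertation's method.

    import time
    from gensim.corpora import Dictionary
    from gensim.models import CoherenceModel, LdaModel

    # Toy corpus; a real application would use a large collection of documents.
    docs = [
        ["data", "mining", "topic", "model"],
        ["market", "customer", "analytics"],
        ["topic", "model", "dirichlet", "allocation"],
        ["customer", "intelligence", "marketing"],
    ]
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]

    # Sweep candidate values of K, recording a quality score and the elapsed time.
    for k in (2, 3, 4):
        start = time.time()
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                       alpha="auto", eta="auto", passes=10, random_state=0)
        score = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                               coherence="u_mass").get_coherence()
        print(f"K={k}  u_mass={score:.3f}  elapsed={time.time() - start:.2f}s")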

    A predictive maintenance approach based on big data analysis

    With the evolution of information systems, the data flow escalated to new boundaries, allowing enterprises to further develop their approach to important sectors such as production, logistics, IT and, especially, maintenance. This last field has accompanied industry developments hand in hand through each of the four industrial revolutions. More specifically, the fourth iteration (Industry 4.0) brought the capability to connect machines and further enhance data extraction, which allowed companies to apply a new data-driven approach to their specific problems. Nevertheless, with a wider flow of data being generated, understanding that data became a priority for maintenance-related decision-making processes. Therefore, the correct elaboration of a roadmap for applying predictive maintenance (PM) is a key step for companies. A roadmap allows a safe approach in which resources can be placed strategically with a ratio of low risk to high reward. By analysing multiple approaches to PM, a generic model is proposed, which contains an array of guidelines. This combination aims to assist maintenance departments that wish to understand the feasibility of implementing a predictive maintenance solution in their company. To assess the utility of the developed artefact, a practical application was conducted on a production line of HFA, a Portuguese small and medium enterprise.
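
    The roadmap itself is organisational, but the data-driven step it ultimately points to can be pictured with the short Python sketch below, which trains a failure classifier on sensor readings. The synthetic features, the failure rule, and the choice of a random forest are illustrative assumptions rather than anything implemented at HFA.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Hypothetical sensor readings: temperature, vibration, pressure per machine cycle.
    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.normal(60, 5, n),      # temperature
        rng.normal(0.3, 0.1, n),   # vibration
        rng.normal(8, 1, n),       # pressure
    ])
    # Synthetic failure label: more likely when temperature and vibration are both high.
    y = ((X[:, 0] > 62) & (X[:, 1] > 0.32)).astype(int)

    # Train a failure classifier and report how well it flags at-risk cycles.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))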

    Value Creation from Big Data Analytics: A Systems Approach to Enabling Big Data Benefits


    Marketing intelligence from data mining perspective: A literature review

    The digital transformation enables enterprises to mine big data for marketing intelligence on markets, customers, products, and competitors. However, there is a lack of a comprehensive literature review on this issue. With an aim to support enterprises in accelerating the digital transformation and gaining competitive advantages through exploiting marketing intelligence from big data, this paper examines the literature from the period 2001–2018. Consequently, the 76 most relevant articles are analyzed based on four marketing intelligence components (Markets, Customers, Products, and Competitors) and six data mining models (Association, Classification, Clustering, Regression, Prediction, and Sequence Discovery). The findings of this study indicate that the research area of product and customer intelligence receives the most research attention. This paper also provides a roadmap to guide future research on bridging marketing and information systems through the application of data mining to exploit marketing intelligence from big data.
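
    As a concrete illustration of one cell of that components-by-models grid, the Python sketch below applies Clustering, one of the six data mining models, to customer intelligence by segmenting customers on RFM-style features. The features and data are hypothetical and the example is not drawn from the reviewed articles.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical RFM-style features for 300 customers:
    # recency (days), frequency (orders/year), monetary value (spend/year).
    rng = np.random.default_rng(1)
    customers = np.column_stack([
        rng.integers(1, 365, 300),
        rng.integers(1, 50, 300),
        rng.gamma(2.0, 200.0, 300),
    ])

    # Segment customers into four groups on standardised features.
    segments = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(
        StandardScaler().fit_transform(customers)
    )
    print(np.bincount(segments))  # size of each customer segment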

    The quest for customer intelligence to support marketing decisions: A knowledge-based framework

    The quest for customer intelligence to create value in marketing highlights the significance of the research focus of this paper. Customer intelligence, defined as the understandings or insights resulting from the application of analytic techniques, plays a significant role in the survival and prosperity of enterprises in the knowledge-based economy. In this light, the paper develops a framework of customer intelligence to support marketing decisions through the lens of knowledge-based theory. The proposed framework aims at supporting enterprises in identifying the right customer data for the right customer intelligence corresponding to the right marketing decisions. Four types of customer intelligence are clarified: product-aware intelligence, customer DNA intelligence, customer experience intelligence, and customer value intelligence. The applications of customer intelligence are also elucidated with relevant marketing decisions to maximize value creation. An example is presented to illustrate the framework. The importance and originality of this study are that it responds to changes in customer intelligence in the age of massive data and covers multifaceted aspects of marketing decisions.

    Big Data: Reflexões Epistemológicas e Impactos nos Estudos de Finanças e Mercado de Capitais

    Objective and method: Access to data series plays a central role in the area of Finance. The increasing availability of large volumes of data, in different formats and at high frequency, combined with technological advances in data storage and processing tools, has created a new scenario in academic research in general, and in Finance in particular, generating new opportunities and challenges. Among these challenges, methodological issues emerge, which are widely discussed among researchers from different areas, but also epistemological issues that deserve greater space for discussion. Thus, the objective of this theoretical essay is to analyze the conceptual and epistemological aspects of the use of intensive data and its implications for the area of Finance. Results and contributions: We consider that the hypothetical-deductive method of empirical research, which is the most common, limits the construction of knowledge in the so-called 'big data era', as this approach starts from an established theory and restricts research to testing the proposed hypothesis(es). We advocate the use of an abductive approach, as argued in Haig (2005), which converges with the ideas of grounded theory and seems to be the most appropriate approach for this new context, as it expands the capacity to extract valuable information from the data.