
    Mercury: using the QuPreSS reference model to evaluate predictive services

    Nowadays, many service providers offer predictive services that forecast a condition or occurrence in advance. As a consequence, service customers need to select the predictive service that best satisfies their needs. The QuPreSS reference model provides a standard solution for selecting predictive services based on the quality of their predictions. QuPreSS was designed to be applicable in any predictive domain (e.g., weather forecasting, economics, and medicine). This paper presents Mercury, a tool based on the QuPreSS reference model and customized to the weather forecast domain. Mercury measures the quality of weather predictive services and automates the context-dependent selection of the most accurate predictive service for a customer query. To do so, candidate predictive services are monitored so that their predictions can eventually be compared with real observations obtained from a trusted source. Mercury is a proof of concept of QuPreSS that aims to show that the selection of predictive services can be driven by the quality of their predictions. Throughout the paper, we show how Mercury was built from the QuPreSS reference model and how it can be installed and used. Peer reviewed. Postprint (author's final draft).
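    The core idea of the abstract above can be sketched in a few lines: archive each service's predictions, compare them against trusted observations once those arrive, and pick the service with the smallest error. This is only an illustration of the selection principle; the service names, data, and error metric are invented here and are not Mercury's actual implementation.

    ```python
    # Illustrative sketch (not the actual Mercury implementation): rank
    # candidate weather services by comparing their archived forecasts
    # against observations from a trusted source.

    def mean_absolute_error(predictions, observations):
        """Average absolute gap between forecast and observed values."""
        return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(observations)

    def best_service(archive, observations):
        """Return the service whose archived forecasts track observations most closely."""
        scores = {name: mean_absolute_error(preds, observations)
                  for name, preds in archive.items()}
        return min(scores, key=scores.get), scores

    # Hypothetical archived temperature forecasts (in Celsius) and the
    # trusted observations for the same days.
    archive = {
        "service_a": [21.0, 19.5, 23.0, 18.0],
        "service_b": [20.0, 20.0, 22.5, 17.5],
    }
    observations = [20.5, 19.8, 22.8, 17.6]

    winner, scores = best_service(archive, observations)
    print(winner)
    ```

    A query-time selector would apply this comparison per context (e.g., per city or forecast horizon) rather than globally.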

    Unveiling the features of successful ebay sellers of smartphones: a data mining sales predictive model

    JEL Classification: M310; C380. eBay is one of the largest online retailing corporations worldwide, providing numerous channels for customer feedback on registered sellers. With the advent of Web 2.0 and online shopping, an immense amount of data is collected from manifold devices. These data are often unstructured, which calls for further treatment to enable classification, discovery of patterns and trends, or prediction of outcomes. That treatment requires increasingly complex and combined statistical tools as datasets grow; nowadays, datasets may extend to several exabytes, which can be transformed into knowledge using adequate methods. The aim of the present study is to evaluate and analyse which seller and product attributes, such as feedback ratings and price, influence sales of smartphones on eBay, and in what way, using a data mining framework and techniques. The methods include SVM algorithms for modelling the sales of smartphones by eBay sellers, combined with a 10-fold cross-validation scheme to ensure model robustness, the metrics MAE, RAE, and NMAE to gauge prediction accuracy, and a sensitivity analysis to assess the influence of individual features on sales. The methods proved effective for both model evaluation and knowledge extraction, reaching positive results, although with some discrepancies between the prediction accuracy metrics. Lastly, the number of items in auction, the average price, and the variety of products available from a given seller were found to be the most significant attributes, i.e., the largest contributors to sales.
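    The modelling pipeline described in this abstract (SVM regression, 10-fold cross-validation, MAE and RAE) can be sketched with scikit-learn. The synthetic features below (price, feedback, listings) are stand-ins for the eBay attributes the study describes, not its actual data, and the model settings are illustrative assumptions.

    ```python
    # Hedged sketch of the abstract's pipeline: an SVM regressor evaluated
    # with 10-fold cross-validation, scored with MAE and RAE. Data are synthetic.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_predict, KFold

    rng = np.random.default_rng(0)
    n = 200
    price = rng.uniform(50, 600, n)        # average listing price
    feedback = rng.uniform(80, 100, n)     # feedback rating (%)
    listings = rng.integers(1, 50, n)      # number of items in auction
    X = np.column_stack([price, feedback, listings])
    # Synthetic "sales" target loosely driven by the features plus noise.
    y = 0.05 * listings + 0.02 * feedback - 0.001 * price + rng.normal(0, 0.1, n)

    model = SVR(kernel="rbf", C=1.0)
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    pred = cross_val_predict(model, X, y, cv=cv)   # out-of-fold predictions

    mae = np.mean(np.abs(y - pred))                                 # mean absolute error
    rae = np.sum(np.abs(y - pred)) / np.sum(np.abs(y - y.mean()))   # relative absolute error
    print(f"MAE={mae:.3f}  RAE={rae:.3f}")
    ```

    A sensitivity analysis, as in the study, would then perturb one feature at a time and measure the change in the cross-validated predictions.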

    Using big data for customer centric marketing

    This chapter deliberates on “big data” and provides a short overview of business intelligence and emerging analytics. It underlines the importance of data for customer-centricity in marketing. This contribution contends that businesses ought to adopt marketing automation tools and apply them to create relevant, targeted customer experiences. Today’s businesses increasingly rely on digital media and mobile technologies, as on-demand, real-time marketing has become more personalised than ever. Therefore, companies and brands are striving to nurture fruitful and long-lasting relationships with customers. In a nutshell, this chapter explains why companies should recognise the value of data analysis and mobile applications as tools that drive consumer insights and engagement. It suggests that a strategic approach to big data could drive consumer preferences and may also help to improve organisational performance. Peer reviewed.

    A Review of Supply Chain Data Mining Publications

    The use of data mining in supply chains is growing, and covers almost all aspects of supply chain management. A framework of supply chain analytics is used to classify data mining publications reported in the supply chain management academic literature. Scholarly articles were identified using the SCOPUS and EBSCO Business search engines and classified by supply chain function. Additional papers reflecting technology, including RFID use and text analysis, were reviewed separately. The paper concludes with a discussion of potential research issues and the outlook for future development.

    Artificial Intelligence, Social Media and Supply Chain Management: The Way Forward

    Supply chain management (SCM) is a complex network of multiple entities ranging from business partners to end consumers. These stakeholders frequently use social media platforms, such as Twitter and Facebook, to voice their opinions and concerns. AI-based applications, such as sentiment analysis, allow us to extract relevant information from these deliberations. We argue that the context-specific application of AI, compared to generic approaches, is more efficient at retrieving meaningful insights from social media data for SCM. We present a conceptual overview of prevalent techniques and available resources for information extraction. Subsequently, we identify specific areas of SCM where context-aware sentiment analysis can enhance overall efficiency.
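    The abstract's argument for context-specific over generic sentiment analysis can be shown with a toy lexicon-based scorer: a generic lexicon misses domain complaints such as "delayed" or "backorder" that an SCM-aware lexicon catches. The lexicons, weights, and example tweet below are invented for illustration only.

    ```python
    # Toy illustration of context-aware sentiment analysis for SCM.
    # Lexicons and scores are invented; real systems use far richer models.

    GENERIC = {"good": 1, "great": 1, "bad": -1, "terrible": -1}
    # Same lexicon extended with supply-chain vocabulary.
    SCM_CONTEXT = {**GENERIC, "delayed": -1, "backorder": -1, "on-time": 1}

    def score(text, lexicon):
        """Sum lexicon weights over lowercase tokens; the sign gives polarity."""
        return sum(lexicon.get(tok, 0) for tok in text.lower().split())

    tweet = "shipment delayed again and half the order on backorder"
    print(score(tweet, GENERIC))      # generic lexicon scores it neutral
    print(score(tweet, SCM_CONTEXT))  # context-aware lexicon flags it negative
    ```

    The same gap motivates training or fine-tuning sentiment models on supply-chain text rather than reusing general-purpose ones.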

    Flexible Decision Control in an Autonomous Trading Agent

    An autonomous trading agent is a complex piece of software that must operate in a competitive economic environment and support a research agenda. We describe the structure of decision processes in the MinneTAC trading agent, focusing on the use of evaluators – configurable, composable modules for data analysis and prediction that are chained together at runtime to support agent decision-making. Through a set of examples, we show how this structure supports sales and procurement decisions, and how those decision processes can be modified in useful ways by changing evaluator configurations. To put this work in context, we also report the results of an informal survey of agent design approaches among the competitors in the Trading Agent Competition for Supply Chain Management (TAC SCM). Keywords: autonomous trading agent; decision processes.
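    The evaluator idea described above — small analysis modules chained at runtime so that a decision process can be reconfigured by swapping modules — can be sketched as follows. The class and parameter names here are hypothetical and are not MinneTAC's actual API.

    ```python
    # Sketch of runtime-chained evaluator modules (hypothetical names,
    # not the MinneTAC implementation).

    class Evaluator:
        def evaluate(self, data):
            raise NotImplementedError

    class MovingAverage(Evaluator):
        """Smooth a recent price series into a single estimate."""
        def __init__(self, window):
            self.window = window
        def evaluate(self, series):
            recent = series[-self.window:]
            return sum(recent) / len(recent)

    class Markup(Evaluator):
        """Turn a cost estimate into an offer price."""
        def __init__(self, factor):
            self.factor = factor
        def evaluate(self, estimate):
            return estimate * self.factor

    def run_chain(chain, data):
        """Feed each evaluator's output into the next one in the chain."""
        for ev in chain:
            data = ev.evaluate(data)
        return data

    # Reconfiguring the decision process is just changing the chain.
    prices = [100, 102, 98, 104]
    offer = run_chain([MovingAverage(window=2), Markup(factor=1.1)], prices)
    print(round(offer, 2))
    ```

    Because the chain is assembled at runtime, a sales decision and a procurement decision can share modules while differing only in configuration, which is the flexibility the paper describes.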

    Harnessing the power of the general public for crowdsourced business intelligence: a survey

    Crowdsourced business intelligence (CrowdBI), which leverages crowdsourced user-generated data to extract useful knowledge about business and create marketing intelligence for excelling in the business environment, has become a surging research topic in recent years. Compared with traditional business intelligence, which is based on firm-owned data and survey data, CrowdBI involves numerous unique issues and applications, such as customer behavior analysis, brand tracking, product improvement, demand forecasting and trend analysis, competitive intelligence, business popularity analysis and site recommendation, and urban commercial analysis. This paper first characterizes the concept model and unique features and presents a generic framework for CrowdBI. It also investigates novel application areas as well as the key challenges and techniques of CrowdBI. Furthermore, we discuss future research directions for CrowdBI.

    From Social Data Mining to Forecasting Socio-Economic Crisis

    Socio-economic data mining has great potential for gaining a better understanding of the problems that our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. These can be imagined as laboratories devoted to the gathering and processing of enormous volumes of data on both natural systems, such as the Earth and its ecosystem, and human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise from individually customized services, which, however, should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a worldwide scale, it is in the public interest that scientists explore what can be done with the huge data available. Big data do have the potential to change or even threaten democratic societies. The same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. Self-interests of individuals, companies, or institutions have limits where the public interest is affected, and public interest is not a sufficient justification to violate the human rights of individuals. Privacy is a high good, as confidentiality is, and damaging it would have serious side effects for society. Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c