What attracts vehicle consumers’ buying? A Saaty scale-based VIKOR (SSC-VIKOR) approach from an after-sales textual perspective
Purpose:
The rapid growth of e-commerce has encouraged vehicle consumers to post individual reviews in online forums. The purpose of this paper is to probe into vehicle consumers’ consumption behavior and to make recommendations for potential consumers from a textual-comments viewpoint.
Design/methodology/approach:
A big data analytics-based approach is designed to discover vehicle consumers’ consumption behavior from an online perspective. To reduce the subjectivity of expert-based approaches, a parallel Naïve Bayes approach is designed to perform sentiment analysis, and the Saaty scale-based (SSC) scoring rule is employed to obtain a specific sentiment value for each attribute class, contributing to multi-grade sentiment classification. To achieve intelligent recommendation for potential vehicle customers, a novel SSC-VIKOR approach is developed to prioritize vehicle brand candidates from a big data analytical viewpoint.
Findings:
The big data analytics show that “cost-effectiveness” is the most important factor vehicle consumers care about, and the data mining results enable automakers to better understand consumer consumption behavior.
Research limitations/implications:
The case study illustrates the effectiveness of the integrated method, contributing to more precise operations management in marketing strategy, quality improvement and intelligent recommendation.
Originality/value:
Research on consumer consumption behavior is usually based on survey methods, and most previous studies of comment analysis focus on binary classification. The hybrid SSC-VIKOR approach is developed to fill this gap from a big data perspective.
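The VIKOR ranking step named in the abstract can be sketched generically. The snippet below is a minimal textbook VIKOR implementation, not the paper’s SSC-VIKOR variant; the sentiment-score matrix, criteria weights and brand count are invented for illustration (all criteria are treated as benefit-type, which the sentiment-score setting suggests but the abstract does not confirm).

```python
# Minimal VIKOR sketch: rows are hypothetical vehicle brands, columns are
# sentiment scores per attribute class (e.g. cost-effectiveness, comfort,
# safety). The SSC scoring step is assumed to have produced the matrix.

def vikor(matrix, weights, v=0.5):
    """Return compromise scores Q (lower is better) for each alternative."""
    n_crit = len(weights)
    best = [max(row[j] for row in matrix) for j in range(n_crit)]
    worst = [min(row[j] for row in matrix) for j in range(n_crit)]
    S, R = [], []
    for row in matrix:
        terms = [
            weights[j] * (best[j] - row[j]) / ((best[j] - worst[j]) or 1)
            for j in range(n_crit)
        ]
        S.append(sum(terms))   # group utility (weighted sum of regrets)
        R.append(max(terms))   # individual regret (worst single criterion)
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    return [
        v * (S[i] - s_star) / ((s_minus - s_star) or 1)
        + (1 - v) * (R[i] - r_star) / ((r_minus - r_star) or 1)
        for i in range(len(matrix))
    ]

# Three hypothetical brands scored on three attribute classes.
scores = vikor([[0.9, 0.6, 0.7], [0.5, 0.8, 0.6], [0.7, 0.7, 0.9]],
               weights=[0.5, 0.2, 0.3])
ranking = sorted(range(len(scores)), key=lambda i: scores[i])
```

With these invented numbers the first brand ranks best because it dominates on the heavily weighted first criterion, which mirrors the paper’s finding that cost-effectiveness drives the recommendation.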
A novel knowledge discovery based approach for supplier risk scoring with application in the HVAC industry
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
This research has led to a novel methodology for the assessment and quantification of supply risks in the supply chain. The research has built on advanced Knowledge Discovery techniques and has resulted in a software implementation. The methodology developed and presented here resembles well-known consumer credit scoring methods, as it leads to a similar metric, or score, for assessing a supplier’s reliability and the risk of conducting business with that supplier. However, the focus is on a wide range of operational metrics rather than just the financial ones that credit scoring techniques typically focus on.
The core of the methodology comprises the application of Knowledge Discovery techniques to extract the likelihood of possible risks from a range of available datasets. In combination with cross-impact analysis, those datasets are examined to establish the inter-relationships and mutual connections among the several factors that are likely to contribute to the risks associated with particular suppliers. This approach is called conjugation analysis. The resulting parameters become the inputs into a logistic regression, which leads to a risk scoring model. The outcome of the process is a standardized risk score analogous to the well-known consumer risk scoring model better known as the FICO score.
The proposed methodology has been applied to an air-conditioning manufacturing company. Two models have been developed. The first identifies supply risks based on data about purchase orders and selected risk factors; with this model, the likelihoods of delivery failures, quality failures and cost failures are obtained. The second model builds on the first but also uses actual data about supplier performance to identify the risks of conducting business with particular suppliers. Its target was to provide quantitative measures of an individual supplier’s risk level.
The supplier risk scoring model was tested on data acquired from the company to analyse its performance. It achieved 86.2% accuracy, with an area under the curve (AUC) of 0.863, well above the 0.5 threshold required for model validity, indicating the developed model’s validity and reliability on future data. The numerical studies conducted with real-life datasets have demonstrated the effectiveness of the proposed methodology and system, as well as its potential for future industrial adoption.
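Since the thesis likens its output to a FICO-style score and validates it with AUC, the sketch below shows, under invented numbers, how a logistic model’s failure probability can be mapped to such a score and how AUC can be computed from first principles. The base score, points-to-double-the-odds value, probabilities and labels are all illustrative assumptions, not the thesis’s figures.

```python
import math

def risk_score(p_fail, base=600, pdo=50):
    """Map a failure probability to a credit-style score: `base` points at
    even odds, plus `pdo` points each time the odds of being good double."""
    odds_good = (1 - p_fail) / max(p_fail, 1e-9)
    return base + pdo * math.log2(odds_good)

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical suppliers: label 1 = reliable, scored from failure probability.
labels = [1, 1, 0, 1, 0, 0]
scores = [risk_score(p) for p in [0.05, 0.50, 0.60, 0.20, 0.45, 0.70]]
model_auc = auc(labels, scores)
```

Because the score is a monotone transform of the predicted probability, the AUC of the score equals the AUC of the underlying model, which is why the thesis can report 0.863 for the scoring model directly.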
ETL for data science? A case study
Big data has driven data science development and research in recent years. However, there is a problem: most data science projects do not make it to production. This can happen because many data scientists do not use a reference data science methodology. Another aggravating element is the data itself: its quality and processing. The problem can be mitigated through research, progress and the documentation of case studies on the topic, fostering knowledge dissemination and reuse. In particular, data mining can benefit from the knowledge of other mature fields that explore similar matters, such as data warehousing. To address the problem, this dissertation performs a case study of the project “IA-SI - Artificial Intelligence in Incentives Management”, which aims to improve the management of European grant funds through data mining. The key contributions of this study, to academia and to the project’s development and success, are: (1) a combined process model of the most widely used data mining process models and their tasks, extended with the ETL subsystems and other selected data warehousing best practices; (2) the application of this combined process model to the project and all its documentation; (3) a contribution to the project’s prototype implementation regarding the data understanding and data preparation tasks. This study concludes that CRISP-DM is still a reference, as it includes all the other data mining process models’ tasks and detailed descriptions, and that its combination with data warehousing best practices is useful to the IA-SI project and potentially to other data mining projects.
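The ETL-flavoured data preparation the dissertation maps into CRISP-DM can be sketched in miniature. The snippet below is a hypothetical extract-transform-load pipeline with a basic data-quality screen; the field names and records are invented, since the IA-SI project’s real schema is not described in the abstract.

```python
# Hypothetical minimal ETL sketch: extract raw grant records, screen out
# low-quality rows (the data preparation task), load a clean table.

def extract(rows):
    """Stand-in for reading records from a source system."""
    return list(rows)

def transform(rows):
    """Apply simple data-quality rules and normalise the amount field."""
    clean = []
    for r in rows:
        if r.get("amount") is None:
            continue                      # quality screen: missing value
        amount = float(r["amount"])
        if amount < 0:
            continue                      # quality screen: invalid value
        clean.append({**r, "amount": round(amount, 2)})
    return clean

def load(rows, warehouse):
    """Append clean rows to the target table; return the row count."""
    warehouse.extend(rows)
    return len(rows)

warehouse = []
raw = [{"id": 1, "amount": "1000.50"},
       {"id": 2, "amount": None},        # dropped by the quality screen
       {"id": 3, "amount": "-5"}]        # dropped by the quality screen
loaded = load(transform(extract(raw)), warehouse)
```

In the combined process model the study proposes, each of these three steps would be expanded into the corresponding ETL subsystems and documented as CRISP-DM data preparation tasks.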
Knowledge Discovery Database (KDD)-Data Mining Application in Transportation
In this paper, an understanding and a review of data mining (DM) development and its applications in logistics, and specifically in transportation, are presented. Even though data mining has become a major component of various business processes and applications, the benefits and real-world expectations are important to consider. It is also surprising that very little is known to date about the usefulness of applying data mining in transport-related research. In the literature, the frameworks for carrying out knowledge discovery and data mining have been revised over the years to meet business expectations. In this paper, we apply CRISP-DM to formulate an effective tire maintenance strategy within the context of a Malaysian logistics company. The results of applying CRISP-DM to tire maintenance decisions are presented and discussed.
Commodities and Linkages: Industrialisation in Sub-Saharan Africa
In a complementary Discussion Paper (MMCP DP 12 2011) we set out the reasons why we believe that there is extensive scope for linkage development into and out of SSA’s commodities sectors. In this Discussion Paper, we present the findings of our detailed empirical enquiry into the determinants of the breadth and depth of linkages in eight SSA countries (Angola, Botswana, Gabon, Ghana, Nigeria, South Africa, Tanzania, and Zambia) and six sectors (copper, diamonds, gold, oil and gas, mining services and timber). We conclude from this detailed research that the extent of linkages varies as a consequence of three factors which intrinsically affect their progress: the passage of time, the complexity of the sector and the level of capabilities in the domestic economy. However, beyond this we identify four sets of related factors which determined the nature and pace of linkage development. The first is the structure of ownership, both in lead commodity producing firms and in their suppliers and domestic customers. The second is the nature and quality of both hard infrastructure (for example, roads and ports) and soft infrastructure (for example, the efficiency of customs clearance). The third is the availability of skills and the structure and orientation of the National System of Innovation in the domestic economy. The fourth, and overwhelmingly important, contextual factor is policy. This reflects policy towards the commodity sector itself, and policy which affects the three contextual drivers, namely ownership, infrastructure and capabilities. As a result of this comparative analysis we provide an explanation of why linkage development has been progressive in some economies (such as Botswana) and regressive in others (such as Tanzania). This cluster of factors also explains why the breadth and depth of linkages is relatively advanced in some countries (such as South Africa), and at a very nascent stage in others (such as Angola).
Use of Audit Data to Improve the Supply Chain Performance
Dissertation presented as a partial requirement for obtaining a Master’s degree in Information Management, specialization in Knowledge Management and Business Intelligence.
In recent decades, globalization and digitalization have been two of the main drivers of the increase in supply chain complexity, altering industries through the massive amount of information available. This complexity has become harmful for companies that do not understand how to use data and information as a competitive advantage, increasing the risk and costs associated with their processes and decreasing effectiveness and efficiency. We look to the concept and area of internal auditing, and to process mining techniques, as a solution to revert this situation. While research has focused on different, mostly narrow, aspects of these areas, and solution-oriented, more practical approaches can be found and applied in a broader environment, a practical solution that incorporates these areas into the supply chain is hard to find. Therefore, following a design science research methodology, this study proposes an iterative framework that serves as a guide for an organization that wants to incorporate new technologies into its supply chain processes while making the best of the massive amount of information available, using internal auditing with a focus on process mining techniques. The framework provides a chain of steps that can be adapted by the company during the transformational process, guaranteeing a smooth transition from traditional systems to a more modern and flexible architecture.
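The process-mining building block behind such a framework is typically the directly-follows graph mined from an event log. The sketch below is a minimal, generic version of that step over an invented purchase-order log; the case identifiers and activity names are illustrative assumptions, not the dissertation’s data.

```python
from collections import Counter

# Hypothetical sketch: derive a directly-follows graph from an event log of
# (case_id, activity) records, a common starting point for discovering the
# as-is process from audit data. The log is assumed ordered by timestamp.

def directly_follows(event_log):
    """Count how often activity b directly follows activity a within a case."""
    by_case = {}
    for case_id, activity in event_log:
        by_case.setdefault(case_id, []).append(activity)
    edges = Counter()
    for trace in by_case.values():
        edges.update(zip(trace, trace[1:]))   # consecutive activity pairs
    return edges

log = [("PO-1", "create"), ("PO-1", "approve"), ("PO-1", "pay"),
       ("PO-2", "create"), ("PO-2", "pay")]   # PO-2 skipped approval
edges = directly_follows(log)
```

A skipped approval surfaces as a direct create-to-pay edge in the graph, which is exactly the kind of control deviation an internal audit of the supply chain would want to flag.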
A Review of Data-driven Robotic Process Automation Exploiting Process Mining
Purpose:
Process mining aims to construct, from event logs, process maps that can help discover, automate, improve, and monitor organizational processes. Robotic process automation (RPA) uses software robots to perform tasks usually executed by humans. It is usually difficult to determine which processes and steps to automate, especially with RPA, and process mining is seen as one way to address this difficulty. This paper aims to assess the applicability of process mining algorithms in accelerating and improving the implementation of RPA, along with the challenges encountered throughout project lifecycles.
Methodology:
A systematic literature review was conducted to examine the approaches in which process mining techniques were used to understand the as-is processes that can be automated with software robots. Eight databases were used to identify papers on this topic.
Findings:
A total of 19 papers, all published since 2018, were selected from 158 unique candidate papers and then analyzed. There is an increase in the number of publications in this domain.
Originality:
The literature currently lacks a systematic review that covers the intersection of process mining and robotic process automation. The literature mainly focuses on methods to record the events that occur at the level of user interactions with the application, and on the preprocessing methods needed to discover routines with the steps that can be automated. Several challenges are faced in preprocessing such event logs, and many lifecycle steps of an automation project are weakly supported by existing approaches.
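One preprocessing idea the reviewed papers describe, discovering candidate routines in a log of user-interface actions, can be sketched as a repeated-subsequence (n-gram) count. This is a deliberately simple stand-in for the specialised routine-discovery methods in the literature; the UI action names and thresholds are invented.

```python
from collections import Counter

# Hedged sketch: candidate RPA routines as frequently repeated fixed-length
# subsequences of UI actions. Real approaches handle noise, interleaving and
# variable-length routines; this only illustrates the core counting idea.

def candidate_routines(actions, length=3, min_count=2):
    """Return action subsequences of `length` seen at least `min_count` times."""
    grams = Counter(
        tuple(actions[i:i + length])
        for i in range(len(actions) - length + 1)
    )
    return {g: c for g, c in grams.items() if c >= min_count}

# A clerk repeating the same copy-paste routine twice (invented example).
ui_log = ["open_mail", "copy_id", "open_erp", "paste_id", "save",
          "open_mail", "copy_id", "open_erp", "paste_id", "save"]
routines = candidate_routines(ui_log, length=4)
```

The repeated four-step copy-paste pattern surfaces with count 2, marking it as a candidate for a software robot; on real interaction logs the hard part, as the review notes, is the preprocessing that makes such repetitions visible at all.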
Knowledge discovery for moderating collaborative projects
In today's global market environment, enterprises are increasingly turning towards collaboration in projects to leverage their resources, skills and expertise, and simultaneously address the challenges posed by diverse and competitive markets. Moderators, which are knowledge-based systems, have successfully been used to support collaborative teams by raising awareness of problems or conflicts. However, the functioning of a Moderator is limited by the knowledge it has about the team members. Knowledge acquisition, learning and the updating of knowledge are the major challenges for a Moderator's implementation. To address these challenges, a Knowledge discOvery And daTa minINg inteGrated (KOATING) framework is presented for Moderators, enabling them to continuously learn from the operational databases of the company and semi-automatically update the corresponding expert module. The architecture for the Universal Knowledge Moderator (UKM) shows how existing Moderators can be extended to support global manufacturing.
A method for designing and developing the knowledge acquisition module of the Moderator, supporting manual and semi-automatic updating of knowledge, is documented using the Unified Modelling Language (UML). UML has been used to explore the static structure and dynamic behaviour, and to describe the system analysis, system design and system development aspects of the proposed KOATING framework. The proof of design is presented using a case study of a collaborative project in the form of a construction project supply chain. It is shown that Moderators can "learn" by extracting various kinds of knowledge from Post Project Reports (PPRs) using different types of text mining techniques. Furthermore, it is also proposed that knowledge discovery integrated Moderators can be used to support and enhance collaboration by identifying appropriate business opportunities and the corresponding partners for the creation of a virtual organization. A case study is presented in the context of a UK-based SME. Finally, the thesis concludes by summarizing the work, outlining its novelties and contributions, and recommending future research.
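The simplest text-mining step a Moderator could run over Post Project Reports is frequent-term extraction after stop-word removal. The sketch below illustrates only that baseline; the report snippets and stop-word list are invented, and the thesis's actual techniques are richer than this.

```python
import re
from collections import Counter

# Hypothetical sketch: surface the recurring problem terms in PPR text so a
# Moderator can raise awareness of them in later collaborative projects.

STOP = {"the", "a", "of", "was", "to", "and", "in", "on", "due"}

def frequent_terms(reports, top_n=3):
    """Return the top_n non-stop-word terms across all report texts."""
    words = Counter()
    for text in reports:
        words.update(w for w in re.findall(r"[a-z]+", text.lower())
                     if w not in STOP)
    return [w for w, _ in words.most_common(top_n)]

pprs = ["Delivery of steel was delayed due to supplier capacity.",
        "Supplier delayed concrete delivery; schedule slipped."]
terms = frequent_terms(pprs)
```

Even this crude count surfaces "supplier", "delivery" and "delayed" as the recurring issues across the two invented reports, the kind of signal a knowledge-discovery-integrated Moderator would feed into its expert module.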