125,975 research outputs found

    Big data analysis on the business process and management for the store layout and bundling sales

    Purpose – In the retailing industry, the database records the time and place at which each retail transaction is completed. E-business processes increasingly adopt databases from which in-depth customer and sales knowledge can be obtained through big data analysis. Big data analysis of a database system allows a retailer to design and implement business process management (BPM) that maximizes profits, minimizes costs and satisfies customers within a business model. Research on big data analysis for BPM in retailing is therefore a critical issue, and this paper aims to discuss it. Design/methodology/approach – This paper develops a database and an entity-relationship (ER) model, and uses cluster analysis, a classification and regression (C&R) tree and the Apriori algorithm to illustrate big data analysis/data mining results for generating business intelligence and process management, thereby obtaining customer knowledge from the case firm's database system. Findings – Big data analysis/data mining results such as customer profiles, product/brand display classifications and product/brand sales associations can be used to propose alternatives to the case firm for the development of store layout and bundling-sales business processes and management. Originality/value – This paper is an example of developing the BPM of a database model and big data/data mining based on insights from big data analysis applications for store layout and bundling sales in the retailing industry.
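
    As a quick illustration of the kind of association analysis the Apriori algorithm performs for bundling sales, the following is a minimal, self-contained sketch; the transactions and product names are invented for illustration and are not from the case firm's data.

```python
from itertools import combinations

# Hypothetical point-of-sale transactions (invented for illustration).
transactions = [
    {"beer", "chips", "salsa"},
    {"beer", "chips"},
    {"bread", "milk"},
    {"beer", "chips", "bread"},
    {"milk", "chips"},
]

def support(itemset, txns):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(itemset <= t for t in txns) / len(txns)

def frequent_pair_rules(txns, min_support=0.4):
    """Frequent 2-itemsets and the confidence of the rule A -> B."""
    items = set().union(*txns)
    rules = []
    for a, b in combinations(sorted(items), 2):
        pair_sup = support({a, b}, txns)
        if pair_sup >= min_support:
            rules.append((a, b, pair_sup, pair_sup / support({a}, txns)))
    return rules

for a, b, sup, conf in frequent_pair_rules(transactions):
    print(f"{a} -> {b}: support={sup:.2f}, confidence={conf:.2f}")
```

    Products that appear together in frequent itemsets with high confidence are natural candidates for adjacent shelf placement and bundle offers. A full Apriori implementation extends this from pairs to larger itemsets by pruning supersets of infrequent sets.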

    Applied business analytics approach to IT projects – Methodological framework

    The design and implementation of a big data project differs from a typical business intelligence project that might run concurrently within the same organization. A big data initiative typically triggers a large-scale IT project that is expected to deliver the desired outcomes. The industry has identified two major methodologies for running a data-centric project, in particular SEMMA (Sample, Explore, Modify, Model and Assess) and CRISP-DM (Cross-Industry Standard Process for Data Mining). More generally, the professional organizations PMI (Project Management Institute) and IIBA (International Institute of Business Analysis) have defined their methods for project management and business analysis based on current industry best practices. However, big data projects pose new challenges that are not considered by the existing methodologies. Building an end-to-end big data analytical solution for optimization of the supply chain, pricing and promotion, product launch, shop potential and customer value faces both business and technical challenges. The most common business challenges are unclear and/or poorly defined business cases; irrelevant data; poor data quality; overlooked data granularity; improper contextualization of data; unprepared or badly prepared data; non-meaningful results; and lack of skill set. Some of the technical challenges relate to lack of resources and technology limitations; availability of data sources; storage difficulties; security issues; performance problems; limited flexibility; and ineffective DevOps. This paper discusses an applied business analytics approach to IT projects and addresses the aspects described above. The authors present their work on the research and development of a new methodological framework and analytical instruments applicable in both business endeavors and educational initiatives targeting big data. The proposed framework is based on a proprietary methodology and advanced analytics tools.
    It focuses on the development and implementation of practical solutions for project managers, business analysts, IT practitioners and Business/Data Analytics students. Also discussed are the necessary skills and knowledge for the successful big data business analyst, and some of the main organizational and operational aspects of big data projects, including continuous model deployment.
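
    The SEMMA and CRISP-DM methodologies mentioned above are essentially ordered phase sequences with exit criteria. A minimal sketch of tracking progress through the CRISP-DM phases (the phase names are CRISP-DM's own; the per-phase checklist goals below are illustrative assumptions):

```python
# Phase names from CRISP-DM; the checklist goals are illustrative only.
CRISP_DM_PHASES = [
    ("Business Understanding", "business case defined and agreed"),
    ("Data Understanding", "data sources, quality and granularity reviewed"),
    ("Data Preparation", "data cleaned, contextualized and prepared"),
    ("Modeling", "candidate models built"),
    ("Evaluation", "results checked for business meaning"),
    ("Deployment", "model deployed with continuous monitoring"),
]

def next_phase(completed):
    """Return (name, goal) of the first phase not yet complete, or None."""
    for name, goal in CRISP_DM_PHASES:
        if name not in completed:
            return name, goal
    return None

print(next_phase({"Business Understanding"}))
```

    In practice the phases are iterated rather than executed strictly once each, which is one of the sources of the project-management friction the paper discusses.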

    Human Resource Management and Artificial Intelligence: A Bibliometric Exploration

    The concept of artificial intelligence, a driving force behind human resource management, has recently gained popularity in the academic community. This study explores the intellectual structure of this field using the Scopus database in the subject area of business, management and accounting. Bibliometric analysis, a recent and rigorous method for delving into scientific data, is used in this investigation. The approach is a structured and transparent process divided into four steps: (1) search criteria; (2) selection of database and documents; (3) selection of software and data pre-processing; and (4) analysis of findings. We employ bibliometric mapping to observe the numerous linkages between documents, and performance evaluation to learn about their structure. A total of 67 articles were collected from the Scopus database between 2015 and 2022 using certain keywords (artificial intelligence, expert systems, big data analytics, and human resource management) and some specific filters (subject: business, management and accounting; language: English; document type: article and review article; source type: journal). Ten research clusters were identified: Cluster 1: multi-agent system; Cluster 2: decision support system; Cluster 3: internet of things; Cluster 4: active learning; Cluster 5: decision tree; Cluster 6: optimisation; Cluster 7: software design; Cluster 8: data mining; Cluster 9: cloud computing; Cluster 10: human-robot interaction. The findings could help researchers and practitioners in the HRM field to extend their knowledge and understanding of AI and HRM research. This study can provide notable guidance and future directions for firms expanding the use of AI in HRM. Keywords: artificial intelligence, human resource management, bibliometric analysis.
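
    Bibliometric mapping of the kind described rests on keyword co-occurrence counts. A minimal sketch, using invented keyword lists rather than the actual Scopus records:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-article keyword lists, standing in for Scopus records.
articles = [
    ["artificial intelligence", "human resource management", "data mining"],
    ["artificial intelligence", "decision support system"],
    ["big data analytics", "human resource management"],
    ["artificial intelligence", "data mining"],
]

# Count each unordered keyword pair once per article.
cooccurrence = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        cooccurrence[(a, b)] += 1

for pair, n in cooccurrence.most_common(3):
    print(pair, n)
```

    Clustering tools such as VOSviewer operate on exactly this kind of co-occurrence matrix, grouping keywords that frequently appear together into the research clusters listed above.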

    Improving data preparation for the application of process mining

    Immersed in what is already known as the fourth industrial revolution, automation and data exchange are taking on a particularly relevant role in complex environments such as industrial manufacturing or logistics. This digitisation and transition to the Industry 4.0 paradigm is leading experts to analyse business processes from other perspectives. Consequently, where management and business intelligence used to dominate, process mining appears as a link, building a bridge between both disciplines to unite and improve them. This new perspective on process analysis helps to improve strategic decision making and competitive capabilities. Process mining brings together the data and process perspectives in a single discipline that covers the entire spectrum of process management. Through process mining, and based on observations of their actual operations, organisations can understand the state of their operations, detect deviations and improve their performance based on what they observe. In this way, process mining has become an ally, occupying a large part of current academic and industrial research. However, although this discipline is receiving more and more attention, it presents severe application problems when implemented in real environments. The variety of input data in terms of form, content, semantics and levels of abstraction makes the execution of process mining tasks in industry an iterative, tedious and manual process, requiring multidisciplinary experts with extensive knowledge of the domain, process management and data processing. Currently, although there are numerous academic proposals, there are no industrial solutions capable of automating these tasks.
    For this reason, in this thesis by compendium we address the problem of improving business processes in complex environments through a study of the state of the art and a set of proposals that improve relevant aspects of the process life cycle, from the creation of logs, through log preparation and process quality assessment, to the improvement of business processes. First, a systematic literature review was carried out to gain in-depth knowledge of the state of the art in this field and of the different challenges the discipline faces. This in-depth analysis allowed us to detect a number of challenges that have not been addressed, or have received insufficient attention, of which three were selected and presented as the objectives of this thesis. The first challenge concerns assessing the quality of the input data, known as event logs: whether techniques for improving an event log need to be applied must be decided from the quality level of the initial data. This thesis therefore presents a methodology and a set of metrics that support the expert in selecting which technique to apply to the data according to the quality estimate at each moment, another challenge identified in our analysis of the literature. Likewise, a set of metrics for evaluating the quality of the resulting process models is also proposed, with the aim of assessing whether improving the quality of the input data has a direct impact on the final results. The second challenge identified is the need to improve the input data used in the analysis of business processes. As in any data-driven discipline, the quality of the results strongly depends on the quality of the input data, so the second challenge addressed is improving the preparation of event logs.
    The contribution in this area is the application of natural language processing techniques to relabel activities from textual descriptions of process activities, as well as the application of clustering techniques to simplify the results, generating models that are more understandable from a human point of view. Finally, the third challenge detected relates to process optimisation, so we contribute an approach for optimising the resources associated with business processes which, by including decision-making in the creation of flexible processes, enables significant cost reductions. Furthermore, all the proposals made in this thesis were designed and validated in collaboration with experts from different fields of industry, and have been evaluated through real case studies in public and private projects in collaboration with the aeronautical industry and the logistics sector.
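
    One simple example of an event-log quality metric of the general kind discussed above is field completeness: the share of events in which every required attribute is present. The sketch below uses invented log rows and field names, and is not the thesis's own metric set.

```python
# Hypothetical event-log rows (case id, activity, timestamp); an event is
# usable for process mining only if all three fields are present.
event_log = [
    {"case": "C1", "activity": "Create Order", "timestamp": "2023-01-05T09:00"},
    {"case": "C1", "activity": "Ship Order", "timestamp": "2023-01-06T14:30"},
    {"case": "C2", "activity": "Create Order", "timestamp": None},
    {"case": "C2", "activity": None, "timestamp": "2023-01-07T10:00"},
]

def completeness(log, fields=("case", "activity", "timestamp")):
    """Share of events in which every required field is present."""
    ok = sum(all(e.get(f) is not None for f in fields) for e in log)
    return ok / len(log)

print(f"event completeness: {completeness(event_log):.2f}")
```

    A quality score like this can gate which repair techniques (timestamp imputation, activity relabelling, case reconstruction) are worth applying before discovery.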

    Integration of decision support systems to improve decision support performance

    Decision support systems (DSS) are a well-established research and development area. Traditional isolated, stand-alone DSS have recently been facing new challenges. In order to improve the performance of DSS to meet these challenges, research has been actively carried out to develop integrated decision support systems (IDSS). This paper reviews current research efforts regarding the development of IDSS. The focus of the paper is on the integration aspect of IDSS from multiple perspectives, and on the technologies that support this integration. More than 100 papers and software systems are discussed. Current research efforts and the development status of IDSS are explained, compared and classified. In addition, future trends and challenges in integration are outlined. The paper concludes that, by addressing integration, better support will be provided to decision makers, with the expectation of both better decisions and improved decision-making processes.

    The necessities for building a model to evaluate Business Intelligence projects - Literature Review

    In recent years, Business Intelligence (BI) systems have consistently been rated as one of the highest priorities of Information Systems (IS) and business leaders. BI allows firms to apply information to support their processes and decisions by combining capabilities in both organizational and technical issues. Many companies spend a significant portion of their IT budgets on business intelligence and related technology. Evaluation of BI readiness is vital because it serves two important goals. First, it reveals the gap areas where a company is not ready to proceed with its BI efforts; by identifying BI readiness gaps, wasted time and resources can be avoided. Second, the evaluation indicates what is needed to close the gaps and implement BI with a high probability of success. This paper presents an overview of BI and of the necessities for evaluating readiness. Key words: Business intelligence, Evaluation, Success, Readiness. Comment: International Journal of Computer Science & Engineering Survey (IJCSES) Vol.3, No.2, April 201
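
    A readiness evaluation of the sort described amounts to scoring dimensions and flagging those that fall below an acceptance threshold. The dimensions, scores and threshold below are invented for illustration and are not taken from the paper.

```python
# Illustrative BI readiness scores per dimension (0.0 to 1.0, assumed values).
readiness = {
    "data quality": 0.8,
    "executive sponsorship": 0.4,
    "technical infrastructure": 0.7,
    "analytical skills": 0.3,
}

# Dimensions scoring below the threshold are readiness gaps to close
# before proceeding with the BI effort.
THRESHOLD = 0.6
gaps = [dim for dim, score in readiness.items() if score < THRESHOLD]
print("readiness gaps:", gaps)
```

    The two goals from the abstract map directly onto this sketch: the `gaps` list shows where the company is not ready, and the shortfall per dimension indicates what must be closed before implementation.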

    A strategic analytics methodology

    University of Technology, Sydney. Faculty of Information Technology. Commercial organisations depend on generating profit from competitive advantage. Central to this approach is the Strategic Planning Cycle (SPC). The SPC converts new information and new subject matter expertise into competitive knowledge, and then converts that knowledge into executable solutions best suited to the organisation's internal and external circumstances and resources. The SPC also maintains the relevance and efficiency of the executed solutions over time. To optimise competitiveness, organisations seek to improve the SPC in a number of ways. First, they improve the quality of the informational inputs to the SPC. Second, they improve the quality of the knowledge they develop from that information. Third, they optimise the executability of the solutions based on that knowledge for the organisation's particular circumstances and resources. Fourth, they improve the solutions over time, maintaining competitiveness. All four ways of improving the SPC are supported by data analytics, so it is necessary to keep improving the integration of data analytics with the SPC. Data mining is an advanced analytics approach which has been shown to support the SPC. Recognising the complexity of integrating data analytics with the business at the turn of the 21st century, the analytics community developed data mining project methodologies to facilitate the integration. The most widely published methodology is CRISP-DM. SAS Institute's SAS Data Mining Projects Methodology (SDMPM) is a second, albeit proprietary, methodology which is also widely used. Despite the availability of packaged data mining software and project methodologies for more than a decade now, organisations still find the integration of data mining with the SPC complex and daunting.
    The current situation is that business leaders and data analysts often express the need for better integration of data analytics with the SPC and business goals. The researcher hypothesised that the data mining project methodologies themselves may be a major contributor to this situation, and therefore formulated the research objective of evaluating data mining methodology for its support of the SPC process. The CRISP-DM methodology was chosen for evaluation because it is in the public domain and therefore available to other researchers. (The researcher has evaluated SDMPM in a separate paper.) The research method chosen was Participatory Action Research, specifically action science or expert reflection-in-action. The research was industry-based, using data from a real-life Telco customer retention management problem. The researcher and the Telco formulated a data analytics project using CRISP-DM, in support of the Telco's strategic initiative to drastically reduce customer churn in their consumer business. The data mining project would support the initiative in three ways. First, it would predict customer churn behaviour within an upcoming time window. Second, it would segment the most at-risk customers in strategic marketing dimensions. Third, it would profile the segments in the dimensions required for retention campaign re-design. Using expert reflection-in-action, we evaluated the operating and strategic outcome for the Telco from the project formulated with CRISP-DM. The research finding was that the project based on CRISP-DM would be limited in its executability and strategic impact, severely restricting the competitive advantage realisable from the project.
    Our research identified six key limitations of CRISP-DM in the SPC environment, namely the lack of: a diagnostic technique for defining the project's business goals or business deliverables, i.e. the informational and marketing components required for the strategic initiative; the introduction of new business and analytics subject matter expertise into the project environment, which relates to increasing the understanding of the business problem and its possible solutions through new marketing and data mining subject matter expertise; a mapping technique between the project's business deliverables and the supporting data mining plan, to assure that the data analytics best support the project's business deliverables; the knowledge management activities required by the SPC for assessing discovered information against business deliverables, environmental and circumstantial factors, for adapting the information, and for developing competitive, executable business solutions; monitoring and control of business and data mining solutions over time for effectiveness and efficiency; and the handling of a number of soft project and business solution implementation issues. The main research goal, which flowed from this finding, was to develop a new, more potent data mining project methodology for the SPC environment. In developing this methodology, the researcher used concepts from the Business, Knowledge Discovery and Data Mining literature, also drawing on his previous corporate management experience and MBA qualification. The researcher called the new method the Strategic Analytics Method (SAM). Essentially, SAM is the integration of a data analytics project methodology with a proven SPC tool known as the Strategic Planning Method (SPM). SPM is a generic decision-making process designed to produce competitive outcomes under conditions of uncertainty and limited resources, and is widely used in various guises in business, software engineering, the military and many other applications. SAM presents a major departure from CRISP-DM's data centricity, to a project centred on the project's business deliverables.
    SAM is targeted at data miners and data analysts working in a commercial environment, and at business intelligence practitioners. Practically, SAM contributes the following to data mining project methodology: moving the focus from data-related activities to business deliverables; insights about the restrictive impact of the pre-project status quo on the results of the project, the dimensions of the status quo which must be defined into a business problem, and how to achieve that definition; a technique for injecting new business and analytics subject matter into the stale business environment, to enable competitive breakthrough; a technique for developing competitive business deliverables or goals for the project, which includes considering the new subject matter and overcoming the restrictions presented by the current understanding of the status quo; a mapping technique between the project's business deliverables and the data mining plan, which assures that the data mining outputs optimally support the attainment of the business deliverables; a technique for assessing discovered information for its relevance to the business deliverables; knowledge management activities for developing the discovered information into competitive business solutions which are executable under the organisation's limited resources and limiting circumstances; substantial qualitative and quantitative techniques for developing monitoring and control plans for both the analytics and the business solution; and activities which pro-actively manage soft issues before they impact the project negatively. For instance, we reframe data preparation activities as a process which gradually reduces the project risk associated with the data.
    This offers a more understandable and acceptable justification to the business audience for this resource-intensive part of data mining projects. SAM also contributes insights for distinguishing between iteration and repetition of activities on advanced SPC projects, and a technique for knowing when to start and stop iterating or repeating; this distinction provides contextual vocabulary for communicating with the business about the required project effort. The research validates SAM on the same Telco ABC problem that was used for evaluating CRISP-DM. The validation came from being able to formulate a project using SAM in which we: assisted Telco ABC in breaking through their limited pre-project marketing perceptions and expectations, to formulate business deliverables based on new marketing and analytics subject matter, which constituted competitiveness in customer retention management; formulated and executed a data mining project which produced the information required by the business deliverables; improved the Telco's calculation of the extent of the problem; developed knowledge from the discovered information which complemented the applicable new marketing subject matter; developed the knowledge into a competitive retention management solution executable under the Telco's limiting circumstances and limited campaign resources, presented as new marketing objectives and strategies and developed into a retention campaign strategy with various key components; developed a comprehensive monitoring and control plan for the campaigns and the operationalised data analytics solution; and quantified the project ROI as about 187 times the investment.
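
    The three project outputs described for the Telco case (churn prediction, at-risk segmentation and segment profiling) can be caricatured in a few lines. The customer fields, scoring rules and segment names below are invented illustrations and stand in for the actual models used in the thesis.

```python
# Hypothetical customer records; fields and rules are illustrative only.
customers = [
    {"id": 1, "tenure_months": 3, "complaints": 2, "monthly_spend": 80},
    {"id": 2, "tenure_months": 48, "complaints": 0, "monthly_spend": 120},
    {"id": 3, "tenure_months": 6, "complaints": 1, "monthly_spend": 30},
]

def churn_risk(c):
    """Toy additive risk score in [0, 1]: short tenure and complaints raise risk."""
    risk = 0.0
    if c["tenure_months"] < 12:
        risk += 0.5
    risk += min(c["complaints"], 3) * 0.15
    return min(risk, 1.0)

def segment(c):
    """Cross churn risk with spend to form strategic marketing segments."""
    high_risk = churn_risk(c) >= 0.5
    high_value = c["monthly_spend"] >= 75
    return ("at-risk, high-value" if high_risk and high_value
            else "at-risk, low-value" if high_risk
            else "stable")

for c in customers:
    print(c["id"], segment(c))
```

    A retention campaign would then be re-designed per segment, e.g. prioritising the at-risk, high-value group, which is where the business deliverables rather than the data drive the analytics plan.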

    A comparison of theory and practice in market intelligence gathering for Australian micro-businesses and SMEs

    Recent government-sponsored research has demonstrated that there is a gap between the theory and practice of market intelligence gathering within Australian micro, small and medium businesses (SMEs). Typically, there is a significant amount of information in the literature about 'what needs to be done'; however, there is little insight into how market intelligence gathering should occur. This paper provides a novel insight into, and a comparison between, the theory and practice of market intelligence gathering by micro-businesses and SMEs in Australia, and demonstrates an anomaly in so far as the literature typically does not match what actually occurs in practice. A model for market intelligence gathering for micro-businesses and SMEs is also discussed.