692 research outputs found

    Finding Influential Users in Social Media Using Association Rule Learning

    Influential users play an important role in online social networks, since users tend to have an impact on one another. The proposed work therefore analyzes users and their behavior in order to identify influential users and predict user participation. The success of a social media site normally depends on the activity level of the participating users, so it is of interest, for both online social networking sites and individual users, to find out whether a topic will be interesting or not. In this article, we propose association rule learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified by association rule learning were compared to the results from degree centrality and PageRank centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning, and they also indicate a lower execution time compared to state-of-the-art methods.
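    The approach above can be sketched in miniature. Assuming co-participation in discussion threads as the association context (the abstract does not specify the paper's exact transaction encoding), the toy example below mines one-to-one rules and contrasts the result with degree centrality on the co-participation graph; all user names, data, and the confidence threshold are hypothetical:

    ```python
    from itertools import combinations
    from collections import Counter, defaultdict

    # Hypothetical toy data: which users posted in which discussion threads.
    threads = [
        {"ann", "bob"},
        {"ann", "eve"},
        {"ann", "dan"},
        {"ann", "bob", "eve"},
        {"bob", "eve"},
    ]

    # Support counts for single users and user pairs across threads.
    single = Counter(u for t in threads for u in t)
    pair = Counter(frozenset(p) for t in threads
                   for p in combinations(sorted(t), 2))

    # One-to-one association rules u -> v with confidence = supp(u,v)/supp(u).
    # A user who often appears as a high-confidence rule consequent is treated
    # as influential: other users' participation "implies" theirs.
    min_conf = 0.5
    influence = defaultdict(float)
    for p, s in pair.items():
        u, v = tuple(p)
        for a, b in ((u, v), (v, u)):
            conf = s / single[a]
            if conf >= min_conf:
                influence[b] += conf

    # Degree centrality (unique co-participants), for comparison.
    degree = Counter()
    for p in pair:
        for u in p:
            degree[u] += 1

    print(max(influence, key=influence.get))  # most influential by rule score
    print(max(degree, key=degree.get))        # most central by degree
    ```

    On this toy data both measures agree, which mirrors the abstract's claim that rule mining recovers the same influential users as centrality-based methods.
    
    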

    A review of the use of artificial intelligence methods in infrastructure systems

    The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the ‘system of systems’ approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with the fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, owing to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work beyond individual economic sectors. To enable solutions to be implemented in real-world infrastructure systems, research will need to move away from a siloed view and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems.

    Customer lifetime value: a framework for application in the insurance industry - building a business process to generate and maintain an automatic estimation agent

    Research project submitted in partial fulfilment of the Master's degree in Statistics and Information Management, specialization in Knowledge Management and Business Intelligence. In recent years the topic of Customer Lifetime Value (CLV), or in its expanded version, Customer Equity (CE), has become popular as a strategic tool across several industries, in particular in retail and services. Although the core concepts of CLV modelling have been studied for several years and the mathematics that underpins the concept is well understood, the application to specific industries is not trivial. The complexities associated with developing a CLV programme as a business process are not insignificant, giving rise to a myriad of obstacles to its implementation. This research project builds a framework to develop and implement the CLV concept as a maintainable business process, with a focus on the insurance industry, in particular the non-life line of business. Key concepts, such as churn modelling, premium stationarity of the portfolio, fiscal policies, and balance sheet information, must be integrated into the CLV framework. In addition, an automatic estimation machine (AEM) is developed to standardize CLV calculations. The concept of the AEM is important, given that CLV information “must be fit for purpose” when used in other business processes. The field work is carried out in a Portuguese bancassurance company which is part of an important Portuguese financial group. Firstly, this is done by investigating how to translate and apply the known CLV concepts to the insurance industry context. Secondly, a sensitivity study is conducted to establish the optimum parameter strategy, incorporating and comparing several data mining concepts applied to churn prediction and customer base segmentation. Scenarios for balance sheet information usage and other actuarial concepts are analyzed to calibrate the cash flow component of the CLV framework.
Thirdly, an Automatic Estimation Agent is defined for application to the current or the expanding firm portfolio, and the advantages of using the SOA approach for deployment are also verified. Additionally, a comparative impact study is carried out between two valuation views: the premium/cost-driven versus the CLV-driven. Finally, a framework for a BPM is presented, not only for building the AEM but also for maintaining it according to an explicit performance threshold.
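    The core CLV calculation underlying such a framework can be sketched as a churn-adjusted discounted cash flow. The function below is a minimal illustration with hypothetical figures, not the company's actual AEM logic:

    ```python
    def clv(annual_margin, retention, discount, horizon=30):
        """Discounted sum of expected future margins for one policyholder.

        annual_margin: expected net margin per policy year (premium minus
                       expected claims and costs)
        retention:     probability the policy is renewed each year
                       (1 - churn rate, e.g. taken from a churn model)
        discount:      annual discount rate
        horizon:       number of policy years considered
        """
        return sum(
            annual_margin * retention ** t / (1 + discount) ** t
            for t in range(horizon)
        )

    # Hypothetical non-life policyholder: 120 EUR yearly margin,
    # 85% retention, 4% discount rate.
    value = clv(annual_margin=120.0, retention=0.85, discount=0.04)
    print(round(value, 2))
    ```

    Because the churn model feeds directly into the `retention` parameter, the sensitivity study mentioned above amounts to re-running this calculation across plausible parameter ranges.
    
    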

    Can bank interaction during rating measurement of micro and very small enterprises ipso facto determine the collapse of PD status?

    This paper begins with an analysis of trends over the period 2012-2018 for total bank loans, non-performing loans, and the number of active, working enterprises. A review survey was conducted on national data from Italy, with a comparison developed on a local subset from the Sardinia Region. Empirical evidence appears to support the hypothesis of the paper: can the rating class assigned by banks using current IRB and A-IRB systems to micro and very small enterprises, whose ability to replace financial resources through endogenous means is structurally impaired, ipso facto orient the performance results toward the PD assigned by the algorithm, thereby upending the principle of cause and effect? The thesis is developed through mathematical modeling that demonstrates the effect of the measurement tool (the rating algorithm applied by banks) on the collapse of the loan status (default, performing, or some intermediate point) of the assessed micro-entity. In conclusion, emphasis is given to the phenomenon using evidence of the intrinsically mutualistic link between the two populations of banks and (micro) enterprises, provided by a system of differential equations.
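    A classical way to express such a mutualistic link between two populations is a Lotka-Volterra-type system. The equations below are a generic sketch of this class of model, with hypothetical symbols, not the specific system used in the paper:

    ```latex
    \begin{aligned}
    \frac{dB}{dt} &= B\,(r_B - s_B B + a_B E),\\
    \frac{dE}{dt} &= E\,(r_E - s_E E + a_E B),
    \end{aligned}
    ```

    where \(B\) and \(E\) denote the sizes of the bank and micro-enterprise populations, \(r_B, r_E\) their intrinsic growth rates, \(s_B, s_E\) self-limitation terms, and \(a_B, a_E > 0\) the mutualistic benefit each population draws from the other; each population's growth collapses when its partner population shrinks.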

    Data- and Value-Driven Software Engineering with Deep Customer Insight : Proceedings of the Seminar No. 58314308

    There is a need in many software-based companies to evolve their software development practices towards continuous integration and continuous deployment. This allows a company to frequently and rapidly integrate and deploy its work, and consequently also opens opportunities for getting feedback from customers on a regular basis. Ideally, this feedback is used to support design decisions early in the development process, e.g., to determine which features should be maintained over time and which features should be skipped. In more general terms, the entire R&D system of an organization should be in a state where it is able to respond and act quickly based on instant customer feedback, and where actual deployment of software functionality is seen as a way of rapidly experimenting and testing what the customer needs. Experimentation refers here to fast validation of a business model, or more specifically to validating a value hypothesis. Reaching such a state of continuous experimentation poses many challenges for organizations. Selected challenges include how to develop the "right" software while developing software "right", how to have an appropriate tool infrastructure in place, how to measure and evaluate customer value, what the appropriate feedback systems are, how to improve the velocity of software development, how to increase the business hit rate with new products and features, how to integrate such experiments into the development process, and how to link knowledge about value for users or customers to higher-level goals of an organization. These challenges are quite new for many software-based organizations and not sufficiently understood from a software engineering perspective. These proceedings contain selected papers of the student seminar Data- and Value-Driven Software Engineering with Deep Customer Insight that was held at the Department of Computer Science of the University of Helsinki.
    The seminar was held during the fall semester of 2014, from September 1st to December 8th. The papers cover a wide range of topics related to the creation of value in software engineering. Interviews with startups show that emerging companies face a number of key decision points that shape their future. Value has a different meaning in different contexts. Embedded devices can be used to gather data and provide more value to users through analysis and adaptation to circumstances. In entertainment, metrics can give content creators the chance to react to user behavior and provide a more meaningful user experience. Value creation requires an active approach to software development from companies: software engineering processes need to incorporate proper mechanisms to find the correct stakeholders, elicit the requirements that provide the highest value, and successfully implement the necessary changes with short development cycles. When the right building blocks are in place, companies are able to quickly deliver new software and leverage data from their products and services to continuously improve the perceived value of software.

    Systematic Literature Review on Customer Switching Behaviour from Marketing and Data Science Perspectives

    This paper systematically examines the literature in the field of customer switching behavior. Based on the review, it can be concluded that customer switching behavior is a widely researched topic, with a focus on various industries, particularly banking and telecommunications. Research trends in this area have shown a positive direction in recent years, and the amount of research being done in marketing and data science is relatively balanced. In marketing, correlational studies predominate, with a focus on identifying relationships between switching behavior and customer satisfaction, price-related variables, attractiveness of alternatives, service failure, quality, and switching costs. The push-pull-mooring (PPM) model is also gaining popularity as an important development for switching behavior because it considers both push and pull factors. Data science research has shown promising results in predicting customer switching behavior, with each surveyed paper achieving good predictive accuracy. However, research gaps spanning the fields of marketing and data science need to be addressed to provide a comprehensive understanding of the drivers of customer switching behavior. Overall, the review shows that customer switching behavior is an important concern for businesses, and further research in this area is essential to gain a better understanding of customer behavior and to develop effective strategies for retaining customers.

    Data mining in manufacturing: a review based on the kind of knowledge

    In modern manufacturing environments, vast amounts of data are collected in database management systems and data warehouses from all involved areas, including product and process design, assembly, materials planning, quality control, scheduling, maintenance, fault detection, etc. Data mining has emerged as an important tool for knowledge acquisition from manufacturing databases. This paper reviews the literature dealing with knowledge discovery and data mining applications in the broad domain of manufacturing, with special emphasis on the type of functions to be performed on the data. The major data mining functions to be performed include characterization and description, association, classification, prediction, clustering, and evolution analysis, and the papers reviewed have been categorized accordingly. It is shown that there has been rapid growth in the application of data mining in the context of manufacturing processes and enterprises over the last three years. The review reveals the progressive applications and the existing gaps identified in the context of data mining in manufacturing. A novel text mining approach has also been applied to the abstracts and keywords of 150 papers to identify research gaps and to find the linkages between knowledge area, knowledge type, and the applied data mining tools and techniques.

    An academic review: applications of data mining techniques in the finance industry

    With the development of Internet technologies, data volumes are doubling every two years, faster than predicted by Moore’s Law. Big data analytics has become particularly important for enterprise business. Modern computational technologies provide effective tools to help understand the hugely accumulated data and leverage this information to gain insights into the finance industry. In order to obtain actionable insights into the business, data has become the most valuable asset of financial organisations, as there are no physical products in the finance industry to manufacture. This is where data mining techniques come to the rescue, by allowing access to the right information at the right time. These techniques are used by the finance industry in various areas such as fraud detection, intelligent forecasting, credit rating, loan management, customer profiling, money laundering detection, marketing, and prediction of price movements, to name a few. This work surveys the research on data mining techniques applied to the finance industry from 2010 to 2015. The review finds that stock prediction and credit rating have received the most attention from researchers, compared to loan prediction, money laundering, and time series prediction. Due to the dynamics, uncertainty, and variety of the data, nonlinear mapping techniques have been more deeply studied than linear techniques. It has also been shown that hybrid methods are more accurate in prediction, closely followed by neural network techniques. This survey provides an overview of applications of data mining techniques in the finance industry and a summary of methodologies for researchers in the area; in particular, it offers beginners who want to work in computational finance a good overview of data mining techniques in that field.
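    As a concrete reference point for the time-series prediction strand mentioned above, a simple moving-average forecaster illustrates the linear end of the spectrum that the surveyed nonlinear and hybrid methods are typically evaluated against; the price data and window size below are hypothetical:

    ```python
    def moving_average_forecast(prices, window=3):
        """One-step-ahead forecast: mean of the last `window` observations.

        A deliberately simple linear baseline; the nonlinear and hybrid
        methods covered by the survey aim to beat exactly this kind of model.
        """
        if len(prices) < window:
            raise ValueError("need at least `window` observations")
        return sum(prices[-window:]) / window

    # Hypothetical daily closing prices.
    history = [101.0, 103.0, 102.0, 105.0, 104.0]
    print(moving_average_forecast(history))  # mean of the last 3 prices
    ```

    Comparing a candidate model's error against such a baseline is a common way to judge whether its extra complexity is actually paying off.
    
    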