
    The development of a novel standardisation-customisation continuum

    Published work on product-oriented customisation lacks clarity in establishing how it is characterised, how it is bounded, and how increasing levels of customisation should be defined. This paper describes the development of a standardisation-customisation (S-C) continuum consisting of 13 distinct intervals, starting with “standardisation” (the absence of customisation) and ending with “evolution customisation” (the absence of standardisation). Each interval is defined using nine characteristics that collectively set the boundaries of the intervals within the continuum. Analysis of a randomly selected sample of products from a range of industries demonstrated the continuum’s capability to distinguish the associated level of S-C. No single industry investigated develops products at every level of S-C, although the industries do so when taken together. The number of attainable levels of S-C tends to depend on the product’s complexity and number of components. The continuum framework clarifies the concept of customisation, provides a scale for determining a product’s level of customisation, and supports the analysis of markets and industries against the S-C continuum.
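    One way to picture such a continuum is as a mapping from characteristic ratings to an interval index. The sketch below is purely illustrative: the nine-characteristic scoring scheme (each rated 0-12) and the averaging rule are assumptions, not the definitions used in the paper.

```python
# Hypothetical sketch: placing a product on a 13-interval
# standardisation-customisation (S-C) continuum. The scoring scheme
# (nine characteristics, each rated 0-12, averaged) is an illustrative
# assumption, not the paper's actual interval definitions.

INTERVALS = 13  # from "standardisation" (0) to "evolution customisation" (12)

def sc_interval(characteristic_scores):
    """Map nine characteristic ratings (each 0-12) to an S-C interval index."""
    if len(characteristic_scores) != 9:
        raise ValueError("expected nine characteristic ratings")
    avg = sum(characteristic_scores) / 9
    return min(INTERVALS - 1, round(avg))

# A mostly standardised product with a few configurable aspects
# lands near the standardisation end of the scale:
print(sc_interval([0, 0, 1, 2, 0, 1, 0, 3, 2]))  # low interval index
```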

    Intelligent data analysis approaches to churn as a business problem: a survey

    Globalization processes and market deregulation policies are rapidly changing the competitive environments of many economic sectors. The appearance of new competitors and technologies leads to an increase in competition and, with it, a growing preoccupation among service-providing companies with creating stronger customer bonds. In this context, anticipating the customer’s intention to abandon the provider, a phenomenon known as churn, becomes a competitive advantage. Such anticipation can be the result of the correct application of information-based knowledge extraction in the form of business analytics. In particular, the use of intelligent data analysis, or data mining, for the analysis of market survey information can be of great assistance to churn management. In this paper, we provide a detailed survey of recent applications of business analytics to churn, with a focus on computational intelligence methods. This is preceded by an in-depth discussion of churn within the context of customer continuity management. The survey is structured according to the stages identified as basic for building predictive models of churn, as well as according to the different types of predictive methods employed and the business areas of their application.

    The probability of default in internal ratings based (IRB) models in Basel II: an application of the rough sets methodology

    The new Capital Accord of June 2004 (Basel II) opens the way for, and encourages, credit entities to implement their own models for measuring financial risks. In this paper we focus on internal ratings based (IRB) models for the assessment of credit risk, and specifically on the approach to one of their components: the probability of default (PD). The traditional methods used for modelling credit risk, such as discriminant analysis and logit and probit models, rest on a series of statistical restrictions. The rough sets methodology is presented as an alternative to the classic statistical methods, one that avoids their limitations. In our study we apply the rough sets methodology to a database of 106 companies applying for credit, with the object of obtaining the ratios that best discriminate between healthy and failed companies, together with a series of decision rules that help detect potentially defaulting operations, as a first step in modelling the probability of default. Lastly, we compare the results obtained against those of classic discriminant analysis, and conclude that the rough sets methodology presents better classification results in our case. Junta de Andalucía P06-SEJ-0153
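    The core rough-set idea used here can be sketched in a few lines: firms that are indiscernible on the chosen ratios fall into the same equivalence class, and the lower/upper approximations of the "failed" set separate certain from merely possible defaults. The ratios and firms below are invented for illustration; the study's actual 106-company database is not reproduced.

```python
# Minimal rough-set sketch: lower and upper approximations of the set of
# failed firms under an indiscernibility relation induced by two
# discretised ratios. Firm names, ratios, and values are hypothetical.

from collections import defaultdict

firms = {
    "A": (("liquidity", "low"),  ("leverage", "high")),
    "B": (("liquidity", "low"),  ("leverage", "high")),
    "C": (("liquidity", "high"), ("leverage", "low")),
    "D": (("liquidity", "low"),  ("leverage", "low")),
}
failed = {"A", "D"}

# Equivalence classes: firms indiscernible on the chosen ratios.
classes = defaultdict(set)
for name, attrs in firms.items():
    classes[attrs].add(name)

# Lower approximation: classes wholly inside the failed set (certain).
lower = set().union(*(c for c in classes.values() if c <= failed))
# Upper approximation: classes intersecting the failed set (possible).
upper = set().union(*(c for c in classes.values() if c & failed))

print(sorted(lower))  # firms certainly failed, given only these ratios
print(sorted(upper))  # firms possibly failed
```

    Firm A is only in the upper approximation because firm B shares its ratio values but did not fail: the chosen ratios cannot discriminate between them, which is exactly the boundary region that rough sets make explicit.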

    Future benefits and applications of intelligent on-board processing to VSAT services

    The trends and roles of VSAT services in the year 2010 time frame are examined based on an overall network and service model for that period. An estimate of the VSAT traffic is then made, and the service and general network requirements are identified. In order to accommodate these traffic needs, four satellite VSAT architectures based on the use of fixed or scanning multibeam antennas, in conjunction with IF switching or on-board regeneration and baseband processing, are suggested. The performance of each of these architectures is assessed and the key enabling technologies are identified.

    Experimental making in multi-disciplinary research

    For the past three years, Graham Whiteley has been using making in a project to develop a mechanical analogy for the human skeletal arm, to inform the future development of prostheses and other artefacts. Other aspects of the work, such as the use of drawings and the use of a principled approach in the absence of concrete design goals, have been documented elsewhere; this paper concentrates on the central role of making in the process. The paper discusses the role of making in multi-disciplinary research; the craft skills and resources appropriate to each stage of a practice-centred research project in this area; the use of models in an iterative experimental investigation; and the value of models in eliciting knowledge from a broad community of interested parties and experts.

    Customer churn prediction in the banking industry

    Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics. The objective of this project is to create a predictive model that will help decrease customer churn in a Portuguese bank; that is, we intend to identify customers who may be considering closing their checking accounts. So that the bank can take the necessary corrective measures, the model also aims to determine the characteristics of the customers who decided to leave. The model makes use of customer data that the organization already has to hand. Data pre-processing, comprising data cleansing, transformation, and reduction, was the initial stage of the analysis. The dataset is imbalanced, meaning that there is a small number of positive outcomes (churners); under-sampling and other approaches were therefore employed to address this issue. The predictive models used are logistic regression, support vector machines, decision trees, and artificial neural networks, and parameter tuning was conducted for each. In conclusion, the recommended model for customer churn prediction is a support vector machine with a precision of 0.84 and an AUROC of 0.905. These findings will contribute to customer lifetime value, helping the bank better understand its customers' behavior and draw up strategies accordingly.
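    The under-sampling step mentioned above can be sketched with the standard library alone: down-sample the majority class until both classes are the same size. The feature and label names below are invented for illustration; the actual study also trained and tuned the four model families listed.

```python
# Sketch of random under-sampling for an imbalanced churn dataset,
# assuming dict rows with a boolean "churned" label (a hypothetical
# schema, not the bank's actual data).

import random

def undersample(rows, label_key="churned", seed=42):
    """Balance a binary dataset by down-sampling the majority class."""
    pos = [r for r in rows if r[label_key]]
    neg = [r for r in rows if not r[label_key]]
    majority, minority = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    rng = random.Random(seed)  # fixed seed for a reproducible sample
    kept = rng.sample(majority, len(minority))
    return minority + kept

# 3 churners among 100 customers -> 3 churners + 3 sampled non-churners.
customers = [{"churned": True}] * 3 + [{"churned": False}] * 97
balanced = undersample(customers)
print(len(balanced))  # 6
```

    A balanced training set prevents a classifier from scoring well simply by predicting "no churn" for everyone, which matters when precision and AUROC are the selection criteria.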

    A buyer-seller watermarking protocol for digital secondary market

    In the digital rights management value chain, digital watermarking technology plays a very important role in digital product security, especially in usage tracking and the authentication of copyright infringement. However, watermarking procedures can effectively support copyright protection only if they are applied as part of an appropriate watermarking protocol. A number of watermarking protocols have accordingly been proposed in the literature and shown to facilitate the use of digital watermarking for copyright protection; one example is the anonymous buyer-seller watermarking protocol. Although several of the protocols proposed in the literature provide suitable solutions, they are designed mainly for the first-hand market and are unsuitable for second-hand transactions. As the complexity of online transactions increases, so does the size of the digital second-hand market. In this paper, we present a new buyer-seller watermarking protocol that addresses the customer's rights problem in the digital secondary market. The proposed protocol consists of five sub-protocols covering the registration process, the watermarking processes for first-, second-, and third-hand transactions, and the identification and arbitration processes. The paper compares the proposed protocol with the existing state of the art and shows that it meets not only all the buyer's and seller's requirements in the traditional sense but also the same requirements in the secondary market.
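    The traceability idea at the heart of buyer-seller protocols can be illustrated with a toy least-significant-bit watermark: the seller embeds a buyer-specific mark, so a copy found later can be traced during identification and arbitration. Real buyer-seller protocols perform the embedding under encryption so the seller never sees the marked copy; that cryptographic layer, and the sample values below, are outside this sketch.

```python
# Toy LSB watermark illustrating traceability in buyer-seller protocols.
# The content samples and per-buyer mark are invented; real protocols
# embed robustly and under homomorphic encryption.

def embed(samples, mark):
    """Write the mark bits into the least-significant bits of the samples."""
    return [(s & ~1) | b for s, b in zip(samples, mark)]

def extract(samples, length):
    """Read the first `length` least-significant bits back out."""
    return [s & 1 for s in samples[:length]]

content = [200, 130, 57, 88, 241, 16]   # hypothetical media samples
buyer_mark = [1, 0, 1, 1, 0, 1]         # hypothetical per-buyer identifier

marked = embed(content, buyer_mark)
# Identification: a recovered copy's mark matches exactly one buyer.
print(extract(marked, 6) == buyer_mark)  # True
```

    Because each buyer receives a differently marked copy, the extracted mark acts as evidence in the arbitration sub-protocol; the secondary-market problem the paper addresses is keeping this property across resales.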