294 research outputs found
Secret sharing vs. encryption-based techniques for privacy preserving data mining
Privacy-preserving querying and data publishing have been studied in the context of statistical databases and statistical disclosure control. Recently, large-scale data collection and integration efforts have increased privacy concerns, motivating data mining researchers to investigate the privacy implications of data mining and how data mining can be performed without violating privacy. In this paper, we first provide an overview of privacy-preserving data mining focusing on distributed data sources; we then compare two technologies used in privacy-preserving data mining. The first technology is encryption-based and is used in earlier approaches. The second is secret sharing, which has recently been considered a more efficient approach.
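For readers unfamiliar with the secret-sharing approach, the core idea can be sketched with additive secret sharing over a modulus: each party splits its private value into random shares, parties combine shares locally, and only the aggregate is ever reconstructed. This is a minimal illustrative sketch (the modulus, party count, and input values are assumptions, not the protocol from the paper):

```python
import random

MOD = 2**61 - 1  # a large prime modulus; the choice is illustrative

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recombine shares by summing them mod MOD."""
    return sum(shares) % MOD

# Privacy-preserving sum: each party shares its private value, each party
# locally adds the shares it holds, and only the total is reconstructed.
values = [42, 17, 99]                         # private inputs of three parties
all_shares = [share(v, 3) for v in values]    # one row of shares per input
local_sums = [sum(col) % MOD for col in zip(*all_shares)]
total = reconstruct(local_sums)               # equals sum(values) = 158
```

No individual share reveals anything about a single input, which is the efficiency argument for secret sharing over encryption-based circuits: aggregation needs only modular additions rather than cryptographic operations.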
Credit Scoring Using Machine Learning
For financial institutions and the economy at large, the role of credit scoring in lending decisions cannot be overemphasised. An accurate and well-performing credit scorecard allows lenders to control their risk exposure through the selective allocation of credit based on the statistical analysis of historical customer data. This thesis identifies and investigates a number of specific challenges that occur during the development of credit scorecards. Four main contributions are made in this thesis. First, we examine the performance of a number of supervised classification techniques on a collection of imbalanced credit scoring datasets. Class imbalance occurs when there are significantly fewer examples in one or more classes in a dataset compared to the remaining classes. We demonstrate that oversampling the minority class leads to no overall improvement for the best-performing classifiers. We find that, in contrast, adjusting the threshold on classifier output yields, in many cases, an improvement in classification performance. Our second contribution investigates a particularly severe form of class imbalance, which, in credit scoring, is referred to as the low-default portfolio problem. To address this issue, we compare the performance of a number of semi-supervised classification algorithms with that of logistic regression. Based on a detailed comparison of classifier performance, we conclude that both approaches merit consideration when dealing with low-default portfolios. Third, we quantify the differences in classifier performance arising from various implementations of a real-world behavioural scoring dataset. Due to commercial sensitivities surrounding the use of behavioural scoring data, very few empirical studies that directly address this topic have been published. 
This thesis describes the quantitative comparison of a range of dataset parameters impacting classification performance, including: (i) varying durations of historical customer behaviour for model training; (ii) different lengths of time over which a borrower’s class label is defined; and (iii) alternative approaches to defining a customer’s default status in behavioural scoring. Finally, this thesis demonstrates how artificial data may be used to overcome the difficulties associated with obtaining and using real-world data. The limitations of artificial data, in terms of its usefulness in evaluating classification performance, are also highlighted. In this work, we are interested in generating artificial data for credit scoring in the absence of any available real-world data.
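The first contribution's finding, that moving the decision threshold can outperform oversampling on imbalanced data, can be illustrated with a toy sketch. The scores and labels below are hypothetical and not drawn from the thesis datasets; they simply show how a classifier that ranks defaulters well can still score zero at the default 0.5 cutoff:

```python
def confusion(scores, labels, threshold):
    """Count true positives, false positives, and false negatives."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return tp, fp, fn

def f1(scores, labels, threshold):
    """F1 score at a given decision threshold (0.0 when no true positives)."""
    tp, fp, fn = confusion(scores, labels, threshold)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Toy imbalanced portfolio: 2 defaulters (label 1) among 10 borrowers.
# The hypothetical classifier ranks defaulters highest but below 0.5,
# a common symptom when the positive class is rare.
labels = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
scores = [0.45, 0.40, 0.35, 0.30, 0.20, 0.15, 0.10, 0.08, 0.05, 0.02]

f1_default = f1(scores, labels, 0.5)   # no positive predictions at 0.5
f1_tuned   = f1(scores, labels, 0.38)  # lowered cutoff catches both defaulters
```

Threshold tuning leaves the fitted model untouched and only re-maps scores to decisions, which is why it can help where resampling the training data does not.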
R-miss-tastic: a unified platform for missing values methods and workflows
Missing values are unavoidable when working with data. Their occurrence is
exacerbated as more data from different sources become available. However, most
statistical models and visualization methods require complete data, and
improper handling of missing data results in information loss, or biased
analyses. Since the seminal work of Rubin (1976), there has been a burgeoning
literature on missing values with heterogeneous aims and motivations. This has
resulted in the development of various methods, formalizations, and tools
(including a large number of R packages and Python modules). However, for
practitioners, it remains challenging to decide which method is most suited for
their problem, partially because handling missing data is still not a topic
systematically covered in statistics or data science curricula.
To help address this challenge, we have launched a unified platform:
"R-miss-tastic", which aims to provide an overview of standard missing values
problems, methods, how to handle them in analyses, and relevant implementations
of methodologies. In the same perspective, we have also developed several
pipelines in R and Python to allow for a hands-on illustration of how to handle
missing values in various statistical tasks such as estimation and prediction,
while ensuring reproducibility of the analyses. This will hopefully also
provide some guidance on deciding which method to choose for a specific problem
and data. The objective of this work is not only to comprehensively organize
materials, but also to create standardized analysis workflows, and to provide a
common ground for discussions among the community. This platform is thus suited
for beginners, students, more advanced analysts and researchers.Comment: 38 pages, 9 figure
Holistic Influence Maximization: Combining Scalability and Efficiency with Opinion-Aware Models
The steady growth of graph data from social networks has resulted in
widespread research into solutions to the influence maximization
problem. In this paper, we propose a holistic solution to the influence
maximization (IM) problem. (1) We introduce an opinion-cum-interaction (OI)
model that closely mirrors real-world scenarios. Under the OI model, we
introduce a novel problem of Maximizing the Effective Opinion (MEO) of
influenced users. We prove that the MEO problem is NP-hard and cannot be
approximated within a constant ratio unless P=NP. (2) We propose a heuristic
algorithm OSIM to efficiently solve the MEO problem. To better explain the OSIM
heuristic, we first introduce EaSyIM - the opinion-oblivious version of OSIM, a
scalable algorithm capable of running within practical compute times on
commodity hardware. In addition to serving as a fundamental building block for
OSIM, EaSyIM also addresses the scalability aspects of the IM problem, namely
memory consumption and running time.
Empirically, our algorithms keep the deviation in spread within 5% of the
best-known methods in the literature. In
addition, our experiments show that both OSIM and EaSyIM are effective,
efficient, scalable and significantly enhance the ability to analyze real
datasets.
Comment: ACM SIGMOD Conference 2016, 18 pages, 29 figures
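The OSIM and EaSyIM algorithms themselves are not reproduced here; as background, the standard greedy seed-selection baseline for influence maximization under the independent cascade model can be sketched as follows. The graph, propagation probability, and Monte Carlo settings are illustrative assumptions:

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One independent-cascade run; returns the set of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(graph, seeds, p, runs, rng):
    """Monte Carlo estimate of the expected number of activated nodes."""
    return sum(len(simulate_ic(graph, seeds, p, rng)) for _ in range(runs)) / runs

def greedy_im(graph, k, p=0.3, runs=200, seed=0):
    """Greedily pick k seeds maximizing the estimated expected spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(k):
        best = max(nodes - set(chosen),
                   key=lambda u: expected_spread(graph, chosen + [u], p, runs, rng))
        chosen.append(best)
    return chosen

# Hypothetical toy network: node 0 is a hub reaching most others.
g = {0: [1, 2, 3], 1: [4], 2: [4], 3: [5], 5: [6]}
seeds = greedy_im(g, k=2)  # the hub is picked first
```

This greedy baseline is exactly the expensive approach that heuristics like EaSyIM are designed to avoid: each candidate evaluation requires many cascade simulations, which is where the memory and running-time concerns raised above come from.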