    Would credit scoring work for Islamic finance? A neural network approach

    Purpose – The main aim of this paper is to determine whether the decision-making process of Islamic finance houses in the UK can be improved through the use of credit scoring modeling techniques, as opposed to the currently used judgmental approaches. Subsidiary aims are to identify how scoring models can reclassify accepted applicants who are later considered to have bad credit, and how many rejected applicants are later considered to have good credit; and to highlight significant variables that are crucial to accepting and rejecting applicants, which can further aid the decision-making process.
    Design/methodology/approach – A real data-set of 487 applicants is used, consisting of 336 accepted and 151 rejected credit applications made to an Islamic finance house in the UK. To build the proposed scoring models, the data-set is divided into a training sub-set, used to build the scoring models, and a hold-out sub-set, used to test their predictive capabilities: 70 percent of the applicants are used for training and 30 percent for testing. Three statistical modeling techniques, namely Discriminant Analysis (DA), Logistic Regression (LR) and Multi-layer Perceptron (MP) neural networks, are used to build the proposed scoring models.
    Findings – Our findings reveal that the LR model has the highest Correct Classification (CC) rate in the training sub-set, whereas MP outperforms the other techniques and has the highest CC rate in the hold-out sub-set. MP also outperforms the other techniques in predicting the rejected credit applications and has the lowest Misclassification Cost (MC). In addition, results from the MP models show that monthly expenses, age and marital status are the key factors affecting the decision-making process.
    Research limitations/implications – Although our sample is small and restricted to one Islamic finance house in the UK, the results are robust. Future research could enlarge the sample in the UK and internationally, allowing cultural differences to be identified. The results indicate that scoring models can be of great benefit to Islamic finance houses in their decision-making processes of accepting and rejecting new credit applications, and can thus improve their efficiency and effectiveness.
    Originality/value – Our contribution is the first to apply credit scoring modeling techniques in Islamic finance. Also, in building a scoring model, our application takes a different approach by using accepted and rejected credit applications instead of good and bad credit histories. This identifies the opportunity costs of misclassifying credit applications as rejected.
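    The Correct Classification (CC) rate and Misclassification Cost (MC) metrics used to compare the scoring models above can be sketched as follows. This is a minimal illustration with made-up hold-out labels; the function names and cost weights are assumptions, not taken from the paper (the paper's own MC weighting is not reproduced here):

    ```python
    def cc_rate(actual, predicted):
        """Correct Classification rate: share of applications classified correctly."""
        hits = sum(1 for a, p in zip(actual, predicted) if a == p)
        return hits / len(actual)

    def misclassification_cost(actual, predicted, cost_fp=1.0, cost_fn=5.0):
        """Weighted cost of errors: approving an application that should be
        rejected is typically costlier than turning away a creditworthy
        applicant. The weights here are purely illustrative."""
        cost = 0.0
        for a, p in zip(actual, predicted):
            if a == "reject" and p == "accept":
                cost += cost_fn  # accepted an application that should be rejected
            elif a == "accept" and p == "reject":
                cost += cost_fp  # rejected a creditworthy applicant
        return cost

    # Toy hold-out labels (illustrative only, not the paper's data).
    actual    = ["accept", "accept", "reject", "accept", "reject"]
    predicted = ["accept", "reject", "reject", "accept", "accept"]

    print(cc_rate(actual, predicted))                 # 3 of 5 correct -> 0.6
    print(misclassification_cost(actual, predicted))  # 1*1.0 + 1*5.0 = 6.0
    ```

    A model with a high CC rate can still have a high MC if its few errors fall on the costly side, which is why the paper reports both metrics.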

    Improving Software Performance in the Compute Unified Device Architecture

    This paper analyzes several aspects of improving software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management, through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix-transposing application in CUDA. One particular interest was to research how well the optimization techniques applied to a software application written in CUDA scale to the latest generation of general-purpose graphics processing units (GPGPUs), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been a lot of interest in the literature in this type of optimization analysis, but none of the works so far (to the best of our knowledge) has tried to validate whether the optimizations apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales with these software performance improving techniques.
    Keywords: Compute Unified Device Architecture, Fermi Architecture, Naive Transpose, Coalesced Transpose, Shared Memory Copy, Loop in Kernel, Loop over Kernel
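    The progression from a naive transpose to a tiled one, which the paper benchmarks as CUDA kernels, can be illustrated in plain Python. This is a CPU sketch of the access-pattern idea only; the tile size and function names are illustrative, and the actual coalesced CUDA kernels stage tiles in shared memory across a thread block rather than looping sequentially:

    ```python
    def naive_transpose(m):
        """Read rows, write columns. On a GPU the strided column-wise
        writes of this pattern are uncoalesced and waste memory bandwidth."""
        rows, cols = len(m), len(m[0])
        out = [[0] * rows for _ in range(cols)]
        for i in range(rows):
            for j in range(cols):
                out[j][i] = m[i][j]
        return out

    def tiled_transpose(m, tile=2):
        """Process the matrix in tile x tile blocks: the same idea a
        coalesced CUDA transpose applies via shared memory, i.e. stage a
        small tile, then write it out in a contiguous pattern."""
        rows, cols = len(m), len(m[0])
        out = [[0] * rows for _ in range(cols)]
        for bi in range(0, rows, tile):
            for bj in range(0, cols, tile):
                for i in range(bi, min(bi + tile, rows)):
                    for j in range(bj, min(bj + tile, cols)):
                        out[j][i] = m[i][j]
        return out

    m = [[1, 2, 3], [4, 5, 6]]
    print(naive_transpose(m))  # [[1, 4], [2, 5], [3, 6]]
    print(tiled_transpose(m))  # same result, tile-by-tile access order
    ```

    Both functions produce the same transpose; on a GPU, only the tiled access order lets consecutive threads touch consecutive memory addresses, which is the optimization the paper measures across the GTX280 and GTX480.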

    FAKE NEWS DETECTION ON THE WEB: A DEEP LEARNING BASED APPROACH

    The acceptance and popularity of social media platforms for the dispersion and proliferation of news articles have led to the spread of questionable and untrusted information, in part due to the ease with which misleading content can be created and shared among communities. While prior research has attempted to automatically classify news articles and tweets as credible or non-credible, this work complements such research by proposing an approach that utilizes an amalgamation of Natural Language Processing (NLP) and Deep Learning techniques such as Long Short-Term Memory (LSTM). Moreover, in the Information Systems paradigm, the design science research methodology (DSRM) has become the major stream that focuses on building and evaluating an artifact to solve emerging problems; hence, DSRM can accommodate deep learning-based models given the availability of adequate datasets. Two publicly available datasets that contain labeled news articles and tweets have been used to validate the proposed model's effectiveness. This work presents two distinct experiments, and the results demonstrate that the proposed model works well both for long-sequence news articles and for short-sequence texts such as tweets. Finally, the findings suggest that sentiment, tagging, linguistic, syntactic, and text-embedding features have the potential to foster fake news detection by training the proposed model on various dimensionalities to learn the contextual meaning of the news content.
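    A common preprocessing step behind LSTM-based text classifiers like the one described above is turning both long articles and short tweets into fixed-length integer sequences for an embedding layer. The sketch below illustrates that step only; the function names, padding scheme, and toy texts are assumptions for illustration, not details taken from the thesis:

    ```python
    def build_vocab(texts):
        """Map each word to a positive integer index; 0 is reserved for padding
        and for out-of-vocabulary words."""
        vocab = {}
        for text in texts:
            for word in text.lower().split():
                vocab.setdefault(word, len(vocab) + 1)
        return vocab

    def encode(text, vocab, max_len):
        """Convert text to a fixed-length index sequence by truncating long
        inputs and left-padding short ones: the usual input shape for an
        embedding + LSTM layer, whatever the original text length."""
        ids = [vocab.get(w, 0) for w in text.lower().split()][:max_len]
        return [0] * (max_len - len(ids)) + ids

    # Toy corpus (illustrative only).
    texts = ["breaking news shocking claim", "official report confirms claim"]
    vocab = build_vocab(texts)
    print(encode("shocking claim", vocab, 6))  # -> [0, 0, 0, 0, 3, 4]
    ```

    Because both a tweet and an article collapse to the same fixed-length representation, a single model can be trained and evaluated on short-sequence and long-sequence inputs, as the two experiments in this work do.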

    Trustworthiness in Social Big Data Incorporating Semantic Analysis, Machine Learning and Distributed Data Processing

    This thesis presents several state-of-the-art approaches constructed for the purpose of (i) studying the trustworthiness of users on Online Social Network platforms, (ii) deriving concealed knowledge from their textual content, and (iii) classifying and predicting the domain knowledge of users and their content. The developed approaches are refined through proof-of-concept experiments, several benchmark comparisons, and appropriate and rigorous evaluation metrics to verify and validate their effectiveness and efficiency, and hence those of the applied frameworks.