2 research outputs found
Application of Artificial Intelligence Techniques in Credit Risk
This bachelor thesis examines artificial intelligence methods and their application in credit risk modelling, specifically in modelling the probability of default. The theoretical part describes the methods used: logistic regression, random forests, support vector machines and neural networks. In the practical part, these methods are implemented and trained on data from the online peer-to-peer platform Lending Club and from the online competition platform Kaggle. The resulting evaluation metrics are presented at the end, illustrating that AI methods can achieve better results than the commonly used standard, logistic regression.
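A minimal sketch of the kind of comparison the thesis describes: fitting logistic regression and a random forest on a Lending Club-style default table and comparing discrimination by AUC. The file name, column names, hyperparameters and the assumption of already-encoded numeric features are illustrative guesses, not the thesis's actual pipeline.

```python
# Illustrative sketch only; paths, columns and hyperparameters are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical Lending Club-style table with numeric features and a binary
# "default" label (1 = borrower defaulted, 0 = loan repaid).
df = pd.read_csv("lending_club_sample.csv")
X = df.drop(columns=["default"])
y = df["default"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
}

# Compare probability-of-default estimates via ROC AUC on the held-out set.
for name, model in models.items():
    model.fit(X_train, y_train)
    pd_hat = model.predict_proba(X_test)[:, 1]  # estimated probability of default
    print(f"{name}: AUC = {roc_auc_score(y_test, pd_hat):.3f}")
```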
CsFEVER and CTKFacts: Acquiring Czech data for fact verification
In this paper, we examine several methods of acquiring Czech data for
automated fact-checking, which is a task commonly modeled as a classification
of textual claim veracity w.r.t. a corpus of trusted ground truths. We attempt
to collect sets of data in the form of a factual claim, evidence within the ground
truth corpus, and its veracity label (supported, refuted or not enough info).
As a first attempt, we generate a Czech version of the large-scale FEVER
dataset, built on top of the Wikipedia corpus. We take a hybrid approach of machine
translation and document alignment; the approach and the tools we provide can
be easily applied to other languages. We discuss its weaknesses and
inaccuracies, propose a future approach for their cleaning and publish the 127k
resulting translations, as well as a version of this dataset reliably
applicable to the Natural Language Inference task, CsFEVER-NLI.
Furthermore, we collect a novel dataset of 3,097 claims, which is annotated
using the corpus of 2.2M articles of the Czech News Agency. We present its extended
annotation methodology based on the FEVER approach, and, as the underlying
corpus is kept a trade secret, we also publish a standalone version of the
dataset for the Natural Language Inference task, which we call CTKFactsNLI. We
analyze both acquired datasets for spurious cues - annotation patterns leading
to model overfitting. CTKFacts is further examined for inter-annotator
agreement, thoroughly cleaned, and a typology of common annotator errors is
extracted. Finally, we provide baseline models for all stages of the
fact-checking pipeline and publish the NLI datasets, as well as our annotation
platform and other experimental data.Comment: submitted to LREV journal for review, resubmission, changed title
according to reviewer suggestio
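A minimal sketch of the Natural Language Inference formulation behind CsFEVER-NLI and CTKFactsNLI: classifying an (evidence, claim) pair as SUPPORTS, REFUTES, or NOT ENOUGH INFO. The encoder name, label order and example texts are assumptions for illustration, not the paper's released baselines.

```python
# Sketch of the 3-way claim/evidence pair classification; the model name and
# label mapping are placeholders, not the authors' published baselines.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-multilingual-cased"  # placeholder multilingual encoder
LABELS = ["SUPPORTS", "REFUTES", "NOT ENOUGH INFO"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

evidence = "Praha je hlavní město České republiky."
claim = "Hlavním městem České republiky je Brno."

# Encode the pair as one sequence: [CLS] evidence [SEP] claim [SEP]
inputs = tokenizer(evidence, claim, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Without fine-tuning on the NLI data the prediction is essentially random;
# the point is only to show the pair-classification interface.
pred = LABELS[int(logits.argmax(dim=-1))]
print(pred)
```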