9 research outputs found

    Intelligent Automated Small and Medium Enterprise (SME) Loan Application Processing System Using Neuro-CBR Approach

    Developing a group of diverse and competitive small and medium enterprises (SMEs) is a central theme in achieving sustainable economic growth. SMEs are crucial to the economic growth process and play an important role in the country's overall production network. The focus of this study is to develop an automated decision support model for the SME sector that management can use to accelerate loan application processing. The study proposes an intelligent automated SME loan application processing system (i-SMEs), a web-based application for processing and monitoring SME applications using a hybrid intelligent technique that integrates a neural network with case-based reasoning, namely Neuro-CBR. i-SMEs assists SME bank management by reducing both decision-making time and operational cost. It can classify the SME market segment into three distinct groups, MICRO, SMALL and MEDIUM, and can also speed up pre-approval loan processing. The patterns generated by i-SMEs can be transformed into actionable plans that are likely to help the SME Bank.
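
    The abstract does not spell out how the neural and case-based components are wired together, so the Python sketch below is only an illustration of one plausible Neuro-CBR pipeline: a small neural network predicts the market segment, and a nearest-neighbour retrieval step reuses the decisions of similar past cases as a pre-approval suggestion. The features, data and retrieval step are invented for the example and are not taken from i-SMEs.

```python
# Illustrative sketch only: the paper's Neuro-CBR integration details are not
# given in the abstract, so features, data and retrieval are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import NearestNeighbors

# Hypothetical historical SME applications: [annual_sales_M, employees, years_operating]
X_cases = np.array([[0.1, 3, 1], [0.4, 8, 3], [2.5, 40, 6], [7.0, 120, 10]])
segments = np.array(["MICRO", "MICRO", "SMALL", "MEDIUM"])        # market segment labels
outcomes = np.array(["reject", "approve", "approve", "approve"])  # past loan decisions

# Step 1: a neural network learns the market-segment classification.
segment_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
segment_net.fit(X_cases, segments)

# Step 2: case-based reasoning retrieves the most similar past applications
# and reuses their decisions as a pre-approval suggestion.
retriever = NearestNeighbors(n_neighbors=2).fit(X_cases)

def pre_approve(application):
    segment = segment_net.predict([application])[0]
    _, idx = retriever.kneighbors([application])
    return segment, outcomes[idx[0]]

# Segment prediction plus decisions of the two most similar past cases.
print(pre_approve([0.3, 5, 2]))
```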

    Autonomous Visualization

    Contrastive Characters of Spatial Composition Process between Architecturally Trained and Untrained Students

    Inductive logic programming (ILP) was applied to the analysis of spatial composition processes using an architectural space montage technique. The complexly structured data of the spatial composition processes, which consist of many objects, their relationships, and their attributes, were modeled in first-order logic. An architectural space montage technique experiment was conducted with 14 architecturally trained students and 14 untrained students. The experimental cases were analyzed with Progol, an ILP system, which found 513 rules for the trained students and 458 for the untrained students. By comparing these rules, we found contrasting characteristics between the two groups from four points of view: (1) the method of extending chains of miniatures, (2) the relationship used as the basic unit of composition, (3) the type of miniature, and (4) the multiplicity of rules.
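
    As an illustration of the kind of first-order modelling described above, the sketch below encodes a single hypothetical composition step as a set of ground facts and checks a Progol-style clause against it. The predicate names and the example rule are assumptions made for the example; they are not among the 513 or 458 rules reported by the study.

```python
# Minimal sketch of a first-order encoding of one composition step; the
# predicates and the rule are invented for illustration, not taken from Progol.

# Facts describing one (hypothetical) step: objects, attributes, relations.
facts = {
    ("miniature", "m1"), ("miniature", "m2"),
    ("type", "m1", "chair"), ("type", "m2", "table"),
    ("placed_after", "m2", "m1"),      # m2 was placed after m1
    ("adjacent", "m2", "m1"),          # m2 touches m1
}

def rule_extends_chain(new, prev, facts):
    """Progol-style clause: extends_chain(New, Prev) :-
       placed_after(New, Prev), adjacent(New, Prev)."""
    return ("placed_after", new, prev) in facts and ("adjacent", new, prev) in facts

print(rule_extends_chain("m2", "m1", facts))  # True: m2 extends the chain from m1
```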

    Finding a short and accurate decision rule in disjunctive normal form by exhaustive search

    Greedy approaches suffer from a restricted search space, which can lead to suboptimal classifiers in terms of performance and classifier size. This study discusses exhaustive search as an alternative to greedy search for learning short and accurate decision rules. We present the Exhaustive Procedure for LOgic-Rule Extraction (EXPLORE) algorithm, which induces decision rules in disjunctive normal form (DNF) in a systematic and efficient manner. We propose a method based on subsumption to reduce the number of values considered for instantiation in the literals by taking the relational operator into account, without loss of performance. Furthermore, we describe a branch-and-bound approach that makes optimal use of user-defined performance constraints. To improve generalizability, we use a validation set to determine the optimal length of the DNF rule. The performance and size of the DNF rules induced by EXPLORE are compared to those of eight well-known rule learners. Our results show that an exhaustive approach to rule learning in DNF yields significantly smaller classifiers than the other rule learners, while securing comparable or even better performance. Clearly, exhaustive search is computationally intensive and may not always be feasible. Nevertheless, based on this study, we believe that exhaustive search should be considered an alternative to greedy search in many problems.
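
    To make the idea of exhaustive DNF rule search concrete, the following toy sketch enumerates all short DNF rules (at most two conjunctions of at most two literals) over thresholds taken from a tiny synthetic dataset and keeps the most accurate one. It deliberately omits EXPLORE's subsumption-based literal reduction and branch-and-bound pruning; the dataset, scoring and length limits are illustrative assumptions, not the paper's algorithm.

```python
# Toy sketch of exhaustive DNF rule search in the spirit of EXPLORE (assumptions:
# tiny dataset, accuracy scoring, rule length capped at 2 conjunctions x 2 literals).
from itertools import combinations

# Tiny binary-labelled dataset: (feature vector, label)
data = [([1.0, 0.2], 1), ([0.9, 0.4], 1), ([0.2, 0.8], 0), ([0.3, 0.1], 0), ([0.8, 0.9], 1)]

def literals(data):
    """Candidate literals (feature, op, threshold) taken from observed values."""
    lits = []
    for f in range(len(data[0][0])):
        for x, _ in data:
            lits += [(f, ">=", x[f]), (f, "<=", x[f])]
    return lits

def holds(lit, x):
    f, op, t = lit
    return x[f] >= t if op == ">=" else x[f] <= t

def covers(dnf, x):                     # DNF = OR over conjunctions (AND over literals)
    return any(all(holds(l, x) for l in conj) for conj in dnf)

def accuracy(dnf, data):
    return sum(covers(dnf, x) == bool(y) for x, y in data) / len(data)

# Exhaustive search over short DNF rules.
lits = literals(data)
conjs = [(l,) for l in lits] + list(combinations(lits, 2))
best = max(([c] for c in conjs), key=lambda d: accuracy(d, data))
for pair in combinations(conjs, 2):
    if accuracy(list(pair), data) > accuracy(best, data):
        best = list(pair)

print("best DNF:", best, "accuracy:", accuracy(best, data))
```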

    Automatic interpretation of pediatric electrocardiograms

    The year 1902 saw the birth of clinical electrocardiography when Willem Einthoven published the first electrocardiogram (ECG) of unprecedented quality, recorded with his newly invented string galvanometer [1]. The foundations of electrocardiographic diagnosis were laid in the half century that followed. After the Second World War, electronic pen-writing recorders made their appearance and quickly pushed the bulky string galvanometers from the scene, notwithstanding a far inferior frequency response. Standards for performance were then issued that were unfortunately based on the frequency characteristics of this type of equipment. We will return to this subject in the chapter on the minimum bandwidth requirements for the recording of pediatric ECGs.

    Author index—Volumes 1–89

    Convex hulls in concept induction

    Classification learning is dominated by systems which induce large numbers of small axis-orthogonal decision surfaces. This strongly biases such systems towards particular hypothesis types, but there is reason to believe that many domains have underlying concepts which do not involve axis-orthogonal surfaces. Further, the multiplicity of small decision regions militates against any holistic appreciation of the theories produced by these systems, notwithstanding the fact that many of the small regions are individually comprehensible. This thesis investigates modeling concepts as large geometric structures in n-dimensional space. Convex hulls are a superset of the axis-orthogonal hyperrectangles into which axis-orthogonal systems partition the instance space. In consequence, there is reason to believe that convex hulls might provide a more flexible and general learning bias than axis-orthogonal regions. The formation of a convex hull around a group of points of the same class is shown to be a usable generalisation that is more general than the generalisations produced by axis-orthogonal classifiers without constructive induction, such as decision trees, decision lists and rules. The use of a small number of large hulls as a concept representation is shown to provide classification performance that can be better than that of classifiers which use a large number of small fragmentary regions for each concept. A convex hull based classifier, CH1, has been implemented and tested. CH1 can handle categorical and continuous data. Algorithms for two basic generalisation operations on hulls, inflation and facet deletion, are presented. The two operations are shown to improve the accuracy of the classifier and to provide moderate classification accuracy over a representative selection of typical, largely or wholly continuous-valued machine learning tasks. The classifier exhibits superior performance to well-known axis-orthogonal classifiers when presented with domains where the underlying decision surfaces are not axis-parallel. The strengths and weaknesses of the system are identified. One particular advantage is the ability of the system to model domains with approximately the same number of structures as there are underlying concepts. This opens the possibility of extracting higher-level mathematical descriptions of the induced concepts, using the techniques of computational geometry, which is not possible from a multiplicity of small regions.
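
    The sketch below illustrates the basic convex-hull-per-class idea on synthetic 2-D data: one hull is built per class and a point is assigned to the hull that contains it, falling back to the nearest class centroid when it lies in no hull or in overlapping hulls. The fallback rule and the data are assumptions made for the example; CH1's inflation and facet-deletion operators are not reproduced here.

```python
# Rough sketch of a convex-hull-per-class classifier; the tie-breaking fallback
# and the synthetic data are assumptions, not CH1's actual algorithm.
import numpy as np
from scipy.spatial import Delaunay

class HullClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # One convex hull (via its Delaunay triangulation) per class.
        self.hulls_ = {c: Delaunay(X[y == c]) for c in self.classes_}
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        out = []
        for x in X:
            inside = [c for c in self.classes_ if self.hulls_[c].find_simplex(x) >= 0]
            if len(inside) == 1:
                out.append(inside[0])
            else:  # in no hull, or in overlapping hulls: nearest class centroid
                out.append(min(self.classes_,
                               key=lambda c: np.linalg.norm(x - self.centroids_[c])))
        return np.array(out)

# Two clearly non-axis-parallel classes in 2-D.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(30, 2)) @ np.array([[1.0, 0.9], [0.0, 0.3]])  # tilted cloud
X1 = X0 + np.array([3.0, 3.0])                                      # shifted copy
X, y = np.vstack([X0, X1]), np.array([0] * 30 + [1] * 30)
clf = HullClassifier().fit(X, y)
print(clf.predict(np.array([[0.0, 0.0], [3.0, 3.0]])))              # expected: [0 1]
```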

    Distributed data mining: an approach based on model aggregation and refinement

    With the pervasive use of computers in all spheres of activity in our society, we are faced nowadays with an explosion of electronic data. This is why we need automatic tools that can analyze the data automatically and provide us with only the information that is relevant and summarized with respect to what is being sought. Data mining techniques are generally used for this task. However, these techniques require considerable computing time to analyze a huge volume of data. Moreover, if the data is geographically distributed, gathering it on a single site in order to build a model (a classifier, for instance) can prove very costly. To solve this problem, we propose to build several models, more precisely several classifiers, that is, one classifier per site. The rules constituting these classifiers are then aggregated and filtered based on statistical measures and a validation carried out on very small samples drawn from each site. The resulting model, called a meta-classifier, is, on the one hand, a prediction tool for any new (unseen) instance and, on the other hand, an abstract view of the whole data set. We base our rule-filtering approach on a confidence measure associated with each rule, which is computed statistically and then validated using the collected samples (one from each site). Several validation techniques were considered, as will be discussed in this thesis.
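
    The following schematic sketch illustrates the aggregate-then-filter idea: rules learned locally at each site are pooled, re-validated on the small samples collected from the sites, kept only if their confidence holds up, and then combined into a simple voting meta-classifier. The rule format, the 0.7 confidence threshold and the voting scheme are illustrative assumptions rather than the thesis' exact procedure.

```python
# Schematic sketch of rule aggregation and confidence-based filtering; rule
# format, threshold and voting are assumptions made for this example.

# A rule: (list of (feature, op, threshold) literals, predicted class, local confidence)
site_rules = {
    "site_A": [([(0, ">=", 0.5)], 1, 0.90), ([(0, "<=", 0.2)], 0, 0.85)],
    "site_B": [([(1, ">=", 0.7)], 1, 0.80), ([(1, "<=", 0.3)], 0, 0.60)],
}
# Very small validation samples collected from each site: (feature vector, label)
samples = [([0.9, 0.8], 1), ([0.1, 0.2], 0), ([0.6, 0.1], 1), ([0.15, 0.9], 0)]

def fires(rule, x):
    literals, _, _ = rule
    return all(x[f] >= t if op == ">=" else x[f] <= t for f, op, t in literals)

def validated_confidence(rule, samples):
    hits = [label == rule[1] for x, label in samples if fires(rule, x)]
    return sum(hits) / len(hits) if hits else 0.0

# Aggregate all local rules, then keep only those whose confidence holds up
# on the pooled samples (the 0.7 threshold is an arbitrary choice here).
meta = [r for rules in site_rules.values() for r in rules
        if validated_confidence(r, samples) >= 0.7]

def predict(x):
    votes = [r[1] for r in meta if fires(r, x)]
    return max(set(votes), key=votes.count) if votes else None

print(len(meta), predict([0.8, 0.75]))  # number of surviving rules and a prediction
```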