31 research outputs found

    Zbirka nalog za predmet Osnove umetne inteligence (A collection of exercises for the course Fundamentals of Artificial Intelligence)

    Get PDF


    Argument based machine learning

    Get PDF
    Abstract: We present a novel approach to machine learning, called ABML (argumentation based ML). This approach combines machine learning from examples with concepts from the field of argumentation. The idea is to provide experts' arguments, or reasons, for some of the learning examples. We require that the theory induced from the examples explains the examples in terms of the given reasons. Arguments thus constrain the combinatorial search among possible hypotheses, and also direct the search towards hypotheses that are more comprehensible in the light of the expert's background knowledge. In this paper we realize the idea of ABML as rule learning: we implement ABCN2, an argument-based extension of the CN2 rule learning algorithm, conduct experiments, and analyze its performance in comparison with the original CN2 algorithm.
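The central constraint described in the abstract — a rule covering an argumented example must include the argument's reasons in its condition part — can be sketched as follows. This is a minimal, hypothetical illustration; the attribute names and data structures are invented for the example and do not reflect the actual ABCN2 implementation.

```python
# Hypothetical sketch: pruning candidate rules so that any rule covering
# an argumented example must contain the expert's reasons for that example.

def rule_respects_argument(rule_conditions, argument_reasons):
    """A rule is admissible for an argumented example only if its
    condition part contains all of the argument's reasons."""
    return argument_reasons.issubset(rule_conditions)

# Example: the expert argues an example is positive BECAUSE
# credit_history=good. Conditions are (attribute, value) pairs.
argument = {("credit_history", "good")}

candidate_rules = [
    {("credit_history", "good"), ("income", "high")},  # kept
    {("income", "high")},                              # pruned
]

admissible = [r for r in candidate_rules if rule_respects_argument(r, argument)]
print(len(admissible))  # 1
```

In this way the argument cuts down the combinatorial search space: only hypotheses consistent with the expert's partial explanation survive.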

    Vpliv naravnih protimikrobnih snovi na bakterijsko hidrofobnost, adhezijo in zeta potencial (The effect of natural antimicrobial agents on bacterial hydrophobicity, adhesion and zeta potential)

    Get PDF
    Interactions between bacterial cells and contact materials play an important role in food safety and technology. As bacterial strains become ever more resistant to antibiotics, the aim of this study was to analyse the adhesion of selected foodborne bacterial strains to a polystyrene surface and to evaluate the effects of natural antimicrobials on bacterial cell hydrophobicity, adhesion, and zeta potential as strategies for preventing adhesion. The results showed a strain-specific adhesion rate on polystyrene: the lowest and the highest adhesion were found for two B. cereus strains. The natural antimicrobials ferulic and rosmarinic acid substantially decreased adhesion, whereas the effect of epigallocatechin gallate was negligible. Similar results were found for the zeta potential, indicating that natural antimicrobials reduce bacterial adhesion. By targeting bacterial adhesion with natural extracts, potential infection can be eliminated at an early stage. Future experimental studies should focus on conditions that are as close to industrial ones as possible.

    Virulence and antimicrobial resistance determinants of verotoxigenic Escherichia coli (VTEC) and of ESBL-producing multidrug resistant E. coli from foods of animal origin illegally imported to Europe

    Get PDF
    Microbial risk due to illegal food import has not been investigated so far. Here we aimed to reveal the frequency, phenotype and genotype of verotoxigenic E. coli (VTEC) and ESBL-producing multidrug resistant (MDR) E. coli isolated from foods of animal origin confiscated at EU airport borders. Of the 1500 food samples confiscated at the airports of Austria, Germany and Slovenia, the most frequent were cheese and meat products, primarily from Turkey and the Balkan countries. VTEC bacteria were isolated using ISO 16654:2001 for O157 and Ridascreen® ELISA-based PCR testing of stx genes or ISO/TS 13136 for non-O157 VTEC, resulting in 15 VTEC isolates (1%). In addition, 600 samples from the Vienna airport were also tested for ESBL-producing MDR E. coli using cefotaxime-MacConkey agar. We identified 14 E. coli strains (0.9%) as ESBL/MDR E. coli, which were phenotyped for antimicrobial resistance and genotyped by microarray (Identibac®, AMR05). The 15 VTEC isolates were phenotyped as Stx-toxin-producing non-O157 strains. Only one isolate, from Turkish cheese, proved to be EHEC (O26:H46). The remaining 14 strains represent uncommon VTEC serotypes with stx1 and/or stx2 genes. Microarray analysis (Identibac®, Ec03) revealed a wide range of other non-LEE-encoded virulence genes. Pulsed-field gel electrophoresis (PFGE) showed high genetic diversity of the strains. Multilocus sequence typing (MLST) established three new ST types (ST4505, ST4506 and ST4507) in the MLST database, and indicated the existence of 5 small clusters with no relation to the origin or serotype/genotype of the strains, but representing several human-related ST types. All VTEC isolates were sensitive to 18 antimicrobials relevant to human and/or animal health, and did not contain resistance genes. The ESBL/MDR E. coli were resistant to at least 3 classes of antimicrobials. Microarray analysis detected TEM-1 in all but one strain, as well as a variety of genes encoding resistance to other ESBLs (CTX-M-1, OXA-1), trimethoprim, tetracycline and aminoglycosides, and class 1/class 2 integrons (8/14 isolates). The E. coli virulence microarray detected 2-6 virulence genes in all but one MDR E. coli, and one of the strains qualified as an atypical EPEC. Even though the frequency and attributes of the isolated VTEC and ESBL/MDR E. coli did not represent an immediate major risk through illegal food import for the countries involved, the unusual serovars of VTEC as well as the virulence and antimicrobial resistance determinants of the ESBL/MDR E. coli detected here may indicate a future emerging threat from strains in illegally imported foods. Acknowledgement is due to the EU FP7 project PROMISE (Grant No: 265877) and to Dr. Mária Herpay, National Institute for Epidemiology, Budapest.

    Nomograms for Visualization of Naive Bayesian Classifier

    Get PDF
    Besides good predictive performance, the naive Bayesian classifier can also offer valuable insight into the structure of the training data and the effects of the attributes on the class probabilities. This structure may be effectively revealed through visualization of the classifier. We propose a new way to visualize the naive Bayesian model in the form of a nomogram. The advantages of the proposed method are simplicity of presentation, clear display of the effects of individual attribute values, and visualization of confidence intervals. Nomograms are intuitive and, when used for decision support, can provide a visual explanation of the predicted probabilities. Finally, a naive Bayesian nomogram can be printed out and used for probability prediction without a computer or calculator.
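The arithmetic behind such a nomogram can be sketched briefly: each attribute value contributes an additive log odds-ratio term, so the contributions can be drawn on separate axes and summed by eye. The probabilities below are illustrative numbers, not from the paper.

```python
# Minimal sketch of naive Bayesian "nomogram points": each attribute value
# contributes log(P(value|class+) / P(value|class-)) to the log odds, and
# the total log odds maps back to a probability via the logistic function.
import math

def contribution(p_given_pos, p_given_neg):
    """Log odds-ratio contribution of one attribute value."""
    return math.log(p_given_pos / p_given_neg)

prior_logodds = math.log(0.5 / 0.5)  # balanced classes -> 0

# illustrative conditional probabilities for two attribute values
contribs = [contribution(0.8, 0.4), contribution(0.3, 0.6)]

total_logodds = prior_logodds + sum(contribs)
probability = 1 / (1 + math.exp(-total_logodds))  # back to P(class+|x)
print(round(probability, 3))  # 0.5 (the two contributions cancel exactly)
```

Because the terms are additive, a printed nomogram only requires the reader to add up points along a ruler — exactly the "no computer or calculator" property the abstract claims.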

    Naivni Bajesov klasifikator in logistična regresija: primerjava metod, dokaz pogojne ekvivalentnosti in vizualizacija napovednih modelov (The naive Bayesian classifier and logistic regression: a comparison of the methods, a proof of conditional equivalence, and visualization of the predictive models)

    No full text
    The naive Bayesian classifier is one of the simplest yet surprisingly powerful techniques for constructing a predictive model from classified data. Despite its naivety - the assumption of attribute independence given the class - empirical results show that it performs surprisingly well in many domains containing clear attribute dependencies. In this work we theoretically, practically and graphically compare the naive Bayesian classifier to logistic regression, a standard predictive modelling method from statistics. In contrast to the naive Bayesian classifier, logistic regression makes no independence assumptions when constructing a predictive model. We show that the naive Bayesian classifier can be presented in an alternative mathematical form (log odds), which is comparable to logistic regression, and we prove that the methods are mathematically equivalent when the attributes are conditionally independent. This implies that the differences between the two methods are a result of dependencies among the attributes in the data. For visual presentation of the naive Bayesian classifier, we develop a normalized naive Bayesian nomogram based on the logistic regression nomogram. Additionally, we improve the naive Bayesian nomogram so that it can depict both negative and positive influences of the attribute values and, in contrast to logistic regression nomograms, does not align the "base" values to the zero point. Another advantage over a nomogram for logistic regression is the ability to handle unknown attribute values. We compare the two methods through visualization and a study of predictive accuracy. Overall, the experiments show very similar results: logistic regression performs slightly better when learning on large data sets, while the naive Bayesian classifier turns out to be better on smaller data sets. We conclude that the naive Bayesian classifier performs similarly to logistic regression in most cases. 
    Logistic regression seems preferable when learning on large data sets, leaving aside computational issues and matters such as handling missing data. The naive Bayesian classifier proves to be successful at less deterministic learning problems. Also, when model understanding and graphical presentation of the model are important, the naive Bayesian classifier is the better method.
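The log-odds reformulation mentioned in the abstract can be checked numerically: the naive Bayesian posterior, computed as a product of conditional probabilities, equals a logistic function of a sum of per-attribute terms — the same functional form as logistic regression. The probabilities below are illustrative, not taken from the work.

```python
# Sketch of the equivalence of forms: naive Bayes posterior odds as a
# product of likelihood ratios vs. the same model written as a linear
# score (intercept + per-attribute weights) passed through the logistic.
import math

prior = {1: 0.6, 0: 0.4}
# P(attribute value | class), illustrative numbers for two attributes
cond = {
    ("a", 1): 0.7, ("a", 0): 0.2,
    ("b", 1): 0.1, ("b", 0): 0.5,
}

# direct naive Bayes: posterior odds as a product
odds_product = (prior[1] / prior[0]) \
    * (cond[("a", 1)] / cond[("a", 0)]) \
    * (cond[("b", 1)] / cond[("b", 0)])

# linear, logistic-regression-like form: sum of log likelihood ratios
z = math.log(prior[1] / prior[0]) \
    + math.log(cond[("a", 1)] / cond[("a", 0)]) \
    + math.log(cond[("b", 1)] / cond[("b", 0)])

p_direct = odds_product / (1 + odds_product)
p_linear = 1 / (1 + math.exp(-z))
print(abs(p_direct - p_linear) < 1e-12)  # True
```

The two forms are algebraically identical; the difference between the methods in practice is that logistic regression fits the weights jointly rather than deriving each from a conditional-independence estimate.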

    Argument Based Machine Learning

    Get PDF
    The thesis presents a novel approach to machine learning, called ABML (argument based machine learning). This approach combines machine learning from examples with concepts from the field of defeasible argumentation, where arguments are used together with learning examples by learning methods in the induction of a hypothesis. An argument represents a relation between the class value of a particular learning example and its attributes, and can be regarded as a partial explanation of this example. We require that the theory induced from the examples explains the examples in terms of the given arguments. Arguments thus constrain the combinatorial search among possible hypotheses, and also direct the search towards hypotheses that are more comprehensible in the light of the expert's background knowledge. Arguments are usually provided by domain experts. One of the main differences between ABML and other knowledge-intensive learning methods is in the way the knowledge is elicited from these experts. Other methods require general domain knowledge, that is, knowledge valid for the entire domain; the problem is the difficulty that experts face when they try to articulate their global domain knowledge. As arguments contain knowledge specific only to certain situations, experts need to provide only local knowledge for the specific examples. Experiments with ABML and other empirical observations show that experts have significantly fewer problems expressing such local knowledge. Furthermore, we define the ABML loop, which iteratively selects critical learning examples, namely examples that could not be explained by the current hypothesis, and shows them to the domain experts. Using this loop, the burden on the experts is further reduced (only some examples need to be explained) and only relevant knowledge is obtained (from the difficult examples). We implemented the ABCN2 algorithm, an argument-based extension of the rule learning algorithm CN2. 
    The basic version of ABCN2 ensures that rules classifying argumented examples will contain the reasons of the given arguments in their condition part. We further improved the basic algorithm with a new method for the evaluation of rules, called extreme value correction (EVC), which reduces the optimism of evaluation measures due to the large number of rules tested and evaluated during the learning process (known as the multiple-comparisons problem). This feature is critical for ABCN2, since arguments given to different examples have different numbers of reasons and therefore constrain the search space differently for different rules. Moreover, as shown in the dissertation, using this method in CN2 (without arguments) results in significantly more accurate models compared to the original CN2. We conclude this work with a set of practical evaluations and comparisons of ABCN2 to other machine learning algorithms on several data sets. The results favour ABCN2 in all experiments; however, as each experiment requires a certain amount of time due to the involvement of domain experts, the number of experiments is not large enough to allow a valid statistical test. We therefore explored the capability of ABCN2 to deal with erroneous arguments, and showed in the dissertation that using false arguments will not decrease the quality of the induced model. Hence, ABCN2 cannot perform worse than CN2, but it can perform better provided the quality of the arguments is high enough.
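The multiple-comparisons optimism that EVC addresses can be demonstrated with a short simulation: when many candidate rules are scored on noise, the best observed score inflates simply because it is the maximum over many draws. This is only an illustration of the problem; the actual EVC method, which fits an extreme value distribution to correct the scores, is beyond this sketch.

```python
# Simulation of evaluation optimism: rules with true accuracy 0.5 are
# scored on 30 noisy examples; taking the maximum over a larger pool of
# candidate rules yields a more optimistic "best" score.
import random

random.seed(0)

def best_of(n_rules, n_examples=30):
    """Best apparent accuracy among n_rules random rules on pure noise."""
    return max(
        sum(random.random() < 0.5 for _ in range(n_examples)) / n_examples
        for _ in range(n_rules)
    )

few, many = best_of(5), best_of(1000)
print(many >= few)  # searching more rules inflates the best apparent score
```

This is why rules derived from arguments with many reasons (a smaller constrained search space) and rules found by unconstrained search are not directly comparable without a correction such as EVC.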


    Argument based machine learning

    Full text link