3,076 research outputs found

    Dominance-based Rough Set Approach, basic ideas and main trends

    The Dominance-based Rough Set Approach (DRSA) has been proposed as a machine learning and knowledge discovery methodology to handle Multiple Criteria Decision Aiding (MCDA). Because it asks the decision maker (DM) only for simple preference information and supplies easily understandable and explainable recommendations, DRSA has gained much interest over the years and is now one of the most appreciated MCDA approaches. In fact, it has also been applied beyond the MCDA domain, as a general knowledge discovery and data mining methodology for the analysis of monotonic (and also non-monotonic) data. In this contribution, we recall the basic principles and the main concepts of DRSA, with a general overview of its developments and software. We also present a historical reconstruction of the genesis of the methodology, with a specific focus on the contribution of Roman Słowiński.
    Comment: This research was partially supported by TAILOR, a project funded by the European Union (EU) Horizon 2020 research and innovation programme under GA No 952215. This submission is a preprint of a book chapter accepted by Springer, with very few minor differences of a purely technical nature.
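    The core DRSA construction can be made concrete with a small sketch: rough approximations of an upward union of ordered classes via dominance cones. The data layout, the assumption that all criteria are gain-type, and the toy example below are illustrative, not taken from the chapter.

```python
# A minimal sketch of DRSA rough approximations of an upward class union,
# assuming gain-type criteria (larger is better) and ordered class labels.

def dominates(y, x):
    """y P-dominates x: y is at least as good as x on every criterion."""
    return all(yc >= xc for yc, xc in zip(y, x))

def drsa_approximations(X, labels, t):
    """Lower/upper approximation of the upward union Cl_t^>= (class >= t)."""
    upward = {i for i, c in enumerate(labels) if c >= t}
    lower, upper = set(), set()
    for i, xi in enumerate(X):
        dominating = {j for j, xj in enumerate(X) if dominates(xj, xi)}  # D_P^+(x_i)
        dominated  = {j for j, xj in enumerate(X) if dominates(xi, xj)}  # D_P^-(x_i)
        if dominating <= upward:   # everything dominating x_i already lies in Cl_t^>=
            lower.add(i)
        if dominated & upward:     # x_i's dominated cone meets Cl_t^>=
            upper.add(i)
    return lower, upper

# Toy example: two gain criteria, ordered classes 1 < 2 < 3.
X = [(3, 2), (2, 2), (1, 1), (3, 3)]
labels = [2, 1, 1, 3]
print(drsa_approximations(X, labels, 2))   # consistent data: lower == upper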

    Learning from Partial Labels

    We address the problem of partially labeled multiclass classification, where instead of a single label per instance, the algorithm is given a candidate set of labels, only one of which is correct. Our setting is motivated by a common scenario in many image and video collections, where only partial access to labels is available. The goal is to learn a classifier that can disambiguate the partially labeled training instances and generalize to unseen data. We define an intuitive property of the data distribution that sharply characterizes the ability to learn in this setting, and we show that effective learning is possible even when all the data is only partially labeled. Exploiting this property of the data, we propose a convex learning formulation based on the minimization of a loss function appropriate for the partial-label setting. We analyze the conditions under which our loss function is asymptotically consistent, as well as its generalization and transductive performance. We apply our framework to identifying faces culled from web news sources and to naming characters in TV series and movies; in particular, we annotated and experimented on a very large video data set and achieved 6% error for character naming on 16 episodes of the TV series Lost.
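    A small sketch can make the convex partial-label idea concrete: candidate labels share an averaged score that is pushed up, while non-candidate scores are pushed down. The squared-hinge surrogate, the linear scorer W, and the exact form of the averaging are assumptions in the spirit of the abstract, not necessarily the paper's precise loss.

```python
import numpy as np

def psi(z):
    """Squared hinge: convex surrogate that upper-bounds the 0/1 loss."""
    return np.maximum(0.0, 1.0 - z) ** 2

def partial_label_loss(W, x, candidates, n_classes):
    """W: (n_classes, d) linear scorer; candidates: set of plausible labels."""
    scores = W @ x                                    # one score per class
    cand = sorted(candidates)
    rest = [a for a in range(n_classes) if a not in candidates]
    loss = psi(np.mean(scores[cand]))                 # reward the candidate average
    loss += np.sum(psi(-scores[rest]))                # penalize scores outside the set
    return loss

# Toy usage: 3 classes, instance whose true label is known only to be in {0, 2}.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
print(partial_label_loss(W, x, {0, 2}, n_classes=3))
```

    Because the candidate average, not the unknown true label's score, enters the hinge, the loss stays convex in W while still letting training disambiguate which candidate is responsible.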

    All Thinking is 'Wishful' Thinking

    Motivation to engage in any epistemic behavior can be decomposed into two basic types that emerge in various guises across different disciplines and areas of study. The first basic dimension refers to a desire to approach versus avoid nonspecific certainty, which has epistemic value: it describes a need for an unambiguous, precise answer to a question, regardless of that answer's specific content. The second basic dimension refers to a desire to approach versus avoid specific certainty, which has instrumental value: it concerns a need for specific content of one's beliefs, in line with one's prior preferences. Together, they explain diverse epistemic behaviors, such as seeking, avoiding, and biasing new information, and revising and updating, versus protecting, one's beliefs when confronted with new evidence. The relative strength of these motivational components determines the form of (Bayes-optimal) epistemic behavior that follows.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, on a concrete descriptive level, hierarchical prediction offers a way of testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
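    The mechanism the commentary alludes to can be illustrated with a one-layer toy loop: top-down feedback carries a prediction, the feedforward signal carries the prediction error, and the internal state and model are updated to reduce it. All shapes, learning rates, and the single-layer simplification are illustrative assumptions, not a model from the commentary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))   # internal (generative) model
x = rng.normal(size=8)                   # sensory input
r = np.zeros(4)                          # internal state (estimated causes)

for step in range(50):
    prediction = W @ r                   # feedback: top-down prediction
    error = x - prediction               # feedforward: prediction error
    r += 0.1 * (W.T @ error)             # update state to explain the error
    W += 0.01 * np.outer(error, r)       # slowly adapt the internal model

print(round(float(np.linalg.norm(x - W @ r)), 3))   # residual error shrinks
```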

    Learning nonlinear monotone classifiers using the Choquet Integral

    In recent years, learning predictive models that guarantee a monotone relationship between input and output variables has attracted growing attention in machine learning. Especially for flexible nonlinear models, guaranteeing monotonicity is a major implementation challenge. This thesis uses the Choquet integral as the mathematical basis for developing new models for nonlinear classification tasks. Beyond its established role as a flexible aggregation function in multi-criteria decision making, the formalism thereby becomes an important tool for machine learning models. In addition to reconciling monotonicity and flexibility in a mathematically elegant way, the Choquet integral makes it possible to quantify interactions between groups of input attributes, which yields interpretable models. The thesis develops concrete methods for learning with the Choquet integral, following two different approaches: maximum-likelihood estimation and structural risk minimization. While the first approach leads to a generalization of logistic regression, the second is realized by means of support vector machines. In both cases, the learning problem essentially reduces to the parameter identification of fuzzy measures for the Choquet integral. The exponential number of degrees of freedom required to model all subsets of attributes poses particular challenges with respect to runtime complexity and generalization performance. Against this background, both approaches are evaluated empirically and analyzed theoretically, and suitable methods for complexity reduction and model regularization are proposed and investigated. The experimental results compare very favorably with state-of-the-art methods, even on demanding benchmark problems, and highlight the usefulness of combining the monotonicity and flexibility of the Choquet integral in different machine learning approaches.
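    The building block underlying these models is the discrete Choquet integral with respect to a capacity (fuzzy measure). A minimal sketch follows; the toy two-attribute capacity is an illustrative assumption, whereas in the thesis the capacity itself is what gets learned from data.

```python
# Discrete Choquet integral: sort the attribute values ascending and weight
# each increment by the capacity of the set of attributes still "above" it.

def choquet(x, mu):
    """x: attribute values; mu: dict mapping frozensets of indices to [0, 1],
    monotone with mu(frozenset()) = 0 and mu(all indices) = 1."""
    order = sorted(range(len(x)), key=lambda i: x[i])   # ascending by value
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        A = frozenset(order[k:])            # attributes with value >= x[i]
        total += (x[i] - prev) * mu[A]
        prev = x[i]
    return total

# Toy capacity on two attributes with a positive interaction
# (mu({0}) + mu({1}) < mu({0,1})), so the attributes are complementary.
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.4, frozenset({0, 1}): 1.0}
print(choquet([0.2, 0.8], mu))   # 0.2*1.0 + (0.8-0.2)*0.4 = 0.44
```

    Monotonicity of the classifier follows directly from monotonicity of the capacity, which is why learning reduces to identifying the 2^n capacity values under monotonicity constraints.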

    Fast Solvers for Partial Differential Equations (Schnelle Löser für partielle Differentialgleichungen)

    [no abstract available]

    A hybrid approach of ANFIS-artificial bee colony algorithm for intelligent modeling and optimization of plasma arc cutting on Monel™ 400 alloy

    This paper focuses on a hybrid approach based on a genetic algorithm (GA) and an adaptive neuro-fuzzy inference system (ANFIS) for modeling the correlation between plasma arc cutting (PAC) parameters and the response characteristics of machined Monel 400 alloy sheets. PAC experiments are performed following a Box-Behnken design, with cutting speed, gas pressure, arc current, and stand-off distance as input parameters, and surface roughness (Ra), kerf width (kw), and microhardness (mh) as response characteristics. GA is used as the training algorithm to optimize the ANFIS parameters. The training and testing errors and the statistical validation results indicate that the GA-trained ANFIS outperforms multiple linear regression models in forecasting the PAC responses. In addition, to obtain the optimal combination of PAC parameters, multi-response optimization was performed using the trained ANFIS network coupled with an artificial bee colony (ABC) algorithm. The optimum cutting conditions predicted in this way, a cutting speed of 2330.39 mm/min, gas pressure of 3.84 bar, arc current of 45 A, and stand-off distance of 2.01 mm, yield the best responses: Ra of 1.5387 µm, kw of 1.2034 mm, and mh of 176.08. Furthermore, the ABC predictions are validated by confirmatory experiments; the error between the predicted and actual results is lower than 6.38%, indicating the suitability of the proposed ABC for optimizing real-world complex machining processes.
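    The optimization step can be sketched as an ABC loop minimizing a black-box objective within parameter bounds. Everything below is hypothetical: the quadratic surrogate merely stands in for the trained ANFIS model, and the bounds, population size, abandonment limit, and iteration count are made-up illustrations of the method, not values from the paper.

```python
import numpy as np

def abc_minimize(f, lo, hi, n_food=20, limit=30, iters=200, seed=0):
    """Artificial bee colony search: employed/onlooker perturbations of food
    sources toward random peers, plus scout restarts of exhausted sources."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    X = rng.uniform(lo, hi, size=(n_food, d))        # food sources (candidates)
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        probs = fit.max() - fit + 1e-12              # better sources -> higher prob
        probs /= probs.sum()
        onlookers = rng.choice(n_food, n_food, p=probs)
        for i in list(range(n_food)) + list(onlookers):
            k, j = rng.integers(n_food), rng.integers(d)
            v = X[i].copy()                          # perturb one coordinate
            v[j] = np.clip(v[j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j]),
                           lo[j], hi[j])
            fv = f(v)
            if fv < fit[i]:
                X[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:        # scout phase: restart
            X[i] = rng.uniform(lo, hi)
            fit[i] = f(X[i])
            trials[i] = 0
    best = fit.argmin()
    return X[best], fit[best]

# Hypothetical surrogate over (speed, pressure, current, stand-off) with made-up bounds.
target = np.array([2300.0, 4.0, 45.0, 2.0])
surrogate = lambda p: float(np.sum((p - target) ** 2 / np.array([1e6, 1.0, 1.0, 1.0])))
lo = np.array([1500.0, 2.0, 30.0, 1.0])
hi = np.array([3000.0, 6.0, 60.0, 3.0])
print(abc_minimize(surrogate, lo, hi))
```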