
    Some views on information fusion and logic based approaches in decision making under uncertainty

    Decision making under uncertainty is a key issue in information fusion and logic-based reasoning approaches. The aim of this paper is to highlight noteworthy theoretical and applied work that has already been done in the area of decision making under uncertainty, and to raise new open research questions related to these topics, pointing out promising and challenging research gaps that should be addressed in the near future in order to improve the resolution of decision-making problems under uncertainty.

    TME Volume 11, Number 2


    A Review of using Data Mining Techniques in Power Plants

    Data mining techniques and their applications have developed rapidly during the last two decades. This paper reviews applications of data mining techniques in power systems, especially in power plants, through a survey of the literature published between 2000 and 2015. Keyword indices, article abstracts and conclusions were used to classify more than 86 articles on the application of data mining in power plants, drawn from many academic journals and research centers. Because the paper concerns the application of data mining in power plants, it begins with a brief introduction to data mining and power systems to give the reader a better view of these two different disciplines. The paper then presents a comprehensive survey of the collected articles and classifies them according to three categories: the technique used, the problem addressed and the application area. From this review we found that data mining techniques (classification, regression, clustering and association rules) can be used to solve many types of problems in power plants, such as predicting the amount of generated power, failure prediction, failure diagnosis, failure detection and many others. There is also no single standard technique for a given problem. The application of data mining in power plants is a rich research area and still needs more exploration.
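    As a minimal illustration of the kind of technique the survey covers (a sketch under assumed data, not code from any reviewed paper), the snippet below fits a regression model to hypothetical plant sensor readings to predict net generated power; the feature set and the synthetic data are assumptions made only for this example.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        # Hypothetical sensor readings (e.g., ambient temperature, exhaust vacuum,
        # ambient pressure, relative humidity) and a synthetic power target in MW.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))
        y = 450 - 15 * X[:, 0] - 5 * X[:, 1] + 2 * X[:, 2] + rng.normal(scale=3, size=1000)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        print("MAE (MW):", mean_absolute_error(y_test, model.predict(X_test)))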

    Teachers’ Learning of Fraction Division With Area Models

    Research shows that fraction concepts play an essential role in the learning of later mathematics. However, fractions are notoriously difficult to learn and difficult to teach. Division with fractions is a frequent subject in mathematics education research because division is the most conceptually difficult of the four basic arithmetic operations and rational numbers are the most conceptually difficult numbers in K-12 mathematics curricula. In the U.S., teachers are generally proficient with mathematical procedures, but often have difficulty explaining the concepts underlying the procedures. Research indicates a positive association between student learning and teachers’ depth of conceptual understanding of mathematics. Thus, it is important to ensure that future and practicing teachers are competent with fraction operations at a deep, conceptual level. In order to gain a better understanding of teachers’ conceptions of division with fractions, this study engaged teachers in a 4-hour professional development program designed to deepen the teachers’ understanding of fractions and their ability to represent fraction operations through the construction of rectangular area models. Eight teachers were given one-to-one, video-recorded professional development sessions. Analysis of these videos showed that the teachers constructed idiosyncratic conceptions yet faced some common challenges. One common challenge was that a central part of the division concept was readily visible to the teachers in some contexts but not in others. Another common difficulty was conceptually explaining why the quotient of a fraction division problem should be based on a whole unit of the divisor. Additionally, teachers constructed different models of division arising from the structure of the situation in which the division was conceptualized. Models of partitive division with one apparent referent were easier to conceptualize and represent; models of partitive division with two apparent referents were more difficult. Two-referent models of partitive division with fractions are fundamental to rate and intensive quantity, and relate directly to other topics in mathematics, such as proportion and derivatives. Results of this study shed light on potentially common conceptual difficulties and suggest ways to help learners develop a conceptual understanding of and representational fluency with fraction division.
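    The point about the quotient referring to a whole unit of the divisor can be made concrete with a standard worked example (illustrative only, not taken from the study):

        % Measurement interpretation of 3/4 divided by 1/2: "how many 1/2s fit into 3/4?"
        \[
          \frac{3}{4} \div \frac{1}{2} = \frac{3}{4} \cdot \frac{2}{1} = \frac{3}{2} = 1\tfrac{1}{2}
        \]
        % One whole 1/2 fits into 3/4, and the remaining 1/4 is half of another 1/2.
        % The fractional part of the quotient therefore refers to a unit of the
        % divisor (1/2), not to the original whole -- the distinction that proved
        % difficult to explain conceptually.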

    Informational Paradigm, management of uncertainty and theoretical formalisms in the clustering framework: A review

    Fifty years have gone by since the publication of the first paper on clustering based on fuzzy set theory. In 1965, L.A. Zadeh published “Fuzzy Sets” [335]. After only one year, the first effects of this seminal paper began to emerge with the pioneering paper on clustering by Bellman, Kalaba and Zadeh [33], in which they proposed a prototypal clustering algorithm based on fuzzy set theory.
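    For orientation, the best-known later formalization of clustering based on fuzzy sets is the fuzzy c-means objective (shown here only as background; it is not the 1966 algorithm of Bellman, Kalaba and Zadeh):

        \[
          J_m(U, V) = \sum_{i=1}^{n} \sum_{k=1}^{c} u_{ik}^{\,m} \, \lVert x_i - v_k \rVert^2,
          \qquad \sum_{k=1}^{c} u_{ik} = 1, \quad u_{ik} \in [0, 1]
        \]
        % u_{ik} is the degree of membership of datum x_i in cluster k, v_k is the
        % cluster prototype, and the fuzzifier m > 1 controls how soft the partition is.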

    Information structure and the referential status of linguistic expressions: workshop as part of the 23rd annual meeting of the Deutsche Gesellschaft für Sprachwissenschaft, Leipzig, February 28 - March 2, 2001

    This volume comprises papers that were given at the workshop Information Structure and the Referential Status of Linguistic Expressions, which we organized during the Deutsche Gesellschaft für Sprachwissenschaft (DGfS) Conference in Leipzig in February 2001. At this workshop we discussed the connection between information structure and the referential interpretation of linguistic expressions, a topic mostly neglected in current linguistics research. One common aim of the papers is to find out to what extent the focus-background as well as the topic-comment structuring determine the referential interpretation of simple arguments like definite and indefinite NPs on the one hand and sentences on the other.

    Analysis and Detection of Outliers in GNSS Measurements by Means of Machine Learning Algorithms

    The abstract is in the attachment.
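    Since the abstract is only available as an attachment, the following is purely an illustrative sketch of the approach named in the title (machine-learning-based outlier detection on GNSS measurements); the use of pseudorange residuals as features and of an isolation forest are assumptions, not the thesis's actual method.

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Hypothetical GNSS pseudorange residuals (meters): mostly small noise,
        # with a few large errors injected (e.g., multipath or receiver faults).
        rng = np.random.default_rng(42)
        residuals = rng.normal(scale=1.0, size=(500, 1))
        residuals[::50] += rng.choice([-1.0, 1.0], size=10)[:, None] * 30.0

        # Unsupervised outlier detection; contamination is a guess at the outlier rate.
        detector = IsolationForest(contamination=0.02, random_state=42)
        labels = detector.fit_predict(residuals)   # -1 marks suspected outliers
        print("flagged measurements:", np.where(labels == -1)[0])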

    The Algorithmic Complexity of Modular Decomposition

    Modular decomposition is a thoroughly investigated topic in many areas such as switching theory, reliability theory, game theory and graph theory. We propose an O(mn)-algorithm for the recognition of a modular set of a monotone Boolean function f with m prime implicants and n variables. Using this result, we show that the computation of the modular closure of a set can be done in time O(mn²). On the other hand, we prove that the recognition problem for general Boolean functions is NP-complete. Moreover, we introduce the so-called generalized Shannon decomposition of Boolean functions as an efficient tool for proving theorems on Boolean function decompositions.
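    As background, the two notions the abstract builds on read as follows in their standard textbook form (not necessarily the exact notation of the paper): the classical Shannon decomposition expands f around a single variable, and a set M of variables is modular for f if f factors through a single intermediate function of the variables in M.

        \[
          f(x_1, \dots, x_n) \;=\; x_i \, f|_{x_i = 1} \;\lor\; \bar{x}_i \, f|_{x_i = 0}
          \qquad \text{(Shannon decomposition around } x_i \text{)}
        \]
        \[
          f(x) \;=\; g\bigl( h(x_M),\, x_{V \setminus M} \bigr)
          \qquad \text{($M \subseteq V$ is a modular set of $f$, $V$ the variable set)}
        \]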

    Efficient parameterized algorithms on structured graphs

    In classical complexity theory, the worst-case running time of an algorithm is typically stated solely as a function of the input size. In parameterized complexity, the goal is to refine the analysis by additionally considering a parameter that measures how structured the input is with respect to some property. A parameterized algorithm then exploits this structure and achieves a running time that is faster than the best general (unparameterized) algorithm whenever the parameter value is low. In the first part of this thesis, we carry this line of research forward and investigate the influence of several parameters on the running times of well-known tractable problems. Several of the presented algorithms are adaptive algorithms, meaning that they match the running time of a best unparameterized algorithm for worst-case parameter values; thus, an adaptive parameterized algorithm is asymptotically never worse than the best unparameterized algorithm, while it already outperforms it for slightly non-trivial parameter values. As illustrated in the first part of this thesis, for many problems there exist efficient parameterized algorithms for multiple parameters, each describing a different kind of structure. In the second part of this thesis, we explore how to combine such homogeneous structures into more general, heterogeneous structures. Using algebraic expressions that can be used to define the structures described by parameters, we characterize heterogeneous structures in a clean and robust way, and we showcase this for the heterogeneous merge of the parameters tree-depth and modular-width by presenting parameterized algorithms on such heterogeneous graph classes whose running times match the homogeneous cases throughout.
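    To make one of these parameters concrete: tree-depth can be computed directly from its standard recursive definition. The brute-force sketch below (an illustration of the parameter's meaning only, not one of the efficient algorithms developed in the thesis; the helper tree_depth is hypothetical) follows that definition using networkx.

        import networkx as nx

        def tree_depth(G):
            """Tree-depth via its recursive definition: a single vertex has
            tree-depth 1; a connected graph G has 1 + min over vertices v of
            tree_depth(G - v); a disconnected graph takes the maximum over its
            connected components. Exponential-time, for illustration only."""
            if G.number_of_nodes() == 1:
                return 1
            if not nx.is_connected(G):
                return max(tree_depth(G.subgraph(c).copy())
                           for c in nx.connected_components(G))
            return 1 + min(tree_depth(G.subgraph(set(G) - {v}).copy()) for v in G)

        # Example: the path on 4 vertices has tree-depth 3.
        print(tree_depth(nx.path_graph(4)))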