260 research outputs found

    Membership Functions for a Fuzzy Relational Database: A Comparison of the Direct Rating and New Random Proportional Methods

    Fuzzy relational databases deal with imprecise data or fuzzy information in a relational database. The purpose of this fuzzy database implementation is to retrieve images using fuzzy queries whose common-language descriptions are defined by the consensus of a particular user community. The fuzzy set, which represents the fuzzy attribute values of the images, is determined through a membership function. This paper compares two methods of constructing membership functions, Direct Rating and New Random Proportional, to determine which method gives maximum user satisfaction with minimum feedback from the community. The statistical analysis of the results suggests the use of the Direct Rating method. Moreover, the analysis shows that the performance of the New Random Proportional method can be improved by including a Not modifier. This paper also identifies and analyzes issues raised by different versions of the database system.
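
    The abstract does not spell out the mechanics of either elicitation method, so the following is only a minimal Python sketch of a Direct-Rating-style workflow: community members rate how well each image fits a fuzzy term, the ratings are averaged into membership values, and a fuzzy query selects images above a cut-off. The attribute name, rating scale and averaging rule are illustrative assumptions, not the paper's procedure.

```python
# Illustrative sketch only: aggregating user ratings into a membership
# function for a fuzzy image attribute (e.g. "bright"). The rating scale
# and the simple averaging rule are assumptions for illustration; they are
# not the exact Direct Rating procedure evaluated in the paper.
from collections import defaultdict


def direct_rating_membership(ratings):
    """ratings: iterable of (image_id, degree) pairs with degree in [0, 1].

    Each user directly rates how well an image fits the fuzzy term;
    the membership value is the mean of the ratings for that image.
    """
    per_image = defaultdict(list)
    for image_id, degree in ratings:
        per_image[image_id].append(degree)
    return {img: sum(ds) / len(ds) for img, ds in per_image.items()}


def fuzzy_select(membership, threshold=0.5):
    """A toy fuzzy query: return images whose membership exceeds a cut."""
    return sorted(img for img, mu in membership.items() if mu >= threshold)


if __name__ == "__main__":
    community_feedback = [
        ("img1", 0.9), ("img1", 0.7),   # two users rate img1 as "bright"
        ("img2", 0.2), ("img2", 0.4),
    ]
    mu = direct_rating_membership(community_feedback)
    print(mu)                       # {'img1': 0.8, 'img2': 0.3}
    print(fuzzy_select(mu, 0.5))    # ['img1']
```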

    The Use of Relation Valued Attributes in Support of Fuzzy Data

    In his paper introducing fuzzy sets, L.A. Zadeh describes the difficulty of assigning some real-world objects to a particular class when the notion of class membership is ambiguous. If exact classification is not obvious, most people approximate using intuition and may reach agreement by placing an object in more than one class. Numbers, or ‘degrees of membership’ within these classes, are used to provide an approximation that supports this intuitive process. This results in a ‘fuzzy set’: any number of ordered pairs representing both a class and the degree of membership in that class, giving a formal representation that can be used to model this process. Although the fuzzy approach to reasoning and classification makes sense, it does not comply with two of the basic principles of classical logic: the laws of contradiction and excluded middle. While these laws play a significant role in logic, it is their violation that gives fuzzy logic its useful characteristics. The problem with this representation within a database system, however, is that the class and its degree of membership are represented by two separate but indivisible attributes, and the representation may contain any number of such pairs of attributes. While the data for class and membership are maintained in individual attributes, neither attribute may exist without the other without sacrificing meaning, and maintaining a variable number of such pairs within the representation is problematic. C. J. Date suggested a relation valued attribute (RVA), which can not only encapsulate the attributes associated with the fuzzy set and impose constraints on their use, but also provide a relation that may contain any number of such pairs. The goal of this dissertation is to establish a context in which the relational database model can be extended through the implementation of an RVA to support fuzzy data on an actual system. This goal represents an opportunity to study, through application and observation, the use of fuzzy sets to support imprecise and uncertain data using database queries that appropriately adhere to the relational model. The intent is to create a pathway that may extend the support of database applications that need fuzzy logic and/or fuzzy data.
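
    As a rough illustration of the idea behind a relation valued attribute, the sketch below models a nested relation of (class, degree-of-membership) pairs so that neither half of a pair can be stored without the other, and runs a toy query over it. The schema, constraint and query are assumptions made for illustration; they are not the dissertation's implementation.

```python
# Illustrative sketch: modelling a relation valued attribute (RVA) that holds
# a fuzzy set of (class, degree-of-membership) pairs, so that neither element
# of a pair can exist without the other. The schema and the constraint check
# are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class FuzzyPair:
    label: str          # the class the object may belong to
    membership: float   # degree of membership in [0, 1]

    def __post_init__(self):
        if not 0.0 <= self.membership <= 1.0:
            raise ValueError("membership must lie in [0, 1]")


@dataclass
class ImageTuple:
    """One tuple of the outer relation; `colour` is the nested relation (RVA)."""
    image_id: str
    colour: frozenset = field(default_factory=frozenset)  # set of FuzzyPair


def where_membership_at_least(relation, label, alpha):
    """Toy query: tuples whose RVA gives `label` a membership >= alpha."""
    return [t for t in relation
            if any(p.label == label and p.membership >= alpha for p in t.colour)]


if __name__ == "__main__":
    images = [
        ImageTuple("img1", frozenset({FuzzyPair("red", 0.8), FuzzyPair("orange", 0.3)})),
        ImageTuple("img2", frozenset({FuzzyPair("red", 0.2)})),
    ]
    print([t.image_id for t in where_membership_at_least(images, "red", 0.5)])  # ['img1']
```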

    Fuzzy Membership Function Initial Values: Comparing Initialization Methods That Expedite Convergence

    Fuzzy attributes are used to quantify imprecise data that model real-world objects. To use fuzzy attributes effectively, a fuzzy membership function must be defined to provide the boundaries for the fuzzy data. The initialization of these membership function values should allow the data to converge to a stable membership value in the shortest time possible. This paper compares three initialization methods, Random, Midpoint and Random Proportional, to determine which method optimizes convergence. The comparison experiments suggest the use of the Random Proportional method.
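
    The sketch below illustrates how such an initialization comparison might be set up. The Random and Midpoint rules follow directly from their names; the basis of the Random Proportional rule is not given in the abstract, so scaling the random draw by an assumed per-term weight is an illustrative guess, and the toy convergence loop and its parameters are likewise assumptions.

```python
# Illustrative sketch of the three initialization strategies compared in the
# paper, plus a toy convergence loop for counting update steps. The "vote
# share" used by the Random Proportional rule and the update rule itself are
# assumptions for illustration only.
import random


def init_random(terms):
    return {t: random.random() for t in terms}


def init_midpoint(terms):
    return {t: 0.5 for t in terms}


def init_random_proportional(terms, vote_share):
    """vote_share: assumed prior weight per term (e.g. how often users pick it)."""
    top = max(vote_share.values())
    return {t: random.random() * vote_share[t] / top for t in terms}


def steps_to_converge(mu, target, rate=0.3, tol=0.01, max_steps=1000):
    """Move each value a fraction of the way toward its target and count steps."""
    for step in range(1, max_steps + 1):
        mu = {t: v + rate * (target[t] - v) for t, v in mu.items()}
        if all(abs(mu[t] - target[t]) < tol for t in mu):
            return step
    return max_steps


if __name__ == "__main__":
    random.seed(0)
    terms = ["dark", "medium", "bright"]
    target = {"dark": 0.2, "medium": 0.5, "bright": 0.9}   # assumed stable values
    share = {"dark": 0.2, "medium": 0.3, "bright": 0.5}    # assumed vote share
    for name, init in [("random", init_random(terms)),
                       ("midpoint", init_midpoint(terms)),
                       ("random proportional", init_random_proportional(terms, share))]:
        print(name, steps_to_converge(init, target))
```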

    I believe it's possible it might be so.... : Exploiting Lexical Clues for the Automatic Generation of Evidentiality Weights for Information Extracted from English Text

    Information formulated in natural language is being created at an incredible pace, far more quickly than we can make sense of it. Thus, computer algorithms for various kinds of text analytics have been developed to try to find nuggets of new, pertinent and useful information. However, information extracted from text is not always credible or reliable; often buried in sentences are lexical and grammatical structures that indicate the uncertainty of the proposition. Such clues include hedges such as modal adverbs and adjectives, as well as hearsay markers, indicators of inference or belief ("mindsay"), and verb forms identifying future actions that may not take place. In this thesis, we demonstrate how these lexical and grammatical forms of uncertainty can be automatically analyzed to assign an evidential weight to the proposition, which can be used to assess the credibility of the information extracted from English text.
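
    A minimal sketch of the general idea follows: scan an extracted proposition for lexical and grammatical uncertainty cues and discount its evidential weight accordingly. The cue lists and per-cue discount factors are illustrative assumptions, not the lexicon or weighting scheme developed in the thesis.

```python
# Illustrative sketch: discount the evidential weight of a proposition for
# every class of uncertainty cue found in it. Cue words and discount factors
# are assumptions for illustration only.
import re

# assumed cue classes -> (discount applied when the cue is present, cue words)
CUE_DISCOUNTS = {
    "hedge":   (0.6, {"possibly", "perhaps", "likely", "probable", "might", "may"}),
    "hearsay": (0.5, {"reportedly", "allegedly", "according"}),
    "mindsay": (0.7, {"believe", "believes", "think", "thinks", "suspect"}),
    "future":  (0.8, {"will", "shall", "going"}),
}


def evidential_weight(sentence):
    """Start from full confidence (1.0) and multiply in a discount
    for every cue class that fires at least once in the sentence."""
    tokens = set(re.findall(r"[a-z']+", sentence.lower()))
    weight, fired = 1.0, []
    for name, (discount, cues) in CUE_DISCOUNTS.items():
        if tokens & cues:
            weight *= discount
            fired.append(name)
    return weight, fired


if __name__ == "__main__":
    print(evidential_weight("The plant will reportedly close next year."))
    print(evidential_weight("I believe it's possible it might be so."))
    print(evidential_weight("The plant closed last year."))
```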

    A Hybrid Approach to the Sentiment Analysis Problem at the Sentence Level

    The objective of this article is to present a hybrid approach to the Sentiment Analysis problem at the sentence level. This new method uses essential natural language processing (NLP) techniques, a sentiment lexicon enhanced with the assistance of SentiWordNet, and fuzzy sets to estimate the semantic orientation polarity and its intensity for sentences, which provides a foundation for computing with sentiments. The proposed hybrid method is applied to three different datasets and the results achieved are compared to those obtained using Naïve Bayes and Maximum Entropy techniques. It is demonstrated that the presented hybrid approach is more accurate and precise than both Naïve Bayes and Maximum Entropy techniques when the latter are utilised in isolation. In addition, it is shown that when applied to datasets containing snippets, the proposed method performs similarly to state-of-the-art techniques.
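
    The sketch below illustrates the flavour of such a hybrid pipeline: a lexicon assigns word-level polarities, the sentence score is the average of the lexicon hits, and triangular fuzzy sets map the score to a polarity-intensity label. The toy lexicon stands in for the SentiWordNet-enhanced lexicon used in the article, and the membership function shapes are illustrative assumptions.

```python
# Illustrative sketch: lexicon-based sentence scoring followed by a fuzzy
# mapping of the score to an intensity label. The lexicon entries and the
# triangular fuzzy sets are assumptions for illustration only.
import re

# assumed word polarities in [-1, 1]
LEXICON = {"good": 0.6, "great": 0.9, "bad": -0.6, "awful": -0.9}


def triangular(x, a, b, c):
    """Standard triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


# fuzzy sets over the sentence score, one per intensity label (assumed shapes)
INTENSITY_SETS = {
    "strongly negative": lambda s: triangular(s, -1.2, -0.8, -0.3),
    "mildly negative":   lambda s: triangular(s, -0.6, -0.3,  0.0),
    "neutral":           lambda s: triangular(s, -0.2,  0.0,  0.2),
    "mildly positive":   lambda s: triangular(s,  0.0,  0.3,  0.6),
    "strongly positive": lambda s: triangular(s,  0.3,  0.8,  1.2),
}


def sentence_score(sentence):
    """Average the lexicon scores of the words found in the sentence."""
    words = re.findall(r"[a-z]+", sentence.lower())
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0


def classify(sentence):
    s = sentence_score(sentence)
    label = max(INTENSITY_SETS, key=lambda name: INTENSITY_SETS[name](s))
    return s, label


if __name__ == "__main__":
    print(classify("The film was great, really good."))   # strongly positive
    print(classify("An awful, bad experience."))           # strongly negative
```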

    Intelligent Systems

    This book is dedicated to intelligent systems of broad-spectrum application, such as personal and social biosafety or the use of intelligent sensory micro-nanosystems such as "e-nose", "e-tongue" and "e-eye". In addition, effectively acquiring information, managing knowledge and improving knowledge transfer in any medium, as well as modeling information content using meta- and hyperheuristics and semantic reasoning, all benefit from the systems covered in this book. Intelligent systems can also be applied in education, to generate intelligent distributed eLearning architectures, and in a large number of technical fields, such as industrial design, manufacturing and utilization, e.g., in precision agriculture, cartography, electric power distribution systems, intelligent building management systems, drilling operations, etc. Furthermore, decision making using fuzzy logic models, computational recognition of comprehension uncertainty, the joint synthesis of goals and means of intelligent behavior in biosystems, and diagnostic and human support in the healthcare environment have also been made easier.

    An intent-based blockchain-agnostic interaction environment


    Machine Learning in Tribology

    Tribology has been and continues to be one of the most relevant fields, being present in almost all aspects of our lives. The understanding of tribology provides us with solutions for future technical challenges. At the root of all advances made so far are multitudes of precise experiments and an increasing number of advanced computer simulations across different scales and multiple physical disciplines. Based upon this sound and data-rich foundation, advanced data handling, analysis and learning methods can be developed and employed to expand existing knowledge. Therefore, modern machine learning (ML) or artificial intelligence (AI) methods provide opportunities to explore the complex processes in tribological systems and to classify or quantify their behavior in an efficient or even real-time way. Thus, their potential also goes beyond purely academic aspects into actual industrial applications. To help pave the way, this article collection aims to present the latest research on ML or AI approaches for solving tribology-related issues, generating true added value beyond mere buzzwords. In this sense, this Special Issue can support researchers in identifying initial selections and best-practice solutions for ML in tribology.

    A Hybrid Approach to the Sentiment Analysis Problem at the Sentence Level

    This doctoral thesis addresses a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in use. The majority of the research and application building in Sentiment Analysis (SA) / Opinion Mining (OM) has been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we prove three main aspects, namely:
    1. That a Hybrid Classification Model based on the techniques mentioned above is capable of: (a) performing the same as or better than established Supervised Machine Learning techniques, namely Naïve Bayes and Maximum Entropy (ME), when the latter are utilised respectively as the only classification method applied, when calculating subjectivity polarity; and (b) computing the intensity of the polarity previously estimated.
    2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms, producing a compensatory effect.
    3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined.
    For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion:
    Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination.
    Step 2: we continue with the Hybrid Advanced Classification (HAC) method, which computes the polarity intensity of opinions/sentiments.
    Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively:
    * the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method
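
    The two aggregation devices named in points 2 and 3 can be sketched concretely. The cross-ratio uninorm U(x, y) = xy / (xy + (1 - x)(1 - y)) is the representable uninorm with neutral element 0.5, so inputs above 0.5 reinforce each other while inputs below 0.5 pull the fused value down; an IOWA operator reorders the arguments by an inducing variable before applying the OWA weights. The confidences and weights in the example are illustrative assumptions, not the thesis's HSC/HAC settings.

```python
# Illustrative sketch of the two aggregation ideas named in the abstract:
# (1) the cross-ratio uninorm, used to fuse classifier outputs in [0, 1]
#     with a compensatory effect around the neutral element 0.5;
# (2) an Induced OWA (IOWA) operator that reorders the outputs by an
#     inducing variable (here, an assumed per-classifier confidence)
#     before applying the OWA weights.
from functools import reduce


def cross_ratio_uninorm(x, y):
    """U(x, y) = xy / (xy + (1 - x)(1 - y)); undefined at (0, 1) and (1, 0)."""
    if {x, y} == {0.0, 1.0}:
        raise ValueError("undefined at (0, 1) and (1, 0)")
    num = x * y
    return num / (num + (1.0 - x) * (1.0 - y))


def fuse(outputs):
    """Fold the uninorm over several classifier outputs (it is associative)."""
    return reduce(cross_ratio_uninorm, outputs)


def iowa(pairs, weights):
    """Induced OWA: sort (inducing value, argument) pairs by the inducing
    value, then take the weighted sum of the reordered arguments."""
    assert len(weights) == len(pairs) and abs(sum(weights) - 1.0) < 1e-9
    ordered = [arg for _, arg in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * arg for w, arg in zip(weights, ordered))


if __name__ == "__main__":
    # three classifiers output a positive-polarity degree for the same sentence
    outputs = [0.7, 0.6, 0.4]
    print(round(fuse(outputs), 3))  # compensatory fusion of the three outputs

    # IOWA: induce the order by an assumed per-classifier confidence and
    # give more weight to the most confident classifiers
    confidences = [0.9, 0.5, 0.8]
    print(round(iowa(list(zip(confidences, outputs)), [0.5, 0.3, 0.2]), 3))  # 0.59
```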