
    Fuzzy Sets, Fuzzy Logic and Their Applications

    The present book contains 20 articles, selected from the 53 manuscripts submitted to the Special Issue “Fuzzy Sets, Fuzzy Logic and Their Applications” of the MDPI journal Mathematics. The articles, which appear in the book in the order in which they were accepted and were published in Volumes 7 (2019) and 8 (2020) of the journal, cover a wide range of topics connected to the theory and applications of fuzzy systems and their extensions and generalizations. This range includes, among others, management of uncertainty in a fuzzy environment; fuzzy assessment methods of human-machine performance; fuzzy graphs; fuzzy topological and convergence spaces; bipolar fuzzy relations; type-2 fuzzy sets; and intuitionistic, interval-valued, complex, picture, and Pythagorean fuzzy sets, soft sets, and algebras. The applications presented are oriented to finance, the fuzzy analytic hierarchy process, green supply chain industries, smart health practice, and hotel selection. This wide range of topics makes the book interesting for all those working in the wider area of fuzzy sets, fuzzy systems, and fuzzy logic, as well as for those with the proper mathematical background who wish to become familiar with recent advances in fuzzy mathematics, which has entered almost all sectors of human life and activity.

    Fuzzy Sets, Fuzzy Logic and Their Applications 2020

    The present book contains the 24 articles accepted for and published in the Special Issue “Fuzzy Sets, Fuzzy Logic and Their Applications, 2020” of the MDPI journal Mathematics, which covers a wide range of topics connected to the theory and applications of fuzzy sets, fuzzy systems, and fuzzy logic, and their extensions/generalizations. These topics include, among others, fuzzy graphs; fuzzy numbers; fuzzy equations; fuzzy linear spaces; intuitionistic fuzzy sets; soft sets; type-2 fuzzy sets; bipolar fuzzy sets; plithogenic sets; fuzzy decision making; fuzzy governance; fuzzy models in the mathematics of finance; and a philosophical treatise on the connection of scientific reasoning with fuzzy logic. It is hoped that the book will be interesting and useful for those working in the area of fuzzy sets, fuzzy systems, and fuzzy logic, as well as for those with the proper mathematical background who are willing to become familiar with recent advances in fuzzy mathematics, which has become prevalent in almost all sectors of human life and activity.

    Contributions to reasoning on imprecise data

    This thesis contains four contributions that advocate cautious statistical modelling and inference. They achieve this by taking sets of models into account, either directly or indirectly by looking at compatible data situations. Special care is taken to avoid assumptions that are technically convenient but reduce the uncertainty involved in an unjustified manner. The thesis provides methods for cautious statistical modelling and inference that are able to exhaust the potential of precise and vague data, motivated by different fields of application ranging from political science to official statistics.

    First, the inherently imprecise Nonparametric Predictive Inference model is employed in the cautious selection of splitting variables in the construction of imprecise classification trees, which are able to describe a structure and allow for reasonably high predictive power. Depending on the interpretation of vagueness, different strategies for vague data are then discussed in terms of finite random closed sets. On the one hand, the data to be analysed are regarded as set-valued answers to an item in a questionnaire, where each possible answer corresponding to a subset of the sample space is interpreted as a separate entity. In this way the finite random set is reduced to an (ordinary) random variable on a transformed sample space. The context of application is the analysis of voting intentions, where it is shown that the presented approach is able to characterise the undecided in a more detailed way than common approaches can. Although the presented analysis, regarded as a first step, is carried out on set-valued data that are suitably self-constructed with respect to the scientific research question, it clearly demonstrates that the full potential of this quite general framework is not yet exhausted; it is capable of dealing with more complex applications.

    On the other hand, the vague data are produced by set-valued single imputation (imprecise imputation), where the finite random sets are interpreted as the result of some (unspecified) coarsening. The approach is presented within the context of statistical matching, which is used to gain joint knowledge on features that were not jointly collected in the initial data production. This is especially relevant in data production, e.g. in official statistics, as it allows fusing the information of already accessible data sets into a new one without requiring actual data collection in the field.

    Finally, in order to share data, they need to be suitably anonymised. For microaggregation, a specific class of anonymisation techniques, the ability to support inference on generalised linear regression models is evaluated. To this end, the microaggregated data are regarded as a set of compatible, unobserved underlying data situations, and two strategies are proposed. In the first, a maximax-like optimisation strategy is pursued in which the underlying unobserved data are incorporated into the regression model as nuisance parameters, providing a concise yet over-optimistic estimate of the regression coefficients. In the second, an approach in terms of partial identification, which is inherently more cautious than the previous one, is applied to estimate the set of all regression coefficients that are obtained by performing the estimation on each compatible data situation.
    Vague data are deemed preferable to precise data, as they additionally encompass the uncertainty of the individual observation and therefore have a higher informational value. However, to the present day there are few (credible) statistical models that are able to deal with vague or set-valued data. For this reason, the collection of such data is neglected in data production, preventing such models from exhausting their full potential. This in turn prevents a thorough evaluation, negatively affecting the (further) development of such models. This situation is a variant of the chicken-or-egg dilemma. The ambition of this thesis is to break this cycle by providing actual methods for dealing with vague data in relevant practical situations, in order to stimulate the required data production.
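
    To make the last idea concrete, here is a minimal, self-contained sketch (in Python, with invented numbers) of partial identification over compatible data situations: each coarsened predictor value is treated as an interval, compatible precise datasets are sampled, and the resulting range of OLS slopes approximates the identification region. It illustrates the general principle only, not the thesis's estimators.

```python
# A minimal sketch of partial identification under coarsened data,
# assuming each observed predictor is only known up to an interval
# (e.g. after microaggregation or another anonymisation step). The
# data and the interval half-width `d` are illustrative, not from
# the thesis.
import numpy as np

rng = np.random.default_rng(0)

# Observed (coarsened) data: interval midpoints for x, precise y.
x_mid = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y     = np.array([1.2, 1.9, 3.2, 3.8, 5.1])
d = 0.5  # half-width of each interval of compatible x-values

def ols_slope(x, y):
    """Ordinary least squares slope of a simple linear model."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

# Explore compatible data situations by Monte Carlo: draw precise
# x-vectors inside the intervals and record the induced slope. The
# set of all such slopes approximates the identification region.
slopes = []
for _ in range(20_000):
    x = x_mid + rng.uniform(-d, d, size=x_mid.shape)
    slopes.append(ols_slope(x, y))

print(f"slope identification region ≈ [{min(slopes):.3f}, {max(slopes):.3f}]")
```

    A single point estimate would hide how much the anonymisation blurs the regression; reporting the whole interval of compatible slopes is exactly the cautious stance the thesis argues for.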

    Fuzzy Techniques for Decision Making 2018

    Zadeh's fuzzy set theory incorporates the impreciseness of data and evaluations, by imputting the degrees by which each object belongs to a set. Its success fostered theories that codify the subjectivity, uncertainty, imprecision, or roughness of the evaluations. Their rationale is to produce new flexible methodologies in order to model a variety of concrete decision problems more realistically. This Special Issue garners contributions addressing novel tools, techniques and methodologies for decision making (inclusive of both individual and group, single- or multi-criteria decision making) in the context of these theories. It contains 38 research articles that contribute to a variety of setups that combine fuzziness, hesitancy, roughness, covering sets, and linguistic approaches. Their ranges vary from fundamental or technical to applied approaches
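
    As a reminder of the common starting point of these theories, the following minimal sketch (with an invented universe and membership degrees) shows Zadeh-style fuzzy sets and the standard min/max operations used in simple fuzzy decision rules.

```python
# A minimal sketch of fuzzy sets over a finite universe: membership
# degrees in [0, 1], combined with the standard min/max operations.
# The universe and all degrees below are made up for illustration.
universe = ["small", "medium", "large", "huge"]

cheap    = {"small": 0.9, "medium": 0.6, "large": 0.2, "huge": 0.0}
spacious = {"small": 0.1, "medium": 0.5, "large": 0.8, "huge": 1.0}

def f_and(a, b):   # intersection: pointwise minimum
    return {u: min(a[u], b[u]) for u in a}

def f_or(a, b):    # union: pointwise maximum
    return {u: max(a[u], b[u]) for u in a}

def f_not(a):      # complement: 1 - degree
    return {u: 1.0 - a[u] for u in a}

# "cheap AND spacious" as a crude decision score per option:
good_deal = f_and(cheap, spacious)
best = max(good_deal, key=good_deal.get)
print(good_deal, "->", best)   # 'medium' wins with degree 0.5
```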

    The 1992 4th NASA SERC Symposium on VLSI Design

    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next generation advances that will serve as a basis for future VLSI design.

    Trust networks for recommender systems

    Recommender systems use information about their users' profiles and relationships to suggest items that might be of interest to them. Recommenders that incorporate a social trust network among their users have the potential to make more personalized recommendations than traditional systems, provided they succeed in utilizing the additional (dis)trust information to their advantage. Such trust-enhanced recommenders consist of two main components: recommendation technologies and trust metrics (techniques that aim to estimate the trust between two unknown users). We introduce a new bilattice-based model that considers trust and distrust as two different but dependent components, and study the accompanying trust metrics. Two of their key building blocks are trust propagation and aggregation. If user a wants to form an opinion about an unknown user x, a can contact one of his acquaintances, who can contact another one, and so on, until a user is reached who is connected with x (propagation). Since a will often contact several persons, one also needs a mechanism to combine the trust scores that result from several propagation paths (aggregation). We introduce new fuzzy logic propagation operators and focus on the potential of OWA strategies and the effect of knowledge defects. Our experiments demonstrate that propagators that actively incorporate distrust are more accurate than standard approaches, and that the new aggregators result in better predictions than purely bilattice-based operators. In the second part of the dissertation, we focus on the application of trust networks in recommender systems. After introducing a new detection measure for controversial items, we show that trust-based approaches are more effective than baselines. We also propose a new algorithm that achieves an immediately high coverage while the accuracy remains adequate. Furthermore, we provide the first experimental study on the potential of distrust in a memory-based collaborative filtering recommendation process. Finally, we study the user cold start problem; we propose to identify key figures in the network and to suggest them as possible connection points for newcomers. Our experiments show that it is much more beneficial for a new user to connect to an identified key figure than to make random connections.
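
    The two building blocks can be made concrete with a small sketch. Assuming trust scores in [0, 1], product-based propagation along a chain, and an ordered weighted averaging (OWA) operator for aggregation (common textbook choices, not necessarily the dissertation's bilattice-based operators, and ignoring distrust), it might look as follows:

```python
# A minimal sketch of trust propagation and aggregation, under the
# assumptions stated above: product t-norm along a path, OWA across
# paths. All path scores below are invented.
from math import prod

def propagate(path_scores):
    """Trust along one chain a -> ... -> x (product of link scores)."""
    return prod(path_scores)

def owa(scores, weights):
    """OWA: weights apply to the *sorted* scores, not to fixed paths."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(sorted(scores, reverse=True), weights))

# Three propagation paths from a to x, then one aggregated estimate.
paths = [[0.9, 0.8], [0.7, 0.9, 0.9], [0.6, 0.5]]
estimates = [propagate(p) for p in paths]      # [0.72, 0.567, 0.30]
print(owa(estimates, [0.5, 0.3, 0.2]))         # 0.5901, optimistic OWA
```

    Because OWA weights attach to ranked scores rather than to particular paths, shifting weight toward the top positions yields optimistic estimates and shifting it toward the bottom yields cautious ones, which is the tuning knob the OWA strategies mentioned above exploit.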

    Collected Papers (on various scientific topics), Volume XIII

    This thirteenth volume of Collected Papers is an eclectic tome of 88 papers in various fields of science, such as astronomy, biology, calculus, economics, education and administration, game theory, geometry, graph theory, information fusion, decision making, instantaneous physics, quantum physics, neutrosophic logic and set, non-Euclidean geometry, number theory, paradoxes, philosophy of science, scientific research methods, statistics, and others, structured in 17 chapters (Neutrosophic Theory and Applications; Neutrosophic Algebra; Fuzzy Soft Sets; Neutrosophic Sets; Hypersoft Sets; Neutrosophic Semigroups; Neutrosophic Graphs; Superhypergraphs; Plithogeny; Information Fusion; Statistics; Decision Making; Extenics; Instantaneous Physics; Paradoxism; Mathematica; Miscellanea), comprising 965 pages, published between 2005 and 2022 in different scientific journals, by the author alone or in collaboration with the following 110 co-authors (alphabetically ordered) from 26 countries: Abduallah Gamal, Sania Afzal, Firoz Ahmad, Muhammad Akram, Sheriful Alam, Ali Hamza, Ali H. M. Al-Obaidi, Madeleine Al-Tahan, Assia Bakali, Atiqe Ur Rahman, Sukanto Bhattacharya, Bilal Hadjadji, Robert N. Boyd, Willem K.M. Brauers, Umit Cali, Youcef Chibani, Victor Christianto, Chunxin Bo, Shyamal Dalapati, Mario Dalcín, Arup Kumar Das, Elham Davneshvar, Bijan Davvaz, Irfan Deli, Muhammet Deveci, Mamouni Dhar, R. Dhavaseelan, Balasubramanian Elavarasan, Sara Farooq, Haipeng Wang, Ugur Halden, Le Hoang Son, Hongnian Yu, Qays Hatem Imran, Mayas Ismail, Saeid Jafari, Jun Ye, Ilanthenral Kandasamy, W.B. Vasantha Kandasamy, Darjan Karabašević, Abdullah Kargın, Vasilios N. Katsikis, Nour Eldeen M. Khalifa, Madad Khan, M. Khoshnevisan, Tapan Kumar Roy, Pinaki Majumdar, Sreepurna Malakar, Masoud Ghods, Minghao Hu, Mingming Chen, Mohamed Abdel-Basset, Mohamed Talea, Mohammad Hamidi, Mohamed Loey, Mihnea Alexandru Moisescu, Muhammad Ihsan, Muhammad Saeed, Muhammad Shabir, Mumtaz Ali, Muzzamal Sitara, Nassim Abbas, Munazza Naz, Giorgio Nordo, Mani Parimala, Ion Pătrașcu, Gabrijela Popović, K. Porselvi, Surapati Pramanik, D. Preethi, Qiang Guo, Riad K. Al-Hamido, Zahra Rostami, Said Broumi, Saima Anis, Muzafer Saračević, Ganeshsree Selvachandran, Selvaraj Ganesan, Shammya Shananda Saha, Marayanagaraj Shanmugapriya, Songtao Shao, Sori Tjandrah Simbolon, Florentin Smarandache, Predrag S. Stanimirović, Dragiša Stanujkić, Raman Sundareswaran, Mehmet Șahin, Ovidiu-Ilie Șandru, Abdulkadir Șengür, Mohamed Talea, Ferhat Taș, Selçuk Topal, Alptekin Ulutaș, Ramalingam Udhayakumar, Yunita Umniyati, J. Vimala, Luige Vlădăreanu, Ştefan Vlăduţescu, Yaman Akbulut, Yanhui Guo, Yong Deng, You He, Young Bae Jun, Wangtao Yuan, Rong Xia, Xiaohong Zhang, Edmundas Kazimieras Zavadskas, Zayen Azzouz Omar, Xiaohong Zhang, Zhirou Ma.

    Automated knowledge acquisition for knowledge-based systems: KE-KIT

    Despite recent progress, knowledge acquisition remains a central problem for the development of intelligent systems. Many people throughout the world are studying this area, yet very few automated techniques have made it to the marketplace. In this light, the idea of automating the knowledge acquisition process is very appealing and may lead to a breakthrough. Most (if not all) of the approaches and techniques concerning intelligent and expert systems, and specifically knowledge-based systems, can still be considered in their infancy and definitely do not subscribe to any kind of standard. Many things have yet to be learned and incorporated into the technology and combined with methods from traditional computer science and psychology. KE-KIT is a prototype system which attempts to automate a portion of the knowledge engineering process. The emphasis is on the automation of knowledge acquisition activities, but the transformation of knowledge from an intermediate form to a knowledge-base format is also addressed. The approach used to automate the knowledge acquisition process is based on the personal construct theory developed by George Kelly in the field of psychology. This thesis gives an in-depth view of knowledge engineering with a concentration on the knowledge acquisition process. Several issues and approaches are described. Greater detail surrounding the personal construct theory approach to knowledge acquisition and its use of a repertory grid is given. In addition, some existing knowledge acquisition tools are briefly explored. Details concerning the implementation of KE-KIT and reflections on its applicability round out the presented material.
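
    For readers unfamiliar with repertory grids, the following minimal sketch (with invented elements, constructs, and ratings; not KE-KIT's actual data model) shows the kind of structure a personal-construct-based tool elicits: elements rated on bipolar constructs, with a simple distance measure that can flag potentially redundant constructs for the expert to refine.

```python
# A minimal sketch of a repertory grid as used in personal construct
# theory based knowledge acquisition. Element and construct names and
# all ratings below are invented for illustration.
import numpy as np

elements   = ["valve A", "valve B", "pump"]
constructs = [("safe", "hazardous"), ("cheap", "expensive")]

# ratings[i][j]: element j rated 1..5 on construct i (1 = left pole)
ratings = np.array([[1, 2, 5],
                    [4, 4, 2]])

def construct_distance(i, k):
    """City-block distance between two construct rating rows; a small
    distance hints at redundant constructs the expert might merge."""
    return np.abs(ratings[i] - ratings[k]).sum()

for i in range(len(constructs)):
    for k in range(i + 1, len(constructs)):
        print(constructs[i], constructs[k], construct_distance(i, k))
```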

    Computational Argumentation for the Automatic Analysis of Argumentative Discourse and Human Persuasion

    Thesis by compendium of publications. Computational argumentation is the area of research that studies and analyses the use of different techniques and algorithms that approximate human argumentative reasoning from a computational viewpoint. In this doctoral thesis we study the use of different techniques proposed under the framework of computational argumentation to perform an automatic analysis of argumentative discourse, and to develop argument-based computational persuasion techniques. With these objectives in mind, we first present a complete review of the state of the art and propose a classification of existing works in the area of computational argumentation. This review allows us to contextualise and understand previous research more clearly from the human perspective of argumentative reasoning, and to identify the main limitations and future trends of the research done in computational argumentation. Secondly, to overcome some of these limitations, we create and describe a new corpus that allows us to address new challenges and investigate previously unexplored problems (e.g., the automatic evaluation of spoken debates). In conjunction with these data, a new system for argument mining is proposed and a comparative analysis of different techniques for this same task is carried out. In addition, we propose a new algorithm for the automatic evaluation of argumentative debates and evaluate it with real human debates. Thirdly, a series of studies and proposals are presented to improve the persuasiveness of computational argumentation systems in interaction with human users. In this way, the thesis presents advances in each of the main parts of the computational argumentation process (i.e., argument mining, argument-based knowledge representation and reasoning, and argument-based human-computer interaction), and proposes some of the essential foundations for the complete automatic analysis of natural language argumentative discourse.
    This thesis has been partially supported by the Generalitat Valenciana project PROMETEO/2018/002 and by the Spanish Government projects TIN2017-89156-R and PID2020-113416RB-I00. Ruiz Dolz, R. (2023). Computational Argumentation for the Automatic Analysis of Argumentative Discourse and Human Persuasion [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/194806
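
    As background for the kind of reasoning such systems build on, here is a minimal sketch of a standard component of computational argumentation: computing the grounded extension of a Dung-style abstract argumentation framework by iterating acceptability. It is a generic illustration, not the debate evaluation algorithm proposed in the thesis.

```python
# A minimal sketch of the grounded extension of an abstract
# argumentation framework (Dung, 1995), computed by iterating the
# characteristic function until a fixpoint. The toy framework at the
# bottom is invented for illustration.
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    extension = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in extension:
                continue
            attackers = {b for (b, t) in attacks if t == a}
            # a is acceptable if every attacker is itself attacked
            # by some argument already in the extension
            if all(any((c, b) in attacks for c in extension)
                   for b in attackers):
                extension.add(a)
                changed = True
    return extension

args = {"A", "B", "C"}            # C attacks B, B attacks A
print(grounded_extension(args, {("C", "B"), ("B", "A")}))  # {'C', 'A'}
```

    Intuitively, C stands unattacked, C's attack defeats B, and with B defeated the argument A is reinstated; this defend-and-reinstate pattern is the basic mechanism on which richer debate analysis builds.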