Data granulation by the principles of uncertainty
Research in granular modeling has produced a variety of mathematical models,
such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets,
all of which are suitable for characterizing so-called information granules.
Modeling of the input data uncertainty is recognized as a crucial aspect in
information granulation. Moreover, the uncertainty is a well-studied concept in
many mathematical settings, such as those of probability theory, fuzzy set
theory, and possibility theory. This fact suggests that an appropriate
quantification of the uncertainty expressed by the information granule model
could be used to define an invariant property, to be exploited in practical
situations of information granulation. From this perspective, a procedure of
information granulation is effective if the uncertainty conveyed by the
synthesized information granule is in a monotonically increasing relation with
the uncertainty of the input data. In this paper, we present a data granulation
framework that elaborates over the principles of uncertainty introduced by
Klir. Since uncertainty is a mesoscopic descriptor of systems and data, it is
possible to apply such principles regardless of the input data type and the
specific mathematical setting adopted for the information granules. The
proposed framework is conceived (i) to offer a guideline for the synthesis of
information granules and (ii) to establish a groundwork for comparing and
quantitatively judging different data granulation procedures. To provide a
suitable case study, we introduce a new data granulation technique based on the
minimum sum of distances, which is designed to generate type-2 fuzzy sets. We
analyze the procedure by performing different experiments on two distinct data
types: feature vectors and labeled graphs. Results show that the uncertainty of
the input data is suitably conveyed by the generated type-2 fuzzy set models.
Comment: 16 pages, 9 figures, 52 references
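The paper's own granulation procedure is not reproduced here; as a minimal illustration of the idea that an information granule "conveys" uncertainty, one can quantify an interval type-2 fuzzy set's uncertainty by the area of its footprint of uncertainty (FOU). The Gaussian membership functions below are hypothetical examples, not taken from the paper:

```python
import numpy as np

def fou_area(x, lower_mf, upper_mf):
    """Approximate the area of the footprint of uncertainty (FOU) of an
    interval type-2 fuzzy set: the region between its upper and lower
    membership functions, integrated by a simple Riemann sum."""
    gap = upper_mf(x) - lower_mf(x)
    dx = x[1] - x[0]
    return float(gap.sum() * dx)

# Hypothetical interval type-2 set: a Gaussian with uncertain width,
# bounded by a wide (sigma=1.5) and a narrow (sigma=0.5) membership function.
x = np.linspace(-5.0, 5.0, 1001)
upper = lambda t: np.exp(-t**2 / (2 * 1.5**2))
lower = lambda t: np.exp(-t**2 / (2 * 0.5**2))

print(fou_area(x, lower, upper))  # ≈ 2.5
```

A wider FOU (larger area) corresponds to a more uncertain granule, which is the kind of scalar descriptor the framework's monotonicity requirement compares against the uncertainty of the input data.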
Classifying Research Papers with the Computer Science Ontology
Ontologies of research areas are important tools for characterising, exploring and analysing the research landscape. We recently released the Computer Science Ontology (CSO), a large-scale, automatically generated ontology of research areas, which includes about 26K topics and 226K semantic relationships. CSO currently powers several tools adopted by the Springer Nature editorial team and has been used to enable a variety of solutions, such as classifying research publications, detecting research communities, and predicting research trends. As an effort to encourage the usage of CSO, we have developed the CSO Portal, a web application that enables users to download, explore, and provide granular feedback at different levels of the ontology. In this paper, we present the CSO Classifier, an application for automatically classifying academic papers according to the rich taxonomy of topics from CSO. The aim is to facilitate the adoption of CSO across the various communities engaged with scholarly data and to foster the development of new applications based on this knowledge base.
Ontological theory for ontological engineering: Biomedical systems information integration
Software application ontologies have the potential to become the keystone in state-of-the-art information management techniques. It is expected that these ontologies will support the sort of reasoning power required to navigate large and complex terminologies correctly and efficiently. Yet, there is one problem in particular that continues to stand in our way. As these terminological structures increase in size and complexity, and the drive to integrate them inevitably swells, it is clear that the level of consistency required for such navigation will become correspondingly difficult to maintain. While descriptive semantic representations are certainly a necessary component to any adequate ontology-based system, so long as ontology engineers rely solely on semantic information, without a sound ontological theory informing their modeling decisions, this goal will surely remain out of reach. In this paper we describe how Language and Computing nv (L&C), along with The Institute for Formal Ontology and Medical Information Sciences (IFOMIS), are working towards developing and implementing just such a theory, combining the open
software architecture of L&C’s LinkSuite™ with the philosophical rigor of IFOMIS’s Basic Formal Ontology. In this way we aim to move beyond the more or less simple controlled vocabularies that have dominated the industry to date.
Knowledge Engineering from Data Perspective: Granular Computing Approach
Rough set theory is a mathematical approach to uncertainty and vagueness in data analysis, introduced by Zdzislaw Pawlak in the 1980s. Rough set theory assumes the underlying structure of knowledge is a partition. We have extended Pawlak’s concept of knowledge to coverings, taking a soft approach that regards any generalized subset as basic knowledge. We regard a covering as basic knowledge from which the theory of knowledge approximations and learning, knowledge dependency, and reducts are developed.
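The covering-based approximations the abstract refers to can be sketched on a toy universe. This is an illustration of the standard lower/upper approximation idea generalized from partitions to coverings, not the paper's full formalism; the universe and covering below are hypothetical:

```python
def lower_approx(target, covering):
    """Covering-based lower approximation: union of all knowledge
    blocks wholly contained in the target set."""
    out = set()
    for block in covering:
        if block <= target:      # block is a subset of the target
            out |= block
    return out

def upper_approx(target, covering):
    """Covering-based upper approximation: union of all knowledge
    blocks that intersect the target set."""
    out = set()
    for block in covering:
        if block & target:       # block overlaps the target
            out |= block
    return out

# Hypothetical covering of a small universe; unlike Pawlak's
# partition-based knowledge, the blocks may overlap.
covering = [{1, 2}, {2, 3}, {3, 4, 5}]
target = {1, 2, 3}

print(lower_approx(target, covering))  # {1, 2, 3}
print(upper_approx(target, covering))  # {1, 2, 3, 4, 5}
```

The gap between the two approximations (here {4, 5}) is the boundary region, which captures the vagueness of the target set relative to the given knowledge.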
Implicit yield function formulation for granular and rock-like materials
The constitutive modelling of granular, porous and quasi-brittle materials is
based on yield (or damage) functions, which may exhibit features (for instance,
lack of convexity, or branches where the values go to infinity, or false
elastic domains) preventing the use of efficient return-mapping integration
schemes. This problem is solved by proposing a general construction strategy to
define an implicitly defined convex yield function starting from any convex
yield surface. Based on this implicit definition of the yield function, a
return-mapping integration scheme is implemented and tested for elastic-plastic
(or -damaging) rate equations. The scheme is general and, although it
introduces a numerical cost when compared to situations where the scheme is not
needed, is demonstrated to perform correctly and accurately.
Comment: 19 pages
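The paper's implicit construction of the yield function is not reproduced here; for context, the return-mapping scheme it targets can be sketched in the simplest well-behaved case, one-dimensional rate-independent plasticity with a convex yield function f(σ) = |σ| − σ_y, where the closest-point projection has a closed form. Symbols and values below are assumed for illustration:

```python
import numpy as np

def return_map(sigma_trial, sigma_y, E, H=0.0):
    """One return-mapping step for 1D plasticity with linear isotropic
    hardening (illustrative convex case, not the paper's implicit method).
    Returns the updated stress and the plastic multiplier increment."""
    f_trial = abs(sigma_trial) - sigma_y
    if f_trial <= 0.0:
        # elastic step: the trial state is already admissible
        return sigma_trial, 0.0
    # plastic step: project the trial stress back onto the yield surface
    dgamma = f_trial / (E + H)
    sigma = sigma_trial - E * dgamma * np.sign(sigma_trial)
    return sigma, dgamma

# Hypothetical values: trial stress beyond a 250 MPa yield stress,
# Young's modulus 200 GPa, perfect plasticity (H = 0).
sigma, dgamma = return_map(300.0, 250.0, 200e3)
print(sigma, dgamma)  # stress is returned exactly onto the yield surface
```

Yield functions with false elastic domains or non-convex branches break the uniqueness of this projection, which is the failure mode the paper's implicitly defined convex yield function is constructed to avoid.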