Reasoning with Non-Numeric Linguistic Variables
Where decisions are based on imprecise numeric data and linguistic variables, the development of automated decision aids presents particular difficulties. In such applications, linguistic variables often take their values from a pre-ordered set of vaguely defined linguistic terms. The mathematical structures that arise from the assumption that sets of linguistic terms are pairwise tolerant are considered. A homomorphism between tolerance spaces, filter bases and fuzzy numbers is shown. A proposal for modeling linguistic terms with an ordered set of fuzzy numbers is introduced. A procedure for structured knowledge acquisition based on the topology of the term sets and the cognitive theory of prototypes is shown to give rise to sparse rule bases. Similarity as a function of “distance” between fuzzy numbers treated as tolerance mappings is used as an inference mechanism in sparse rule bases to give linguistically valued outputs. Measuring the “distance” between fuzzy sets so that it corresponds to intuitive notions of nearness is not straightforward, since the usual metric axioms are not adequate. An alternative way of measuring “distance” between fuzzy numbers is introduced, which reduces to the usual one when applied to crisp numbers.
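The idea of an ordered term set of fuzzy numbers with a distance that reduces to the crisp case can be sketched as follows. This is a toy illustration only: the triangular representation, the parameter-wise distance, and the `similarity` mapping are assumptions for the sketch, not the specific measure proposed in the abstract.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TriFuzzy:
    """Triangular fuzzy number (left foot, peak, right foot)."""
    a: float  # left foot (membership reaches 0)
    b: float  # peak (membership = 1)
    c: float  # right foot (membership reaches 0)


def distance(x: TriFuzzy, y: TriFuzzy) -> float:
    """Average parameter-wise distance between two triangular fuzzy numbers.

    For crisp numbers (a == b == c) this reduces to the ordinary
    distance |x - y| on the real line, as the abstract requires of
    any candidate measure.
    """
    return (abs(x.a - y.a) + abs(x.b - y.b) + abs(x.c - y.c)) / 3.0


def similarity(x: TriFuzzy, y: TriFuzzy, scale: float = 1.0) -> float:
    """Map distance into (0, 1]; 1.0 means the terms coincide."""
    return 1.0 / (1.0 + distance(x, y) / scale)


# A pre-ordered linguistic term set "low < medium < high" modelled
# as an ordered set of fuzzy numbers on [0, 1].
low = TriFuzzy(0.0, 0.0, 0.5)
medium = TriFuzzy(0.25, 0.5, 0.75)
high = TriFuzzy(0.5, 1.0, 1.0)

# Crisp numbers embed as degenerate triangles.
crisp_3 = TriFuzzy(0.3, 0.3, 0.3)
crisp_7 = TriFuzzy(0.7, 0.7, 0.7)
```

In a sparse rule base, an observed term would be matched against each rule antecedent via `similarity`, and the rule with the nearest antecedent would fire, yielding a linguistically valued output.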
(Quantum) Space-Time as a Statistical Geometry of Fuzzy Lumps and the Connection with Random Metric Spaces
We develop a kind of pregeometry consisting of a web of overlapping fuzzy lumps which interact with each other. The individual lumps are understood as certain closely entangled subgraphs (cliques) in a dynamically evolving network which, in a certain approximation, can be visualized as a time-dependent random graph. This strand of ideas is merged with another one, deriving from ideas developed some time ago by Menger et al., that is, the concept of probabilistic, or random, metric spaces, representing a natural extension of the metrical continuum into a more microscopic regime. Our general goal is to find a better adapted geometric environment for the description of microphysics. In this sense one may also view it as a dynamical randomisation of the causal-set framework developed by e.g. Sorkin et al. In doing this we incorporate, as a perhaps new aspect, various concepts from fuzzy set theory.
Comment: 25 pages, Latex, no figures, some references added, some minor changes added relating to previous work
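The clique-as-lump picture above has a simple combinatorial core: sample a random graph and read off its cliques. The sketch below is a static toy under stated assumptions (an Erdős–Rényi graph and brute-force clique enumeration); the paper's network is dynamical and far richer.

```python
import itertools
import random


def random_graph(n: int, p: float, seed: int = 0) -> set:
    """Erdos-Renyi random graph G(n, p): each possible edge is
    present independently with probability p."""
    rng = random.Random(seed)
    return {frozenset((i, j))
            for i, j in itertools.combinations(range(n), 2)
            if rng.random() < p}


def is_clique(nodes: tuple, edges: set) -> bool:
    """True iff every pair of the given nodes is joined by an edge."""
    return all(frozenset((i, j)) in edges
               for i, j in itertools.combinations(nodes, 2))


def cliques_of_size(n: int, k: int, edges: set) -> list:
    """All k-node cliques ('lumps') in the graph; brute force,
    adequate for small n."""
    return [c for c in itertools.combinations(range(n), k)
            if is_clique(c, edges)]


edges = random_graph(n=10, p=0.5, seed=42)
lumps = cliques_of_size(10, 3, edges)  # the 3-node "fuzzy lumps"
```

A time-dependent version would resample or rewire `edges` at each step and track how the cliques overlap and evolve.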
Measuring Relations Between Concepts In Conceptual Spaces
The highly influential framework of conceptual spaces provides a geometric
way of representing knowledge. Instances are represented by points in a
high-dimensional space and concepts are represented by regions in this space.
Our recent mathematical formalization of this framework is capable of
representing correlations between different domains in a geometric way. In this
paper, we extend our formalization by providing quantitative mathematical
definitions for the notions of concept size, subsethood, implication,
similarity, and betweenness. This considerably increases the representational power of our formalization by introducing measurable ways of describing relations between concepts.
Comment: Accepted at SGAI 2017 (http://www.bcs-sgai.org/ai2017/). The final publication is available at Springer via https://doi.org/10.1007/978-3-319-71078-5_7. arXiv admin note: substantial text overlap with arXiv:1707.05165, arXiv:1706.0636
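The notions of concept size and subsethood can be illustrated with regions in a low-dimensional space. This sketch deliberately simplifies: it models concepts as axis-aligned boxes with volume as size, whereas the formalization in the abstract works with richer (fuzzified) regions; the `apple`/`fruit` domains and coordinates are invented for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Box:
    """A concept as an axis-aligned box in an n-dimensional space."""
    low: tuple   # lower corner, one value per dimension
    high: tuple  # upper corner, one value per dimension


def size(c: Box) -> float:
    """Concept size as the volume of its region."""
    v = 1.0
    for lo, hi in zip(c.low, c.high):
        v *= max(0.0, hi - lo)
    return v


def intersection(a: Box, b: Box) -> Box:
    """Overlap region of two boxes (may be empty, i.e. zero volume)."""
    return Box(tuple(max(x, y) for x, y in zip(a.low, b.low)),
               tuple(min(x, y) for x, y in zip(a.high, b.high)))


def subsethood(a: Box, b: Box) -> float:
    """Degree to which concept a lies inside concept b: the fraction
    of a's volume covered by b (1.0 = a is fully contained in b)."""
    sa = size(a)
    return size(intersection(a, b)) / sa if sa > 0 else 0.0


# Hypothetical 2-D space with dimensions (sweetness, acidity).
apple = Box((0.6, 0.2), (0.9, 0.5))
fruit = Box((0.3, 0.0), (1.0, 0.8))
```

With such measures, implication between concepts can be read off from a subsethood degree near 1.0, and a graded similarity can be built from the distance between regions.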
Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
We provide formal definitions and efficient secure techniques for
- turning noisy information into keys usable for any cryptographic
application, and, in particular,
- reliably and securely authenticating biometric data.
Our techniques apply not just to biometric information, but to any keying
material that, unlike traditional cryptographic keys, is (1) not reproducible
precisely and (2) not distributed uniformly. We propose two primitives: a
"fuzzy extractor" reliably extracts nearly uniform randomness R from its input;
the extraction is error-tolerant in the sense that R will be the same even if
the input changes, as long as it remains reasonably close to the original.
Thus, R can be used as a key in a cryptographic application. A "secure sketch"
produces public information about its input w that does not reveal w, and yet
allows exact recovery of w given another value that is close to w. Thus, it can
be used to reliably reproduce error-prone biometric inputs without incurring
the security risk inherent in storing them.
We define the primitives to be both formally secure and versatile,
generalizing much prior work. In addition, we provide nearly optimal
constructions of both primitives for various measures of "closeness" of input data, such as Hamming distance, edit distance, and set difference.
Comment: 47 pp., 3 figures. Prelim. version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos