FUSE (Fuzzy Similarity Measure) - A measure for determining fuzzy short text similarity using Interval Type-2 fuzzy sets
Measurement of the semantic and syntactic similarity of human utterances is essential if machines are to produce understandable language when engaging in dialogue with users. However, human language is complex, and the semantic meaning of an utterance usually depends both on the context at a given time and on learnt experience of the meaning of the perception-based words used. Limited work, in terms of representation and coverage, has been done on the development of fuzzy semantic similarity measures. This paper proposes a new measure known as FUSE (FUzzy Similarity mEasure), which determines similarity using expanded categories of perception-based words that have been modelled using Interval Type-2 fuzzy sets. The paper describes the method of obtaining human ratings of these words based on Mendel's methodology and applies them within the FUSE algorithm. FUSE is then evaluated on three established datasets and compared with two known semantic similarity algorithms. Results indicate FUSE provides higher correlations with human ratings.
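As an illustration of the kind of computation involved when words are modelled as Interval Type-2 fuzzy sets, the widely used Jaccard similarity between two IT2 fuzzy sets can be sketched as below. The five-point discretised domain and all membership values are hypothetical, and the abstract does not state which internal IT2 similarity measure FUSE itself uses, so this is an assumed standard formulation rather than the published one.

```python
# Sketch: Jaccard similarity between two interval type-2 fuzzy sets,
# each represented by sampled lower and upper membership functions
# on a shared discretised domain. Illustrative only; the exact word
# models used in FUSE are not reproduced here.

def it2_jaccard(lower_a, upper_a, lower_b, upper_b):
    """Jaccard similarity of two IT2 fuzzy sets over a shared domain."""
    num = sum(min(ua, ub) for ua, ub in zip(upper_a, upper_b)) + \
          sum(min(la, lb) for la, lb in zip(lower_a, lower_b))
    den = sum(max(ua, ub) for ua, ub in zip(upper_a, upper_b)) + \
          sum(max(la, lb) for la, lb in zip(lower_a, lower_b))
    return num / den if den else 0.0

# Hypothetical footprints of uncertainty (lower, upper) for two
# perception words, sampled at five points of a 0-10 scale:
small  = ([0.0, 0.6, 0.2, 0.0, 0.0], [0.2, 1.0, 0.5, 0.1, 0.0])
little = ([0.0, 0.5, 0.3, 0.0, 0.0], [0.1, 0.9, 0.6, 0.2, 0.0])
print(round(it2_jaccard(small[0], small[1], little[0], little[1]), 3))  # 0.793
```

Identical sets score 1.0, disjoint sets 0.0, so the result can be used directly as a word-similarity value in a sentence-level measure.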
Using Fuzzy Set Similarity in Sentence Similarity Measures
Sentence similarity measures the similarity between two blocks of text. A semantic similarity measure between individual pairs of words, each taken from the two blocks of text, has been used in STASIS. Word similarity is measured based on the distance between the words in the WordNet ontology. If vague words, referred to as fuzzy words, are not found in WordNet, their semantic similarity cannot be used in the sentence similarity measure. FAST and FUSE transform these vague words into fuzzy set representations, type-1 and type-2 respectively, to create ontological structures in which the same semantic similarity measure used with WordNet can be applied. This paper investigates eliminating the process of building an ontology from the fuzzy words and instead directly using fuzzy set similarity measures between the fuzzy words in the task of sentence similarity measurement. Their performance is evaluated based on their correlation with human judgments of sentence similarity. In addition, statistical tests showed no significant difference between the sentence similarity values produced using fuzzy set similarity measures between fuzzy sets representing fuzzy words and those produced using FAST semantic similarity within ontologies representing fuzzy words.
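The WordNet-distance-based word similarity that STASIS relies on can be sketched as follows. This follows the commonly cited Li et al. formulation, in which similarity decays exponentially with the path length between two words and grows with the depth of their deepest common subsumer; the `alpha` and `beta` defaults below are the values usually quoted in the literature, assumed here rather than taken from this abstract.

```python
import math

# Sketch of a Li-style WordNet word-similarity function: path_length
# is the number of edges between the two word senses in WordNet, and
# subsumer_depth is the depth of their deepest common ancestor.

def li_word_similarity(path_length, subsumer_depth, alpha=0.2, beta=0.45):
    f_length = math.exp(-alpha * path_length)   # decays with distance
    f_depth = math.tanh(beta * subsumer_depth)  # grows with shared depth
    return f_length * f_depth
```

A fuzzy word absent from WordNet has no path length or subsumer depth, which is exactly the gap FAST and FUSE address by modelling such words as fuzzy sets instead.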
Fuzzy natural language similarity measures through computing with words
A vibrant area of research is the understanding of human language by machines so that they can engage in conversation with humans to achieve set goals. Human language is fuzzy by nature, with words meaning different things to different people depending on the context. Fuzzy words are words with a subjective meaning, typically used in everyday human natural language dialogue; they are often ambiguous and vague in meaning and dependent on an individual's perception. Fuzzy Sentence Similarity Measures (FSSMs) are algorithms that can compare two or more short texts which contain fuzzy words and return a numeric measure of similarity of meaning between them.
The motivation for this research is to create a new FSSM called FUSE (FUzzy Similarity mEasure). FUSE is an ontology-based similarity measure that uses Interval Type-2 Fuzzy Sets to model relationships between categories of human perception-based words. Four versions of FUSE (FUSE_1.0 to FUSE_4.0) have been developed, investigating the presence of linguistic hedges, the expansion of fuzzy categories and their use in natural language, the incorporation of logical operators such as 'not', and the introduction of the fuzzy influence factor.
FUSE has been compared to several state-of-the-art, traditional semantic similarity measures (SSMs) which do not consider the presence of fuzzy words. FUSE has also been compared to the only published FSSM, FAST (Fuzzy Algorithm for Similarity Testing), which has a limited dictionary of fuzzy words and uses Type-1 Fuzzy Sets to model relationships between categories of human perception-based words. Results have shown that FUSE overcomes the limitations of traditional SSMs and the FAST algorithm by achieving a higher correlation with the average human rating (AHR) across several published and gold-standard datasets.
To validate FUSE in the context of a real-world application, versions of the algorithm were incorporated into a simple Question & Answer (Q&A) dialogue system (DS), referred to as FUSION, to evaluate the improvement in natural language understanding. FUSION was tested on two different scenarios using human participants, and results were compared to a traditional SSM known as STASIS. Results of the DS experiments showed an average True rating of 88.65% for FUSION compared with 61.36% for STASIS. These results show that the FUSE algorithm can be used within real-world applications, and evaluation of the DS showed an improvement in natural language understanding, allowing semantic similarity to be calculated more accurately from natural user responses.
The key contributions of this work can be summarised as follows: the development of a new methodology to model fuzzy words using Interval Type-2 fuzzy sets, leading to the creation of a fuzzy dictionary for nine fuzzy categories, a useful resource which can be used by other researchers in the field of natural language processing and Computing with Words, and in other fuzzy applications such as semantic clustering; the development of an FSSM known as FUSE, expanded over four versions, investigating the incorporation of linguistic hedges, the expansion of fuzzy categories and their use in natural language, the inclusion of logical operators such as 'not', and the introduction of the fuzzy influence factor; and the integration of the FUSE algorithm into a simple Q&A DS referred to as FUSION, demonstrating that FSSMs can be used in a real-world practical implementation, therefore making FUSE and its fuzzy dictionary generalisable to other applications.
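The linguistic hedges investigated across the FUSE versions can be illustrated with the classic Zadeh-style concentration and dilation operators, which modify a fuzzy word's membership value. The abstract does not specify which hedge operators FUSE adopts, so the standard textbook forms are assumed below, and the membership value is hypothetical.

```python
# Sketch of classic Zadeh-style linguistic hedges. "very" concentrates
# a membership function (squaring pushes mid values down), while
# "somewhat" dilates it (square root pulls mid values up).

def very(mu):
    """Concentration hedge, e.g. 'very small'."""
    return mu ** 2

def somewhat(mu):
    """Dilation hedge, e.g. 'somewhat small'."""
    return mu ** 0.5

mu_small = 0.64  # hypothetical membership of a value in "small"
print(very(mu_small))      # ~0.4096
print(somewhat(mu_small))  # ~0.8
```

Because both hedges keep values in [0, 1], a hedged fuzzy word can be fed into the same similarity machinery as an unhedged one.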
Fuzzy Influence in Fuzzy Semantic Similarity Measures
The field of Computing with Words has been pivotal in the development of fuzzy semantic similarity measures. Fuzzy semantic similarity measures allow the modelling of words in a given context with a tolerance for the imprecise nature of human perceptions. In this work, we look at how this imprecision can be addressed with the use of fuzzy semantic similarity measures in the field of natural language processing. A fuzzy influence factor is introduced into an existing measure known as FUSE. FUSE computes the similarity between two short texts based on weighted syntactic and semantic components in order to address the issue of comparing fuzzy words that exist in different word categories. A series of empirical experiments investigates the effect of introducing a fuzzy influence factor into FUSE across a number of short text datasets. Comparisons with other similarity measures demonstrate that the fuzzy influence factor has a positive effect in improving the correlation of machine similarity judgments with similarity judgments of humans.
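The description of FUSE as weighting syntactic and semantic components suggests a combination of the following shape. This is a minimal sketch under assumed names: the published weighting and the exact way the fuzzy influence factor enters the measure are not given in this abstract, so `delta` and `fuzzy_influence` are illustrative parameters only.

```python
# Sketch of a weighted combination of semantic and syntactic short-text
# similarity components. Here the (assumed) fuzzy influence factor
# scales the contribution of the semantic component, which is where
# fuzzy-word comparisons across categories would enter.

def short_text_similarity(sem, syn, delta=0.85, fuzzy_influence=1.0):
    """sem, syn in [0, 1]; delta weights semantic vs. syntactic."""
    return delta * (fuzzy_influence * sem) + (1 - delta) * syn
```

Tuning the influence factor per dataset is one plausible reading of the experiments described above; the empirical values used are not stated here.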
Interpreting Human Responses in Dialogue Systems using Fuzzy Semantic Similarity Measures
Dialogue systems are automated systems that interact with humans using natural language. Much work has been done on dialogue management and learning using a range of computational intelligence-based approaches; however, the complexity of human dialogue in different contexts still presents many challenges. The key impact of the work presented in this paper is to use fuzzy semantic similarity measures embedded within a dialogue system to allow a machine to semantically comprehend human utterances in a given context and thus communicate more effectively with a human in a specific domain using natural language. To achieve this, perception-based words should be understood by a machine in the context of the dialogue. In this work, a simple question and answer dialogue system is implemented for a café customer satisfaction feedback survey. Both fuzzy and crisp semantic similarity measures are used within the dialogue engine to assess the accuracy and robustness of rule firing. Results from a 32-participant study show that the fuzzy measure improves rule matching within the dialogue system by 21.88% compared with the crisp measure known as STASIS, thus providing a more natural and fluid dialogue exchange.
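The rule-firing step described above can be sketched as threshold-based matching: each user utterance is compared to every rule's trigger phrase with a sentence-similarity function, and the best match above a threshold fires. All names, the toy word-overlap similarity, the rules, and the threshold value below are hypothetical; the paper's actual dialogue engine, rules, and measures are not reproduced here.

```python
# Sketch of threshold-based rule firing in a Q&A dialogue engine.
# `similarity` is any sentence-similarity function returning [0, 1].

def fire_rule(utterance, rules, similarity, threshold=0.7):
    best_response, best_score = None, 0.0
    for trigger, response in rules:
        score = similarity(utterance, trigger)
        if score > best_score:
            best_response, best_score = response, score
    return best_response if best_score >= threshold else None

def word_overlap(a, b):
    """Toy crisp similarity: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

rules = [("the service was good", "Thank you!"),
         ("the food was bad", "Sorry to hear that.")]
print(fire_rule("service was very good", rules, word_overlap,
                threshold=0.5))  # Thank you!
```

A fuzzy measure improves this loop precisely where the trigger and the utterance use different perception-based words ("good" vs. "great"), which a crisp overlap like the toy one above scores poorly.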
An investigation into fuzzy negation in semantic similarity measures
Machine computation of semantic similarity between short texts aims to approximate human measurements of similarity, often influenced by context, domain knowledge, and life experiences. Logical negation in natural language plays an important role as it can change the polarity of meaning within a sentence, yet it is a complex problem for semantic similarity measures to identify and measure. This paper investigates the impact of logical negation on determining fuzzy semantic similarity between short texts containing fuzzy words. A methodology is proposed to interpret the implications of a negation word on a fuzzy word within the context of a user utterance. Three known fuzzy logical 'not' operators proposed by Zadeh, Yager and Sugeno are incorporated into a fuzzy semantic similarity measure called FUSE. Experiments are conducted on a sample dataset of short text inputs captured through human engagement with a dialogue system. Results show that Yager's weighted operator is the most suitable, achieving 90.47% matching accuracy. This finding has significant implications for the field of semantic similarity measures. It provides a more accurate way to measure the similarity of short texts that contain fuzzy words combined with logical negation. Whilst validation of the approach on more substantial datasets is required, this study contributes to a better understanding of how to account for logical negation in fuzzy semantic similarity measures and provides a valuable methodology for future research in this area.
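The three fuzzy complement operators the paper compares have standard definitions, which can be sketched as below for a membership value mu in [0, 1]. The parameter values `w` and `lam` are illustrative defaults; the abstract does not state the parameters tuned for FUSE.

```python
# The standard Zadeh, Yager, and Sugeno fuzzy complement ("not")
# operators. Each maps a membership value mu in [0, 1] back into
# [0, 1], with not(0) = 1 and not(1) = 0.

def not_zadeh(mu):
    return 1.0 - mu

def not_yager(mu, w=2.0):
    """Yager class complement; w > 0 controls the curve shape."""
    return (1.0 - mu ** w) ** (1.0 / w)

def not_sugeno(mu, lam=1.0):
    """Sugeno class complement; lam > -1 controls the curve shape."""
    return (1.0 - mu) / (1.0 + lam * mu)
```

All three agree at the endpoints but differ in between, which is why the choice of operator measurably changes how a negated fuzzy word ("not small") is scored against an utterance.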
Keystroke dynamics in the pre-touchscreen era
Biometric authentication seeks to measure an individual's unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprints, signatures, or iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches for example, their substantial infrastructure costs, and intrusive nature, make them undesirable and indeed impractical for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals' typing signatures. Such variables may include measurement of the period between key presses, and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis on the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts.
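The timing variables the review mentions, the periods between key presses and releases, are commonly reduced to dwell times (how long each key is held) and flight times (the gap between releasing one key and pressing the next). A minimal sketch of extracting both from timestamped events follows; the event format and the sample values are hypothetical.

```python
# Sketch: extracting dwell and flight times from keystroke events.
# Each event is (key, press_ms, release_ms), in typing order.

def timing_features(events):
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell, flight

events = [("p", 0, 90), ("a", 150, 230), ("s", 300, 410)]
dwell, flight = timing_features(events)
print(dwell)   # [90, 80, 110]
print(flight)  # [60, 70]
```

Vectors of such features, gathered over many typing sessions, are what the statistical, neural-network, and fuzzy-logic classifiers described above operate on.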