4 research outputs found
Clustering Concept Chains from Ordered Data without Path Descriptions
This paper describes a process for clustering concepts into chains from data
presented randomly to an evaluating system. There are a number of rules or
guidelines that help the system to determine more accurately what concepts
belong to a particular chain and which ones do not, but it should be possible to
write these in a generic way. This mechanism also uses a flat structure without
any hierarchical path information, where the link between two concepts is made
at the level of the concept itself. It does not require related metadata, but
instead, a simple counting mechanism is used. Key to this is a count for both
the concept itself and also the group or chain that it belongs to. To test the
possible success of the mechanism, concept chain parts taken randomly from a
larger ontology were presented to the system, but only at a depth of 2 concepts
each time; that is, a root concept plus one concept that it is linked to. The
results show that this can still lead to very variable structures being formed
and can also accommodate some level of randomness.
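The abstract only outlines the counting mechanism, so the following sketch is one possible reading of it: a flat concept-to-concept link store with a count for each concept and for each link, and a greedy strongest-link traversal to recover a chain. The `threshold` parameter and the traversal strategy are assumptions for illustration, not details from the paper.

```python
from collections import defaultdict

class ChainClusterer:
    """Sketch of flat, count-based concept chaining (no path metadata).

    Each concept keeps its own count plus a count for every concept it
    has been linked to; links are made at the level of the concept
    itself, with no hierarchical path information.
    """

    def __init__(self):
        self.concept_count = defaultdict(int)  # count for the concept itself
        # count for each (concept -> linked concept) pairing
        self.link_count = defaultdict(lambda: defaultdict(int))

    def present(self, root, linked):
        """Present one depth-2 fragment: a root plus one linked concept."""
        self.concept_count[root] += 1
        self.concept_count[linked] += 1
        self.link_count[root][linked] += 1

    def chain_from(self, root, threshold=1):
        """Greedily follow the strongest links from root to build a chain."""
        chain, seen = [root], {root}
        current = root
        while self.link_count[current]:
            nxt, count = max(self.link_count[current].items(),
                             key=lambda kv: kv[1])
            if count < threshold or nxt in seen:
                break
            chain.append(nxt)
            seen.add(nxt)
            current = nxt
        return chain
```

Presenting the fragments in any order still converges on the most frequently reinforced chain, which matches the paper's claim that data can arrive randomly.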
A Repeated Signal Difference for Recognising Patterns
This paper describes a new mechanism that might help with defining pattern
sequences: it can produce an upper bound on the ensemble value, which
persistently oscillates with the actual values produced from each
pattern. With every firing event, a node also receives an on/off feedback
switch. If the node fires, then it sends a feedback result depending on the
input signal strength. If the input signal is positive or larger, it can store
an 'on' switch feedback for the next iteration. If the signal is negative or
smaller, it can store an 'off' switch feedback for the next iteration. If the
node does not fire, then it does not affect the current feedback situation and
receives the switch command produced by the last active pattern event for the
same neuron. The upper bound therefore also represents the largest or most
enclosing pattern set and the lower value is for the actual set of firing
patterns. If the pattern sequence repeats, it will oscillate between the two
values, allowing them to be recognised and measured more easily, over time.
Tests show that changing the sequence ordering produces different value sets,
which can also be measured.
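As a rough illustration of the switch rules described above, the sketch below models one node plus a simple ensemble count. The zero threshold and the counting-based ensemble values are assumptions added for the example; the paper specifies only the on/off feedback behaviour.

```python
class FeedbackNode:
    """Sketch of the on/off feedback switch for a single node."""

    def __init__(self, threshold=0.0):
        self.threshold = threshold
        self.switch = False  # feedback carried over to the next iteration

    def step(self, signal, fires):
        """Process one pattern event for this node."""
        if fires:
            # A firing node stores feedback based on the input signal:
            # positive (or larger) -> 'on', negative (or smaller) -> 'off'.
            self.switch = signal > self.threshold
        # A non-firing node leaves the switch untouched, keeping the
        # command from the last active pattern event for the same neuron.
        return self.switch

def ensemble_values(nodes, signals, firing):
    """Return (upper, lower) for one pattern event: upper counts all 'on'
    switches (the most enclosing pattern set), lower counts the nodes
    that actually fired in this event."""
    for node, signal, fires in zip(nodes, signals, firing):
        node.step(signal, fires)
    upper = sum(node.switch for node in nodes)
    lower = sum(firing)
    return upper, lower
```

Because non-firing nodes retain their last switch, the upper value can enclose the firing set of any single event, and a repeating sequence makes the two values oscillate in a measurable way.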
A Brain-like Cognitive Process with Shared Methods
This paper describes a new entropy-style of equation that may be useful in a
general sense, but can be applied to a cognitive model with related processes.
The model is based on the human brain, with automatic and distributed pattern
activity. Methods for carrying out the different processes are suggested. The
main purpose of this paper is to reaffirm earlier research on different
knowledge-based and experience-based clustering techniques. The overall
architecture has stayed essentially the same and so it is the localised
processes or smaller details that have been updated. For example, a counting
mechanism is used slightly differently, to measure a level of 'cohesion'
instead of a 'correct' classification, over pattern instances. The introduction
of features has further enhanced the architecture and the new entropy-style
equation is proposed. While an earlier paper defined three levels of functional
requirement, this paper re-defines the levels in a more human vernacular, with
higher-level goals described in terms of action-result pairs.
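The abstract does not define the cohesion measure, so the sketch below is purely one illustrative reading: counting within-cluster agreement over pattern instances, as opposed to counting 'correct' classifications against externally supplied labels. Both function definitions here are assumptions, not the paper's equations.

```python
from collections import Counter

def classification_accuracy(assigned, true_labels):
    """A 'correct' classification count: fraction of pattern instances
    whose assigned label matches a known true label."""
    return sum(a == t for a, t in zip(assigned, true_labels)) / len(assigned)

def cohesion(cluster_values):
    """An assumed cohesion measure: how strongly the pattern instances
    in a cluster agree with each other (fraction in the majority value),
    requiring no externally supplied 'correct' labels."""
    counts = Counter(cluster_values)
    return counts.most_common(1)[0][1] / len(cluster_values)
```

The contrast is the point: cohesion can be computed from the pattern instances alone, which fits a model with automatic, distributed pattern activity.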