A cognitive exploration of the “non-visual” nature of geometric proofs
Why are Geometric Proofs (Usually) “Non-Visual”? We asked this question as a way to explore the similarities and differences between diagrams and text (visual thinking versus language thinking). Traditional text-based proofs are considered by many to be more rigorous than diagrams alone. In this paper we focus on human perceptual-cognitive characteristics that may encourage textual modes for proofs because of the ergonomic affordances of text relative to diagrams. We suggest that visual-spatial perception of physical objects, where an object is perceived with greater acuity through foveal vision than through peripheral vision, is similar to attention navigating a conceptual visual-spatial structure. We suggest that attention has foveal-like and peripheral-like characteristics, and that textual modes appeal to what we refer to here as foveal-focal attention, an extension of prior work on focused attention.
Formal functional testing of graphical user interfaces.
SIGLE. Available from British Library Document Supply Centre, DSC:DX177960 / BLDSC - British Library Document Supply Centre, United Kingdom.
Formalizing graphical notations
The thesis describes research into graphical notations for software engineering, with a principal interest in ways of formalizing them. The research seeks to provide a theoretical basis that will help in designing both notations and the software tools that process them.
The work starts from a survey of literature on notation, followed by a review of techniques for formal description and for computational handling of notations. The survey concentrates on collecting views of the benefits and the problems attending notation use in software development; the review covers picture description languages, grammars and tools such as generic editors and visual programming environments. The main problem of notation is found to be a lack of any coherent, rigorous description methods. The current approaches to this problem are analysed as lacking in consensus on syntax specification and also lacking a clear focus on a defined concept of notated expression.
To address these deficiencies, the thesis embarks upon an exploration of semiotic, linguistic and logical theory; this culminates in a proposed formalization of semiosis in notations, using categorial model theory as a mathematical foundation. An argument about the structure of sign systems leads to an analysis of notation into a layered system of tractable theories, spanning the gap between expressive pictorial medium and subject domain. This notion of 'tectonic' theory aims to treat both diagrams and formulae together.
The research gives details of how syntactic structure can be sketched in a mathematical sense, with examples applying to software development diagrams, offering a new solution to the problem of notation specification. Based on these methods, the thesis discusses directions for resolving the harder problems of supporting notation design, processing and computer-aided generic editing. A number of future research areas are thereby opened up. For practical trial of the ideas, the work proceeds to the development and partial implementation of a system to aid the design of notations and editors. Finally the thesis is evaluated as a contribution to theory in an area which has not attracted a standard approach.
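A toy sketch may make the notation-specification problem concrete: a diagram's syntax can be stated as a set of allowed edge patterns over node kinds, and well-formedness checked against that spec. This is only an illustration of the general idea (the rule set and all names here are invented for the example), not the thesis's categorial machinery:

```python
# A notation's syntax as a set of allowed edge patterns: an edge is well
# formed only if the kinds of its source and target nodes match a rule.
# The "use-case diagram" rules below are hypothetical examples.
SPEC = {("actor", "usecase"), ("usecase", "usecase")}

def well_formed(nodes, edges, spec):
    """nodes: name -> kind; edges: (source, target) name pairs."""
    return all((nodes[src], nodes[dst]) in spec for src, dst in edges)

nodes = {"customer": "actor", "buy": "usecase", "pay": "usecase"}
edges = [("customer", "buy"), ("buy", "pay")]
print(well_formed(nodes, edges, SPEC))  # True: every edge matches a rule
```

A generic editor built on such a spec can reject ill-formed diagrams as the user draws them, which is one motivation for rigorous syntax description.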
Information enforcement in learning with graphics: improving syllogistic reasoning skills
This thesis is an investigation into the factors that contribute to good choices among graphical systems used in teaching, and the feasibility of implementing teaching software that uses this knowledge. The thesis describes a mathematical metric derived from a cognitive theory of human diagram processing. The theory characterises differences among representations by their ability to express information, and provides the factors and relationships needed to build the metric. It says that good representations are easily processed because they are more vivid, more tractable and less expressive than poor representations.
The metric is applied to abstract systems for teaching and learning syllogistic reasoning: TARSKI'S WORLD, EULER CIRCLES, VENN DIAGRAMS and CARROLL'S GAME OF LOGIC. A rank ordering reflects the value of each system predicted by the theory and the metric. The theory, the metric and the systems are then tested in empirical studies. Five studies, involving sixty-eight learners, examined the benefit of software based on these abstract systems. The studies showed the theory correctly predicted learners' success with the circle systems and poorer performance with TARSKI'S WORLD. The metric showed small but clear differences in expressivity between the circle systems. Differences between the results of learners using the circle systems contradicted the predictions of the metric.
Learners with mathematical training were better equipped and more successful at learning syllogistic reasoning with the systems. Performance of learners without mathematical training declined after using the software systems. Diagrams drawn by learners, together with video footage collected during problem solving, led to a catalogue of errors, misconceptions and some helpful strategies for learning from graphical systems.
A cognitive style test investigated the poor performance of non-mathematically trained learners. Learners with mathematics training showed serialist and versatile learning styles, while learners without this training showed a holist learning style. This is consistent with the hypothesis that non-mathematically trained learners emphasise the use of semantic cues during learning and problem solving. A card-sorting task investigated learners' preferences for parts of the graphical lexicon used in the diagram systems; learners' preference for the EULER lexicon made that system's poor results in earlier studies harder to explain. Video footage of learners using the systems in the final study illustrated useful learning strategies and improved performance with EULER while individual instruction was available. Further work describes a preliminary design for an adaptive syllogism tutor and other related work.
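The circle systems work because syllogistic validity is a matter of set relations, which Euler circles and Venn diagrams make visible. The same semantics can be checked mechanically; the following Python sketch (illustrative only, not the thesis's teaching software) validates a syllogism by brute force over assignments of the three terms on a small universe:

```python
from itertools import combinations, product

def subsets(universe):
    """All subsets of the universe, as frozensets."""
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def is_valid(premises, conclusion, n=3):
    """A syllogism is valid iff no assignment of the terms S, M, P makes
    every premise true and the conclusion false (modern set semantics).
    A 3-element universe suffices here: a countermodel needs at most one
    witness per existential statement in a two-premise syllogism."""
    subs = subsets(range(n))
    return not any(
        all(p(S, M, P) for p in premises) and not conclusion(S, M, P)
        for S, M, P in product(subs, repeat=3))

# Barbara: All M are P; All S are M; therefore All S are P -- valid.
barbara = is_valid([lambda S, M, P: M <= P,
                    lambda S, M, P: S <= M],
                   lambda S, M, P: S <= P)

# Undistributed middle: All P are M; All S are M; so All S are P -- invalid.
fallacy = is_valid([lambda S, M, P: P <= M,
                    lambda S, M, P: S <= M],
                   lambda S, M, P: S <= P)
print(barbara, fallacy)  # True False
```

Each inhabited region of a Venn diagram corresponds to one element of the universe here, which is why exhaustive enumeration over a tiny universe mirrors what the circle diagrams show a learner.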
Toward a formal theory for computing machines made out of whatever physics offers: extended version
Approaching limitations of digital computing technologies have spurred
research in neuromorphic and other unconventional approaches to computing. Here
we argue that if we want to systematically engineer computing systems that are
based on unconventional physical effects, we need guidance from a formal theory
that is different from the symbolic-algorithmic theory of today's computer
science textbooks. We propose a general strategy for developing such a theory,
and within that general view, a specific approach that we call "fluent
computing". In contrast to Turing, who modeled computing processes from a
top-down perspective as symbolic reasoning, we adopt the scientific paradigm of
physics and model physical computing systems bottom-up by formalizing what can
ultimately be measured in any physical substrate. This leads to an
understanding of computing as the structuring of processes, while classical
models of computing systems describe the processing of structures.
Comment: 76 pages. This is an extended version of a perspective article with the same title that will appear in Nature Communications soon after this manuscript goes public on arXiv.
Design and integrity of deterministic system architectures.
Architectures represented by system construction 'building block' components and interrelationships provide the structural form. This thesis addresses processes, procedures and methods that support system design synthesis, and specifically the determination of the integrity of candidate architectural structures. Particular emphasis is given to the structural representation of system architectures, their consistency and functional quantification. It is a design imperative that a hierarchically decomposed structure maintains compatibility and consistency between the functional and realisation solutions. Complex systems are normally simplified by the use of hierarchical decomposition, so that lower-level components are precisely defined and simpler than higher-level components. To enable such systems to be reconstructed from their components, the hierarchical construction must provide vertical intra-relationship consistency, horizontal interrelationship consistency, and inter-component functional consistency.
Firstly, a modified process design model is proposed that incorporates the generic structural representation of system architectures. Secondly, a system architecture design knowledge domain is proposed that enables viewpoint evaluations to be aggregated into a coherent set of domains that are both necessary and sufficient to determine the integrity of system architectures. Thirdly, four methods of structural analysis are proposed to assure the integrity of the architecture. The first enables the structural compatibility between the 'building blocks' that provide the emergent functional properties and implementation solution properties to be determined. The second enables the compatibility of the functional causality structure and the implementation causality structure to be determined. The third method provides a graphical representation of architectural structures. The fourth method uses the graphical form of structural representation to provide a technique that enables quantitative estimation of the performance of emergent properties for large-scale or complex architectural structures.
These methods have been combined into a procedure of formal design. This is a design process that, if rigorously executed, meets the requirements for reconstructability.
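One of the consistency notions above, horizontal interrelationship consistency, can be illustrated with a small sketch: a link between components belonging to different building blocks should be mirrored by a link between the enclosing blocks. The data model and all names here are assumptions made for the illustration, not the thesis's own method:

```python
def horizontally_consistent(parent, links):
    """parent: component -> enclosing building block; links: set of
    (a, b) pairs. Every link that crosses a block boundary must be
    mirrored by a link between the enclosing blocks."""
    for a, b in links:
        pa, pb = parent.get(a), parent.get(b)
        if pa is None or pb is None or pa == pb:
            continue  # a top-level link, or internal to one block
        if (pa, pb) not in links and (pb, pa) not in links:
            return False
    return True

parent = {"sensor": "acquisition", "filter": "acquisition",
          "logger": "storage"}
links = {("sensor", "filter"),        # internal to 'acquisition'
         ("filter", "logger"),        # crosses a block boundary...
         ("acquisition", "storage")}  # ...and is mirrored between blocks
print(horizontally_consistent(parent, links))  # True
```

Deleting the ("acquisition", "storage") link makes the check fail, since the lower-level link between "filter" and "logger" would then have no counterpart at the enclosing level.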
Possible models diagrams: a new approach to teaching propositional logic.
Master's Degree. University of KwaZulu-Natal, Pietermaritzburg. Abstract available in PDF. Quality of the scanned PDF has been compromised owing to the poor condition of the original document.