
    Toward Cyborg PPGIS: exploring socio-technical requirements for the use of web-based PPGIS in two municipal planning cases, Stockholm region, Sweden

    Web-based Public Participation Geographic Information Systems (PPGIS) are increasingly used for surveying place values and informing municipal planning in contexts of urban densification. However, research is lagging behind the rapid deployment of PPGIS applications. Some of the main opportunities and challenges for the uptake and implementation of web-based PPGIS are derived from a literature review and two case studies dealing with municipal planning for urban densification in the Stockholm region, Sweden. A simple clustering analysis identified three interconnected themes that together determine the performance of PPGIS: (i) tool design and affordances; (ii) organisational capacity; and (iii) governance. The results of the case studies augment the existing literature regarding the connections between the different socio-technical dimensions for the design, implementation and evaluation of PPGIS applications in municipal planning. A cyborg approach to PPGIS is then proposed to improve the theoretical basis for addressing these dimensions together.

    Classification of complex two-dimensional images in a parallel distributed processing architecture

    Neural network analysis is proposed and evaluated as a method of analysis of marine biological data, specifically images of plankton specimens. The quantification of the various plankton species is of great scientific importance, from modelling global climatic change to predicting the economic effects of toxic red tides. A preliminary evaluation of the neural network technique is made by the development of a back-propagation system that successfully learns to distinguish between two co-occurring morphologically similar species from the North Atlantic Ocean, namely Ceratium arcticum and C. longipes. Various techniques are developed to handle the indeterminately labelled source data, pre-process the images and successfully train the networks. An analysis of the network solutions is made, and some consideration is given to how the system might be extended.
    Plymouth Marine Laboratory
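The species discrimination above rests on a standard one-hidden-layer back-propagation network. The following is a minimal sketch of that technique, not the thesis's actual system: the synthetic two-cluster "features" standing in for pre-processed plankton images, the architecture, and the learning rate are all illustrative assumptions.

```python
# Minimal two-class back-propagation network (sketch).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters standing in for the
# pre-processed image features of the two Ceratium species.
X = np.vstack([rng.normal(-1, 0.5, (50, 4)), rng.normal(1, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

# One hidden layer, sigmoid activations throughout (assumed sizes).
W1 = rng.normal(0, 0.5, (4, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                     # batch gradient descent
    h = sig(X @ W1 + b1)                  # forward pass
    p = sig(h @ W2 + b2)
    d2 = (p - y) * p * (1 - p)            # backward pass (squared error)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d2 / len(X); b2 -= 0.5 * d2.mean(0)
    W1 -= 0.5 * X.T @ d1 / len(X); b1 -= 0.5 * d1.mean(0)

accuracy = ((p > 0.5) == y).mean()        # fraction correctly classified
```

On data this separable the network converges quickly; the hard parts reported in the thesis (indeterminate labels, image pre-processing) are precisely what this toy setup leaves out.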

    Data-driven neural mass modelling

    The brain is a complex organ whose activity spans multiple scales, both spatial and temporal. The computational unit of the brain is thought to be the neurone. At the microscopic level, neurones communicate via action potentials. These may be observed experimentally by means of precise techniques that work with a small number of these cells and their interactions, and that can be modelled mathematically in a variety of ways. Other techniques consider the averaged activity of large groups of neurones in the mesoscale, or cortical columns; theoretical models of these signals also abound. The problem of relating the microscopic scale to the mesoscopic is not trivial. Analytical derivations of mesoscopic models are based on assumptions that are not always justified. Also, traditionally there has been a separation between the clinically oriented analysts who process neural signals for medical purposes and the theoretical modelling community. This Thesis aims to build bridges both between the microscopic and mesoscopic scales of brain activity, and between the experimental and theoretical angles of its study. This is achieved via the unscented Kalman filter (UKF), which allows us to combine knowledge from different sources (microscopic/mesoscopic and experimental/theoretical). The outcome is a better understanding of the system than each of the sources of information could provide separately. The Thesis is organised as follows. Chapter 1 is a brief reflection on the current methodology in Science and its underlying motivations. This is followed by chapters 2 to 4, which introduce and contextualise the concepts discussed in the remainder of the work. Chapter 5 tackles the interrelationship of the microscopic and mesoscopic scales. Although efforts have been made to derive mesoscopic equations from models of microscopic networks, they are based on assumptions that may not always hold. 
We use the UKF to assimilate the output of microscopic networks into a mesoscopic model and study a variety of dynamical situations. Our results show that using the Kalman filter compensates for the loss of information that is common in analytical derivations. Chapters 6 and 7 address the combination of experimental data with neural mass models. More specifically, we extend Jansen and Rit's model of a cortical column with a model of the head, which allows us to use electroencephalography (EEG) data. With this, we estimate the state of the system and a relevant parameter of choice. In chapter 6 we use in silico data to test the UKF under a variety of dynamical conditions, comparing simulated intracranial data with simulated EEG. Extracranial estimation is always superior in speed and quality to intracortical estimation, even though intracortical electrodes are closer to the source of activity than extracranial electrodes. We suggest that this is due to the more complete picture of the cortex that is visible with the set of extracranial electrodes. Chapter 7 feeds experimental EEG data of an epileptic patient into Jansen and Rit's model; the goal is to estimate a parameter that governs the dynamical behaviour of the system, again with the UKF. The estimation of the state closely follows the experimental data, while the parameter shows sensitivity to the changes in brain regimes, especially seizures. These results show promise for using data assimilation to address some shortcomings of brain modelling techniques. On the one hand, the mutual influence of neural structures at the microscopic and the mesoscopic scales may become better characterised, by means of filtering approaches that bypass analytical limitations. On the other hand, fusing experimental EEG data with mathematical models of the brain may enable us to determine the underlying dynamics of observed physiological signals, and at the same time to improve our models with patient-specific information. 
The potential of these enhanced algorithms spans a wide range of brain-related applications, from brain-computer interfaces to uses in personalised medicine such as the early diagnosis of neurodegenerative diseases, the prediction of seizures, and the monitoring of post-ischaemic or post-traumatic rehabilitation.
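The cortical-column model that chapters 6 and 7 extend is Jansen and Rit's neural mass model. The following sketch simulates it with the standard parameter values from the literature; the forward-Euler integration and the constant external input are simplifying assumptions for illustration, not the thesis's exact setup.

```python
# Sketch of the Jansen-Rit neural mass model of a cortical column.
import numpy as np

A, B = 3.25, 22.0            # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0           # inverse synaptic time constants (1/s)
C = 135.0                    # connectivity constant
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, r, v0 = 2.5, 0.56, 6.0   # firing-rate sigmoid parameters

def sigm(v):
    """Potential-to-firing-rate sigmoid."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

dt, steps = 1e-3, 2000       # 2 s of activity, forward Euler (assumed)
y = np.zeros(6)              # (y0, y1, y2) and their derivatives
p = 120.0                    # constant external input (assumed)
eeg = np.empty(steps)

for t in range(steps):
    y0, y1, y2, y3, y4, y5 = y
    dy = np.array([
        y3, y4, y5,
        A * a * sigm(y1 - y2) - 2 * a * y3 - a**2 * y0,
        A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a**2 * y1,
        B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b**2 * y2,
    ])
    y = y + dt * dy
    eeg[t] = y1 - y2         # the model's EEG-like observable
```

In the thesis this observable is passed through a head model and fused with real or simulated EEG via the UKF; here it is simply recorded.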

    A formal framework for the specification of interactive systems

    We are primarily concerned with interactive systems whose behaviour is highly reliant on end user activity. A framework for describing and synthesising such systems is developed. This consists of a functional description of the capabilities of a system together with a means of expressing its desired 'usability'. Previous work in this area has concentrated on capturing 'usability properties' in discrete mathematical models. We propose notations for describing systems in a 'requirements' style and a 'specification' style. The requirements style is based on a simple temporal logic and the specification style is based on Lamport's Temporal Logic of Actions (TLA) [74]. System functionality is specified as a collection of 'reactions', the temporal composition of which defines the behaviour of the system. By observing and analysing interactions it is possible to determine how 'well' a user performs a given task. We argue that a 'usable' system is one that encourages users to perform their tasks efficiently (i.e. to consistently perform their tasks well); hence a system in which users perform their tasks well in a consistent manner is likely to be a usable system. The use of a given functionality linked with different user interfaces then gives a means by which interfaces (and other aspects) can be compared and suggests how they might be harnessed to bias system use so as to encourage the desired user behaviour. Normalising across different users and different tasks moves us away from the discrete nature of reactions, and hence, to describe the use of a system comfortably, we employ probabilistic rather than discrete mathematics. We illustrate the framework with worked examples and propose an agenda for further work.
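The idea of specifying a system as 'reactions' whose temporal composition defines its behaviour can be illustrated with a small executable analogue. This is Python, not TLA, and the guarded-reaction framing, the names, and the toy one-line editor are illustrative assumptions, not the thesis's notation.

```python
# A system as a collection of guarded "reactions": each has a guard
# (when it is enabled) and an action (the state change it performs).
from dataclasses import dataclass
from typing import Callable

State = dict

@dataclass
class Reaction:
    name: str
    guard: Callable[[State], bool]
    action: Callable[[State], State]

def run(state: State, reactions: list, inputs: list) -> State:
    """For each user input, fire the first enabled reaction."""
    for key in inputs:
        state = dict(state, last_input=key)
        for r in reactions:
            if r.guard(state):
                state = r.action(state)
                break
    return state

# Toy interactive system: a one-line text editor.
reactions = [
    Reaction("type",
             lambda s: s["last_input"] != "DEL",
             lambda s: dict(s, text=s["text"] + s["last_input"])),
    Reaction("delete",
             lambda s: s["last_input"] == "DEL" and s["text"] != "",
             lambda s: dict(s, text=s["text"][:-1])),
]

final = run({"text": ""}, reactions, ["a", "b", "DEL", "c"])
# final["text"] == "ac"
```

Logging which reactions fire, and in what order, is one concrete way the observation-and-analysis of task performance described above could be instrumented.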

    Embodied reflective practice: the embodied nature of reflection-in-action

    The purpose of this thesis is to examine the applicability of aspects of Schön’s (1983) theories of reflection-in-action in relation to visual art practice. Schön’s (1983) theories, whilst written with design disciplines in mind, do not extend to consider the appropriateness of their use in visual art practice. Scrivener (2000: 10) draws the distinction that whilst Schön’s (1983) use of scientific language in reflection-in-action is considered applicable for problem-solving projects in design, aspects of it are problematic for creative production research projects, and recommends focusing reflection on the underlying experience of creative production. This thesis proposes that this and other issues, such as the emphasis on problem solving, and particularly a reliance on a conversational metaphor, are likewise problematic for visual art practice. This thesis therefore moves to examine what is distinct about the application of reflective methods in visual art practice, in relation to design and research in the arts, through a series of text-based and documentary case studies. Analysis of the case studies suggests that there is an emphasis on embodiment essential to visual art processes, which is experiential in nature rather than problem-solving. A thorough examination of recent theories of embodied mind, which provide empirical evidence from a broad range of knowledge fields for the pervasive role of embodiment in shaping human experience, is presented. The primary research method is a review of two existing sets of theories and a synthesis of aspects of them in an original context, a process offered as an original contribution to knowledge. The context in question is the assessment of the applicability of the resulting synthesis to visual art practice, a domain for which neither theory was written. 
Knowing-in-action (Schön, 1983) describes the tacit knowing implicit in skillful performance when practice is going well; reflection-in-action (Schön, 1983) takes over, and describes the processes cycled through, only when problems are encountered in practice. Through an analysis of theories of embodied mind, and the documentary case studies, the conclusion is drawn that in addition to these descriptions there is a rich layer of non-verbal embodied experience shaping action, conceptual meaning and verbal articulations of practice. This thesis therefore suggests modifications to theories of reflective practice in the visual arts, by incorporating theories of embodied mind in the development of additional reflective methods to supplement Schön’s (1983) theories. Two methods are proposed as worthy of further study. The first draws on Mark Johnson’s (1987) theory of metaphorical projection, which is presented as a means of mapping aspects of visual arts practitioners' verbal articulations of practice back onto source domains in their embodied experiences of practice. The second explores a recommendation from within theories of embodied mind (Varela, Thompson and Rosch, 1993: 27) that mindfulness training could help develop a mindful, open-ended reflection. Taken together, this thesis proposes that an Embodied Reflective Practice could be developed to the benefit of visual art practitioners.

    A framework for the design of usable electronic text

    This thesis examines the human issues underlying the design and usability of electronic text systems. In so doing it develops a framework for the conceptualisation of these issues that aims to guide designers of electronic texts in their attempts to produce usable systems. The thesis commences with a review of the traditional human factors literature on electronic text according to three basic themes: its concern with perceptual, manipulatory and structural issues. From this examination it is concluded that shortcomings in translating this work into design result from the adoption of overly narrow uni-disciplinary views of reading taken from cognitive psychology and information science which are inappropriate to serve the needs of electronic text designers. In an attempt to provide a more relevant description of the reading process a series of studies examining readers and their views as well as uses of texts is reported. In the first, a repertory grid based investigation revealed that all texts can be described in reader-relevant terms according to three criteria: why a text is read, what a text contains and how it is read. These criteria then form the basis of two investigations of reader-text interaction using academic journals and user manuals. The results of these studies highlighted the need to consider readers' models of a document's structure in discussing text usability. Subsequent experimental work on readers' models of academic articles demonstrated not only that such models are important aspects of reader-text interaction but that data of this form could usefully be employed in the design of an electronic text system. The proposed framework provides a broad, qualitative model of the important issues for designers to consider when developing a product. It consists of four interactive elements that focus attention on aspects of reading that have been identified as central to usability. 
Simple tests of the utility and validity of the framework are reported, and it is shown that the framework both supports reasoned analysis and subsequent prediction of reader behaviour as well as providing a parsimonious account of readers' verbal utterances while reading. The thesis concludes with an analysis of the likely uses of such a framework and the potential for electronic text systems in an increasingly information-hungry world.

    Knowledge engineering for mental-health risk assessment and decision support

    Mental-health risk assessment practice in the UK is mainly paper-based, with little standardisation in the tools that are used across the Services. The tools that are available tend to rely on minimal sets of items and unsophisticated scoring methods to identify at-risk individuals. This means the reasoning by which an outcome has been determined remains uncertain. Consequently, there is little provision for: including the patient as an active party in the assessment process, identifying underlying causes of risk, and effecting shared decision-making. This thesis develops a tool-chain for the formulation and deployment of a computerised clinical decision support system for mental-health risk assessment. The resultant tool, GRiST, will be based on consensual domain expert knowledge that will be validated as part of the research, and will incorporate a proven psychological model of classification for risk computation. GRiST will have an ambitious remit of being a platform that can be used over the Internet, by both the clinician and the layperson, in multiple settings, and in the assessment of patients with varying demographics. Flexibility will therefore be a guiding principle in the development of the platform, to the extent that GRiST will present an assessment environment that is tailored to the circumstances in which it finds itself. XML and XSLT will be the key technologies that help deliver this flexibility.
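The general shape of computing a risk score over a hierarchical tree of expert knowledge can be sketched as follows. The node names, weights, and the weighted-average aggregation are illustrative assumptions for the sketch, not GRiST's actual expert-elicited structure or psychological classification model.

```python
# Toy hierarchical risk-knowledge tree: leaf answers (scaled to [0, 1])
# propagate upward as weighted averages.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    weight: float = 1.0               # relative importance to the parent
    value: Optional[float] = None     # leaf answer in [0, 1], if a leaf
    children: List["Node"] = field(default_factory=list)

def risk(node: Node) -> float:
    """Aggregate leaf answers bottom-up as a weighted average."""
    if not node.children:
        return node.value if node.value is not None else 0.0
    total = sum(c.weight for c in node.children)
    return sum(c.weight * risk(c) for c in node.children) / total

# Hypothetical fragment of an assessment (names and values invented).
suicide = Node("suicide risk", children=[
    Node("current intention", weight=3.0, value=0.8),
    Node("feelings & emotions", weight=1.0, children=[
        Node("hopelessness", value=0.6),
        Node("anger", value=0.2),
    ]),
])
score = risk(suicide)   # (3 * 0.8 + 1 * ((0.6 + 0.2) / 2)) / 4 = 0.7
```

Because the tree is explicit data rather than an opaque score, the reasoning behind an outcome can be inspected at every level, which is precisely the transparency the paper-based tools described above lack.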

    The Usefulness of Funds Flow Statements: An Empirical Study of Hong Kong Banks' Loan Officers' Use of Published Company Accounts

    Funds flow statements were part of the published accounts of most companies in most jurisdictions in the last two decades. In the USA and a few other countries, they have been replaced by cash flow statements. Before other countries, including the UK, follow the US lead, it is important to gather and assess evidence on the usefulness of the funds statement to see if the arguments for its replacement by the cash flow statement are well founded. In essence, the usefulness of the funds flow statement is a matter of its ability to enable its readers to make better, or possibly faster, judgments about a firm's changes in financial position than they would make in the absence of that statement. The research reported in this thesis addresses the usefulness of the funds statement to a group of users especially concerned with changes in the financial position of companies with whom members of the group do business. Banks employ loan officers and credit analysts to vet applications for new loans, and this group of people is therefore likely to appreciate information useful to them in assessing the ability of applicants to meet their actual and prospective financial obligations. Such a group based in Hong Kong would be exposed to accounts prepared under all kinds of different national formats and should not be unduly fixated on the format of any one nation. Such assumptions were the basis of the research. A factorial ANOVA research design was used with 116 Hong Kong bank loan officers in 15 sets to see if the provision of funds flow statements and cash flow statements in a variety of formats improved their speed or accuracy in answering simple calculation-based or judgment-based questions concerning the accounts. Order effects were controlled by shuffling question order. Accounts difficulty effects were controlled by providing the accounts in two matched sets of equivalent processing difficulty. 
Subject selection effects were controlled through random assignments of subjects to accounts sets. It was found that funds statements marginally improved accuracy but greatly increased processing time. Cash flow statements performed no better than funds flow statements in either respect. An information load explanation is discussed for these results.
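Two of the experimental controls described above, random assignment of subjects to the matched account sets and per-subject shuffling of question order, can be sketched directly. The subject labels, set names, and question labels are illustrative assumptions; only the count of 116 officers comes from the study.

```python
# Sketch of the experimental controls: random set assignment and
# per-subject question-order shuffling.
import random

random.seed(42)  # fixed seed so the sketch is reproducible
subjects = [f"officer_{i}" for i in range(1, 117)]      # 116 loan officers
questions = ["calc_1", "calc_2", "judge_1", "judge_2"]  # invented labels

assignments = {}
for s in subjects:
    assignments[s] = {
        # the two account sets were matched for processing difficulty
        "account_set": random.choice(["set_A", "set_B"]),
        # shuffle question order to control for order effects
        "question_order": random.sample(questions, k=len(questions)),
    }
```

A fully balanced design would constrain the set assignment (e.g. 58 subjects per set) rather than draw it independently per subject; the simple `random.choice` here only approximates that.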