    Visualisation of heterogeneous data with the generalised generative topographic mapping

    Heterogeneous and incomplete datasets are common in many real-world visualisation applications. The probabilistic nature of the Generative Topographic Mapping (GTM), which was originally developed for complete continuous data, can be extended to model heterogeneous (i.e. containing both continuous and discrete values) and missing data. This paper describes and assesses the resulting model on both synthetic and real-world heterogeneous data with missing values.
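To illustrate how a type-specific noise model can also accommodate missing values, the log-likelihood of an observation can be computed only over its observed dimensions, with a Gaussian term for continuous entries and a Bernoulli term for binary ones. The following is a minimal sketch under those assumptions (function and parameter names are hypothetical, not the paper's implementation):

```python
import numpy as np

def mixed_loglik(x, mu_cont, sigma, p_disc, cont_idx, disc_idx):
    """Log-likelihood of one mixed-type observation under a single
    latent-grid component, marginalising over missing (NaN) entries.

    x        : 1-D data vector (continuous and binary entries, NaN = missing)
    mu_cont  : component means for the continuous dimensions
    sigma    : shared isotropic std-dev for the continuous noise model
    p_disc   : component Bernoulli parameters for the binary dimensions
    """
    ll = 0.0
    for j, m in zip(cont_idx, mu_cont):
        if not np.isnan(x[j]):                       # skip missing values
            ll += -0.5 * np.log(2 * np.pi * sigma**2) \
                  - (x[j] - m)**2 / (2 * sigma**2)   # Gaussian term
    for j, p in zip(disc_idx, p_disc):
        if not np.isnan(x[j]):
            ll += x[j] * np.log(p) + (1 - x[j]) * np.log(1 - p)  # Bernoulli term
    return ll

# Toy example: dims 0-1 continuous, dim 2 binary, dim 1 missing.
x = np.array([0.5, np.nan, 1.0])
print(mixed_loglik(x, mu_cont=[0.0, 0.0], sigma=1.0,
                   p_disc=[0.7], cont_idx=[0, 1], disc_idx=[2]))
```

Because the missing dimension simply contributes nothing to the sum, no imputation step is needed before the E-step.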

    Visualisation of heterogeneous data with simultaneous feature saliency using Generalised Generative Topographic Mapping

    Most machine-learning algorithms are designed for datasets with features of a single type, whereas very little attention has been given to datasets with mixed-type features. We recently proposed a model to handle mixed types with a probabilistic latent variable formalism. This model, called the generalised generative topographic mapping (GGTM), describes the data by type-specific distributions that are conditionally independent given the latent space. It has often been observed that visualisations of high-dimensional datasets can be poor in the presence of noisy features. In this paper we therefore propose to extend the GGTM to estimate feature saliency values (GGTMFS) as an integrated part of the parameter learning process with an expectation-maximisation (EM) algorithm. The efficacy of the proposed GGTMFS model is demonstrated on both synthetic and real datasets.
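One common way to fold feature saliency into EM, and a plausible reading of the approach described above, is to model each feature as a mixture of a "relevant" component-specific density and a common "irrelevant" background density; the saliency is the mixture weight, re-estimated each iteration from posterior responsibilities. A simplified, hypothetical sketch (not the paper's GGTMFS equations):

```python
import numpy as np

def saliency_em_step(rel_lik, irr_lik, rho):
    """One EM update of per-feature saliency values.

    rel_lik : (N, D) likelihood of each value under the feature's
              component-specific (relevant) density
    irr_lik : (N, D) likelihood under a common background (irrelevant) density
    rho     : (D,) current saliency (probability that each feature is relevant)

    Returns posterior responsibilities and updated saliencies.
    """
    num = rho * rel_lik                    # relevant-branch joint probability
    den = num + (1 - rho) * irr_lik        # total mixture likelihood
    resp = num / den                       # E-step: P(feature relevant | value)
    return resp, resp.mean(axis=0)         # M-step: average responsibility

rng = np.random.default_rng(0)
rel = rng.uniform(0.5, 1.0, size=(100, 3))   # relevant density fits well
irr = rng.uniform(0.0, 0.5, size=(100, 3))   # background fits poorly
resp, rho_new = saliency_em_step(rel, irr, np.full(3, 0.5))
print(rho_new)  # features better explained by the relevant density gain saliency
```

Features whose values are consistently better explained by the component-specific density drift towards saliency 1, while noisy features drift towards 0, which is what lets the projection discount them.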

    Visualisation of bioinformatics datasets

    Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most existing exploratory approaches cannot analyse these datasets because of the large number of molecules and the high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods in which log-transformations are used at certain steps of the expectation-maximisation (EM)-based parameter learning process to make them tractable for high-dimensional datasets. We demonstrate these variants on both synthetic data and an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisations, by modifying the projection map based on feature relevance, but also helps users assess the significance of each feature. Another problem that has received little attention in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, using an appropriate noise model for each type of data, in order to visualise mixed-type data in a single plot. We call this model the generalised GTM (GGTM).
    We also propose to extend the GGTM model to estimate feature saliencies while training the visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these models on both synthetic and real datasets. We evaluate visualisation quality using a distance distortion measure and the rank-based measures of trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use KL divergence and nearest-neighbour classification error to assess the separation between classes. We demonstrate the efficacy of these models on both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
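The log-transformation idea mentioned above corresponds to computing the EM responsibilities in log space: in high dimensions each component log-likelihood is a large negative number, so exponentiating it directly underflows to zero. The standard log-sum-exp shift avoids this; a minimal sketch (function and variable names are hypothetical, not the thesis code):

```python
import numpy as np

def responsibilities(log_lik):
    """GTM-style E-step responsibilities computed in log space.

    log_lik : (K, N) log-likelihood of each of N data points under each
              of K latent-grid components. Shifting by the per-point
              maximum before exponentiating keeps normalisation stable.
    """
    shift = log_lik.max(axis=0, keepdims=True)   # per-point maximum
    w = np.exp(log_lik - shift)                  # now safe to exponentiate
    return w / w.sum(axis=0, keepdims=True)      # columns sum to 1

# High-dimensional data gives log-likelihoods around -1000: a naive
# np.exp(log_lik) would underflow to all zeros, but the shifted
# version recovers the correct responsibilities.
log_lik = np.array([[-1000.0, -1002.0],
                    [-1001.0, -1000.5]])
R = responsibilities(log_lik)
print(R.sum(axis=0))  # → [1. 1.]
```

The shift cancels in the ratio, so the result is mathematically identical to the naive formula whenever that formula does not underflow.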

    3D City Models and urban information: Current issues and perspectives

    Considering the sustainable development of cities implies investigating them in a holistic way, taking into account the many interrelations between various urban and environmental issues. 3D city models are increasingly used in different cities and countries for an intended wide range of applications beyond mere visualisation. Could these 3D city models be used to integrate urban and environmental knowledge? How could they be improved to fulfil such a role? We believe that enriching the semantics of current 3D city models would extend their functionality and usability; they could then serve as integration platforms for knowledge related to urban and environmental issues, allowing a significant improvement in sustainable city management and development. But which elements need to be added to 3D city models? What are the most efficient ways to realise such improvement/enrichment? How can the usability of these improved 3D city models be evaluated? These were the questions tackled by the COST Action TU0801 "Semantic enrichment of 3D city models for sustainable urban development". This book gathers the material developed over the four years of the Action and its significant breakthroughs.

    An improved LOD specification for 3D building models


    The Value of Seizure Semiology in Epilepsy Surgery: Epileptogenic-Zone Localisation in Presurgical Patients using Machine Learning and Semiology Visualisation Tool

    Background: Eight million individuals have focal drug-resistant epilepsy worldwide. If their epileptogenic focus is identified and resected, they may become seizure-free and experience significant improvements in quality of life. However, seizure-freedom occurs in less than half of surgical resections. Seizure semiology - the signs and symptoms during a seizure - along with brain imaging and electroencephalography (EEG) are amongst the mainstays of seizure localisation. Although there have been advances in algorithmic identification of abnormalities on EEG and imaging, semiological analysis has remained more subjective. The primary objective of this research was to investigate the localising value of clinician-identified semiology, and secondarily to improve personalised prognostication for epilepsy surgery.
    Methods: I data-mined retrospective hospital records to link semiology to outcomes. I trained machine-learning models to predict temporal lobe epilepsy (TLE) and to determine the value of semiology against a benchmark of hippocampal sclerosis (HS). Because the hospital dataset was relatively small, we also collected data from a systematic review of the literature to curate an open-access Semio2Brain database. We built the Semiology-to-Brain Visualisation Tool (SVT) on this database and retrospectively validated SVT in two separate groups: randomly selected patients and individuals with frontal lobe epilepsy. Separately, a systematic review of multimodal prognostic features of epilepsy surgery was undertaken. The concept of a semiological connectome was devised and compared to structural connectivity to investigate probabilistic propagation and semiology generation.
    Results: Although a (non-chronological) list of patients' semiologies did not improve localisation beyond the initial semiology, the list of semiologies added value when combined with an imaging feature. The absolute added value of semiology in a support vector classifier in diagnosing TLE, compared to HS, was 25%. Semiology was, however, unable to predict postsurgical outcomes. To help future prognostic models, a list of essential multimodal prognostic features for epilepsy surgery was extracted from meta-analyses and a structural causal model proposed. Semio2Brain consists of over 13,000 semiological data points from 4,643 patients across 309 studies and uniquely enabled a Bayesian approach to localisation that mitigates TLE publication bias. SVT performed well in a retrospective validation, matching the best expert clinician's localisation scores and exceeding them for lateralisation, and showed modest value in localisation for individuals with frontal lobe epilepsy (FLE). There was a significant correlation between the number of connecting fibres between brain regions and the seizure semiologies that can arise from those regions.
    Conclusions: Semiology is valuable in localisation, but multimodal concordance is more valuable and highly prognostic. SVT could be suitable for use in multimodal models to predict the seizure focus.
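As an illustration of the Bayesian localisation idea, a posterior over candidate brain regions can be computed by combining a prior (which can down-weight over-reported regions such as the temporal lobe) with Laplace-smoothed likelihoods derived from per-region patient counts. The sketch below uses invented toy counts and assumes conditional independence of semiologies given the region; it is not SVT's actual model:

```python
import numpy as np

def localise(counts, prior):
    """Posterior over brain regions given a patient's observed semiologies.

    counts : dict semiology -> (R,) array of patient counts per region,
             as a database like Semio2Brain might tabulate them
    prior  : (R,) prior over regions; choosing a non-uniform prior is one
             way to counter publication bias towards a given region
    """
    log_post = np.log(prior)
    for c in counts.values():
        lik = (c + 1) / (c.sum() + len(c))   # Laplace-smoothed P(semiology | region)
        log_post += np.log(lik)              # independent-evidence update
    log_post -= log_post.max()               # shift for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy example with three regions: temporal, frontal, parietal.
counts = {"epigastric aura": np.array([40, 5, 2]),
          "head version":    np.array([10, 30, 3])}
prior = np.array([1 / 3, 1 / 3, 1 / 3])
print(localise(counts, prior))  # temporal lobe gets the highest posterior
```

Smoothing keeps rarely reported region-semiology pairs from zeroing out the posterior, which matters when literature-derived counts are sparse.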

    Visual conversations on urban futures: understanding participatory processes and artefacts

    Visualisations of future cities contribute to our social imaginary. They can be, and have been, used as speculative objects for imagining new possible ways of living as communities (Dunn et al., 2014). However, future cities are usually represented through coherent scenarios that tell only one story (or one version of it), and rarely express the complexity of urban life. How can the diversity that characterises the city be represented in visions of the future that give voice to different, diverging ways of living and experiencing it? How do these visualisations contribute to inclusive design and research actions aimed at envisioning, prototyping, and reflecting on possible scenarios for liveable cities?
    My research focuses on ways of visualising possibilities for life in future cities that include and valorise plurality and agonism (DiSalvo, 2010), rather than presenting (as usually happens) only one story. For lack of existing terminology, I call this approach "Visual Conversations on Urban Futures" (VCUF). Although there are no definitions or structured descriptions of VCUF, some prototypes can be found in design, art, and architecture. These examples show the great variety of methods and media that can be adopted in participatory processes of imagining future cities.
    As a designer, I have chosen to adopt an action-research methodology (Kock, 2012; Rust, Mottram, & Till, 2007) to conduct, document, and reflect on a series of design experiments (Eriksen & Bang, 2013) that enhance my understanding of what it means to make pluralism explicit when producing visions of urban futures. The four main design experiments that I have undertaken are:
    - Living in the City: a first experiment in visualising future urban scenarios from a collaboratively written text.
    - Envisioning Urban Futures: speculative co-design practices, designing spaces for imaginary explorations and mapping them in an Atlas that makes visions readable and explorable.
    - Sharing Cities: conducting situated conversations on the relationship between social practices and urban futures, co-creating scenarios of sharing cities.
    - Birmingham Parks Summit: visions designed to be unpacked, reworked, and developed into actions.
    The main contribution of my research is the proposal of a set of design principles, including a definition of the design space of VCUF. The design space outlined in the dissertation is a framework that can be used both as an analytical lens (to understand existing processes and artefacts of VCUF) and as a design tool. Visual Conversations on Urban Futures could offer a significant contribution to the early stages of scenario-building processes for possible futures. Manzini and Coad (2015) describe scenarios as "communicative artifacts produced to further the social conversation about what to do". This way of imagining futures is ultimately about building alternatives to the dominant order by "making possible what appear(s) to be impossible" (Lefebvre, 1970, cited in Buckley & Violeau, 2011). While in times of urgent change seeking clarity and agreement might seem a much preferable route, I argue that articulating divergence is a necessary step towards exploring truly radical solutions. Stepping back from a solution-oriented approach allows us to visualise and better understand underlying tensions, and to critically question assumptions about what futures are or should be desirable.

    Development of a Conceptual, Mathematical and System Dynamics Model for Landfill Water Treatment

    Leachate is a major problem in landfills due to the type and quantity of pollutants it carries. In Croatia, the usual way of handling leachate is recirculation back into the landfill body. However, this method poses a danger of leachate leaking into the environment, especially during periods of increased precipitation. Leachate is heavily polluted with organic matter, and its spillage into the environment can cause an environmental incident. This paper presents a model for the efficient treatment of landfill water contaminated with organic matter, based on the operating parameters of an actual water treatment system. The aim of this research is to develop a model for landfill water treatment and to design a methodology suited to the characteristic patterns of organic-matter pollution. The developed conceptual model is a computer-based model that draws randomly selected values from the theoretical probability distributions of the applied variables. The mathematical model is based on a system of differential equations solved by the Runge-Kutta method. To validate the model, a nonparametric test was applied, given that the distributions are asymmetric and non-Gaussian. The methodology proposed in this paper is based on simulation modelling as a useful method in environmental protection. The developed and validated model demonstrates that landfill water can be treated effectively and economically. Simulation modelling and environmental informatics can contribute to solving environmental problems on the computer, without unnecessary risk to the environment.
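The classical fourth-order Runge-Kutta scheme mentioned above advances a system y' = f(t, y) one step at a time by combining four slope evaluations. A generic sketch with an illustrative two-compartment decay system (the rate constants and compartments are invented for illustration, not the paper's model):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)                          # slope at the start of the step
    k2 = f(t + h / 2, y + h / 2 * k1)     # slope at the midpoint (using k1)
    k3 = f(t + h / 2, y + h / 2 * k2)     # slope at the midpoint (using k2)
    k4 = f(t + h, y + h * k3)             # slope at the end of the step
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical two-compartment system: organic load y[0] degrades at
# rate k1, feeding a by-product y[1] that degrades at rate k2.
def f(t, y):
    k1, k2 = 0.8, 0.3
    return np.array([-k1 * y[0],
                     k1 * y[0] - k2 * y[1]])

t, y, h = 0.0, np.array([100.0, 0.0]), 0.1
for _ in range(50):          # integrate from t = 0 to t = 5
    y = rk4_step(f, t, y, h)
    t += h
print(y)  # organic load decays towards zero while the by-product accumulates
```

For the linear system above the first component has the closed form 100·exp(-0.8t), so the numerical result at t = 5 can be checked against 100·exp(-4).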