35,153 research outputs found

    Non-linear hierarchical visualisation

    Get PDF
    This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points of high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads that can be optimised and further developed into candidate drugs. With the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, which examine tables and graphical plots to analyse relationships between measured activities and the structure of compounds, are not feasible for a large HTS dataset. Instead, data visualisation provides a method for analysing such large, high-dimensional datasets. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at a time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can infer the distribution of the data from magnification factor and curvature plots. Rather than obtaining all the useful information from a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive, top-down fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlap between data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which helps the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
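    As a rough illustration of the EM alternation described above, the following is a minimal sketch (not the thesis implementation) of one EM pass for a latent trait model with binary data: a grid of latent points is mapped through radial basis function activations `Phi` and a weight matrix `W` to Bernoulli parameters for each data dimension. The basis activations, learning rate, and gradient-based M-step are illustrative assumptions; the hierarchical version would train one such model per user-selected region of the parent plot.

```python
import numpy as np
from scipy.special import expit, logsumexp

def em_step(X, Phi, W, lr=0.1, m_iters=10):
    """One EM iteration for a binary latent trait model.
    X: (N, D) binary data; Phi: (K, M) RBF basis activations of the K latent
    grid points; W: (M, D) weights mapping basis space to Bernoulli logits."""
    # E-step: responsibilities of the latent grid points (uniform prior assumed)
    S = np.clip(expit(Phi @ W), 1e-10, 1 - 1e-10)          # (K, D) Bernoulli means
    log_p = X @ np.log(S).T + (1 - X) @ np.log(1 - S).T    # (N, K) log-likelihoods
    R = np.exp(log_p - logsumexp(log_p, axis=1, keepdims=True))

    # M-step: no closed form with Bernoulli noise, so take a few gradient steps
    # on the expected complete-data log-likelihood with respect to W
    for _ in range(m_iters):
        S = np.clip(expit(Phi @ W), 1e-10, 1 - 1e-10)
        grad = Phi.T @ (R.T @ X - R.sum(axis=0)[:, None] * S)   # (M, D)
        W = W + lr * grad / X.shape[0]
    return W, R
```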

    Using treemaps for variable selection in spatio-temporal visualisation

    Get PDF
    We demonstrate and reflect upon the use of enhanced treemaps that incorporate spatial and temporal ordering for exploring a large multivariate spatio-temporal data set. The resulting data-dense views summarise and simultaneously present hundreds of space-, time-, and variable-constrained subsets of a large multivariate data set in a structure that facilitates their meaningful comparison and supports visual analysis. Interactive techniques allow localised patterns to be explored and subsets of interest selected and compared with the spatial aggregate. Spatial variation is considered through interactive raster maps and high-resolution local road maps. The techniques are developed in the context of 42.2 million records of vehicular activity in a 98 km² area of central London and informally evaluated through a design used in the exploratory visualisation of this data set. The main advantages of our technique are the means to simultaneously display hundreds of summaries of the data and to interactively browse hundreds of variable combinations with ordering and symbolism that are consistent and appropriate for space- and time-based variables. These capabilities are difficult to achieve in the case of spatio-temporal data with categorical attributes using existing geovisualisation methods. We acknowledge limitations in the treemap representation but enhance the cognitive plausibility of this popular layout through our two-dimensional ordering algorithm and interactions. Patterns that are expected (e.g. more traffic in central London), interesting (e.g. the spatial and temporal distribution of particular vehicle types) and anomalous (e.g. low speeds on particular road sections) are detected at various scales and locations using the approach. In many cases, anomalies identify biases that may have implications for future use of the data set for analyses and applications. Ordered treemaps appear to have potential as interactive interfaces for variable selection in spatio-temporal visualisation. Information Visualization (2008) 7, 210-224. doi: 10.1057/palgrave.ivs.950018
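    As a rough, simplified illustration of the kind of layout involved, the sketch below implements an ordered slice-and-dice treemap: size-weighted rectangles are nested hierarchically while the order of the leaves is preserved. The paper's own spatially ordered, two-dimensional layout algorithm is more sophisticated, so the node structure, sizes, and alternating-axis splits here are purely illustrative.

```python
def slice_and_dice(node, x, y, w, h, depth=0):
    """node: ('name', size) leaf or ('name', [children]) internal node.
    Yields (name, x, y, w, h) rectangles for every leaf, in input order."""
    name, content = node
    if not isinstance(content, list):                      # leaf: emit its rectangle
        yield (name, x, y, w, h)
        return
    total = sum(_size(c) for c in content)
    offset = 0.0
    for child in content:                                  # split along alternating axes
        frac = _size(child) / total
        if depth % 2 == 0:                                 # split horizontally
            yield from slice_and_dice(child, x + offset * w, y, frac * w, h, depth + 1)
        else:                                              # split vertically
            yield from slice_and_dice(child, x, y + offset * h, w, frac * h, depth + 1)
        offset += frac

def _size(node):
    _, content = node
    return sum(_size(c) for c in content) if isinstance(content, list) else content

# Toy example: vehicle counts nested by area -> hour
tree = ("London", [("A", [("08:00", 40), ("09:00", 60)]), ("B", [("08:00", 30)])])
for rect in slice_and_dice(tree, 0.0, 0.0, 1.0, 1.0):
    print(rect)
```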

    Nature-Inspired Learning Models

    Get PDF
    Intelligent learning mechanisms found in the natural world are still unsurpassed in their learning performance and efficiency in dealing with uncertain information coming in a variety of forms, yet they remain under continuous challenge from human-driven artificial intelligence methods. This work intends to demonstrate how phenomena observed in the physical world can be directly used to guide artificial learning models. Inspiration for the new learning methods has been found in the mechanics of physical fields at both micro and macro scales. Exploiting the analogies between data and particles subjected to gravitational, electrostatic and gas-particle fields, new algorithms have been developed and applied to classification and clustering, while the properties of the field are further reused in regression and in the visualisation of classification and classifier fusion. The paper includes extensive pictorial examples and visual interpretations of the presented techniques, along with testing over well-known real and artificial datasets, compared where possible to traditional methods.
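    To make the field analogy concrete, here is a minimal sketch (not the paper's exact formulation) of a gravitational-field-style classifier: every training point attracts a query point with an inverse-square force, the attractions are summed per class, and the query is assigned to the class with the strongest aggregate field. The inverse-square weighting and the toy data are assumptions made for illustration.

```python
import numpy as np

def gravity_classify(X_train, y_train, x_query, eps=1e-9):
    """Assign x_query to the class whose training points exert the largest
    summed inverse-square attraction (a simple 'data gravity' field)."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1) + eps    # squared distances
    field = {}
    for cls in np.unique(y_train):
        field[cls] = np.sum(1.0 / d2[y_train == cls])      # aggregate field strength
    return max(field, key=field.get)

# Toy usage with two Gaussian clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(gravity_classify(X, y, np.array([3.5, 3.5])))        # expected: 1
```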

    WebDocBall: a graphical visualisation tool for web search results

    Get PDF
    In the Web search process, people often think that the hardest work is done by the search engines or by the directories which are entrusted with finding the Web pages. While this is partially true, a no less important part of the work is done by the user, who has to decide which page is relevant from the huge set of retrieved pages. In this paper we present a graphical visualisation tool aimed at helping users to determine the relevance of a Web page with respect to its structure. Such a tool can help the user in the often tedious task of deciding which page is relevant enough to deserve a visit.

    ROI coding of volumetric medical images with application to visualisation

    Get PDF

    A grid-enabled problem solving environment for parallel computational engineering design

    Get PDF
    This paper describes the development and application of a piece of engineering software that provides a problem solving environment (PSE) capable of launching, and interfacing with, computational jobs executing on remote resources on a computational grid. In particular, it is demonstrated how a complex, serial engineering optimisation code may be efficiently parallelised, grid-enabled and embedded within a PSE. The environment is highly flexible, allowing remote users from different sites to collaborate, and permitting computational tasks to be executed in parallel across multiple grid resources, each of which may be a parallel architecture. A full working prototype has been built and successfully applied to a computationally demanding engineering optimisation problem. This particular problem stems from elastohydrodynamic lubrication and involves optimising the computational model for a lubricant based on the match between simulation results and experimentally observed data.
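    The pattern described, distributing many independent simulation runs and scoring each candidate model by its match to experimental data, can be sketched as below. This is a hypothetical, local stand-in: `run_simulation` is a placeholder for the EHL solver, and a process pool substitutes for remote grid resources.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    """Placeholder for a (remote, possibly parallel) lubrication simulation."""
    a, b = params
    x = np.linspace(0.0, 1.0, 50)
    return a * np.exp(-b * x)

EXPERIMENT = run_simulation((1.0, 3.0))                    # stand-in for measured data

def mismatch(params):
    """Objective: sum of squared differences between simulation and experiment."""
    return float(np.sum((run_simulation(params) - EXPERIMENT) ** 2))

if __name__ == "__main__":
    candidates = [(a, b) for a in np.linspace(0.5, 1.5, 5) for b in np.linspace(1, 5, 5)]
    with ProcessPoolExecutor() as pool:                    # parallel evaluation of candidates
        scores = list(pool.map(mismatch, candidates))
    best = candidates[int(np.argmin(scores))]
    print("best-fit parameters:", best)
```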

    Hierarchical progressive surveys. Multi-resolution HEALPix data structures for astronomical images, catalogues, and 3-dimensional data cubes

    Full text link
    Scientific exploitation of the ever-increasing volumes of astronomical data requires efficient and practical methods for data access, visualisation, and analysis. Hierarchical sky tessellation techniques enable a multi-resolution approach to organising data on angular scales from the full sky down to individual image pixels. Aims. We aim to show that the Hierarchical progressive survey (HiPS) scheme for describing astronomical images, source catalogues, and three-dimensional data cubes is a practical solution to managing large volumes of heterogeneous data, and that it enables a new level of scientific interoperability across large collections of these different data types. Methods. HiPS uses the HEALPix tessellation of the sphere to define a hierarchical tile and pixel structure to describe and organise astronomical data. HiPS is designed to conserve the scientific properties of the data alongside visualisation considerations and an emphasis on ease of implementation. We describe the development of HiPS to manage a large number of diverse image surveys, as well as the extension of hierarchical image systems to cube and catalogue data. We demonstrate the interoperability of HiPS and Multi-Order Coverage (MOC) maps and highlight the HiPS mechanism for providing links to the original data. Results. Hierarchical progressive surveys have been generated by various data centres and groups for ~200 data collections, including many wide-area sky surveys and archives of pointed observations. These can be accessed and visualised in Aladin, Aladin Lite, and other applications. HiPS provides a basis for further innovations in the use of hierarchical data structures to facilitate the description and statistical analysis of large astronomical data sets. Comment: 21 pages, 6 figures. Accepted for publication in Astronomy & Astrophysics.
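    As an illustration of how the HEALPix-based tile hierarchy can be addressed in practice, the sketch below (assuming the healpy package) maps a sky position to the NESTED tile index covering it at a given order and builds a HiPS-style relative tile path; the directory grouping and file extension follow common HiPS conventions but should be treated as assumptions here rather than a definitive reference.

```python
import healpy as hp

def hips_tile_path(ra_deg, dec_deg, order, ext="fits"):
    """Return a HiPS-style relative path for the tile covering (ra, dec) at `order`."""
    nside = hp.order2nside(order)                          # nside = 2**order
    ipix = hp.ang2pix(nside, ra_deg, dec_deg, nest=True, lonlat=True)
    directory = (ipix // 10000) * 10000                    # tiles grouped 10000 per directory
    return f"Norder{order}/Dir{directory}/Npix{ipix}.{ext}"

def parent_tile(ipix):
    """In the NESTED scheme, each tile at order k-1 contains four tiles at order k."""
    return ipix >> 2

if __name__ == "__main__":
    # Tile covering the Crab Nebula (RA 83.63 deg, Dec +22.01 deg) at order 3
    print(hips_tile_path(83.63, 22.01, order=3))
```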

    Manifold Learning in MR spectroscopy using nonlinear dimensionality reduction and unsupervised clustering

    Get PDF
    Purpose To investigate whether nonlinear dimensionality reduction improves unsupervised classification of 1H MRS brain tumor data compared with a linear method. Methods In vivo single-voxel 1H magnetic resonance spectroscopy (55 patients) and 1H magnetic resonance spectroscopy imaging (MRSI) (29 patients) data were acquired from histopathologically diagnosed gliomas. Data reduction using Laplacian eigenmaps (LE) or independent component analysis (ICA) was followed by k-means clustering or agglomerative hierarchical clustering (AHC) for unsupervised learning to assess tumor grade and for tissue type segmentation of MRSI data. Results An accuracy of 93% in classification of glioma grade II and grade IV, with 100% accuracy in distinguishing tumor and normal spectra, was obtained by LE with unsupervised clustering, but not with the combination of k-means and ICA. With 1H MRSI data, LE provided a more linear distribution of data for cluster analysis and better cluster stability than ICA. LE combined with k-means or AHC provided 91% accuracy for classifying tumor grade and 100% accuracy for identifying normal tissue voxels. Color-coded visualization of normal brain, tumor core, and infiltration regions was achieved with LE combined with AHC. Conclusion The LE method is promising for unsupervised clustering to separate brain and tumor tissue with automated color-coding for visualization of 1H MRSI data after cluster analysis.
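    A minimal sketch of the two-stage approach described above, using scikit-learn's SpectralEmbedding as a stand-in Laplacian eigenmaps implementation followed by k-means and agglomerative hierarchical clustering; the random placeholder `spectra` matrix, neighbourhood size, and number of clusters are assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding             # Laplacian eigenmaps
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 512))                       # placeholder (n_voxels, n_points) spectra

# Nonlinear dimensionality reduction to a low-dimensional embedding
embedding = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(spectra)

# Unsupervised clustering in the embedded space (e.g. normal / tumor core / infiltration)
labels_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
labels_ahc = AgglomerativeClustering(n_clusters=3).fit_predict(embedding)
```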

    The various forms of civilization arranged in chronological strata: manipulating the HTOED

    Get PDF