Mapping Topographic Structure in White Matter Pathways with Level Set Trees
Fiber tractography on diffusion imaging data offers rich potential for
describing white matter pathways in the human brain, but characterizing the
spatial organization in these large and complex data sets remains a challenge.
We show that level set trees---which provide a concise representation of the
hierarchical mode structure of probability density functions---offer a
statistically principled framework for visualizing and analyzing topography in
fiber streamlines. Using diffusion spectrum imaging data collected on
neurologically healthy controls (N=30), we mapped white matter pathways from
the cortex into the striatum using a deterministic tractography algorithm that
estimates fiber bundles as dimensionless streamlines. Level set trees were used
for interactive exploration of patterns in the endpoint distributions of the
mapped fiber tracks and an efficient segmentation of the tracks that has
empirical accuracy comparable to standard nonparametric clustering methods. We
show that level set trees can also be generalized to model pseudo-density
functions in order to analyze a broader array of data types, including entire
fiber streamlines. Finally, resampling methods show the reliability of the
level set tree as a descriptive measure of topographic structure, illustrating
its potential as a statistical descriptor in brain imaging analysis. These
results highlight the broad applicability of level set trees for visualizing
and analyzing high-dimensional data like fiber tractography output.
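The mode-counting idea behind a level set tree can be illustrated with a minimal sketch (synthetic 1D data and SciPy's kernel density estimate, not the authors' pipeline): slicing the estimated density at increasing levels and counting connected components of each upper level set gives the tree's branching structure.

```python
# A minimal, hypothetical sketch: count high-density modes of a 1D sample
# at increasing density levels -- the "horizontal slices" from which a
# level set tree is assembled. Not the authors' implementation.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Two well-separated modes standing in for two bundles of fiber endpoints.
sample = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])

kde = gaussian_kde(sample)
grid = np.linspace(-5, 5, 1000)
density = kde(grid)

def n_components(density, level):
    """Number of connected components of the upper level set {x : f(x) > level}."""
    above = density > level
    # A component starts wherever `above` switches from False to True.
    starts = above & ~np.concatenate(([False], above[:-1]))
    return int(starts.sum())

low, high = 0.01 * density.max(), 0.5 * density.max()
# Low levels see one connected region; higher levels split it into two modes.
print(n_components(density, low), n_components(density, high))
```

Rising from `low` to `high`, the single root branch splits into two leaves; recording such splits over all levels yields the level set tree.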
Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models is the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs with the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), by using small footprint full-
waveform Light Detection And Ranging (LiDAR) data. The investigation is carried
out through a combination of statistical analysis of vertical accuracy, algorithm
robustness, and spatial clustering of vertical errors, and multi-criteria shape reliability
assessment. After that, we examine the influence of the tested interpolation algorithms
on the performance of a Geographic Information System (GIS)-based cell model for
simulating stony debris flow routing. Specifically, we investigate both the correlation
between the DEM height uncertainty resulting from the gridding procedure and the
uncertainty of the corresponding simulated erosion/deposition depths, and the effect of
interpolation algorithms on simulated areas, erosion and deposition volumes, solid-liquid
discharges, and channel morphology after the event. The comparison among the tested
interpolation methods highlights that the ANUDEM and ordinary kriging algorithms
are not suitable for building DEMs with complex topography. Conversely, the linear
triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy
and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on
debris flow routing modeling reveals that the choice of the interpolation algorithm does
not significantly affect the model outcomes.
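The study's vertical-accuracy comparison can be mimicked in miniature with SciPy's `griddata` (a hypothetical synthetic surface and hold-out split; `linear` corresponds to linear triangulation and `nearest` to nearest neighbor, while the study's other methods are outside `griddata`'s scope):

```python
# Hypothetical sketch: score gridding methods by vertical RMSE on held-out
# "survey" points. Synthetic terrain, not the study's LiDAR data.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)
xy = rng.uniform(0, 100, size=(2000, 2))             # scattered sample points
z = np.sin(xy[:, 0] / 15.0) * 10 + 0.05 * xy[:, 1]   # stand-in terrain surface

train, test = xy[:1600], xy[1600:]
z_train, z_test = z[:1600], z[1600:]

results = {}
for method in ("linear", "nearest", "cubic"):
    z_hat = griddata(train, z_train, test, method=method)
    ok = ~np.isnan(z_hat)                  # drop points outside the convex hull
    results[method] = np.sqrt(np.mean((z_hat[ok] - z_test[ok]) ** 2))
    print(f"{method:8s} RMSE = {results[method]:.3f} m")
```

On a smooth surface the triangulation-based methods interpolate the slope between samples, so their vertical RMSE sits well below the stair-stepped nearest-neighbor gridding.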
Ground Profile Recovery from Aerial 3D LiDAR-based Maps
The paper presents the study and implementation of a ground detection
methodology that filters and removes forest points from a LiDAR-based 3D
point cloud using the Cloth Simulation Filtering (CSF) algorithm. The
methodology makes it possible to recover the terrestrial relief and create a landscape map
of a forested region. As a proof of concept, we conducted an outdoor flight
experiment, flying a hexacopter over a mixed forest region with sharp
ground changes near Innopolis (Russia), which demonstrated
encouraging results for both ground detection and methodology robustness.
Comment: 8 pages, FRUCT-2019 conference
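A toy version of the ground/canopy separation can convey the idea (a simple grid-minimum filter on hypothetical synthetic points, standing in for CSF; the actual CSF algorithm instead simulates a cloth draped over the inverted cloud):

```python
# Hypothetical sketch: a simple grid-minimum ground filter as a stand-in
# for CSF -- points within `tol` of the lowest return in each grid cell are
# kept as ground; higher returns (canopy) are removed. Synthetic data.
import numpy as np

def ground_filter(points, cell=2.0, tol=0.3):
    """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys = ij[:, 0] * 1_000_003 + ij[:, 1]         # unique key per grid cell
    order = np.argsort(keys, kind="stable")
    sk, sp = keys[order], points[order]
    ground = np.zeros(len(points), dtype=bool)
    start = 0
    for end in range(1, len(points) + 1):          # sweep contiguous cell groups
        if end == len(points) or sk[end] != sk[start]:
            zmin = sp[start:end, 2].min()
            ground[order[start:end]] = sp[start:end, 2] <= zmin + tol
            start = end
    return ground

# Flat ground near z = 0 m under a canopy layer near z = 15 m.
rng = np.random.default_rng(1)
ground_pts = np.column_stack([rng.uniform(0, 50, (5000, 2)),
                              rng.normal(0.0, 0.05, 5000)])
canopy_pts = np.column_stack([rng.uniform(0, 50, (3000, 2)),
                              rng.normal(15.0, 1.0, 3000)])
pts = np.vstack([ground_pts, canopy_pts])

mask = ground_filter(pts)
print(mask[:5000].mean(), mask[5000:].mean())   # ground kept vs. canopy kept
```

A grid-minimum filter struggles on steep slopes (a cell's low corner rejects ground on its high corner), which is exactly why cloth-simulation approaches are preferred for terrain with sharp ground changes.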
Evaluating a Self-Organizing Map for Clustering and Visualizing Optimum Currency Area Criteria
Optimum currency area (OCA) theory attempts to define the geographical region in which it would maximize economic efficiency to have a single currency. In this paper, the focus is on prospective and current members of the Economic and Monetary Union. For this task, a self-organizing neural network, the Self-organizing map (SOM), is combined with hierarchical clustering for a two-level approach to clustering and visualizing OCA criteria. The output of the SOM is a topologically preserved two-dimensional grid. The final models are evaluated based on both clustering tendencies and accuracy measures. Thereafter, the two-dimensional grid of the chosen model is used for visual assessment of the OCA criteria, while its clustering results are projected onto a geographic map.
Keywords: self-organizing maps, optimum currency area, projection, clustering, geospatial visualization
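The two-level approach can be sketched as follows (a tiny hand-rolled SOM plus SciPy's hierarchical clustering on the codebook; the "economies" and indicator values are synthetic stand-ins, not the paper's OCA data):

```python
# Hypothetical sketch of the two-level approach: a small SOM is trained on
# synthetic stand-ins for country-level OCA indicators, then the SOM
# codebook vectors are grouped with hierarchical (Ward) clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def train_som(data, rows=4, cols=4, epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    codebook = rng.normal(size=(rows * cols, data.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n_steps)              # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 0.5  # shrinking neighbourhood
            bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
            codebook += lr * h[:, None] * (x - codebook)
            t += 1
    return codebook

rng = np.random.default_rng(3)
# Two artificial groups of "economies", each described by 5 indicators.
data = np.vstack([rng.normal(-2, 0.3, (30, 5)), rng.normal(2, 0.3, (30, 5))])

codebook = train_som(data)
# Second level: hierarchical clustering of the SOM prototypes into 2 groups.
labels = fcluster(linkage(codebook, method="ward"), t=2, criterion="maxclust")
bmus = np.argmin(((data[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
assigned = labels[bmus]                               # cluster per economy
print(assigned)
```

Clustering the handful of SOM prototypes rather than the raw observations is what makes the second level cheap, and the grid positions of the prototypes preserve the topology for visual inspection.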
Visualisation of bioinformatics datasets
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection, etc. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules with a high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods such as generative topographic mapping (GTM) become computationally intractable. We propose variants of these methods, where we use log-transformations at certain steps of the expectation maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants for both synthetic datasets and an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem which is not addressed much in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, where appropriate noise models are used for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM).
We also propose to extend the GGTM model to estimate feature saliencies while training the visualisation model; this variant is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models for both synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known, we also use the quality metrics of KL divergence and nearest-neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models for both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
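One of the rank-based quality measures mentioned above, trustworthiness, can be sketched directly from its standard definition (a minimal NumPy version on hypothetical data, not the thesis code):

```python
# Minimal sketch of the trustworthiness metric: it penalizes points that
# appear among a point's k nearest neighbours in the embedding without
# being among its k nearest neighbours in the original space.
import numpy as np

def trustworthiness(X, Y, k=5):
    n = len(X)
    dx = np.linalg.norm(X[:, None] - X[None], axis=-1)
    dy = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
    np.fill_diagonal(dx, np.inf)
    np.fill_diagonal(dy, np.inf)
    rank_x = dx.argsort(1).argsort(1)     # rank of j among i's neighbours in X
    nn_y = dy.argsort(1)[:, :k]           # k nearest neighbours in the embedding
    penalty = 0.0
    for i in range(n):
        for j in nn_y[i]:
            if rank_x[i, j] >= k:         # j intruded into the embedded k-NN set
                penalty += rank_x[i, j] - k + 1
    return 1.0 - 2.0 * penalty / (n * k * (2 * n - 3 * k - 1))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
print(trustworthiness(X, X[:, :2]))       # a crude 2-D projection of X
```

A perfect embedding scores 1.0; the companion metric, continuity, is the same computation with the roles of the data space and the latent space swapped.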
Generalized topographic block model
Co-clustering leads to parsimony in data visualisation, with the number of parameters dramatically reduced in comparison to the dimensions of the data sample. Herein, we propose a new generalized approach for nonlinear mapping based on a re-parameterization of the latent block mixture model. The densities modeling the blocks belong to an exponential family, of which the Gaussian, Bernoulli and Poisson laws are particular cases. The inference of the parameters is derived from the block expectation-maximization algorithm, with a Newton-Raphson procedure at the maximization step. Empirical experiments with textual data demonstrate the value of our generalized model.
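For illustration, a related co-clustering method (spectral co-clustering as implemented in scikit-learn, not the latent block EM of the paper) shows how co-clustering compresses rows and columns jointly on a synthetic count matrix:

```python
# Hypothetical sketch: recover two diagonal blocks in a count matrix with
# spectral co-clustering -- a stand-in for the latent block mixture model,
# not the paper's block EM / Newton-Raphson procedure.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(40, 30)).astype(float)   # background counts
X[:20, :15] += rng.poisson(10.0, size=(20, 15))     # block 1 (e.g. topic 1)
X[20:, 15:] += rng.poisson(10.0, size=(20, 15))     # block 2 (e.g. topic 2)

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(X)
print(model.row_labels_)       # joint row partition
print(model.column_labels_)    # joint column partition
```

Instead of 40 x 30 cell parameters, the fitted structure is summarized by one mean per (row-cluster, column-cluster) block, which is the parsimony the abstract refers to.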
Mechanisms of Feedback in the Visual System
Feedback is a ubiquitous feature of neural systems, though there is little consensus on the roles of the mechanisms involved. We set up an in vivo preparation to study and characterize an accessible and isolated feedback loop within the visual system of the leopard frog, Rana pipiens. We recorded extracellularly within the nucleus isthmi, a nucleus providing direct topographic feedback to the optic tectum, which receives the vast majority of retinal output. The optic tectum and nucleus isthmi of the amphibian are homologous to the superior colliculus and parabigeminal nucleus in mammals, respectively. We formulated a novel threshold for detecting neuronal spikes within a low signal-to-noise environment, as exists in the nucleus isthmi due to its high density of small neuronal cell bodies. Combining this threshold with a recently developed spike sorting procedure enabled us to extract simultaneous recordings from up to 7 neurons at a time from a single extracellular electrode. We then stimulated the frog with computer-driven dynamic spatiotemporal visual stimuli to characterize the responses of nucleus isthmi neurons. We found that the responses display surprisingly long time courses to simple visual stimuli. Furthermore, when stimulated with complex contextual stimuli, the response of the nucleus isthmi is quite counter-intuitive: when a stimulus is presented outside of the classical receptive field along with a stimulus within the receptive field, the response is actually higher than the response to a stimulus within the classical receptive field alone. Finally, we compared the responses of all of the simultaneously recorded neurons and, together with data from in vitro experiments within the nucleus isthmi, conclude that the nucleus isthmi of the frog is composed of just one electrophysiological population of cells.
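The spike-detection step can be illustrated with a standard robust threshold (a median-based noise estimate common in extracellular work; the abstract does not specify the paper's novel threshold, so this is only a generic baseline on synthetic data):

```python
# Hypothetical sketch: a standard robust spike threshold for low-SNR
# extracellular traces, using the median-based noise estimate. A generic
# baseline, not the paper's novel threshold.
import numpy as np

def detect_spikes(trace, k=4.5):
    """Return sample indices where |trace| first crosses k robust SDs."""
    sigma = np.median(np.abs(trace)) / 0.6745   # robust noise-level estimate
    above = np.abs(trace) > k * sigma
    onsets = above & ~np.concatenate(([False], above[:-1]))
    return np.flatnonzero(onsets)

rng = np.random.default_rng(7)
trace = rng.normal(0.0, 1.0, 20_000)            # background noise
true_times = (2_000, 9_500, 15_000)
for t in true_times:
    trace[t:t + 10] += -8.0                     # injected spike waveforms
print(detect_spikes(trace))
```

The median-based estimate is preferred over the raw standard deviation because the spikes themselves inflate the latter, which would push the threshold up and miss small units in dense tissue like the nucleus isthmi.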