
    Knife Edge Scanning Microscope Brain Atlas Interface for Tracing and Analysis of Vasculature Data

    The study of the neurovascular network in the brain is important for understanding brain functions as well as the causes of several brain dysfunctions. Many techniques have been applied to acquire neurovascular data. The Knife-Edge Scanning Microscope (KESM), developed by the Brain Network Lab at Texas A&M University, can generate whole-brain-scale data at submicrometer resolution. The specimen can be stained with different stains, and depending on the type of stain used, the KESM can image different types of microstructures in the brain. The India ink stain allows the neurovascular network in the brain to be imaged. In order to visualize and analyze such large datasets (~ 1.5 TB per brain), a lightweight, web-based mouse brain atlas called the Knife-Edge Scanning Microscope Brain Atlas (KESMBA) was developed in the lab. The atlas serves several whole mouse brain datasets, including India ink data. The multi-section overlay technique used in the atlas enables 3D visualization of the structural information in the data. To solve the challenging issue of tracing micro-vessels in the brain, in this thesis a semi-automated tracing and analysis method is developed and integrated into the KESM brain atlas. Using the KESMBA interface developed in this thesis, the user can look at the 3D structure of the vessels on the brain atlas and can guide the tracing algorithm. To analyze the vasculature network traced by the user, a data analysis component is also added. This new KESMBA interface is expected to help in quickly tracing and analyzing the vascular network of the brain with minimal manual effort.
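
    The abstract does not spell out how the multi-section overlay is computed; the following minimal Python sketch illustrates the general idea only, assuming a stack of grayscale sections (dark vessels on a light background) blended with depth-dependent attenuation. All names and parameters are illustrative, not taken from the KESMBA implementation.

        import numpy as np

        def overlay_sections(sections, attenuation=0.6):
            """Blend a stack of grayscale sections (dark vessels on a light
            background) into one 2D view, dimming deeper sections to suggest depth.

            sections: list of 2D float arrays in [0, 1], ordered top to bottom.
            """
            composite = np.ones_like(sections[0])  # start with a white canvas
            weight = 1.0
            for section in sections:
                # Dark pixels (vessels) show through where they are darker than
                # what has been accumulated so far, scaled down with depth.
                faded = 1.0 - weight * (1.0 - section)
                composite = np.minimum(composite, faded)
                weight *= attenuation  # deeper sections contribute less contrast
            return composite

        # Synthetic demo: three 64x64 sections blended into one overlay view.
        rng = np.random.default_rng(0)
        stack = [rng.random((64, 64)) for _ in range(3)]
        view = overlay_sections(stack)
        print(view.shape, float(view.min()), float(view.max()))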

    Methods for Automated Creation and Efficient Visualisation of Large-Scale Terrains based on Real Height-Map Data

    Real-time rendering of large-scale terrains is a difficult problem and remains an active field of research. The massive scale of these landscapes, where the ratio between the size of the terrain and its resolution spans multiple orders of magnitude, requires an efficient level-of-detail strategy. It is crucial that the geometry, as well as the terrain data, are represented seamlessly at varying distances while maintaining a constant visual quality. This thesis investigates common techniques and previous solutions to problems associated with the rendering of height-field terrains and discusses their benefits and drawbacks. Subsequently, two solutions to the stated problems are presented, which build and expand upon state-of-the-art rendering methods. A seamless and efficient mesh representation is achieved by the novel Uniform Distance-Dependent Level of Detail (UDLOD) triangulation method. This fully GPU-based algorithm subdivides a quadtree covering the terrain into small tiles, which can be culled in parallel and are morphed seamlessly in the vertex shader, resulting in a densely triangulated, temporally consistent mesh. The proposed Chunked Clipmap combines the strengths of both quadtrees and clipmaps to enable efficient out-of-core paging of terrain data. This data structure allows for constant-time view-dependent access, graceful degradation if data is unavailable, and supports trilinear and anisotropic filtering. Together these, otherwise independent, techniques enable the rendering of large-scale real-world terrains, which is demonstrated in real time on a dataset encompassing the entire Free State of Saxony at a resolution of one meter.
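
    The UDLOD and Chunked Clipmap algorithms are only named here, not specified, so the following CPU-side Python sketch shows just the generic idea of distance-dependent quadtree subdivision that such level-of-detail schemes build on; the split criterion, ratio, and depth limit are invented for illustration.

        import math

        def collect_tiles(cx, cy, size, eye, max_depth, ratio=1.5, depth=0):
            """Recursively subdivide a square terrain tile centred at (cx, cy).

            A tile is split while its edge length is large compared to its
            distance from the eye, giving roughly uniform screen-space detail;
            leaves are returned as (cx, cy, size) tuples ready to be rendered.
            """
            # Distance from the eye to the nearest point of the tile's square.
            dx = max(abs(eye[0] - cx) - size / 2.0, 0.0)
            dy = max(abs(eye[1] - cy) - size / 2.0, 0.0)
            dist = math.hypot(dx, dy)
            if depth < max_depth and size > ratio * max(dist, 1.0):
                tiles = []
                for ox in (-0.25, 0.25):
                    for oy in (-0.25, 0.25):
                        tiles += collect_tiles(cx + ox * size, cy + oy * size,
                                               size / 2.0, eye, max_depth,
                                               ratio, depth + 1)
                return tiles
            return [(cx, cy, size)]

        # Tiles shrink near the viewer at (100, 100) and stay coarse far away.
        tiles = collect_tiles(0.0, 0.0, 4096.0, eye=(100.0, 100.0), max_depth=8)
        print(len(tiles), min(t[2] for t in tiles), max(t[2] for t in tiles))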

    3rd Many-core Applications Research Community (MARC) Symposium. (KIT Scientific Reports ; 7598)

    This manuscript includes recent scientific work regarding the Intel Single-Chip Cloud Computer (SCC) and describes novel approaches for programming and run-time organization.

    Exploration of Pervasive Games in Relation to Mobile Technologies

    The project is an exploration of pervasive games in relation to mobile technologies, with the intention of developing a pervasive game engine. Pervasive games are interactive games in which participants drive the gameplay by playing in both the real world and a virtual environment. This is an area of gaming that has evolved rapidly over the last few years. The initial research involved establishing several key elements common to existing pervasive applications, defining real-world/virtual-world considerations for gameplay (both positive and negative), and identifying the technical requirements needed to implement play elements on a mobile device. After comparing several platforms, the Windows 7 platform was selected for development purposes. The requirements for establishing a working development platform (with delivery mechanism) were investigated and a working environment was set up. A pervasive game engine was then developed in the form of 67 code stubs (coding solutions) that implement solutions to gaming elements required in the development of pervasive applications. In addition, two new helper classes were developed, containing solutions for run-time data storage (StorageUtils.cs) and generic gaming tasks (GameCode.cs). A pervasive game was implemented to test a cross-section of the engine's functionality. The basic principle behind the game was to overlay various layers (video, backgrounds, sprites, and text) to build up an immersive pervasive environment with the player at the centre of the game imagery, game domain, and real world. The intention of the game was to see how the pervasive game experience could be reflected in the game mechanics and pervasive interaction while utilising the engine functionality.
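
    The engine itself consists of C# code stubs (e.g. StorageUtils.cs, GameCode.cs) that are not reproduced in the abstract; purely as an illustration of the back-to-front layering described above (video, background, sprites, text), a minimal Python sketch with invented layer names might look as follows.

        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Layer:
            z: int                                            # lower z is drawn first (further back)
            name: str = field(compare=False)
            visible: bool = field(default=True, compare=False)

        def draw_frame(layers):
            """Draw visible layers back-to-front so nearer layers overdraw farther ones."""
            for layer in sorted(layers):
                if layer.visible:
                    print(f"draw {layer.name} at z={layer.z}")

        # Back-to-front stack for one frame of a pervasive game scene.
        draw_frame([
            Layer(3, "HUD text"),
            Layer(0, "camera video feed"),   # real-world imagery at the back
            Layer(1, "map background"),
            Layer(2, "player sprite"),
        ])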

    THREE TEMPORAL PERSPECTIVES ON DECENTRALIZED LOCATION-AWARE COMPUTING: PAST, PRESENT, FUTURE

    During the past four decades, miniaturization has made computing devices ubiquitous and pervasive. Today, the number of objects connected to the Internet is increasing at a rapid pace and this trend does not seem to be slowing down. These objects, which can be smartphones, vehicles, or any kind of sensors, generate large amounts of data that are almost always associated with a spatio-temporal context. The amount of data is often so large that processing it requires a distributed system, which involves the cooperation of several computers. The ability to process these data is important for society. For example, the data collected during car journeys already make it possible to avoid traffic jams or to organize carpools. Another example: in the near future, maintenance interventions on the road network will be planned with data collected using gyroscopes that detect potholes. The application domains are therefore numerous, as are the problems associated with them. The articles that make up this thesis deal with systems that share two key characteristics: a spatio-temporal context and a decentralized architecture. In addition, the systems described in these articles revolve around three temporal perspectives: the present, the past, and the future. Systems associated with the present perspective enable a very large number of connected objects to communicate in near real time according to a spatial context. Our contributions in this area enable this type of decentralized system to be scaled out on commodity hardware, i.e., to adapt as the volume of data arriving in the system increases. Systems associated with the past perspective, often referred to as trajectory indexes, provide access to the large volumes of spatio-temporal data collected by connected objects. Our contributions in this area make it possible to handle particularly dense trajectory datasets, a problem that had not been addressed previously. Finally, systems associated with the future perspective rely on past trajectories to predict the trajectories that connected objects will follow. Our contributions predict the trajectories followed by connected objects with a previously unmet granularity. Although involving different domains, these contributions are structured around the common denominators of the underlying systems, which opens the possibility of dealing with these problems more generically in the near future.
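
    The thesis's trajectory indexes are not described in detail in this abstract; the following toy Python sketch only illustrates the general idea of a spatio-temporal index, assuming a uniform space-time grid with invented cell and bucket sizes.

        from collections import defaultdict

        class GridTrajectoryIndex:
            """Toy spatio-temporal index: buckets trajectory points by a uniform
            space-time grid so range queries only scan the overlapping cells."""

            def __init__(self, cell_size=100.0, time_bucket=60.0):
                self.cell_size = cell_size
                self.time_bucket = time_bucket
                self.cells = defaultdict(list)  # (ix, iy, it) -> [(obj_id, x, y, t)]

            def _key(self, x, y, t):
                return (int(x // self.cell_size),
                        int(y // self.cell_size),
                        int(t // self.time_bucket))

            def insert(self, obj_id, x, y, t):
                self.cells[self._key(x, y, t)].append((obj_id, x, y, t))

            def query(self, x_min, x_max, y_min, y_max, t_min, t_max):
                """Return the points that fall inside the spatio-temporal box."""
                hits = []
                for ix in range(int(x_min // self.cell_size), int(x_max // self.cell_size) + 1):
                    for iy in range(int(y_min // self.cell_size), int(y_max // self.cell_size) + 1):
                        for it in range(int(t_min // self.time_bucket), int(t_max // self.time_bucket) + 1):
                            for obj_id, x, y, t in self.cells.get((ix, iy, it), ()):
                                if x_min <= x <= x_max and y_min <= y <= y_max and t_min <= t <= t_max:
                                    hits.append((obj_id, x, y, t))
                return hits

        index = GridTrajectoryIndex()
        index.insert("car-1", 120.0, 80.0, 30.0)
        index.insert("car-2", 950.0, 40.0, 500.0)
        print(index.query(0, 200, 0, 200, 0, 120))   # only car-1 falls in this box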

    Scalable visual analytics over voluminous spatiotemporal data

    Visualization is a critical part of modern data analytics. This is especially true of interactive and exploratory visual analytics, which encourages speedy discovery of trends, patterns, and connections in data by allowing analysts to rapidly change what data is displayed and how it is displayed. Unfortunately, the explosion of data production in recent years has led to problems of scale, as storage, processing, querying, and visualization have struggled to keep pace with data volumes. Visualization of spatiotemporal data poses unique challenges, thanks in part to high dimensionality in the input feature space, interactions between features, and the production of voluminous, high-resolution outputs. In this dissertation, we address challenges associated with supporting interactive, exploratory visualization of voluminous spatiotemporal datasets and the underlying phenomena. This requires the visualization of millions of entities and of the changes to these entities as the spatiotemporal phenomena unfold. The rendering and propagation of spatiotemporal phenomena must be both accurate and timely. Key contributions of this dissertation include: 1) the temporal and spatial coupling of spatially localized models to enable the visualization of phenomena at far greater geospatial scales; 2) the ability to directly compare and contrast diverging spatiotemporal outcomes that arise from multiple exploratory "what-if" queries; and 3) the computational framework required to support an interactive user experience in a heavily resource-constrained environment. We additionally provide support for collaborative and competitive exploration with multiple synchronized clients.
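
    The dissertation's mechanism for contrasting "what-if" outcomes is not detailed here; as a rough illustration of the idea, the Python sketch below diffs two gridded outcomes cell by cell, using synthetic data and an arbitrary divergence threshold.

        import numpy as np

        def diverging_cells(outcome_a, outcome_b, threshold=0.1):
            """Return a boolean mask of grid cells where two simulated spatiotemporal
            outcomes disagree by more than `threshold`, plus summary statistics."""
            diff = np.abs(outcome_a - outcome_b)
            return diff > threshold, float(diff.mean()), float(diff.max())

        rng = np.random.default_rng(1)
        baseline = rng.random((50, 50))                      # outcome of scenario A
        variant = baseline + rng.normal(0, 0.05, (50, 50))   # outcome of scenario B
        mask, mean_diff, max_diff = diverging_cells(baseline, variant)
        print(int(mask.sum()), "cells diverge; mean diff", round(mean_diff, 3), "max", round(max_diff, 3))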

    Doctor of Philosophy

    Interactive editing and manipulation of digital media is a fundamental component of digital content creation. One medium in particular, digital imagery, has seen a recent increase in the popularity of large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability for these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high-quality results. This dissertation details how to design interactive image techniques that scale. In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the pipeline: the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from the very small to the massive in scale.
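
    The abstract does not name the compositing algorithms used; the Python sketch below shows only a simple linear cross-fade across the overlap of two registered tiles, as one basic illustration of composing registered images into a seamless mosaic, and is not necessarily the method developed in the dissertation.

        import numpy as np

        def feather_blend(left, right, overlap):
            """Blend two horizontally registered grayscale tiles whose last/first
            `overlap` columns cover the same region, using a linear cross-fade."""
            alpha = np.linspace(1.0, 0.0, overlap)   # weight for the left tile
            blended = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
            return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

        rng = np.random.default_rng(2)
        tile_a = rng.random((32, 48))
        tile_b = rng.random((32, 48))
        mosaic = feather_blend(tile_a, tile_b, overlap=16)
        print(mosaic.shape)   # (32, 80): 48 + 48 - 16 columns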

    Multi-sensor Evolution Analysis: an advanced GIS for interactive time series analysis and modelling based on satellite data

    Archives of Earth remote sensing data, acquired from orbiting satellites, contain large amounts of information that can be used both for research activities and for decision support. Thematic categorization is one method of extracting from satellite data meaningful information that humans can directly comprehend. An interactive system that makes it possible to analyse geo-referenced thematic data and its evolution over time is proposed as a tool to efficiently exploit this vast and growing amount of data. This thesis describes the approach used in building the system and the data processing methodology, and details the architectural elements and graphical interfaces. Finally, it provides an evaluation of potential uses of the features provided, as well as of the performance and usability of an implementation hosting an archive of 15 years of moderate-resolution (1 km, from the ATSR instrument) thematic data.
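
    The thesis's processing chain is only summarised above; as a hedged illustration of analysing the evolution of thematic (categorical) pixels over time, the Python sketch below counts per-pixel class transitions across a stack of thematic maps, with invented class codes and tile sizes.

        import numpy as np

        def change_frequency(thematic_stack):
            """Given a (time, rows, cols) array of integer class labels, return a
            (rows, cols) array counting how often each pixel changes class over time."""
            stack = np.asarray(thematic_stack)
            return (stack[1:] != stack[:-1]).sum(axis=0)

        rng = np.random.default_rng(3)
        # 15 annual thematic maps over a 20x20 tile with 5 invented land-cover classes.
        maps = rng.integers(0, 5, size=(15, 20, 20))
        changes = change_frequency(maps)
        print(changes.shape, int(changes.max()), "max class changes at a single pixel")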