
    VIOLA - A multi-purpose and web-based visualization tool for neuronal-network simulation output

    Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can, for example, be described by neuronal network models with layered geometry and distance-dependent connectivity. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed ahead of a detailed quantitative analysis of specific aspects of the data. Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology-based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype. Comment: 38 pages, 10 figures, 3 tables
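
    As a rough illustration of the preprocessing such a spatiotemporal view builds on, the following Python sketch bins spike events of a layered point-neuron network onto a 2D grid per time step, yielding a layer-activity "movie" that can then be animated or inspected interactively. This is only a sketch of the underlying idea; VIOLA itself is a JavaScript/web application, and the array names, bin sizes and units below are illustrative assumptions.

        # Minimal sketch (not VIOLA itself): bin spike events onto a 2D grid
        # per time step to obtain a layer-activity movie.
        import numpy as np

        def bin_layer_activity(spike_times, positions, t_stop, extent=2.0,
                               dt=1.0, n_bins=40):
            """Return an array of shape (n_steps, n_bins, n_bins) holding spike
            counts per temporal and spatial bin.

            spike_times : 1D array of spike times (ms)
            positions   : (n_spikes, 2) array of (x, y) positions (mm) of the
                          neurons emitting each spike
            extent      : side length of the square layer (mm)
            dt          : temporal bin width (ms)
            """
            n_steps = int(np.ceil(t_stop / dt))
            edges = np.linspace(-extent / 2, extent / 2, n_bins + 1)
            frames = np.zeros((n_steps, n_bins, n_bins))
            step = np.minimum((spike_times / dt).astype(int), n_steps - 1)
            for s in range(n_steps):
                mask = step == s
                frames[s], _, _ = np.histogram2d(positions[mask, 0],
                                                 positions[mask, 1],
                                                 bins=[edges, edges])
            return frames

        # Example: 10,000 random spikes over 100 ms on a 2 mm x 2 mm layer.
        rng = np.random.default_rng(0)
        frames = bin_layer_activity(rng.uniform(0, 100, 10_000),
                                    rng.uniform(-1, 1, (10_000, 2)), t_stop=100)
        print(frames.shape)  # (100, 40, 40)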

    Visual Dynamics: Stochastic Future Generation via Layered Cross Convolutional Networks

    We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, and on real-world video frames. We present analyses of the learned network representations, showing that it implicitly learns a compact encoding of object appearance and motion. We also demonstrate a few of its applications, including visual analogy-making and video extrapolation. Comment: Journal preprint of arXiv:1607.02586 (IEEE TPAMI, 2019). The first two authors contributed equally to this work. Project page: http://visualdynamics.csail.mit.edu
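
    The core mechanism the abstract describes, encoding image content as feature maps and sampled motion as convolutional kernels that are then applied to those maps, can be sketched in a few lines of PyTorch. The shapes, kernel size and the grouped-convolution trick below are illustrative assumptions and not the authors' released code.

        # Minimal PyTorch sketch of the cross-convolution step: each feature-map
        # channel of each sample is convolved with its own motion-derived kernel.
        import torch
        import torch.nn.functional as F

        def cross_convolve(feature_maps, kernels):
            """feature_maps: (B, C, H, W) image-encoder output
            kernels:        (B, C, k, k) motion-encoder output, typically derived
                            from a sampled latent code z
            returns:        (B, C, H, W) motion-modulated feature maps
            """
            B, C, H, W = feature_maps.shape
            k = kernels.shape[-1]
            # Fold the batch into the channel dimension so one grouped convolution
            # applies a different kernel to every (sample, channel) pair.
            x = feature_maps.reshape(1, B * C, H, W)
            w = kernels.reshape(B * C, 1, k, k)
            out = F.conv2d(x, w, padding=k // 2, groups=B * C)
            return out.reshape(B, C, H, W)

        # Example: 2 images with 16 feature channels and 5x5 motion kernels.
        feats = torch.randn(2, 16, 32, 32)
        kerns = torch.randn(2, 16, 5, 5)
        print(cross_convolve(feats, kerns).shape)  # torch.Size([2, 16, 32, 32])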

    Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks

    We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network, to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations. Comment: The first two authors contributed equally to this work.
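
    The probabilistic aspect emphasized in this version, sampling many plausible futures from a single image, amounts to drawing different latent motion codes and decoding each one. The short sketch below illustrates that loop; encode and decode are hypothetical stand-ins for the paper's image encoder and cross-convolutional decoder, and the latent dimensionality is an assumption.

        # Minimal sketch: several candidate future frames from one input image,
        # obtained by varying the latent motion code.
        import torch

        def sample_futures(image, encode, decode, n_samples=5, latent_dim=128):
            feats = encode(image)                               # image content, computed once
            return [decode(feats, torch.randn(1, latent_dim))   # one future per latent sample
                    for _ in range(n_samples)]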

    IUPUC Spatial Innovation Lab

    During the summer of 2016, the IUPUC ME Division envisioned the concept of an “Imagineering Lab” based largely on academic makerspace concepts. Important sub-sections of the Imagineering Lab are its “Actualization Lab” (mechatronics, actuators, sensors, DAQ devices, etc.) and a “Spatial Innovation Lab” (SIL) based on developing “dream stations” (computer workstations) equipped with exciting new technology for intuitive 2D and 3D image creation and Virtual Reality (VR). The objective of the SIL is to create a workflow converting intuitively created imagery into animation, engineering simulation and analysis, and computer-driven manufacturing interfaces. This paper discusses the challenges and methods being used to create a sustainable Spatial Innovation Lab.

    Learning to Dress 3D People in Generative Clothing

    Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de. Comment: CVPR 2020 camera-ready. Code and data are available at https://cape.is.tue.mpg.de
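
    The abstract's central modelling idea, clothing as an additive per-vertex term on the SMPL body produced by a decoder conditioned on pose and clothing type, can be sketched as follows. The dimensions, the plain MLP decoder and the conditioning-by-concatenation are illustrative assumptions; the released CAPE model operates on the mesh with graph convolutions and a patchwise discriminator, which this sketch omits.

        # Minimal sketch of the additive-clothing idea: a decoder conditioned on
        # pose and clothing type maps a latent code to per-vertex displacements
        # that are added to the unclothed SMPL body vertices.
        import torch
        import torch.nn as nn

        N_VERTS = 6890                      # number of SMPL mesh vertices
        LATENT, POSE, CLOTH = 64, 72, 4     # assumed sizes: latent code, axis-angle pose, clothing types

        class ClothingDecoder(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(LATENT + POSE + CLOTH, 512),
                    nn.ReLU(),
                    nn.Linear(512, N_VERTS * 3),
                )

            def forward(self, z, pose, clothing_onehot):
                cond = torch.cat([z, pose, clothing_onehot], dim=-1)
                # Per-vertex 3D displacements: the "clothing term" added to SMPL.
                return self.net(cond).view(-1, N_VERTS, 3)

        decoder = ClothingDecoder()
        body_verts = torch.zeros(1, N_VERTS, 3)      # stand-in for posed SMPL vertices
        z = torch.randn(1, LATENT)                   # sample a clothing style
        pose = torch.zeros(1, POSE)
        clothing = torch.eye(CLOTH)[0].unsqueeze(0)  # one-hot clothing type
        clothed_verts = body_verts + decoder(z, pose, clothing)
        print(clothed_verts.shape)                   # torch.Size([1, 6890, 3])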

    Detail-Preserving Controllable Deformation from Sparse Examples


    Analysis of Flood Patterns in Adams County, Pennsylvania Utilizing Drone Technology and Computer Simulations

    Drone imagery and photogrammetry models of the Gettysburg College campus and the terrain at Boyer Nurseries and Orchards were utilized to study flood patterns in Adams County, Pennsylvania. Gettysburg College has gently sloped land and moderately built infrastructure, while Boyer Orchards has steeply sloped land with many patches of abundant vegetation. The two locations were selected because their surface features differ starkly while the bedrock geology of the areas is very similar. The terrain of the models was isolated before a 3D carver and 3D printer were used to construct physical models to further analyze potential water flow and speed through virtual, modeled flood simulations. The models were used to compare real-world rainfall data and flood events in the investigated areas from June to August 2018. I hypothesized that the Gettysburg College campus would experience more severe flooding that would take longer to subside than at Boyer Orchards, owing to the steeper slope of the orchards’ terrain. The research revealed that Boyer Orchards experienced more extreme flooding and rainfall than Gettysburg College but was able to neutralize the effects through plentiful vegetation and physiographic differences. The modeled flood simulations used less rainfall than was actually recorded: the actual and simulated rainfall amounts differed by 0.78 cm for Gettysburg and 1.32 cm for the Boyer Orchards area.