Improving Big Data Visual Analytics with Interactive Virtual Reality
For decades, the growing volume of digital data collection has made it
challenging to digest large volumes of information and extract underlying
structure. Coined 'Big Data', these massive amounts of information have often
been gathered inconsistently (e.g., from many sources, in various forms, at
different rates). These factors impede not only processing the data, but also
analyzing and displaying it to the user in an efficient manner. Many efforts
have been made in the data mining and visual analytics communities to create
effective ways to further improve analysis and achieve the knowledge desired
for better understanding. Our approach for
improved big data visual analytics is two-fold, focusing on both visualization
and interaction. Given geo-tagged information, we are exploring the benefits of
visualizing datasets in the original geospatial domain by utilizing a virtual
reality platform. After running proven analytics on the data, we intend to
represent the information in a more realistic 3D setting, where analysts can
achieve an enhanced situational awareness and rely on familiar perceptions to
draw in-depth conclusions on the dataset. In addition, developing a
human-computer interface that responds to natural user actions and inputs
creates a more intuitive environment. Tasks can be performed to manipulate the
dataset and let users drill deeper on request, responding to their demands and
intentions. Due to the volume and popularity of social media, we
developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing
emerging technologies of today to create a fully immersive tool that promotes
visualization and interaction can help ease the process of understanding and
representing big data.
Comment: 6 pages, 8 figures, 2015 IEEE High Performance Extreme Computing Conference (HPEC '15); corrected typo
Unwind: Interactive Fish Straightening
The ScanAllFish project is a large-scale effort to scan all the world's
33,100 known species of fishes. It has already generated thousands of
volumetric CT scans of fish species which are available on open access
platforms such as the Open Science Framework. To achieve a scanning rate
required for a project of this magnitude, many specimens are grouped together
into a single tube and scanned all at once. The resulting data contain many
fish which are often bent and twisted to fit into the scanner. Our system,
Unwind, is a novel interactive visualization and processing tool which
extracts, unbends, and untwists volumetric images of fish with minimal user
interaction. Our approach enables scientists to interactively unwarp these
volumes to remove the undesired torque and bending using a piecewise-linear
skeleton extracted by averaging isosurfaces of a harmonic function connecting
the head and tail of each fish. The result is a volumetric dataset of an
individual, straightened fish in a canonical pose defined by the expert marine
biologist user. We have developed Unwind in collaboration with a team of marine
biologists: our system has been deployed in their labs and is presently being
used for dataset construction, biomechanical analysis, and the generation of
figures for scientific publication.
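The skeleton-extraction step described above (averaging isosurfaces of a harmonic function connecting the head and tail) can be sketched in 2-D. This is a toy illustration with assumed inputs, a binary foreground mask and hand-picked head and tail pixels, and a simple Jacobi solver; it is not the paper's actual implementation:

```python
import numpy as np

def harmonic_skeleton(mask, head, tail, n_iters=2000, n_levels=8):
    """Piecewise-linear skeleton sketch: solve Laplace's equation on the
    foreground mask with u = 0 at the head pixel and u = 1 at the tail
    pixel, then average (take the centroid of) each isosurface band of u."""
    u = np.where(mask, 0.5, np.nan)
    for _ in range(n_iters):
        # Jacobi relaxation: replace each foreground value by the mean of
        # its foreground neighbours (background neighbours are NaN).
        p = np.pad(u, 1, constant_values=np.nan)
        nbrs = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                         p[1:-1, :-2], p[1:-1, 2:]])
        cnt = np.sum(~np.isnan(nbrs), axis=0)
        u = np.where(mask, np.nansum(nbrs, axis=0) / np.maximum(cnt, 1), np.nan)
        u[head] = 0.0  # Dirichlet conditions pin the two ends of the fish
        u[tail] = 1.0
    # One skeleton joint per isosurface band: the band's pixel centroid.
    uf = np.nan_to_num(u, nan=-1.0)          # background falls outside [0, 1]
    levels = np.linspace(0.0, 1.0, n_levels + 1)
    joints = []
    for lo, hi in zip(levels[:-1], levels[1:]):
        pts = np.argwhere((uf >= lo) & (uf <= hi))
        if len(pts):
            joints.append(pts.mean(axis=0))
    return np.array(joints)
```

On a straight strip the joints march uniformly from head to tail; on a bent fish the same construction yields the piecewise-linear curve along which the volume can then be unwarped.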
Early evaluation of Unistats: user experiences
This paper sets out the findings of the user evaluation of Unistats.
Immersive Analytics of Large Dynamic Networks via Overview and Detail Navigation
Analysis of large dynamic networks is a thriving research field, typically
relying on 2D graph representations. The advent of affordable head-mounted
displays, however, has sparked new interest in the potential of 3D visualization for
immersive network analytics. Nevertheless, most solutions do not scale well
with the number of nodes and edges and rely on conventional fly- or
walk-through navigation. In this paper, we present a novel approach for the
exploration of large dynamic graphs in virtual reality that interweaves two
navigation metaphors: overview exploration and immersive detail analysis. We
thereby use the potential of state-of-the-art VR headsets, coupled with a
web-based 3D rendering engine that supports heterogeneous input modalities to
enable ad-hoc immersive network analytics. We validate our approach through a
performance evaluation and a case study with experts analyzing a co-morbidity
network.
Seeing the invisible: from imagined to virtual urban landscapes
Urban ecosystems consist of infrastructure features working together to provide services for inhabitants. Infrastructure functions akin to an ecosystem, having dynamic relationships and interdependencies. However, with age, urban infrastructure can deteriorate and stop functioning. Additional pressures on infrastructure include urbanizing populations and a changing climate that exposes vulnerabilities. To manage the urban infrastructure ecosystem in a modernizing world, urban planners need to integrate a coordinated management plan for these co-located and dependent infrastructure features. To implement such a management practice, an improved method for communicating how these infrastructure features interact is needed. This study aims to define urban infrastructure as a system, identify the systemic barriers preventing implementation of a more coordinated management model, and develop a virtual reality tool to visualize the spatial system dynamics of urban infrastructure. Data were collected from a stakeholder workshop that highlighted a lack of appreciation for the system dynamics of urban infrastructure. An urban ecology VR model was created to highlight the interconnectedness of infrastructure features. VR proved useful for communicating spatial information to urban stakeholders about the complexities of infrastructure ecology and the interactions between infrastructure features.
https://doi.org/10.1016/j.cities.2019.102559
Published version
Space-Time Diffusion Visualization using Bayesian Inference
Retail marketing geography has traditionally employed static gravity models for location analytics based on probabilistic locational consumer demand. However, such retail trade area models provide little insight into the dynamic space-time hierarchical diffusionary processes that aggregate to an eventual market structure equilibrium (Mason et al., 1994), which gravity models attempt to predict for retail trade areas. In addition, most attempts to display these aggregating dynamic space-time hierarchical diffusionary processes of space, time and attributes of interest in a geographical information system (GIS) produce visualizations that are overly complex and typically displayed using unfamiliar paradigms. Further, these attempts fail to take into account the extensive body of literature in psychology and brain science that stresses the importance of perceptual elements and design in achieving optimum visualization comprehension; in other words, simplicity (three-way factor analysis) and visual familiarity (cognitive fit theory (Vessey, 2006); the mere-exposure effect in psychology (Zajonc, 1968)), which provide faster perception and better visuospatial and temporal understanding of objects and trends. In this study we incorporate these elements in our visualization object, which we refer to as the "Avatar". A Huff-inspired Bayesian framework of inference for spatial allocation and hypothesis testing allows the Avatar object to display the spatial allocation of the Bass model's innovators and imitators for sales forecasts of new product diffusion (i.e. a mathematical version of Everett Rogers's adoption concept), thus enabling and supporting faster and improved visuospatial understanding of very large data repositories of unbounded and/or "countably infinite" sized geo-big-data (referred to throughout the rest of this paper as GBD). We then introduce the three steps necessary to create an Avatar object (i.e. a 3-D semaphoric, space-time diffusion visualization object).
The Avatar object is designed specifically to visualize determinant attributes (e.g. demographics) for the Bass, Bayes, Berry and Huff integrated ensemble model forming part of an ancillary paper to this study. In this way we display the timed hierarchical diffusion of new innovative products throughout store trade areas and across the ensuing and evolving store networks. In addition, by calculating Bayesian conjugate priors and posterior spatial allocation probabilities for the "smallest units of human settlement" (Christaller, 1966), or in our case statistical demographic units (i.e. Census Blocks), we establish customer (innovator and imitator) spatial distributions for the Bass temporal-only model in the case of the aggregating store-level trade area (SLTA) scenario. Our approach is empirically supported by five years of new product diffusion geocoded panel data from the Southern California market. We conclude that our cognitive-fit-theory-validated Avatar space-time diffusion visualization strengthens "location analytics" and "location intelligence" and provides a simple and familiar tool for displaying GBD across a growing domain of varying applications and end-user knowledge and needs.
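The abstract combines a Bass temporal diffusion model with a Huff-style spatial allocation of adopters to demographic units. A minimal sketch of those two ingredients might look as follows; the coefficients, attractiveness scores, and distances are illustrative assumptions, not values from the paper:

```python
import numpy as np

def bass_cumulative(t, p=0.03, q=0.38, m=1.0):
    """Bass diffusion: cumulative adopters by time t, with innovation
    coefficient p, imitation coefficient q, and market potential m."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def huff_allocation(attractiveness, distance, alpha=1.0, beta=2.0):
    """Huff-style spatial allocation: the probability that demand in a
    census block is captured, proportional to S^alpha / d^beta, normalised
    so the shares across blocks sum to one."""
    w = attractiveness ** alpha / distance ** beta
    return w / w.sum()

# Allocate each period's new Bass adopters across hypothetical blocks.
t = np.arange(0, 11)                              # years 0..10
new_adopters = np.diff(bass_cumulative(t, m=10_000.0))
shares = huff_allocation(np.array([5.0, 3.0, 2.0]),   # store attractiveness
                         np.array([1.0, 2.0, 4.0]))   # block-to-store distance
by_block = np.outer(new_adopters, shares)         # periods x census blocks
```

The `by_block` matrix is the kind of space-time surface an Avatar object would render: rows trace the temporal diffusion curve, columns its spatial allocation. A Bayesian treatment would replace the fixed `shares` with posterior allocation probabilities updated from the geocoded panel data.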
The Digital Architectures of Social Media: Comparing Political Campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. Election
The present study argues that political communication on social media is
mediated by a platform's digital architecture, defined as the technical
protocols that enable, constrain, and shape user behavior in a virtual space. A
framework for understanding digital architectures is introduced, and four
platforms (Facebook, Twitter, Instagram, and Snapchat) are compared along the
typology. Using the 2016 US election as a case, interviews with three
Republican digital strategists are combined with social media data to qualify
the study's theoretical claim that a platform's network structure,
functionality, algorithmic filtering, and datafication model affect political
campaign strategy on social media.