Fine-grained visualization pipelines and lazy functional languages
The pipeline model in visualization has evolved from a conceptual model of data processing into a widely used architecture for implementing visualization systems. In the process, a number of capabilities have been introduced, including streaming of data in chunks, distributed pipelines, and demand-driven processing. Visualization systems have invariably been built on stateful programming technologies, and these capabilities have had to be implemented explicitly within the lower layers of a complex hierarchy of services. The good news for developers is that applications built on top of this hierarchy can access these capabilities without concern for how they are implemented. The bad news is that by freezing capabilities into low-level services, expressive power and flexibility are lost. In this paper we express visualization systems in a programming language that more naturally supports this kind of processing model. Lazy functional languages support fine-grained demand-driven processing, a natural form of streaming, and pipeline-like function composition for assembling applications. The technology thus appears well suited to visualization applications. Using surface extraction algorithms as illustrative examples, and the lazy functional language Haskell, we argue the benefits of clear and concise expression combined with fine-grained, demand-driven computation. Just as visualization provides insight into data, functional abstraction provides new insight into visualization.
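The demand-driven, compositional style this abstract attributes to lazy functional languages can be illustrated outside Haskell as well. As a hypothetical sketch (not the paper's code), Python generators give a similar lazy pipeline: stages compose like functions, and each stage produces values only when a consumer demands them.

```python
from itertools import islice

def samples():
    """Infinite data source; values are produced only on demand."""
    x = 0
    while True:
        yield x * x
        x += 1

def threshold_filter(stream, level):
    """Pipeline stage: pass through values exceeding a threshold."""
    for v in stream:
        if v > level:
            yield v

# Compose stages like function composition; nothing is computed yet.
pipeline = threshold_filter(samples(), 10)

# Demand exactly three results; only the needed inputs are evaluated.
first_three = list(islice(pipeline, 3))
print(first_three)  # [16, 25, 36]
```

The key property mirrors lazy evaluation: the infinite source never runs ahead of the consumer, so streaming falls out of the composition rather than being built into a service layer.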
Multi-level Visualization of Concurrent and Distributed Computation in Erlang
This paper describes a prototype visualization system for concurrent and distributed applications programmed using Erlang, providing two levels of granularity of view. Both visualizations are animated to show the dynamics of aspects of the computation.

At the low level, we show the concurrent behaviour of the Erlang schedulers on a single instance of the Erlang virtual machine, which we call an Erlang node. Typically there will be one scheduler per core on a multicore system. Each scheduler maintains a run queue of processes to execute, and we visualize the migration of Erlang concurrent processes from one run queue to another as work is redistributed to fully exploit the hardware. The schedulers are shown as a graph with a circular layout. Next to each scheduler we draw a variable-length bar indicating the current size of the run queue for that scheduler.

At the high level, we visualize the distributed aspects of the system, showing interactions between Erlang nodes as a dynamic graph drawn with a force model. Specifically, we show message passing between nodes as edges and lay out nodes according to their current connections. In addition, we show the grouping of nodes into "s_groups" using an Euler diagram drawn with circles.
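The geometry of the low-level view is straightforward to sketch. The following is a hypothetical illustration (not the authors' code) of placing schedulers evenly on a circle and scaling a bar for each run-queue size; the queue sizes are made up.

```python
import math

def circular_layout(n, radius=1.0):
    """Place n scheduler nodes evenly on a circle of the given radius."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def bar_lengths(run_queues, max_len=0.5):
    """Scale each run-queue size to a bar length; the longest queue maps to max_len."""
    peak = max(run_queues) or 1
    return [max_len * q / peak for q in run_queues]

queues = [3, 0, 7, 2]                 # hypothetical run-queue sizes, one per scheduler
positions = circular_layout(len(queues))
bars = bar_lengths(queues)
```

Animating the view then amounts to re-sampling the queue sizes each frame and redrawing the bars at fixed node positions.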
A Generic Communications Module for Cooperative 3D Visualization and Modelling over the Internet: the Collaborative API
Cooperative three-dimensional visualization and modeling applications allow a distributed group of users to work together on a model they share. To implement this kind of application, the underlying communications system must provide reliable and ordered multicast of user interactions. Due to the high complexity that characterizes the models, network bandwidth requirements have limited their use to intranets or, in a few cases, to very high-speed Internet connections.
In this paper we present a communications module that addresses this problem. The library presented, called the Collaborative API, supports the creation of very efficient cooperative 3D visualization and modeling applications by optimizing the use of network resources.
The Collaborative API implements a new communications architecture: the dynamic client/server. The communications module presented in this paper is illustrated by two examples of applications that use it to provide cooperative 3D visualization over the Internet.
A Distributed Multilevel Force-directed Algorithm
The wide availability of powerful and inexpensive cloud computing services naturally motivates the study of distributed graph layout algorithms able to scale to very large graphs. Nowadays, to process Big Data, companies are increasingly relying on PaaS infrastructures rather than buying and maintaining complex and expensive hardware. So far, only a few examples of basic force-directed algorithms that work in a distributed environment have been described. In contrast, the design of a distributed multilevel force-directed algorithm is a much more challenging task, not yet addressed. We present the first multilevel force-directed algorithm based on a distributed vertex-centric paradigm, and its implementation on Giraph, a popular platform for distributed graph algorithms. Experiments show the effectiveness and the scalability of the approach. Using an inexpensive cloud computing service of Amazon, we draw graphs with ten million edges in about 60 minutes.

Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016)
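As a minimal single-machine sketch of the forces underlying such algorithms (Fruchterman-Reingold-style repulsion and attraction; not the paper's distributed multilevel Giraph implementation), one iteration can be written vertex-centrically: each vertex accumulates a displacement from all repulsive and edge-attractive forces, then moves by a damped step.

```python
import math

def force_step(pos, edges, k=1.0, step=0.1):
    """One iteration of basic force-directed layout:
    pairwise repulsion plus attraction along edges."""
    disp = {v: [0.0, 0.0] for v in pos}
    # Repulsive force between every pair of vertices.
    for v in pos:
        for u in pos:
            if u == v:
                continue
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d                      # repulsion magnitude
            disp[v][0] += dx / d * f
            disp[v][1] += dy / d * f
    # Attractive force along each edge.
    for u, v in edges:
        dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k                          # attraction magnitude
        disp[v][0] -= dx / d * f
        disp[v][1] -= dy / d * f
        disp[u][0] += dx / d * f
        disp[u][1] += dy / d * f
    return {v: (pos[v][0] + step * disp[v][0],
                pos[v][1] + step * disp[v][1]) for v in pos}

layout = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0)}
layout = force_step(layout, [(0, 1), (1, 2)])
```

A vertex-centric platform like Giraph distributes exactly this per-vertex accumulation, exchanging positions as messages between supersteps; the multilevel variant additionally coarsens the graph first and refines the layout level by level.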
Visualization, Exploration and Data Analysis of Complex Astrophysical Data
In this paper we show how advanced visualization tools can help the researcher in investigating and extracting information from data. The focus is on VisIVO, a novel open source graphics application, which blends high performance multidimensional visualization techniques and up-to-date technologies to cooperate with other applications and to access remote, distributed data archives. VisIVO supports the standards defined by the International Virtual Observatory Alliance in order to make it interoperable with VO data repositories. The paper describes the basic technical details and features of the software and dedicates a large section to showing how VisIVO can be used in several scientific cases.

Comment: 32 pages, 15 figures, accepted by PAS
Approximated and User Steerable tSNE for Progressive Visual Analytics
Progressive Visual Analytics aims at improving the interactivity in existing
analytics techniques by means of visualization as well as interaction with
intermediate results. One key method for data analysis is dimensionality
reduction, for example, to produce 2D embeddings that can be visualized and
analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a
well-suited technique for the visualization of several high-dimensional data.
tSNE can create meaningful intermediate results but suffers from a slow
initialization that constrains its application in Progressive Visual Analytics.
We introduce a controllable tSNE approximation (A-tSNE), which trades off speed
and accuracy, to enable interactive data exploration. We offer real-time
visualization techniques, including a density-based solution and a Magic Lens
to inspect the degree of approximation. With this feedback, the user can decide
on local refinements and steer the approximation level during the analysis. We
demonstrate our technique with several datasets, in a real-world research
scenario and for the real-time analysis of high-dimensional streams to
illustrate its effectiveness for interactive data analysis
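The speed/accuracy trade-off behind an approach like A-tSNE rests on approximated neighborhoods. A toy sketch of that idea follows (a hypothetical illustration, not the paper's algorithm): approximate a point's nearest neighbors from a sampled candidate pool, where the sample size steers accuracy versus speed, and refine a point's neighborhood locally on user demand.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def approx_neighbors(points, i, k, sample_size):
    """Approximate kNN: search only a random sample of candidates.
    A larger sample_size raises accuracy at the cost of speed."""
    candidates = random.sample([j for j in range(len(points)) if j != i],
                               min(sample_size, len(points) - 1))
    return sorted(candidates, key=lambda j: dist(points[i], points[j]))[:k]

def refine(points, i, k, current, extra):
    """User-steered local refinement: extend point i's candidate pool."""
    pool = set(current) | set(extra)
    return sorted(pool, key=lambda j: dist(points[i], points[j]))[:k]

points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (5.0, 0.0)]
nn = approx_neighbors(points, 0, k=2, sample_size=3)   # sample covers all: exact
refined = refine(points, 0, k=2, current=nn, extra=[3])
```

In the progressive setting, cheap approximate neighborhoods let the embedding appear quickly, and the refinement step models the user steering accuracy toward regions of interest.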
Unleashing the Power of Distributed CPU/GPU Architectures: Massive Astronomical Data Analysis and Visualization case study
Upcoming and future astronomy research facilities will systematically
generate terabyte-sized data sets moving astronomy into the Petascale data era.
While such facilities will provide astronomers with unprecedented levels of
accuracy and coverage, the increases in dataset size and dimensionality will
pose serious computational challenges for many current astronomy data analysis
and visualization tools. With such data sizes, even simple data analysis tasks
(e.g. calculating a histogram or computing data minimum/maximum) may not be
achievable without access to a supercomputing facility.
To effectively handle such dataset sizes, which exceed today's single machine
memory and processing limits, we present a framework that exploits the
distributed power of GPUs and many-core CPUs, with a goal of providing data
analysis and visualizing tasks as a service for astronomers. By mixing shared
and distributed memory architectures, our framework effectively utilizes the
underlying hardware infrastructure handling both batched and real-time data
analysis and visualization tasks. Offering such functionality as a service in a
"software as a service" manner will reduce the total cost of ownership, provide
an easy to use tool to the wider astronomical community, and enable a more
optimized utilization of the underlying hardware infrastructure.Comment: 4 Pages, 1 figures, To appear in the proceedings of ADASS XXI, ed.
P.Ballester and D.Egret, ASP Conf. Serie
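The histogram and min/max tasks mentioned above distribute naturally as a map-then-merge computation: each worker summarizes its chunk, and the partial results combine exactly. A minimal single-machine sketch of that pattern (not the paper's GPU/CPU framework; chunk data is made up):

```python
def chunk_stats(chunk, bins, lo, hi):
    """Per-worker pass: histogram plus min/max for one data chunk."""
    hist = [0] * bins
    width = (hi - lo) / bins
    for x in chunk:
        b = min(int((x - lo) / width), bins - 1)   # clamp x == hi into last bin
        hist[b] += 1
    return hist, min(chunk), max(chunk)

def merge(results):
    """Combine the partial results from all workers."""
    bins = len(results[0][0])
    hist = [sum(r[0][b] for r in results) for b in range(bins)]
    return hist, min(r[1] for r in results), max(r[2] for r in results)

chunks = [[0.1, 0.4, 0.9], [0.2, 0.6], [0.8, 0.5]]   # one list per worker
partials = [chunk_stats(c, bins=4, lo=0.0, hi=1.0) for c in chunks]
hist, dmin, dmax = merge(partials)
print(hist, dmin, dmax)  # [2, 1, 2, 2] 0.1 0.9
```

Because the merge is associative, chunks can be processed on any mix of GPUs and CPU cores, in batches or as a stream, and combined in any order.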