Web-Based Visualization of Very Large Scientific Astronomy Imagery
Visualizing and navigating through large astronomy images from a remote
location with current astronomy display tools can be a frustrating experience
in terms of speed and ergonomics, especially on mobile devices. In this paper,
we present a high performance, versatile and robust client-server system for
remote visualization and analysis of extremely large scientific images.
Applications of this work include survey image quality control, interactive
data query and exploration, citizen science, as well as public outreach. The
proposed software is entirely open source and is designed to be generic and
applicable to a variety of datasets. It provides access to floating point data
at terabyte scales, with the ability to precisely adjust image settings in
real-time. The proposed clients are light-weight, platform-independent web
applications built on standard HTML5 web technologies and compatible with both
touch and mouse-based devices. We assess the performance of the system and
show that a single server can comfortably handle more than a hundred
simultaneous users accessing full-precision 32-bit astronomy data.
Comment: Published in Astronomy & Computing. IIPImage server available from
http://iipimage.sourceforge.net . Visiomatic code and demos available from
http://www.visiomatic.org
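As a sketch of the kind of real-time image adjustment such a client-server system exposes, the snippet below maps a tile of 32-bit floating-point pixels to 8-bit display values with adjustable black point, white point, and gamma. The function name and the particular gamma curve are illustrative assumptions, not taken from the IIPImage or VisiOmatic code.

```python
import numpy as np

def scale_for_display(tile, vmin, vmax, gamma=1.0):
    """Map a 32-bit float tile to 8-bit display values.

    Pixels are clipped to [vmin, vmax], normalised, and passed through a
    gamma curve -- the kind of intensity adjustment a client would expose
    as real-time sliders over full-precision data.
    """
    t = np.clip(tile.astype(np.float32), vmin, vmax)
    t = (t - vmin) / (vmax - vmin)
    return (255.0 * t ** gamma).astype(np.uint8)

tile = np.array([[0.0, 0.5], [1.0, 2.0]], dtype=np.float32)
out = scale_for_display(tile, vmin=0.0, vmax=1.0)
```

Because the scaling runs per tile, only the tiles currently in view need to be recomputed when the user moves a slider.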
Clustering Trajectories by Relevant Parts for Air Traffic Analysis
Clustering of trajectories of moving objects by similarity is an important technique in movement analysis. Existing distance functions assess the similarity between trajectories based on properties of the trajectory points or segments. The properties may include the spatial positions, times, and thematic attributes. There may be a need to focus the analysis on certain parts of trajectories, i.e., points and segments that have particular properties. According to the analysis focus, the analyst may need to cluster trajectories by similarity of their relevant parts only. Throughout the analysis process, the focus may change, and different parts of trajectories may become relevant. We propose an analytical workflow in which interactive filtering tools are used to attach relevance flags to elements of trajectories, clustering is done using a distance function that ignores irrelevant elements, and the resulting clusters are summarized for further analysis. We demonstrate how this workflow can be useful for different analysis tasks in three case studies with real data from the domain of air traffic. We propose a suite of generic techniques and visualization guidelines to support movement data analysis by means of relevance-aware trajectory clustering.
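A minimal sketch of a distance function that ignores irrelevant elements, in the spirit of the proposed workflow: trajectories are equal-length lists of (x, y, relevant) tuples, and only positions flagged relevant in both trajectories contribute. This is a deliberate simplification; the paper's distance functions and trajectory model are richer.

```python
import math

def relevance_aware_distance(a, b):
    """Mean point-wise distance over positions flagged relevant in BOTH
    trajectories; elements marked irrelevant by the interactive filter
    are simply ignored."""
    pairs = [(p, q) for p, q in zip(a, b) if p[2] and q[2]]
    if not pairs:
        return float("inf")  # no relevant overlap: maximally dissimilar
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in pairs) / len(pairs)

a = [(0, 0, True), (5, 5, False), (1, 0, True)]
b = [(0, 1, True), (9, 9, True), (1, 1, True)]
d = relevance_aware_distance(a, b)  # uses only the first and third points
```

When the analysis focus changes, only the relevance flags are recomputed; the same distance function then yields a different clustering.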
High-density diffuse optical tomography for imaging human brain function
This review describes the unique opportunities and challenges for noninvasive optical mapping of human brain function. Diffuse optical methods offer safe, portable, and radiation-free alternatives to traditional technologies like positron emission tomography or functional magnetic resonance imaging (fMRI). Recent developments in high-density diffuse optical tomography (HD-DOT) have demonstrated capabilities for mapping human cortical brain function over an extended field of view with image quality approaching that of fMRI. In this review, we cover fundamental principles of the diffusion of near-infrared light in biological tissue. We discuss the challenges involved in HD-DOT system design and implementation that must be overcome to acquire the signal-to-noise ratio necessary to measure and locate brain function at the depth of the cortex. We discuss strategies for validation of the sensitivity, specificity, and reliability of HD-DOT-acquired maps of cortical brain function. We then provide a brief overview of some clinical applications of HD-DOT. Though diffuse optical measurements of neurophysiology have existed for several decades, tremendous opportunity remains to advance optical imaging of brain function to address a crucial niche in basic and clinical neuroscience: that of bedside and minimally constrained high-fidelity imaging of brain function.
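The standard starting point for the fundamental principles the review covers is the modified Beer-Lambert law, which relates measured optical-density changes at multiple wavelengths to changes in oxy- and deoxyhemoglobin concentration. The sketch below inverts a two-wavelength system; the extinction coefficients, distance, and path-length factor are illustrative placeholders, not tabulated values.

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD = (E * d * DPF) @ delta_c, where E
# holds extinction coefficients of HbO2 and HbR at each wavelength.
# The numbers below are illustrative placeholders only.
E = np.array([[1.0, 3.0],    # shorter wavelength: [HbO2, HbR]
              [2.5, 0.8]])   # longer wavelength
d, dpf = 3.0, 6.0            # source-detector distance (cm), path-length factor

def concentration_changes(delta_od):
    """Recover [dHbO2, dHbR] from optical-density changes at two
    wavelengths by inverting the 2x2 Beer-Lambert system."""
    return np.linalg.solve(E * d * dpf, delta_od)
```

HD-DOT extends this single-channel picture to dense, overlapping source-detector grids, which is what allows tomographic image reconstruction rather than sparse topographic mapping.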
Indexed dependence metadata and its applications in software performance optimisation
To achieve continued performance improvements, modern microprocessor design is tending to concentrate
an increasing proportion of hardware on computation units with less automatic management
of data movement and extraction of parallelism. As a result, architectures increasingly include multiple
computation cores and complicated, software-managed memory hierarchies. Compilers have
difficulty characterizing the behaviour of a kernel in a general enough manner to enable automatic
generation of efficient code in any but the most straightforward of cases.
We propose the concept of indexed dependence metadata to improve application development and
mapping onto such architectures. The metadata represent both the iteration space of a kernel and the
mapping of that iteration space from a given index to the set of data elements that iteration might
use: thus the dependence metadata is indexed by the kernel’s iteration space. This explicit mapping
allows the compiler or runtime to optimise the program more efficiently, and improves the program
structure for the developer. We argue that this form of explicit interface specification reduces the need
for premature, architecture-specific optimisation. It improves program portability, supports intercomponent
optimisation and enables generation of efficient data movement code.
We offer the following contributions: an introduction to the concept of indexed dependence metadata
as a generalisation of stream programming, a demonstration of its advantages in a component
programming system, the decoupled access/execute model for C++ programs, and how indexed dependence
metadata might be used to improve the programming model for GPU-based designs. Our
experimental results with prototype implementations show that indexed dependence metadata supports
automatic synthesis of double-buffered data movement for the Cell processor and enables aggressive
loop fusion optimisations in image processing, linear algebra and multigrid
application case studies.
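The core idea can be sketched as a toy decoupled access/execute pattern: the dependence metadata is a function from an iteration index to the data elements that iteration reads, and a runtime can use it to double-buffer data movement. All names here are illustrative; the thesis targets C++ and the Cell processor, not Python.

```python
def access(i):
    """Indexed dependence metadata: the array indices iteration i reads --
    here a 3-point stencil, indexed by the kernel's iteration space."""
    return (i - 1, i, i + 1)

def execute(window):
    """Compute on a pre-fetched window; the kernel never indexes the
    global array itself (decoupled access/execute)."""
    return sum(window) / len(window)

def run(data):
    """A runtime that exploits the metadata to double-buffer: fetch the
    elements of iteration i+1 while computing iteration i."""
    n = len(data)
    fetch = lambda i: [data[j] for j in access(i) if 0 <= j < n]
    buf = fetch(1)
    out = []
    for i in range(1, n - 1):
        nxt = fetch(i + 1)        # prefetch next window into second buffer
        out.append(execute(buf))  # compute on current buffer
        buf = nxt                 # swap buffers
    return out

result = run([1.0, 2.0, 3.0, 4.0, 5.0])  # 3-point moving average
```

Because `access` is explicit and separate from `execute`, a compiler or runtime can synthesise the data movement (here, the `fetch`/swap loop) without analysing the kernel body at all.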
Representation of statistical sound properties in human auditory cortex
The work carried out in this doctoral thesis investigated the representation of
statistical sound properties in human auditory cortex. It addressed four key aspects in
auditory neuroscience: the representation of different analysis time windows in
auditory cortex; mechanisms for the analysis and segregation of auditory objects;
information-theoretic constraints on pitch sequence processing; and the analysis of
local and global pitch patterns. The majority of the studies employed a parametric
design in which the statistical properties of a single acoustic parameter were altered
along a continuum, while keeping other sound properties fixed.
The thesis is divided into four parts. Part I (Chapter 1) examines principles of
anatomical and functional organisation that constrain the problems addressed. Part II
(Chapter 2) introduces approaches to digital stimulus design, principles of functional
magnetic resonance imaging (fMRI), and the analysis of fMRI data. Part III (Chapters
3-6) reports five experimental studies. Study 1 controlled the spectrotemporal
correlation in complex acoustic spectra and showed that activity in auditory
association cortex increases as a function of spectrotemporal correlation. Study 2
demonstrated a functional hierarchy of the representation of auditory object
boundaries and object salience. Studies 3 and 4 investigated cortical mechanisms for
encoding entropy in pitch sequences and showed that the planum temporale acts as a
computational hub, requiring more computational resources for sequences with high
entropy than for those with high redundancy. Study 5 provided evidence for a
hierarchical organisation of local and global pitch pattern processing in neurologically
normal participants. Finally, Part IV (Chapter 7) concludes with a general
discussion of the results and future perspectives.
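The entropy manipulation in Studies 3 and 4 can be illustrated with the Shannon entropy of a pitch sequence's first-order distribution: high for unpredictable sequences, low for redundant ones. This is a minimal sketch of the statistic being varied, not the thesis's stimulus-generation procedure.

```python
import math
from collections import Counter

def pitch_entropy(pitches):
    """Shannon entropy (bits) of the distribution of pitches in a
    sequence: 0 for a fully redundant sequence, log2(n) when all
    n pitches are distinct and equiprobable."""
    counts = Counter(pitches)
    n = len(pitches)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

pitch_entropy(["A", "A", "A", "A"])  # 0.0: fully redundant
pitch_entropy(["A", "B", "C", "D"])  # 2.0: maximally entropic
```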
A Survey of Information Visualization Books
Information visualization is a rapidly evolving field with a growing volume of scientific literature and texts continually published. To keep abreast of the latest developments in the domain, survey papers and state-of-the-art reviews provide valuable tools for managing the large quantity of scientific literature. Recently a survey of survey papers (SoS) was published to keep track of the quantity of refereed survey papers in information visualization conferences and journals. However, no such resources exist to inform readers of the large volume of books being published on the subject, leaving the possibility of valuable knowledge being overlooked. We present the first literature survey of information visualization books that addresses this challenge by surveying the large volume of books on the topic of information visualization and visual analytics. This unique survey addresses some special challenges associated with collections of books (as opposed to research papers), including searching, browsing and cost. This paper features a novel two-level classification based on both books and chapter topics examined in each book, enabling the reader to quickly identify to what depth a topic of interest is covered within a particular book. Readers can use this survey to identify the most relevant book for their needs amongst a quickly expanding collection. In indexing the landscape of information visualization books, this survey provides a valuable resource to both experienced researchers and newcomers in the data visualization discipline.