GART: The Gesture and Activity Recognition Toolkit
Presented at the 12th International Conference on Human-Computer Interaction, Beijing, China, July 2007. The original publication is available at www.springerlink.com. The Gesture and Activity Recognition Toolkit (GART) is
a user interface toolkit designed to enable the development of gesture-based
applications. GART provides an abstraction to machine learning
algorithms suitable for modeling and recognizing different types of
gestures. The toolkit also provides support for the data collection and
the training process. In this paper, we present GART and its machine
learning abstractions. Furthermore, we detail the components of the
toolkit and present two example gesture recognition applications.
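The workflow the abstract describes (collect samples, train per-gesture models, recognize new input) can be sketched as below. The class and method names are illustrative, not GART's actual API, and a nearest-centroid classifier stands in for the toolkit's machine learning back end.

```python
# Minimal sketch of a collect -> train -> recognize workflow.
# Hypothetical API; a nearest-centroid model stands in for the ML back end.

class GestureRecognizer:
    def __init__(self):
        self.models = {}  # gesture name -> per-feature mean vector

    def train(self, name, samples):
        """Fit one model per gesture from collected sensor samples."""
        n, dims = len(samples), len(samples[0])
        self.models[name] = [sum(s[i] for s in samples) / n for i in range(dims)]

    def recognize(self, sample):
        """Return the gesture whose model is closest to the sample."""
        def dist(centroid):
            return sum((a - b) ** 2 for a, b in zip(sample, centroid))
        return min(self.models, key=lambda g: dist(self.models[g]))

rec = GestureRecognizer()
rec.train("swipe", [[1.0, 0.1], [0.9, 0.0]])
rec.train("circle", [[0.1, 1.0], [0.0, 0.9]])
print(rec.recognize([0.95, 0.05]))  # -> swipe
```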
The Scalable Brain Atlas: instant web-based access to public brain atlases and related content
The Scalable Brain Atlas (SBA) is a collection of web services that provide
unified access to a large collection of brain atlas templates for different
species. Its main component is an atlas viewer that displays brain atlas data
as a stack of slices in which stereotaxic coordinates and brain regions can be
selected. These are subsequently used to launch web queries to resources that
require coordinates or region names as input. It supports plugins which run
inside the viewer and respond when a new slice, coordinate or region is
selected. It contains 20 atlas templates in six species, and plugins to compute
coordinate transformations, display anatomical connectivity and fiducial
points, and retrieve properties, descriptions, definitions and 3d
reconstructions of brain regions. The ambition of SBA is to provide a unified
representation of all publicly available brain atlases directly in the web
browser, while remaining a responsive and light weight resource that
specializes in atlas comparisons, searches, coordinate transformations and
interactive displays.
Comment: Rolf Kötter sadly passed away on June 9th, 2010. He co-initiated
this project and played a crucial role in the design and quality assurance of
the Scalable Brain Atlas.
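The viewer/plugin contract described above (plugins run inside the viewer and respond when a slice, coordinate, or region is selected) is essentially an observer pattern. The sketch below is illustrative only; real SBA plugins are JavaScript running in the browser, and these names are assumptions.

```python
# Sketch of a viewer notifying registered plugins about region selections.
# Hypothetical names; real SBA plugins are browser-side JavaScript.

class AtlasViewer:
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def select_region(self, region):
        # Notify every plugin of the newly selected brain region.
        for p in self.plugins:
            p.on_region(region)

class RegionInfoPlugin:
    """Example plugin: builds a web query keyed on the region name."""
    def __init__(self):
        self.last_query = None

    def on_region(self, region):
        self.last_query = f"query:{region}"

viewer = AtlasViewer()
plugin = RegionInfoPlugin()
viewer.register(plugin)
viewer.select_region("hippocampus")
print(plugin.last_query)  # -> query:hippocampus
```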
From Big Data to Big Displays: High-Performance Visualization at Blue Brain
Blue Brain has pushed high-performance visualization (HPV) to complement its
HPC strategy since its inception in 2007. In 2011, this strategy has been
accelerated to develop innovative visualization solutions through increased
funding and strategic partnerships with other research institutions.
We present the key elements of this HPV ecosystem, which integrates C++
visualization applications with novel collaborative display systems. We
motivate how our strategy of transforming visualization engines into services
enables a variety of use cases, not only for the integration with high-fidelity
displays, but also to build service oriented architectures, to link into web
applications and to provide remote services to Python applications.
Comment: ISC 2017 Visualization at Scale workshop.
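The "visualization engine as a service" pattern described above can be sketched as a renderer behind a request/response interface that thin clients delegate to. This is a generic stand-in, not Blue Brain's actual service API.

```python
# Illustrative service-oriented rendering pattern: the engine sits behind a
# request/response interface; clients (web or Python) only send parameters.
# Stand-in classes, not Blue Brain's actual API.

class RenderService:
    """Pretend remote renderer: takes view parameters, returns frame metadata."""
    def render(self, width, height, camera):
        return {"size": (width, height), "camera": camera, "format": "jpeg"}

class RemoteViewer:
    """Thin client that delegates all rendering to the service."""
    def __init__(self, service):
        self.service = service

    def snapshot(self, width=1920, height=1080, camera="front"):
        return self.service.render(width, height, camera)

frame = RemoteViewer(RenderService()).snapshot(camera="top")
print(frame["size"])  # -> (1920, 1080)
```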
Cross-Platform Presentation of Interactive Volumetric Imagery
Volume data is useful across many disciplines, not just medicine.
Thus, it is very important that researchers have a simple and
lightweight method of sharing and reproducing such volumetric
data. In this paper, we explore some of the challenges associated
with volume rendering, both from a classical sense and from the
context of Web3D technologies. We describe and evaluate the proposed
X3D Volume Rendering Component and its associated styles
for their suitability in the visualization of several types of image
data. Additionally, we examine the ability for a minimal X3D node
set to capture provenance and semantic information from outside
ontologies in metadata and integrate it with the scene graph.
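A scene using the X3D Volume Rendering Component pairs a VolumeData node with a render style and a 3D texture holding the voxels. The sketch below emits such a fragment from Python; the field values and file name are placeholders, not drawn from the paper.

```python
# Emit a minimal X3D volume-rendering fragment: VolumeData carries a
# render style (OpacityMapVolumeStyle) and a 3D texture with the voxels.
# Attribute values and the file name are placeholders.

import xml.etree.ElementTree as ET

scene = ET.Element("Scene")
volume = ET.SubElement(scene, "VolumeData", dimensions="4.0 4.0 4.0")
ET.SubElement(volume, "OpacityMapVolumeStyle", containerField="renderStyle")
ET.SubElement(volume, "ImageTexture3D", containerField="voxels",
              url='"volume.nrrd"')

print(ET.tostring(scene, encoding="unicode"))
```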
Surface Projection Method for Visualizing Volumetric Data
The goal of this project was to explore, develop, and implement additional visualization methods for volumetric data within MindSeer. This paper discusses the implementation of one such visualization method, the surface projection method, and compares it to other existing methods.
Constructing a gazebo: supporting teamwork in a tightly coupled, distributed task in virtual reality
Many tasks require teamwork. Team members may work concurrently, but there must be some occasions of coming together. Collaborative virtual environments (CVEs) allow distributed teams to come together across distance to share a task. Studies of CVE systems have tended to focus on the sense of presence or copresence with other people. They have avoided studying close interaction between users, such as the shared manipulation of objects, because CVEs suffer from inherent network delays and often have cumbersome user interfaces. Little is known about the effectiveness of collaboration in tasks requiring various forms of object sharing and, in particular, the concurrent manipulation of objects.
This paper investigates the effectiveness of supporting teamwork among a geographically distributed group in a task that requires the shared manipulation of objects. To complete the task, users must share objects through concurrent manipulation of both the same and distinct attributes. The effectiveness of teamwork is measured in terms of time taken to achieve each step, as well as the impression of users. The effect of interface is examined by comparing various combinations of walk-in cubic immersive projection technology (IPT) displays and desktop devices.
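The distinction between sharing the same attribute and sharing distinct attributes of one object can be sketched with per-attribute locks: edits to distinct attributes proceed concurrently, while edits to the same attribute are serialized. The names below are illustrative, not the paper's system.

```python
# Per-attribute locking sketch: two users can manipulate *distinct*
# attributes of one object concurrently; the *same* attribute serializes.
# Hypothetical names, not the CVE described in the paper.

import threading

class SharedObject:
    def __init__(self):
        self.attrs = {"position": 0.0, "rotation": 0.0}
        self.locks = {name: threading.Lock() for name in self.attrs}

    def adjust(self, attr, delta):
        with self.locks[attr]:  # only blocks peers editing the same attribute
            self.attrs[attr] += delta

obj = SharedObject()
t1 = threading.Thread(target=obj.adjust, args=("position", 1.0))
t2 = threading.Thread(target=obj.adjust, args=("rotation", 2.0))
t1.start(); t2.start()
t1.join(); t2.join()
print(obj.attrs)  # -> {'position': 1.0, 'rotation': 2.0}
```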
Sharing 3D city models: an overview
This study describes the computing methods now available to enable the sharing of three-dimensional (3D) data between various stakeholders for the purposes of city modeling and considers the need for a seamless approach for sharing, transmitting, and maintaining 3D city models. The study offers an overview of the technologies and the issues related to remote access, collaboration, and version control. It builds upon previous research on 3D city models where issues were raised on utilizing, updating, and maintaining 3D city models and providing access to various stakeholders. This paper also describes a case study that is currently analyzing the remote access requirements for a sustainable computer model of NewcastleGateshead in England. Options available are examined and areas of future research are discussed.
VisIVO - Integrated Tools and Services for Large-Scale Astrophysical Visualization
VisIVO is an integrated suite of tools and services specifically designed for
the Virtual Observatory. This suite constitutes a software framework for
effective visual discovery in currently available (and next-generation) very
large-scale astrophysical datasets. VisIVO consists of VisIVO Desktop - a
standalone application for interactive visualization on standard PCs, VisIVO Server
- a grid-enabled platform for high performance visualization and VisIVO Web - a
custom designed web portal supporting services based on the VisIVO Server
functionality. The main characteristic of VisIVO is support for
high-performance, multidimensional visualization of very large-scale
astrophysical datasets. Users can obtain meaningful visualizations rapidly
while preserving full and intuitive control of the relevant visualization
parameters. This paper focuses on newly developed integrated tools in VisIVO
Server allowing intuitive visual discovery with 3D views being created from
data tables. VisIVO Server can be installed easily on any web server with a
database repository. We discuss briefly aspects of our implementation of VisIVO
Server on a computational grid and also outline the functionality of the
services offered by VisIVO Web. Finally we conclude with a summary of our work
and pointers to future developments.
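The step of creating 3D views from data tables can be sketched as mapping three user-chosen columns to x, y, z point coordinates. The column names and table layout below are assumptions for illustration, not VisIVO's actual interface.

```python
# Sketch of a table-to-3D-view mapping: three chosen columns of a data
# table become point coordinates. Column names are illustrative only.

def table_to_points(rows, x_col, y_col, z_col):
    """Map selected columns of a table (list of dicts) to 3D points."""
    return [(r[x_col], r[y_col], r[z_col]) for r in rows]

catalog = [
    {"ra": 10.0, "dec": -5.0, "redshift": 0.1},
    {"ra": 12.5, "dec": -4.2, "redshift": 0.3},
]
points = table_to_points(catalog, "ra", "dec", "redshift")
print(points)  # -> [(10.0, -5.0, 0.1), (12.5, -4.2, 0.3)]
```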