13 research outputs found
Usability testing for improving interactive geovisualization techniques
Usability describes a product’s fitness for use according to a set of predefined criteria.
Whatever the aim of the product, it should facilitate users’ tasks or enhance their performance
by providing appropriate analysis tools. In both cases, the main interest is to satisfy users in
terms of providing relevant functionality which they find fit for purpose. “Testing usability
means making sure that people can find and work with [a product’s] functions to meet their
needs” (Dumas and Redish, 1999: 4). Usability testing is therefore concerned with
establishing whether people can complete their tasks with ease and, at the same time,
whether the product helps them do their jobs more effectively.
This document describes the findings of a usability study carried out on DecisionSite Map
Interaction Services (Map IS). DecisionSite, a product of Spotfire, Inc., is an interactive
system for the visual and dynamic exploration of data, designed to support decision-making.
The system was coupled to ArcExplorer (forming DecisionSite Map IS) to provide
limited GIS functionality (simple user interface, basic tools, and data management) and
support users of spatial data. Hence, this study set out to test the suitability of the coupling
between the two software components (DecisionSite and ArcExplorer) for the purpose of
exploring spatial data. The first section briefly discusses DecisionSite’s visualization
functionality. The second section describes the test goals, its design, the participants and data
used. The following section concentrates on the analysis of results, while the final section
discusses future areas of research and possible development.
Dynamic Aggregation to Support Pattern Discovery: A case study with web logs
Rapid growth of digital data collections is overwhelming the
capabilities of humans to comprehend them without aid. The extraction of useful
data from large raw data sets is something that humans do poorly because of the
overwhelming amount of information. Aggregation is a technique that extracts
important aspects from groups of data, reducing the amount that the user has
to deal with at one time and thereby enabling the discovery of patterns, outliers,
gaps, and clusters. Previous mechanisms for interactive exploration of
aggregated data were either too complex to use or too limited in scope. This
paper proposes a new technique for dynamic aggregation that can be combined with
dynamic queries to support most of the tasks involved in data manipulation.
(UMIACS-TR-2002-26)
(HCIL-TR-2001-27)
Interactive problem solving via algorithm visualization
COMIND is a tool for the conceptual design of industrial products. It helps designers define and evaluate the initial design space by using search algorithms to generate sets of feasible solutions. Two algorithm visualization techniques, Kaleidoscope and Lattice, and one visualization of n-dimensional data, MAP, are used to externalize the machine's problem-solving strategies and the tradeoffs that result from using these strategies. After a short training period, users are able to discover tactics to explore the design space effectively, evaluate new design solutions, and learn important relationships among design criteria, search speed, and solution quality. We thus propose that visualization can serve as a tool for interactive intelligence, i.e., human-machine collaboration for solving complex problems.
Dynamic magical environments : engaging interaction based on the art of illusion
Thesis (M.S.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1995. Includes bibliographical references (leaves 58-61). By Ishantha Joseph Lokuge.
Browsing Large Online Data Using Generalized Query Previews
Companies, government agencies, and other organizations are making their data
available to the world over the Internet. These organizations typically store
their data in large tables kept in relational databases, and online access to
such databases is common. Users query these databases through various
front-ends that use command languages, menus, or form fill-in interfaces.
Many of these interfaces give users little or no information about the contents
and distribution of the data. This leads users to waste time and network
resources posing queries that have zero-hit or mega-hit results.
Generalized query previews is a user interface architecture for the efficient
browsing of large online data. It supplies distribution information to users,
providing an overview of the data, and gives continuous feedback about the size
of the results as the query is being formed, providing a preview of the results.
Generalized query previews allows users to visually browse all of the attributes
of the data. Users can select from these attributes to form a view. Views are
used to display the distribution information. Queries are incrementally and
visually formed by selecting items from numerous charts attached to these views.
Users continuously get feedback on the distribution information while they make
their selections. Later, users fetch the desired portions of the data by sending
their queries over the network. As they make informed queries, they can avoid
submitting queries that will generate zero-hit or mega-hit results.
Generalized query previews works on distributions, and distribution information
tends to be smaller than the raw data; this also contributes to better network
performance.
This dissertation presents the development of generalized query previews, field
studies on various platforms, and experimental results. It also presents an
architecture of the algorithms and data structures for the generalized query
previews.
There are three contributions of this dissertation. First, this work offers a
general user interface architecture for browsing large online data. Second, it
presents field studies and experimental work that define the application domain
for generalized query previews. Third, it contributes the algorithms and data
structures that underlie this architecture.
(UMIACS-TR-2001-70)
(HCIL-TR-2001-22)
Linking focus and context in three-dimensional multiscale environments
The central question behind this dissertation is this: In what ways can 3D multiscale spatial information be presented in an interactive computer graphics environment, such that a human observer can better comprehend it? Toward answering this question, a two-pronged approach is employed that consists of practice within computer user-interface design, and theory grounded in perceptual psychology, bound together by an approach to the question in terms of focus and context as they apply to human attention. The major practical contribution of this dissertation is the development of a novel set of techniques for linking 3D windows to various kinds of reference frames in a virtual scene and to each other, linking one or more focal views with a view that provides context. Central to these techniques is the explicit recognition of the frames of reference inherent in objects, in computer-graphics viewpoint specifications, and in the human perception and cognitive understanding of space. Many of these techniques are incorporated into the GeoZui3D system as major extensions. An empirical evaluation of these techniques confirms the utility of 3D window proxy representations and orientation coupling. The major theoretical contribution is a cognitive systems model that predicts when linked focus and context views should be used over other techniques such as zooming. The predictive power of the model comes from explicit recognition of locations where a user will focus attention, as well as applied interpretations of the limitations of visual working memory. The model's ability to predict performance is empirically validated, while its ability to model user error is empirically founded. Both the model and the results of the related experiments suggest that multiple linked windows can be an effective way of presenting multiscale spatial information, especially in situations involving the comparison of three or more objects.
The contributions of the dissertation are discussed in the context of the applications that have motivated them.
Acoustic data optimisation for seabed mapping with visual and computational data mining
Oceans cover 70% of Earth’s surface, yet little is known about their waters.
While the echosounders often used to explore our oceans have developed at
a tremendous rate since World War II, the methods used to analyse and interpret
their data have remained largely the same. These methods are inefficient, time
consuming, and often costly when dealing with the large volumes of data that
modern echosounders produce. This PhD project examines the complexity of the
de facto seabed mapping technique by exploring and analysing acoustic data with
a combination of data mining and visual analytics methods.
First, we test for redundancy in multibeam echosounder (MBES) data
using the component-plane visualisation of a Self-Organising Map (SOM). A total
of 16 visual groups were identified among the 132 statistical data descriptors. The
optimised MBES dataset had 35 attributes drawn from the 16 visual groups,
representing a 73% reduction in data dimensionality. A combined Principal
Component Analysis (PCA) + k-means approach was used to cluster both datasets.
The cluster results were
visually compared as well as internally validated using four different internal
validation methods.
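As a rough, numpy-only illustration of the PCA + k-means step described above (the data here are synthetic stand-ins, not MBES attributes, and a real analysis would use far more descriptors and validate the choice of k):

```python
# Illustrative PCA + k-means pipeline: project features onto the top
# principal components, then cluster the projections with Lloyd's algorithm.

import numpy as np

rng = np.random.default_rng(0)
# two synthetic "seabed type" groups in a 5-attribute feature space
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(5, 1, (50, 5))])

# PCA: centre the data, then project onto the top-2 principal axes via SVD
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# k-means with k=2; seed one centre from each end of the data for determinism
centres = Z[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((Z[:, None] - centres) ** 2).sum(-1), axis=1)
    centres = np.array([Z[labels == k].mean(axis=0) for k in range(2)])

print(np.bincount(labels))  # sizes of the two recovered clusters
```

Internal validation (e.g. silhouette or Davies-Bouldin indices, as used in the chapter) would then score how well the chosen k fits the data.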
Next, we tested two novel approaches to singlebeam echosounder (SBES)
data processing and clustering: visual exploration for outlier detection and
direct clustering of time-series echo returns. Visual exploration identified further
outliers that the automatic procedure was not able to find. The SBES data were then
clustered directly. The internal validation indices suggested an optimal number of
three clusters, consistent with the assumption that the SBES time series
represent the subsurface classes of the seabed.
Next, the SBES data were joined with the corresponding MBES data based on
identification of the closest locations between MBES and SBES. Two algorithms,
PCA + k-means and fuzzy c-means, were tested and the results visualised. From
visual comparison, the cluster boundaries appeared better defined than those of
the clustered MBES data alone, indicating that adding SBES did in fact improve
the boundary definitions.
Next, the cluster results from the analysis chapters were validated against
ground-truth data using a confusion matrix and kappa coefficients. For MBES, the
classes derived from the optimised data yielded better accuracy than those from
the original data. For SBES, direct clustering provided a relatively reliable
overview of the underlying classes in the survey area. The combined MBES + SBES
data provided by far the best accuracy for mapping, with an almost 10% increase
in overall accuracy over the original MBES data.
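The validation step can be sketched as a confusion matrix plus Cohen's kappa; the label vectors below are invented toy data, not results from the thesis.

```python
# Accuracy check for a clustering against ground truth: build a confusion
# matrix, then compute overall accuracy and Cohen's kappa (agreement
# corrected for chance).

import numpy as np

truth = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])  # hypothetical ground truth
pred  = np.array([0, 0, 1, 1, 2, 2, 2, 2, 2, 0])  # hypothetical cluster labels

k = 3
cm = np.zeros((k, k), dtype=int)
for t, p in zip(truth, pred):
    cm[t, p] += 1

n = cm.sum()
po = np.trace(cm) / n                      # observed agreement (accuracy)
pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # agreement expected by chance
kappa = (po - pe) / (1 - pe)

print(f"accuracy={po:.2f}, kappa={kappa:.2f}")  # accuracy=0.90, kappa=0.85
```

Kappa is the more conservative figure: a classifier can score high raw accuracy simply because one class dominates, while kappa discounts that chance agreement.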
The results are promising for optimising acoustic data and improving the quality
of seabed mapping. Furthermore, these approaches have the potential for
significant time and cost savings in the seabed mapping process. Finally, some
future directions are recommended for the findings of this research project,
with the consideration that they could contribute to addressing seabed mapping
problems at mapping agencies worldwide.
Using Aggregation and Dynamic Queries for Exploring Large Data Sets
When working with large data sets, users perform three primary types of activities: data manipulation, data analysis, and data visualization. The data manipulation process involves the selection and transformation of data prior to viewing. This paper addresses user goals for this process and the interactive interface mechanisms that support them. We consider three classes of data manipulation goals: controlling the scope (selecting the desired portion of the data), selecting the focus of attention (concentrating on the attributes of data that are relevant to current analysis), and choosing the level of detail (creating and decomposing aggregates of data). We use this classification to evaluate the functionality of existing data exploration interface techniques. Based on these results, we have expanded an interface mechanism called the Aggregate Manipulator (AM) and combined it with Dynamic Query (DQ) to provide complete coverage of the data manipulation goals. We use real estate sales data to demonstrate how the AM and DQ synergistically function in our interface
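The three goal classes can be sketched as plain-Python operations on toy records; the field names and thresholds below are invented for illustration and are not the paper's real-estate dataset.

```python
# The three data-manipulation goals on toy real-estate records:
# scope (filter), focus (attribute selection), and level of detail (aggregation).

sales = [
    {"area": "north", "price": 120_000, "beds": 2},
    {"area": "north", "price": 180_000, "beds": 3},
    {"area": "south", "price": 150_000, "beds": 3},
]

# 1. scope: select the desired portion of the data (a dynamic-query filter)
in_scope = [s for s in sales if s["price"] <= 160_000]

# 2. focus: keep only the attributes relevant to the current analysis
focused = [{k: s[k] for k in ("area", "price")} for s in in_scope]

# 3. level of detail: aggregate per area (the aggregate-manipulator role)
by_area = {}
for s in focused:
    by_area.setdefault(s["area"], []).append(s["price"])
summary = {area: sum(p) / len(p) for area, p in by_area.items()}

print(summary)  # mean price per area at the coarser level of detail
```

In the paper's interface the Aggregate Manipulator controls step 3 while Dynamic Query controls step 1, and the two act on the same data in concert.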
Semantic and Visual Analysis of Metadata to Search and Select Heterogeneous Information Resources
An increasing number of activities in several disciplinary and industrial fields, such as scientific research, industrial design and environmental management, rely on the production and use of information resources representing objects, information and knowledge. The vast availability of digitised information resources (documents, images, maps, videos, 3D models) highlights the need for appropriate methods to effectively share and employ all these resources. In particular, tools to search and select information resources produced by third parties are required to successfully carry out our daily work activities. Headway in this direction is made by adopting metadata: a description of the most relevant features characterising an information resource. However, many features have to be considered to fully describe the information resources in sophisticated fields such as those mentioned. This leads to complex metadata and to a growing need for tools that cope with this complexity. The thesis aims at developing methods to analyse metadata, easing the search and comparison of information resources. The goal is to select the resources which best fit the user's needs in specific activities. In particular, the thesis addresses the problem of metadata complexity and supports the discovery of selection criteria which are unknown to the user. The metadata analysis consists of two approaches: visual and semantic analysis. The visual analysis involves users as much as possible to let them discover the most suitable selection criteria. The semantic analysis supports the browsing and selection of information resources, taking into account the user's knowledge, which is properly formalised.