A model of software component interactions using the call graph technique
Interaction information related to operations between components is important, especially when a program needs to be modified and maintained: the affected components must be identified and matched against the requirements of the system. This information can be obtained through code review, which requires an analyst to search the source code for specific information, a very time-consuming process. This research proposed a model for representing software component interactions in which this information was automatically extracted from the source code in order to provide an effective display of component interactions. The objective was achieved by applying a research design methodology consisting of five phases: awareness of the problem, suggestion, development, evaluation, and conclusion. The development phase was
conducted by automatically extracting the components' interaction information using
appropriate reverse engineering tools and supporting programs that were developed in
this research. These tools were used to extract software information, extract component interaction information from software programs, and transform this information into the proposed model, which took the form of a call graph. The produced model was evaluated using a visualization tool and through expert review: the visualization tool displayed the call graph, converting it from a text format into a graphical view, and the model evaluation itself was conducted through an expert review technique. The findings from the model evaluation show that the produced model can be used and manipulated to visualize component interactions. It provides a process that gives analysts a visualization display for viewing the interactions of software components, in order to comprehend the component integrations involved. This information can be manipulated to improve program comprehension, especially for other software maintenance purposes.
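The caller-to-callee extraction at the heart of such a model can be sketched in a few lines. This is a minimal illustration using Python's standard ast module; the research used dedicated reverse engineering tools for its target programs, so the sample functions and the simple name-only call matching here are assumptions for illustration only.

```python
# Sketch: mine caller -> callee edges from source code to form a call graph.
# Uses Python's stdlib ast module; only direct calls to plain names are
# captured, which is enough to illustrate the idea.
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict:
    """Map each function name to the set of function names it calls."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    graph[node.name].add(inner.func.id)
    return dict(graph)

# Hypothetical component code to analyze.
code = """
def load(): pass
def parse(): load()
def main():
    parse()
    load()
"""
print(build_call_graph(code))  # caller -> callees mapping for the snippet
```

The resulting edge set is exactly what a visualization tool needs to lay out component interactions as a graph.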
Text visualization techniques: Taxonomy, visual survey, and community insights
Figure 1: The web-based user interface of our visual survey called Text Visualization Browser. By using the interaction panel on the left-hand side, researchers can look for specific visualization techniques and filter out entries with respect to a set of categories (cf. the taxonomy given in Sect. 3). Details for a selected entry are shown by clicking on a thumbnail image in the main view. The survey contains 141 categorized visualization techniques as of January 19, 2015.
Text visualization has become a growing and increasingly important subfield of information visualization. Thus, it is getting harder for researchers to look for related work with specific tasks or visual metaphors in mind. In this paper, we present an interactive visual survey of text visualization techniques that can be used for searching for related work, getting an introduction to the subfield, and gaining insight into research trends. We describe the taxonomy used for categorization of text visualization techniques and compare it to approaches employed in several other surveys. Finally, we present the results of analyses performed on the entry data.
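The category filtering such a browser performs can be illustrated with a small sketch. The entry names and category labels below are invented placeholders, not the paper's actual taxonomy.

```python
# Sketch: filter survey entries so that only techniques tagged with all
# requested categories remain. Entries and categories are hypothetical.
entries = [
    {"name": "WordTree", "categories": {"text", "trees"}},
    {"name": "ThemeRiver", "categories": {"text", "time-series"}},
    {"name": "TagCloud", "categories": {"text", "clouds"}},
]

def filter_entries(entries, required):
    """Keep entries whose category set contains every required category."""
    return [e["name"] for e in entries if required <= e["categories"]]

print(filter_entries(entries, {"text", "time-series"}))  # ['ThemeRiver']
```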
Novel developments in SBGN-ED and applications
Systems Biology Graphical Notation (SBGN, http://sbgn.org) [1] is an emerging standard for graphical representations of biochemical and cellular processes studied in systems biology. Three different views (Process Description, Entity Relationship, and Activity Flow) cover several aspects of the represented processes at different levels of detail. SBGN helps to communicate biological knowledge more efficiently and accurately between different research communities in the life sciences. However, to support SBGN, methods and tools for editing, validating, and translating SBGN maps are necessary.
We present methods for these tasks and novel developments in SBGN-ED (www.sbgn-ed.org) [2], a tool that allows users to create all three types of SBGN maps from scratch, to validate these maps for syntactic and semantic correctness, to translate maps from the KEGG database into SBGN, and to export SBGN maps into several file and image formats. SBGN-ED is based on VANTED (Visualization and Analysis of NeTworks containing Experimental Data, http://www.vanted.org) [3].
As applications of SBGN and SBGN-ED we present furthermore MetaCrop (http://metacrop.ipk-gatersleben.de) [4], a database that summarizes diverse information about metabolic pathways in crop plants, and RIMAS (Regulatory Interaction Maps of Arabidopsis Seed Development, http://rimas.ipk-gatersleben.de) [5], an information portal that provides a comprehensive overview of regulatory pathways and genetic interactions during Arabidopsis embryo and seed development. 

[1] Le Novère, N. et al. (2009) The Systems Biology Graphical Notation. Nature Biotechnology, 27, 735-741.
[2] Czauderna, T., Klukas, C., Schreiber, F. (2010) Editing, validating, and translating of SBGN maps. Bioinformatics, 26 (18), 2340-2341.
[3] Junker, B.H., Klukas, C., Schreiber, F. (2006) VANTED: A system for advanced data analysis and visualization in the context of biological networks. BMC Bioinformatics, 7, 109+.
[4] Grafahrend-Belau, E., Weise, S., Koschützki, D., Scholz, U., Junker, B.H., Schreiber, F. (2008) MetaCrop - A detailed database of crop plant metabolism. Nucleic Acids Research, 36, D954-D958.
[5] Junker, A., Hartmann, A., Schreiber, F., Bäumlein, H. (2010) An engineer's view on regulation of seed development. Trends in Plant Science, 15(6), 303-307.

ENABLING TECHNIQUES FOR EXPRESSIVE FLOW FIELD VISUALIZATION AND EXPLORATION
Flow visualization plays an important role in many scientific and engineering disciplines such as climate modeling, turbulent combustion, and automobile design. The most common method for flow visualization is to display integral flow lines such as streamlines computed from particle tracing. Effective streamline visualization should capture flow patterns and display them with appropriate density, so that critical flow information can be visually acquired. In this dissertation, we present several approaches that facilitate expressive flow field visualization and exploration. First, we design a unified information-theoretic framework to model streamline selection and viewpoint selection as symmetric problems. Two interrelated information channels are constructed between a pool of candidate streamlines and a set of sample viewpoints. Based on these information channels, we define streamline information and viewpoint information to select the best streamlines and viewpoints, respectively. Second, we present a focus+context framework to magnify small features and reduce occlusion around them while compacting the context region in a full view. This framework partitions the volume into blocks and deforms them to guide streamline repositioning. The desired deformation is formulated into energy terms and achieved by minimizing the energy function. Third, measuring the similarity of integral curves is fundamental to many tasks such as feature detection, pattern querying, streamline clustering, and hierarchical exploration. We introduce FlowString, which extracts shape-invariant features from streamlines to form an alphabet of characters and encodes each streamline into a string. The similarity of two streamline segments then becomes a specially designed edit distance between two strings. Leveraging the suffix tree, FlowString provides a string-based method for exploratory streamline analysis and visualization.
A universal alphabet is learned from multiple data sets to capture basic flow patterns that exist in a variety of flow fields. This allows easy comparison and efficient queries across data sets. Fourth, for the exploration of vascular data sets, which contain a series of vector fields together with multiple scalar fields, we design a web-based approach for users to investigate the relationships among different properties, guided by histograms. The vessel structure is mapped from the 3D volume space to a 2D graph, which allows more efficient interaction and effective visualization on websites. A segmentation scheme is proposed to divide the vessel structure based on a user-specified property in order to further explore the distribution of that property over space.
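The string-encoding idea behind FlowString can be sketched as an edit distance with a shape-aware substitution cost: once streamline segments are mapped to characters, similar shapes should be cheap to swap. The alphabet characters and cost table below are invented for illustration; the dissertation learns its alphabet from shape-invariant streamline features and uses its own specially designed distance.

```python
# Sketch: edit distance between character-encoded streamlines, where a
# substitution between "similar" flow characters costs less than a full
# mismatch. Characters and costs are hypothetical placeholders.
SUB_COST = {("s", "c"): 0.5, ("c", "s"): 0.5}  # e.g. straight vs. gentle curve

def edit_distance(a: str, b: str) -> float:
    """Levenshtein distance with a shape-aware substitution cost table."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else SUB_COST.get((a[i - 1], b[j - 1]), 1.0)
            d[i][j] = min(d[i - 1][j] + 1.0,   # deletion
                          d[i][j - 1] + 1.0,   # insertion
                          d[i - 1][j - 1] + sub)  # (cheap) substitution
    return d[m][n]

print(edit_distance("sscv", "sccv"))  # 0.5: one cheap s -> c substitution
```

With such a distance in hand, clustering and pattern queries over streamline segments reduce to well-studied string problems.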
3D Multi-user interactive visualization with a shared large-scale display
When multiple users interact with a virtual environment on a large-scale display, several issues need to be addressed to facilitate the interaction. In this thesis, three main topics for collaborative visualization are discussed: display setup, interactive visualization, and visual fatigue. The problems the author addresses in this thesis are how multiple users can interact with a shared large-scale display depending on the display setup, and how they can interact with the shared visualization in a way that does not lead to visual fatigue.
The first user study (Chapter 3) explores display setups for multi-user interaction with a shared large-scale display. The author describes the design of the three main display setups (a shared view, a split screen, and a split screen with navigation information) and a demonstration using these setups. The user study found that the split screen and the split screen with navigation information can improve users' confidence, reduce frustration levels, and are preferred over a shared view. However, a shared view can still provide effective interaction and collaboration, and the display setups did not have a large impact on usability and workload.
From the first study, the author employed a shared view for multi-user
interactive visualization with a shared large-scale display due to the
advantages of the shared view. To improve interactive visualization with a
shared view for multiple users, the author designed and conducted the second
user study (Chapter 4). A conventional interaction technique, the mean tracking method, was not effective for more than three users. To overcome the limitations of existing multi-user interactive visualization techniques, two new techniques (the Object Shift Technique and the Activity-based Weighted Mean Tracking method) were developed and evaluated in the second user study. The Object Shift Technique translates the virtual objects in the direction opposite to the movement of the Point of View (PoV), and the Activity-based Weighted Mean Tracking method assigns a higher weight to active users than to stationary users when determining the location of the PoV. The results of the user study showed that these techniques can support collaboration, improve interactivity, and cause no more visual discomfort than the conventional method.
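The Activity-based Weighted Mean Tracking idea can be sketched as a weighted mean over user head positions, with movers counted more heavily than stationary users. The weight values, activity threshold, and 2D positions below are illustrative assumptions, not the thesis's exact parameters.

```python
# Sketch: shared PoV as an activity-weighted mean of user positions.
# Users whose speed exceeds a threshold get a larger (hypothetical) weight.

def weighted_pov(positions, speeds, threshold=0.1, active_w=2.0, idle_w=1.0):
    """Weighted mean of user positions; active users influence the PoV more."""
    weights = [active_w if s > threshold else idle_w for s in speeds]
    total = sum(weights)
    return tuple(
        sum(w * p[axis] for w, p in zip(weights, positions)) / total
        for axis in range(len(positions[0]))
    )

# Two users: one standing at (0, 0), one moving near (4, 0).
pov = weighted_pov([(0.0, 0.0), (4.0, 0.0)], speeds=[0.0, 0.5])
print(pov)  # roughly (2.67, 0.0): the PoV leans toward the active user
```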
The third study (Chapter 5) describes how to reduce visual fatigue in 3D stereoscopic visualization with a single point of view (PoV). When multiple users interact with 3D stereoscopic VR using multi-user interactive visualization techniques and are close to the virtual objects, they can experience 3D visual fatigue from the large disparity. To reduce this fatigue, an Adaptive Interpupillary Distance (Adaptive IPD) adjustment technique was developed. To evaluate the Adaptive IPD method, the author compared it to traditional 3D stereoscopic and monoscopic visualization techniques. Through the user experiments, the author confirmed that the proposed method can reduce visual discomfort yet maintain compelling depth perception, providing the most preferred 3D stereoscopic visualization experience.
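An Adaptive IPD adjustment of this kind can be sketched as scaling the rendering eye separation down as the nearest virtual object approaches, which bounds screen disparity. The distances, linear scaling rule, and clamp below are illustrative assumptions, not the thesis's exact formula.

```python
# Sketch: shrink the rendering IPD when objects come inside a comfort
# distance, so stereo disparity (and thus visual fatigue) stays bounded.
# All parameter values are hypothetical.

def adaptive_ipd(nearest_dist, base_ipd=0.064, comfort_dist=1.0, min_scale=0.2):
    """Linearly reduce the IPD inside the comfort distance, with a floor."""
    if nearest_dist >= comfort_dist:
        return base_ipd  # far away: full stereo separation
    scale = max(min_scale, nearest_dist / comfort_dist)
    return base_ipd * scale

print(adaptive_ipd(2.0))   # 0.064: object far away, IPD unchanged
print(adaptive_ipd(0.5))   # 0.032: halfway inside, disparity halved
print(adaptive_ipd(0.05))  # clamped near 0.0128 by min_scale
```

The floor keeps some depth perception even at very close range rather than collapsing to a monoscopic view.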
For these studies, the author developed a software framework and designed a set of experiments (Chapter 6). The framework architecture, which contains the three main ideas, is described. A demonstration application for multi-dimensional decision making was developed using the framework.
The primary contributions of this thesis include a literature review of multi-user interaction with a shared large-scale display, deeper insights into three display setups for multi-user interaction, development of the Object Shift Technique, the Activity-based Weighted Mean Tracking method, and the Adaptive Interpupillary Distance adjustment technique, evaluation of the three novel interaction techniques, and development of a framework for supporting multi-user interaction with a shared large-scale display and its application to a multi-dimensional decision-making VR system.
Designing Improved Sediment Transport Visualizations
Monitoring, or more commonly, modeling of sediment transport in the coastal environment is a critical task with relevance to coastline stability, beach erosion, tracking environmental contaminants, and safety of navigation. Increased intensity and regularity of storms such as Superstorm Sandy heighten the importance of our understanding of sediment transport processes. A weakness of current modeling capabilities is the ability to easily visualize the results in an intuitive manner. Many of the available visualization software packages display only a single variable at once, usually as a two-dimensional, plan-view cross-section. With such limited display capabilities, sophisticated 3D models are undermined in both the interpretation of results and the dissemination of information to the public. Here we explore a subset of existing modeling capabilities (specifically, modeling scour around man-made structures) and visualization solutions, examine their shortcomings, and present a design for a 4D visualization for sediment transport studies that is based on perceptually-focused data visualization research and recent and ongoing developments in multivariate displays. Vector and scalar fields are co-displayed, yet kept independently identifiable by utilizing human perception's separation of color, texture, and motion. Bathymetry, sediment grain-size distribution, and forcing hydrodynamics are a subset of the variables investigated for simultaneous representation. Direct interaction with field data is tested to support rapid validation of sediment transport model results. Our goal is a tight integration of both simulated data and real-world observations to support analysis and simulation of the impact of major sediment transport events such as hurricanes. We unite modeled results and field observations within a geodatabase designed as an application schema of the Arc Marine Data Model.
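The channel-separated co-display idea can be sketched by mapping each variable onto its own perceptual channel: a scalar such as depth onto color, and the current vector onto glyph orientation and size. The color ramp and glyph encoding below are invented placeholders, not the design proposed here.

```python
# Sketch: encode one grid cell's scalar (depth -> gray level) and vector
# (current -> glyph angle and length) into separable display channels.
import math

def encode_cell(depth, u, v, max_depth=30.0):
    """Map depth to a color channel and the (u, v) current vector to an
    oriented glyph, keeping the two variables independently readable."""
    gray = max(0.0, min(1.0, depth / max_depth))    # scalar -> color ramp
    angle = math.degrees(math.atan2(v, u)) % 360.0  # vector -> orientation
    length = math.hypot(u, v)                       # vector -> glyph size
    return {"gray": gray, "angle": angle, "length": length}

cell = encode_cell(depth=27.0, u=0.3, v=0.3)
print(cell)  # depth and flow direction land in separate channels
```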
Our real-world focus is on the Redbird Artificial Reef Site, roughly 18 nautical miles offshore of Delaware Bay, Delaware, where repeated surveys have identified active scour and bedform migration in 27 m water depth amongst the more than 900 deliberately sunken subway cars and vessels. Coincidentally collected high-resolution multibeam bathymetry, backscatter, and side-scan sonar data from surface and autonomous underwater vehicle (AUV) systems, along with complementary sub-bottom, grab sample, bottom imagery, and wave and current (via ADCP) datasets, provide the basis for analysis. This site is particularly attractive due to its overlap with the Delaware Bay Operational Forecast System (DBOFS), a model that provides historical and forecast oceanographic data that can be tested in hindcast against significant changes observed at the site during Superstorm Sandy and used to predict future changes through small-scale modeling around the individual reef objects.
Effects of Visual Interaction Methods on Simulated Unmanned Aircraft Operator Situational Awareness
The limited field of view of the static egocentric visual displays employed in unmanned aircraft control introduces the soda straw effect on operators, which significantly affects their ability to capture and maintain situational awareness because peripheral visual data is not depicted. The problem with insufficient operator situational awareness is the resulting increased potential for error and oversight during operation of unmanned aircraft, leading to accidents and mishaps costing United States taxpayers millions of dollars per year. The purpose of this quantitative, experimental, completely randomized design study was to examine and compare the use of dynamic eyepoint versus static visual interaction in a simulated stationary egocentric environment to determine which, if either, resulted in higher situational awareness. The theoretical framework for the study established the premise that the amount of visual information available could affect the situational awareness of an operator and that increasing visual information through dynamic eyepoint manipulation may result in higher situational awareness than static visualization. Four experimental dynamic visual interaction methods were examined (analog joystick, head tracker, uninterrupted hat/point-of-view switch, and incremental hat/point-of-view switch) and compared to a single static method (the control treatment). The five methods were used in experimental testing with 150 participants to determine whether the use of a dynamic eyepoint significantly increased the situational awareness of a user within a stationary egocentric environment, which would indicate that employing dynamic control could reduce the occurrence or consequences of the soda straw effect. The primary difference between the four dynamic visual interaction methods was their unique manipulation approaches to controlling the pitch and yaw of the simulated eyepoint.
The identification of dynamic visual interaction as increasing user situational awareness (SA) may lead to further refinement of human-machine interface (HMI), teleoperation, and unmanned aircraft control principles, and to the pursuit and performance of related research.
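The contrast between the two hat-switch methods can be sketched as two eyepoint update rules: an uninterrupted switch slews the view continuously while held, whereas an incremental switch steps it per press. The step size and slew rate below are illustrative assumptions, not the study's parameters.

```python
# Sketch: two hypothetical eyepoint yaw-control rules for a hat switch.

def incremental_step(yaw, presses, step=15.0):
    """Each hat-switch press rotates the eyepoint by a fixed angular step."""
    return (yaw + presses * step) % 360.0

def uninterrupted_slew(yaw, held_seconds, rate=45.0):
    """Holding the switch slews the eyepoint at a constant angular rate."""
    return (yaw + held_seconds * rate) % 360.0

print(incremental_step(0.0, presses=3))         # 45.0 degrees
print(uninterrupted_slew(0.0, held_seconds=2))  # 90.0 degrees
```

Both rules extend the operator's effective field of regard beyond the static display, which is the mechanism the study credits with countering the soda straw effect.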