Computer Science Technical Reports @ Virginia Tech
Identifying Native Applications with High Assurance
The work described in this paper investigates the problem of identifying and deterring stealthy malicious processes on a host. We point out the lack of strong application identification in mainstream operating systems. We solve the application identification problem by proposing a novel identification model in which user-level applications are required to present identification proofs at run time to be authenticated by the kernel using an embedded secret key. The secret key of an application is registered with a trusted kernel using a key registrar and is used to uniquely authenticate and authorize the application. We present a protocol for secure authentication of applications. Additionally, we develop a system call monitoring architecture that uses our model to verify the identity of applications when making critical system calls. Our system call monitoring can be integrated with existing policy specification frameworks to enforce application-level access rights. We implement and evaluate a prototype of our monitoring architecture in Linux as device drivers with nearly no modification of the kernel. The results from our extensive performance evaluation show that our prototype incurs low overhead, indicating the feasibility of our model.
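The abstract describes a run-time exchange in which an application proves knowledge of a registered secret key before the kernel authorizes a critical system call. As a rough illustration only (this is not the paper's protocol or prototype), a minimal keyed challenge-response sketch might look like the following; the registrar, function names, and application identifier are hypothetical.

```python
import hmac
import hashlib
import os

# Hypothetical key registrar: maps an application identifier to its
# registered secret key (in the paper this state lives inside a trusted kernel).
KEY_REGISTRAR = {"app-42": os.urandom(32)}

def make_proof(app_id: str, key: bytes, nonce: bytes) -> bytes:
    """Application side: prove knowledge of the embedded secret key by
    computing an HMAC over a kernel-supplied nonce."""
    return hmac.new(key, app_id.encode() + nonce, hashlib.sha256).digest()

def verify_proof(app_id: str, nonce: bytes, proof: bytes) -> bool:
    """Kernel side: recompute the HMAC with the registered key and compare
    in constant time before authorizing a critical system call."""
    key = KEY_REGISTRAR.get(app_id)
    if key is None:
        return False
    expected = hmac.new(key, app_id.encode() + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

# Usage: the verifier issues a fresh nonce per critical system call.
nonce = os.urandom(16)
proof = make_proof("app-42", KEY_REGISTRAR["app-42"], nonce)
assert verify_proof("app-42", nonce, proof)
```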
Motion Planning of Uncertain Ordinary Differential Equation Systems
This work presents a novel motion planning framework, rooted in nonlinear programming theory, that treats uncertain fully- and under-actuated dynamical systems described by ordinary differential equations. Uncertainty in multibody dynamical systems comes from various sources, such as system parameters, initial conditions, sensor and actuator noise, and external forcing. Treatment of uncertainty in design is of paramount practical importance because all real-life systems are affected by it, and poor robustness and suboptimal performance result if it is not accounted for in a given design. In this work uncertainties are modeled using Generalized Polynomial Chaos and are quantified using a least-squares collocation method. The computational efficiency of this approach enables the inclusion of uncertainty statistics in the nonlinear programming optimization process. As such, the proposed framework allows the user to pose, and answer, new design questions related to uncertain dynamical systems.

Specifically, the new framework is explained in the context of forward, inverse, and hybrid dynamics formulations. The forward dynamics formulation, applicable to both fully- and under-actuated systems, prescribes deterministic actuator inputs which yield uncertain state trajectories. The inverse dynamics formulation is the dual of the forward dynamics formulation and is only applicable to fully-actuated systems; deterministic state trajectories are prescribed and yield uncertain actuator inputs. The inverse dynamics formulation is more computationally efficient, as it requires only algebraic evaluations and completely avoids numerical integration. Finally, the hybrid dynamics formulation is applicable to under-actuated systems, where it leverages the benefits of inverse dynamics for actuated joints and forward dynamics for unactuated joints; it prescribes actuated state and unactuated input trajectories which yield uncertain unactuated states and actuated inputs.

The benefits of the ability to quantify uncertainty when planning the motion of multibody dynamic systems are illustrated through several case studies. The resulting designs determine optimal motion plans, subject to deterministic and statistical constraints, for all possible systems within the probability space.
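To make the uncertainty-propagation step concrete, the sketch below expands a scalar linear ODE with one uncertain parameter in probabilists' Hermite polynomials and fits the chaos coefficients by least-squares collocation. This is only an illustration of the general technique, not the paper's framework; the toy dynamics, truncation order, and collocation points are all assumptions.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Toy system: dx/dt = -k * x with uncertain rate k = k0 * (1 + 0.1 * xi),
# xi ~ N(0, 1).  Expand x(T, xi) in probabilists' Hermite polynomials and
# fit the chaos coefficients by least-squares collocation.
k0, x0, T = 1.0, 1.0, 1.0
order = 4                                  # highest Hermite degree retained
xi_nodes = np.linspace(-3.0, 3.0, 20)      # collocation points in xi

# Deterministic solve at each collocation point (exact solution of the toy ODE).
x_T = x0 * np.exp(-k0 * (1.0 + 0.1 * xi_nodes) * T)

# Least-squares fit of coefficients c_i in x(T, xi) ~ sum_i c_i He_i(xi).
V = hermevander(xi_nodes, order)           # columns are He_0(xi) .. He_order(xi)
coeffs, *_ = np.linalg.lstsq(V, x_T, rcond=None)

# Mean and variance follow from orthogonality under the Gaussian weight:
# E[He_i * He_j] = i! * delta_ij.
factorials = np.array([math.factorial(i) for i in range(order + 1)])
mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 * factorials[1:])
print(f"mean(x(T)) ~ {mean:.4f}, var(x(T)) ~ {variance:.6f}")
```

Statistics obtained this way (mean, variance, or constraint-violation probabilities) are cheap enough to be evaluated inside each iteration of a nonlinear programming solver, which is the property the framework exploits.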
Use of Subimages in Fish Species Identification: A Qualitative Study
Many scholarly tasks involve working with subdocuments, or contextualized fine-grain information, i.e., with information that is part of some larger unit. A digital library (DL) facilitates management, access, retrieval, and use of collections of data and metadata through services. However, most DLs do not provide infrastructure or services to support working with subdocuments. Superimposed information (SI) refers to new information that is created to reference subdocuments in existing information resources. We combine this idea of SI with traditional DL services to define and develop a DL with SI (SI-DL). We explored the use of subimages and evaluated the use of a prototype SI-DL (SuperIDR) in fish species identification, a scholarly task that involves working with subimages. The contexts and strategies of working with subimages in SuperIDR suggest new and enhanced support (SI-DL services) for scholarly tasks that involve working with subimages, including new ways of querying and searching for subimages and associated information. The main contributions of our work are the insights gained from these findings on the use of subimages and of SuperIDR (a prototype SI-DL), which lead to recommendations for the design of digital libraries with superimposed information.
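As a purely illustrative aside (not SuperIDR's actual data model), superimposed information over subimages can be pictured as a "mark" that references a region of an existing image together with the new annotation content; the class and field names below are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubimageMark:
    """A piece of superimposed information: new content that references a
    rectangular region (the subimage) inside an existing base image."""
    image_uri: str          # the base document held in the digital library
    x: int                  # top-left corner of the referenced region (pixels)
    y: int
    width: int
    height: int
    annotation: str         # the new, superimposed information

# Example: annotating a fin region of a trout image for identification.
mark = SubimageMark(
    image_uri="dl://fish-collection/oncorhynchus_mykiss_001.jpg",
    x=410, y=120, width=80, height=60,
    annotation="Adipose fin present; useful cue for salmonid identification.",
)
print(mark.annotation)
```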
Between a Rock and a Cell Phone: Social Media Use during Mass Protests in Iran, Tunisia and Egypt
In this paper we examine the use of social media, and especially Twitter, in Iran, Tunisia, and Egypt during the mass political demonstrations and protests in June 2009, December 2010 - January 2011, and February 2011, respectively. We compare this usage with methods and findings from other studies on the use of Twitter in emergency situations, such as natural and man-made disasters. We draw on our own experiences and participant observations as an eyewitness in Iran (first author), and on Twitter data from Iran, Tunisia, and Egypt. In these three cases, Twitter at least partially filled a unique technology and communication gap. We summarize suggested directions for future research with a view to placing this work in the larger context of social media use in conditions of crisis and social convergence.
FATODE: A Library for Forward, Adjoint, and Tangent Linear Integration of ODEs
FATODE is a FORTRAN library for the integration of ordinary differential equations with direct and adjoint sensitivity analysis capabilities. The paper describes the capabilities, implementation, code organization, and usage of this package. FATODE implements four families of methods: explicit Runge-Kutta for nonstiff problems, and fully implicit Runge-Kutta, singly diagonally implicit Runge-Kutta, and Rosenbrock for stiff problems. Each family contains several methods with different orders of accuracy; users can add new methods by simply providing their coefficients. For each family the forward, adjoint, and tangent linear models are implemented. General-purpose solvers for dense and sparse linear algebra are used; users can easily incorporate problem-tailored linear algebra routines. The performance of the package is demonstrated on several test problems. To the best of our knowledge, FATODE is the first publicly available general-purpose package that offers forward and adjoint sensitivity analysis capabilities in the context of Runge-Kutta methods. A wide range of applications is expected to benefit from its use; examples include parameter estimation, data assimilation, optimal control, and uncertainty quantification.
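To give a feel for the tangent linear versus adjoint distinction mentioned above, here is a minimal Python sketch for a toy scalar ODE discretized with explicit Euler. It is not FATODE's API (FATODE is a Fortran library), and the problem, cost function, and variable names are assumptions; it only shows that the forward (tangent linear) and discrete adjoint sweeps produce the same parameter sensitivity.

```python
import numpy as np

# Toy problem: dy/dt = -p * y, y(0) = y0, cost J = y(T).
# Compute dJ/dp both with the tangent linear model (forward sweep) and the
# discrete adjoint model (reverse sweep) for an explicit Euler discretization.
p, y0, T, N = 2.0, 1.0, 1.0, 100
h = T / N

# Forward sweep with tangent linear model: s_k = d y_k / d p.
y, s = y0, 0.0
ys = [y0]                               # trajectory stored for the adjoint sweep
for _ in range(N):
    s = s * (1.0 - h * p) - h * y       # tangent of y_{k+1} = (1 - h p) y_k
    y = y * (1.0 - h * p)
    ys.append(y)
tangent_dJdp = s

# Reverse (adjoint) sweep: lam_k = dJ / d y_k, gradient accumulated on the way.
lam, adjoint_dJdp = 1.0, 0.0            # dJ/dy_N = 1 because J = y_N
for k in reversed(range(N)):
    adjoint_dJdp += lam * (-h * ys[k])  # lam_{k+1} * d y_{k+1} / d p
    lam = lam * (1.0 - h * p)           # lam_k = lam_{k+1} * d y_{k+1} / d y_k

print(tangent_dJdp, adjoint_dJdp)       # identical up to round-off
assert np.isclose(tangent_dJdp, adjoint_dJdp)
```

The adjoint sweep costs one extra pass regardless of how many parameters appear in the cost function, which is why adjoint sensitivities are preferred for data assimilation and optimal control with many parameters.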
Collecting, Analyzing and Visualizing Tweets using Open Source Tools
This tutorial will teach participants how to collect, analyze, and visualize results from Twitter data. We will demonstrate several different free, open-source, web-based tools that participants can use to collect Twitter data (e.g., Archivist, 140kit.com, TwapperKeeper), and show them a few different methods, tools, or programs they can use to analyze the data in a given collection. Finally, we will show participants visualization tools and programs they can use to present the analyses, such as tag clouds, graphs, and other data clustering techniques. As much as possible, this will be a hands-on tutorial, so participants can learn by making their own Twitter data collection, analysis, and visualization as part of the tutorial.
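As a small illustration of the analysis step (none of the named collection tools are used here), a term-frequency pass over already-collected tweet text is often the input to a tag cloud; the sample tweets and stop-word list below are made up for the example.

```python
from collections import Counter
import re

# Illustrative only: generic term-frequency analysis over collected tweet
# text, suitable as input to a tag cloud or bar-chart visualization.
tweets = [
    "Crisis mapping volunteers are coordinating on Twitter #egypt",
    "Protest updates spreading fast on Twitter tonight #jan25",
    "New tutorial on collecting Twitter data with open source tools",
]
stop_words = {"are", "on", "the", "with", "new", "a", "of"}

counts = Counter()
for tweet in tweets:
    for token in re.findall(r"#?\w+", tweet.lower()):
        if token not in stop_words:
            counts[token] += 1

# Top terms, ready to size words in a tag cloud.
for term, n in counts.most_common(5):
    print(f"{term}: {n}")
```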
Large High Resolution Displays for Co-Located Collaborative Intelligence Analysis
Large, high-resolution vertical displays carry the potential to increase the accuracy of collaborative sensemaking, given correctly designed visual analytics tools. From an exploratory user study using a fictional intelligence analysis task, we investigated how users interact with the display to construct spatial schemas and externalize information, as well as how they establish shared and private territories. We investigated the spatial strategies of users partitioned by tool type used (document- or entity-centric). We classified the types of territorial behavior exhibited in terms of how the users interacted with the display (integrated or independent workspaces). Next, we examined how territorial behavior impacted the common ground between the pairs of users. Finally, we recommend design guidelines for building co-located collaborative visual analytics tools specifically for use on large, high-resolution vertical displays.
Parallel Deterministic and Stochastic Global Minimization of Functions with Very Many Minima
The optimization of three problems with high dimensionality and many local minima is investigated using five different optimization algorithms: DIRECT, simulated annealing, Spall’s SPSA algorithm, the KNITRO package, and QNSTOP, a new algorithm developed at Indiana University.
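For readers unfamiliar with this class of problem, the sketch below runs a bare-bones simulated annealing loop (one of the five algorithms compared) on the Griewank function, a standard test function with very many local minima. It is purely illustrative: it is not the paper's parallel implementation, and the temperature schedule and step size are arbitrary choices.

```python
import math
import random

# Griewank function: global minimum 0 at the origin, many local minima.
def griewank(x):
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p

random.seed(0)
dim, temp, cooling, step = 10, 10.0, 0.999, 0.5
x = [random.uniform(-600.0, 600.0) for _ in range(dim)]
fx = griewank(x)
best_x, best_f = x[:], fx

for _ in range(50_000):
    # Propose a Gaussian perturbation and accept by the Metropolis criterion.
    cand = [xi + random.gauss(0.0, step) for xi in x]
    fc = griewank(cand)
    if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
        x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x[:], fx
    temp *= cooling            # geometric cooling schedule

print(f"best Griewank value found: {best_f:.4f}")
```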
Supporting Memorization and Problem Solving with Spatial Information Presentations in Virtual Environments
While it has been suggested that immersive virtual environments could provide benefits for educational applications, few studies have formally evaluated how the enhanced perceptual displays of such systems might improve learning. Using simplified memorization and problem-solving tasks as representative approximations of more advanced types of learning, we are investigating the effects of providing supplemental spatial information on the performance of learning-based activities within virtual environments. We performed two experiments to investigate whether users can take advantage of a spatial information presentation to improve performance on cognitive processing activities. In both experiments, information was presented either directly in front of the participant or wrapped around the participant along the walls of a surround display. In our first experiment, we found that the spatial presentation caused better performance on a memorization and recall task. To investigate whether the advantages of spatial information presentation extend beyond memorization to higher-level cognitive activities, our second experiment employed a puzzle-like task that required critical thinking using the presented information. The results indicate that no performance improvements or mental workload reductions were gained from the spatial presentation method compared to a non-spatial layout for our problem-solving task. The results of these two experiments suggest that supplemental spatial information can support performance improvements for cognitive processing and learning-based activities, but its effectiveness is dependent on the nature of the task and a meaningful use of space.
Information Content in the Context of 4D-Var Data Assimilation. II: Application to Global Ozone Assimilation
Data assimilation obtains improved estimates of the state of a physical system by combining imperfect model results with sparse and noisy observations of reality. Not all observations used in data assimilation are equally valuable. The ability to characterize the usefulness of different data points is important for analyzing the effectiveness of the assimilation system, for data pruning, and for the design of future sensor systems.

In the companion paper [Sandu et al. (2011)] we derived an ensemble-based computational procedure to estimate the information content of various observations in the context of 4D-Var. Here we apply this methodology to quantify two information metrics (the signal and degrees of freedom for signal) for satellite observations used in a global chemical data assimilation problem with the GEOS-Chem chemical transport model. The assimilation of a subset of data points characterized by the highest information content gives analyses that are comparable in quality with those obtained using the entire data set.
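For a rough sense of one of the metrics mentioned above, the sketch below computes the textbook degrees-of-freedom-for-signal diagnostic, tr(HK), for a small linear-Gaussian toy problem. This is the standard form, not the ensemble-based estimator of the companion paper, and all matrices are made-up illustrations.

```python
import numpy as np

# Toy linear-Gaussian setup: n-dimensional state with background covariance B,
# m observations y = H x + noise with covariance R.  The degrees of freedom
# for signal (DFS) of the observation set is tr(H K), where K is the Kalman
# gain; per-observation DFS values are the diagonal entries of H K.
rng = np.random.default_rng(0)
n, m = 6, 3
B = np.diag(rng.uniform(0.5, 2.0, n))          # background error covariance
R = np.diag(rng.uniform(0.1, 0.5, m))          # observation error covariance
H = rng.standard_normal((m, n))                # observation operator

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
HK = H @ K                                     # influence matrix in observation space

dfs_total = np.trace(HK)
dfs_per_obs = np.diag(HK)
print(f"total DFS = {dfs_total:.3f} (out of {m} observations)")
print("per-observation DFS:", np.round(dfs_per_obs, 3))
```

Ranking observations by their per-observation DFS and keeping only the highest-ranked subset is the kind of data-pruning decision the information metrics are meant to support.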