83 research outputs found
The Polylith Software Bus
We describe a system called POLYLITH that helps programmers prepare and
interconnect mixed-language software components for execution in
heterogeneous environments. POLYLITH's principal benefit is that
programmers are free to implement functional requirements separately from
their treatment of interfacing requirements; this means that once an
application has been developed for use in one execution environment (such
as a distributed network) it can be adapted for reuse in other
environments (such as a shared-memory multiprocessor) by automatic
techniques. This flexibility is provided without loss of performance.
We accomplish this by creating a new run-time organization for software.
An abstract decoupling agent, called the software toolbus, is
introduced between the system components. Heterogeneity in language and
architecture is accommodated since program units are prepared to
interface directly to the toolbus, not to other program units. Programmers
specify application structure in terms of a module interconnection
language (MIL); POLYLITH uses this specification to guide packaging
(static interfacing activities such as stub generation, source program
adaptation, compilation and linking). At run time, an implementation of
the toolbus abstraction may assist in message delivery, name service or
system reconfiguration.
(Also cross-referenced as UMIACS-TR-90-65)
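To make the toolbus organization concrete, here is a minimal Python sketch of the idea (the class and method names are illustrative, not POLYLITH's actual interface): modules bind named interfaces to a bus and exchange messages through it, never referencing each other directly.

```python
# Sketch of the toolbus idea. All names here (SoftwareBus, bind, send)
# are invented for illustration, not POLYLITH's actual API.
import queue

class SoftwareBus:
    """Decoupling agent: routes messages between registered interfaces."""
    def __init__(self):
        self.mailboxes = {}

    def bind(self, interface_name):
        # Each module binds an interface to the bus, not to another module.
        mbox = queue.Queue()
        self.mailboxes[interface_name] = mbox
        return mbox

    def send(self, interface_name, message):
        # Name service and message delivery live inside the bus, so modules
        # can be repackaged (threads, processes, hosts) without change.
        self.mailboxes[interface_name].put(message)

bus = SoftwareBus()
inbox = bus.bind("logger.write")
bus.send("logger.write", "hello from another module")
print(inbox.get())  # -> hello from another module
```

Because modules only ever name bus interfaces, the packaging step is free to realize delivery by procedure call, shared memory, or network messages.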
Detecting runtime anomalies in AJAX applications through trace analysis
AJAX applications are prone to security vulnerabilities due to the ease
of inadvertently entrusting the client with security-critical logic. We
characterize exploits of such vulnerabilities as violations of a
protocol implicitly defined in the client-side code, and we introduce a
method to detect and prevent these protocol violations in middleware,
without having to modify the original application. We accomplish this by
instrumenting the client code to send fragments of execution traces to
the server, allowing the server to efficiently prove that the incoming
message complies with the protocol. By combining replay execution and
constraint solving, our method exploits the componentized structure of
applications to minimize the server computing power and network
bandwidth required to monitor them. A prototype running on the Google
Web Toolkit platform demonstrates our method.
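A minimal sketch of the replay idea, assuming a hypothetical trace format and server-side component table (the paper's prototype targets the Google Web Toolkit; nothing below is its actual API): the server re-executes the client components named in a submitted trace and rejects messages the client code could not have produced.

```python
# Server-side replicas of the client's pure components (hypothetical).
COMPONENTS = {
    "total": lambda cart: sum(price for _, price in cart),
    "discount": lambda total: total * 0.9 if total > 100 else total,
}

def verify(trace, claimed_result):
    """Replay each step of the trace; reject if the claimed result does
    not match what the client-side code would actually compute."""
    value = trace["input"]
    for step in trace["steps"]:
        value = COMPONENTS[step](value)
    return value == claimed_result

cart = [("book", 30), ("lamp", 90)]
trace = {"input": cart, "steps": ["total", "discount"]}
print(verify(trace, 108.0))  # True: 120 * 0.9
print(verify(trace, 60.0))   # False: protocol violation detected
```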
Metrics-based investigation of distributed intrusion detection and attack surface reduction
Two distinct but related projects --- titled "Improved product assurance
through automatic trace generation and analysis" and "Improved cyber
security via decentralized intrusion detection and dynamic
reconfiguration" respectively --- have been under way in this laboratory,
both with support from the Office of Naval Research, which the authors
gratefully acknowledge. The purpose of this report is to frame the
broader goal we envision: ultimately, to understand how not just to
measure properties of a running system that characterize its
susceptibility to vulnerabilities in the eyes of potential intruders,
but also to adjust the running system dynamically so as to reduce or
remove those vulnerabilities. What is of greatest concern in a
running system is not the vulnerabilities we already know about ---
after all, they would likely have been repaired at the point of
discovery --- but rather the vulnerability that only an intruder
understands. Our hypothesis is that static analysis together with
measurements at run time may telegraph suggestions for dynamic
reconfiguration which might repel an intruder, without loss of service
by the system, long enough for operators to identify and understand
whatever might have been the specific defect that had been probed. The
present report updates our statement of the long-term research goals and
presents our status on the two projects under way.
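As a hedged illustration of the measure-then-reconfigure loop envisioned here, the Python sketch below scores a host's attack surface from its exposed services and proposes closing the riskiest ones; the metric and the weights are invented for illustration and are not the projects' actual measures.

```python
# Illustrative attack-surface metric; weights are assumptions, not data.
RISK_WEIGHTS = {"telnet": 10, "ftp": 6, "http": 3, "ssh": 2}

def attack_surface(open_services):
    """Score a host by the entry points it currently exposes."""
    return sum(RISK_WEIGHTS.get(svc, 5) for svc in open_services)

def propose_reconfiguration(open_services, budget=1):
    """Suggest closing the highest-risk services while preserving the
    rest, so the system keeps serving while operators investigate."""
    ranked = sorted(open_services,
                    key=lambda s: RISK_WEIGHTS.get(s, 5), reverse=True)
    return ranked[:budget]

services = ["http", "ssh", "telnet"]
print(attack_surface(services))           # 15
print(propose_reconfiguration(services))  # ['telnet']
```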
Parallel I/O Using a Distributed Disk Cluster: An Exercise in Tailored Prototyping
Tailored prototyping refers to an emerging process for
prototyping software applications, emphasizing a disciplined
experimental approach in order for developers to obtain an
understanding of system characteristics before committing to costly
design decisions. In our approach, the design of software
constituting prototype apparatus is driven by experimental hypotheses
concerning risk, rather than an application's functional requirements.
This paper describes the principles behind tailored prototyping, then
illustrates them in concrete terms by describing their application in
a pilot project. The pilot used in our illustration is a parallel I/O
service --- a mechanism designed to deliver pages, in parallel, from a
cluster of distributed disks. The performance results show that this
parallel I/O system can, in certain circumstances, deliver higher page
throughput from multiple remote disks than from a single local disk.
The pilot project exemplifies our prototyping method, which is
applicable to a wide variety of software prototyping activities.
(Also cross-referenced as UMIACS-TR-95-18)
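The following Python sketch illustrates the striping idea behind the pilot (the disks are simulated with sleeps; the actual apparatus used real distributed disks): pages are spread across several remote disks and fetched concurrently, so aggregate throughput can exceed what one disk delivers.

```python
# Simulated parallel page delivery from a cluster of "remote disks".
from concurrent.futures import ThreadPoolExecutor
import time

PAGE_COST = 0.01  # simulated cost per page: ~10 ms of disk/network time

def read_page(disk_id, page_no):
    time.sleep(PAGE_COST)  # stand-in for remote disk latency
    return (disk_id, page_no)

def read_striped(num_pages, num_disks):
    # Page p lives on disk p mod num_disks; issue all reads in parallel.
    with ThreadPoolExecutor(max_workers=num_disks) as pool:
        futures = [pool.submit(read_page, p % num_disks, p)
                   for p in range(num_pages)]
        return [f.result() for f in futures]

start = time.time()
pages = read_striped(num_pages=32, num_disks=4)
print(f"{len(pages)} pages in {time.time() - start:.3f}s")  # roughly 4x
```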
Reconfiguration of Hierarchical Tuple-Spaces: Experiments with Linda-Polylith
A hierarchical tuple-space model is proposed for dealing with issues of
complexity faced by programmers who build and manage programs in distributed
networks. We present our research on a Linda-style approach to both
configuration and reconfiguration. After presenting the model used in
our work, we describe an experimental implementation of a programming
system based upon the model.
(Also cross-referenced as UMIACS-TR-93-100)
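A minimal Python sketch of a Linda-style tuple space, with one plausible hierarchical reading (a child space falls back to its enclosing space on a failed match; the paper's actual semantics may differ). The operation names out/inp are the classic Linda ones.

```python
class TupleSpace:
    def __init__(self, parent=None):
        self.tuples = []
        self.parent = parent  # enclosing space in the hierarchy, if any

    def out(self, *tup):
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        # None in the pattern acts as a wildcard (a Linda "formal").
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def inp(self, *pattern):
        """Non-blocking 'in': remove and return a matching tuple,
        searching this space first, then enclosing spaces."""
        for tup in self.tuples:
            if self._match(pattern, tup):
                self.tuples.remove(tup)
                return tup
        return self.parent.inp(*pattern) if self.parent else None

root = TupleSpace()
child = TupleSpace(parent=root)
root.out("config", "workers", 8)
print(child.inp("config", "workers", None))  # ('config', 'workers', 8)
```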
Load Balancing for Parallel Loops in Workstation Clusters
Load imbalance is a serious impediment to achieving good performance in
parallel processing. Global load balancing schemes do not adequately
balance parallel tasks generated from a single application.
Dynamic loop scheduling methods are known to be useful in balancing parallel
loops on shared-memory multiprocessor machines. However, their centralized
nature causes a bottleneck for the relatively small number of processors in
workstation clusters because of order-of-magnitude differences in
communications overheads. Moreover, improvements of basic loop scheduling
methods have not dealt effectively with irregularly distributed workloads in
parallel loops, which commonly occur in applications for workstation clusters.
In this paper, we present a new decentralized balancing method for parallel
loops on workstation clusters.
(Also cross-referenced as UMIACS-TR-96-6)
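For context, here is a Python sketch of the centralized baseline the paper improves on, guided self-scheduling, in which idle workers grab shrinking chunks of iterations from a single shared index; that shared index is precisely the bottleneck a decentralized scheme avoids. This is the textbook method, not the paper's algorithm.

```python
import threading

def guided_schedule(n_iters, n_workers, body):
    next_i = 0
    lock = threading.Lock()  # the centralized bottleneck in question

    def worker():
        nonlocal next_i
        while True:
            with lock:
                remaining = n_iters - next_i
                if remaining == 0:
                    return
                # Guided rule: take remaining / n_workers iterations.
                chunk = max(1, remaining // n_workers)
                start, next_i = next_i, next_i + chunk
            for i in range(start, start + chunk):
                body(i)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

results = []
guided_schedule(100, 4, lambda i: results.append(i * i))
print(len(results))  # 100
```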
A Tool for Statistical Detection of Faults in Internet Protocol Networks
While the number and variety of hazards to computer security have increased at
an alarming rate, the proliferation of tools to combat this threat has not
grown proportionally. Moreover, most existing tools rely on human
intervention to recognize and diagnose new threats. We propose a general
framework for identifying hazardous computer transactions by analyzing key
metrics in network transactions. While a thorough determination of the
particular traits to track would be a product of the research, we hypothesize
that some or all of the following variables would yield high correlations with
certain undesirable network transactions:
Source Address
Destination Address/Port
Packet Size (overall, header, payload)
Packet Rate (overall, Source, Destination, Source/Destination)
Transaction Frequency (per Address)
By examining statistical correlations between these variables, we hope to
be able to distinguish --- and normalize for changes over time --- a healthy
network from one that is being attacked or performing an attack.
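As a hedged illustration of this kind of statistical test, the Python sketch below tracks per-source packet rates and flags sources whose current rate deviates sharply from their history (a simple z-score rule); Culebra's actual metrics and thresholds would be a product of the research.

```python
import statistics

def flag_anomalies(history, current, z_threshold=3.0):
    """history: {source: [past rates]}, current: {source: rate}.
    Returns sources whose current rate is anomalously high."""
    flagged = []
    for src, rates in history.items():
        mu = statistics.mean(rates)
        sigma = statistics.stdev(rates) or 1e-9  # guard zero variance
        if (current.get(src, 0.0) - mu) / sigma > z_threshold:
            flagged.append(src)
    return flagged

history = {"10.0.0.5": [100, 110, 95, 105], "10.0.0.9": [50, 55, 45, 60]}
current = {"10.0.0.5": 104, "10.0.0.9": 900}  # 10.0.0.9 is flooding
print(flag_anomalies(history, current))        # ['10.0.0.9']
```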
Central to this research is that the class of information we are
analyzing is available without intervention by the participants of the
network transactions and, indeed, can be gathered without their
knowledge. This characteristic could allow Internet service providers or
corporations to identify threats without large-scale deployment of an
intrusion detection mechanism on each system. Furthermore, combining the
ability to identify the existence and source of a network threat with
the automatic configuration abilities of common network hardware allows
rapid reaction to attacks by shutting down connectivity to the
originators of the exploit.
This paper will detail the design of a set of tools --- dubbed Culebra ---
capable of remotely diagnosing troubled networks. We will then simulate an
attack on a network to gauge the effectiveness of Culebra. Ultimately, the type
of data gathered by these tools can be used to develop a database of attack
patterns, which, in turn, could be used to proactively prevent assaults on
networks from remote locations.
UMIACS-TR-2002-7
A Framework for Dynamic Reconfiguration of Distributed Programs
Current techniques for a software engineer to change a computer program
are limited to static activities; once the application begins executing,
there are few reliable ways to reconfigure it. We have developed a
general framework for reconfiguring application software dynamically.
A sound method for managing changes in a running program allows
developers to perform maintenance activities without loss of the overall
system's service. The same methods also support some forms of load
balancing in a distributed system, and research in software fault
tolerance. Our goal has been to create an environment for organizing and
effecting software reconfiguration activities dynamically. First we
present the overall framework within which reconfiguration is possible,
then we describe our formal approach for programmers to capture the
state of a process abstractly. Next, we describe our implementation of
this method within an environment for experimenting with program
reconfiguration. We conclude with a summary of the key research problems
that we are continuing to pursue in this area.
(Also cross-referenced as UMIACS-TR-93-78)
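A minimal Python sketch of the state-capture idea (the capture/restore hooks are illustrative, not the paper's formal method): a module exports its state in an implementation-neutral form, so a replacement module can resume from it while the system keeps running.

```python
class Counter:
    def __init__(self):
        self.count = 0

    def handle(self, msg):
        self.count += 1

    def capture(self):
        # Abstract state: plain data, independent of this implementation.
        return {"count": self.count}

class BufferedCounter(Counter):
    """A drop-in replacement installed while the system keeps running."""
    @classmethod
    def restore(cls, state):
        new = cls()
        new.count = state["count"]
        return new

old = Counter()
for m in ("a", "b", "c"):
    old.handle(m)
# Reconfigure without losing service: quiesce old, transfer state, swap.
new = BufferedCounter.restore(old.capture())
print(new.count)  # 3 -- state survived the swap
```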
Software Engineering of Virtual Environments: Integration and Interconnection
Virtual environments (VEs) are proving to be valuable resources in many
fields, and they are even more useful when they involve multiple users
in distributed environments. Many useful VEs were designed to be
stand-alone applications, without consideration for integrating them
into a distributed VE. Our approach to connecting VEs is to define an
abstract model for the interconnection, use integration tools to do as
much of the work automatically as possible, and use a run-time
environment to support the interconnection. With our experiences to
date, we are learning that certain classes of techniques are common to
all solutions using this approach. We have summarized these in a set
of requirements and are building a system that features these
techniques as first-class objects. In the future, these interconnection
problems will be solvable cheaply, and engineers of future VEs will have
guidance on how to organize their implementations so that
interconnection with other VEs is easier.
In this paper we coin the phrase "software engineering of virtual
environments" (SEVE) to describe the above activities.
(Also cross-referenced as UMIACS-TR-96-89)
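As a hedged sketch of interconnection through an abstract model, the Python below wraps stand-alone VEs in adapters that speak a shared event vocabulary; all class and method names here are hypothetical, not the system described in the paper.

```python
class AbstractEvent:
    def __init__(self, kind, payload):
        self.kind, self.payload = kind, payload

class VEAdapter:
    """Maps one VE's native notion of events to the abstract model, so
    VEs built without distribution in mind can still be composed."""
    def __init__(self, name, translate_in, peers):
        self.name, self.translate_in, self.peers = name, translate_in, peers

    def emit(self, kind, payload):
        event = AbstractEvent(kind, payload)
        for peer in self.peers:
            peer.receive(event)

    def receive(self, event):
        # Each VE renders the shared event in its own vocabulary.
        print(f"{self.name}: {self.translate_in(event)}")

# Two VEs with different native vocabularies, joined via adapters.
flight = VEAdapter("flight-sim", lambda e: f"traffic {e.payload}", [])
driving = VEAdapter("driving-sim", lambda e: f"vehicle at {e.payload}", [])
flight.peers.append(driving)
flight.emit("position", (12.0, 4.5))  # driving-sim: vehicle at (12.0, 4.5)
```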
- …