An Infrastructure for the Dynamic Distribution of Web Applications and Services
This paper presents the design and implementation of an infrastructure that enables any Web application, regardless of its current state, to be stopped and uninstalled from a particular server, transferred to a new server, then installed, loaded, and resumed, with all these events occurring "on the fly" and fully transparently to clients. Such functionality allows entire applications to move fluidly from server to server, reducing the overhead required to administer the system and increasing its performance in several ways: (1) dynamically replicating new instances of an application to several servers to raise throughput for scalability, (2) moving applications between servers to achieve load balancing or other resource management goals, and (3) caching entire applications on servers located closer to clients.
National Science Foundation (9986397
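The stop-capture-transfer-resume cycle the abstract describes can be sketched as follows; this is a minimal illustration, not the paper's actual infrastructure, and the `Counter` class and `migrate` function are hypothetical names:

```python
import pickle

# Hypothetical stateful "application" whose live state is captured,
# transferred, and resumed elsewhere, mimicking the stop/uninstall/
# install/resume cycle described in the abstract.
class Counter:
    def __init__(self):
        self.hits = 0

    def handle_request(self):
        self.hits += 1
        return self.hits

def migrate(app):
    """Serialize the app's state (stop + capture), then rebuild it
    as if installed and resumed on a new server."""
    blob = pickle.dumps(app)       # state captured for transfer
    return pickle.loads(blob)      # resumed on the "destination" host

app = Counter()
app.handle_request()
app.handle_request()
resumed = migrate(app)
print(resumed.handle_request())    # → 3: state survives the move
```

A real system would additionally quiesce in-flight requests and redirect clients, which this sketch omits.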
Repository Replication Using NNTP and SMTP
We present the results of a feasibility study using shared, existing, network-accessible infrastructure for repository replication. We investigate how dissemination of repository contents can be "piggybacked" on top of existing email and Usenet traffic. Long-term persistence of the replicated repository may be achieved thanks to current policies and procedures which ensure that mail messages and news posts are retrievable for evidentiary and other legal purposes for many years after their creation date. While the preservation issues of migration and emulation are not addressed by this approach, it does provide a simple method of refreshing content with unknown partners.
Comment: This revised version has 24 figures and a more detailed discussion of the experiments conducted by us.
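The "piggybacking" idea can be sketched by attaching a repository record to an ordinary email message; the addresses, subject line, and item identifier below are hypothetical, and the message is only built and parsed, not actually sent:

```python
import email
from email.message import EmailMessage

# Illustrative sketch of carrying a repository record inside an email
# message, in the spirit of the paper's SMTP-based replication.
record = b"<oai_dc:dc>...metadata for one repository item...</oai_dc:dc>"

msg = EmailMessage()
msg["From"] = "repository@example.org"      # hypothetical source archive
msg["To"] = "archive@example.net"           # hypothetical replication partner
msg["Subject"] = "repo-replication: one repository item"
msg.set_content("Replicated repository item attached.")
msg.add_attachment(record, maintype="application", subtype="xml",
                   filename="item.xml")

# A receiving archive that retains its mail can later recover the item
# from the stored message.
parsed = email.message_from_bytes(bytes(msg))
attachment = [p for p in parsed.walk()
              if p.get_filename() == "item.xml"][0]
print(attachment.get_payload(decode=True) == record)   # → True
```

Actual delivery would go through `smtplib` to a mail relay; the round trip above only shows that the record survives MIME encoding intact.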
Design and Implementation of a Distributed Middleware for Parallel Execution of Legacy Enterprise Applications
A typical enterprise uses a local area network of computers to perform its
business. During the off-working hours, the computational capacities of these
networked computers are underused or unused. In order to utilize this
computational capacity an application has to be recoded to exploit concurrency
inherent in a computation which is clearly not possible for legacy applications
without any source code. This thesis presents the design and implementation of a
distributed middleware which can automatically execute a legacy application on
multiple networked computers by parallelizing it. This middleware runs multiple
copies of the binary executable code in parallel on different hosts in the
network. It wraps the binary executable code of the legacy application in
order to capture kernel-level data-access system calls and perform them
over multiple computers in a distributed, safe, and conflict-free manner. The
middleware also incorporates a dynamic scheduling technique to execute the
target application in minimum time by scavenging the available CPU cycles of
the hosts in the network. This dynamic scheduling also accommodates changes in
the hosts' CPU availability over time, rescheduling the
replicas performing the computation to minimize the execution time. A prototype
implementation of this middleware has been developed as a proof of concept of
the design. This implementation has been evaluated with a few typical case
studies, and the test results confirm that the middleware works as expected.
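The dynamic scheduling idea, assigning replicas to the hosts with the most spare CPU cycles, can be sketched as a simple greedy policy; this is not the thesis's actual scheduler, and the host names and the one-core-per-replica assumption are illustrative:

```python
# Greedy sketch of cycle-scavenging placement: each replica goes to the
# host with the most remaining free CPU at the time it is assigned.
def schedule(replicas, cpu_free):
    """Map each replica to a host, given free CPU (in cores) per host."""
    load = {h: 0.0 for h in cpu_free}
    plan = {}
    for r in replicas:
        # pick the host maximizing remaining free capacity
        best = max(cpu_free, key=lambda h: cpu_free[h] - load[h])
        plan[r] = best
        load[best] += 1.0   # assume one replica consumes one core
    return plan

plan = schedule(["r1", "r2", "r3"], {"hostA": 2.0, "hostB": 1.0})
print(plan)
```

Rescheduling under changing availability amounts to re-running `schedule` with updated `cpu_free` and migrating the replicas whose assignment changed.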
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research.
Comment: 46 pages, 16 figures, Technical Report
History of malware
In the past three decades almost everything has changed in the field of malware
and malware analysis: from malware created as a proof of a security concept,
and malware created for financial gain, to malware created to sabotage
infrastructure. In this work we focus on the history and evolution of malware
and describe the most important malware families.
Comment: 11 pages, 8 figures describing the history and evolution of PC malware,
from the first PC malware to Stuxnet, Duqu and Flame. This article has been
withdrawn due to some errors in the text and because the journal in which it
was published asked for it to be withdrawn from other sources.
User's and Administrator's Manual of AMGA Metadata Catalog v 2.4.0 (EMI-3)
SLAng: A language for defining service level agreements
Application or web services are increasingly being used across organisational boundaries. Moreover, new services are being introduced at the network and storage level. Languages to specify interfaces for such services have been researched and transferred into industrial practice. We investigate end-to-end quality of service (QoS) and highlight that QoS provision has multiple facets and requires complex agreements between network services, storage services and middleware services. We introduce SLAng, a language for defining Service Level Agreements (SLAs) that accommodates these needs. We illustrate how SLAng is used to specify QoS in a case study that uses a web services specification to support the processing of images across multiple domains, and we evaluate our language based on it.
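The kind of QoS clause such an SLA might capture can be sketched as a simple check; the clause ("95% of requests complete within 2 seconds") and the observed latencies below are illustrative, not taken from the SLAng case study:

```python
# Hypothetical check of one end-to-end QoS clause of the kind an SLA
# language could express: a latency bound that must hold for a given
# fraction of requests.
def sla_satisfied(latencies_s, threshold_s=2.0, quantile=0.95):
    """True if at least `quantile` of requests finish within `threshold_s`."""
    within = sum(1 for t in latencies_s if t <= threshold_s)
    return within / len(latencies_s) >= quantile

observed = [0.4, 1.1, 0.9, 2.5, 0.7, 1.8, 0.6, 1.2, 0.8, 1.0]
print(sla_satisfied(observed))   # → False (only 90% within 2 s)
```

An SLA language like SLAng additionally specifies the parties, measurement points, and penalties around such a clause, which a bare predicate like this omits.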
Wearable proximity sensors for monitoring a mass casualty incident exercise: a feasibility study
Over the past several decades, naturally occurring and man-made mass casualty
incidents (MCI) have increased in frequency and number, worldwide. To test the
impact of such events on medical resources, simulations can provide a safe,
controlled setting while replicating the chaotic environment typical of an
actual disaster. A standardised method to collect and analyse data from mass
casualty exercises is needed, in order to assess preparedness and performance
of the healthcare staff involved. We report on the use of wearable proximity
sensors to measure proximity events during a MCI simulation. We investigated
the interactions between medical staff and patients, to evaluate the time
dedicated by the medical staff to the victims with respect to the severity
of their injuries and the staff roles. We estimated the presence of the patients
in the different spaces of the field hospital, in order to study the patients'
flow. Data were obtained and collected through the deployment of wearable
proximity sensors during a mass casualty incident functional exercise. The
scenario included two areas: the accident site and the Advanced Medical Post
(AMP), and the exercise lasted 3 hours. A total of 238 participants simulating
medical staff and victims were involved. Each participant wore a proximity
sensor and 30 fixed devices were placed in the field hospital. The contact
networks show a heterogeneous distribution of the cumulative time spent in
proximity by participants. We obtained contact matrices based on cumulative
time spent in proximity between victims and the rescuers. Our results showed
that the time spent in proximity by the healthcare teams with the victims is
related to the severity of the patient's injury. The analysis of patients' flow
showed that the presence of patients in the rooms of the hospital is consistent
with triage code and diagnosis, and no obvious bottlenecks were found.
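The contact matrices described above can be sketched as a simple aggregation of proximity events into cumulative time per pair; the event tuples and role names below are illustrative, not the exercise's actual data:

```python
from collections import defaultdict

# Sketch of building a contact matrix (cumulative proximity time per
# role pair) from proximity events of the kind the wearable sensors
# record. Roles and durations are made up for illustration.
events = [
    ("doctor", "red_patient",   600),   # (role A, role B, seconds)
    ("doctor", "green_patient", 120),
    ("nurse",  "red_patient",   480),
    ("doctor", "red_patient",   300),
]

matrix = defaultdict(float)
for a, b, seconds in events:
    pair = tuple(sorted((a, b)))        # proximity is symmetric
    matrix[pair] += seconds

print(matrix[("doctor", "red_patient")])   # → 900.0
```

From such a matrix one can then compare, as the study does, time spent with victims against their triage severity.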