Performance Analysis of Publish/Subscribe Systems
The Desktop Grid offers solutions to overcome several challenges and to
answer the growing needs of scientific computing. Its technology consists
mainly in exploiting geographically dispersed resources to run complex
applications that demand substantial computing power and/or large storage
capacity. However, as the number of resources increases, scalability,
self-organisation, dynamic reconfiguration, decentralisation and performance
become more and more essential. Since such properties are exhibited by P2P
systems, the convergence of grid computing and P2P computing seems natural. In
this context, this paper evaluates the scalability and performance of P2P tools
for registering and discovering services. Three protocols are used for this
purpose: Bonjour, Avahi and Free-Pastry. We study the behaviour of these
protocols with respect to two criteria: the time elapsed to register services
and the time needed to discover new services. Our aim is to analyse these
results in order to choose the best protocol for building a decentralised
middleware for desktop grids.
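The two criteria the paper measures can be illustrated with a minimal sketch. This is not the API of Bonjour, Avahi or Free-Pastry; it uses a hypothetical in-process registry purely to show the measurement methodology (elapsed time for registration versus discovery):

```python
import time

class ServiceRegistry:
    """Hypothetical in-process stand-in for a discovery backend."""
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint):
        self._services[name] = endpoint

    def discover(self, name):
        return self._services.get(name)

def time_registration(registry, n):
    """Elapsed time to register n services (first criterion)."""
    start = time.perf_counter()
    for i in range(n):
        registry.register(f"svc-{i}", f"host-{i}:9000")
    return time.perf_counter() - start

def time_discovery(registry, n):
    """Elapsed time to look up n registered services (second criterion)."""
    start = time.perf_counter()
    found = sum(registry.discover(f"svc-{i}") is not None for i in range(n))
    return time.perf_counter() - start, found

registry = ServiceRegistry()
reg_elapsed = time_registration(registry, 1000)
disc_elapsed, found = time_discovery(registry, 1000)
print(found)  # → 1000
```

In the paper's setting, the same timers would wrap the real registration and browse calls of each protocol, so the backends can be compared under an increasing number of services.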
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects to middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, by definition MTC applications are structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
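The abstract defines MTC applications as graphs of discrete tasks whose input/output dependencies form the edges. A sketch of how a dispatcher might order such a graph, using Kahn's topological sort on a small hypothetical workflow (the task names are illustrative, not from the report):

```python
from collections import defaultdict, deque

def dispatch_order(tasks, deps):
    """Order tasks so each runs only after the tasks whose outputs it consumes.
    deps maps task -> set of predecessor tasks (the graph edges)."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    consumers = defaultdict(list)
    for t, inputs in deps.items():
        for d in inputs:
            consumers[d].append(t)
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in consumers[t]:  # t's outputs are now available
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(tasks):
        raise ValueError("dependency cycle in task graph")
    return order

tasks = ["fetch", "split", "analyze_a", "analyze_b", "merge"]
deps = {"split": {"fetch"},
        "analyze_a": {"split"}, "analyze_b": {"split"},
        "merge": {"analyze_a", "analyze_b"}}
order = dispatch_order(tasks, deps)
print(order)  # → ['fetch', 'split', 'analyze_a', 'analyze_b', 'merge']
```

The point the report makes is that with very short tasks, the per-iteration dispatch cost of a loop like this (plus I/O for passing outputs) dominates, which is why MTC middleware must minimise dispatch overhead.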
Towards an HLA Run-time Infrastructure with Hard Real-time Capabilities
Our work takes place in the context of the HLA standard and its application to real-time systems. The HLA standard is inadequate for taking into account the different constraints involved in real-time computer systems. Much work has been invested in providing real-time capabilities to Run-Time Infrastructures (RTIs) so that they can run real-time simulations. Most of these initiatives focus on major issues including QoS guarantees, knowledge of the Worst Case Transit Time (WCTT), and the scheduling services provided by the underlying operating system. Even though our ultimate objective is to achieve real-time capabilities for distributed HLA federation executions, this paper describes preliminary work focused on achieving hard real-time properties for HLA federations running on a single computer under the Linux operating system. We propose a novel global bottom-up approach for designing real-time Run-Time Infrastructures and a formal model for validating uniprocessor, and later distributed, real-time simulation with CERTI.
HiTrust: building cross-organizational trust relationship based on a hybrid negotiation tree
Small-world phenomena have been observed in existing peer-to-peer (P2P) networks, and they have proved useful in the design of P2P file-sharing systems. Most studies that construct small-world behaviour in P2P networks are based on clustering peer nodes into groups, communities, or clusters. However, managing an additional multilayer topology increases maintenance overhead, especially in highly dynamic environments. In this paper, we present Social-like P2P systems (Social-P2Ps) for object discovery that self-manage the P2P topology using human tactics from social networks. In Social-P2Ps, queries are routed intelligently even with limited cached knowledge and node connections. Unlike community-based P2P file-sharing systems, we do not create or maintain peer groups or communities explicitly. Instead, each node connects spontaneously to peer nodes with the same interests as a result of its daily searches.
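The abstract does not give the routing algorithm, but the idea that links emerge from everyday searches and that queries follow cached interest overlap can be sketched as follows (a hypothetical illustration, not the paper's protocol):

```python
class Peer:
    def __init__(self, name, interests):
        self.name = name
        self.interests = set(interests)
        self.links = {}  # peer name -> shared interests (cached knowledge)

    def meet(self, other):
        """Record a peer encountered during a search; a link forms only when
        interests overlap, so clusters emerge without explicit group management."""
        shared = self.interests & other.interests
        if shared:
            self.links[other.name] = shared

    def next_hop(self, topic):
        """Forward a query toward the cached peer most associated with topic."""
        candidates = [(name, shared) for name, shared in self.links.items()
                      if topic in shared]
        if not candidates:
            return None  # fall back to e.g. random walk / flooding
        return max(candidates, key=lambda c: len(c[1]))[0]

a = Peer("a", {"jazz", "grid"})
b = Peer("b", {"jazz", "rock"})
c = Peer("c", {"grid", "hpc"})
a.meet(b)
a.meet(c)
print(a.next_hop("grid"))  # → c
```

No community membership is stored anywhere; the interest clusters exist only implicitly in each node's cached links, which matches the paper's claim of avoiding multilayer topology maintenance.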
Wireless Sensor Network Virtualization: A Survey
Wireless Sensor Networks (WSNs) are the key components of the emerging
Internet-of-Things (IoT) paradigm. They are now ubiquitous and used in a
plurality of application domains. WSNs are still domain specific and usually
deployed to support a specific application. However, as WSN nodes are becoming
more and more powerful, it is increasingly pertinent to investigate how
multiple applications could share the very same WSN infrastructure.
Virtualization is a technology that can potentially enable this sharing. This
paper is a survey on WSN virtualization. It provides a comprehensive review of
the state-of-the-art and an in-depth discussion of the research issues. We
introduce the basics of WSN virtualization and motivate its pertinence with
carefully selected scenarios. Existing works are presented in detail and
critically evaluated using a set of requirements derived from the scenarios.
The pertinent research projects are also reviewed. Several research issues are
also discussed with hints on how they could be tackled.
Comment: Accepted for publication on 3rd March 2015 in a forthcoming issue of
IEEE Communications Surveys and Tutorials. This version has not been
proof-read and may have some inconsistencies. Please refer to the final
version published in IEEE Xplore.
A FRAMEWORK FOR BIOPROFILE ANALYSIS OVER GRID
An important trend in modern medicine is towards individualisation of healthcare to tailor
care to the needs of the individual. This makes it possible, for example, to personalise
diagnosis and treatment to improve outcome. However, the benefits of this can only be fully
realised if healthcare and ICT resources are exploited (e.g. to provide access to relevant data,
analysis algorithms, knowledge and expertise). Potentially, grid can play an important role
in this by allowing sharing of resources and expertise to improve the quality of care. The
integration of grid and the new concept of bioprofile represents a new topic in the healthgrid
for individualisation of healthcare.
A bioprofile represents a personal dynamic "fingerprint" that fuses together a person's
current and past bio-history, biopatterns and prognosis. It combines not just data, but also
analysis and predictions of future or likely susceptibility to disease, such as brain diseases
and cancer. The creation and use of bioprofile require the support of a number of healthcare
and ICT technologies and techniques, such as medical imaging and electrophysiology and
related facilities, analysis tools, data storage and computation clusters. The need to share
clinical data, storage and computation resources between different bioprofile centres creates
not only local problems, but also global problems.
Existing ICT technologies are inappropriate for bioprofiling because of the difficulties in the
use and management of heterogeneous IT resources at different bioprofile centres. Grid as an
emerging resource sharing concept fulfils the needs of bioprofile in several aspects, including
discovery, access, monitoring and allocation of distributed bioprofile databases, computation
resources, bioprofile knowledge bases, etc. However, the challenge remains of
how to integrate grid and bioprofile technologies to offer an advanced
distributed bioprofile environment supporting individualised healthcare.
The aim of this project is to develop a framework for one of the key meta-level bioprofile
applications: bioprofile analysis over grid to support individualised healthcare. Bioprofile
analysis is a critical part of bioprofiling (i.e. the creation, use and update of bioprofiles).
Analysis makes it possible, for example, to extract markers from data for diagnosis and to
assess an individual's health status. The framework provides a basis for a "grid-based" solution
to the challenge of "distributed bioprofile analysis" in bioprofiling. The main contributions
of the thesis are fourfold:
A. An architecture for bioprofile analysis over grid. The design of a suitable architecture
is fundamental to the development of any ICT systems. The architecture creates a
means for categorisation, determination and organisation of core grid components to
support the development and use of grid for bioprofile analysis;
B. A service model for bioprofile analysis over grid. The service model proposes a
service design principle, a service architecture for bioprofile analysis over grid, and
a distributed EEG analysis service model. The service design principle addresses
the main service design considerations behind the service model, in the aspects of
usability, flexibility, extensibility, reusability, etc. The service architecture identifies
the main categories of services and outlines an approach in organising services to
realise certain functionalities required by distributed bioprofile analysis applications.
The EEG analysis service model demonstrates the utilisation and development of
services to enable bioprofile analysis over grid;
C. Two grid test-beds and a practical implementation of EEG analysis over grid. The two
grid test-beds: the BIOPATTERN grid and PlymGRID are built based on existing
grid middleware tools. They provide essential experimental platforms for research in
bioprofiling over grid. The work here demonstrates how resources, grid middleware
and services can be utilised, organised and implemented to support distributed EEG
analysis for early detection of dementia. The distributed Electroencephalography
(EEG) analysis environment can be used to support a variety of research activities in
EEG analysis;
D. A scheme for organising multiple (heterogeneous) descriptions of individual grid
entities for knowledge representation of grid. The scheme solves the compatibility
and adaptability problems in managing heterogeneous descriptions (i.e. descriptions
using different languages and schemas/ontologies) for a collaborative representation of
a grid environment in different scales. It underpins the concept of bioprofile analysis
over grid in the aspect of knowledge-based global coordination between components
of bioprofile analysis over grid.
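Contribution D describes organising multiple heterogeneous descriptions of the same grid entity. The thesis abstract does not specify the mechanism, but one common pattern it suggests is an adapter layer that maps descriptions written against different schemas onto a shared view; the schema names and fields below are hypothetical:

```python
# Hypothetical adapters: each maps one schema's description of a grid
# entity onto a common set of keys.
def from_schema_a(desc):
    return {"id": desc["resource_id"], "cpus": desc["cpu_count"]}

def from_schema_b(desc):
    return {"id": desc["name"], "cpus": desc["processors"]}

ADAPTERS = {"schema_a": from_schema_a, "schema_b": from_schema_b}

def merge_descriptions(described):
    """Combine multiple (schema, description) pairs for one entity into a
    single common-schema view, later pairs refining earlier ones."""
    merged = {}
    for schema, desc in described:
        merged.update(ADAPTERS[schema](desc))
    return merged

merged = merge_descriptions([
    ("schema_a", {"resource_id": "node-7", "cpu_count": 16}),
    ("schema_b", {"name": "node-7", "processors": 16}),
])
print(merged)  # → {'id': 'node-7', 'cpus': 16}
```

Under this reading, compatibility is handled by the per-schema adapters and adaptability by registering a new adapter when a new description language appears, without touching existing ones.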
- …