Simplifying the Development, Use and Sustainability of HPC Software
Developing software to undertake complex, compute-intensive scientific
processes requires a challenging combination of both specialist domain
knowledge and software development skills to convert this knowledge into
efficient code. As computational platforms become increasingly heterogeneous
and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud
computing become more widely accepted for HPC computations, scientists require
more support from computer scientists and resource providers to develop
efficient code and make optimal use of the resources available to them. As part
of the libhpc stage 1 and 2 projects we are developing a framework to provide a
richer means of job specification and efficient execution of complex scientific
software on heterogeneous infrastructure. The use of such frameworks has
implications for the sustainability of scientific software. In this paper we
set out our developing understanding of these challenges based on work carried
out in the libhpc project.
Comment: 4-page position paper, submitted to the WSSSPE13 workshop.
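To make the idea of a "richer means of job specification" concrete, here is a minimal sketch of what a platform-agnostic job description and target selection might look like, expressed as plain Python data. The field names and the selection helper are illustrative assumptions, not the actual libhpc schema or API.

```python
# Hypothetical job specification: what to run, what it needs, and which
# platforms are acceptable. Field names are illustrative, not libhpc's.
job_spec = {
    "application": "molecular_dynamics",
    "inputs": {"topology": "system.top", "coordinates": "system.gro"},
    "requirements": {
        "cores": 64,
        "memory_gb": 128,
        "accelerator": "gpu-optional",  # framework may map to a CPU or GPU build
    },
    # Candidate platforms, in order of preference.
    "targets": ["local-cluster", "iaas-cloud"],
}

def select_platform(spec, available):
    """Pick the first preferred target that is currently available."""
    for target in spec["targets"]:
        if target in available:
            return target
    raise RuntimeError("no suitable platform available")

print(select_platform(job_spec, {"iaas-cloud"}))  # -> iaas-cloud
```

The point of such a specification is that the scientist states requirements once, and the framework, rather than the scientist, decides how to map them onto whichever heterogeneous resources are available.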
INDEMICS: An Interactive High-Performance Computing Framework for Data Intensive Epidemic Modeling
We describe the design and prototype implementation of Indemics (Interactive Epidemic Simulation), a modeling environment utilizing high-performance computing technologies for supporting complex epidemic simulations. Indemics can support policy analysts and epidemiologists interested in planning and control of pandemics. Indemics goes beyond traditional epidemic simulations by providing a simple and powerful way to represent and analyze policy-based as well as individual-based adaptive interventions. Users can also stop the simulation at any point, assess the state of the simulated system, and add additional interventions. Indemics is available to end-users via a web-based interface.
Detailed performance analysis shows that Indemics greatly enhances the capability and productivity of simulating complex intervention strategies with only a marginal decrease in performance. We also demonstrate how Indemics was applied in several real case studies in which complex interventions were implemented.
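The interact-while-simulating pattern the abstract describes can be illustrated with a small sketch: advance the model step by step, inspect its state, and inject a state-dependent intervention mid-run. The class, parameters, and intervention rule below are hypothetical, not the Indemics API.

```python
import random

class EpidemicModel:
    """Toy SI-style model; stands in for a full epidemic simulator."""
    def __init__(self, population, beta):
        self.susceptible = population - 1
        self.infected = 1
        self.beta = beta  # contact/transmission intensity

    def step(self):
        new_cases = min(self.susceptible,
                        int(self.beta * self.infected * random.random()))
        self.susceptible -= new_cases
        self.infected += new_cases

model = EpidemicModel(population=10_000, beta=1.5)
for day in range(1, 61):
    model.step()
    # Adaptive intervention: once prevalence passes a threshold, apply a
    # school-closure-like contact reduction (fires only once here).
    if model.infected > 500 and model.beta > 0.8:
        model.beta *= 0.5
        print(f"day {day}: intervention triggered, beta -> {model.beta}")
print(f"final infected count: {model.infected}")
```

The key capability is that the intervention is expressed over the *running* simulation's state rather than fixed in advance, which is what distinguishes this style from traditional batch epidemic simulation.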
Fine Grained Component Engineering of Adaptive Overlays: Experiences and Perspectives
Recent years have seen significant research being carried out into peer-to-peer (P2P) systems. This work has focused on the styles and applications of P2P computing, from grid computation to content distribution; however, little investigation has been performed into how these systems are built. Component based engineering is an approach that has seen successful deployment in the field of middleware development; functionality is encapsulated in "building blocks" that can be dynamically plugged together to form complete systems. This allows efficient, flexible and adaptable systems to be built with lower overhead and development complexity. This paper presents an investigation into the potential of using component based engineering in the design and construction of peer-to-peer overlays. It is highlighted that the quality of these properties is dictated by the component architecture used to implement the system. Three reusable decomposition architectures are designed and evaluated using Chord and Pastry case studies. These demonstrate that significant improvements can be made over traditional design approaches, resulting in much more reusable, (re)configurable and extensible systems.
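A minimal sketch of the "building blocks" idea applied to an overlay: routing logic sits behind an interface so implementations can be swapped, or the running system reconfigured, without touching the rest of the overlay. The interfaces below are illustrative, not the three decomposition architectures the paper evaluates.

```python
from abc import ABC, abstractmethod

class RoutingComponent(ABC):
    """Pluggable routing 'building block' for an overlay node."""
    @abstractmethod
    def next_hop(self, node_id: int, key: int) -> int: ...

class RingRouting(RoutingComponent):
    """Chord-style stand-in: forward around the ring towards the key."""
    def __init__(self, ring_size: int):
        self.ring_size = ring_size
    def next_hop(self, node_id, key):
        return (node_id + 1) % self.ring_size if node_id != key else node_id

class Overlay:
    def __init__(self, routing: RoutingComponent):
        self.routing = routing              # component plugged in at build time
    def replace_routing(self, routing: RoutingComponent):
        self.routing = routing              # dynamic reconfiguration at run time

overlay = Overlay(RingRouting(ring_size=32))
print(overlay.routing.next_hop(node_id=5, key=9))  # -> 6
```

How the overlay is decomposed into such components, coarse- versus fine-grained, is exactly the architectural choice the paper argues dictates reusability, reconfigurability, and extensibility.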
Performance Modelling and Optimisation of Multi-hop Networks
A major challenge in the design of large-scale networks is to predict and optimise the
total time and energy consumption required to deliver a packet from a source node to a
destination node. Examples of such complex networks include wireless ad hoc and sensor
networks which need to deal with the effects of node mobility, routing inaccuracies, higher
packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the
computational limitations of the nodes. They also include more reliable communication
environments, such as wired networks, that are susceptible to random failures, security
threats and malicious behaviours which compromise their quality of service (QoS) guarantees.
In such networks, packets traverse a number of hops that cannot be determined
in advance and encounter non-homogeneous network conditions that have been largely
ignored in the literature. This thesis examines analytical properties of packet travel in
large networks and investigates the implications of some packet coding techniques on both
QoS and resource utilisation.
Specifically, we use a mixed jump and diffusion model to represent packet traversal
through large networks. The model accounts for network non-homogeneity regarding
routing and the loss rate that a packet experiences as it passes successive segments of a
source to destination route. A mixed analytical-numerical method is developed to compute
the average packet travel time and the energy it consumes. The model is able to capture
the effects of increased loss rates in areas remote from the source and destination, of a
variable rate of advancement towards the destination along the route, and of defending
against malicious packets within a certain distance of the destination. We then consider sending
multiple coded packets that follow independent paths to the destination node so as to
mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium
and obtain the time-dependent properties of the packet's travel process, allowing us to
compare the merits and limitations of coding, both in terms of delivery times and energy
efficiency. Finally, we propose models that can assist in the analysis and optimisation
of the performance of inter-flow network coding (NC). We analyse two queueing models
for a router that carries out NC, in addition to its standard packet routing function. The
approach is extended to the study of multiple hops, which leads to an optimisation problem
that characterises the optimal time that packets should be held back in a router, waiting
for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
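As a back-of-the-envelope companion to the jump-diffusion view of packet travel, the sketch below simulates a packet whose remaining distance to the destination drifts downward, fluctuates, and jumps back to the source on a loss, then estimates the mean travel time by Monte Carlo. The parameters and the restart-on-loss model are illustrative assumptions, not the thesis's mixed analytical-numerical method.

```python
import math
import random

def travel_time(distance=100.0, drift=1.0, sigma=2.0,
                loss_rate=0.01, dt=0.1, max_time=1e5):
    """Simulate one packet; x is the remaining distance to the destination."""
    t, x = 0.0, distance
    while x > 0 and t < max_time:
        if random.random() < loss_rate * dt:
            x = distance  # packet lost: retransmission restarts from the source
        else:
            # Diffusion step: steady drift towards the destination plus noise.
            x -= drift * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        t += dt
    return t

samples = [travel_time() for _ in range(1000)]
print(f"mean travel time ~ {sum(samples) / len(samples):.1f}")
```

Making drift, sigma, or loss_rate functions of x rather than constants is the natural way to mimic the non-homogeneity the thesis models, for example higher loss rates in regions remote from both endpoints.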
Managed ecosystems of networked objects
Small embedded devices such as sensors and actuators will become the cornerstone of the Future Internet. Generic, open and secure communication and service platforms are needed in order to exploit the new business opportunities these devices bring. In this paper, we evaluate current efforts to integrate sensors and actuators into the Internet and identify the limitations at the level of cooperation between these Internet-connected objects and of the possible intelligence at the end points. As a solution, we propose the concept of a Managed Ecosystem of Networked Objects, which aims to create a smart network architecture for groups of Internet-connected objects by combining network virtualization and clean-slate end-to-end protocol design. The concept maps to many real-life scenarios and should empower application developers to use sensor data in an easy and natural way. At the same time, the concept introduces many new and challenging research problems, but solving them could offer a meaningful contribution to realizing the Internet of Things.
- …