Dynamic Model-based Management of Service-Oriented Infrastructure.
Models are an effective tool for systems and software design. They allow software architects to abstract away non-relevant details. Those qualities are also useful for the technical management of networks, systems and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed, heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and applies the planned management actions to the system. We illustrate these concepts with a distributed service update operation.
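The idea of decomposing management into model transformations can be sketched as follows. This is a minimal illustration only: the model shape (a plain dict of services) and all function names are invented for the example, not taken from the paper's architecture.

```python
# Sketch: management as a pipeline of model transformations.
# 1) read a runtime model from the system, 2) transform it into a target
# model, 3) derive concrete actions from the model delta.
def read_model(system):
    """Instrumentation layer: build a runtime model from the managed system."""
    return {"services": {name: {"version": s["version"], "node": s["node"]}
                         for name, s in system.items()}}

def plan_update(model, service, new_version):
    """Model transformation: produce a target model with one service updated."""
    target = {"services": {n: dict(d) for n, d in model["services"].items()}}
    target["services"][service]["version"] = new_version
    return target

def diff_to_actions(current, target):
    """Model transformation: derive deployment actions from the model delta,
    oblivious to how services are actually deployed."""
    return [("redeploy", name, t["version"])
            for name, t in target["services"].items()
            if current["services"][name]["version"] != t["version"]]

# Hypothetical managed system snapshot:
system = {"billing": {"version": "1.2", "node": "n1"},
          "catalog": {"version": "2.0", "node": "n2"}}
model = read_model(system)
actions = diff_to_actions(model, plan_update(model, "billing", "1.3"))
# actions: [("redeploy", "billing", "1.3")]
```

Each stage consumes and produces a model, so stages compose freely; only the final stage touches the low-level system.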
Bayesian photon counting with electron-multiplying charge coupled devices (EMCCDs)
The EMCCD is a CCD type that delivers fast readout and negligible detector
noise, making it an ideal detector for high frame rate applications. Because of
the very low detector noise, this detector can potentially count single
photons. Considering that an EMCCD has a limited dynamical range and negligible
detector noise, one would typically apply an EMCCD in such a way that multiple
images of the same object are available, for instance, in so called lucky
imaging. The problem of counting photons can then conveniently be viewed as
statistical inference of flux or photon rates, based on a stack of images. A
simple probabilistic model for the output of an EMCCD is developed. Based on
this model and the prior knowledge that photons are Poisson distributed, we
derive two methods for estimating the most probable flux per pixel, one based
on thresholding, and another based on full Bayesian inference. We find that it
is indeed possible to derive such expressions, and tests of these methods show
that estimating fluxes with only shot noise is possible, up to fluxes of about
one photon per pixel per readout.
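The thresholding approach can be sketched numerically. This is an idealized illustration, not the paper's estimator: it assumes a fixed EM gain and small Gaussian readout noise, ignoring the excess noise of stochastic electron multiplication. With Poisson photons at rate lam, the probability that a readout contains at least one photon is 1 − exp(−lam), so counting the fraction p of frames above threshold and inverting gives lam = −ln(1 − p).

```python
import math
import random

def estimate_flux(stack, threshold):
    """Threshold estimator for the mean flux (photons/pixel/readout).
    Valid only at low flux, where multi-photon coincidences are rare."""
    p = sum(1 for v in stack if v > threshold) / len(stack)
    if p >= 1.0:  # every frame triggered: flux too high to invert
        raise ValueError("flux too high for the threshold estimator")
    return -math.log(1.0 - p)

def poisson(lam):
    """Poisson sample via Knuth's product-of-uniforms method (fine for small lam)."""
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

# Simulate one pixel over a stack of frames with the idealized EMCCD model:
# output = gain * (photon count) + Gaussian readout noise.
random.seed(1)
lam_true, gain, read_noise = 0.3, 300.0, 5.0
stack = [gain * poisson(lam_true) + random.gauss(0.0, read_noise)
         for _ in range(20000)]
lam_hat = estimate_flux(stack, threshold=gain / 2)  # recovers roughly 0.3
```

With negligible readout noise relative to the gain, the threshold at half the gain separates zero-photon from one-or-more-photon readouts almost perfectly, so the estimator is limited by shot noise alone, as the abstract describes.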
Enabling Adaptive Grid Scheduling and Resource Management
Wider adoption of the Grid concept has led to an increasing amount of federated
computational, storage and visualisation resources being available to scientists and
researchers. The distributed and heterogeneous nature of these resources renders most of the
legacy cluster monitoring and management approaches inappropriate, and poses new
challenges in workflow scheduling on such systems. Effective resource utilisation monitoring
and highly granular yet adaptive measurements are prerequisites for a more efficient Grid
scheduler. We present a suite of measurement applications able to monitor per-process
resource utilisation, and a customisable tool for emulating observed utilisation models. We
also outline our future work on a predictive and probabilistic Grid scheduler. The research is
undertaken as part of UK e-Science EPSRC sponsored project SO-GRM (Self-Organising
Grid Resource Management), in cooperation with BT.
A coordination protocol for user-customisable cloud policy monitoring
Cloud computing will see an increasing demand for end-user customisation and personalisation of multi-tenant cloud service offerings. Combined with an identified need to address QoS and governance aspects in cloud computing, this creates a need to provide user-customised QoS and governance policy management and monitoring as part of an SLA management infrastructure for clouds. We propose a user-customisable policy definition solution that can be enforced in multi-tenant cloud offerings through an automated instrumentation and monitoring technique. In particular, we allow service processes that are run by cloud and SaaS providers to be made policy-aware in a transparent way.
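Transparent policy monitoring of this kind can be illustrated with a small sketch. Everything here is invented for the example (the policy shape, tenant names, and wrapper), not the paper's instrumentation technique: per-tenant QoS policies are enforced by wrapping service calls, so the service code itself stays policy-unaware.

```python
import functools
import time

# Hypothetical per-tenant QoS policies (illustrative, not the paper's format).
policies = {
    "tenant-a": {"max_latency_s": 0.5},
    "tenant-b": {"max_latency_s": 2.0},
}
violations = []  # the monitor's record of observed policy breaches

def policy_monitored(fn):
    """Automated instrumentation: record a violation when a call exceeds the
    calling tenant's latency bound. The wrapped service stays unmodified."""
    @functools.wraps(fn)
    def wrapper(tenant, *args, **kwargs):
        start = time.perf_counter()
        result = fn(tenant, *args, **kwargs)
        elapsed = time.perf_counter() - start
        bound = policies.get(tenant, {}).get("max_latency_s")
        if bound is not None and elapsed > bound:
            violations.append((tenant, fn.__name__, elapsed))
        return result
    return wrapper

@policy_monitored
def render_report(tenant, size):
    time.sleep(0.01 * size)  # stand-in for real service work
    return f"report({size}) for {tenant}"

render_report("tenant-a", 1)   # well under tenant-a's 0.5 s bound
render_report("tenant-a", 60)  # ~0.6 s: breaches tenant-a's policy
```

The same interception point could feed an SLA monitor instead of a local list; the key property is that policies are data, customisable per tenant, while enforcement is injected automatically.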
Challenges to the Integration of Renewable Resources at High System Penetration
Successfully integrating renewable resources into the electric grid at penetration levels sufficient to meet a 33 percent Renewables Portfolio Standard for California presents diverse technical and organizational challenges. This report characterizes these challenges as coordination problems in time and space, balancing electric power on a range of scales from microseconds to decades and from individual homes to hundreds of miles. Crucial research needs were identified related to grid operation, standards and procedures, system design and analysis, incentives, and public engagement at each scale of analysis. Performing this coordination on more refined scales of time and space, independent of any particular technology, is defined as a "smart grid." "Smart" coordination of the grid should mitigate technical difficulties associated with intermittent and distributed generation, support grid stability and reliability, and maximize benefits to California ratepayers by using the most economic technologies, designs and operating approaches.
A Compiler and Runtime Infrastructure for Automatic Program Distribution
This paper presents the design and the implementation of a compiler and runtime infrastructure for automatic program distribution. We are building a research infrastructure that enables experimentation with various program partitioning and mapping strategies and the study of automatic distribution's effect on resource consumption (e.g., CPU, memory, communication). Since many optimization techniques are faced with conflicting optimization targets (e.g., memory and communication), we believe that it is important to be able to study their interaction.
We present a set of techniques that enable flexible resource modeling and program distribution: dependence analysis, weighted graph partitioning, code and communication generation, and profiling. We have developed these ideas in the context of the Java language. We present in detail the design and implementation of each of the techniques as part of our compiler and runtime infrastructure. Then, we evaluate our design and present preliminary experimental data for each component, as well as for the entire system.
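The weighted-graph-partitioning step can be sketched concretely. The component names, weights, and balance bound below are all invented for illustration: program components become nodes weighted by estimated resource cost, inter-component communication becomes weighted edges, and a two-way partition minimizes the communication cut subject to a balance constraint. A brute-force search stands in for the heuristic partitioners a real compiler would use.

```python
from itertools import combinations

# Hypothetical component graph: node weight = estimated CPU cost,
# edge weight = communication volume between components.
nodes = {"ui": 2, "parser": 4, "solver": 8, "cache": 3, "logger": 1}
edges = {("ui", "parser"): 5, ("parser", "solver"): 9,
         ("solver", "cache"): 7, ("ui", "logger"): 1, ("parser", "cache"): 2}

def cut_weight(part_a, edges):
    """Total weight of edges crossing the partition (communication cost)."""
    return sum(w for (u, v), w in edges.items() if (u in part_a) != (v in part_a))

def partition(nodes, edges, imbalance=0.4):
    """Exhaustive two-way partition: minimize the cut subject to a balance
    bound on node weight. Fine for a handful of components; real systems
    use heuristics (e.g. Kernighan-Lin or multilevel partitioning)."""
    total = sum(nodes.values())
    names = list(nodes)
    best_cut, best_part = float("inf"), None
    for r in range(1, len(names)):
        for combo in combinations(names, r):
            part_a = set(combo)
            weight_a = sum(nodes[n] for n in part_a)
            if abs(2 * weight_a - total) > imbalance * total:
                continue  # too unbalanced between the two machines
            c = cut_weight(part_a, edges)
            if c < best_cut:
                best_cut, best_part = c, part_a
    return best_cut, best_part

cut, part_a = partition(nodes, edges)
# Places the heavily communicating solver/cache pair on one machine,
# cutting only the parser-solver and parser-cache edges.
```

The same structure accommodates the conflicting targets the abstract mentions: memory pressure enters through the node weights and balance bound, communication through the edge weights and cut objective.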
Usability and open source software.
Open source communities have successfully developed many pieces of software, although most computer users only use proprietary applications. The usability of open source software is often regarded as one reason for this limited adoption. In this paper we review the existing evidence on the usability of open source software and discuss how the characteristics of open-source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed networked communities of developers and users to address issues of usability.
Design and construction of a carbon fiber gondola for the SPIDER balloon-borne telescope
We introduce the light-weight carbon fiber and aluminum gondola designed for
the SPIDER balloon-borne telescope. SPIDER is designed to measure the
polarization of the Cosmic Microwave Background radiation with unprecedented
sensitivity and control of systematics in search of the imprint of inflation: a
period of exponential expansion in the early Universe. The requirements of this
balloon-borne instrument put tight constraints on the mass budget of the
payload. The SPIDER gondola is designed to house the experiment and guarantee
its operational and structural integrity during its balloon-borne flight, while
using less than 10% of the total mass of the payload. We present a construction
method for the gondola based on carbon fiber reinforced polymer tubes with
aluminum inserts and aluminum multi-tube joints. We describe the validation of
the model through Finite Element Analysis and mechanical tests.
Comment: 16 pages, 11 figures. Presented at SPIE Ground-based and Airborne Telescopes V, June 23, 2014. To be published in Proceedings of SPIE Volume 914