Finer garbage collection in LINDACAP.
As open systems persist, garbage collection (GC) can be a vital aspect in managing system resources. Although garbage collection has been proposed for the standard Linda, it was a rather coarse-grained mechanism. A finer-grained method is offered in Lindacap, a capability-based coordination system for open distributed systems. Multicapabilities in Lindacap enable tuples to be uniquely referenced, thus providing sufficient information on the usability of tuples (data) within the tuple-space. This paper describes the garbage collection mechanism deployed in Lindacap, which involves selectively garbage collecting tuples within tuple-spaces. The authors present the approach using reference counting, followed by the tracing (mark-and-sweep) algorithm to garbage collect cyclic structures. A time-to-idle (TTI) technique is also proposed, which allows for garbage collection of multicapability regions that are referred to by agents but have not been used for a specified length of time. The performance results indicate that incorporating the garbage collection techniques adds little overhead to the overall performance of the system. The difference between the average overhead of mark-and-sweep and that of reference counting is small, and can be considered insignificant when the benefits brought by mark-and-sweep are taken into account.
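The reference-counting and time-to-idle ideas summarised in the abstract can be sketched as a toy tuple-space. The class and method names below (`TupleSpace`, `out`, `rd`, `drop_ref`, `collect`) are illustrative assumptions, not Lindacap's actual API, and the mark-and-sweep pass for cyclic structures is omitted for brevity:

```python
import time

class TupleSpace:
    """Toy tuple-space with per-capability reference counting and a
    time-to-idle (TTI) rule that also reclaims referenced-but-unused
    tuples. Names and structure are illustrative, not Lindacap's API."""

    def __init__(self, tti_seconds=30.0):
        self.tti = tti_seconds
        self._store = {}  # capability id -> [tuple, refcount, last_access]

    def out(self, cap, tup, now=None):
        """Insert a tuple under a unique (multi)capability."""
        self._store[cap] = [tup, 1, now if now is not None else time.time()]

    def rd(self, cap, now=None):
        """Non-destructive read; refreshes the idle timer."""
        entry = self._store[cap]
        entry[2] = now if now is not None else time.time()
        return entry[0]

    def drop_ref(self, cap):
        """An agent discards its capability for the tuple."""
        self._store[cap][1] -= 1

    def collect(self, now=None):
        """Reclaim tuples that are unreferenced, or idle past the TTI."""
        now = now if now is not None else time.time()
        dead = [cap for cap, (_, refs, last) in self._store.items()
                if refs <= 0 or now - last > self.tti]
        for cap in dead:
            del self._store[cap]
        return dead
```

Passing `now` explicitly keeps the sketch deterministic: a tuple whose last read was more than `tti_seconds` ago is collected even though an agent still holds its capability, which is the TTI behaviour the abstract describes.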
Modelling electronic service systems using UML
This paper presents a profile for modelling systems of electronic
services using UML. Electronic services encapsulate business services,
an organisational unit focused on delivering benefit to a consumer,
to enhance communication, coordination and information management.
Our profile is based on a formal, workflow-oriented description of electronic
services that is abstracted from particular implementation technologies.
Resulting models provide the basis for a formal analysis to verify
behavioural properties of services. The models can also relate services to
management components, including workflow managers and Electronic
Service Management Systems (ESMSs), a novel concept drawn from experience
of HP Service Composer and DySCo (Dynamic Service Composer),
providing the starting point for integration and implementation
tasks. Their UML basis and platform-independent nature is consistent
with a Model-Driven Architecture (MDA) development strategy, appropriate
to the challenge of developing electronic service systems using
heterogeneous technology, and incorporating legacy systems.
Pattern Reification as the Basis for Description-Driven Systems
One of the main factors driving object-oriented software development for
information systems is the requirement for systems to be tolerant to change. To
address this issue in designing systems, this paper proposes a pattern-based,
object-oriented, description-driven system (DDS) architecture as an extension
to the standard UML four-layer meta-model. A DDS architecture is proposed in
which aspects of both static and dynamic systems behavior can be captured via
descriptive models and meta-models. The proposed architecture embodies four
main elements - firstly, the adoption of a multi-layered meta-modeling
architecture and reflective meta-level architecture, secondly the
identification of four data modeling relationships that can be made explicit
such that they can be modified dynamically, thirdly the identification of five
design patterns which have emerged from practice and have proved essential in
providing reusable building blocks for data management, and fourthly the
encoding of the structural properties of the five design patterns by means of
one fundamental pattern, the Graph pattern. A practical example of this
philosophy, the CRISTAL project, is used to demonstrate the use of
description-driven data objects to handle system evolution. Comment: 20 pages, 10 figures
A Practical Cooperative Multicell MIMO-OFDMA Network Based on Rank Coordination
An important challenge of wireless networks is to boost the cell edge
performance and enable multi-stream transmissions to cell edge users.
Interference mitigation techniques relying on multiple antennas and
coordination among cells are nowadays heavily studied in the literature.
Typical strategies in OFDMA networks include coordinated scheduling,
beamforming and power control. In this paper, we propose a novel and practical
type of coordination for OFDMA downlink networks relying on multiple antennas
at the transmitter and the receiver. The transmission ranks, i.e.\ the number
of transmitted streams, and the user scheduling in all cells are jointly
optimized in order to maximize a network utility function accounting for
fairness among users. A distributed coordinated scheduler motivated by an
interference pricing mechanism and relying on a master-slave architecture is
introduced. The proposed scheme is operated based on the user report of a
recommended rank for the interfering cells accounting for the receiver
interference suppression capability. It incurs a very low feedback and backhaul
overhead and enables efficient link adaptation. It is moreover robust to
channel measurement errors and applicable to both open-loop and closed-loop
MIMO operations. A 20% cell edge performance gain over an uncoordinated LTE-A
system is shown through system-level simulations. Comment: IEEE Transactions on
Wireless Communications, accepted for publication
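The joint rank selection this abstract describes can be illustrated with a deliberately small two-cell sketch: each cell serves one user, and a brute-force search over rank pairs maximises a proportional-fair (sum-log-rate) network utility. The rate model, SNR values and function names are assumptions for illustration, not the paper's system model:

```python
import itertools
import math

def rate_bps_hz(own_rank, interferer_rank, snr):
    # Toy rate model: per-stream SNR shrinks as transmit power is split
    # across own streams and as the neighbouring cell sends more
    # interfering streams (illustrative, not the paper's model).
    per_stream_snr = snr / (own_rank * (1.0 + interferer_rank))
    return own_rank * math.log2(1.0 + per_stream_snr)

def best_rank_pair(snrs, max_rank=2):
    """Brute-force the rank pair maximising sum log-rate
    (a proportional-fair network utility) for two coordinated cells."""
    def utility(ranks):
        return sum(math.log(rate_bps_hz(ranks[i], ranks[1 - i], snrs[i]))
                   for i in range(2))
    return max(itertools.product(range(1, max_rank + 1), repeat=2),
               key=utility)
```

With one strong and one cell-edge user, the search settles on rank 2 for the strong user while the weak user's cell is held to rank 1, mirroring the recommended-rank report in the abstract; a real scheduler would distribute this decision via the pricing mechanism rather than enumerate it centrally.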
Challenges to the Integration of Renewable Resources at High System Penetration
Successfully integrating renewable resources into the electric grid at penetration levels that meet a 33 percent Renewables Portfolio Standard for California presents diverse technical and organizational challenges. This report characterizes these challenges as coordination problems in time and space, balancing electric power on a range of scales from microseconds to decades and from individual homes to hundreds of miles. Crucial research needs were identified related to grid operation, standards and procedures, system design and analysis, incentives, and public engagement at each scale of analysis. Performing this coordination on more refined scales of time and space, independent of any particular technology, is defined as a "smart grid." "Smart" coordination of the grid should mitigate technical difficulties associated with intermittent and distributed generation, support grid stability and reliability, and maximize benefits to California ratepayers by using the most economic technologies, designs, and operating approaches.
Collaborative signal and information processing for target detection with heterogeneous sensor networks
In this paper, an approach to target detection and acquisition with heterogeneous sensor networks through strategic resource allocation and coordination is presented. Based on sensor management and collaborative signal and information processing, low-capacity, low-cost sensors are strategically deployed to guide and cue the scarce high-performance sensors in the network to improve data quality, so that the mission is eventually completed more efficiently and at lower cost. We focus on the problem of designing such a network system, in which issues of resource selection and allocation, system behaviour and capacity, target behaviour and patterns, the environment, and multiple constraints such as cost must be addressed simultaneously. Simulation results offer significant insight into sensor selection and network operation, and demonstrate the great benefits introduced by guided search in an application of hunting down and capturing hostile vehicles on the battlefield.
Increasing Distributed Generation Penetration using Soft Normally-Open Points
This paper considers the effects of various voltage control solutions on facilitating an increase in the allowable level of distributed generation (DG) installation before voltage violations occur. In particular, the voltage control solution focused on is the implementation of 'soft' normally-open points (SNOPs), a term which refers to power electronic devices installed in place of a normally-open point in a medium-voltage distribution network, allowing control of the real and reactive power flows between the end points of the installation site. While other benefits of SNOP installation are discussed, the intent of this paper is to determine whether SNOPs are a viable alternative to other voltage control strategies for this particular application. As such, the SNOP's ability to affect the voltage profile along feeders within a distribution system is the focus, with other voltage control options used for comparative purposes. Results from studies on multiple network models with varying topologies are presented, and a case study which considers the economic benefits of increasing feasible DG penetration is also given.
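A back-of-envelope check shows why shifting real power through an SNOP helps with the voltage problem the abstract describes: under the usual small-angle approximation, DG export P on a feeder with impedance R + jX raises the end-of-feeder voltage by roughly dV ≈ (P·R − Q·X)/V. All numbers below (11 kV feeder, impedance values, 2 MW export, 0.8 MW diverted) are illustrative assumptions, not taken from the paper:

```python
def voltage_rise_v(p_export_w, q_absorbed_var, r_ohm, x_ohm, v_nom_v):
    # Approximate voltage rise at the end of a radial feeder caused by a
    # DG export, small-angle approximation: dV ~ (P*R - Q*X) / V.
    return (p_export_w * r_ohm - q_absorbed_var * x_ohm) / v_nom_v

V_NOM = 11_000.0   # assumed 11 kV medium-voltage feeder
R, X = 2.5, 3.0    # assumed cable resistance/reactance, ohms

# 2 MW of DG export on one feeder, no reactive compensation:
rise_before = voltage_rise_v(2.0e6, 0.0, R, X, V_NOM)   # ~455 V rise

# An SNOP diverts 0.8 MW of that export to the neighbouring feeder,
# directly lowering the rise on the constrained one:
rise_after = voltage_rise_v(1.2e6, 0.0, R, X, V_NOM)    # ~273 V rise
```

Absorbing reactive power through the SNOP's converter (a positive `q_absorbed_var`) lowers the rise further, which is the same lever the other voltage control options mentioned in the abstract pull; the SNOP adds the real-power transfer on top.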
OpenKnowledge at work: exploring centralized and decentralized information gathering in emergency contexts
Real-world experience teaches us that efficient crisis response coordination is crucial to managing emergencies; ICT infrastructures are effective in supporting the people involved in such contexts by enabling effective ways of interaction. They should also provide innovative means of communication and information management. At present, centralized architectures are mostly used for this purpose; however, alternative infrastructures based on the use of distributed information sources are currently being explored, studied and analyzed. This paper aims at investigating the capability of a novel approach (developed within the European project OpenKnowledge) to support centralized as well as decentralized architectures for information gathering. For this purpose we developed an agent-based e-Response simulation environment, fully integrated with the OpenKnowledge infrastructure, through which existing emergency plans are modelled and simulated. Preliminary results show the OpenKnowledge capability of supporting the two aforementioned architectures and, under ideal assumptions, a comparable performance in both cases.
International White Book on DER Protection : Review and Testing Procedures
This white book provides an insight into the issues surrounding the impact of increasing levels of DER on generator and network protection, and the resulting necessary improvements in protection testing practices. Particular focus is placed on ever-increasing inverter-interfaced DER installations and the challenges of utility network integration. This white book should also serve as a starting point for specifying DER protection testing requirements and procedures. A comprehensive review of international DER protection practices, standards and recommendations is presented. This is accompanied by the identification of the main performance challenges related to these protection schemes under varied network operational conditions and the nature of DER generator and interface technologies. Emphasis is placed on the importance of dynamic testing that can only be delivered through laboratory-based platforms such as real-time simulators, integrated substation automation infrastructure and flexible, inverter-equipped testing microgrids. To this end, the combination of flexible network operation and new DER technologies underlines the importance of utilising the laboratory testing facilities available within the DERlab Network of Excellence. This not only informs the shaping of new protection testing and network integration practices by end users but also enables the process of de-risking new DER protection technologies. In order to support the issues discussed in the white book, a comparative case study between UK and German DER protection and scheme testing practices is presented. This also highlights the level of complexity associated with standardisation and approval mechanisms adopted by different countries.