27,212 research outputs found

    Planning for the semiconductor manufacturer of the future

    Texas Instruments (TI) is currently contracted by the Air Force Wright Laboratory and the Defense Advanced Research Projects Agency (DARPA) to develop the next-generation flexible semiconductor wafer fabrication system called Microelectronics Manufacturing Science & Technology (MMST). Several revolutionary concepts are being pioneered on MMST, including new single-wafer rapid thermal processes, in-situ sensors, cluster equipment, and advanced Computer Integrated Manufacturing (CIM) software. The objective of the project is to develop a manufacturing system capable of achieving an order-of-magnitude improvement in almost all aspects of wafer fabrication. TI was awarded the contract in October 1988 and will complete development with a fabrication facility demonstration in April 1993. An important part of MMST is development of the CIM environment responsible for coordinating all parts of the system. The CIM architecture being developed is based on a distributed object-oriented framework made up of several cooperating subsystems. The software subsystems include: process control, for dynamic control of factory processes; the modular processing system, for controlling the processing equipment; the generic equipment model, which provides an interface between processing equipment and the rest of the factory; the specification system, which maintains factory documents and product specifications; the simulator, for modelling the factory for analysis purposes; the scheduler, for scheduling work on the factory floor; and the planner, for planning and monitoring of orders within the factory. This paper first outlines the division of responsibility between the planner, scheduler, and simulator subsystems. It then describes the approach to incremental planning and the way in which uncertainty is modelled within the plan representation. Finally, current status and initial results are described.
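
    As an illustration only: the abstract describes a CIM framework of cooperating subsystems with a planner, scheduler, and simulator dividing responsibility between coarse order planning, fine-grained floor scheduling, and analysis. The minimal Python sketch below shows one way such a division of responsibility could be expressed; all class, field, and method names are assumptions for illustration, not the MMST design.

```python
# Hypothetical sketch of cooperating CIM subsystems (planner, scheduler,
# simulator). Names and structure are illustrative assumptions only.
from dataclasses import dataclass
from typing import List


@dataclass
class Order:
    order_id: str
    wafer_count: int
    due_week: int          # planning works at a coarse time scale


@dataclass
class Lot:
    lot_id: str
    order_id: str
    start_slot: int        # scheduling works at a fine time scale


class Planner:
    """Plans and monitors orders; delegates fine-grained timing."""

    def plan(self, orders: List[Order]) -> List[Lot]:
        lots = []
        for o in orders:
            # Incremental planning: one lot per order here, to be refined
            # as uncertainty about capacity and yield is resolved.
            lots.append(Lot(lot_id=f"{o.order_id}-1", order_id=o.order_id,
                            start_slot=o.due_week * 7))
        return lots


class Scheduler:
    """Sequences work on the factory floor, lot by lot."""

    def schedule(self, lots: List[Lot]) -> List[Lot]:
        return sorted(lots, key=lambda lot: lot.start_slot)


class Simulator:
    """Models the factory to evaluate a candidate schedule."""

    def estimate_cycle_time(self, lots: List[Lot]) -> float:
        # Toy model: fixed per-lot processing time.
        return 2.5 * len(lots)


if __name__ == "__main__":
    planner, scheduler, simulator = Planner(), Scheduler(), Simulator()
    lots = scheduler.schedule(planner.plan([Order("A42", 25, due_week=3)]))
    print(simulator.estimate_cycle_time(lots))
```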

    A CIM framework for standard-based system monitoring using nagios plug-ins

    The Common Information Model is a widely accepted industry standard to model distributed system objects as well as their behaviors and interactions to realize system management tasks. It is endorsed by the Distributed Management Task Force and appears as the preferred manageability solution to deal with the ever-increasing heterogeneity characterizing today’s datacenters. However, a number of enterprise-class system management products, like Nagios, are not compliant with this standard. Nagios is among the top open source monitoring tools, with the power of a large community of developers producing plug-ins to manage a variety of enterprise systems. As part of the endeavor to accelerate CIM adoption, an extension framework, called Plugin Extension for CIM, has been developed in order to expose Nagios and other third-party plug-ins through CIM, thus enhancing the capabilities of standard-based system management tools by the transparent use of the extensive variety of existing plug-ins. This paper describes the developed framework as well as its acceptance within the open source manageability community. IV Workshop Arquitectura, Redes y Sistemas Operativos (WARSO). Red de Universidades con Carreras en Informática (RedUNCI).
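
    For orientation, the standard Nagios plug-in contract returns an exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN) and a one-line status message on stdout. The sketch below shows, under stated assumptions, how such a result could be mapped onto a CIM-style property set; the class and property names are hypothetical and are not the Plugin Extension for CIM API described in the paper.

```python
# Minimal sketch of wrapping a Nagios plug-in result in a CIM-style
# instance. The exit-code convention is standard Nagios; the CIM class
# and property names below are assumptions for illustration.
import subprocess

# Conventional Nagios plug-in exit codes.
NAGIOS_STATES = {0: "OK", 1: "WARNING", 2: "CRITICAL", 3: "UNKNOWN"}


def run_plugin(command: list) -> dict:
    """Run a Nagios plug-in and map its result to CIM-like properties."""
    proc = subprocess.run(command, capture_output=True, text=True)
    state = NAGIOS_STATES.get(proc.returncode, "UNKNOWN")
    return {
        "CreationClassName": "Example_NagiosPluginService",  # assumed name
        "OperationalStatus": state,
        "StatusDescriptions": proc.stdout.strip(),
    }


if __name__ == "__main__":
    # Example call; check_disk must be installed at this path to succeed.
    print(run_plugin(["/usr/lib/nagios/plugins/check_disk",
                      "-w", "20%", "-c", "10%"]))
```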

    Distributed systems : architecture-driven specification using extended LOTOS

    The thesis uses the LOTOS language (ISO International Standard ISO 8807) as a basis for the formal specification of distributed systems. Contributions are made to two key research areas: architecture-driven specification and LOTOS language extensions. The notion of architecture-driven specification is to guide the specification process by providing a reference base of pre-defined domain-specific components. The thesis builds an infrastructure of architectural elements and provides Extended LOTOS (XL) definitions of these elements. The thesis develops Extended LOTOS (XL) for the specification of distributed systems. XL is LOTOS enhanced with features for the formal specification of quantitative timing, probabilistic and priority requirements. For distributed systems, the specification of these ‘performance’ requirements can be as important as the specification of the associated functional requirements. To support quantitative timing features, the XL semantics define a global, discrete clock which can be used both to force events to occur at specific times and to measure intervals between event occurrences. XL introduces the time policy operators ASAP (‘as soon as possible’, corresponding to maximal progress semantics) and ALAP (‘as late as possible’). Special internal transitions are introduced in the XL semantics for the specification of probability. Conformance relations based on a notion of probabilization, together with a testing framework, are defined to support reasoning about probabilistic XL specifications. Priority within the XL semantics ensures that permitted events with the highest priority weighting of their class are allowed first. Both functional and performance specification play important roles in CIM (Computer Integrated Manufacturing) systems. The thesis uses a CIM system known as the CIM-OSA Integrating Infrastructure as a case study of architecture-driven specification using XL. The thesis thus constitutes a step in the evolution of distributed system specification methods that have both an architectural basis and a formal basis.
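
    As an informal aid to the ASAP/ALAP idea only: under a global discrete clock, an ASAP event fires at the earliest tick at which it is permitted (maximal progress), while an ALAP event delays until the latest permitted tick. The toy sketch below illustrates that reading in Python; it is not the formal XL semantics, and all names are assumptions.

```python
# Toy illustration of the ASAP/ALAP time-policy idea over a global
# discrete clock. Not the Extended LOTOS (XL) semantics.
from dataclasses import dataclass


@dataclass
class TimedEvent:
    name: str
    earliest: int   # first clock tick at which the event is permitted
    latest: int     # last clock tick at which the event is permitted
    policy: str     # "ASAP" (maximal progress) or "ALAP"

    def firing_tick(self) -> int:
        # ASAP fires at the earliest permitted tick; ALAP delays until
        # the latest permitted tick.
        return self.earliest if self.policy == "ASAP" else self.latest


events = [TimedEvent("req", 2, 5, "ASAP"), TimedEvent("ack", 3, 7, "ALAP")]
for e in sorted(events, key=lambda ev: ev.firing_tick()):
    print(f"clock tick {e.firing_tick()}: {e.name}")
```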

    Exploiting multi-agent system technology within an autonomous regional active network management system

    This paper describes the proposed application of multi-agent system (MAS) technology within AuRA-NMS, an autonomous regional network management system currently being developed in the UK through a partnership between several UK universities, distribution network operators (DNOs) and a major equipment manufacturer. The paper begins by describing the challenges facing utilities and why those challenges have led the utilities, a major manufacturer and the UK government to invest in the development of a flexible and extensible active network management system. The requirements the utilities have for a network automation system they wish to deploy on their distribution networks are discussed in detail. With those requirements in mind, the rationale behind the use of multi-agent systems (MAS) within AuRA-NMS is presented and the inherent research and design challenges are highlighted, including: the issues associated with robustness of distributed MAS platforms; the arbitration of different control functions; and the relationship between the ontological requirements of Foundation for Intelligent Physical Agents (FIPA) compliant multi-agent systems, legacy protocols and standards such as IEC 61850 and the Common Information Model (CIM).
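
    For illustration of the FIPA angle mentioned above: FIPA-compliant agents exchange ACL messages whose parameters include a performative, sender, receiver, content and ontology. The sketch below shows such a message structure in Python; the ontology name and content expression are assumptions, not AuRA-NMS artefacts.

```python
# Sketch of a FIPA-ACL-style message such as agents in an active network
# management system might exchange. Field names follow the FIPA ACL
# message parameters; the values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AclMessage:
    performative: str   # e.g. "request", "inform"
    sender: str
    receiver: str
    ontology: str       # e.g. a power-system ontology derived from the CIM
    content: str


msg = AclMessage(
    performative="request",
    sender="voltage-control-agent",
    receiver="restoration-agent",
    ontology="PowerSystemOntology",                       # assumed name
    content="(reduce-setpoint (busbar B12) (kV 0.3))",    # illustrative
)
print(msg)
```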

    Smart grid interoperability use cases for extending electricity storage modeling within the IEC Common Information Model

    Copyright © 2012 IEEE. The IEC Common Information Model (CIM) is recognized as a core standard supporting electricity transmission system interoperability. Packages of UML classes make up its domain ontology to enable a standardised abstraction of network topology and proprietary power system models. Since the early days of its design, the CIM has grown to reflect the widening scope and detail of utility information use cases as the desire to interoperate between a greater number of systems has increased. The cyber-physical nature of the smart grid places even greater demands upon the CIM to model the future scenarios for power system operation and management that are starting to arise. Recent developments of modern electricity networks have begun to implement electricity storage (ES) technologies to provide ancillary balancing services, useful to grid integration of large-scale renewable energy systems. In response to this, we investigate modeling of grid-scale electricity storage by drawing on information use cases for future smart grid operational scenarios at National Grid, the GB Transmission System Operator. We find that current structures within the CIM do not accommodate the informational requirements associated with novel ES systems and propose extensions to address this requirement. This study is supported by the UK National Grid and Brunel University.
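
    To make the idea of a CIM extension concrete: the CIM defines UML base classes such as PowerSystemResource and Equipment from which new equipment types can be derived. The sketch below mirrors that pattern in Python dataclasses with a hypothetical electricity-storage class; "ElectricityStorageUnit" and its attributes are assumptions for illustration and are not the extensions proposed in the paper.

```python
# Sketch of a CIM-style UML extension expressed as Python dataclasses.
# PowerSystemResource and Equipment mirror real CIM base classes; the
# ElectricityStorageUnit class and its attributes are assumptions.
from dataclasses import dataclass


@dataclass
class PowerSystemResource:
    mRID: str           # CIM master resource identifier
    name: str


@dataclass
class Equipment(PowerSystemResource):
    normallyInService: bool = True


@dataclass
class ElectricityStorageUnit(Equipment):   # hypothetical extension class
    ratedPowerMW: float = 0.0
    energyCapacityMWh: float = 0.0
    stateOfCharge: float = 0.0              # 0.0 .. 1.0
    roundTripEfficiency: float = 0.9


unit = ElectricityStorageUnit(mRID="ESS-001", name="Grid battery",
                              ratedPowerMW=50.0, energyCapacityMWh=200.0)
print(unit)
```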

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit to the highest degree all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need has been recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view made of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Other important aspects when defining a new architecture are the user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can use to make them behave as software routers or end nodes, on which to download the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA’s needs. Postprint (published version).
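
    As a rough illustration of the slice notion described above: a slice presents the user with virtual nodes and links while mapping them onto physical hosts and virtual machine instances. The data-structure sketch below is an assumption made for illustration, not FEDERICA's actual data model; host and node names are invented.

```python
# Toy sketch of a "slice": a virtual network seen as real by the user,
# backed by a mapping onto physical resources and VM instances.
# All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class VirtualNode:
    name: str
    vm_image: str            # software router or end-node image


@dataclass
class VirtualLink:
    endpoints: Tuple[str, str]
    bandwidth_mbps: int


@dataclass
class Slice:
    owner: str
    nodes: List[VirtualNode] = field(default_factory=list)
    links: List[VirtualLink] = field(default_factory=list)
    # Mapping from virtual node name to the physical host it is placed on.
    placement: Dict[str, str] = field(default_factory=dict)


s = Slice(owner="researcher-a")
s.nodes.append(VirtualNode("r1", vm_image="software-router"))
s.nodes.append(VirtualNode("h1", vm_image="end-node"))
s.links.append(VirtualLink(("r1", "h1"), bandwidth_mbps=100))
s.placement = {"r1": "pop-host-2", "h1": "pop-host-1"}   # invented hosts
print(s)
```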

    Dynamic Model-based Management of Service-Oriented Infrastructure.

    Models are an effective tool for systems and software design. They allow software architects to abstract away from non-relevant details. Those qualities are also useful for the technical management of networks, systems and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into a composition of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions back to the system. We illustrate these concepts with a distributed service update operation.
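
    As a minimal sketch of the pipeline idea described above, assuming a runtime model that is simply a map of services to deployed versions: the current model is observed, transformed into a target model, and the difference is turned into deployment actions. Function and field names are assumptions for illustration, not the paper's architecture.

```python
# Sketch of a management process decomposed into model transformations:
# observe a runtime model, derive a target model, plan the actions that
# close the gap. Names and the model shape are illustrative assumptions.
from typing import Dict, List

Model = Dict[str, str]   # service name -> deployed version


def observe_system() -> Model:
    # In the paper an instrumentation layer builds this model at runtime;
    # here it is stubbed with static data.
    return {"catalog": "1.2", "checkout": "1.2"}


def target_model(current: Model, service: str, version: str) -> Model:
    """Transformation: copy the model and bump one service's version."""
    desired = dict(current)
    desired[service] = version
    return desired


def plan_actions(current: Model, desired: Model) -> List[str]:
    """Transformation: turn the model difference into update actions."""
    return [f"update {name} {current[name]} -> {version}"
            for name, version in desired.items()
            if current.get(name) != version]


current = observe_system()
desired = target_model(current, "checkout", "1.3")
print(plan_actions(current, desired))   # ['update checkout 1.2 -> 1.3']
```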