
    Component-based control system development for agile manufacturing machine systems

    It is now widely accepted that manufacturers, including the machine suppliers and system integrators of the 21st century, will need to compete in global marketplaces that are frequently shifting and fragmenting, with new technologies continuously emerging. Future production machines and manufacturing systems need to offer the "agility" required to respond to product changes and the ability to reconfigure. The primary aim of this research is to advance studies in machine control system design, in the context of the European project VIR-ENG - "Integrated Design, Simulation and Distributed Control of Agile Modular Machinery".
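
    As an illustration of the component-based, reconfigurable control style this abstract refers to, the sketch below shows a minimal lifecycle interface for a machine control component and a cell that reconfigures its components as a group. The names (MachineComponent, ControlCell) and the stop/configure/start sequence are assumptions for illustration, not the VIR-ENG design.

```java
// Hypothetical sketch of component-based machine control; not the VIR-ENG API.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

interface MachineComponent {
    void configure(Map<String, String> parameters); // apply new configuration parameters
    void start();                                    // begin cyclic control
    void stop();                                     // halt the component safely
}

class ControlCell {
    private final List<MachineComponent> components = new ArrayList<>();

    void add(MachineComponent component) { components.add(component); }

    // Reconfiguration as a coordinated sequence: stop every component,
    // push the new parameters, then restart the cell.
    void reconfigure(Map<String, String> parameters) {
        components.forEach(MachineComponent::stop);
        components.forEach(c -> c.configure(parameters));
        components.forEach(MachineComponent::start);
    }
}
```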

    Distributed Control Architecture

    This document describes the development and testing of a novel Distributed Control Architecture (DCA). The DCA developed during the study is an attempt to turn the components used to construct unmanned vehicles into a network of intelligent devices, connected using standard networking protocols. The architecture exists at both a hardware and a software level and provides a communication channel between control modules, actuators and sensors. A single unified mechanism for connecting sensors and actuators to the control software will reduce the technical knowledge required by platform integrators and allow control systems to be constructed rapidly in a Plug and Play manner. DCA uses standard networking hardware to connect components, removing the need for custom communication channels between individual sensors and actuators. The use of a common architecture for communication between components should make it easier for software to determine the vehicle's current capabilities dynamically and increase the range of processing platforms that can be utilised. Implementations of the architecture currently exist for Microsoft Windows, Windows Mobile 5, Linux and Microchip dsPIC30 microcontrollers. Conceptually, DCA exposes the functionality of each networked device as objects with interfaces and associated methods. Because each object can expose multiple interfaces, devices can be upgraded in future without breaking existing code. In addition, the use of common interfaces should help facilitate component reuse and unit testing, and make it easier to write generic, reusable software.
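
    The "objects with multiple interfaces" idea described above can be sketched as follows. The interface and class names (RangeSensor, Diagnosable, UltrasonicNode, DeviceRegistry) are illustrative assumptions rather than the actual DCA API; the sketch only shows how exposing several interfaces per device lets control software discover capabilities generically.

```java
// Illustrative sketch of multi-interface devices with generic capability lookup.
import java.util.List;
import java.util.Optional;

interface RangeSensor { double readDistanceMetres(); }
interface Diagnosable { String selfTest(); }

// A networked device exposes several interfaces; a later upgrade can add a new
// interface without breaking callers that only know the old ones.
class UltrasonicNode implements RangeSensor, Diagnosable {
    public double readDistanceMetres() { return 1.25; } // stubbed sensor reading
    public String selfTest() { return "OK"; }
}

class DeviceRegistry {
    private final List<Object> devices;

    DeviceRegistry(List<Object> devices) { this.devices = devices; }

    // Generic discovery: find any device exposing the requested interface,
    // letting software determine the platform's current capabilities at runtime.
    <T> Optional<T> lookup(Class<T> iface) {
        return devices.stream().filter(iface::isInstance).map(iface::cast).findFirst();
    }
}

// Example use:
//   new DeviceRegistry(List.of(new UltrasonicNode())).lookup(RangeSensor.class);
```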

    Integrating legacy mainframe systems: architectural issues and solutions

    For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies, such as the Internet, are pushing these existing systems to their limits and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result it is they that feel these pressures the most. In recent years there have been various solutions for enabling the re-engineering of these legacy systems. It quickly became clear that completely rewriting them was not feasible, so various integration strategies emerged. Of these, the CORBA standard from the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment. However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions to the problem of meeting these requirements, have arisen, and this research looks at applying some of these patterns to mainframe-based CORBA applications. The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications inter-operate with newer object-oriented technologies.
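
    The integration pattern described above, wrapping a legacy transaction behind an object interface so that the mainframe becomes a peer of newer components, can be sketched roughly as below. In the CORBA-based approach the interface would be defined in IDL and served through an ORB; here a plain Java interface stands in, and the names (AccountService, LegacyGateway, BALINQ) are hypothetical, not a real product API.

```java
// Hedged sketch of a legacy-wrapper (facade) for a mainframe transaction.
import java.math.BigDecimal;

interface AccountService {                          // interface offered to new clients
    BigDecimal getBalance(String accountId);
}

class LegacyGateway {                               // stand-in for the mainframe channel
    String callTransaction(String transactionName, String payload) {
        return "000123.45";                         // stubbed host reply
    }
}

class MainframeAccountService implements AccountService {
    private final LegacyGateway gateway = new LegacyGateway();

    // Translate the object-oriented request into a legacy transaction call
    // and map the flat host reply back into a typed result.
    public BigDecimal getBalance(String accountId) {
        String reply = gateway.callTransaction("BALINQ", accountId);
        return new BigDecimal(reply);
    }
}
```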

    Security in a Distributed Processing Environment

    Distribution plays a key role in telecommunication and computing systems today. It has become a necessity as a result of deregulation and anti-trust legislation, which has forced businesses to move from centralised, monolithic systems to distributed systems with the separation of applications and provisioning technologies, such as the service and transportation layers in the Internet. The need for reliability and recovery requires systems to use replication and secondary backup systems, such as those used in e-commerce. Distribution has consequences: it results in systems being implemented in heterogeneous environments; it requires systems to be scalable; and it results in some loss of control, all of which contribute to the increased security issues that come with distribution. Each of these issues has to be dealt with. A distributed processing environment (DPE) is middleware that allows heterogeneous environments to operate in a homogeneous manner. Scalability can be addressed by using object-oriented technology to distribute functionality. Security is more difficult to address because it requires the creation of a distributed trusted environment. The problem with security in a DPE currently is that it is treated as an adjunct service, i.e. an afterthought that is the last thing added to the system. As a result, it is not pervasive and is therefore unable to fully support the other DPE services. DPE security needs to provide the five basic security services (authentication, access control, integrity, confidentiality and non-repudiation) in a distributed environment, while ensuring simple and usable administration. The research detailed in this thesis starts by highlighting the inadequacies of the existing DPE and its services. A new management structure was introduced that provides greater flexibility and configurability, while promoting mechanism and service independence. A new secure interoperability framework was introduced which provides the ability to negotiate common mechanism and service-level configurations. New facilities were added to the non-repudiation and audit services. The research has shown that all services should be security-aware, and would therefore be able to interact with the Enhanced Security Service in order to provide a more secure environment within a DPE. As a proof of concept, the Trader service was selected: its security limitations were examined, new security behaviour policies were proposed, and it was then implemented as a Security-aware Trader which could counteract the existing security limitations. IONA TECHNOLOGIES PLC & ORANG
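
    A minimal sketch of the secure interoperability idea, negotiating a mechanism that both sides of an interaction support before a request is allowed through, is given below. The names (SecurityProfile, SecureInvoker) are illustrative assumptions, not the Enhanced Security Service interface from the thesis.

```java
// Hedged sketch of mechanism negotiation between two security configurations.
import java.util.List;
import java.util.Optional;

record SecurityProfile(List<String> mechanisms) {
    // Pick the first mechanism in this side's preference order that the
    // other side also supports.
    Optional<String> negotiate(SecurityProfile other) {
        return mechanisms.stream().filter(other.mechanisms()::contains).findFirst();
    }
}

class SecureInvoker {
    String invoke(SecurityProfile client, SecurityProfile server, String request) {
        String mechanism = client.negotiate(server)
                .orElseThrow(() -> new SecurityException("no common security mechanism"));
        // Authentication, integrity and confidentiality would be applied here
        // with the agreed mechanism before the request reaches the target service.
        return "[" + mechanism + "] " + request;
    }
}

// Example: a client offering ["kerberos", "password"] and a server offering
// ["password"] would agree on "password" before the call proceeds.
```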

    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirement for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992) and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
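
    The adaptive, policy-driven management described above can be sketched as a configuration model whose placement and messaging decisions are delegated to a swappable policy. The names (ManagementPolicy, ConfigurationModel, CoLocatePolicy) are illustrative assumptions rather than the thesis's actual tool set.

```java
// Hedged sketch of policy-driven binding and placement for pluggable components.
import java.util.HashMap;
import java.util.Map;

interface ManagementPolicy {
    String placeComponent(String componentId);       // choose a node for a new component
    String chooseMessaging(String from, String to);  // e.g. in-process call vs. remote messaging
}

class CoLocatePolicy implements ManagementPolicy {
    public String placeComponent(String componentId) { return "node-A"; }
    public String chooseMessaging(String from, String to) { return "in-process"; }
}

class ConfigurationModel {
    private final Map<String, String> placements = new HashMap<>(); // environmental state
    private ManagementPolicy policy;

    ConfigurationModel(ManagementPolicy initialPolicy) { this.policy = initialPolicy; }

    // Adaptive management: swapping the policy changes where new components are
    // placed and how their bindings are messaged, without touching the components.
    void setPolicy(ManagementPolicy newPolicy) { this.policy = newPolicy; }

    void bind(String componentId) {
        placements.put(componentId, policy.placeComponent(componentId));
    }

    String describeBinding(String from, String to) {
        return from + " -> " + to + " via " + policy.chooseMessaging(from, to);
    }
}
```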