PROPOSED MIDDLEWARE SOLUTION FOR RESOURCE-CONSTRAINED DISTRIBUTED EMBEDDED NETWORKS
The explosion in processing power of embedded systems has enabled distributed embedded networks to perform more complicated tasks. Middleware encapsulates common and network- or operating-system-specific functionality into generic, reusable frameworks for managing such distributed networks. This thesis surveys and categorizes popular middleware implementations into three adapted layers: host-infrastructure, distribution, and common services. It then applies a quantitative approach to grading and proposing a single middleware solution across all layers for two target platforms: CubeSats and autonomous unmanned aerial vehicles (UAVs). CubeSats are 10x10x10 cm nanosatellites, popular for university-level space missions, that impose power and volume constraints; autonomous UAVs are similarly popular hobbyist-level vehicles with comparable power and volume constraints. The MAVLink middleware from the host-infrastructure layer is proposed as the middleware to manage the distributed embedded networks powering these platforms in future projects. Finally, this thesis presents a performance analysis of MAVLink running on the ARM Cortex-M 32-bit processors that power the target platforms.
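The lightweight framing that makes MAVLink attractive for such constrained processors can be illustrated with a minimal sketch of MAVLink v1 packet assembly. The checksum and the HEARTBEAT layout below follow the public protocol specification; the helper names are the editor's illustration, not code from the thesis.

```python
def x25_crc(data: bytes, crc: int = 0xFFFF) -> int:
    """MAVLink's X.25 (CRC-16/MCRF4XX) checksum, seeded with 0xFFFF."""
    for byte in data:
        tmp = (byte ^ crc) & 0xFF
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc

def pack_v1_frame(seq, sysid, compid, msgid, payload, crc_extra):
    """Assemble a MAVLink v1 frame: STX, header, payload, 2-byte checksum."""
    header = bytes([0xFE, len(payload), seq, sysid, compid, msgid])
    # The checksum covers everything after STX, plus the per-message
    # CRC_EXTRA byte that guards against mismatched message definitions.
    crc = x25_crc(header[1:] + payload + bytes([crc_extra]))
    return header + payload + crc.to_bytes(2, "little")

# HEARTBEAT (msgid 0): 9-byte payload, CRC_EXTRA 50 in the common dialect.
frame = pack_v1_frame(seq=0, sysid=1, compid=1, msgid=0,
                      payload=bytes(9), crc_extra=50)
```

The fixed 8-byte overhead per message is one reason the thesis can target 32-bit microcontrollers with small memory budgets.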
IDL-XML based information sharing model for enterprise integration
CIM is a mechanized approach to problem solving in an enterprise. Its basis is intercommunication between information systems, in order to provide a faster and more effective decision-making process. These results help minimize human error, improve overall productivity and ensure customer satisfaction. Most enterprises or corporations started implementing integration by adopting automated solutions in a particular process, department, or area, in isolation from the rest of the physical or intelligent process, leaving systems and equipment unable to share information with each other and with other computer systems. The goal in a manufacturing environment is to have a set of systems that interact seamlessly with each other within a heterogeneous object framework, overcoming the many barriers (language, platforms, and even physical location) that impede information sharing. This study identifies the data needs of several information systems of a corporation and proposes a conceptual model to improve the information sharing process and thus Computer Integrated Manufacturing. The architecture proposed in this work provides a methodology for data storage, data retrieval, and data processing in order to provide integration at the enterprise level. There are four layers of interaction in the proposed IXA architecture. The name IXA (IDL-XML Architecture for Enterprise Integration) is derived from the standards and technologies used to define the layers and the corresponding functions of each layer. The first layer addresses the systems and applications responsible for data manipulation. The second layer provides the interface definitions to facilitate the interaction between the applications on the first layer. The third layer is where data is structured using XML to be stored, and the fourth layer is a central repository and its database management system.
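The third layer's role (structuring application data as XML before it reaches the central repository) can be sketched with Python's standard library. The element names here are hypothetical, not taken from the thesis.

```python
import xml.etree.ElementTree as ET

def to_xml(record: dict) -> str:
    """Layer-3 sketch: wrap a flat shop-floor record as an XML document."""
    root = ET.Element("WorkOrder")  # illustrative element name
    for key, value in record.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(doc: str) -> dict:
    """Reverse mapping, as a layer-4 repository front end might perform."""
    return {child.tag: child.text for child in ET.fromstring(doc)}

doc = to_xml({"id": "WO-17", "machine": "CNC-3", "qty": "250"})
```

Because both directions go through a neutral XML document, neither side needs to know the other's platform or language, which is the interoperability the four-layer model aims at.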
Integrating legacy mainframe systems: architectural issues and solutions
For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies, such as the Internet, are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have been relying on mainframes for the longest time to do their business, and as a result it is they that feel these pressures the most.
In recent years there have been various solutions for enabling a re-engineering of these legacy systems. It quickly became clear that to completely rewrite them was not possible so various integration strategies emerged.
Out of these new integration strategies, the CORBA standard by the Object Management Group emerged as the strongest, providing a standards-based solution that enabled mainframe applications to become peers in a distributed computing environment.
However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns or general solutions to the problem of meeting these requirements have arisen and this research looks at applying some of these patterns to mainframe based CORBA applications.
The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications inter-operate with newer object-oriented technologies.
The use of agents and objects to integrate virtual enterprises
The manufacturing complex for the Department of Energy (DOE) is distributed: design laboratories, manufacturing facilities, and industrial partners. Designers must have a concurrent engineering environment to support all aspects of the cradle-to-grave product realization process across the distributed sites. Engineers must be able to analyze and simulate processes, retrieve and process heterogeneous information, both archived and current, and access multiple databases. Manufacturers must be able to coordinate the activities of various manufacturing centers, which may involve a negotiation process. Furthermore, Sandia must be able to export manufacturing capabilities, such as on-machine acceptance, to outside suppliers. A key element to making this a reality is a flexible information architecture. The DOE information architecture must support a wide-area virtual enterprise with distributed intelligent software components. The architecture must provide for asynchronous communication; multiple programming languages and operating systems; incorporation of geographically distributed manufacturing services; various hardware platforms; and heterogeneous workstations, PCs, machine tool controllers, and special-purpose compute engines. Further, it is critical that manufacturing facilities are not isolated from design, planning, and other business activities and that information flows easily and bidirectionally between these activities. To accomplish this seamlessly, heterogeneous knowledge must be exchanged across both domain and organizational boundaries. Distributed object and software agent technologies are two methods for connecting such engineering and manufacturing systems. The two technologies have overlapping goals (interoperability and architectural support for integrating software components), though to date little or no integration of the two has been attempted.
A Real-Time Service-Oriented Architecture for Industrial Automation
Industrial automation platforms are experiencing a paradigm shift. New technologies are making their way into the area, including embedded real-time systems, standard local area networks like Ethernet, Wi-Fi and ZigBee, IP-based communication protocols, standard service-oriented architectures (SOAs) and Web services. An automation system will be composed of flexible autonomous components with plug & play functionality, self-configuration and diagnostics, and autonomic local control, communicating through standard networking technologies. However, the introduction of these new technologies raises important problems that need to be properly solved, one of these being the need to support real-time behaviour and quality of service (QoS) for real-time applications. This paper describes a SOA enhanced with real-time capabilities for industrial automation. The proposed architecture allows for negotiation of the QoS requested by clients from Web services, and provides temporal encapsulation of individual activities. This way, it is possible to perform an a priori analysis of the temporal behavior of each service, and to avoid unwanted interference among them. After describing the architecture, experimental results gathered on a real implementation of the framework (which leverages a soft real-time scheduler for the Linux kernel) are presented, showing the effectiveness of the proposed solution. The experiments were performed on simple case studies designed in the context of industrial automation applications.
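The admission step of such QoS negotiation can be sketched as a utilization-based test: each client asks for a CPU budget per period, and a request is granted only while total utilization stays under a bound. The bound, the class and its API are the editor's assumptions for illustration, not the paper's implementation.

```python
class QosBroker:
    """Toy admission control for periodic real-time service requests."""

    def __init__(self, max_utilization: float = 0.9):
        self.max_utilization = max_utilization
        self.granted = {}  # service name -> (budget_ms, period_ms)

    def request(self, name: str, budget_ms: float, period_ms: float) -> bool:
        """Grant a reservation of budget_ms every period_ms, if it fits."""
        load = sum(b / p for b, p in self.granted.values())
        if load + budget_ms / period_ms <= self.max_utilization:
            self.granted[name] = (budget_ms, period_ms)
            return True   # reservation granted: temporal encapsulation holds
        return False      # rejected: would break the existing guarantees

broker = QosBroker()
```

Rejecting a request up front, rather than letting an overloaded service degrade its neighbours, is what makes the a-priori temporal analysis described above possible.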
Web service control of component-based agile manufacturing systems
Current global business competition has resulted in significant challenges for
manufacturing and production sectors focused on shorter product lifecyc1es, more diverse
and customized products as well as cost pressures from competitors and customers. To
remain competitive, manufacturers, particularly in automotive industry, require the next
generation of manufacturing paradigms supporting flexible and reconfigurable production
systems that allow quick system changeovers for various types of products. In addition,
closer integration of shop floor and business systems is required as indicated by the
research efforts in investigating "Agile and Collaborative Manufacturing Systems" in
supporting the production unit throughout the manufacturing lifecycles.
The integration of a business enterprise with its shop-floor and lifecycle supply partners
is currently only achieved through complex proprietary solutions due to differences in
technology, particularly between automation and business systems. The situation is
further complicated by the diverse types of automation control devices employed.
Recently, the emerging technology of Service Oriented Architecture's (SOA's) and Web
Services (WS) has been demonstrated and proved successful in linking business
applications. The adoption of this Web Services approach at the automation level, that
would enable a seamless integration of business enterprise and a shop-floor system, is an
active research topic within the automotive domain. If successful, reconfigurable
automation systems formed by a network of collaborative autonomous and open control
platform in distributed, loosely coupled manufacturing environment can be realized
through a unifying platform of WS interfaces for devices communication.
The adoption of SOA- Web Services on embedded automation devices can be achieved
employing Device Profile for Web Services (DPWS) protocols which encapsulate device
control functionality as provided services (e.g. device I/O operation, device state
notification, device discovery) and business application interfaces into physical control
components of machining automation. This novel approach supports the possibility of
integrating pervasive enterprise applications through unifying Web Services interfaces
and neutral Simple Object Access Protocol (SOAP) message communication between
control systems and business applications over standard Ethernet-Local Area Networks
(LAN's). In addition, the re-configurability of the automation system is enhanced via the
utilisation of Web Services throughout an automated control, build, installation, test,
maintenance and reuse system lifecycle via device self-discovery provided by the DPWS
protocol...cont'd
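The SOAP message exchange described above can be sketched with the Python standard library: a device operation call wrapped in a SOAP 1.2 envelope. The operation name and the device namespace below are illustrative assumptions, not identifiers mandated by DPWS.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 envelope
DEV_NS = "urn:example:machining-cell"                # hypothetical namespace

def soap_call(operation: str, **params) -> str:
    """Wrap a device operation (e.g. an I/O read) in a SOAP 1.2 envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{DEV_NS}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{DEV_NS}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

# A controller asking a DPWS-style device to read digital input channel 3.
msg = soap_call("ReadInput", channel=3)
```

Because the payload is neutral XML over standard Ethernet, the same message can be produced by an enterprise application or consumed by an embedded controller, which is the integration property the thesis builds on.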
A Generic Network and System Management Framework
Networks and distributed systems have formed the basis of an ongoing communications revolution that has led to the genesis of a wide variety of services. The constantly increasing size and complexity of these systems does not come without problems. In some organisations, the deployment of Information Technology has reached a state where the benefits of downsizing and rightsizing by adding new services are undermined by the effort required to keep the system running.
Management of networks and distributed systems in general has a straightforward goal: to provide a productive environment in which work can be performed effectively. The work required for management should be a small fraction of the total effort. Most IT systems are still managed in an ad hoc style without any carefully elaborated plan. In such an environment the success of management decisions depends entirely on the qualifications and knowledge of the administrator.
The thesis provides an analysis of the state of the art in the area of Network and System Management and identifies the key requirements that must be addressed for the provisioning of Integrated Management Services. These include the integration of the different management-related aspects (i.e. integration of heterogeneous Network, System and Service Management).
The thesis then proposes a new framework, INSMware, for the provision of Management Services. It provides a fundamental basis for the realisation of a new approach to Network and System Management. It is argued that management systems can be derived from a set of pre-fabricated and reusable building blocks that break the required functionality into a number of separate entities, rather than being developed from scratch. It proposes a high-level logical model, usable as a reference model, to accommodate the range of requirements and environments applicable to Integrated Network and System Management.
A development methodology is introduced that reflects the principles of the proposed approach and provides guidelines to structure the analysis, design and implementation phases of a management system. The INSMware approach can further be combined with the componentware paradigm for the implementation of the management system. Based on these principles, a prototype for the management of SNMP systems has been implemented using industry-standard middleware technologies. It is argued that developing a management system on componentware principles offers a number of benefits: INSMware components may be re-used, and system solutions become more modular and thereby easier to construct and maintain.
Plug-and-Participate for Limited Devices in the Field of Industrial Automation
The starting point, and at the same time the motivation, of this work is today's market situation: strong customer demand for individualized goods often faces planning and automation systems geared towards mass production, yet satisfying individual customer needs presupposes flexibility and adaptability. The goal of this work is therefore to make a contribution that enables companies to react flexibly to these individual needs. Within the scope of a dissertation this can of course not mean revolutionizing the entire automation and planning landscape; rather, the solution the author presents is an integral part of an automation concept developed within the PABADIS project. While PABADIS covers the full spectrum of planning and machine infrastructure, the core of this work largely concerns the latter point: machine infrastructure. The goal was to offer generic machine functionality in a network through which production orders navigate autonomously. As a solution, this dissertation presents a plug-and-participate-based concept that provides arbitrary automation functions in a spontaneous community. Its basis is a generic interface in which the general requirements of such ad-hoc infrastructures are aggregated. Implementing this interface in the PABADIS reference implementation, and comparing the system requirements against the available system resources, showed that classical plug-and-participate technologies such as Jini and UPnP are unsuitable because of their demands; automation devices often provide only limited resources. Therefore, as a second result alongside the plug-and-participate-based automation concept, a plug-and-participate technology was developed, Pini, which suits the realities of the automation world and finally allows PABADIS to be applied on today's automation installations. The fundamental concepts of Pini that make this possible are an overall architecture based on a distributed lookup service, the manner in which services are represented, and the efficient use of the offered services. With Pini and concepts built on top of it, such as PLAP, it is now possible in particular to realize automation systems like PABADIS on today's installations. That, in turn, is a step towards customer orientation: such systems are designed for flexibility and adaptability, in order to meet customer needs efficiently.
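The distributed lookup service at the heart of Pini can be sketched as a minimal registry that resource-constrained devices register with and that navigating production orders query. The class and method names are the editor's illustration, not Pini's actual interface.

```python
class LookupService:
    """Toy registry in the spirit of a plug-and-participate lookup service."""

    def __init__(self):
        self._services = {}  # service type -> list of device endpoints

    def register(self, service_type: str, endpoint: str) -> None:
        # A device announces a capability when it joins the community.
        self._services.setdefault(service_type, []).append(endpoint)

    def unregister(self, service_type: str, endpoint: str) -> None:
        # On leaving, the device withdraws its announcement.
        self._services.get(service_type, []).remove(endpoint)

    def lookup(self, service_type: str) -> list:
        # A production order asks which devices offer a needed function.
        return list(self._services.get(service_type, []))

registry = LookupService()
registry.register("drilling", "plc-7:502")
```

Keeping the registry this small is the point: unlike Jini or UPnP, nothing here assumes a JVM or a rich protocol stack on the device itself.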
Adaptive object management for distributed systems
This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992) and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects. The overhead of runtime binding and remote messaging severely reduces performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment.
Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
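The binding model argued for above, in which bindings live in an external configuration rather than inside component implementations, can be sketched as follows. The class names and the example components are the editor's illustration, not the thesis's tool set.

```python
class Component:
    """A pluggable component: it declares ports but knows nothing of peers."""

    def __init__(self, name: str):
        self.name = name
        self.ports = {}  # port name -> bound target component

    def send(self, port: str, message):
        # The component uses a symbolic port; the binding is supplied later.
        target = self.ports.get(port)
        if target is None:
            raise RuntimeError(f"{self.name}: port {port!r} is unbound")
        return target.receive(message)

    def receive(self, message):
        return f"{self.name} handled {message!r}"

class Configuration:
    """Holds bindings externally, so policy can change without code changes."""

    def bind(self, source: Component, port: str, target: Component) -> None:
        source.ports[port] = target

planner, cell = Component("planner"), Component("cell-controller")
Configuration().bind(planner, "dispatch", cell)
```

Because the `Configuration` object, not the components, owns the bindings, an adaptive manager could rebind `dispatch` to a co-located or remote peer as the environmental state changes, which is the policy flexibility the abstract describes.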