265 research outputs found

    Machine Tool Communication (MTComm) Method and Its Applications in a Cyber-Physical Manufacturing Cloud

    The integration of cyber-physical systems and cloud manufacturing has the potential to revolutionize existing manufacturing systems by enabling better accessibility, agility, and efficiency. To achieve this, it is necessary to establish a communication method for manufacturing services over the Internet so that physical machines can be accessed and managed from cloud applications. Most existing industrial automation protocols rely on Ethernet-based Local Area Networks (LANs) and are not designed specifically for Internet-enabled data transmission. Recently, MTConnect has been gaining popularity as a standard for monitoring the status of machine tools through RESTful web services and an XML-based messaging structure, but it is designed only for data collection and interpretation and lacks remote operation capability. This dissertation presents the design, development, optimization, and applications of a service-oriented, Internet-scale communication method named Machine Tool Communication (MTComm) for exchanging manufacturing services in a Cyber-Physical Manufacturing Cloud (CPMC), enabling manufacturing with heterogeneous, physically connected machine tools from geographically distributed locations over the Internet. MTComm uses an agent-adapter based architecture and a semantic ontology to provide both remote monitoring and operation capabilities through RESTful services and XML messages. MTComm was successfully used to develop and implement multi-purpose applications in a CPMC, including remote and collaborative manufacturing; active testing-based and edge-based fault diagnosis and maintenance of machine tools; and cross-domain interoperability between Internet-of-Things (IoT) devices and supply chain robots. To improve MTComm's overall performance, efficiency, and acceptability in cyber manufacturing, the concept of MTComm's edge-based middleware was introduced and three optimization strategies for data caching, transmission, and operation execution were developed and adopted at the edge. Finally, a hardware prototype of the middleware was implemented on a System-on-Chip based FPGA device to reduce computational and transmission latency. At every stage of its development, MTComm's performance and feasibility were evaluated with experiments in a CPMC testbed with three different types of manufacturing machine tools. Experimental results demonstrated MTComm's excellent feasibility for scalable cyber-physical manufacturing and superior performance over other existing approaches.
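    As a rough illustration of the REST-plus-XML interaction style the abstract describes, the sketch below polls an MTConnect-style agent for a machine tool's current status and flattens the returned XML into a dictionary. The agent URL, device name, and /current path are illustrative assumptions, not taken from the MTComm specification.

```python
# Minimal monitoring sketch (assumptions, not MTComm's actual API): poll an
# MTConnect-style agent over HTTP and flatten the XML response into a dict.
import urllib.request
import xml.etree.ElementTree as ET

AGENT_URL = "http://agent.example.com:5000"   # hypothetical agent endpoint


def read_current_status(device: str) -> dict:
    """Fetch the latest data items (e.g. availability, execution) for one machine tool."""
    with urllib.request.urlopen(f"{AGENT_URL}/{device}/current", timeout=5) as resp:
        root = ET.fromstring(resp.read())
    # Strip XML namespaces and keep every element that carries a text value.
    return {el.tag.split("}")[-1]: el.text.strip()
            for el in root.iter() if el.text and el.text.strip()}


if __name__ == "__main__":
    print(read_current_status("cnc-mill-1"))
```

    Per the abstract, MTComm's distinguishing feature is remote operation, which would require an additional command interface on top of monitoring calls like this one.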

    Market fields structure & dynamics in industrial automation

    There is a research tradition in the economics of standards which addresses standards wars, antitrust concerns, and positive externalities from standards. Recent research has also dealt with the process characteristics of standardisation, de facto standard-setting consortia, and intellectual property concerns in the technology specification or implementation phase. Nonetheless, there are no studies which sufficiently analyse capabilities, comparative industry dynamics, or incentive structures in the context of standard-setting. In my study, I address the characteristics of collaborative research and standard-setting as a new mode of deploying assets, beyond the motivations well known from R&D consortia or market alliances. On the basis of a case study of a leading user organisation in the market for industrial automation technology, as well as a descriptive network analysis of cross-community affiliations, I demonstrate a paradoxical relationship between cooperation and competition. More precisely, I explain how there can be a dual relationship between value creation and value capture with respect to exploration and exploitation. My case study emphasises the dynamics between knowledge stocks (knowledge alignment, narrowing and deepening) produced by collaborative standard-setting and innovation; it also sheds light on an evolutionary relationship between the exploration of assets and use cases and each firm's exploitation activities in the market. I derive standard-setting capabilities from an empirical analysis of membership structures, policies, and incumbent firm characteristics in selected, but leading, user organisations. The results are as follows: the market for industrial automation technology is characterised by collaboration on standards, high technology influences of other industries, and network effects on standards. Further, system integrators play a decisive role in value creation in the customer-specific business case. Standard-setting activities appear to be loosely coupled to the products offered on the market. Core leaders in world standards in industrial automation own a variety of assets and are affiliated to many standard-setting communities rather than being exclusively committed to a few standards. Furthermore, their R&D ratios outperform those of peripheral members, and experience in standard-setting processes can be assumed. Standard-setting communities specify common core concepts as the basis for the development of each member's proprietary products, complementary technologies, and industrial services. From a knowledge-based perspective, the targeted disclosure of certain knowledge can be used to achieve high innovation returns through systemic products which add proprietary features to open standards. Finally, the interplay between exploitation and exploration in the deployment of standard-setting capabilities, linked to cooperative, pre-competitive processes, leads to an evolution of common technology owned and exploited by the standard-setting community as a particular kind of innovation ecosystem. Keywords: standard-setting, innovation, industry dynamics and context, industrial automation

    A Diagnostics Model for Industrial Communications Networks

    Over the past twenty years, industrial communications networks have become commonplace in most industrial plants. The high availability of these networks is crucial to smooth plant operations; therefore, local and remote diagnostics of these networks is of primary importance in solving any existing or emerging network problems. Users for the most part consider the "plant networks" as black boxes and are often unsure of the actual health of the networks. The major part of the work outlined in this research concentrates on the proposed "Network Diagnostics Model" for local and remote monitoring. The main objective of the research is to aid the establishment of tools and techniques for diagnosis of industrial networks, with particular emphasis on PROFIBUS and PROFINET. Additionally, this research has resulted in the development of a number of devices to aid in network diagnostics. The work outlined in this submission contributes to developments in the area of online diagnostics systems. The development work was conducted in the following phases:
    1. Development of a Function Block (FB) for diagnosing PROFIBUS networks, for implementation on a PLC.
    2. Development of an OPC server for diagnosing PROFIBUS networks, for implementation on a PC.
    3. Development of web-based diagnostic software for multiple fieldbuses, for implementation on an embedded XP platform.
    4. Development of an OPC server for diagnosing PROFINET networks, for implementation on a PC.
    5. Conformance testing of masters (PLCs) in PROFIBUS networks to increase the health of the network.
    6. Use of diagnostics tools for performance analysis of fieldbus networks in high-performance applications.
    The research work outlined in this submission has made a significant and coherent contribution to online diagnostics of fieldbus communications networks, and has paved the way for the introduction of online diagnostics devices to the market place. It has shown that the proposed model provides a uniform framework for research and development of diagnostics tools and techniques for fieldbus networks. Organizations that use fieldbus should consider installing advanced online diagnostic systems to boost maintenance efficiency, reduce operating costs, and maintain the availability of plant resources. Based on the experience gained over a number of years, a multilayer model is proposed for future development of diagnostics tools.
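    To make the kind of output such online diagnostics tools provide more concrete, here is a small, self-contained sketch that classifies fieldbus health from a few PROFIBUS-style diagnostic counters. The counter names and thresholds are illustrative assumptions, not values from the thesis or the PROFIBUS specification.

```python
# Illustrative sketch only: classifying fieldbus health from diagnostic counters,
# the kind of information an online PROFIBUS diagnostics tool might expose.
# Counter names and thresholds are assumptions, not taken from the thesis.
from dataclasses import dataclass


@dataclass
class BusDiagnostics:
    retries_per_hour: int     # telegram repeats observed on the bus
    lost_stations: int        # slaves that dropped off the live list
    illegal_telegrams: int    # corrupted frames


def assess(diag: BusDiagnostics) -> str:
    """Return a coarse health verdict for maintenance staff."""
    if diag.lost_stations > 0 or diag.illegal_telegrams > 10:
        return "CRITICAL: investigate wiring/termination immediately"
    if diag.retries_per_hour > 100:
        return "WARNING: rising retry rate, schedule inspection"
    return "OK"


print(assess(BusDiagnostics(retries_per_hour=150, lost_stations=0, illegal_telegrams=2)))
```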

    Adoption of the Industrial Internet in Automation Devices (Teollisen Internetin käyttöönotto automaatiolaitteissa)

    Industrial Internet is a term used to describe the digitalization of industry. It is an active research direction in Finland, where several groups are already studying it. Despite this, the term Industrial Internet is still relatively vague and there is a lack of concreteness around the topic. The objective of this thesis is to explore the current status of the Industrial Internet and to study the capabilities of automation devices from an Industrial Internet point of view. I explore the Industrial Internet through a literature review in which I study various use cases. The use cases of the Industrial Internet are divided into two main types: platform-centric and machine-to-machine (M2M) communication-centric. The use cases provide a list of characteristics and requirements for the Industrial Internet from these two perspectives. General requirements are, for example, scalability and flexibility, which are achieved through various IT technologies, such as service-oriented architecture. This thesis also contains a practical part in which I configured the control logic and data collection for a test bed that simulates drop tests of active magnetic bearings. The control logic consists of a programmable logic controller and the corresponding software. The data collection consists of the software for collecting and analyzing measurement data and the measuring equipment. Based on the literature review and the practical part, I propose the creation of a cloud-based Industrial Internet platform around the active magnetic bearing test bed. The purpose of the platform is to provide a direction for further research. The creation of the platform consists of two phases: the first phase covers the creation of the platform so that the test bed achieves its current functionality, but cloud-based; the second phase consists of changing the platform to meet the requirements identified in the literature review. The end result will be an application-independent system solution for the Industrial Internet.

    Web service control of component-based agile manufacturing systems

    Current global business competition has resulted in significant challenges for manufacturing and production sectors, with shorter product lifecycles, more diverse and customized products, as well as cost pressures from competitors and customers. To remain competitive, manufacturers, particularly in the automotive industry, require the next generation of manufacturing paradigms supporting flexible and reconfigurable production systems that allow quick system changeovers for various types of products. In addition, closer integration of shop-floor and business systems is required, as indicated by the research efforts investigating "Agile and Collaborative Manufacturing Systems" in supporting the production unit throughout the manufacturing lifecycle. The integration of a business enterprise with its shop floor and lifecycle supply partners is currently achieved only through complex proprietary solutions due to differences in technology, particularly between automation and business systems. The situation is further complicated by the diverse types of automation control devices employed. Recently, the emerging technology of Service-Oriented Architectures (SOAs) and Web Services (WS) has been demonstrated and proven successful in linking business applications. The adoption of this Web Services approach at the automation level, which would enable seamless integration of the business enterprise and the shop-floor system, is an active research topic within the automotive domain. If successful, reconfigurable automation systems formed by a network of collaborative, autonomous, and open control platforms in a distributed, loosely coupled manufacturing environment can be realized through a unifying platform of WS interfaces for device communication. The adoption of SOA and Web Services on embedded automation devices can be achieved by employing the Devices Profile for Web Services (DPWS) protocols, which encapsulate device control functionality as provided services (e.g. device I/O operation, device state notification, device discovery) and business application interfaces into the physical control components of machining automation. This novel approach supports the possibility of integrating pervasive enterprise applications through unifying Web Services interfaces and neutral Simple Object Access Protocol (SOAP) message communication between control systems and business applications over standard Ethernet Local Area Networks (LANs). In addition, the re-configurability of the automation system is enhanced via the utilisation of Web Services throughout an automated control, build, installation, test, maintenance and reuse system lifecycle via the device self-discovery provided by the DPWS protocol...cont'd
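    The following sketch shows the general shape of a SOAP call that a business application might send to a DPWS-enabled automation device, as described above. The endpoint address, XML namespace, and operation name are invented for illustration; a real DPWS deployment would also involve WS-Discovery, metadata exchange, and eventing, which are omitted here.

```python
# Hedged sketch: posting a SOAP 1.2 request to a hypothetical device-hosted service.
# The endpoint, namespace, and <ActuateClamp> operation are illustrative assumptions.
import urllib.request

DEVICE_ENDPOINT = "http://192.168.0.10:4567/ClampService"   # hypothetical hosted service

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:cl="http://example.com/clamp">
  <s:Body>
    <cl:ActuateClamp>
      <cl:Position>closed</cl:Position>
    </cl:ActuateClamp>
  </s:Body>
</s:Envelope>"""

req = urllib.request.Request(
    DEVICE_ENDPOINT,
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```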

    A Dual-Rate Model Predictive Controller for Fieldbus Based Distributed Control Systems

    In modern Distributed Control Systems (DCS), an industrial computer network protocol known as fieldbus is used in chemical, petrochemical, and other process industries for real-time communication between digital controllers, sensors, actuators, and other smart devices. In a closed-loop digital control system, data is transferred from sensor to controller and from controller to actuator cyclically, in a timely but discontinuous fashion, at a specific rate known as the sampling rate or macrocycle, over the fieldbus. According to the current trend of fieldbus technology, in most industrial control systems the sampling rate or macrocycle is fixed at the time of system configuration. This fixed sampling rate makes it impossible to use a multi-rate controller that can automatically switch between multiple sampling rates at run time to gain advantages such as network bandwidth conservation, energy conservation, and reduction of mechanical wear in actuators. This thesis is concerned with the design and implementation of a dual-rate controller which automatically switches between two sampling rates depending on the system's dynamic state. To be more precise, the controller uses the faster sampling rate when the process goes through transient states and the slower sampling rate when the process is in steady-state operation. The controller is based on a Model Predictive Control (MPC) algorithm and a Kalman filter based observer. This thesis starts with the theoretical development of the dual-rate controller design. Subsequently, the developed controller is implemented on a Siemens PCS 7 system for controlling a physical process. The investigation has concluded that this control strategy can indeed lead to conservation of network bandwidth, energy savings in field devices, and reduction of wear in mechanical actuators in fieldbus based distributed control systems.
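    The rate-switching idea can be sketched in a few lines: run a fast macrocycle while the tracking error is large (transient) and a slow one once the error settles into a small band (steady state). In this toy simulation a simple integral controller and a first-order plant stand in for the thesis's MPC algorithm, Kalman filter observer, and Siemens PCS 7 implementation; the macrocycle values and error band are illustrative assumptions.

```python
# Toy illustration of dual-rate sampling only: a fast macrocycle during transients,
# a slow one near steady state. An integral controller and a first-order plant
# stand in for the thesis's MPC algorithm and Kalman filter observer.
FAST_DT, SLOW_DT = 0.1, 1.0   # candidate macrocycles in seconds (illustrative values)
BAND = 0.02                   # |error| below this is treated as steady state


def simulate(setpoint=1.0, steps=400):
    """Run the loop and count how many cycles used each sampling rate."""
    y = u = 0.0               # plant output and controller output
    fast = slow = 0
    for _ in range(steps):
        error = setpoint - y
        dt = FAST_DT if abs(error) > BAND else SLOW_DT   # the dual-rate switch
        fast += dt == FAST_DT
        slow += dt == SLOW_DT
        u += 0.5 * dt * error                            # integral control action
        y += dt * (u - y)                                # first-order plant, Euler step
    return round(y, 3), fast, slow


print(simulate())   # output settles near the setpoint; later cycles run at the slow rate
```

    Using fewer fast samples once the process settles is what translates into the bandwidth, energy, and actuator-wear savings the abstract reports.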

    Device Information Modeling in Automation - A Computer-Scientific Approach

    This thesis presents an approach to device information modeling that is meant to ease the challenges faced by device manufacturers in the automation domain. The basis for this approach is semantic models of the application domain. The author discusses the challenges for integration in the automation domain, especially regarding field devices, device description languages, and fieldbuses. A method for the generation of semantic models is presented, and an approach is discussed that is meant to support the generation of device descriptions for different device description languages. The approach is then evaluated.

    Hierarchical Control of the ATLAS Experiment

    Control systems at High Energy Physics (HEP) experiments are becoming increasingly complex, mainly due to the size, complexity, and data volume associated with the front-end instrumentation. This is particularly visible in the ATLAS experiment at the LHC accelerator at CERN. ATLAS will be the largest particle detector ever built, the result of an international collaboration of more than 150 institutes. The experiment is composed of 9 different specialized sub-detectors that perform different tasks and have different requirements for operation. The system in charge of the safe and coherent operation of the whole experiment is called the Detector Control System (DCS). This thesis presents the integration of the ATLAS DCS into a global control tree following the natural segmentation of the experiment into sub-detectors and smaller sub-systems. The integration of the many different systems composing the DCS includes issues such as back-end organization, process model identification, fault detection, synchronization with external systems, automation of processes, and supervisory control. Distributed control modeling is applied to the widely distributed devices that coexist in ATLAS. Thus, control is achieved by means of many distributed, autonomous, and cooperative entities that are hierarchically organized and follow a finite-state machine logic. The key to the integration of these systems lies in the so-called Finite State Machine (FSM) tool, which is based on two main enabling technologies: a SCADA product and the State Manager Interface (SMI++) toolkit. The SMI++ toolkit has already been used successfully in two previous HEP experiments, providing functionality such as an object-oriented language, a finite-state machine logic, an interface to develop expert systems, and a platform-independent communication protocol. This functionality is used at all levels of the experiment operation process, ranging from the overall supervision down to device integration, enabling the overall sequencing and automation of the experiment. Although the experience gained in the past is an important input for the design of the detector's control hierarchy, further requirements arose due to the complexity and size of ATLAS. In total, around 200,000 channels will be supervised by the DCS, and the final control tree will be hundreds of times bigger than any of its antecedents. Thus, in order to apply a hierarchical control model to the ATLAS DCS, a common approach has been proposed to ensure homogeneity between the large-scale distributed software ensembles of the sub-detectors. A standard architecture and a human interface have been defined, with emphasis on the early detection, monitoring, and diagnosis of faults based on a dynamic fault-data mechanism. This mechanism relies on two parallel communication paths that manage faults while providing a clear description of the detector conditions. The DCS information is split and handled by different types of SMI++ objects: while one path of objects manages the operational mode of the system, the other handles any faults that occur. The proposed strategy has been validated through many different tests, with positive results in both functionality and performance. This strategy has been successfully implemented and constitutes the ATLAS standard for building the global control tree.
During the operation of the experiment, the DCS, responsible for the detector operation, must be synchronized with the data acquisition system, which is in charge of the physics data-taking process. The interaction between the two systems has so far been limited, but it becomes increasingly important as the detector nears completion. A prototype implementation, ready to be used during sub-detector integration, has achieved data reconciliation by mapping the different segments of the data acquisition system onto the DCS control tree. The adopted solution allows the data acquisition control applications to command different DCS sections independently and prevents incorrect physics data taking caused by a failure in a detector part. Finally, the human-machine interface presents and controls the DCS data in the ATLAS control room. The main challenges faced during the design and development phases were how to support the operator in controlling this large system, how to maintain integration across many displays, and how to provide effective navigation. These issues have been solved by combining the functionalities provided by both the SCADA product and the FSM tool. The control hierarchy provides an intuitive structure for the organization of the many different displays that are needed for the visualization of the experiment conditions. Each node in the tree represents a workspace that contains the functional information associated with its abstraction level within the hierarchy. By means of effective navigation, any workspace of the control tree is accessible to the operator or detector expert within a common human interface layout. The interface is modular and flexible enough to be adapted to new operational scenarios, to fulfil the needs of different kinds of users, and to facilitate maintenance during the detector's long lifetime of up to 20 years. The interface has been in use for several months, and the sub-detectors' control hierarchies, together with their associated displays, are currently being integrated into the common human-machine interface.
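    As a loose, simplified illustration of the hierarchical control-tree idea (not the SMI++/FSM tool itself), the sketch below propagates commands down a small tree and summarizes states upwards, with a fault anywhere dominating the parent's state, similar in spirit to the dual operational/fault paths described above. Node names, states, and rules are invented for illustration.

```python
# Toy hierarchical control tree: commands flow down, states are summarized upwards,
# and a FAULT anywhere dominates the parent state. Names and rules are assumptions.
class Node:
    def __init__(self, name, children=None):
        self.name, self.children, self.state = name, children or [], "READY"

    def command(self, action):
        """Propagate a command (e.g. 'START') down to every descendant leaf."""
        for child in self.children:
            child.command(action)
        if not self.children:                        # leaf = actual device
            self.state = "RUNNING" if action == "START" else "READY"

    def summary(self):
        """Derive this node's state from its children (faults dominate)."""
        if not self.children:
            return self.state
        states = [c.summary() for c in self.children]
        if "FAULT" in states:
            return "FAULT"
        return "RUNNING" if all(s == "RUNNING" for s in states) else "MIXED"


# A miniature two-level tree standing in for sub-detectors and their devices.
dcs = Node("DCS", [Node("Pixel", [Node("PS-1"), Node("PS-2")]), Node("TRT", [Node("PS-3")])])
dcs.command("START")
dcs.children[1].children[0].state = "FAULT"          # simulate a device fault
print(dcs.summary())                                  # -> FAULT
```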