
    Multi Site Coordination using a Multi-Agent System

    A new approach to coordinating decisions in a multi-site system is proposed. This approach is based on a multi-agent concept and on the principle of a distributed network of enterprises. For this purpose, each enterprise is defined as autonomous and operates simultaneously at the local and global levels. The basic component of our approach is the so-called Virtual Enterprise Node (VEN), where the enterprise network is represented as a set of tiers (as in a product breakdown structure). Within the network, each partner constitutes a VEN, which is in contact with several customers and suppliers. Exchanges between the VENs ensure autonomy of decision and guarantee the consistency of information and material flows. Only two complementary VEN agents are necessary: one for external interactions, the Negotiator Agent (NA), and one for planning internal decisions, the Planner Agent (PA). If supply problems occur in the network, two further agents are defined: the Tier Negotiator Agent (TNA), working at the tier level only, and the Supply Chain Mediator Agent (SCMA), working at the level of the enterprise network. These two agents are active only when a perturbation occurs; otherwise, the VENs process the flow of information alone. With this new approach, managing the enterprise network becomes much more transparent and resembles managing a single enterprise in the network. The use of a Multi-Agent System (MAS) allows physical distribution of the decisional system and provides a heterarchical organization structure with decentralized control that guarantees the autonomy of each entity and the flexibility of the network.
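    The NA/PA split described in the abstract can be illustrated with a minimal sketch: a Planner Agent handles internal capacity decisions, while a Negotiator Agent propagates unmet demand to the next supplier tier. The class names mirror the paper's terminology, but the capacity model and method names are illustrative assumptions, not the authors' implementation.

    ```python
    # Hedged sketch of a two-agent VEN; the capacity/shortfall model is invented
    # for illustration and is not taken from the paper.

    class PlannerAgent:
        """Plans internal production decisions for a VEN (the PA role)."""
        def __init__(self, capacity):
            self.capacity = capacity

        def plan(self, quantity):
            # Produce what local capacity allows; the rest becomes a supply need.
            produced = min(quantity, self.capacity)
            return produced, quantity - produced

    class NegotiatorAgent:
        """Handles external exchanges with customers and suppliers (the NA role)."""
        def __init__(self, planner, suppliers):
            self.planner = planner
            self.suppliers = suppliers

        def handle_order(self, quantity):
            produced, shortfall = self.planner.plan(quantity)
            # Propagate unmet demand down the supplier tier, like VEN exchanges.
            for supplier in self.suppliers:
                if shortfall <= 0:
                    break
                produced += supplier.handle_order(shortfall)
                shortfall = quantity - produced
            return produced

    # A two-tier network: one customer-facing VEN backed by one supplier VEN.
    supplier = NegotiatorAgent(PlannerAgent(capacity=30), suppliers=[])
    customer_facing = NegotiatorAgent(PlannerAgent(capacity=50), suppliers=[supplier])
    print(customer_facing.handle_order(70))  # 50 produced locally + 20 from the supplier tier
    ```

    Each VEN only talks to its own customers and suppliers, which is what lets the network remain decentralized: no agent in this sketch sees more than one tier.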

    Linking design and manufacturing domains via web-based and enterprise integration technologies

    The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. In order to meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between communicating early design ideas and evaluating manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management (PLM) systems, enterprise resource planning (ERP) systems, aggregate process planning systems, workflow management and data exchange formats. A case study is used to demonstrate these technologies, illustrated by adding manufacturing knowledge to generate alternative early process plans, which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan.
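    The last step of the abstract, comparing alternative early process plans against available capacity, can be sketched as a simple rough-cut capacity check. The work centres, hours and plan data below are illustrative assumptions, not figures from the case study.

    ```python
    # Hedged sketch of rough-cut capacity planning over alternative process
    # plans; all data and names are invented for illustration.

    # Alternative early process plans: required hours per work centre.
    process_plans = [
        {"milling": 12.0, "assembly": 4.0},   # plan A
        {"turning": 9.0, "assembly": 5.0},    # plan B
    ]

    # Available hours per work centre in the planning period (ERP master data).
    capacity = {"milling": 10.0, "turning": 10.0, "assembly": 6.0}

    def overload(plan):
        """Total hours by which a plan exceeds available capacity."""
        return sum(max(0.0, hours - capacity.get(wc, 0.0))
                   for wc, hours in plan.items())

    # A rough-cut check picks the plan with the smallest capacity overload.
    best = min(process_plans, key=overload)
    print(best, overload(best))  # plan B fits within capacity (overload 0.0)
    ```

    In the paper's architecture this comparison would be fed by manufacturing knowledge from the PLM side and by capacity data from the ERP side; here both are reduced to dictionaries to keep the sketch self-contained.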

    Impact Evaluation of Interoperability Decision Variables on P2P Collaboration Performances

    This article deals with evaluating the impact of interoperability decision variables on the performance indicators of business processes. The case of partner companies is studied to show the value of an Interoperability Service Utility (ISU) for business processes in a peer-to-peer (P2P) collaboration. Information described in the format and ontology of a broadcasting entity is transformed by the ISU into information in the format and ontology of the receiving entity, depending on the available interoperation resources. These resources can be human operators with defined skill levels or software transformation modules in predefined languages. A design methodology for a global simulation model estimating the impact of interoperability decision variables on the performance indicators of business processes is proposed. Its implementation in an industrial collaboration case demonstrates its efficiency and its value in motivating investment in enterprise interoperability technologies.
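    The ISU's core role, translating a message from the broadcasting entity's format and ontology into the receiving entity's, can be sketched as a field-renaming transformation. The term mapping and message fields below are invented for illustration; the paper's actual transformations may involve richer ontology matching.

    ```python
    # Hedged sketch of the ISU transformation step; mappings are illustrative.

    # Broadcasting entity's vocabulary -> receiving entity's vocabulary.
    term_map = {"purchase_order": "order", "qty": "quantity", "due": "delivery_date"}

    def isu_transform(message, mapping):
        """Rename message fields into the receiver's vocabulary; unknown
        fields pass through unchanged."""
        return {mapping.get(field, field): value for field, value in message.items()}

    sent = {"purchase_order": "PO-17", "qty": 250, "due": "2024-06-01"}
    received = isu_transform(sent, term_map)
    print(received)  # {'order': 'PO-17', 'quantity': 250, 'delivery_date': '2024-06-01'}
    ```

    Whether this mapping is executed by a software module or by a human operator with a given skill level is exactly the kind of interoperability decision variable whose impact the article's simulation model estimates.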

    A scalable application server on Beowulf clusters : a thesis presented in partial fulfilment of the requirement for the degree of Master of Information Science at Albany, Auckland, Massey University, New Zealand

    Application performance and scalability of a large distributed multi-tiered application are core requirements for most of today's critical business applications. I have investigated the scalability of a J2EE application server using the standard ECperf benchmark application on the Massey Beowulf clusters, namely the Sisters and the Helix. My testing environment consists of open-source software: the integrated JBoss-Tomcat as the application server and web server, along with PostgreSQL as the database. My testing programs were run on the clustered application server, which provides replication of Enterprise Java Bean (EJB) objects. I have completed various centralized and distributed tests using the JBoss cluster. I concluded that clustering the application server and web server will effectively increase the performance of the applications running on them, given sufficient system resources. Application performance will scale up to the point where a bottleneck occurs in the testing system; the bottleneck could be any resource in the testing environment: the hardware, software, network or the running application. Performance tuning for a large-scale J2EE application is a complicated issue, tied to the resources available. However, by carefully identifying the performance bottleneck in the system across hardware, software, network, operating system and application configuration, I can improve the performance of J2EE applications running on a Beowulf cluster. Software bottlenecks can be solved by changing default settings; hardware bottlenecks, on the other hand, are harder to resolve unless more investment is made in faster, higher-capacity hardware.
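    The scaling behaviour the abstract describes, throughput growing with added nodes until a shared bottleneck caps it, can be sketched with a toy model. The per-node and database limits below are invented numbers, not measurements from the thesis.

    ```python
    # Hedged model of cluster scaling with a shared bottleneck (e.g. the
    # single PostgreSQL instance); all figures are illustrative assumptions.

    PER_NODE_RPS = 120.0   # requests/s one app-server node can sustain
    DB_LIMIT_RPS = 400.0   # ceiling imposed by the shared database tier

    def cluster_throughput(nodes):
        """Aggregate throughput: linear in nodes until the bottleneck binds."""
        return min(nodes * PER_NODE_RPS, DB_LIMIT_RPS)

    for n in (1, 2, 3, 4, 5):
        print(n, cluster_throughput(n))  # flattens at 400.0 from 4 nodes on
    ```

    This matches the thesis's conclusion: adding nodes helps only up to the bottleneck, after which tuning (a software fix) or hardware investment is required to raise the ceiling.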