3,963 research outputs found

    Effective knowledge transfer to SMEs

    EIM examined the extent to which small and medium-sized enterprises can be stimulated to absorb more know-how, for instance about new process technology, and to use that know-how to upgrade in-company business processes. The study focuses primarily on the cluster of businesses hardly involved in technological innovation, and examines the degree to which knowledge about marketing and knowledge management is employed to stimulate the absorption of know-how among lagging businesses.

    Networked Control Systems for Electrical Drives


    Knowledge management practices in large construction organisations

    This paper investigates how large UK construction organisations manage their knowledge assets. It then proposes STEPS, a mechanism for benchmarking an organisation’s knowledge management maturity.

    Requirements-driven design of autonomic application software

    Autonomic computing systems reduce software maintenance costs and management complexity by taking on the responsibility for their own configuration, optimization, healing, and protection. These tasks are accomplished by switching at runtime to a different system behaviour - one that is more efficient, more secure, more stable, etc. - while still fulfilling the main purpose of the system. Thus, identifying the objectives of the system, analyzing alternative ways in which these objectives can be met, and designing a system that supports all or some of these alternative behaviours is a promising way to develop autonomic systems. This paper proposes the use of requirements goal models as a foundation for such a software development process and demonstrates the approach with an example.
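    The runtime switching among alternative behaviours that all satisfy the same goal can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual design; all names (Goal, Behaviour, the context keys) are hypothetical.

    ```python
    # Hypothetical sketch of goal-driven behaviour switching: a goal keeps
    # several alternative behaviours, and the system selects at runtime the
    # most preferred alternative whose precondition holds in the current
    # environment. Names and thresholds are illustrative, not from the paper.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Behaviour:
        name: str
        # Precondition: is this alternative viable in the current context?
        applicable: Callable[[Dict[str, float]], bool]

    @dataclass
    class Goal:
        purpose: str
        alternatives: List[Behaviour]  # ordered by preference

    def select(goal: Goal, context: Dict[str, float]) -> Behaviour:
        # Autonomic reconfiguration: first (most preferred) alternative
        # that still fulfils the goal under current conditions wins.
        for b in goal.alternatives:
            if b.applicable(context):
                return b
        raise RuntimeError(f"no behaviour satisfies goal: {goal.purpose}")

    serve = Goal(
        purpose="serve requests",
        alternatives=[
            Behaviour("fast-insecure", lambda c: c["threat"] < 0.2),
            Behaviour("encrypted", lambda c: c["cpu_load"] < 0.9),
            Behaviour("degraded", lambda c: True),  # always-available fallback
        ],
    )

    print(select(serve, {"threat": 0.5, "cpu_load": 0.4}).name)  # encrypted
    ```

    A requirements goal model would supply the alternatives and their conditions; the selection loop is what "switching behaviours while fulfilling the main purpose" amounts to at runtime.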

    PLC CONTROLLED AUTOMATED GUIDED VEHICLE (AGV)

    A programmable logic controller (PLC) is a specialized computer designed for industrial control applications. An automated guided vehicle (AGV) is a self-propelled (driverless) vehicle, widely used in material handling systems, that is guided along defined pathways and controlled by a computer. The objective of this project is to design an AGV prototype controlled by a PLC for industrial material handling. Most AGVs used in industry are controlled by a microprocessor; the main focus of this project is a reliable, easily maintained, PLC-controlled AGV. A significant feature of the project is the use of sensors to guide the vehicle along the pre-determined path, instead of the embedded-wire guidance method. The initial stage of the project involved feasibility studies of the PLC-controlled AGV, covering the capabilities of the PLC and of the automated guided vehicle system (AGVS). The AGV design required determining the type of hardware needed, for example the PLC (CPM2A), metal sensors, a rechargeable battery, and power-window motors. The AGV and its path were then designed and implemented: the prototype was built part by part and assembled. Two safety features were added to the prototype, i.e., an emergency reset and obstacle detection using a sensor. The ladder diagram for controlling the AGV was designed based on the path design and tested with a PLC CPM1A training kit. The input and output components and the rechargeable battery were interfaced with the PLC on the AGV. Finally, the PLC-controlled AGV was tested, and some modifications were made to the ladder diagram and to the AGV. The project was successfully executed, with a PLC controlling the AGV as it finds its route along a predefined path.
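    The sensor-guided path following described above can be sketched in a few lines. This is an assumed control logic, not the project's actual ladder diagram: two guide-path sensors straddle the path, and the controller steers back toward whichever side still detects it.

    ```python
    # Minimal sketch (assumed logic, not the project's ladder program) of
    # two-sensor path following: if both sensors see the guide path, drive
    # straight; if only one side sees it, the vehicle has drifted and must
    # steer back toward that side; if neither sees it, stop for safety.
    def steer(left_detect: bool, right_detect: bool) -> str:
        """Map the two guide-path sensor inputs to a drive command."""
        if left_detect and right_detect:
            return "forward"     # path centred under the vehicle
        if left_detect:
            return "turn_left"   # path drifting left: steer back left
        if right_detect:
            return "turn_right"  # path drifting right: steer back right
        return "stop"            # path lost: halt (safety behaviour)

    print(steer(True, True))   # forward
    ```

    A ladder diagram implements the same decision table with sensor inputs as contacts and motor outputs as coils; the project's emergency-reset and obstacle-detection inputs would simply force the stop branch.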

    The Motivation, Architecture and Demonstration of Ultralight Network Testbed

    In this paper we describe progress in the NSF-funded UltraLight project and a recent demonstration of UltraLight technologies at SuperComputing 2005 (SC|05). The goal of the UltraLight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. Thus we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we present the motivation for, and an overview of, the UltraLight project. We then cover early results in the various working areas of the project. The remainder of the paper describes our experience with the UltraLight network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the UltraLight backbone network. The exercise highlighted the benefits of UltraLight's research and development efforts, which are enabling new and advanced methods of distributed scientific data analysis.
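    The "kernel setup and application tuning" mentioned for high-bandwidth transfers typically involves enlarging TCP socket buffers so a single stream can fill a long, fat pipe. The settings below are a generic Linux illustration, not the project's actual configuration; the values are assumptions chosen for a high bandwidth-delay-product path.

    ```shell
    # Generic Linux TCP tuning knobs often raised for high-bandwidth,
    # high-latency transfers (illustrative values, not the UltraLight
    # project's recorded configuration). Requires root.
    sysctl -w net.core.rmem_max=134217728               # max receive buffer (128 MiB)
    sysctl -w net.core.wmem_max=134217728               # max send buffer (128 MiB)
    sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"  # min/default/max autotuning
    sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
    ```

    The guiding rule is that the maximum buffer should be at least the bandwidth-delay product of the path, e.g. 10 Gbps x 100 ms RTT = 125 MB.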

    The Design and Demonstration of the Ultralight Testbed

    In this paper we present the motivation, the design, and a recent demonstration of the UltraLight testbed at SC|05. The goal of the UltraLight testbed is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. To achieve its goal we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we first present early results in the various working areas of the project. We then describe our experience with the network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many Grid computing sites.