
    Shop floor planning and control in integrated manufacturing systems

    The implementation of a shop floor planning and control system is a prerequisite for establishing an effective computer-integrated manufacturing system. A shop floor control system integrates management's production goals with the capabilities and limitations of the manufacturing plant. Shop floor planning begins with a long-term, rough-cut capacity plan and evolves into near-term capacity requirements and input/output plans. Shop floor control provides the status of in-process operations and a measure of the plant's success in executing the plan. Effective use of technology on the shop floor increases the efficiency of the manufacturing plant, and simulation is an important tool in accomplishing this. The use of simulation for planning and control of shop floor activities is a natural outgrowth of its application to the design of such systems. Simulation, when used for production planning and control, is a useful vehicle for providing the discipline necessary for effective shop floor control in integrated manufacturing systems.
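    The input/output planning logic described above is easy to picture with a small simulation. The sketch below is not from the paper; the capacity figure and planned inputs are invented for illustration. It shows how simulated per-period input, output, and work-in-process (WIP) yield the kind of input/output control report used to track a plant's execution of the plan:

        #include <cstdio>
        #include <vector>
        #include <algorithm>

        // Minimal input/output control sketch: one work center with fixed capacity.
        // Planned input and output are in standard hours per period; actual output
        // is limited by capacity and by the work actually available (WIP + input).
        int main() {
            const double capacity = 40.0;   // std. hours the center can produce per period
            std::vector<double> plannedInput = {38, 42, 45, 40, 36, 40};
            double wip = 25.0;              // initial queue, in std. hours

            std::printf("period  input  output    WIP\n");
            for (std::size_t t = 0; t < plannedInput.size(); ++t) {
                const double input  = plannedInput[t];
                // Output cannot exceed capacity, nor the work actually available.
                const double output = std::min(capacity, wip + input);
                wip = wip + input - output; // in-process status after the period
                std::printf("%6zu  %5.1f  %6.1f  %5.1f\n", t + 1, input, output, wip);
            }
            return 0;
        }

    A rising WIP column in such a report signals that planned input exceeds demonstrated output, which is exactly the condition shop floor control is meant to detect.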

    Simulation in Automated Guided Vehicle System Design

    The intense global competition that manufacturing companies face today results in greater product variety and shorter product life cycles. One response to this threat is agile manufacturing, which requires materials handling systems that are equally agile and capable of reconfiguration. As competition in the world marketplace becomes increasingly customer-driven, manufacturing environments must be highly reconfigurable and responsive to accommodate product and process changes, with rigid, static automation systems giving way to more flexible types. Automated Guided Vehicle Systems (AGVS) have such capabilities, and AGV functionality has been developed to improve flexibility and diminish the traditional disadvantages of AGV systems. AGV-system design is, however, a multi-faceted problem with a large number of design factors, many of which are correlated and interdependent, and available methods and techniques fall short of supporting the whole design process. A review of the research reported on AGVS development in combination with simulation revealed that of 39 papers only four were industrially related. Most work addressed the conceptual design phase; little has been reported on the detailed simulation of AGVS.

    Semi-autonomous vehicles (SAVs) are an innovative concept for overcoming the problems of inflexible AGV systems and improving materials handling functionality. The SAV concept introduces a higher degree of autonomy in industrial AGV systems while keeping the man in the loop: autonomy is introduced into industrial applications by explicitly controlling the level of autonomy on different occasions. SAVs are easy to program and easily reconfigurable with respect to navigation systems and material handling equipment. Novel approaches to materials handling such as the SAV concept place new requirements on AGVS development and on the use of simulation as part of the process. Traditional AGV-system simulation approaches do not fully meet these requirements, and the improved functionality of AGVs is not used to its full potential. There is considerable potential for shortening the AGV-system design cycle, and thus the manufacturing system design cycle, while still achieving more accurate solutions well suited for MRS tasks.

    Recent developments in simulation tools for manufacturing have improved production engineering development, and the tools are being adopted more widely in industry; for the development of AGV systems this potential has not been fully exploited. Previous research has focused on the conceptual part of the design process, and many simulation approaches to AGV-system design lack validity. In this thesis a methodology is proposed for the structured development of AGV systems using simulation, with elements of the methodology addressing the development of novel functionality. The objective of the first research case was to identify factors for industrial AGV-system simulation; the second case focuses on simulation in the design of semi-autonomous vehicles; and the third case evaluates a simulation-based design framework. This research study has advanced the field by offering a framework for developing, testing, and evaluating AGV systems, based on concurrent development using a virtual environment. The ability to exploit unique or novel features of AGVs in a virtual environment considerably improves the potential of AGV systems. (Acknowledgements: University of Skövde; the European Commission for funding the INCO/COPERNICUS Project.)
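    The thesis's notion of explicitly controlling the level of autonomy on different occasions can be pictured as a small dispatch routine. The enum, task structure, and behaviors below are illustrative assumptions, not the SAV control software:

        #include <iostream>
        #include <string>

        // Illustrative sketch of explicit autonomy levels for a semi-autonomous
        // vehicle (SAV): the controller decides who resolves each task.
        enum class Autonomy { Manual, Supervised, Autonomous };

        struct Task { std::string description; };

        void dispatch(const Task& task, Autonomy level) {
            switch (level) {
                case Autonomy::Manual:
                    // Man-in-the-loop: the operator drives; the vehicle only reports.
                    std::cout << "Operator executes: " << task.description << '\n';
                    break;
                case Autonomy::Supervised:
                    // Vehicle plans the move but waits for operator confirmation.
                    std::cout << "Vehicle proposes, operator confirms: " << task.description << '\n';
                    break;
                case Autonomy::Autonomous:
                    // Vehicle navigates and handles material without intervention.
                    std::cout << "Vehicle executes autonomously: " << task.description << '\n';
                    break;
            }
        }

        int main() {
            Task pickup{"move pallet from milling to assembly"};
            // The level can be switched per task, mirroring the idea of
            // controlling autonomy explicitly at different occasions.
            dispatch(pickup, Autonomy::Autonomous);
            dispatch(pickup, Autonomy::Manual);
            return 0;
        }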

    Computing at massive scale: Scalability and dependability challenges

    Large-scale Cloud systems and big data analytics frameworks are now widely used for practical services and applications. However, with the increase in data volume, the heterogeneity of workloads and resources, and the dynamic nature of massive user requests, the uncertainty and complexity of resource management and service provisioning increase dramatically, often resulting in poor resource utilization, vulnerable system dependability, and user-perceived performance degradation. In this paper we report our latest understanding of the current and future challenges in this area and discuss both existing and potential solutions, especially those concerning system efficiency, scalability, and dependability. We first introduce a data-driven analysis methodology for characterizing resource and workload patterns and tracing performance bottlenecks in a massive-scale distributed computing environment. We then examine several fundamental challenges and the solutions we are developing to tackle them, including incremental but decentralized resource scheduling, incremental messaging communication, rapid system failover, and request-handling parallelism. We integrate these solutions with our data analysis methodology to establish an engineering approach that facilitates the optimization, tuning, and verification of massive-scale distributed systems. We aim to develop and offer innovative methods and mechanisms for future computing platforms that will provide strong support for new big data and IoE (Internet of Everything) applications.
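    The data-driven characterization step can be pictured as trace analysis: collect per-machine utilization samples and summarize their distribution to locate hotspots. The following is a minimal sketch, with sample values and thresholds invented for illustration rather than taken from the paper:

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        // Sketch of data-driven workload characterization: summarize a utilization
        // trace with percentiles to spot saturated machines in a large cluster.
        double percentile(std::vector<double> samples, double p) {
            std::sort(samples.begin(), samples.end());
            const std::size_t idx =
                static_cast<std::size_t>(p * (samples.size() - 1));
            return samples[idx];
        }

        int main() {
            // One machine's CPU utilization samples (fraction of capacity).
            std::vector<double> cpu = {0.32, 0.41, 0.95, 0.88, 0.37, 0.99, 0.45, 0.91};

            const double p50 = percentile(cpu, 0.50);
            const double p95 = percentile(cpu, 0.95);
            std::printf("median %.2f, p95 %.2f\n", p50, p95);

            // A low median with a high tail suggests bursty load: capacity sits
            // idle most of the time, yet the machine still becomes a bottleneck.
            if (p95 > 0.9 && p50 < 0.5)
                std::printf("bursty hotspot: candidate for rescheduling\n");
            return 0;
        }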

    Austrian High-Performance Computing Meeting (AHPC2020)

    This booklet is a collection of the abstracts presented at the AHPC 2020 conference.

    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high-performance applications is profiled, tweaked, and refactored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the endless massaging that benefits high-performance code, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) that is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high-performance code and brings the benefit of performance tuning to consumer applications where it would otherwise be cost-prohibitive.

    This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning. Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communication links, or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without requiring manual end-user management of the non-deterministic behavior often characteristic of more traditional parallel processing methods.

    Many big-data and high-performance applications involve high-throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied); often it is unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions, but this is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive, and the answer is unfortunately transient (with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that, through low-overhead instrumentation, modeling, and optimization. Online optimization provides a good trade-off between static optimization and online heuristics; to enable it, modeling decisions must be fast and relatively accurate.

    Online modeling and optimization of a stream processing system first requires a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables use of the stream processing paradigm for C++ applications (it is the run-time that is the basis of almost all the work within this dissertation). The application topology is specified by the user; almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe). The resulting framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains of our approaches and to compare performance with other leading stream processing frameworks.

    Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed that is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often necessitates solving for buffer sizes within a finite-capacity queueing network, and we show that a generalized gain/loss network flow model can bootstrap this process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open-source RaftLib framework.
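    To give a flavor of the programming model described above, the sketch below is adapted from RaftLib's published hello-world example; header names and the print-kernel signature may differ across RaftLib versions, so treat the details as assumptions. The user defines kernels with typed ports and specifies only the topology; buffer sizes, placement, and parallelism are left to the run-time to optimize:

        #include <raft>
        #include <raftio>
        #include <string>
        #include <cstdlib>

        // A producer kernel with a single typed output port, in the style of
        // RaftLib's published "hello world" example.
        class hello : public raft::kernel {
        public:
            hello() : raft::kernel() {
                output.addPort<std::string>("0");
            }
            virtual raft::kstatus run() {
                output["0"].push(std::string("Hello, stream processing\n"));
                return raft::stop;  // emit one message, then terminate
            }
        };

        int main() {
            hello producer;
            raft::print<std::string> consumer;  // library-provided sink kernel
            raft::map m;
            // Only the topology is specified here; the run-time handles
            // scheduling, buffer allocation, and potential parallelization.
            m += producer >> consumer;
            m.exe();
            return EXIT_SUCCESS;
        }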