256 research outputs found

    Low power and high performance heterogeneous computing on FPGAs

    The abstract is in the attachment.

    Google Cloud solution for industrial automation systems

    This master’s thesis introduces a possible architecture built on the Google Cloud Platform for storing industrial time-series data from an automation programmable logic controller (PLC), and describes how the different messages from the PLC are categorized. The thesis presents three cases showing how the stored data can be used to gain information about the system. The solution divides software responsibilities between a local entity and a cloud entity. Within the architecture, selected cases of handling data locally and writing messages to the cloud are explained in more detail with code examples. The solution was developed concurrently with a new underlying system, because no easy-to-use, flexible solution existed that met the requirements. Through interfaces and generalization, the solution is not dependent on the underlying system, and the architecture is usable with different systems. The architecture was developed on a cloud platform that enables global distribution together with additional hosted tools.
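    The abstract mentions categorizing PLC messages locally and batching writes to the cloud, but gives no concrete scheme. As an illustration only, here is a minimal Python sketch of that division of responsibility; the category names, message fields, and the injected `send` callable are all hypothetical, not the thesis's actual design or any Google Cloud SDK call.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    # Hypothetical message categories; the thesis's actual categorization
    # scheme is not given in the abstract.
    CATEGORIES = {"measurement", "alarm", "diagnostic"}

    @dataclass
    class PlcMessage:
        topic: str        # e.g. "measurement/line1/temperature" (illustrative)
        payload: float
        timestamp: int    # epoch milliseconds

    def categorize(msg: PlcMessage) -> str:
        """Derive a category from the topic prefix, falling back to 'diagnostic'."""
        head = msg.topic.split("/", 1)[0]
        return head if head in CATEGORIES else "diagnostic"

    class CloudWriter:
        """Local entity that batches messages before handing them to a cloud client."""

        def __init__(self, send: Callable[[list], None], batch_size: int = 3):
            self.send = send              # injected cloud-client call (kept abstract here)
            self.batch_size = batch_size
            self._buffer: List[dict] = []

        def write(self, msg: PlcMessage) -> None:
            self._buffer.append(
                {"category": categorize(msg), "value": msg.payload, "ts": msg.timestamp}
            )
            if len(self._buffer) >= self.batch_size:
                self.flush()

        def flush(self) -> None:
            if self._buffer:
                self.send(self._buffer)
                self._buffer = []
    ```

    Injecting the `send` callable keeps the local entity independent of any particular cloud platform, mirroring the abstract's claim that the architecture is usable with different underlying systems.
    
    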

    Pilvipalvelupohjaisten alustojen hyödyntäminen tuotantoautomaation prosessidatan keräyksessä ja visualisoinnissa (Utilizing cloud-based platforms for collecting and visualizing production automation process data)

    New developments in factory information systems and resource allocation solutions are constantly being taken into practice within the field of manufacturing and production. Customers are turning toward more customized products and requesting further monitoring possibilities for the product itself, for its manufacturing, and for its delivery. A similar paradigm change is taking place within companies’ departments and between clusters of manufacturing stakeholders. Modern cloud-based tools provide the means for attaining these objectives. The technology that evolved from parallel, grid, and distributed computing, at present cited as cloud computing, is one key future paradigm in factory and production automation. Although the terminology is still settling, on multiple occasions the term cloud computing is used when referring to cloud services or cloud resources. Cloud technology is furthermore understood as resources located outside an individual entity’s premises. These resources are pieces of functionality for improving the overall performance of the designed system, and therefore such an architectural style is referred to as Resource-Oriented Architecture (ROA). The most prominent connection method for combining the resources is communication via REST (Representational State Transfer) based interfaces. When combining cloud resources with internet-connected device technology, the Internet of Things (IoT), and furthermore IoT dashboards for creating user interfaces, substantial benefits can be gained. These benefits include a shorter lead time for user interface development, process data gathering, and production monitoring at a higher level of abstraction. This master’s thesis studies modern cloud computing resources and IoT dashboard technologies for gaining process monitoring capabilities usable in the field of university research. During the thesis work, an alternative user group is also kept in mind: deploying similar methods in the manufacturing environments of private production companies. Additionally, the field of Additive Manufacturing (AM) and one of its sub-categories, the Direct Energy Deposition (DED) method, is detailed to gain comprehension of the process monitoring needs inherent in this manufacturing method. Finally, an implementation is developed for monitoring the Tampere University of Technology Direct Energy Deposition research cell, both in real time and by gathering the process data for later review. These functionalities are achieved by harnessing cloud-based infrastructures and resources.
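    The abstract names REST-based interfaces as the connection method between the monitored cell and the cloud resources, without showing a payload or endpoint. The sketch below is a hedged illustration of that idea in Python; the URL, field names, and units are invented for the example and are not the thesis's actual interface.

    ```python
    import json
    import urllib.request
    from datetime import datetime, timezone

    # Hypothetical ingest endpoint; the thesis's actual dashboard URLs are not given.
    INGEST_URL = "https://dashboard.example.com/api/v1/process-data"

    def make_sample(cell: str, quantity: str, value: float, unit: str) -> str:
        """Serialize one process measurement as the JSON body of a REST POST."""
        return json.dumps({
            "cell": cell,
            "quantity": quantity,
            "value": value,
            "unit": unit,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def post_sample(body: str, opener=None) -> int:
        """POST the sample to the ingest resource; `opener` is injectable for tests."""
        req = urllib.request.Request(
            INGEST_URL,
            data=body.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        open_fn = opener or urllib.request.urlopen
        with open_fn(req) as resp:
            return resp.status
    ```

    Treating each measurement as a self-describing JSON resource posted over HTTP is what makes the Resource-Oriented Architecture style described in the abstract practical: any IoT dashboard that speaks REST can consume the same stream.
    
    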

    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high-performance applications is profiled, tweaked, and refactored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the endless massaging that benefits high-performance code, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) that is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high-performance code and brings the benefit of performance tuning to consumer applications, where it would otherwise be cost-prohibitive. This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning. Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communication links, or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without necessitating manual end-user management of the non-deterministic behavior often characteristic of more traditional parallel processing methods. Many big-data and high-performance applications involve high-throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied).
    Often it is unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions. This, however, is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive, and the answer is unfortunately transient (varying with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that through low-overhead instrumentation, modeling, and optimization. Online optimization provides a good trade-off between static optimization and online heuristics. To enable online optimization, modeling decisions must be fast and relatively accurate. Online modeling and optimization of a stream processing system first requires a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables use of the stream processing paradigm in C++ applications (it is the run-time that forms the basis of almost all the work within this dissertation). An application topology is specified by the user; however, almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe). The resultant framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains of our approaches and to compare performance against other leading stream processing frameworks.
    Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed that is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often necessitates solving for buffer sizes within a finite-capacity queueing network. We show that a generalized gain/loss network flow model can bootstrap this process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open source RaftLib framework.
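    The abstract describes the stream processing paradigm as logical kernels connected by streams. RaftLib itself is a C++ template library and its API is not shown here; instead, the following is a minimal, language-agnostic sketch of that data-flow idea in Python, using threads as kernels and bounded queues as streams, with a `None` sentinel marking end-of-stream. All names are illustrative.

    ```python
    from queue import Queue
    from threading import Thread

    def kernel(fn, inq: Queue, outq: Queue) -> Thread:
        """Wrap a function as a streaming kernel: read, compute, write, until EOF."""
        def run():
            while True:
                item = inq.get()
                if item is None:              # sentinel marks end-of-stream
                    if outq is not None:
                        outq.put(None)        # propagate EOF downstream
                    break
                out = fn(item)
                if outq is not None:
                    outq.put(out)
        t = Thread(target=run)
        t.start()
        return t

    # Topology: source -> square -> sink, connected by bounded streams (queues).
    # Bounded capacity models the finite-capacity queueing network the
    # dissertation solves buffer sizes for.
    a, b = Queue(maxsize=4), Queue(maxsize=4)
    results = []
    t1 = kernel(lambda x: x * x, a, b)        # compute kernel
    t2 = kernel(results.append, b, None)      # sink kernel
    for x in range(5):
        a.put(x)                              # source feeds the first stream
    a.put(None)
    t1.join()
    t2.join()
    ```

    Because each kernel touches only its own queues, the run-time is free to replicate a kernel or resize a queue without the application code changing, which is the kind of dynamic manipulation (auto-parallelization, buffer reallocation) the abstract attributes to RaftLib.
    
    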