64 research outputs found

    Time series database in Industrial IoT and its testing tool

    Data gathering is at the essence of the Industrial Internet of Things. The data is time- and event-based, and hence time series data is a key concept in the Industrial Internet of Things; a specific time series database is required to process and store the data. Solution development and choosing the right time series database for an Industrial Internet of Things solution can be difficult. Inefficient comparison of time series databases can lead to wrong choices and consequently to delays and financial losses. This thesis improves the tools for comparing different time series databases in the context of the Industrial Internet of Things. In addition, the thesis identifies the functional and non-functional requirements of a time series database in the Industrial Internet of Things and designs and implements a performance test bench. A practical example of how time series databases can be compared with the identified requirements and the developed test bench is also provided. The example is used to examine how selected time series databases fulfill these requirements. Eight functional requirements and eight non-functional requirements were identified. The functional requirements included, e.g., aggregation support, information models, and hierarchical configurations. The non-functional requirements included, e.g., scalability, performance, and lifecycle. The developed test bench takes an Industrial Internet of Things point of view by testing the databases in three scenarios: write-heavy, read-heavy, and concurrent write and read operations. In the practical example, ABB's cpmPlus History, InfluxDB, and TimescaleDB were evaluated. Both the requirement evaluation and the performance testing showed that cpmPlus History performed best, InfluxDB second best, and TimescaleDB worst. cpmPlus History showed extensive support for the requirements and the best performance in all performance test cases.
InfluxDB showed high performance for data writing, while TimescaleDB showed better performance for data reading.
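The write-heavy and read-heavy scenarios described above can be sketched in miniature. The sketch below is purely illustrative: it uses Python's built-in sqlite3 as an in-memory stand-in for a time series database, and the schema and function names are invented for the example; they do not come from the thesis or from any of the evaluated databases. A concurrent scenario would run both functions from separate threads.

```python
import random
import sqlite3
import time

def write_benchmark(conn, n_points):
    """Write-heavy scenario: bulk-insert n_points timestamped samples."""
    rows = [(i, "sensor-1", random.random()) for i in range(n_points)]
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO samples (ts, tag, value) VALUES (?, ?, ?)", rows)
    conn.commit()
    return n_points / (time.perf_counter() - t0)  # write throughput, points/s

def read_benchmark(conn, bucket):
    """Read-heavy scenario: windowed average, a typical IIoT aggregation query."""
    t0 = time.perf_counter()
    result = conn.execute(
        "SELECT ts / ? AS bucket, AVG(value) FROM samples GROUP BY bucket",
        (bucket,)).fetchall()
    return result, time.perf_counter() - t0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts INTEGER, tag TEXT, value REAL)")
wps = write_benchmark(conn, 10_000)          # 10 000 points, integer timestamps
buckets, elapsed = read_benchmark(conn, 1_000)  # 1 000-tick aggregation windows
print(f"{wps:.0f} writes/s, {len(buckets)} buckets in {elapsed:.4f} s")
```

A real test bench would of course replace the in-memory store with the database under test and report percentile latencies as well as throughput.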

    Monitoring the waste to energy plant using the latest AI methods and tools

    Solid wastes, for instance municipal and industrial wastes, present great environmental concerns and challenges all over the world. This has led to the development of innovative waste-to-energy process technologies capable of handling different waste materials in a more sustainable and energy-efficient manner. However, as in many other complex industrial process operations, waste-to-energy plants require sophisticated process monitoring systems in order to realize very high overall plant efficiencies. Conventional data-driven statistical methods, which include principal component analysis, partial least squares, multivariable linear regression and so forth, are normally applied in process monitoring. Recently, however, the latest artificial intelligence (AI) methods, in particular deep learning algorithms, have demonstrated remarkable performance in several important areas such as machine vision, natural language processing and pattern recognition. The new AI algorithms have gained increasing attention in process industry applications, for instance in areas such as predictive product quality control and machine health monitoring. Moreover, the availability of big-data processing tools and cloud computing technologies further supports the use of deep learning based algorithms for process monitoring. In this work, a process monitoring scheme based on state-of-the-art artificial intelligence methods and cloud computing platforms is proposed for a waste-to-energy industrial use case. The monitoring scheme supports the use of the latest AI methods, leveraging big-data processing tools and taking advantage of available cloud computing platforms. Deep learning algorithms are able to describe non-linear, dynamic and high-dimensionality systems better than most conventional data-based process monitoring methods. Moreover, deep learning based methods are well suited for big-data analytics, unlike traditional statistical machine learning methods, which are less efficient.
Furthermore, the proposed monitoring scheme emphasizes real-time process monitoring in addition to offline data analysis. To achieve this, the monitoring scheme proposes the use of big-data analytics software frameworks and tools such as Microsoft Azure Stream Analytics, Apache Storm, Apache Spark, Hadoop and many others. The availability of open source as well as proprietary cloud computing platforms, AI and big-data software tools all supports the realization of the proposed monitoring scheme.
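As a toy illustration of the kind of real-time monitoring such a scheme targets, the snippet below implements a simple sliding-window outlier check in plain Python. It is a generic sketch, not code from the proposed scheme or from any of the named frameworks (Azure Stream Analytics, Storm, Spark), and the class name, window size and threshold are all invented for the example.

```python
import math
from collections import deque

class StreamMonitor:
    """Minimal sliding-window monitor: flags a sample that deviates more than
    `threshold` standard deviations from the mean of the recent window."""
    def __init__(self, window=50, threshold=3.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x):
        alarm = False
        if len(self.buf) >= 10:  # require a minimal history before alarming
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var)
            alarm = std > 0 and abs(x - mean) > self.threshold * std
        self.buf.append(x)
        return alarm

monitor = StreamMonitor()
# 100 normal readings around 20 degrees, then one step fault at the end
readings = [20.0 + 0.1 * (i % 5) for i in range(100)] + [45.0]
alarms = [i for i, r in enumerate(readings) if monitor.update(r)]
print(alarms)  # the fault at index 100 is flagged
```

A deep learning based monitor would replace the window statistics with a learned model of normal behaviour, but the streaming structure (consume a sample, update state, emit an alarm) stays the same.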

    Cloud Based IoT Architecture

    The Internet of Things (IoT) and cloud computing have grown in popularity over the past decade as the internet becomes faster and more ubiquitous. Cloud platforms are well suited to handle IoT systems as they are accessible and resilient, and they provide a scalable solution to store and analyze large amounts of IoT data. IoT applications are complex software systems and software developers need to have a thorough understanding of the capabilities, limitations, architecture, and design patterns of cloud platforms and cloud-based IoT tools to build an efficient, maintainable, and customizable IoT application. As the IoT landscape is constantly changing, research into cloud-based IoT platforms is either lacking or out of date. The goal of this thesis is to describe the basic components and requirements for a cloud-based IoT platform, to provide useful insights and experiences in implementing a cloud-based IoT solution using Microsoft Azure, and to discuss some of the shortcomings when combining IoT with a cloud platform

    Fault-Tolerant, Scalable and Interoperable IoT Platform

    Master's thesis, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2020. Nowadays the growth of Internet usage is quite visible. Every day the number of devices connected to the Internet increases; everything may be a smart device capable of interacting with the Internet, from smartphones and smartwatches to refrigerators and much more. All of these devices are called things in the Internet of Things. Many of them are constrained devices due to their size, usually very small with low capacities in terms of memory and/or processing power. These kinds of devices need to be very efficient in all of their activities. For example, battery lifetime should be maximized as far as possible so that the need to change each device's battery is minimized. There are many technologies that allow communication between devices. Besides the technologies, protocols may be involved in the communication between the devices in an IoT system. Communication protocols define the behaviour that is followed by things when communicating with each other. For example, in some protocols acknowledgments must be used to ensure data arrival, while in others this feature is not enforced. Many communication protocols are available in the literature. The use of communication protocols and communication models brings many benefits to IoT systems, but these systems may also benefit from using the cloud. One of the biggest struggles in IoT is the fact that things are very constrained devices in terms of resources (CPU and RAM). With the cloud this would no longer be an issue. In addition, the cloud is capable of providing device management, scalability, storage and real-time transmission. The characteristics of the communication protocols were studied, and an innovative system architecture based on micro-services, Kubernetes and Kafka is proposed in this thesis.
This proposal tries to address issues such as scalability, interoperability, fault tolerance, resiliency, availability and simple management of large IoT systems. Supported by Kubernetes, an open-source technology that allows micro-services to be extensible, configurable and automatically managed with fault tolerance, and by Kafka, a distributed event log that uses the publish-subscribe pattern, the proposed architecture is able to deal with a high number of devices producing and consuming data at the same time. The proposed Fault-Tolerant and Interoperable IoT Architecture is a cluster composed of many components (micro-services) that were implemented using Docker containers. The current implementation of the system supports the MQTT, CoAP and REST protocols for incoming data, and the same protocols plus WebSockets for data output. Since the system is based on micro-services, more protocols may be added in a simple way (just a new micro-service must be added). The system is able to convert any protocol into another protocol; e.g., if a message arrives at the system through the MQTT protocol, it can be consumed using the CoAP or REST protocol. When messages are sent to the system, the payload is stored in Kafka independently of the protocol, and when clients request it, it is consumed from Kafka and encapsulated in the client protocol to be sent to the client. In order to evaluate and demonstrate the capabilities of our proposal, a set of experiments was carried out, which allowed us to collect information about the performance of the communication protocols, the system as a whole, Kubernetes and Kafka. From the experiments we were able to conclude that message size is not particularly important, since the system is able to deal with messages from 39 bytes to 2000 bytes. Since we are designing the system for IoT applications, we considered 2000-byte messages to be big messages.
It was also recognized that the system is able to recover from crashed nodes and to respond well in terms of average delay and packet loss when low and high throughput are compared. In this situation there is a significant impact on RAM usage, but the system still works without problems. In terms of scalability, the evaluation of the system through its cluster under-layer platform (Kubernetes) allowed us to understand that the time spent to add new nodes has no direct relation to the number of nodes already running and remains roughly constant. However, the same conclusion is not true for the number of instances that are needed at the higher (application) layer. Here, the time spent to increase the number of instances of a specific application is directly proportional to the number of instances that are already running. With respect to data redundancy and persistence, the experiments showed that the average delay and packet loss of a message sent from a Producer to a Receiver are approximately the same regardless of the number of Kafka instances being used. Additionally, using a high number of partitions has a negative impact on the system's behaviour.
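The protocol-conversion idea described above (payloads stored in Kafka independently of the ingress protocol and re-encapsulated on egress) can be mimicked in a few lines. The sketch below is a deliberately simplified stand-in: the `TopicLog` class imitates a Kafka topic in memory, and the MQTT/CoAP message shapes are invented dictionaries, not real protocol frames or client-library calls.

```python
from collections import defaultdict

class TopicLog:
    """Toy append-only log standing in for a Kafka topic: payloads are
    stored protocol-independently and re-encapsulated on the way out."""
    def __init__(self):
        self.records = defaultdict(list)

    def produce(self, topic, payload):
        # any ingress micro-service lands raw payloads here
        self.records[topic].append(payload)

    def consume(self, topic, offset=0):
        return self.records[topic][offset:]

def ingress_mqtt(log, mqtt_msg):
    # strip the MQTT framing, keep only topic + raw payload
    log.produce(mqtt_msg["topic"], mqtt_msg["payload"])

def egress_coap(log, topic):
    # wrap each stored payload in a minimal CoAP-style response dict
    return [{"code": "2.05 Content", "payload": p} for p in log.consume(topic)]

log = TopicLog()
ingress_mqtt(log, {"topic": "sensors/temp", "payload": b"21.5"})
responses = egress_coap(log, "sensors/temp")
print(responses)
```

In the actual architecture each ingress/egress function would be its own micro-service container, which is why adding a protocol only requires adding a new micro-service.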

    An integrative framework for cooperative production resources in smart manufacturing

    Under the push of the Industry 4.0 paradigm, modern manufacturing companies are dealing with a significant digital transition, with the aim to better address the challenges posed by the growing complexity of globalized businesses (Hermann, Pentek, & Otto, Design principles for industrie 4.0 scenarios, 2016). One basic principle of this paradigm is that products, machines, systems and businesses are always connected to create an intelligent network along the entire factory's value chain. According to this vision, manufacturing resources are being transformed from monolithic entities into distributed components, which are loosely coupled and autonomous but nevertheless provided with the networking and connectivity capabilities enabled by the increasingly widespread Industrial Internet of Things technology. Under these conditions, they become capable of working together in a reliable and predictable manner, collaborating among themselves in a highly efficient way. Such a mechanism of synergistic collaboration is crucial for the correct evolution of any organization, ranging from a multi-cellular organism to a complex modern manufacturing system (Moghaddam & Nof, 2017). Specifically in the last scenario, which is the field of our study, collaboration enables the involved resources to exchange relevant information about the evolution of their context. This information can in turn be processed to make decisions and trigger actions. In this way connected resources can modify their structure and configuration in response to specific business or operational variations (Alexopoulos, Makris, Xanthakis, Sipsas, & Chryssolouris, 2016).
Such a model of “social” and context-aware resources can contribute to the realization of a highly flexible, robust and responsive manufacturing system, an objective particularly relevant in modern factories, as its inclusion in the scope of the priority research lines for the H2020 three-year period 2018-2020 demonstrates (EFFRA, 2016). Interesting examples of these resources are self-organized logistics, which can react to unexpected changes occurring in production, or machines capable of predicting failures on the basis of contextual information and then triggering adjustment processes autonomously. This vision of collaborative and cooperative resources can be realized with the support of several studies in various fields, ranging from information and communication technologies to artificial intelligence. An updated state of the art highlights significant recent achievements that have been making these resources more intelligent and closer to user needs. However, we are still far from an overall implementation of the vision, which is hindered by three major issues. The first one is the limited capability of a large part of the resources distributed within the shop floor to automatically interpret the exchanged information in a meaningful manner (semantic interoperability) (Atzori, Iera, & Morabito, 2010). This issue is mainly due to the high heterogeneity of the data model formats adopted by the different resources used within the shop floor (Modoni, Doukas, Terkaj, Sacco, & Mourtzis, 2016). Another open issue is the lack of efficient methods to fully virtualize the physical resources (Rosen, von Wichert, Lo, & Bettenhausen, 2015), since only by pairing a physical resource with a digital counterpart that abstracts the complexity of the real world is it possible to augment the communication and collaboration capabilities of the physical component.
The third issue is a side effect of the ongoing ICT evolutions affecting all manufacturing companies and consists in the continuous growth of the number of threats and vulnerabilities, which can jeopardize the cybersecurity of the overall manufacturing system (Wells, Camelio, Williams, & White, 2014). For this reason, aspects related to cyber-security should be considered at the early stages of the design of any ICT solution, in order to prevent potential threats and vulnerabilities. All three of the above-mentioned open issues have been addressed in this research work with the aim to explore and identify a precise, secure and efficient model of collaboration among the production resources distributed within the shop floor. This document illustrates the main outcomes of the research, focusing mainly on the Virtual Integrative Manufacturing Framework for resources Interaction (VICKI), a potential reference architecture for a middleware application enabling semantic-based cooperation among manufacturing resources. Specifically, this framework provides a technological and service-oriented infrastructure offering an event-driven mechanism that dynamically propagates changing factors to the interested devices. The proposed system supports the coexistence and combination of physical components and their virtual counterparts in a network of interacting collaborative elements in constant connection, thus turning the manufacturing system into a cooperative Cyber-Physical Production System (CPPS) (Monostori, 2014). Within this network, the information coming from the production chain can be promptly and seamlessly shared, distributed and understood by any actor operating in such a context. In order to overcome the problem of limited interoperability among the connected resources, the framework leverages a common data model based on Semantic Web technologies (SWT) (Berners-Lee, Hendler, & Lassila, 2001).
The model provides a shared understanding of the vocabulary adopted by the distributed resources during their knowledge exchange. In this way, the model allows heterogeneous data streams to be integrated into a coherent, semantically enriched scheme that represents the evolution of the factory objects, their context and their smart reactions to all kinds of situations. The semantic model is also machine-interpretable and re-usable. In addition to modeling, the virtualization of the overall manufacturing system is empowered by the adoption of agent-based modeling, which contributes to hiding and abstracting the complexity of the control functions of the cooperating entities, thus providing the foundations to achieve a flexible and reconfigurable system. Finally, in order to mitigate the risk of internal and external attacks against the proposed infrastructure, the potential of a strategy based on the analysis and assessment of the manufacturing system's cyber-security aspects, integrated into the context of the organization's business model, is explored. To test and validate the proposed framework, demonstration scenarios have been identified, which are thought to represent different significant case studies of the factory's life cycle. To prove the correctness of the approach, the validation of an instance of the framework is carried out within a real case study. Moreover, since for data-intensive systems such as the manufacturing system the quality of service (QoS) requirements in terms of latency, efficiency, and scalability are stringent, these requirements are evaluated in a real case study by means of a defined benchmark, showing the impact of the data storage, of the connected resources and of their requests.
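The event-driven mechanism described above, where changing factors are dynamically propagated to the interested devices, resembles a publish-subscribe observer pattern, which might be sketched as follows. The `ContextBroker` class and the property names are hypothetical illustrations, not code from the VICKI framework.

```python
class ContextBroker:
    """Minimal event-driven broker: virtual counterparts of resources
    subscribe to context properties and are notified on value changes."""
    def __init__(self):
        self.subscribers = {}   # property name -> list of callbacks
        self.state = {}         # last known value per property

    def subscribe(self, prop, callback):
        self.subscribers.setdefault(prop, []).append(callback)

    def publish(self, prop, value):
        if self.state.get(prop) != value:   # propagate only actual changes
            self.state[prop] = value
            for cb in self.subscribers.get(prop, []):
                cb(prop, value)

notifications = []
broker = ContextBroker()
broker.subscribe("machine-1/status", lambda p, v: notifications.append((p, v)))
broker.publish("machine-1/status", "failure-predicted")
broker.publish("machine-1/status", "failure-predicted")  # duplicate, suppressed
print(notifications)
```

In the framework proper the propagated values would additionally be expressed against the shared semantic model, so that every subscriber interprets them identically.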

    Pilvipalvelupohjaisten alustojen hyödyntäminen tuotantoautomaation prosessidatan keräyksessä ja visualisoinnissa (Utilizing cloud-based platforms for production automation process data collection and visualization)

    New developments in factory information systems and resource allocation solutions are constantly being taken into practice in manufacturing and production. Customers are turning towards more customized products and requesting further monitoring possibilities for the product itself, for its manufacturing, and for its delivery. A similar paradigm change is taking place within companies' departments and between clusters of manufacturing stakeholders. Modern cloud-based tools provide the means for reaching these objectives. The technology that evolved from parallel, grid and distributed computing, at present cited as cloud computing, is one key future paradigm in factory and production automation. With the terminology still settling, cloud computing is on many occasions the term used when referring to cloud services or cloud resources. Cloud technology is furthermore understood as resources located outside an individual entity's premises. These resources are pieces of functionality used to gain overall performance of the designed system, and as such the architectural style is referred to as Resource-Oriented Architecture (ROA). The most prominent method for combining the resources is communication via REST (Representational State Transfer) based interfaces. When combining cloud resources with internet-connected device technology, the Internet of Things (IoT), and furthermore IoT dashboards for creating user interfaces, substantial benefits can be gained. These benefits include shorter lead times for user interface development, process data gathering, and production monitoring at a higher abstraction level. This Master's Thesis studies modern cloud computing resources and IoT dashboard technologies for gaining process monitoring capabilities usable in university research. During the thesis work, an alternative user group is also kept in mind: deploying similar methods in private production companies' manufacturing environments. Additionally, the field of Additive Manufacturing (AM) and one of its sub-categories, the Direct Energy Deposition (DED) method, is detailed to gain comprehension of the process monitoring needs inherent in the manufacturing method in question. Finally, an implementation is developed for monitoring the Tampere University of Technology Direct Energy Deposition research cell, both monitoring the process in real time and gathering the process data for later review. These functionalities are achieved by harnessing cloud-based infrastructures and resources.
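The Resource-Oriented Architecture idea above, where functionality is exposed as resources addressed by path and manipulated with REST-style verbs, can be illustrated with a minimal in-memory sketch. The `RestResourceStore` class, the resource paths and the sample payload are invented for illustration and do not describe the thesis implementation.

```python
class RestResourceStore:
    """Toy Resource-Oriented store: process data addressed by URI path
    and manipulated with REST-style verbs (GET/PUT)."""
    def __init__(self):
        self.resources = {}

    def put(self, path, representation):
        created = path not in self.resources
        self.resources[path] = representation
        return 201 if created else 200   # HTTP-style: 201 Created / 200 OK

    def get(self, path):
        if path not in self.resources:
            return 404, None             # 404 Not Found
        return 200, self.resources[path]

store = RestResourceStore()
status_put = store.put("/cells/ded-1/temperature", {"value": 1830, "unit": "C"})
status_get, body = store.get("/cells/ded-1/temperature")
print(status_put, status_get, body)
```

In a cloud deployment the store would sit behind an HTTP server and an IoT dashboard would poll or subscribe to the same resource paths to render the process view.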

    A Digital Twin Architecture for a Water Distribution System.

    Mechanical and Mechatronic Engineering