12 research outputs found

    Fitting simulation input models for correlated traffic data

    The adequate representation of input models is an important step in building valid simulation models. Modeling independent and identically distributed (i.i.d.) data is well established in simulation, but in some application areas, such as computer and communication networks, the i.i.d. assumption is known to be violated in practice: interarrival times or packet sizes, for example, exhibit autocorrelation over a large number of lags. Moreover, neglecting these correlations can seriously compromise the validity of the simulation model. Stochastic processes that can model such autocorrelations, e.g. Autoregressive-To-Anything (ARTA) processes and Markovian Arrival Processes (MAPs), have been proposed in the past, and fitting algorithms have more recently been developed to set their parameters so that they resemble the behavior of observations from a real system; nevertheless, integrating correlated processes into simulation models remains a challenge. In this work, ARTA processes are extended in several ways to meet the requirements of simulating models of computer and communication systems. First, ARTA processes are extended to use an Autoregressive Moving Average (ARMA) process instead of a pure Autoregressive (AR) base process, so that a large number of autocorrelation lags can be captured while the model size stays small. Second, they are enabled to use the flexible class of acyclic Phase-type distributions as marginal distribution. To support the use of these novel processes in simulation models, a fitting algorithm is presented, software for fitting and simulating the processes is developed, and the tools are integrated into the toolkit ProFiDo, which provides a complete framework for fitting and analyzing different stochastic processes. By means of synthetically generated and real network traces it is shown that the presented stochastic processes provide a good approximation of both the marginal distribution and the correlation structure of the traces, while yielding a compact process description.
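The ARTA construction described above can be sketched in a few lines: an AR(1) Gaussian base process is pushed through the standard normal CDF and then through the inverse CDF of the target marginal (here exponential; the thesis uses ARMA base processes and acyclic Phase-type marginals, which are omitted for brevity). Function name and parameters are illustrative, not from the thesis:

```python
import math
import random

def arta_series(n, alpha=0.8, rate=1.0, seed=42):
    """ARTA-style generator: an AR(1) base process with N(0, 1) stationary
    marginal is mapped through the standard normal CDF and then through the
    inverse CDF of an exponential marginal with the given rate. The output
    follows the target marginal while inheriting (approximately) the base
    process's autocorrelation structure."""
    rng = random.Random(seed)
    # Innovation std chosen so the AR(1) base process is stationary N(0, 1).
    sigma_eps = math.sqrt(1.0 - alpha * alpha)
    z = rng.gauss(0.0, 1.0)
    out = []
    for _ in range(n):
        z = alpha * z + rng.gauss(0.0, sigma_eps)
        u = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
        u = min(max(u, 1e-12), 1.0 - 1e-12)             # guard the tails
        out.append(-math.log(1.0 - u) / rate)           # inverse exponential CDF
    return out
```

With alpha near 1 the series shows strong positive lag-1 correlation while its histogram stays exponential, which is exactly the decoupling of marginal and dependence structure that ARTA provides.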

    Markovian Workload Characterization for QoS Prediction in the Cloud.

    Resource allocation in the cloud is usually driven by performance predictions, such as estimates of the future incoming load to the servers or of the quality of service (QoS) offered by applications to end users. In this context, characterizing web workload fluctuations accurately is fundamental to understanding how to provision cloud resources under time-varying traffic intensities. In this paper, we investigate Markovian Arrival Processes (MAPs) and the related MAP/MAP/1 queueing model as a tool for performance prediction of servers deployed in the cloud. MAPs are a special class of Markov models used as a compact description of the time-varying characteristics of workloads. In addition, MAPs can fit heavy-tailed distributions, which are common in HTTP traffic, and can be easily integrated within analytical queueing models to predict system performance efficiently without simulation. By comparison with trace-driven simulation, we observe that existing techniques for MAP parameterization from HTTP log files often lead to inaccurate performance predictions. We then define a maximum-likelihood method for fitting MAP parameters based on data commonly available in Apache log files, and a new technique to cope with batch arrivals, which are notoriously difficult to model accurately. Numerical experiments demonstrate the accuracy of our approach for performance prediction of web systems. © 2011 IEEE
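The MAP mechanics referred to above can be illustrated with a small simulator: from phase i the process holds for an exponential time with rate -D0[i][i], then takes either a hidden phase change (rates in D0 off-diagonal) or an arrival-generating transition (rates in D1). The matrices below are arbitrary examples, not fitted to any trace, and the function name is ours:

```python
import random

def simulate_map(D0, D1, n_arrivals, seed=7):
    """Draw interarrival times from a Markovian Arrival Process (MAP)
    given its hidden-transition matrix D0 and arrival-transition matrix D1
    (rows of D0 + D1 must sum to zero)."""
    rng = random.Random(seed)
    k = len(D0)
    i = 0                      # current phase (non-stationary start, for brevity)
    clock = 0.0                # time elapsed since the last arrival
    arrivals = []
    while len(arrivals) < n_arrivals:
        total = -D0[i][i]      # total event rate out of phase i
        clock += rng.expovariate(total)
        # Competing transitions: hidden (D0, j != i) and arrival-generating (D1).
        events = [(('hidden', j), D0[i][j]) for j in range(k) if j != i]
        events += [(('arrival', j), D1[i][j]) for j in range(k)]
        r = rng.uniform(0.0, total)
        acc = 0.0
        for (kind, j), rate in events:
            acc += rate
            if r <= acc:
                break
        if kind == 'arrival':
            arrivals.append(clock)
            clock = 0.0
        i = j
    return arrivals
```

Because consecutive interarrival times share the hidden phase, the sampled sequence is autocorrelated, unlike a renewal process with the same marginal.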

    MODEL-BASED APPROACH TO THE UTILIZATION OF HETEROGENEOUS NON-OVERLAPPING DATA IN THE OPTIMIZATION OF COMPLEX AIRPORT SYSTEMS

    Simulation and optimization have been widely used in air transportation, particularly for determining how flight operations might evolve. With regard to passengers and the services provided to them, however, this is not the case, in large part because the data required for such analysis are harder to collect, requiring timely surveys and significant human labor. The ubiquity of always-connected, inexpensive smart devices has made it possible to continuously collect passenger information for passenger-centric solutions such as the automatic mitigation of passenger traffic. Using these devices, it is possible to capture dwell times, transit times, and delays directly from the customers. The data, however, are often sparse and heterogeneous, both spatially and temporally. For instance, the observations come at different times and have different levels of accuracy depending on the location, making it challenging to develop a precise network model of airport operations. The objective of this research is to provide online methods to sequentially correct the estimates of the dynamics of a system of queues despite noisy, quickly changing, and incomplete information. First, a sequential change point detection scheme based on a generalized likelihood ratio test is developed to detect a change in the dynamics of a single queue by using a combination of waiting times, time spent in queue, and queue-length measurements. A trade-off is made between the accuracy of the tests, the speed of the tests, the costs of the tests, and the value of utilizing the observations jointly or separately. The contribution is a robust detection methodology that quickly detects a change in queue dynamics from correlated measurements. In the second part of the work, a model-based estimation tool is developed to update the service rate distribution for a single queue from sparse and noisy airport operations data. Model Reference Adaptive Sampling is used in the loop to update a generalized gamma distribution for the service rates within a simulation of the queue at an airport's immigration center. The contribution is a model-predictive tool to optimize the service rates based on waiting-time information. Together, the two frameworks allow the analysis of heterogeneous passenger data sources to enable the tactical mitigation of airport passenger traffic delays.
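A simplified sketch of the likelihood-ratio idea behind such sequential change detection: a one-sided CUSUM for a rate increase in exponential waiting times. The dissertation's GLR test estimates the post-change parameters rather than fixing them, and combines several measurement types; the fixed rates and threshold below are illustrative assumptions:

```python
import math
import random

def cusum_exponential(samples, lam0, lam1, threshold):
    """One-sided CUSUM for detecting a rate increase (lam0 -> lam1) in
    exponential observations. Accumulates the per-observation log-likelihood
    ratio, clipped at zero, and alarms when it exceeds the threshold.
    Returns the alarm index, or None if no change is declared."""
    s = 0.0
    for t, x in enumerate(samples):
        # log f(x; lam1) - log f(x; lam0) for exponential densities
        llr = math.log(lam1 / lam0) - (lam1 - lam0) * x
        s = max(0.0, s + llr)
        if s > threshold:
            return t
    return None
```

Before the change the log-likelihood ratio has negative drift, so the statistic hovers near zero; after the change the drift turns positive and the statistic climbs to the threshold within a short delay.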

    Eliminating Negative Effects of Autocorrelated Processes at Merges (Eliminierung negativer Effekte autokorrelierter Prozesse an Zusammenführungen)

    This thesis presents a novel priority rule for controlling material flows at merges. The main field of application is intralogistics transport systems, although the findings can be transferred to arbitrary transport and queueing systems. In contrast to previous research and development, the thesis assumes autocorrelated arrival processes. Until now, arrivals have usually been assumed to be uncorrelated, and priority control at merges has paid no special attention to autocorrelated streams. Investigations show, however, that autocorrelated material flows must be expected with high confidence and that, in this case, a considerable influence on system performance must be assumed. Within the scope of this work, 68 real-world datasets from different companies were examined, with the result that about 95% of the material flows exhibit autocorrelation. It is further shown that autocorrelation arises intrinsically in material flow systems. Its consequences are longer cycle times, more volatile system behavior, and a higher likelihood of deadlocks. To eliminate these effects at merges, the thesis introduces a new priority rule, HAFI (Highest Autocorrelated First), which prioritizes arrival processes according to their autocorrelation. Concretely, priority is initially granted according to First Come First Served until a direction-specific queue-length threshold is exceeded; the threshold values are derived from the autocorrelation of the arrival processes, and priority goes to the stream with the largest exceedance of its threshold. The thesis also presents a heuristic, DyDeT, for automatically determining and dynamically adapting these thresholds. A simulation study shows that HAFI in combination with DyDeT unites the advantages of the established priority rules First Come First Served and Longest Queue First, and also makes clear that these two rules cannot cope with the particular challenges of autocorrelated arrival processes. With HAFI, cycle times and queue lengths on the level of First Come First Served can be achieved, which is about 10% below that of Longest Queue First; at the same time, and in contrast to First Come First Served, HAFI balances load about as well as Longest Queue First. The results are robust to changes in utilization and in the level of autocorrelation, and the findings hold both for an isolated merge and for arrangements of several merges in a network.
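The HAFI decision rule described above can be sketched as a selection function. Queue names and the way thresholds are supplied are illustrative; in the thesis the thresholds are derived from the measured autocorrelation of each stream and adapted dynamically by DyDeT:

```python
def hafi_next(queues, thresholds, fcfs_order):
    """HAFI (Highest Autocorrelated First) merge control, as a sketch:
    serve First Come First Served while every queue is within its
    autocorrelation-derived threshold; otherwise serve the stream whose
    queue length exceeds its threshold by the largest amount.

    queues:     dict stream -> current queue length
    thresholds: dict stream -> queue-length threshold (lower for streams
                with higher autocorrelation, so they win priority earlier)
    fcfs_order: streams ordered by arrival time of their head-of-line unit
    """
    exceedance = {s: queues[s] - thresholds[s] for s in queues}
    worst = max(exceedance, key=exceedance.get)
    if exceedance[worst] <= 0:
        return fcfs_order[0]   # no threshold exceeded: plain FCFS
    return worst               # largest threshold exceedance wins
```

This reproduces the two regimes named in the abstract: FCFS behavior under light, balanced load, and autocorrelation-aware priority once a strongly correlated stream starts to build a burst.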

    Methodology for Analyzing and Characterizing Error Generation in Presence of Autocorrelated Demands in Stochastic Inventory Models

    Most techniques that describe and solve stochastic inventory problems rely upon the assumption of identically and independently distributed (IID) demands. Stochastic inventory formulations that fail to capture serially correlated components in the demand lead to serious errors. This dissertation provides a robust method that approximates solutions to the stochastic inventory problem where the control review system is continuous, the demand contains autocorrelated components, and the lost-sales case is considered. A simulation optimization technique based on simulated annealing (SA), pattern search (PS), and ranking and selection (R&S) is developed and used to generate near-optimal solutions. The proposed method accounts for the randomness and dependency of the demand as well as for the inherent constraints of the inventory model. The impact of serially correlated demand is investigated for discrete and continuous dependent input models. For the discrete dependent model, the autocorrelated demand is assumed to behave as a discrete Markov-modulated chain (DMC), while a first-order autoregressive AR(1) process is assumed for describing the continuous demand. The effects of these demand patterns, combined with structural cost variations, on estimating both total costs and control policy parameters were examined. Results demonstrated that formulations that ignore the serially correlated component performed worse than those that considered it. In this setting, the effect of holding cost and its interaction with penalty cost become stronger and more significant as the serially correlated component increases. The growth rate of the error in total costs generated by formulations that ignore dependency components is significant and fits exponential models. To verify the effectiveness of the proposed simulation optimization method for finding the near-optimal inventory policy, total costs and stockout rates were estimated at different levels of autocorrelation. The results provide additional evidence that serially correlated components in the demand have a relevant impact on determining inventory control policies and estimating performance measures.
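The building blocks of such a study can be sketched as follows: an AR(1) demand generator and a discrete-period evaluator of a continuous-review (r, Q) policy with lost sales. The simulated annealing, pattern search, and ranking-and-selection layers that search over (r, Q) are omitted, and all parameter values are illustrative assumptions:

```python
import random

def ar1_demand(n, mu=20.0, phi=0.6, sigma=4.0, seed=3):
    """AR(1) demand path: D_t = mu + phi*(D_{t-1} - mu) + eps_t,
    with Gaussian innovations; output truncated at zero."""
    rng = random.Random(seed)
    d = mu
    out = []
    for _ in range(n):
        d = mu + phi * (d - mu) + rng.gauss(0.0, sigma)
        out.append(max(0.0, d))
    return out

def rq_policy_cost(demand, r=60.0, q=80.0, lead=2, h=1.0, p=10.0):
    """Average per-period cost of an (r, Q) policy under lost sales:
    holding cost h per unit on hand, penalty p per unit of lost demand.
    An order of size q is placed when the inventory position (on hand
    plus on order) drops to r or below, arriving after `lead` periods."""
    on_hand = r + q            # illustrative starting stock
    pipeline = []              # outstanding orders as (arrival_period, qty)
    cost = 0.0
    for t, d in enumerate(demand):
        on_hand += sum(amt for due, amt in pipeline if due == t)
        pipeline = [(due, amt) for due, amt in pipeline if due != t]
        lost = max(0.0, d - on_hand)           # unmet demand is lost
        on_hand = max(0.0, on_hand - d)
        cost += h * on_hand + p * lost
        position = on_hand + sum(amt for _, amt in pipeline)
        if position <= r:
            pipeline.append((t + lead, q))
    return cost / len(demand)
```

Raising phi toward 1 clusters high-demand periods together, which is precisely what makes IID-based safety-stock formulas underestimate stockouts.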

    Towards Autonomous and Efficient Machine Learning Systems

    Computation-intensive machine learning (ML) applications are becoming some of the most popular workloads running atop cloud infrastructure. While training ML applications, practitioners face the challenge of tuning various system-level parameters, such as the number of training nodes, the communication topology during training, the instance type, and the number of serving nodes, to meet the SLO requirements for bursty workloads during inference. Similarly, efficient resource utilization is another key challenge in cloud computing. This dissertation proposes high-performing and efficient ML systems to speed up training and inference tasks while enabling automated and robust system management. To train an ML model in a distributed fashion, we focus on strategies to mitigate the resource provisioning overhead and improve the training speed without impacting model accuracy. More specifically, a system for autonomic and adaptive scheduling is built atop serverless computing that dynamically optimizes deployment and resource scaling for ML training tasks for cost-effectiveness and fast training. Similarly, a dynamic client selection framework is developed to address the straggler problem caused by resource heterogeneity, data quality, and data quantity in a privacy-preserving Federated Learning (FL) environment without impacting model accuracy. For serving bursty ML workloads, we focus on developing highly scalable and adaptive strategies to serve dynamically changing workloads cost-effectively and autonomically. We develop a framework that optimizes batching parameters on the fly using a lightweight profiler and an analytical model. We also devise strategies for serving ML workloads of varying sizes, which lead to non-deterministic service times, in a cost-effective manner. More specifically, we develop an SLO-aware framework that first analyzes request size variation and workload variation to estimate the number of serving functions, and then intelligently routes requests to multiple serving functions. Finally, the resource utilization of burstable instances is optimized to benefit both the cloud provider and the end user through careful orchestration of resources (i.e., CPU, network, and I/O) using an analytical model and lightweight profiling, while complying with a user-defined SLO.
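The idea of tuning batching parameters against an SLO can be reduced to a minimal sketch: given latencies measured by a profiler for a few batch sizes, pick the largest batch that still fits the latency budget. The function name, profile shape, and numbers are illustrative assumptions, not the dissertation's actual framework, which also uses an analytical queueing model:

```python
def max_batch_size(slo_ms, profile, queue_wait_ms=0.0):
    """Pick the largest profiled batch size whose latency fits the SLO.

    slo_ms:        end-to-end latency budget in milliseconds
    profile:       dict batch_size -> measured latency in ms (from a
                   lightweight profiler)
    queue_wait_ms: expected queueing delay to reserve out of the budget
    Falls back to the smallest profiled batch if nothing fits."""
    feasible = [b for b, lat in profile.items()
                if lat + queue_wait_ms <= slo_ms]
    return max(feasible) if feasible else min(profile)
```

Larger batches raise throughput per serving function but spend latency budget, so the controller re-evaluates this choice as the measured queueing delay changes with load.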

    Time Series Models for the Simulation of Industrial Processes: Application to the Dimensioning and Control of Highly Variable Systems (Modelos de series temporales para simulación de procesos industriales: aplicación al dimensionamiento y control de sistemas altamente variables)

    [Abstract] Simulation is a well-established methodology for modeling production systems. Within a simulation project, the analysis of the model's input data is a critical phase that conditions the validity of the results. Although several authors have previously noted the importance of adequately modeling the statistical properties of a process's time series, few works in the area have analyzed models suitable for practical use beyond the assumption of independent and identically distributed (i.i.d.) data. To provide a flexible methodology for modeling series with autocorrelation, the adoption of ARTA (Autoregressive-To-Anything) models is considered. These models are used to study the behavior of production lines with autocorrelated cycle times. The study determines the impact that the presence of autocorrelation has on line throughput, on solutions to the optimal buffer sizing and allocation problem, and on production control systems. In addition, a conceptual framework is provided for characterizing variability at multiple time scales, and the influence that considering models with different scales has on the results is analyzed. The thesis concludes with the study of a paradigmatic case of a highly variable manufacturing line, showing how the adoption of a two-time-scale model, together with consideration of autocorrelation effects, was necessary to obtain a valid model. This model was used to evaluate improvements under a Lean manufacturing approach.
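The buffer-sizing question studied above can be illustrated with the standard completion-time recursion for a two-machine line with a finite intermediate buffer (blocking modeled at service start). Cycle times here are i.i.d. exponential for brevity; in the thesis they would come from a fitted ARTA model with autocorrelation. Names and the recursion convention are illustrative:

```python
import random

def line_throughput(t1, t2, buffer_size):
    """Throughput of a two-machine serial line with an intermediate buffer
    of the given capacity, via the completion-time recursion: machine 1 may
    start part i only if part i - buffer_size - 1 has left machine 2
    (buffer_size slots plus the machine-2 position downstream)."""
    n = len(t1)
    c1 = [0.0] * n   # completion times on machine 1
    c2 = [0.0] * n   # completion times on machine 2
    for i in range(n):
        prev1 = c1[i - 1] if i > 0 else 0.0
        k = i - buffer_size - 1
        block = c2[k] if k >= 0 else 0.0     # downstream-space constraint
        c1[i] = max(prev1, block) + t1[i]
        prev2 = c2[i - 1] if i > 0 else 0.0
        c2[i] = max(c1[i], prev2) + t2[i]
    return n / c2[-1]
```

Sweeping buffer_size shows the diminishing throughput returns of added buffer space; feeding in autocorrelated cycle times instead of i.i.d. ones is what reveals the degradation the thesis quantifies.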

    IoT and Sensor Networks in Industry and Society

    The exponential progress of Information and Communication Technology (ICT) is one of the main elements that has fueled the accelerating pace of globalization. The Internet of Things (IoT), Artificial Intelligence (AI), and big data analytics are some of the key players in the digital transformation that is affecting every aspect of humans' daily lives, from environmental monitoring to healthcare systems, from production processes to social interactions. In less than 20 years, people's everyday lives have been revolutionized, and concepts such as Smart Home, Smart Grid, and Smart City have become familiar even to non-technical users. The integration of embedded systems, ubiquitous Internet access, and Machine-to-Machine (M2M) communications has paved the way for paradigms such as IoT and Cyber-Physical Systems (CPS) to be introduced into high-requirement environments such as those related to industrial processes, in the forms of the Industrial Internet of Things (IIoT or I2oT) and Cyber-Physical Production Systems (CPPS). As a consequence, in 2011 Germany's High-Tech Strategy 2020 Action Plan first envisioned the concept of Industry 4.0, which is rapidly reshaping traditional industrial processes. The term reflects the promise of a fourth industrial revolution. Indeed, the first industrial revolution was triggered by water and steam power. Electricity and assembly lines enabled mass production in the second industrial revolution. In the third industrial revolution, the introduction of control automation and Programmable Logic Controllers (PLCs) gave a boost to factory production. In contrast to the previous revolutions, Industry 4.0 takes advantage of Internet access, M2M communications, and deep learning not only to improve production efficiency but also to enable so-called mass customization, i.e. the mass production of personalized products by means of modularized product design and flexible processes.
    Less than five years later, in January 2016, the Japanese 5th Science and Technology Basic Plan took a further step by introducing the concept of the Super Smart Society, or Society 5.0. According to this vision, in the upcoming future, scientific and technological innovation will guide our society into the next social revolution, after the hunter-gatherer, agrarian, industrial, and information eras that marked the previous ones. Society 5.0 is a human-centered society that fosters the simultaneous achievement of economic, environmental, and social objectives, to ensure a high quality of life for all citizens. This information-enabled revolution aims to tackle today's major challenges, such as an ageing population, social inequalities, depopulation, and constraints related to energy and the environment. Accordingly, citizens will experience impressive transformations in every aspect of their daily lives. This book offers an insight into the key technologies that are going to shape the future of industry and society. It is subdivided into five parts: Part I presents a horizontal view of the main enabling technologies, whereas Parts II-V offer a vertical perspective on four different environments. Part I, dedicated to IoT and Sensor Network architectures, encompasses three Chapters. In Chapter 1, Peruzzi and Pozzebon analyse the literature on energy harvesting solutions for IoT monitoring systems and architectures based on Low-Power Wide Area Networks (LPWAN). The Chapter does not limit the discussion to the Long Range Wide Area Network (LoRaWAN), SigFox, and Narrowband IoT (NB-IoT) communication protocols, but also includes other relevant solutions such as DASH7 and Long Term Evolution Machine Type Communication (LTE-M). In Chapter 2, Hussein et al. discuss the development of an Internet of Things message protocol that supports multi-topic messaging.
    The Chapter further presents the implementation of a platform, which integrates the proposed communication protocol, based on a Real-Time Operating System. In Chapter 3, Li et al. investigate the heterogeneous task scheduling problem for data-intensive scenarios, with the goal of reducing the global task execution time and, consequently, the data centers' energy consumption. The proposed approach aims to maximize efficiency by comparing the cost of remote task execution against that of data migration. Part II is dedicated to Industry 4.0 and includes two Chapters. In Chapter 4, Grecuccio et al. propose a solution to integrate IoT devices by leveraging a blockchain-enabled gateway based on Ethereum, so that they do not need to rely on centralized intermediaries and third-party services. As is better explained in the paper, in which the performance is evaluated in a food-chain traceability application, this solution is particularly beneficial in Industry 4.0 domains. Chapter 5, by De Fazio et al., addresses the issue of safety in workplaces by presenting a smart garment that integrates several low-power sensors to monitor environmental and biophysical parameters. This enables the detection of dangerous situations, so as to prevent, or at least reduce, the consequences of workers' accidents. Part III consists of two Chapters on the topic of Smart Buildings. In Chapter 6, Petroșanu et al. review the literature on recent developments in the smart building sector related to the use of supervised and unsupervised machine learning models on sensory data. The Chapter pays particular attention to enhanced sensing, energy efficiency, and optimal building management. In Chapter 7, Oh examines how much educating prosumers about their energy consumption habits affects power consumption reduction and encourages energy conservation, sustainable living, and behavioral change in residential environments. In this Chapter, energy consumption monitoring is made possible by the use of smart plugs. Smart Transport is the subject of Part IV, which includes three Chapters. In Chapter 8, Roveri et al. propose an approach that leverages small-world theory to control swarms of vehicles connected through Vehicle-to-Vehicle (V2V) communication protocols. Considering a queue dominated by short-range car-following dynamics, the Chapter demonstrates that safety and security are increased by the introduction of a few selected random long-range communications. In Chapter 9, Nitti et al. present a real-time system to observe and analyze public transport passengers' mobility by tracking them throughout their journey on public transport vehicles. The system is based on the detection of active Wi-Fi interfaces, through the analysis of Wi-Fi probe requests. In Chapter 10, Miler et al. discuss the development of a tool for analyzing and comparing the efficiency indicated by the integrated IT systems in the operational activities undertaken by Road Transport Enterprises (RTEs). The authors further provide a holistic evaluation of the efficiency of telematics systems in RTE operational management. The book ends with the two Chapters of Part V, on Smart Environmental Monitoring. In Chapter 11, He et al. propose a Sea Surface Temperature Prediction (SSTP) model based on time-series similarity measures, multiple pattern learning, and parameter optimization. In this strategy, the optimal parameters are determined by means of an improved Particle Swarm Optimization method. In Chapter 12, Tsipis et al. present a low-cost, WSN-based IoT system that seamlessly embeds a three-layered cloud/fog computing architecture, suitable for facilitating smart agricultural applications, especially those related to wildfire monitoring. We wish to thank all the authors who contributed to this book for their efforts.
We express our gratitude to all reviewers for their volunteer support and precious feedback during the review process. We hope that this book provides valuable information and spurs meaningful discussion among researchers, engineers, businesspeople, and other experts about the role of new technologies in industry and society.

    Flexible Automation and Intelligent Manufacturing: The Human-Data-Technology Nexus

    This is an open access book. It gathers the first volume of the proceedings of the 31st edition of the International Conference on Flexible Automation and Intelligent Manufacturing, FAIM 2022, held on June 19-23, 2022, in Detroit, Michigan, USA. Covering four thematic areas, Manufacturing Processes, Machine Tools, Manufacturing Systems, and Enabling Technologies, it reports on advanced manufacturing processes and innovative materials for 3D printing, applications of machine learning, artificial intelligence, and mixed reality in various production sectors, as well as important issues in human-robot collaboration, including methods for improving safety. Contributions also cover strategies to improve quality control, supply chain management, and training in the manufacturing industry, and methods supporting circular supply chains and sustainable manufacturing. All in all, this book provides academics, engineers, and professionals with extensive information on both scientific and industrial advances in the converging fields of manufacturing, production, and automation.