
    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as the accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike prior sensing paradigms, humans are now the primary actors in the sensing process, since they are essential to retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
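The survey itself proposes a framework rather than a single algorithm; purely as a generic illustration of one common QoI-assessment idea, the sketch below shows reliability-weighted aggregation of crowdsensed readings, in which contributors whose reports deviate from the consensus are progressively down-weighted. All data and weighting choices are assumptions, not the paper's method.

    import numpy as np

    def weighted_consensus(reports, iterations=10):
        """reports: array of shape (n_contributors, n_events) with sensed values."""
        reports = np.asarray(reports, dtype=float)
        weights = np.ones(reports.shape[0])            # start by trusting everyone equally
        for _ in range(iterations):
            estimate = np.average(reports, axis=0, weights=weights)  # consensus per event
            errors = np.mean((reports - estimate) ** 2, axis=1)      # per-contributor deviation
            weights = 1.0 / (errors + 1e-9)                          # reliable => higher weight
            weights /= weights.sum()
        return estimate, weights

    # Example: three honest contributors and one faulty or malicious one
    readings = [[50, 52, 49], [51, 50, 50], [49, 51, 48], [90, 10, 75]]
    consensus, trust = weighted_consensus(readings)
    print(consensus, trust)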

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and materials discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications may prefer only those CSPs that follow HIPAA (Health Insurance Portability and Accountability Act) guidelines when implementing their data services, and may want services from other CSPs for economic viability. As more and more data are generated, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in CSP resource selection, scientific workflows often process proprietary/confidential data, which introduces additional constraints on security posture. Thus, users have to make an informed decision on the selection of resources best suited to their applications while trading off the key factors of resource selection: performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution might not be economical for all users, which has led to the development of new computing paradigms such as volunteer computing, in which users utilize volunteered cyber resources to meet their computing requirements. For resources to be economical and readily available, it is essential that such volunteered resources integrate well with cloud resources to provide the most efficient computing infrastructure for users. In this dissertation, the individual stages in the lifecycle of resource brokering for users, such as user requirement collection, capture of users' resource preferences, resource brokering, and task scheduling, are tackled. For the collection of user requirements, a novel approach based on an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise in order to guide resource selection for their applications. The results showed improved performance, i.e., time to execute, in 98 percent of the studied applications. The data collected on users' requirements and preferences are later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming based solution (OnTimeURB) is proposed, which creates multi-cloud template solutions for resource allocation while also optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a naive Bayes classifier, which captures the true QoS of cloud resources to guide template solution creation. The proposed solution was able to improve the time to execute for up to 96 percent of the largest applications.
As discussed above, to fulfill the need for economical computing resources, a new computing paradigm, Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. Initial results show improved execution times for application workflows compared with state-of-the-art solutions, while utilizing only the most secure VEC resources. Consequently, we have utilized reinforcement learning based solutions to characterize volunteered resources in terms of their availability and their flexibility in implementing security policies. This characterization of volunteered resources facilitates efficient resource allocation and scheduling of workflow tasks, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows.
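The brokering pipeline described above couples an integer linear programming optimizer (OnTimeURB) with a naive Bayes model of cloud QoS. As a minimal, hypothetical sketch of the second idea only, the snippet below uses scikit-learn's Gaussian naive Bayes to rank candidate resources by their predicted probability of meeting an SLA; the feature names, data, and labels are made up for illustration and are not the dissertation's implementation.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # Historical observations: [latency_ms, cpu_utilization, cost_per_hour]
    X_history = np.array([
        [20, 0.30, 0.10],
        [25, 0.40, 0.12],
        [80, 0.90, 0.08],
        [95, 0.85, 0.07],
    ])
    y_history = np.array([1, 1, 0, 0])   # 1 = met the SLA, 0 = violated it

    model = GaussianNB().fit(X_history, y_history)

    # Rank new candidate resources by the predicted probability of meeting the SLA
    candidates = np.array([[30, 0.50, 0.11], [70, 0.80, 0.09]])
    p_meet_sla = model.predict_proba(candidates)[:, 1]
    ranking = np.argsort(-p_meet_sla)
    print(p_meet_sla, ranking)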

    CloudMedia: When cloud on demand meets video on demand

    Internet-based cloud computing is a new computing paradigm aiming to provide agile and scalable resource access in a utility-like fashion. Other than being an ideal platform for computation-intensive tasks, clouds are also believed to be suitable for supporting large-scale applications with periods of flash crowds by providing elastic amounts of bandwidth and other resources on the fly. The fundamental question is how to configure the cloud utility to meet the highly dynamic demands of such applications at a modest cost. In this paper, we address this practical issue with solid theoretical analysis and efficient algorithm design, using Video on Demand (VoD) as the example application. Having intensive real-time bandwidth and storage demands, VoD applications are ideal candidates to be supported on a cloud platform, where the on-demand resource supply of the cloud meets the dynamic demands of the VoD applications. We introduce a queueing network based model to characterize the viewing behaviors of users in a multichannel VoD application, and derive the server capacities needed to support smooth playback in the channels for two popular streaming models: client-server and P2P. We then propose a dynamic cloud resource provisioning algorithm which, using the derived capacities and instantaneous network statistics as inputs, can effectively support VoD streaming with low cloud utilization cost. Our analysis and algorithm design are verified and extensively evaluated using large-scale experiments under dynamic realistic settings on a home-built cloud platform. © 2011 IEEE. The 31st International Conference on Distributed Computing Systems (ICDCS 2011), Minneapolis, MN, 20-24 June 2011. In Proceedings of the 31st ICDCS, 2011, p. 268-27
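As a much-simplified illustration of the provisioning idea (not the paper's queueing-network derivation), the sketch below computes per-channel server bandwidth for the two streaming models: under client-server streaming the cloud covers the full demand, while under P2P assistance it covers only the deficit left after peers contribute upload bandwidth. All channel numbers are made up.

    def server_capacity(viewers, stream_rate_mbps, peer_upload_mbps=0.0, p2p=False):
        demand = viewers * stream_rate_mbps                  # aggregate download demand
        supply = viewers * peer_upload_mbps if p2p else 0.0  # aggregate peer upload supply
        return max(0.0, demand - supply)                     # bandwidth the cloud must provision

    channels = [
        {"name": "news",  "viewers": 1200, "rate": 1.5, "upload": 0.8},
        {"name": "sport", "viewers": 300,  "rate": 2.0, "upload": 0.5},
    ]

    for ch in channels:
        cs  = server_capacity(ch["viewers"], ch["rate"])
        p2p = server_capacity(ch["viewers"], ch["rate"], ch["upload"], p2p=True)
        print(ch["name"], f"client-server: {cs:.0f} Mbps", f"P2P-assisted: {p2p:.0f} Mbps")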

    Service recommendation and selection in centralized and decentralized environments.

    With the increasing use of web services in everyday tasks we are entering an era of the Internet of Services (IoS). Service discovery and selection in both centralized and decentralized environments have become a critical issue in the area of web services, in particular when many services have similar functionality but different Quality of Service (QoS). As a result, selecting a high-quality service that best suits consumer requirements from a large list of functionally equivalent services is a challenging task. Alongside the increasing number of services in the discovery and selection process, there is a corresponding increase in service consumers and a consequent diversity in the QoS available. Growth on both sides leads to diversity in the demand and supply of services, which results in only partial matches between requirements and offers. Furthermore, it is challenging for customers to select suitable services from the large number of services that satisfy their functional requirements. Therefore, web service recommendation becomes an attractive way of providing consumers with services that satisfy their requirements. In this thesis, a service ranking and selection algorithm is first proposed that considers multiple QoS requirements and allows partially matched services to be counted as candidates for the selection process. Starting from the initial list of available services, the approach considers those services that partially match consumer requirements and ranks them based on QoS parameters, allowing the consumer to select a suitable service. In addition, since providing weight values for QoS parameters may not be an easy or intuitive task for consumers, an automatic weight calculation method is included that utilizes the distance correlation between QoS parameters. The second aspect of the thesis is QoS-based web service recommendation. With an increasing number of web services offering similar functionality, it is challenging for service consumers to find suitable web services that meet their requirements. We propose a personalised service recommendation method using the LDA topic model, which extracts latent interests of consumers and latent topics of services in the form of probability distributions. In addition, the proposed method improves the accuracy of QoS prediction by considering the correlation between neighbouring services, and returns a list of recommended services that best satisfy consumer requirements. The third part of the thesis concerns service discovery and selection in a decentralized environment. Service discovery approaches are often supported by centralized repositories, which can suffer from single points of failure, performance bottlenecks, and scalability issues in large-scale systems. To address these issues, we propose a context-aware service discovery and selection approach for a decentralized peer-to-peer environment. In this approach, homophily similarity is used for bootstrapping and distributing nodes. The discovery process is based on the similarity of nodes and on their previous interactions and behaviour, which helps discovery in a dynamic environment. Our approach considers not only service discovery but also the selection of suitable web services, taking into account their QoS properties.
The major contribution of the thesis is a comprehensive QoS-based service recommendation and selection approach for centralized and decentralized environments. With the proposed approach, consumers are able to select suitable services based on their requirements. Experimental results on real-world service datasets show that the proposed approaches achieve better performance and efficiency in the recommendation and selection process.
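To make the LDA-based recommendation idea concrete, the rough sketch below extracts latent topics from a consumer-by-service usage matrix with scikit-learn and scores unseen services by how well they match a consumer's topic mixture. The matrix construction, scoring, and the omission of the neighbour-correlation step are assumptions for illustration, not the thesis's implementation.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    # Rows: consumers, columns: services, values: invocation counts
    usage = np.array([
        [5, 0, 3, 0, 1],
        [0, 4, 0, 5, 0],
        [4, 1, 2, 0, 0],
    ])

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(usage)
    consumer_topics = lda.transform(usage)                 # consumers' latent interests
    service_topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

    # Score services for consumer 0 and hide the ones already used
    scores = consumer_topics[0] @ service_topics
    scores[usage[0] > 0] = -np.inf
    print("recommended service index:", int(np.argmax(scores)))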

    Trust Management for Context-Aware Composite Services

    In the areas of cloud computing, big data, and the Internet of Things, composite services are designed to effectively address complex user requirements. A major challenge for composite service management is the dynamic and continuously changing run-time environment, which can raise several exceptional situations, such as a service execution time that greatly increases or a service that becomes unavailable. In such environmental contexts, composite services have difficulty securing an acceptable quality of service (QoS), and the need to trigger dynamic adaptations becomes urgent for service-based systems. These systems also require trust management to ensure service level agreement (SLA) compliance. To face this dynamism and volatility, context-aware composite services (i.e., run-time self-adaptable services) are designed to continue offering their functionalities without compromising their operational efficiency, in order to boost the added value of the composition. The literature on adaptation management for context-aware composite services mainly focuses on the closed-world assumption that the boundary between the service and its run-time environment is known, which is impractical for dynamic services in the open world, where environmental contexts are unexpected. Besides, the literature relies either on centralized architectures, which suffer from management overhead, or on distributed architectures, which suffer from communication overhead, to manage service adaptation. Moreover, the problem of encountering malicious constituent services at run time still needs further investigation toward a more efficient solution. Such services take advantage of environmental contexts for their own benefit by providing unsatisfactory QoS values or by maliciously colluding with other services. Furthermore, the literature overlooks the fact that composite service data are relational, and instead relies on propositional data (i.e., flattened data containing the information without its structure). This contradicts the fact that services are statistically dependent, since the QoS values of one service are correlated with those of other services. This thesis aims to address these gaps by capitalizing on different methods from software engineering, computational intelligence, and machine learning. To support context-aware composite services in the open world, dynamic adaptation mechanisms are devised at design time to guide the running services. To this end, this thesis proposes an adaptation solution based on a feature model that captures the variability of the composite service and considers the inter-dependency relations among QoS constraints. We apply the master-slaves adaptation pattern to coordinate the self-adaptation process based on the MAPE loop (Monitor-Analyze-Plan-Execute) at run time. We model the adaptation process as a multi-objective optimization problem and solve it using a meta-heuristic search technique constrained by SLA and feature model constraints. This enables the master to resolve conflicting QoS goals of the service adaptation. On the slave side, we propose an adaptation solution that immediately substitutes failed constituent services without the need for complex and costly global adaptation. To support decision making at different levels of adaptation, we first propose an online SLA violation prediction model that requires only small amounts of end-to-end QoS data.
We then extend the model to comprehensively consider the service dependencies that exist in the real business world at run time by leveraging a relational dependency network, thus enhancing prediction accuracy. In addition, we propose a trust management model for services based on the dependency network. In particular, we predict the probability of delivering satisfactory QoS under changing environmental contexts by leveraging the cyclic dependency relations among QoS metrics and environmental context variables. Moreover, we develop a service reputation evaluation technique based on the power of mass collaboration, in which we explicitly detect collusion attacks. As another contribution of this thesis, we introduce a trust bootstrapping mechanism for newcomer services that is resilient to the white-washing attack, using the concept of social adoption. The thesis reports simulation results using real datasets that show the efficiency of the proposed solutions.
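The adaptation machinery above is coordinated through the MAPE loop. The skeleton below is only a placeholder illustration of that control loop in Python; the monitored metrics, SLA thresholds, and the single substitution action are assumptions, and the real planner would solve the constrained multi-objective optimization described in the abstract.

    import random, time

    SLA = {"response_time_ms": 300.0, "availability": 0.99}   # placeholder SLA thresholds

    def monitor():
        # In a real system this would pull live QoS metrics from the running composition.
        return {"response_time_ms": random.uniform(50, 400),
                "availability": random.uniform(0.95, 1.0)}

    def analyse(metrics):
        violations = []
        if metrics["response_time_ms"] > SLA["response_time_ms"]:
            violations.append("response_time")
        if metrics["availability"] < SLA["availability"]:
            violations.append("availability")
        return violations

    def plan(violations):
        # Placeholder plan: substitute a failed/slow constituent service when the SLA is violated.
        return ["substitute_constituent_service"] if violations else []

    def execute(actions):
        for action in actions:
            print("executing adaptation:", action)

    for _ in range(3):          # three iterations of the control loop
        execute(plan(analyse(monitor())))
        time.sleep(0.1)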

    Multicloud Resource Allocation: Cooperation, Optimization and Sharing

    Nowadays our daily life is powered not only by water, electricity, gas, and telephony but also by the "cloud". Big cloud vendors such as Amazon, Microsoft, and Google have built large-scale centralized data centers to achieve economies of scale, on-demand resource provisioning, high resource availability, and elasticity. However, those massive data centers also bring about many other problems, e.g., bandwidth bottlenecks, privacy, security, huge energy consumption, and legal and physical vulnerabilities. One possible solution to these problems is to employ multicloud architectures. This thesis provides research contributions to multicloud resource allocation from the three perspectives of cooperation, optimization, and data sharing. We address the following problems in the multicloud: how resource providers cooperate in a multicloud, how to reduce information leakage in a multicloud storage system, and how to share big data in a cost-effective way. More specifically, we make the following contributions.
Cooperation in the decentralized cloud. We propose a decentralized cloud model in which a group of small data centers (SDCs) can cooperate with each other to improve performance. Moreover, we design a general strategy function for SDCs to evaluate the performance of cooperation along different dimensions of resource sharing. Through extensive simulations using a realistic data center model, we show that strategies based on reciprocity are more effective than other strategies, e.g., those using prediction based on historical data. Our results show that the reciprocity-based strategy can thrive in a heterogeneous environment with competing strategies.
Multicloud optimization on information leakage. In this work, we first study an important information leakage problem caused by unplanned data distribution in multicloud storage services. We then present StoreSim, an information-leakage-aware storage system for the multicloud. StoreSim aims to store syntactically similar data on the same cloud, thereby minimizing the user's information leakage across multiple clouds. We design an approximate algorithm to efficiently generate similarity-preserving signatures for data chunks based on MinHash and Bloom filters, and also design a function to compute the information leakage based on these signatures. Next, we present an effective storage plan generation algorithm based on clustering for distributing data chunks across multiple clouds with minimal information leakage. Finally, we evaluate our scheme using two real datasets, from Wikipedia and GitHub. We show that our scheme can reduce information leakage by up to 60% compared with unplanned placement. Furthermore, our analysis in terms of system attackability demonstrates that our scheme makes attacks on information much more complex.
Smart data sharing. Moving large amounts of distributed data into the cloud, or from one cloud to another, can incur high costs in both time and bandwidth. The optimization of data sharing in the multicloud can be conducted from two different angles: inter-cloud scheduling and intra-cloud optimization. We first present CoShare, a P2P-inspired, decentralized, cost-effective sharing system for data replication that optimizes network transfer among small data centers. We then propose a data summarization method to reduce the total size of the dataset, thereby reducing network transfer.
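StoreSim's signatures combine MinHash with Bloom filters; the sketch below shows only the MinHash half, as a simplified illustration of how similarity-preserving signatures for data chunks can be generated and compared. The shingling and hashing parameters are assumptions, not the thesis's exact scheme.

    import hashlib

    def minhash_signature(chunk: bytes, num_hashes: int = 64, shingle_size: int = 8):
        # Break the chunk into overlapping byte shingles, then keep the minimum
        # salted hash per "hash function" to form the signature.
        shingles = {chunk[i:i + shingle_size]
                    for i in range(max(1, len(chunk) - shingle_size + 1))}
        signature = []
        for seed in range(num_hashes):
            signature.append(min(
                int.from_bytes(hashlib.sha1(bytes([seed]) + s).digest()[:8], "big")
                for s in shingles))
        return signature

    def estimated_similarity(sig_a, sig_b):
        # Fraction of matching minima approximates the Jaccard similarity of the chunks.
        return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

    a = minhash_signature(b"the quick brown fox jumps over the lazy dog")
    b = minhash_signature(b"the quick brown fox jumps over the lazy cat")
    print(f"estimated similarity: {estimated_similarity(a, b):.2f}")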

    Modelo de Calidad para Servicios Cloud

    [EN] Context: Cloud computing is a model of provision and consumption of services that offers many advantages to companies (high availability, flexibility, maximum utilization of resources, etc.) that translate into quality requirements that must be met by the service. In recent years, numerous quality attributes and metrics for cloud services have been proposed, but there is no study that collects this information and classifies it with respect to the internal and external characteristics of the service (Quality of Service, QoS) and its characteristics in use (Quality of Experience, QoE). Objective: The objective of this final master's work is to define a quality model specific to cloud services, aligned with ISO/IEC 25010, which integrates the quality characteristics, attributes, and metrics proposed in the literature and allows the quality of cloud artifacts to be assessed at various stages of the life cycle. Method: We performed a systematic review of the literature in order to identify and analyze the attributes and quality metrics proposed to assess the quality of cloud services. This method has been widely used in the field of Software Engineering and has proven useful for collecting and analyzing existing information on a particular research topic. Results: The result is a quality model for cloud services built from the 178 attributes and 364 metrics obtained as a result of the systematic review. In particular, the results of the review indicate that 48% of the proposed metrics measure performance efficiency, followed by reliability metrics at 23%. With respect to the phase of the life cycle, 55% of these metrics are used in the operation phase and 32% in the acquisition phase. Regarding the stakeholders' point of view, 39% of the metrics are oriented to the service provider, 33% to the consumer, 7% to the facilitator (broker), and only 5% to the service developer. With respect to the cloud artifacts evaluated, most metrics (97%) are applied to the cloud service being tested or deployed in the cloud; only 2% of the metrics are applied to the service architecture and 1% to the service specification. With regard to validation, the results show that 99% of the proposed metrics lack any type of validation, although 44% present a proof of concept illustrating how the metrics can be used. Additionally, we identified 27 attributes specific to cloud services, of which elasticity was the most frequently named, at 14%. Conclusions: The results of this work provide relevant information on the current status of, and gaps in, the quality assessment of cloud services. They have also allowed us to define a quality model to address some of the identified shortcomings. As future work, we intend to refine the proposed model, propose new metrics and adapt some existing ones for evaluating cloud architectures, and conduct empirical studies to provide evidence about the usefulness of a set of metrics.
[ES] The work consists of defining a quality model that determines the characteristics of cloud services and provides the mechanisms needed to assess their quality. A systematic literature review will be carried out in order to identify a set of quality attributes, metrics, and indicators that make it possible to measure the identified characteristics. The result will be a quality model aligned with ISO/IEC 25010 that integrates the quality characteristics, attributes, metrics, and indicators supporting its evaluation. Navas Rosales, RM. (2016). Modelo de Calidad para Servicios Cloud. http://hdl.handle.net/10251/77847 (TFG).

    IoT and Sensor Networks in Industry and Society

    The exponential progress of Information and Communication Technology (ICT) is one of the main elements that have fueled the acceleration of globalization. The Internet of Things (IoT), Artificial Intelligence (AI), and big data analytics are some of the key players in the digital transformation that is affecting every aspect of humans' daily lives, from environmental monitoring to healthcare systems, from production processes to social interactions. In less than 20 years, people's everyday lives have been revolutionized, and concepts such as Smart Home, Smart Grid, and Smart City have become familiar also to non-technical users. The integration of embedded systems, ubiquitous Internet access, and Machine-to-Machine (M2M) communications has paved the way for paradigms such as IoT and Cyber-Physical Systems (CPS) to be introduced also in high-requirement environments such as those related to industrial processes, in the form of the Industrial Internet of Things (IIoT or I2oT) and Cyber-Physical Production Systems (CPPS). As a consequence, in 2011 Germany's High-Tech Strategy 2020 Action Plan first envisioned the concept of Industry 4.0, which is rapidly reshaping traditional industrial processes. The term reflects its promise to be the fourth industrial revolution. Indeed, the first industrial revolution was triggered by water and steam power. Electricity and assembly lines enabled mass production in the second industrial revolution. In the third industrial revolution, the introduction of control automation and Programmable Logic Controllers (PLCs) gave a boost to factory production. As opposed to the previous revolutions, Industry 4.0 takes advantage of Internet access, M2M communications, and deep learning not only to improve production efficiency but also to enable so-called mass customization, i.e., the mass production of personalized products by means of modularized product design and flexible processes. Less than five years later, in January 2016, the Japanese 5th Science and Technology Basic Plan took a further step by introducing the concept of the Super Smart Society, or Society 5.0. According to this vision, in the near future, scientific and technological innovation will guide our society into the next social revolution after the hunter-gatherer, agrarian, industrial, and information eras, which represented the previous social revolutions. Society 5.0 is a human-centered society that fosters the simultaneous achievement of economic, environmental, and social objectives, to ensure a high quality of life for all citizens. This information-enabled revolution aims to tackle today's major challenges, such as an ageing population, social inequalities, depopulation, and constraints related to energy and the environment. Accordingly, citizens will experience impressive transformations in every aspect of their daily lives. This book offers an insight into the key technologies that are going to shape the future of industry and society. It is subdivided into five parts: Part I presents a horizontal view of the main enabling technologies, whereas Parts II-V offer a vertical perspective on four different environments. Part I, dedicated to IoT and Sensor Network architectures, encompasses three Chapters. In Chapter 1, Peruzzi and Pozzebon analyse the literature on energy harvesting solutions for IoT monitoring systems and architectures based on Low-Power Wide Area Networks (LPWAN).
The Chapter does not limit the discussion to the Long Range Wide Area Network (LoRaWAN), SigFox, and Narrowband-IoT (NB-IoT) communication protocols, but also includes other relevant solutions such as DASH7 and Long Term Evolution Machine Type Communication (LTE-M). In Chapter 2, Hussein et al. discuss the development of an Internet of Things message protocol that supports multi-topic messaging. The Chapter further presents the implementation of a platform, based on a Real-Time Operating System, which integrates the proposed communication protocol. In Chapter 3, Li et al. investigate the heterogeneous task scheduling problem for data-intensive scenarios, in order to reduce the global task execution time and, consequently, data centers' energy consumption. The proposed approach aims to maximize efficiency by comparing the costs of remote task execution and data migration. Part II is dedicated to Industry 4.0 and includes two Chapters. In Chapter 4, Grecuccio et al. propose a solution for integrating IoT devices by leveraging a blockchain-enabled gateway based on Ethereum, so that they do not need to rely on centralized intermediaries and third-party services. As explained in the Chapter, where the performance is evaluated in a food-chain traceability application, this solution is particularly beneficial in Industry 4.0 domains. Chapter 5, by De Fazio et al., addresses the issue of safety in workplaces by presenting a smart garment that integrates several low-power sensors to monitor environmental and biophysical parameters. This enables the detection of dangerous situations, so as to prevent or at least reduce the consequences of workers' accidents. Part III consists of two Chapters on the topic of Smart Buildings. In Chapter 6, Petroșanu et al. review the literature on recent developments in the smart building sector related to the use of supervised and unsupervised machine learning models on sensory data. The Chapter pays particular attention to enhanced sensing, energy efficiency, and optimal building management. In Chapter 7, Oh examines how much educating prosumers about their energy consumption habits affects power consumption reduction and encourages energy conservation, sustainable living, and behavioral change in residential environments. In this Chapter, energy consumption monitoring is made possible by the use of smart plugs. Smart Transport is the subject of Part IV, which includes three Chapters. In Chapter 8, Roveri et al. propose an approach that leverages small-world theory to control swarms of vehicles connected through Vehicle-to-Vehicle (V2V) communication protocols. Indeed, considering a queue dominated by short-range car-following dynamics, the Chapter demonstrates that safety and security are increased by the introduction of a few selected random long-range communications. In Chapter 9, Nitti et al. present a real-time system to observe and analyze public transport passengers' mobility by tracking them throughout their journeys on public transport vehicles. The system is based on the detection of active Wi-Fi interfaces through the analysis of Wi-Fi probe requests. In Chapter 10, Miler et al. discuss the development of a tool for analyzing and comparing the efficiency indicated by the integrated IT systems used in the operational activities undertaken by Road Transport Enterprises (RTEs). The authors further provide a holistic evaluation of the efficiency of telematics systems in RTE operational management.
The book ends with the two Chapters of Part V, on Smart Environmental Monitoring. In Chapter 11, He et al. propose a Sea Surface Temperature Prediction (SSTP) model based on a time-series similarity measure, multiple pattern learning, and parameter optimization. In this strategy, the optimal parameters are determined by means of an improved Particle Swarm Optimization method. In Chapter 12, Tsipis et al. present a low-cost, WSN-based IoT system that seamlessly embeds a three-layered cloud/fog computing architecture, suitable for facilitating smart agricultural applications, especially those related to wildfire monitoring. We wish to thank all the authors who contributed to this book for their efforts. We express our gratitude to all reviewers for volunteering their support and precious feedback during the review process. We hope that this book provides valuable information and spurs meaningful discussion among researchers, engineers, businesspeople, and other experts about the role of new technologies in industry and society.