
    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the reliability of production capacity planning. In particular, unplanned failures during production times cause high costs, unplanned downtimes and possibly additional collateral damage. Predictive Maintenance addresses this problem and tries to predict a possible failure and its cause early enough that its prevention can be prepared and carried out in time. In order to predict malfunctions and failures, the industrial plant, with its characteristics as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating the plant's physical properties. However, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models using data and offer an alternative, especially when the behaviour is very complex and non-linear. In order for models to make predictions, as much data as possible about the condition of a plant, its environment, and production planning is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which it is transmitted, places high demands on the data processing systems. If a participating system wants to perform live analyses on incoming data streams, it must be able to process the incoming data at least as fast as the continuous stream delivers it. If this is not the case, the system falls further and further behind in its processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case, or if processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become an important criterion. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses emerged, which are presented in this thesis: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) a deeper internal structure in a neural cell leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout named Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed. The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture.
    Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of aircraft gas turbine modules. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. For the specific application and the data used, the results show that the SlicedLSTM delivers faster processing times with similar result accuracy and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
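    The thesis's SlicedLSTM implementation is not reproduced here. The following is a minimal sketch, assuming PyTorch, of the idea behind hypothesis (a): the input features are split into slices, each processed by a smaller LSTM subnetwork, and the subnetwork outputs are merged for the prediction. All names (SlicedLSTMSketch, n_slices, hidden_per_slice) are illustrative, not taken from the thesis.

```python
import torch
import torch.nn as nn

class SlicedLSTMSketch(nn.Module):
    """Toy illustration of hypothesis (a): distribute complexity across
    several small LSTM subnetworks that each see only a slice of the
    input features, then merge their outputs in a prediction head."""

    def __init__(self, n_features: int, n_slices: int = 4,
                 hidden_per_slice: int = 16, n_outputs: int = 1):
        super().__init__()
        assert n_features % n_slices == 0, "features must split evenly"
        self.slice_width = n_features // n_slices
        # Several small LSTMs instead of one large one.
        self.subnets = nn.ModuleList(
            nn.LSTM(self.slice_width, hidden_per_slice, batch_first=True)
            for _ in range(n_slices)
        )
        self.head = nn.Linear(n_slices * hidden_per_slice, n_outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); split the feature axis into slices.
        slices = torch.split(x, self.slice_width, dim=2)
        # The subnetworks are independent of each other, so a runtime
        # may dispatch them concurrently (the parallelisability goal).
        finals = [net(s)[0][:, -1, :] for net, s in zip(self.subnets, slices)]
        return self.head(torch.cat(finals, dim=1))

# e.g. a batch of 8 engine-sensor sequences, 50 time steps, 24 features
model = SlicedLSTMSketch(n_features=24)
prediction = model(torch.randn(8, 50, 24))
```

    Whether such a layout actually runs faster than one large LSTM depends on how well the independent subnetworks can be dispatched in parallel, which is precisely the parallelisability question the thesis investigates.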

    Digital twins for performance management in the built environment

    Recent worldwide events of climatic and geological origin highlight the vulnerability of our infrastructures and stress the often dramatic consequences for our environment. Accurate digital models are needed to understand how climate change and its associated risks affect buildings, while informing on ways of enhancing their adaptability and resilience. This requires a paradigm shift in design and engineering interventions, as the potential for adaptation and resilience should be embedded into initial brief formulation, design, engineering, construction and facility maintenance methods. This paper argues the need for smarter, digital interventions for buildings and infrastructures, and for underpinning data systems that factor in topological (including geometric), mereological, and behavioural (dynamic) considerations. Digital models can be used as a basis to understand the complex interplay between environmental variables and performance, and to explore real-time response strategies (including control and actuation) to known and uncertain stresses, enabled by a new generation of technologies. The paper proposes a digital twin model for construction and industrial assets that paves the way to a new generation of buildings and infrastructures that (a) address lifetime requirements, (b) are capable of performing optimally within the constraints of unknown future scenarios, and (c) achieve acceptable levels of adaptability, efficiency and resilience.
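    Purely as an illustration of the three modelling dimensions the paper names, the sketch below shows how topology (including geometry), mereology (part-whole structure) and behaviour (dynamic state with a control response) might coexist in a twin's data model. All class and field names are hypothetical, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """Hypothetical digital-twin node combining the paper's three views:
    topology (geometry/connectivity), mereology (part-whole structure),
    and behaviour (dynamic state driven by sensor streams)."""
    name: str
    geometry: dict = field(default_factory=dict)         # topology
    parts: list["Asset"] = field(default_factory=list)   # mereology
    state: dict = field(default_factory=dict)            # behaviour

    def ingest(self, reading: dict) -> None:
        """Update the behavioural state from a (simulated) sensor reading."""
        self.state.update(reading)

    def respond(self) -> str:
        """Toy real-time response strategy (control/actuation)."""
        if self.state.get("temp_C", 0.0) > 28.0:
            return "actuate: close solar shading"
        return "no action"

floor = Asset("floor-3", parts=[Asset("room-3.1"), Asset("room-3.2")])
floor.ingest({"temp_C": 30.5})
print(floor.respond())
```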

    Demand Side Management In Smart Grid Optimization Using Artificial Fish Swarm Algorithm

    Demand side management and demand response, including peak-shaving approaches and the advantages of shiftable load scheduling strategies, are the main focus of this paper. After a survey of demand side management techniques from the literature, a real-time pricing model for regulating energy demand is proposed. Users often lack the resources needed to change their energy consumption for the system's overall benefit. The recommended strategy therefore involves modern system identification and administration that enable user-side load control. This can assist in balancing the demand and supply sides more effectively while also lowering peak demand and enhancing system efficiency. The Artificial Fish Swarm Algorithm (AFSA) and Bacterial Foraging Optimization (BFO) are combined in this study to handle the optimization of difficult problems in a range of industries. The AFSA is used to explore the search space and retain diversity, while the BFO is used to exploit it and converge to the optimum solution. In terms of peak-demand reduction, energy consumption, and user satisfaction, the AFSA-BFO hybrid algorithm outperforms previous techniques in the field of demand side management in a smart grid context. According to simulation results, the proposed algorithm successfully reduces the peak-to-average ratio (PAR) and power consumption expenses.
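    As a rough illustration of the hybrid idea, the sketch below combines an AFSA-style following step (exploration) with a BFO-style local chemotactic move (exploitation) on a toy shiftable-load scheduling problem that minimizes the peak-to-average ratio (PAR). This is a deliberately simplified sketch under assumed parameters, not the paper's algorithm; all names and constants are illustrative.

```python
import random

# Toy setting: place 20 shiftable one-hour loads into 24 hourly slots
# so that the peak-to-average ratio (PAR) of the profile is minimized.
LOADS = [random.uniform(0.5, 3.0) for _ in range(20)]  # kW per load
HOURS = 24

def par(schedule):
    """Peak-to-average ratio of the hourly load profile."""
    profile = [0.0] * HOURS
    for load, hour in zip(LOADS, schedule):
        profile[hour] += load
    return max(profile) / (sum(profile) / HOURS)

def neighbor(schedule, step=1):
    """BFO-style chemotactic move: shift one load to a nearby hour."""
    s = schedule[:]
    i = random.randrange(len(s))
    s[i] = (s[i] + random.randint(-step, step)) % HOURS
    return s

def afsa_bfo(n_fish=30, iters=300):
    school = [[random.randrange(HOURS) for _ in LOADS] for _ in range(n_fish)]
    best = min(school, key=par)
    for _ in range(iters):
        new_school = []
        for fish in school:
            # AFSA-style exploration: follow the best fish on one load.
            follow = fish[:]
            j = random.randrange(len(follow))
            follow[j] = best[j]
            # BFO-style exploitation: local chemotactic step.
            local = neighbor(fish)
            new_school.append(min((fish, follow, local), key=par))
        school = new_school
        best = min(school + [best], key=par)
    return best

schedule = afsa_bfo()
print(f"PAR of best schedule: {par(schedule):.3f}")
```

    In the paper's setting, the fitness would additionally account for electricity cost under the real-time pricing model and for user satisfaction constraints.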

    Digital Traces of the Mind: Using Smartphones to Capture Signals of Well-Being in Individuals

    General context and questions. Adolescents and young adults typically use their smartphone for several hours a day. Although there are concerns about how such behaviour might affect their well-being, the popularity of these powerful devices also opens novel opportunities for monitoring well-being in daily life. If successful, such monitoring provides opportunities to develop future interventions that give personalized support to individuals at the moment they require it (just-in-time adaptive interventions). Taking an interdisciplinary approach with insights from communication, computational, and psychological science, this dissertation investigated the relation between smartphone app use and well-being and developed machine learning models to estimate an individual's well-being based on how they interact with their smartphone. To elucidate the relation between smartphone trace data and well-being and to contribute to the development of technologies for monitoring well-being in future clinical practice, this dissertation addressed two overarching questions: RQ1: Can we find empirical support for theoretically motivated relations between smartphone trace data and well-being in individuals? RQ2: Can we use smartphone trace data to monitor well-being in individuals?

    Aims. The first aim of this dissertation was to quantify the relation between the collected smartphone trace data and momentary well-being, at the sample level but also for each individual, following recent conceptual insights and empirical findings in psychological, communication, and computational science. A strength of this personalized (or idiographic) approach is that it allows us to capture how individuals might differ in how smartphone app use relates to their well-being. Considering such interindividual differences is important to determine whether some individuals might benefit from spending more time on their smartphone apps whereas others do not, or even experience adverse effects. The second aim of this dissertation was to develop models for monitoring well-being in daily life. The present work pursued this transdisciplinary aim by taking a machine learning approach and evaluating to what extent an individual's well-being can be estimated from their smartphone trace data. If such traces can help pinpoint when individuals are unwell, they might be a useful data source for developing future just-in-time adaptive interventions. With this aim, the dissertation follows current developments in psychoinformatics and psychiatry, where substantial research resources are invested in using smartphone traces and similar data (obtained with smartphone sensors and wearables) to develop technologies for detecting whether an individual is currently unwell or will be in the future.

    Data collection and analysis. This work combined novel data collection techniques (digital phenotyping and experience sampling methodology) for measuring smartphone use and well-being in the daily lives of 247 student participants. For a period of up to four months, a dedicated application installed on participants' smartphones collected smartphone trace data. In the same period, participants completed a brief smartphone-based well-being survey five times a day (for 30 days in the first month and 30 days in the fourth month; up to 300 assessments in total).
    At each measurement, this survey comprised questions about the participants' momentary level of procrastination, stress, and fatigue, while sleep duration was measured in the morning. Taking a time-series and machine learning approach to analysing these data, I provide the following contributions: Chapter 2 investigates the person-specific relation between passively logged usage of different application types and momentary subjective procrastination; Chapter 3 develops machine learning methodology to estimate sleep duration using smartphone trace data; Chapter 4 combines machine learning and explainable artificial intelligence to discover smartphone-tracked digital markers of momentary subjective stress; and Chapter 5 uses a personalized machine learning approach to evaluate whether smartphone trace data contain behavioural signs of fatigue. Collectively, these empirical studies provide preliminary answers to the overarching questions of this dissertation.

    Summary of results. With respect to the theoretically motivated relations between smartphone trace data and well-being (RQ1), we found that different patterns in smartphone trace data, from time spent on social network, messenger, video, and game applications to smartphone-tracked sleep proxies, are related to well-being in individuals. The strength and nature of this relation depend on the individual and on the app usage pattern under consideration. The relation between smartphone app use patterns and well-being is limited in most individuals, but relatively strong in a minority. Whereas some individuals might benefit from using specific app types, others might experience decreases in well-being when spending more time on these apps. With respect to whether we can use smartphone trace data to monitor well-being in individuals (RQ2), we found that such data might be useful for this purpose in some individuals and to some extent. They appear most relevant in the context of sleep monitoring (Chapter 3) and have the potential to be included as one of several data sources for monitoring momentary procrastination (Chapter 2), stress (Chapter 4), and fatigue (Chapter 5) in daily life.

    Outlook. Future interdisciplinary research is needed to investigate whether the relationship between smartphone use and well-being depends on the nature of the activities performed on these devices, the content they present, and the context in which they are used. Answering these questions is essential to unravel the complex puzzle of developing technologies for monitoring well-being in daily life.
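    The dissertation's analysis code is not shown here; the following is a minimal sketch, assuming scikit-learn and NumPy, of the idiographic (person-specific) modelling approach described above: one model per participant, estimating a momentary self-report from smartphone-derived features, evaluated with time-ordered splits. The feature layout and the simulated data are illustrative assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import TimeSeriesSplit

# Hypothetical layout for one participant: rows are survey moments,
# columns are smartphone features logged before each prompt (e.g.
# minutes on messenger/social/video/game apps); y is the momentary
# self-report (e.g. stress on a 0-100 scale). Data simulated here.
rng = np.random.default_rng(0)
X = rng.random((300, 4)) * 60              # up to 300 assessments
y = 20 + 0.5 * X[:, 1] + rng.normal(0, 5, 300)

# Person-specific (idiographic) model: train and evaluate within one
# individual's time series, respecting temporal order to avoid leakage.
scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

print(f"mean out-of-sample R^2 for this participant: {np.mean(scores):.2f}")
```

    Repeating this procedure for every participant yields the distribution of person-specific model performance that underlies the dissertation's conclusion: limited predictive signal for most individuals, stronger signal for a minority.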

    MetaOmniCity: Towards urban metaverse cyberspaces using immersive smart city digital twins

    The movie The Matrix (1999) boosted our imagination about how far we can be immersed in the cyber world, i.e., how far the cyber world can become indistinguishable from the real world through metaverse space travel. Not even their creators expected that aspirational fictional virtual worlds such as "ActiveWorlds" (1995) and "Second Life" (2003), with many urban experiences embedded into a rich-featured 3D environment, would impact the way we experience our real urban environments. Are we going to feel/become ourselves, through our cyber-physical presence (e.g., our augmented avatars), in other mirror worlds doing many other things? Are the created imaginary worlds becoming a part of the real worlds, or vice versa? The recent once-in-a-lifetime pandemic has confirmed the importance of location- and time-independent Digital Twins (DTs) (i.e., virtual scale models) of cities and their automated services, which can provide everybody with equity and accessibility by democratising all types of services, leading to increased Quality of Life (QoL). This study analyses how the metaverse (a 3D elevation of the linear Internet), which aims to build high-fidelity virtual worlds with which to interact with the real world, can be engaged within the Smart City (SC) ecosystem with highly immersive Quality of Experience (QoE). It proposes an urban metaverse ecosystem framework, MetaOmniCity, designed to give policymakers, city planners and all other stakeholders a variety of insights and orchestration directions on how to transform data-driven SCs with DTs into virtually inhabitable cities with a network of shared urban experiences from a metaverse point of view. MetaOmniCity allows the metaversification of cities with granular virtual societies, i.e., MetaSocieties, eliminates boundaries (e.g., time, space and language) between the real world and its virtual counterparts, and can be shaped to the particular requirements and features of cities. This can pave the way for immersive globalisation, with the bigger and richer metaverse of Country (MoC) and metaverse of World (MoW) being an immersive DT of the broader universe with digitally connected cities, removing physical borders. MetaOmniCity is expected to accelerate the building, deployment, and adoption of immersive urban metaverse worlds/networks for citizens to interface with as an extension of real urban social and individual experiences.

    Towards an integrated vulnerability-based approach for evaluating, managing and mitigating earthquake risk in urban areas

    Doctoral thesis in Civil Engineering. Strong seismic events like the ones in Türkiye-Syria (2023) or Mexico (2017) should guide our attention to the design and implementation of proactive actions aimed at identifying vulnerable assets. This work proposes a suitable and easy-to-implement workflow for performing large-scale seismic vulnerability assessments in historic environments by means of digital tools. A parameter-based vulnerability model is adopted given its affinity with the Mexican National Catalogue of Historical Monuments. A first large-scale implementation of this method in the historic city of Atlixco (Puebla, Mexico) demonstrated its suitability and some limitations, which led to the development of a strategy for quantifying and incorporating the epistemic uncertainties encountered during data acquisition. Given the volume of data these analyses involve, it was necessary to develop robust strategies for acquiring, storing and managing information. The use of Geographical Information System environments, together with customised Python-based programs and cloud-based file distribution, made it possible to assemble urban-scale databases that facilitate field data acquisition, vulnerability and damage calculations, and the representation of outcomes. This development was the basis for a second large-scale assessment in selected municipalities of the state of Morelos (Mexico).
    The characterisation of the seismic vulnerability of more than 160 buildings made it possible to assess the representativeness of the parametric vulnerability approach by comparing the theoretical damage estimations against the damage observed after the 2017 Puebla-Morelos earthquakes. This comparison is the basis for a Machine Learning-assisted process of calibration and adjustment, representing a feasible strategy for calibrating such vulnerability models using Machine Learning algorithms and the empirical evidence of damage in post-seismic scenarios. This work was partly financed by FCT/MCTES through national funds (PIDDAC) under the R&D Unit Institute for Sustainability and Innovation in Structural Engineering (ISISE), reference UIDB/04029/2020. This research had financial support provided by the Portuguese Foundation for Science and Technology (FCT) through the Analysis and Mitigation of Risks in Infrastructures (InfraRisk) program under the PhD grant PD/BD/150385/2019.
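    As an illustration of the parameter-based approach, the sketch below computes a normalized vulnerability index as a weighted sum of parameter scores and converts it to a mean damage grade with the widely used macroseismic formula of Giovinazzi and Lagomarsino. The parameters, scores and weights are placeholders rather than the thesis's calibrated values, and feeding the normalized index directly into the macroseismic formula is a simplification for illustration.

```python
import math

# Illustrative parameter scores (0-45, worst class = 45) and weights in
# the spirit of GNDT-style vulnerability index forms; the real survey
# parameters and weights come from the adapted Mexican catalogue form.
PARAMS = {
    "structural_system":  (15, 1.00),
    "conservation_state": (30, 0.50),
    "roof_type":          (15, 0.75),
    "plan_irregularity":  (45, 0.50),
}

def vulnerability_index(params):
    """Weighted sum of parameter scores, normalized to [0, 1]."""
    total = sum(score * weight for score, weight in params.values())
    worst = sum(45 * weight for _, weight in params.values())
    return total / worst

def mean_damage_grade(v, intensity, q=2.3):
    """Giovinazzi-Lagomarsino macroseismic model: mean damage grade
    (0-5) for macroseismic intensity I and vulnerability V."""
    return 2.5 * (1.0 + math.tanh((intensity + 6.25 * v - 13.1) / q))

v = vulnerability_index(PARAMS)
print(f"V = {v:.2f}; mean damage grade at intensity 8: "
      f"{mean_damage_grade(v, 8):.2f}")
```

    The ML-assisted calibration described above would, in essence, adjust such weights so that the predicted damage grades best match the damage observed after the 2017 earthquakes.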

    PBL in a Digital Age


    Cybersecurity applications of Blockchain technologies

    With the increase in connectivity, the popularization of cloud services, and the rise of the Internet of Things (IoT), decentralized approaches to trust management are gaining momentum. Since blockchain technologies provide a distributed ledger, they are receiving massive attention from the research community in different application fields. However, this technology does not provide cybersecurity by itself. Thus, this thesis first aims to provide a comprehensive review of the techniques and elements that have been proposed to achieve cybersecurity in blockchain-based systems. The analysis targets researchers in the area, cybersecurity specialists and blockchain developers, and a series of lessons learned are presented as well. One of them is the rise of Ethereum as one of the most used technologies. Furthermore, some intrinsic characteristics of the blockchain, such as permanent availability and immutability, make it interesting for other ends, namely as a covert channel and for malicious purposes. On the one hand, the use of blockchains by malware has not been characterized yet, so this thesis also analyzes the current state of the art in this area. One of the lessons learned is that covert communications have received little attention. On the other hand, although previous works have analyzed the feasibility of covert channels in one particular blockchain technology, Bitcoin, no previous work has explored the use of Ethereum to establish a covert channel considering all transaction fields and smart contracts. To foster further defence-oriented research, two novel mechanisms are presented in this thesis. First, Zephyrus takes advantage of all Ethereum fields and smart-contract bytecode. Second, Smart-Zephyrus is built to complement Zephyrus by leveraging smart contracts written in Solidity. We also assess the mechanisms' feasibility and cost. Our experiments show that Zephyrus, in the best case, can embed 40 Kbits in 0.57 s for US$1.64 and retrieve them in 2.8 s, while Smart-Zephyrus is able to hide a 4 Kb secret in 41 s. While being expensive (around US$1.82 per bit), the provided stealthiness might be worth the price for attackers. Furthermore, these two mechanisms can be combined to increase capacity and reduce costs.
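    Zephyrus itself is not reproduced here; the following self-contained sketch shows only the basic embedding step common to such covert channels: serializing a secret into 32-byte words, the granularity of Ethereum calldata and storage, and recovering it afterwards. A real channel like Zephyrus would spread these words across transaction fields and smart-contract bytecode; all names below are illustrative.

```python
import binascii

WORD = 32  # Ethereum calldata and storage operate on 32-byte words

def embed(secret: bytes) -> list[str]:
    """Split a secret into hex-encoded 32-byte words. A real covert
    channel would scatter these across transaction fields (data, value,
    gas price, ...) or smart-contract bytecode, as Zephyrus automates."""
    padded = secret + b"\x00" * (-len(secret) % WORD)
    return ["0x" + binascii.hexlify(padded[i:i + WORD]).decode()
            for i in range(0, len(padded), WORD)]

def extract(words: list[str]) -> bytes:
    """Concatenate the words and strip the zero padding. A robust
    implementation would carry an explicit length prefix instead."""
    raw = b"".join(binascii.unhexlify(w[2:]) for w in words)
    return raw.rstrip(b"\x00")

payload = embed(b"exfiltrated configuration data")
assert extract(payload) == b"exfiltrated configuration data"
print(payload)
```

    In practice the payload would also be encrypted before embedding; the capacity and per-bit cost trade-off of placing such words on-chain is what the experiments reported above quantify.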