4,028 research outputs found

    Application of Track Geometry Deterioration Modelling and Data Mining in Railway Asset Management

    Get PDF
    In the management of a modern European railway system, spending is predominantly allocated to maintaining and renewing the existing rail network rather than constructing completely new lines. In addition to major costs, the maintenance and renewals of the existing rail network often cause traffic restrictions or line closures, which decrease the usability of the rail network. Therefore, timely maintenance that achieves long-lasting improvements is imperative for competitive and punctual rail traffic. This kind of maintenance requires a strong knowledge base for decision making regarding the current condition of track structures. Track owners commission several different measurements that depict the condition of track structures and maintain comprehensive asset management data repositories. Perhaps one of the most important data sources is the track recording car measurement history, which depicts the condition of track geometry at different times. These measurement results are important because they offer a reliable condition database: the measurements are done recurrently, two to six times a year in Finland depending on the track section; the same recording car is used for many years; the results are repeatable; and they provide a good overall idea of the condition of track structures. However, although high-quality data is available, there are major challenges in analysing the data in practical asset management because there are few established methods for analytics. Practical asset management typically only monitors whether given threshold values are exceeded and subjectively assesses maintenance needs and the development in the condition of track structures. The lack of advanced analytics prevents the full utilisation of the available data in maintenance planning, which hinders decision making. The main goals of this dissertation study were to develop track geometry deterioration modelling methods, apply data mining in analysing currently available railway asset data, and implement the results from these studies into practical railway asset management. The development of track geometry deterioration modelling methods focused on utilising currently available data to produce novel information on the development in the condition of track structures, past maintenance effectiveness, and future maintenance needs. Data mining was applied in investigating the root causes of track geometry deterioration based on asset data. Finally, maturity models were applied as the basis for implementing track geometry deterioration modelling and track asset data analytics into practice. Based on the research findings, currently available Finnish measurement and asset data was sufficient for the desired analyses. For the Finnish track inspection data, robust linear optimisation was developed for track geometry deterioration modelling. The modelling provided key figures that depict the condition of structures, maintenance effectiveness, and future maintenance needs. Moreover, visualisations were created from the modelling to enable the practical use of the modelling results. The applied exploratory data mining method, the General Unary Hypotheses Automaton (GUHA), could find interesting and hard-to-detect correlations within asset data. With these correlations, novel observations on problematic track structure types were made. The observations could be used to allocate further research to problematic track structures, which would not have been possible without using data mining to identify these structures. The implementation of track geometry deterioration modelling and asset data analytics into practice was approached by applying maturity models. The use of maturity models offered a practical way of approaching future development, as the development could be divided into four maturity levels, which created clear incremental goals. The maturity model and the incremental goals enabled wide-scale development planning, in which progress can be segmented and monitored, which supports successful project completion. The results from these studies demonstrate how currently available data can be used to provide completely new and meaningful information when advanced analytics are applied. In addition to novel solutions for data analytics, this dissertation research also provides methods for implementing the solutions, as the true benefits of knowledge-based decision making are only obtained in practical railway asset management.
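    To make the modelling idea concrete, here is a minimal sketch, assuming synthetic inspection data and a Huber-type robust loss rather than the dissertation's actual formulation, of fitting a linear deterioration trend to track geometry measurements and projecting when an assumed maintenance limit would be reached:

```python
# Illustrative sketch only: robust linear fit of a track geometry
# deterioration trend (synthetic data, not from the dissertation).
import numpy as np
from scipy.optimize import least_squares

# Inspection times (years) and measured standard deviation of
# longitudinal level on one track segment (synthetic values).
t = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4])
sd = np.array([0.62, 0.70, 0.78, 0.85, 1.45, 1.01, 1.10])  # one outlier

def residuals(params, t, sd):
    intercept, rate = params
    return (intercept + rate * t) - sd

# A Huber-type loss keeps the single outlier from dominating the fit.
fit = least_squares(residuals, x0=[0.5, 0.1], loss="huber",
                    f_scale=0.1, args=(t, sd))
intercept, rate = fit.x
print(f"estimated deterioration rate: {rate:.3f} mm/year")

# An assumed alert limit translates the rate into a maintenance forecast.
threshold = 1.3  # mm, hypothetical limit
print(f"projected time to alert limit: {(threshold - intercept) / rate:.1f} years")
```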

    Adaptive Resource Allocation for Workflow Containerization on Kubernetes

    Full text link
    In the cloud-native era, Kubernetes-based workflow engines enable containerized workflow execution through the inherent abilities of Kubernetes. However, when encountering continuous workflow requests and unexpected spikes in resource requests, the engine is limited to the current workflow load information for resource allocation, which lacks agility and predictability and results in over- and under-provisioning of resources. This mechanism seriously hinders workflow execution efficiency and leads to high resource waste. To overcome these drawbacks, we propose an adaptive resource allocation scheme named ARAS for Kubernetes-based workflow engines. Considering potential future workflow task requests within the current task pod's lifecycle, the ARAS uses a resource scaling strategy to allocate resources in response to high-concurrency workflow scenarios. The ARAS offers resource discovery, resource evaluation, and allocation functionalities and serves as a key component of our tailored workflow engine (KubeAdaptor). By integrating the ARAS into KubeAdaptor for containerized workflow execution, we demonstrate the practical abilities of KubeAdaptor and the advantages of our ARAS. Compared with the baseline algorithm, experimental evaluation under three distinct workflow arrival patterns shows that the ARAS achieves time savings of 9.8% to 40.92% in the average total duration of all workflows, time savings of 26.4% to 79.86% in the average duration of individual workflows, and an increase of 1% to 16% in CPU and memory resource usage rates.
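    As a rough illustration of scaling a task pod's resource requests in response to queued workflow tasks, the following sketch uses an invented scaling rule and invented names; it is not the ARAS algorithm from the paper:

```python
# Minimal sketch of an adaptive resource-request decision for a workflow
# task pod; the scaling rule, names, and numbers are illustrative
# assumptions, not the ARAS algorithm itself.
from dataclasses import dataclass

@dataclass
class NodeCapacity:
    cpu_m: int    # free CPU in millicores
    mem_mi: int   # free memory in MiB

def scaled_request(base_cpu_m: int, base_mem_mi: int,
                   queued_tasks: int, free: NodeCapacity,
                   headroom: float = 0.2) -> dict:
    """Shrink a task's resource request when many tasks are queued so that
    concurrent workflows can be admitted, while keeping a headroom fraction
    of the node unallocated."""
    usable_cpu = int(free.cpu_m * (1 - headroom))
    usable_mem = int(free.mem_mi * (1 - headroom))
    share = max(1, queued_tasks)                      # naive demand forecast
    cpu = min(base_cpu_m, max(100, usable_cpu // share))
    mem = min(base_mem_mi, max(128, usable_mem // share))
    return {"requests": {"cpu": f"{cpu}m", "memory": f"{mem}Mi"}}

# Example: 8 queued tasks on a node with 4000m CPU / 8192Mi free.
print(scaled_request(1000, 2048, 8, NodeCapacity(4000, 8192)))
# -> requests of 400m CPU and 819Mi memory per task pod
```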

    Intelligent computing : the latest advances, challenges and future

    Get PDF
    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have long followed different paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.

    Modelling, Monitoring, Control and Optimization for Complex Industrial Processes

    Get PDF
    This reprint includes 22 research papers and an editorial, collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. The reprint aims to promote the research field and benefit readers from both the academic community and industrial sectors.

    Inclusive Intelligent Learning Management System Framework - Application of Data Science in Inclusive Education

    Get PDF
    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. As a disabled student, the author faced higher education with a handicap, and the experience of studying during COVID-19 confinement periods matched the findings of recent research on the importance of digital accessibility in increasingly e-learning-intensive academic experiences. Narrative and systematic literature reviews provided context on the World Health Organization's International Classification of Functioning, Disability and Health, the legal and standards framework, and the state of the art in information and communication technology. An assessment of Portuguese higher education institutions' websites showed that only a few outlying institutions had implemented near-perfect websites in terms of accessibility. A gap was therefore identified between how accessible Portuguese higher education websites are, the needs of all students, including those with disabilities, and even the minimum legal accessibility requirements for digital products and services provided by public or publicly funded organizations. Identifying a problem in society and exploring the scientific knowledge base for context and state of the art was the first stage of the Design Science Research methodology, followed by development and validation cycles of an Inclusive Intelligent Learning Management System Framework. The framework blends contributions from various Data Science fields with accessibility-guideline-compliant interface design and accessibility compliance assessment of uploaded content. Validation was provided by a focus group whose inputs were considered for the version presented in this dissertation. As it was not the purpose of the research to deliver a complete implementation of the framework, and consistent data to make all the modules interact with each other was lacking, the most relevant modules were tested with open data as a proof of concept. The rigor cycle of DSR started with the inclusion of the previous thesis in the Atlântica University Institute Scientific Repository and is to be completed with the publication of this thesis and the findings of the already started PhD in relevant journals and conferences.
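    As a small illustration of the kind of automated check such an accessibility assessment might include, the sketch below flags images lacking alt text in a sample HTML page; the rule and sample markup are assumptions and fall far short of a full WCAG evaluation:

```python
# Minimal sketch of one automated accessibility check (missing alt text);
# illustrative only, not the framework's assessment module.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "<no src>"))

sample_page = """
<html><body>
  <img src="logo.png" alt="University logo">
  <img src="banner.jpg">
</body></html>
"""

checker = AltTextChecker()
checker.feed(sample_page)
print("images missing alt text:", checker.missing_alt)  # ['banner.jpg']
```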

    A Design Science Research Approach to Smart and Collaborative Urban Supply Networks

    Get PDF
    Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. As supply chain management is a heterogeneous field, its literature base is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness. A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence applications in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have intensified. Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. The thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.

    Federated Domain Generalization: A Survey

    Full text link
    Machine learning typically relies on the assumption that training and testing distributions are identical and that data is centrally stored for training and testing. However, in real-world scenarios, distributions may differ significantly, and data is often distributed across different devices, organizations, or edge nodes. Consequently, it is imperative to develop models that can effectively generalize to unseen distributions while the data remains distributed across different domains. In response to this challenge, there has been a surge of interest in federated domain generalization (FDG) in recent years. FDG combines the strengths of federated learning (FL) and domain generalization (DG) techniques to enable multiple source domains to collaboratively learn a model capable of directly generalizing to unseen domains while preserving data privacy. However, generalizing the federated model under domain shifts is a technically challenging problem that has so far received scant attention in the research area. This paper presents the first survey of recent advances in this area. Initially, we discuss the development process from traditional machine learning to domain adaptation and domain generalization, leading to FDG, and provide the corresponding formal definition. Then, we categorize recent methodologies into four classes: federated domain alignment, data manipulation, learning strategies, and aggregation optimization, and present suitable algorithms in detail for each category. Next, we introduce commonly used datasets, applications, evaluations, and benchmarks. Finally, we conclude this survey by providing some potential research topics for the future.
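    To make the FDG setting concrete, the following minimal sketch shows a FedAvg-style aggregation across three synthetic source domains, the building block that many surveyed methods extend; the data, model, and weighting are assumptions rather than any specific algorithm from the survey:

```python
# Minimal sketch of FedAvg-style aggregation across source domains;
# entirely synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on one domain."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three source domains with shifted feature distributions (synthetic).
domains = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(shift, 1.0, size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(float)
    domains.append((X, y))

global_w = np.zeros(5)
for _ in range(10):                        # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in domains]
    sizes = np.array([len(y) for _, y in domains], dtype=float)
    # The server aggregates sample-size-weighted local models;
    # raw data never leaves its domain.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated global weights:", np.round(global_w, 3))
```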

    Trustworthiness Mechanisms for Long-Distance Networks in Internet of Things

    Get PDF
    This thesis aims at achieving reliable data exchange over a harsh environment by improving its trustworthiness through the design of a complete model that takes into account the different layers of trustworthiness and through the implementation of the model's associated countermeasures. The thesis focuses on the use case of the SHETLAND-NET project, which aims to deploy a hybrid Internet of Things (IoT) architecture with LoRa and Near Vertical Incidence Skywave (NVIS) communications to offer a telemetry service for permafrost monitoring in Antarctica. To accomplish the thesis objectives, first, a review of the state of the art in trustworthiness is carried out to propose a definition and scope of the term. From this, a four-layer trustworthiness model is designed, with each layer characterized by its scope, its metric for trustworthiness accountability, its countermeasures for trustworthiness improvement, and its interdependencies with the other layers. This model enables trustworthiness accountability and assessment of the Antarctic use case. Given the harsh conditions and the limitations of the technology used in this case, the model is validated and the telemetry service is evaluated through simulations in Riverbed Modeler. To obtain anticipated values of the expected trustworthiness, the proposed architecture is modelled to evaluate its performance with different configurations prior to deployment in the field. The proposed architecture goes through three major iterations of trustworthiness improvement. In the first iteration, the use of social trust management and consensus mechanisms is explored to take advantage of sensor redundancy. In the second iteration, the use of modern transport protocols is evaluated for the Antarctic use case. The final iteration of this thesis assesses the use of a Delay Tolerant Network (DTN) architecture with the Bundle Protocol (BP) to improve the system's trustworthiness. Finally, a Proof of Concept (PoC) with real hardware that was deployed in the 2021-2022 Antarctic campaign is presented, describing the functional tests performed in Antarctica and Catalonia.
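    As a simple illustration of using trust management and consensus over redundant sensors, the sketch below down-weights a faulty reading via a trust score updated against a median estimate; the update rule and all parameters are assumptions, not the thesis's exact mechanism:

```python
# Illustrative sketch of trust-weighted consensus over redundant
# permafrost temperature sensors; synthetic values and invented rules.
import numpy as np

def update_trust(readings, trust, tol=0.5, gain=0.25):
    """Raise trust for sensors near the robust (median) estimate,
    lower it for outliers."""
    agreed = float(np.median(readings))
    close = np.abs(np.asarray(readings) - agreed) <= tol
    return np.clip(trust + np.where(close, gain, -gain), 0.0, 1.0)

def consensus(readings, trust):
    """Trust-weighted average; fully distrusted sensors contribute nothing."""
    return float(np.average(readings, weights=trust))

readings = [-3.1, -3.0, -2.9, 4.2]        # one faulty sensor (deg C)
trust = np.ones(len(readings))
for _ in range(4):                        # a few telemetry rounds
    trust = update_trust(readings, trust)

print("per-sensor trust:", trust)                                   # faulty sensor -> 0.0
print("agreed temperature:", round(consensus(readings, trust), 2))  # ~ -3.0
```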