4,491 research outputs found

    Dynamics under Uncertainty: Modeling Simulation and Complexity

    The dynamics of systems have proven to be very powerful tools for understanding the behavior of different natural phenomena throughout the last two centuries. However, the attributes of natural systems are observed to deviate from their classical states due to the effect of different types of uncertainty. Randomness and impreciseness are the two major sources of uncertainty in natural systems. Randomness is modeled by different stochastic processes, while impreciseness can be modeled by fuzzy sets, rough sets, Dempster–Shafer theory, etc.
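    As a minimal illustration of the two uncertainty types named above, the sketch below contrasts a random quantity (a stochastic sample) with an imprecise one (a fuzzy number). The triangular membership function and the NumPy usage are illustrative assumptions, not constructs taken from the text.

```python
# Minimal sketch: randomness as a random variable vs. impreciseness as a fuzzy set.
# The triangular fuzzy number is an assumed, illustrative choice.
import numpy as np

rng = np.random.default_rng(0)

# Randomness: an uncertain parameter modelled as a random variable.
samples = rng.normal(loc=10.0, scale=2.0, size=1000)
print(f"stochastic mean ~ {samples.mean():.2f}")

def triangular_membership(x, a, b, c):
    """Membership degree of x in the triangular fuzzy number (a, b, c)."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.clip((c - x) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)

# Impreciseness: "about 10" as a fuzzy number instead of a distribution.
xs = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
print(dict(zip(xs.tolist(), triangular_membership(xs, 8, 10, 12).round(2).tolist())))
```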

    Condition Assessment of Concrete Bridge Decks Using Ground and Airborne Infrared Thermography

    Applications of nondestructive testing (NDT) technologies have shown promise in assessing the condition of existing concrete bridges. Infrared thermography (IRT) has gradually gained wider acceptance as an NDT and evaluation tool in the civil engineering field. The high capability of IRT in detecting subsurface delamination, the commercial availability of infrared cameras, lower cost compared with other technologies, speed of data collection, and remote sensing are some of the expected benefits of applying this technique in bridge deck inspection practices. The research conducted in this thesis aims to develop a rational condition assessment system for concrete bridge decks based on IRT technology and to automate its analysis process in order to add this invaluable technique to the bridge inspector's toolbox. Ground penetrating radar (GPR) has also been widely recognized as an NDT technique capable of evaluating the potential of active corrosion. Therefore, integrating IRT and GPR results in this research provides more precise assessments of bridge deck conditions. In addition, the research aims to establish a unique link between NDT technologies and inspector findings by developing a novel bridge deck condition rating index (BDCI). The proposed procedure captures the integrated results of IRT and GPR techniques, along with visual inspection judgements, thus overcoming the inherent scientific uncertainties of this process. Finally, the research aims to explore the potential application of unmanned aerial vehicle (UAV) infrared thermography for detecting hidden defects in concrete bridge decks. The NDT work in this thesis was conducted on full-scale deteriorated reinforced concrete bridge decks located in Montreal, Quebec and London, Ontario. The proposed models have been validated through various case studies. IRT, either from the ground or from a UAV with high-resolution thermal infrared imagery, was found to be an appropriate technology for inspecting and precisely detecting subsurface anomalies in concrete bridge decks. The proposed analysis produced thermal mosaic maps from the individual IR images. The k-means clustering classification technique was used to segment the mosaics and identify objective thresholds and, hence, to delineate different categories of delamination severity across entire bridge decks. The proposed integration of NDT technologies and visual inspection results provided a more reliable BDCI. The information needed to identify the parameters affecting the integration process was gathered from bridge engineers with extensive experience and intuition. The analysis process used fuzzy set theory to account for uncertainties and imprecision in the measurements of bridge deck defects detected by IRT and GPR testing, along with bridge inspector observations. The developed system and models should stimulate wider acceptance of IRT as a rapid, systematic and cost-effective evaluation technique for detecting bridge deck delaminations. The proposed combination of IRT and GPR results should expand their correlative use in bridge deck inspection. Integrating the proposed BDCI procedure with existing bridge management systems can provide a detailed and timely picture of bridge health, thus helping transportation agencies identify critical deficiencies at various service life stages. Consequently, this can yield sizeable reductions in bridge inspection costs, more effective allocation of limited maintenance and repair funds, and improved safety, mobility, longevity, and reliability of highway transportation assets.
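    A hedged sketch of the k-means segmentation step described above: cluster the pixel temperatures of a thermal mosaic to obtain objective thresholds separating delamination-severity categories. The synthetic mosaic and the choice of three clusters are assumptions for illustration, not values taken from the thesis.

```python
# Sketch: segment a (synthetic) thermal mosaic into severity classes with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic 100x100 mosaic of surface temperatures (deg C): sound concrete
# around 20 C, with warmer patches standing in for delaminated areas.
mosaic = rng.normal(20.0, 0.5, size=(100, 100))
mosaic[20:40, 30:60] += 2.0   # moderate anomaly
mosaic[70:90, 10:30] += 4.0   # severe anomaly

pixels = mosaic.reshape(-1, 1)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(mosaic.shape)

# Order clusters by mean temperature so class 0 = sound, class 2 = most severe.
order = np.argsort(kmeans.cluster_centers_.ravel())
severity = np.zeros_like(labels)
for rank, cluster_id in enumerate(order):
    severity[labels == cluster_id] = rank

for rank in range(3):
    share = (severity == rank).mean() * 100
    print(f"severity class {rank}: {share:.1f}% of deck area")
```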

    State-of-research on performance indicators for bridge quality control and management

    The present study provides a review of the most widely used technical and non-technical performance indicators adopted worldwide by infrastructure owners. This work, developed within the European COST Action TU 1406 ("Quality specifications for roadway bridges, standardization at a European level"), aims to summarize the state-of-the-art maintenance scheduling practices adopted by bridge owners, focusing mainly on the identification and classification of the most widely used performance indicators (PIs). PIs are subdivided into technical and non-technical ones; the latter subclass is further classified into environmental, social and economic indicators. The study aims to be a reference for researchers dealing with performance-based assessments and bridge maintenance and management practices.

    Similarity-based methods for machine diagnosis

    This work presents a data-driven condition-based maintenance system based on similarity-based modeling (SBM) for automatic machinery fault diagnosis. The proposed system provides information about the equipment's current state (degree of anomaly) and returns a set of exemplars that can be used to describe the current state in a sparse fashion, which the operator can examine to decide what action to take. The system is modular and data-agnostic, enabling its use with different equipment and data sources with small modifications. The main contributions of this work are: an extensive study of the proposition and use of multiclass SBM on different databases, either as a stand-alone classification method or in combination with an off-the-shelf classifier; novel methods for selecting prototypes for the SBM models; the use of new similarity functions; and a new production-ready fault detection service. These contributions achieved the goal of increasing the performance of SBM models in a fault classification scenario while reducing their computational complexity. The proposed system was evaluated on three different databases, achieving higher or similar performance compared with previous works on the same databases. Comparisons with other methods are shown for the recently developed Machinery Fault Database (MaFaulDa) and for the Case Western Reserve University (CWRU) bearing database. The proposed techniques increase the generalization power of the similarity model and of the associated classifier, reaching accuracies of 98.5% on MaFaulDa and 98.9% on the CWRU database. These results indicate that the proposed approach based on SBM is worth further investigation.
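    A minimal sketch of the similarity-based modeling idea described above: each fault class is represented by a set of prototype feature vectors, a query observation is scored against each class by a similarity function, and both a degree of anomaly and the best-matching exemplar are returned. The Gaussian (RBF) similarity, the toy features, and the class names are assumptions; the prototype-selection and similarity functions studied in the thesis are not reproduced here.

```python
# Sketch of multiclass SBM diagnosis under simplifying assumptions (RBF similarity).
import numpy as np

def rbf_similarity(x, prototypes, gamma=1.0):
    """Similarity between query x and each prototype row."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    return np.exp(-gamma * d2)

def sbm_diagnose(x, class_prototypes, gamma=1.0):
    """Return (predicted class, anomaly degree, best-matching exemplar)."""
    scores, exemplars = {}, {}
    for label, prototypes in class_prototypes.items():
        sims = rbf_similarity(x, prototypes, gamma)
        scores[label] = sims.max()
        exemplars[label] = prototypes[np.argmax(sims)]
    best = max(scores, key=scores.get)
    anomaly_degree = 1.0 - scores[best]   # low similarity => more anomalous
    return best, anomaly_degree, exemplars[best]

# Toy prototypes: 2-D features for a healthy state and one (hypothetical) fault type.
class_prototypes = {
    "normal":    np.array([[0.0, 0.0], [0.1, -0.1]]),
    "imbalance": np.array([[1.0, 1.2], [0.9, 1.1]]),
}
print(sbm_diagnose(np.array([0.95, 1.05]), class_prototypes))
```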

    Reliability Improvement On Feasibility Study For Selection Of Infrastructure Projects Using Data Mining And Machine Learning

    With the progressive development of infrastructure construction, conventional analytical methods such as correlation indices, quantified factors, and peer review no longer provide satisfactory support for decisions on implementing an infrastructure project in the age of big data. This study proposes a mathematical model named the Fuzzy-Neural Comprehensive Evaluation Model (FNCEM) to improve the reliability of infrastructure project feasibility studies using data mining and machine learning. Specifically, time-series data, including traffic videos (278 gigabytes) and historical weather data, were collected from transportation cameras and online sources, respectively. In addition, a questionnaire was distributed to collect public opinion on the influencing factors that an infrastructure project may have. The model then implements the backpropagation Artificial Neural Network (BP-ANN) algorithm to simulate traffic flows and generate outputs as partial quantitative references for evaluation. The traffic simulation outputs are used as partial inputs to the Analytic Hierarchy Process (AHP) based fuzzy logic module of the system to determine the minimum traffic flows that a construction scheme in the corresponding feasibility study should meet. The study is based on a real scenario of constructing a railway-crossing facility in a college town. The results indicated that BP-ANN was well suited to simulating 15-minute small-scale pedestrian and vehicle flows, with minimum overall logarithmic mean squared errors (Log-MSE) of 3.80 and 5.09, respectively. Also, the AHP-based fuzzy evaluation significantly decreased the subjectivity of selecting construction schemes, by 62.5%. It is concluded that the FNCEM has strong potential to enrich the methodology for conducting infrastructure project feasibility studies.
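    A hedged sketch of the AHP step used in such an evaluation module: derive factor weights from a pairwise comparison matrix via its principal eigenvector and check consistency. The 3x3 matrix and the factor names are illustrative assumptions, not data from the study.

```python
# Sketch: AHP priority weights and consistency ratio from a pairwise comparison matrix.
import numpy as np

# Pairwise comparisons (Saaty scale) for three hypothetical factors:
# traffic flow, construction cost, public opinion.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (RI = 0.58 is the standard random index for n = 3).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58

print("weights:", weights.round(3))        # roughly [0.65, 0.23, 0.12]
print("consistency ratio:", round(cr, 3))  # conventionally acceptable if < 0.1
```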

    Predicting loss given default

    The topic of credit risk modeling has arguably become more important than ever before given the recent financial turmoil. In accordance with the international Basel accords on banking supervision, financial institutions need to prove that they hold sufficient capital to protect themselves and the financial system against unforeseen losses caused by defaulters. In order to determine the required minimal capital, empirical models can be used to predict the loss given default (LGD). The main objectives of this doctoral thesis are to obtain new insights into how to develop and validate predictive LGD models through regression techniques. The first part reveals how well real-life LGD can be predicted and which techniques are best. Its value lies in particular in the use of default data from six major international financial institutions and the evaluation of twenty-four different regression techniques, making this the largest LGD benchmarking study so far. Nonetheless, it is found that the resulting models have limited predictive performance no matter which technique is employed, although non-linear techniques yield higher performance than traditional linear techniques. The results of this study strongly advocate the need for financial institutions to invest in the collection of more relevant data. The second part introduces a novel validation framework to backtest the predictive performance of LGD models. The key idea is to assess the test performance relative to the performance during model development with statistical hypothesis tests based on commonly used LGD predictive performance metrics. The value of this framework lies in offering a solution to the lack of reference values for determining acceptable performance and to possible performance bias caused by too little data. This study offers financial institutions a practical tool to prove the validity of their LGD models and corresponding predictions as required by national regulators. The third part investigates whether the optimal regression technique can be selected based on typical characteristics of the data. Its value lies especially in the use of the recently introduced concept of datasetoids, which allows the generation of thousands of datasets representing real-life relations, thereby circumventing the scarcity of publicly available real-life datasets and making this the largest meta-learning regression study so far. It is found that typical data-based characteristics do not play any role in the performance of a technique. Nonetheless, it is proven that algorithm-based characteristics are good drivers for selecting the optimal technique. This thesis may be valuable for any financial institution implementing credit risk models to determine their minimal capital requirements compliant with the Basel accords. The new insights provided in this thesis may support financial institutions in developing and validating their own LGD models. The results of the benchmarking and meta-learning studies can help financial institutions select the appropriate regression technique to model their LGD portfolios. In addition, the proposed backtesting framework, together with the benchmarking results, can be employed to support the validation of internally developed LGD models.
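    A hedged sketch of the backtesting idea described above: compare a common LGD performance metric (here mean absolute error) on an out-of-time sample against its value at model development, using a simple bootstrap test. The metric, the test, and the synthetic data are illustrative assumptions, not the specific framework proposed in the thesis.

```python
# Sketch: flag LGD model deterioration by bootstrapping the out-of-time MAE.
import numpy as np

rng = np.random.default_rng(1)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

# Synthetic LGD realisations and model predictions, both in [0, 1].
y_dev = rng.beta(2, 5, 2000)                       # development sample
p_dev = np.clip(y_dev + rng.normal(0, 0.10, 2000), 0, 1)
y_oot = rng.beta(2, 5, 500)                        # out-of-time sample
p_oot = np.clip(y_oot + rng.normal(0, 0.12, 500), 0, 1)

mae_dev, mae_oot = mae(y_dev, p_dev), mae(y_oot, p_oot)

# Bootstrap the out-of-time MAE; flag deterioration when the development MAE
# falls below the lower 5% tail of that bootstrap distribution.
idx = rng.integers(0, y_oot.size, size=(500, y_oot.size))
boot = np.array([mae(y_oot[i], p_oot[i]) for i in idx])
print(f"MAE at development: {mae_dev:.3f}, out of time: {mae_oot:.3f}")
print("deterioration flagged:", mae_dev < np.quantile(boot, 0.05))
```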

    Proteochemometric Modeling of the Susceptibility of Mutated Variants of the HIV-1 Virus to Reverse Transcriptase Inhibitors

    BACKGROUND: Reverse transcriptase (RT) is a major drug target in highly active antiretroviral therapy (HAART) against HIV, which typically comprises two nucleoside/nucleotide analog RT inhibitors (NRTIs) in combination with a non-nucleoside RT inhibitor or a protease inhibitor. Unfortunately, HIV is capable of escaping the therapy by mutating into drug-resistant variants. Computational models that correlate HIV drug susceptibilities to the virus genotype and to drug molecular properties might facilitate selection of improved combination treatment regimens. METHODOLOGY/PRINCIPAL FINDINGS: We applied our earlier developed proteochemometric modeling technology to analyze HIV mutant susceptibility to the eight clinically approved NRTIs. The data set covered 728 virus variants genotyped for 240 sequence residues of the DNA polymerase domain of the RT, 165 of which contained mutations; in total, the data set covered susceptibility data for 4,495 inhibitor-RT combinations. Inhibitors and RT sequences were represented numerically by 3D-structural and physicochemical property descriptors, respectively. The two sets of descriptors and their derived cross-terms were correlated to the susceptibility data by partial least-squares projections to latent structures. The model identified more than ten frequently occurring mutations, each conferring more than a two-fold loss of susceptibility to one or several NRTIs. The most deleterious mutations were K65R, Q151M, M184V/I, and T215Y/F, each of them decreasing susceptibility to most of the NRTIs. The predictive ability of the model was estimated by cross-validation and by external predictions for new HIV variants; both procedures showed very high correlation between the predicted and actual susceptibility values (Q2 = 0.89 and Q2ext = 0.86). The model is available at www.hivdrc.org as a free web service for predicting the susceptibility of any HIV-1 mutant variant to any of the clinically used NRTIs. CONCLUSIONS/SIGNIFICANCE: Our results give directions on how to develop approaches for selecting genome-based optimum combination therapy for patients harboring mutated HIV variants.
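    A hedged sketch of the proteochemometric setup described above: an inhibitor descriptor block and an RT sequence descriptor block, plus their cross-terms, are correlated to susceptibility values with partial least squares. The random descriptors, dimensions, and synthetic response are placeholders, not the study's data or descriptor sets.

```python
# Sketch: PLS regression on ligand descriptors, protein descriptors, and cross-terms.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_pairs, n_lig, n_prot = 300, 8, 12          # inhibitor-RT combinations (toy sizes)
L = rng.normal(size=(n_pairs, n_lig))        # inhibitor descriptors (placeholder)
P = rng.normal(size=(n_pairs, n_prot))       # RT sequence descriptors (placeholder)

# Cross-terms: elementwise products of every ligand/protein descriptor pair.
cross = (L[:, :, None] * P[:, None, :]).reshape(n_pairs, -1)
X = np.hstack([L, P, cross])

# Synthetic susceptibility with a ligand-protein interaction effect.
y = L[:, 0] * P[:, 1] + 0.5 * P[:, 0] + rng.normal(0, 0.1, n_pairs)

pls = PLSRegression(n_components=5)
q2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated Q2 ~ {q2:.2f}")
```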

    Reliability Models and Failure Detection Algorithms for Wind Turbines

    Over the past decades, the wind industry has grown very significantly in Europe, making wind power the leading source of renewable electricity generation. From an economic point of view, however, the wind sector has not yet reached the level of competitiveness needed to outperform conventional power generation systems. The main costs of operating wind farms are attributed to Operation and Maintenance (O&M) activities, because O&M is currently based mainly on corrective or preventive actions. The use of predictive techniques could therefore significantly reduce maintenance-related costs and thereby improve the overall profitability of wind farm operation. Although the benefits of predictive maintenance are increasingly recognized, these techniques still require further research and exploration. Advanced reliability models and failure prediction algorithms can help operators detect component failures in wind turbines early and adapt their maintenance strategies accordingly. To date, wind turbine reliability models have been based almost exclusively on turbine age, because they were originally developed for machines operating in 'friendly' environments, for example inside industrial buildings. Wind turbines, by contrast, are exposed to highly variable environmental conditions, so classical reliability models do not reflect reality with sufficient accuracy. New reliability models are therefore needed that can reproduce the failure behaviour of wind turbines and their components while taking into account the meteorological and operational conditions at the site. Failure prediction is usually carried out using data obtained from the Supervisory Control and Data Acquisition (SCADA) system or from Condition Monitoring Systems (CMS). Notably, modern wind turbines carry both types of systems, and fusing the two data sources can significantly improve failure detection. This thesis aims to improve current O&M practices through (1) the development of advanced data-driven reliability and failure detection models that include the environmental and operational conditions present at wind farms, and (2) the application of new failure detection algorithms that use the site's environmental and operational conditions as well as data from both SCADA and CMS systems. These two objectives have been divided into four tasks. In the first task, an exhaustive analysis was carried out of the failures occurring in a large set of wind turbines (large both in the number of turbines and in the length of the records) and of their associated downtimes. This shows which components fail most often for each turbine technology, as well as their failure modes, information that is vital for the subsequent development of reliability and maintenance models. Second, the meteorological conditions preceding failure events of the main turbine components were investigated. A data-driven learning framework was developed using k-means clustering and a priori association rules; it is able to handle large amounts of data and provides useful, easily visualized results. In addition, anomaly and pattern detection algorithms were applied to find abrupt changes and recurring patterns in the wind speed time series in the moments preceding failures of the main turbine components. In the third task, a new reliability model is proposed that directly incorporates the meteorological conditions recorded during the two months preceding a failure. The model uses two separate statistical processes: one generates the failure events as well as occasional zeros, while the other generates the structural zeros required by the estimation algorithms. Possible unobserved effects (heterogeneity) within the wind farm are additionally taken into account, and sophisticated regularization techniques are used to avoid over-fitting and multicollinearity problems. Finally, the capability of the model is verified using historical failure data and meteorological readings obtained from the wind farms' met masts. In the last task, prediction algorithms based on meteorological conditions and on operational and vibration data were developed. A Bayesian network was trained to predict component failures in a wind farm, based primarily on the meteorological conditions at the site. A methodology is then introduced to fuse vibration data obtained from the CMS with data obtained from the SCADA system, in order to analyse the relationships between the two sources. These data were used to predict main shaft failures with several artificial intelligence algorithms: random forests, gradient boosting machines, generalized linear models and artificial neural networks. In addition, a tool for the online evaluation of CMS vibration data, called DAVE ('Distance Based Automated Vibration Evaluation'), was developed. The results of this thesis show that the failure behaviour of wind turbine components is strongly influenced by the meteorological conditions at the site. The data-driven learning framework is able to identify both the general and the time-specific conditions preceding component failures. Moreover, it is shown that the proposed reliability models and detection algorithms can significantly improve wind turbine Operation and Maintenance. These reliability and failure detection models are the first to provide a realistic, site-specific representation by considering complex combinations of environmental conditions as well as operational and condition indicators obtained from the fusion of CMS vibration data and SCADA data. This work therefore provides practical frameworks, models and algorithms that can be applied in the field of predictive maintenance of wind turbines.
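    A hedged sketch of the two-process reliability model idea from the third task: one process generates failure counts (including occasional zeros) as a function of preceding weather, while a second process generates structural zeros. Here statsmodels' ZeroInflatedPoisson and synthetic covariates stand in for the thesis's own specification and regularization scheme.

```python
# Sketch: zero-inflated Poisson failure-count model with weather covariates.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 800
wind = rng.weibull(2.0, n) * 8.0           # mean wind speed over the preceding window
turb = rng.uniform(0.05, 0.25, n)          # turbulence intensity

# Structural zeros: some turbine-periods simply cannot register this failure type.
structural = rng.random(n) < 0.3
rate = np.exp(-2.0 + 0.08 * wind + 3.0 * turb)
failures = np.where(structural, 0, rng.poisson(rate))

X = sm.add_constant(np.column_stack([wind, turb]))
model = ZeroInflatedPoisson(failures, X, exog_infl=np.ones((n, 1)))
result = model.fit(maxiter=200, disp=False)
print(result.params.round(3))              # inflation intercept + count-model coefficients
```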

    On computational models of animal movement behaviour

    Finding structures and patterns in animal movement data is essential for understanding a variety of behavioural phenomena, as well as for shedding light on the relationships of animals with conspecifics, with other taxa, and with their environments. Recent advances in the field of computational intelligence, coupled with the proliferation of low-cost telemetry devices such as Global Positioning System (GPS) tags, have made it possible to gather and analyse behavioural data of animals in their natural habitat and in a wide range of contexts. The sensory input that animals receive from their environment, the corresponding motor output, and the neural basis of this relationship, especially as it affects movement, encode a great deal of information about the welfare and survival of these animals and of other organisms in nature's ecosystem. This has significant implications for biodiversity monitoring, global health, and understanding disease progression. Encoding, decoding, and quantifying these functional relationships, however, can be challenging, tedious, and labour intensive. Artificial intelligence holds promise for solving some of these problems and even stands to benefit, since understanding natural intelligence can, for instance, aid the advancement of artificial intelligence. In this thesis, I investigate and propose several computational methods leveraging information-theoretic metrics as well as modern machine learning methods, including supervised, unsupervised, and a novel combination of both, for understanding, predicting, forecasting and quantifying a variety of animal movement phenomena at different time scales and across different taxa and species. Most importantly, the models proposed in this thesis tackle important problems bordering on human and animal welfare as well as their intersection. Specifically, I investigate several information-theoretic metrics for mining animal movement data; I then propose machine learning and statistical techniques for automatically quantifying abnormal movement behaviour in sheep with Batten disease using unsupervised methods. In addition, I propose a predictive model capable of forecasting migration patterns in turkey vultures, as well as their stop-over decisions, using bidirectional recurrent neural networks. Finally, I propose a model of sheep movement behaviour in a flock that combines insights from cognitive neuroscience with modern deep learning models. Overall, the models of animal movement behaviour developed in this thesis are useful to a wide range of scientists in the fields of neuroscience, ethology, veterinary science, conservation and public health. Although these models have been designed for understanding and predicting animal movement behaviour, in many cases they scale easily to other domains, such as human behaviour modelling, with little modification. I highlight the importance of continuing research into computational models of animal movement behaviour to improve our understanding of nature and of the interaction between animals and their environments.
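    A hedged sketch of the forecasting idea mentioned above: a bidirectional recurrent network over a window of past GPS fixes that predicts whether the next step is a stop-over. The architecture, window length, feature choice, and synthetic trajectories are illustrative assumptions, not the thesis's trained model.

```python
# Sketch: bidirectional LSTM for a binary stop-over forecast from a movement track.
import torch
import torch.nn as nn

class StopoverForecaster(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True,
                           bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # stop-over logit

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])        # use the last time step

# Toy batch: 16 tracks, 24 hourly fixes, features = (lat, lon, speed).
x = torch.randn(16, 24, 3)
y = (torch.rand(16, 1) > 0.5).float()

model = StopoverForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(5):                             # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"training loss after 5 steps: {loss.item():.3f}")
```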

    Collaborative Networks, Decision Systems, Web Applications and Services for Supporting Engineering and Production Management

    This book focuses on fundamental and applied research on collaborative and intelligent networks, decision systems, and services for supporting engineering and production management, along with other kinds of problems and services. The development and application of innovative collaborative approaches and systems are of prime importance in the current context of Industry 4.0. Special attention is given to flexible and cyber-physical systems, and to advanced design, manufacturing and management based on artificial intelligence approaches and practices, including social systems and services.