30 research outputs found

    Supporting Safety Analysis of Deep Neural Networks with Automated Debugging and Repair


    Unsupervised Methods for Condition-Based Maintenance in Non-Stationary Operating Conditions

    Maintenance and operation of modern dynamic engineering systems require the use of robust maintenance strategies that are reliable under uncertainty. One such strategy is condition-based maintenance (CBM), in which maintenance actions are determined based on the current health of the system. The CBM framework integrates fault detection and forecasting in the form of degradation modeling to provide real-time reliability, as well as valuable insight into the future health of the system. Coupled with a modern information platform such as the Internet of Things (IoT), CBM can deliver these critical functionalities at scale. The increasingly complex design and operation of engineering systems has introduced novel problems to CBM. Characteristics of these systems - such as the unavailability of historical data or highly dynamic operating behaviour - have rendered many existing solutions infeasible. These problems have motivated the development of new and self-sufficient - in other words, unsupervised - CBM solutions. The issue, however, is that many of the methods required by such frameworks have yet to be proposed in the literature. Key gaps pertaining to the lack of suitable unsupervised approaches for the pre-processing of non-stationary vibration signals, parameter estimation for fault detection, and degradation threshold estimation need to be addressed in order to achieve an effective implementation. The main objective of this thesis is to propose a set of three novel approaches, one addressing each of the aforementioned knowledge gaps. A non-parametric pre-processing and spectral analysis approach, termed spectral mean shift clustering (S-MSC) - which applies mean shift clustering (MSC) to the short-time Fourier transform (STFT) power spectrum for simultaneous de-noising and extraction of time-varying harmonic components - is proposed for the autonomous analysis of non-stationary vibration signals. A second pre-processing approach, termed Gaussian mixture model operating state decomposition (GMM-OSD) - which uses GMMs to cluster multi-modal vibration signals by their respective, unknown operating states - is proposed to address multi-modal non-stationarity. Applied in conjunction with S-MSC, these two approaches form a robust and unsupervised pre-processing framework tailored to the types of signals found in modern engineering systems. The final approach proposed in this thesis is a degradation detection and fault prediction framework, termed the Bayesian one-class support vector machine (B-OCSVM), which tackles the key knowledge gaps pertaining to unsupervised parameter and degradation threshold estimation by re-framing the traditional fault detection and degradation modeling problem as a degradation detection and fault prediction problem. Validation of the three approaches is performed across a wide range of machinery vibration data sets and applications, including data obtained from two full-scale field pilots at Toronto Pearson International Airport: the first on the gearbox of the LINK Automated People Mover (APM) train, and the second on a subset of passenger boarding tunnel pre-conditioned air (PCA) units in Terminal 1.
Results from validation show that the proposed pre-processing approaches, and the combined pre-processing framework, provide a robust and computationally efficient methodology for the analysis of non-stationary vibration signals in unsupervised CBM. Validation of the B-OCSVM framework showed that the proposed parameter estimation approaches enable earlier detection of the degradation process compared to existing approaches, and that the proposed degradation threshold provides a reasonable estimate of the fault manifestation point. Holistically, the approaches proposed in this thesis provide a crucial step towards the effective implementation of unsupervised CBM in complex, modern engineering systems.
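
A minimal sketch of the S-MSC idea described above, assuming a SciPy/scikit-learn environment: mean shift clustering is applied to the high-energy bins of the STFT power spectrum so that time-varying harmonic components are grouped and background noise is discarded. The function name, the percentile-based noise floor, and the bandwidth setting are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.signal import stft
from sklearn.cluster import MeanShift, estimate_bandwidth

def spectral_mean_shift(signal, fs, nperseg=1024, power_percentile=95):
    """Cluster high-energy STFT bins with mean shift; each cluster is treated
    as one candidate time-varying harmonic component."""
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    power_db = 20 * np.log10(np.abs(Z) + 1e-12)

    # Crude de-noising step: keep only the strongest time-frequency bins.
    keep_f, keep_t = np.nonzero(power_db > np.percentile(power_db, power_percentile))
    points = np.column_stack([t[keep_t], f[keep_f]])

    # Scale time and frequency to comparable ranges before clustering.
    scaled = points / points.max(axis=0)
    bandwidth = estimate_bandwidth(scaled, quantile=0.1)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(scaled)
    return points, labels

# Synthetic chirp as a stand-in for a non-stationary vibration signal.
fs = 10_000
time = np.arange(0.0, 2.0, 1 / fs)
chirp = np.sin(2 * np.pi * (100 + 50 * time) * time) + 0.1 * np.random.randn(time.size)
points, labels = spectral_mean_shift(chirp, fs)
print(f"{labels.max() + 1} candidate harmonic component(s) found")
```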

    Assuming Data Integrity and Empirical Evidence to The Contrary

    Background: Not all survey respondents apply their minds or understand the posed questions; as such, they provide answers which lack coherence, and this threatens the integrity of the research. Casual inspection and limited research on the 10-item Big Five Inventory (BFI-10), included in the dataset of the World Values Survey (WVS), suggested that random responses may be common. Objective: To specify the percentage of cases in the BFI-10 which include incoherent or contradictory responses and to test the extent to which the removal of these cases improves the quality of the dataset. Method: The WVS data on the BFI-10, measuring the Big Five personality traits (B5P), collected in South Africa (N=3 531), were used. Incoherent or contradictory responses were removed. The cases in the cleaned-up dataset were then analysed for their theoretical validity. Results: Only 1 612 (45.7%) cases were identified as free of incoherent or contradictory responses. The cleaned-up data did not mirror the B5P structure, as was envisaged. The test for common method bias was negative. Conclusion: In most cases the responses were incoherent. Cleaning up the data did not improve the psychometric properties of the BFI-10. This raises concerns about the quality of the WVS data, the BFI-10, and the universality of B5P theory. Given these results, it would be unwise to use the BFI-10 in South Africa. Researchers are alerted to properly assess the psychometric properties of instruments before using them, particularly in cross-cultural settings.
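
As a concrete illustration of this kind of screening, the sketch below flags respondents who endorse both a regular item and its reverse-keyed counterpart for the same trait at the top of the scale, which is logically contradictory. The column names, item pairings, and cut-off are hypothetical; the authors' actual cleaning rules may differ.

```python
import pandas as pd

# Hypothetical (regular, reverse-keyed) item pairs on a 1-5 Likert scale.
ITEM_PAIRS = [("bfi_extraversion", "bfi_extraversion_r"),
              ("bfi_agreeableness", "bfi_agreeableness_r")]

def flag_contradictory(df: pd.DataFrame, threshold: int = 4) -> pd.Series:
    """True where a respondent scores >= threshold on both items of any pair."""
    contradictory = pd.Series(False, index=df.index)
    for regular, reversed_item in ITEM_PAIRS:
        contradictory |= (df[regular] >= threshold) & (df[reversed_item] >= threshold)
    return contradictory

# Usage: drop flagged cases before any factor or reliability analysis.
# df = pd.read_csv("wvs_bfi10.csv")        # hypothetical file name
# cleaned = df[~flag_contradictory(df)]
```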

    Product Development within Artificial Intelligence, Ethics and Legal Risk

    This open-access book synthesizes a supportive developer checklist that considers sustainable team and agile project management in view of the challenges of Artificial Intelligence and the limits of image recognition. The study is based on technical, ethical, and legal requirements, with examples concerning autonomous vehicles. As the first of its kind, it analyzes all reported car accidents statewide (1.28 million) over a 10-year period. The integration of highly sensitive international court rulings and growing consumer expectations makes this book a helpful guide for product and team development, from initial concept until market launch.

    Data Science: Measuring Uncertainties

    With the increase in data processing and storage capacity, a large amount of data is available. Data without analysis does not have much value. Thus, the demand for data analysis is increasing daily, and the consequence is the appearance of a large number of jobs and published articles. Data science has emerged as a multidisciplinary field to support data-driven activities, integrating and developing ideas, methods, and processes to extract information from data. It includes methods built from different knowledge areas: Statistics, Computer Science, Mathematics, Physics, Information Science, and Engineering; this mixture of areas has given rise to what we call Data Science. New problems that generate large volumes of data, and new solutions to them, are appearing rapidly. Current and future challenges require greater care in creating new solutions that satisfy the rationale of each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication in their foundations and in how they are applied. This point highlights the importance of building the foundations of Data Science. This book is dedicated to solutions and discussions of measuring uncertainties in data analysis problems.
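
As a generic illustration of the book's theme (not an example taken from the book), one common way to measure the uncertainty of an estimate directly from the data is the bootstrap: resample the observations with replacement and inspect the spread of the statistic across resamples.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=200)        # stand-in for an observed sample

# Bootstrap the sample mean: resample with replacement and recompute the statistic.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5_000)
])
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {data.mean():.3f}, 95% bootstrap CI = ({lower:.3f}, {upper:.3f})")
```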

    Leading Towards Voice and Innovation: The Role of Psychological Contract

    Background: Empirical evidence generally suggests that psychological contract breach (PCB) leads to negative outcomes. However, some literature argues that, occasionally, PCB leads to positive outcomes. Aim: To empirically determine when these positive outcomes occur, focusing on the roles of psychological contract (PC) and leadership style (LS), and on outcomes such as employee voice (EV) and innovative work behaviour (IWB). Method: A cross-sectional survey design was adopted, using reputable questionnaires on PC, PCB, EV, IWB, and leadership styles. Correlation analyses were used to test direct links within the model, while regression analyses were used to test for moderation effects. Results: Data with acceptable psychometric properties were collected from 11 organisations (N=620). The results revealed that PCB does not lead to substantial changes in IWB. PCB correlated positively with prohibitive EV, but did not influence promotive EV, which was a significant driver of IWB. Leadership styles were weak predictors of EV and IWB, and LS only partially moderated the PCB-EV relationship. Conclusion: PCB did not lead to positive outcomes, nor did LS influence the relationships between PCB and EV or IWB. Further, LS only partially influenced the relationships between variables, and not in a manner which positively influences IWB.
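
A minimal sketch of the moderation test described above: regress an outcome on PCB, LS, and their interaction; a significant interaction term indicates that LS moderates the PCB-outcome relationship. The variable names and simulated data are illustrative only, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 620
df = pd.DataFrame({
    "pcb": rng.normal(size=n),    # psychological contract breach
    "ls": rng.normal(size=n),     # leadership style score
})
# Simulated employee-voice outcome with a small interaction effect.
df["voice"] = (0.3 * df["pcb"] + 0.2 * df["ls"]
               + 0.1 * df["pcb"] * df["ls"] + rng.normal(size=n))

# "pcb * ls" expands to pcb + ls + pcb:ls; the pcb:ls coefficient is the moderation test.
model = smf.ols("voice ~ pcb * ls", data=df).fit()
print(model.summary().tables[1])
```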

    13th International Conference on Modeling, Optimization and Simulation - MOSIM 2020

    Organizing committee: Université Internationale d’Agadir – Agadir (Morocco); Laboratoire Conception Fabrication Commande – Metz (France).
    Session RS-1 “Simulation and Optimization”
    Session RS-2 “Demand-Driven Material Requirements Planning”
    Session RS-3 “Model-Based System Engineering”
    Session RS-4 “Operations Research in Production Management”
    Session RS-5 “Material and Resource Planning / Production Planning”
    Session RS-6 “Industrial Maintenance”
    Session RS-7 “Industrial Case Studies”
    Session RS-8 “Big Data / Data Analytics”
    Session RS-9 “Transportation System Management”
    Session RS-10 “Circular Economy / Sustainable Development”
    Session RS-11 “Supply Chain Design and Management”
    Session SP-1 “Artificial Intelligence & Data Analytics in Manufacturing 4.0”
    Session SP-2 “Risk Management in Logistics”
    Session SP-3 “Risk Management and Performance Assessment”
    Session SP-4 “4.0 Key Performance Indicators and Decision-Making Dynamics”
    Session SP-5 “Marine Logistics”
    Session SP-6 “Territory and Logistics: A Complex System”
    Session SP-7 “Recent Advances and Fuzzy-Logic Applications in Sustainable Manufacturing and Logistics”
    Session SP-8 “Health Care Management”
    Session SP-9 “Organizational Engineering and Management of Business Continuity of Healthcare Systems in the Era of Numerical Society Transformation”
    Session SP-10 “Production Planning and Control for Industry 4.0”
    Session SP-11 “Production System Optimization in 4.0 Context Using Continuous Improvement”
    Session SP-12 “Challenges for the Design of Cyber-Physical Production Systems”
    Session SP-13 “Smart Manufacturing and Sustainable Development”
    Session SP-14 “The Human in the Factory of the Future”
    Session SP-15 “Scheduling and Forecasting for Resilient Supply Chains”

    Energy Efficiency

    Energy efficiency is finally a common-sense term. Nowadays almost everyone knows that using energy more efficiently saves money, reduces the emissions of greenhouse gases and lowers dependence on imported fossil fuels. We are living in a fossil age at the peak of its strength. Competition for securing the resources that fuel economic development is increasing; the price of fuels will rise while their availability gradually declines. Small nations will be the first to suffer if caught unprepared in the midst of the struggle for resources among the large players. This is where energy efficiency has the potential to lead toward the natural next step - the transition away from imported fossil fuels! Someone said that the only thing more harmful than fossil fuel is fossilized thinking. It is our sincere hope that some of the chapters in this book will influence you to take a fresh look at the transition to a low-carbon economy and the role that energy efficiency can play in that process.

    A multi-armed bandit approach for enhancing test case prioritization in continuous integration environments

    Advisor: Silvia Regina Vergilio. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 10/12/2021. Includes references. Area of concentration: Computer Science.
Abstract: Continuous Integration (CI) is a practice commonly and widely adopted in industry to allow frequent integration of software changes, making software evolution faster and more cost-effective. In CI environments, Regression Testing (RT) is fundamental to ensure that changes have not adversely affected existing features of the system. However, RT is an expensive task. To reduce RT costs, the use of Test Case Prioritization (TCP) techniques plays an important role. These techniques attempt to identify the test case order that maximizes specific goals, such as early fault detection. Recently, many studies on TCP in CI environments (TCPCI) have arisen, but few pieces of work consider CI particularities, such as the time constraint and test case volatility, that is, they do not consider the dynamic environment of the software life-cycle in which new test cases can be added or removed (discontinued) over time. Test case volatility is a characteristic related to the Exploration versus Exploitation (EvE) dilemma. To solve such a dilemma, an approach needs to balance: i) the diversity of the test suite; and ii) the quantity of new test cases and of test cases that are error-prone or that have high fault-detection capabilities. To deal with this, most approaches use, besides the failure history, other measures that rely on code instrumentation or require additional information, such as test coverage. However, keeping this information updated can be difficult and time-consuming, and is not scalable given the test budget of CI environments. In this context, and to properly deal with the TCPCI problem, this work presents an approach based on Multi-Armed Bandits (MAB) called COLEMAN (Combinatorial VOlatiLE Multi-Armed BANdiT). MAB problems are a class of sequential decision problems that are intensively studied for solving the EvE dilemma. The TCPCI problem falls into the category of volatile and combinatorial MAB, because multiple arms (test cases) need to be selected, and they are added or removed over the cycles. COLEMAN was evaluated on different real-world software systems, time budgets, reward functions, and MAB policies, against different approaches from the literature, and also in the Highly-Configurable Software (HCS) context. Different quality indicators were used to encompass different perspectives such as fault detection effectiveness (with and without cost consideration), early fault detection, test time reduction, prioritization time, and accuracy. The outcomes show that COLEMAN is promising and endorse its applicability to the TCPCI problem. COLEMAN outperforms RETECS, a state-of-the-art approach based on Reinforcement Learning, and stands out mainly regarding fault detection effectiveness (in ~82% of the cases) and early fault detection (in 100%). COLEMAN spends a negligible amount of time, less than one second per execution, and is more stable than RETECS, that is, it adapts better to deal with peaks of faults. When compared with a search-based approach, COLEMAN provides near-optimal solutions in ~90% of the cases, and in comparison with a deterministic approach, it provides reasonable solutions in 92% of the cases. Thus, the main contribution of this work is to provide an efficient and efficacious MAB-based approach for the TCPCI problem.
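
A simplified sketch of the volatile multi-armed-bandit framing described above (not the COLEMAN implementation itself): each test case is an arm, arms may appear or disappear between CI cycles, and an epsilon-greedy policy orders the suite by an estimated failure-revealing value that is updated from observed verdicts.

```python
import random
from collections import defaultdict

class VolatileBanditPrioritizer:
    def __init__(self, epsilon: float = 0.2):
        self.epsilon = epsilon
        self.value = defaultdict(float)   # estimated reward per test case (arm)
        self.pulls = defaultdict(int)

    def prioritize(self, test_cases):
        """Order the current cycle's test cases; new or randomly chosen tests are explored."""
        def key(tc):
            if self.pulls[tc] == 0 or random.random() < self.epsilon:
                return 1.0 + random.random()   # exploration: front-load unknown/random picks
            return self.value[tc]              # exploitation: use the learned estimate
        return sorted(test_cases, key=key, reverse=True)

    def update(self, verdicts):
        """verdicts maps test case -> 1.0 if it failed (revealed a fault), else 0.0."""
        for tc, reward in verdicts.items():
            self.pulls[tc] += 1
            self.value[tc] += (reward - self.value[tc]) / self.pulls[tc]

# One CI cycle: prioritize the available tests, run them, then feed rewards back.
bandit = VolatileBanditPrioritizer()
suite = ["t1", "t2", "t3"]                               # test cases available this cycle
order = bandit.prioritize(suite)
bandit.update({tc: float(tc == "t2") for tc in order})   # pretend only t2 failed
print(order, dict(bandit.value))
```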