
    Dispatcher3 D5.1 - Verification and validation plan

    In this deliverable, we present a verification and validation plan designed to carry out all necessary activities throughout Dispatcher3 prototype development. Given the nature of the project, the deliverable adopts a data-centric approach to machine learning that treats the training and testing of models as an important production asset, together with the algorithms and infrastructure used throughout development. The proposed framework supports incremental development of the prototype based on an iterative development paradigm. The core of the verification and validation approach is structured around three inter-related phases: data acquisition and preparation, predictive model development, and advisory generator model development. These phases are combined iteratively and in close coordination with experts from the consortium and the Advisory Board. For each phase, a set of verification and validation activities will be performed to maximise the benefits of Dispatcher3. The methodological framework proposed in this deliverable therefore addresses the specificities of verification and validation in the machine learning domain, which differs from the canonical approach typically based on standardised procedures, as well as in the domain of the final prospective model. This means that verification and validation of the machine learning models is also treated as part of model development itself, since tailoring and enhancing the models relies heavily on verification and validation results. The deliverable also proposes a definition of preliminary case studies that ensures flexibility and tractability in their selection throughout machine learning model development.
The deliverable finally details the organisation and schedule of the internal and external meetings, workshops and dedicated activities, along with the specification of the questionnaires, flow-type diagrams and other tools and platforms that aim to facilitate the validation assessments, with special focus on the predictive and prospective models.

    Use of supervised machine learning for GNSS signal spoofing detection with validation on real-world meaconing and spoofing data: Part I

    The vulnerability of Global Navigation Satellite System (GNSS) open service signals to spoofing and meaconing poses a risk to users of safety-of-life applications. This risk consists of manipulated GNSS data being used to generate a position-velocity-timing solution without the user's system being aware, resulting in hazardous misleading information being presented and signal integrity deteriorating without an alarm being triggered. Among the many spoofing detection and mitigation techniques proposed for different stages of signal processing, we present a method for cross-correlation monitoring of multiple, statistically significant GNSS observables and measurements that serve as input for supervised machine learning detection of potentially spoofed or meaconed GNSS signals. The results of two experiments are presented: laboratory-generated spoofing signals are used for training and internal verification, while two different real-world spoofing and meaconing datasets are used to validate the supervised machine learning algorithms for detecting GNSS spoofing and meaconing.
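The cross-correlation monitoring described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the observable names (`cn0`, `doppler`) are hypothetical placeholders, and a nearest-centroid rule stands in for the paper's supervised machine learning detector.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_features(observables):
    """Flatten the pairwise cross-correlations of the monitored
    observables into a feature vector for the detector."""
    keys = sorted(observables)
    return [pearson(observables[a], observables[b])
            for i, a in enumerate(keys) for b in keys[i + 1:]]

def nearest_centroid(features, centroids):
    """Minimal supervised detector (stand-in): assign the label, e.g.
    'nominal' or 'spoofed', whose training centroid lies closest."""
    d2 = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda label: d2(features, centroids[label]))
```

A spoofing attack that breaks the usual statistical relationship between observables would shift these correlation features away from the nominal centroid.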

    An Optimized Machine Learning and Deep Learning Framework for Facial and Masked Facial Recognition

    In this study, we aimed to find an optimized approach to improving facial and masked facial recognition using machine learning and deep learning techniques. Prior studies only used a single machine learning model for classification and did not report optimal parameter values. In contrast, we utilized a grid search with hyperparameter tuning and nested cross-validation to achieve better results during the verification phase. We performed experiments on a large dataset of facial images with and without masks. Our findings showed that the SVM model with hyperparameter tuning had the highest accuracy compared to other models, achieving a recognition accuracy of 0.99912. The precision values for recognition without masks and with masks were 0.99925 and 0.98417, respectively. We tested our approach in real-life scenarios and found that it accurately identified masked individuals through facial recognition. Furthermore, our study stands out from others as it incorporates hyperparameter tuning and nested cross-validation during the verification phase to enhance the model's performance, generalization, and robustness while optimizing data utilization. Our optimized approach has potential implications for improving security systems in various domains, including public safety and healthcare. DOI: 10.28991/ESJ-2023-07-04-010
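The grid search with nested cross-validation described above can be sketched in outline. This is a simplified illustration, not the study's code: a toy one-hyperparameter threshold model stands in for the SVM, and the grid holds candidate threshold values. The key structure is that the inner loop tunes the hyperparameter on training folds only, while the outer loop scores the tuned model on held-out data.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and deal the indices into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def train_threshold(threshold):
    """Toy one-hyperparameter 'model': predict class 1 above the threshold."""
    return lambda x: int(x > threshold)

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def nested_cv(X, y, grid, outer_k=5, inner_k=3):
    """Nested cross-validation: hyperparameters are selected inside each
    outer training split, so the outer score is an unbiased estimate of
    generalization for the whole tuning procedure."""
    outer_scores = []
    for test_idx in k_fold_indices(len(X), outer_k):
        train_idx = [i for i in range(len(X)) if i not in test_idx]
        Xtr, ytr = [X[i] for i in train_idx], [y[i] for i in train_idx]

        def inner_score(th):
            # Validate this grid value on the inner folds of the training data.
            scores = []
            for val_idx in k_fold_indices(len(Xtr), inner_k, seed=1):
                model = train_threshold(th)  # toy model needs no fitting step
                scores.append(accuracy(model,
                                       [Xtr[i] for i in val_idx],
                                       [ytr[i] for i in val_idx]))
            return sum(scores) / len(scores)

        best = max(grid, key=inner_score)        # the grid search
        model = train_threshold(best)
        outer_scores.append(accuracy(model,
                                     [X[i] for i in test_idx],
                                     [y[i] for i in test_idx]))
    return sum(outer_scores) / len(outer_scores)
```

In practice the same structure is obtained with a library grid search placed inside an outer cross-validation loop; the toy model here only keeps the sketch self-contained.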

    A Framework for the Verification and Validation of Artificial Intelligence Machine Learning Systems

    An effective verification and validation (V&V) process framework for the white-box and black-box testing of artificial intelligence (AI) machine learning (ML) systems is not readily available. This research uses grounded theory to develop a framework that leads to the most effective and informative white-box and black-box methods for the V&V of AI ML systems. Verification of the system ensures that the system adheres to the requirements and specifications developed and given by the major stakeholders, while validation confirms that the system properly performs with representative users in the intended environment and does not perform in an unexpected manner. Beginning with definitions, descriptions, and examples of ML processes and systems, the research results identify a clear and general process to effectively test these systems. The developed framework ensures the most productive and accurate testing results. Formerly, and occasionally still, the system definition and requirements exist in scattered documents that make it difficult to integrate, trace, and test through V&V. Modern system engineers along with system developers and stakeholders collaborate to produce a full system model using model-based systems engineering (MBSE). MBSE employs a Unified Modeling Language (UML) or System Modeling Language (SysML) representation of the system and its requirements that readily passes from each stakeholder for system information and additional input. The comprehensive and detailed MBSE model allows for direct traceability to the system requirements. To thoroughly test a ML system, one performs either white-box or black-box testing or both. Black-box testing is a testing method in which the internal model structure, design, and implementation of the system under test is unknown to the test engineer. Testers and analysts are simply looking at performance of the system given input and output. 
White-box testing is a testing method in which the internal model structure, design, and implementation of the system under test is known to the test engineer. When possible, test engineers and analysts perform both black-box and white-box testing. However, sometimes testers lack authorization to access the internal structure of the system. The researcher captures this decision in the ML framework. No two ML systems are exactly alike, and therefore the testing of each system must be customized to some degree. Even though there is customization, an effective process exists. This research includes some specialized methods, based on grounded theory, to use in the testing of the internal structure and performance. Through the study and organization of proven methods, this research develops an effective ML V&V framework. Systems engineers and analysts are able to simply apply the framework for various white-box and black-box V&V testing circumstances.
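The black-box/white-box distinction can be illustrated with a toy system under test. Everything here is hypothetical, not the framework's own tests: a linear scorer stands in for the ML system, the black-box check uses only inputs and outputs (a metamorphic scale-invariance property), and the white-box check inspects the internal weights directly.

```python
WEIGHTS = [0.8, -0.5]   # hypothetical internal parameters of the system under test

def classify(features):
    """System under test: a stand-in linear scorer, not any real ML model."""
    score = sum(w * f for w, f in zip(WEIGHTS, features))
    return 1 if score > 0 else 0

def black_box_test(predict, inputs):
    """Black-box V&V: only input/output behaviour is observable. Check the
    metamorphic property that scaling an input by a positive constant must
    not flip a linear scorer's decision."""
    return all(predict(x) == predict([2.0 * v for v in x]) for x in inputs)

def white_box_test(weights, bound=10.0):
    """White-box V&V: the internal structure is accessible, so the test can
    inspect parameters directly, e.g. bound every weight's magnitude."""
    return all(abs(w) <= bound for w in weights)
```

When testers lack authorization to read the internals, only the first style of check is available; the framework's white-box/black-box decision point corresponds to which of these two functions a test engineer is allowed to write.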

    The Implementation of Machine Learning in Lithofacies Classification using Multi Well Logs Data

    Lithofacies classification is a process to identify rock lithology by indirect measurements. Usually, the classification is performed manually by an experienced geoscientist. This research presents an automated lithofacies classification using a machine learning method to leverage computational power and shorten the time consumed by the lithofacies classification process. The support vector machine (SVM) algorithm has been applied successfully to the Damar field, Indonesia. The machine learning input is a set of well-log data, e.g., gamma-ray, density, resistivity, neutron porosity, and effective porosity logs. Machine learning can classify seven lithofacies and depositional environments, including channel, bar sand, beach sand, carbonate, volcanic, and shale. The classification accuracy in the verification phase with trained lithofacies class data reached more than 90%, while the accuracy in the validation phase with data beyond the training set reached 65%. The classified lithofacies can then be used as input for describing lateral and vertical rock distribution patterns.
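The log-to-facies mapping can be sketched minimally as follows, assuming a hypothetical feature vector of gamma-ray, bulk-density, and neutron-porosity readings per depth sample; a nearest-centroid rule stands in for the SVM actually used in the study, and the example values are invented for illustration.

```python
def train_centroids(samples):
    """Average the labeled log vectors into one centroid per facies
    (a minimal stand-in for training the study's SVM classifier)."""
    sums, counts = {}, {}
    for features, facies in samples:
        acc = sums.setdefault(facies, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[facies] = counts.get(facies, 0) + 1
    return {f: [v / counts[f] for v in vec] for f, vec in sums.items()}

def classify_log(features, centroids):
    """Assign the facies whose centroid is nearest in log-feature space."""
    d2 = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda f: d2(features, centroids[f]))

# Hypothetical training samples: [gamma-ray (API), density (g/cc), neutron porosity]
training = [([120.0, 2.60, 0.30], "shale"), ([110.0, 2.55, 0.28], "shale"),
            ([40.0, 2.30, 0.15], "bar sand"), ([50.0, 2.35, 0.18], "bar sand")]
```

Applied depth by depth, such a classifier turns a suite of logs into a facies column, which is what feeds the lateral and vertical distribution patterns mentioned above.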

    Towards Scalable Characterization of Noisy, Intermediate-Scale Quantum Information Processors

    In recent years, quantum information processors (QIPs) have grown from one or two qubits to tens of qubits. As a result, characterizing QIPs – measuring how well they work, and how they fail – has become much more challenging. The obstacles to characterizing today’s QIPs will grow even more difficult as QIPs grow from tens of qubits to hundreds, and enter what has been called the “noisy, intermediate-scale quantum” (NISQ) era. This thesis develops methods based on advanced statistics and machine learning algorithms to address the difficulties of “quantum characterization, validation, and verification” (QCVV) of NISQ processors. In the first part of this thesis, I use statistical model selection to develop techniques for choosing between several models of a QIP’s behavior. In the second part, I deploy machine learning algorithms to develop a new QCVV technique and to do experiment design. These investigations help lay a foundation for extending QCVV to characterize the next generation of NISQ processors.
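The statistical model selection used in the first part can be illustrated with a generic, hypothetical example: using the Akaike information criterion (AIC) to choose between a zero-parameter model (the device meets a claimed error rate) and a one-parameter model (the error rate is refit to the observed counts). This shows only the selection principle, not the thesis's actual QCVV techniques, and the binomial error-count model is an assumption.

```python
import math

def binom_loglik(k, n, p):
    """Binomial log-likelihood (dropping the constant combinatorial term)
    of observing k error outcomes in n shots under error probability p.
    Requires 0 < p < 1 and 0 < k < n."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

def aic(num_free_params, loglik):
    """Akaike information criterion: penalizes free parameters; lower wins."""
    return 2 * num_free_params - 2 * loglik

def select_error_model(k, n, p_claimed):
    """Compare the claimed-rate model (0 free parameters) against a model
    whose rate is the maximum-likelihood estimate k/n (1 free parameter)."""
    p_hat = k / n
    aic_fixed = aic(0, binom_loglik(k, n, p_claimed))
    aic_free = aic(1, binom_loglik(k, n, p_hat))
    return "claimed" if aic_fixed <= aic_free else "refit"
```

When the observed error fraction matches the claim, AIC prefers the simpler claimed-rate model; a large discrepancy makes the extra parameter worth its penalty, flagging that the device deviates from its specification.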