
    Automated testsystem of COGNISION headset for cognitive diagnosis.

    More than 15 million Americans suffer from a chronic cognitive disability. Researchers have been exploring many different quantitative measures, such as event-related potentials (ERP), electroencephalography (EEG), magnetoencephalography (MEG), and brain volumetry, to accurately and repeatably diagnose patients suffering from debilitating cognitive disorders. More than a million cases are diagnosed every year, with many of those patients being misdiagnosed as a result of inadequate diagnostic and quality-control tools. As a result, the medical device industry has been actively developing alternative diagnostic techniques that implement one or more quantitative measures to improve diagnosis. For example, Neuronetrix (Louisville, KY) developed COGNISION™, which utilizes both ERP and EEG data to assess the cognitive ability of patients. The system has been shown to be a powerful tool; however, its commercial success would be limited without a fast and effective method of testing and validating the product. Thus, the goal of this study is to develop, test, and validate a new “Testset” system for accurately and repeatably validating the COGNISION™ Headset. A Testset was constructed that comprises a software control component, designed in the LabVIEW G programming language and running on a computer terminal, a Data Acquisition (DAQ) card, and a switching board. The Testset is connected to a series of testing fixtures for interfacing with the various components of the Headset. The Testset evaluates the Headset at multiple stages of the manufacturing process, either as a whole system or by its individual components. At the first stage of production, the Electrode Strings, amplifier board (Uberyoke), and Headset Control Unit (HCU) are tested and operated as individual printed circuit boards (PCBs). These components are tested again as mid-level assemblies and/or at the finished-product stage as a complete autonomous system, with the Testset monitoring the process. All tests are automated: only a few parameters need to be defined before a test is initiated by a single button press, after which the selected test sequences for that particular component or system run to completion in a few minutes. A total of 2 Testsets were constructed and used to validate 10 Headsets. An automated software system was designed to control the Testset. The Testset demonstrated the ability to test and validate 100% of the individual components and completely assembled Headsets. The Testsets were found to be within 5% of the manufacturing specifications. Subsequently, the Automated Testset developed in this study enabled the manufacturer to provide a comprehensive report on the calibration parameters of the Headset, which is retained on file for each unit sold. Statistical analysis of the automated test system shows that the two Testsets yielded results that were reliable and consistent with each other.

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning-engine practice and of the strategies for integrating Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies both Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project, and although it has been written in close contact with the requirements of the project, it provides an overview wide enough to constitute a state of the art in integration strategies between CBR and MBR technologies.
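    The CBR "retrieve" step at the heart of such a decision support system can be sketched as a nearest-neighbour lookup over past incidents. The case schema, feature weights, and similarity function below are illustrative assumptions, not RIMSAT's actual design.

```python
# Minimal sketch of CBR retrieval for the emergencies domain: given a new
# incident, return the most similar stored case by weighted feature distance.
# Case fields, weights, and plans are invented for illustration.

CASE_BASE = [
    {"type": "fire", "severity": 3, "casualties": 0, "plan": "evacuate-and-contain"},
    {"type": "fire", "severity": 8, "casualties": 5, "plan": "mass-evacuation"},
    {"type": "flood", "severity": 6, "casualties": 2, "plan": "sandbag-and-rescue"},
]

WEIGHTS = {"severity": 1.0, "casualties": 2.0}

def similarity(query, case):
    """Higher is more similar; an incident-type mismatch disqualifies the case."""
    if query["type"] != case["type"]:
        return float("-inf")
    distance = sum(w * abs(query[f] - case[f]) for f, w in WEIGHTS.items())
    return -distance

def retrieve(query, cases=CASE_BASE):
    """CBR retrieval: best-matching past case, whose plan is then reused/adapted."""
    return max(cases, key=lambda c: similarity(query, c))
```

    An MBR component would complement this lookup by checking the retrieved plan against a model of the incident's dynamics before it is reused.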

    New perspectives in catheter ablation for atrial fibrillation: Towards a better treatment to reach better outcomes

    The overall aim of the studies presented in this thesis is to elucidate whether there is still room for improvement in the field of catheter ablation for AF, either paroxysmal or persistent, and the following chapters will guide the reader along a virtual path that addresses this issue.

    High-Confidence Medical Device Software Development

    The design of bug-free and safe medical device software is challenging, especially in complex implantable devices. This is due to the device's closed-loop interaction with the patient's organs, which are stochastic physical environments. The life-critical nature and the lack of existing industry standards to enforce software validation make this an ideal domain for exploring design automation challenges for integrated functional and formal modeling with closed-loop analysis. The primary goal of high-confidence medical device software is to guarantee that the device will never drive the patient into an unsafe condition, even though we do not have a complete understanding of the physiological plant. There are two major differences between modeling physiology and modeling man-made systems: first, physiology is much more complex and less well understood than man-made systems like cars and airplanes, and spans several scales from the molecular to the entire human body. Secondly, the variability between humans is orders of magnitude larger than that between two cars coming off the assembly line. Using the implantable cardiac pacemaker as an example of a closed-loop device, and the heart as the organ to be modeled, we present several of the challenges and early results in model-based device validation. We begin with a detailed timed-automata model of the pacemaker, based on the specifications and algorithm descriptions from Boston Scientific. For closed-loop evaluation, a real-time Virtual Heart Model (VHM) has been developed to model the electrophysiological operation of functioning and malfunctioning (i.e., during arrhythmia) hearts. By extracting the timing properties of the heart and the pacemaker device, we present a methodology for constructing timed-automata models for formal model checking and functional testing of the closed-loop system. The VHM's capability of generating clinically relevant responses has been validated for a variety of common arrhythmias.
    Based on a set of requirements, we describe a framework of Abstraction Trees that allows for interactive and physiologically relevant closed-loop model checking and testing of basic pacemaker operations such as maintaining the heart rate and atrial-ventricular synchrony, and of complex conditions such as avoiding pacemaker-mediated tachycardia. Through automatic model translation of abstract models to simulation-based testing and code generation for platform-level testing, this model-based design approach ensures that the closed-loop safety properties are retained through the design toolchain and facilitates the development of verified software from verified models. This system is a step toward a validation and testing approach for medical cyber-physical systems with the patient in the loop.
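    The kind of timing property such a timed-automata model formalises can be illustrated with a toy VVI-style rule: deliver a pace whenever no intrinsic ventricular beat is sensed within the lower rate interval (LRI), with every sense or pace resetting the timer. The interval value and event representation are assumptions for illustration, not the Boston Scientific algorithm.

```python
# Illustrative sketch of a pacemaker timing rule of the kind captured by
# timed automata: VVI-style lower-rate pacing. Parameter values are assumed.

LRI_MS = 1000  # lower rate interval: pace if the heart falls below 60 bpm

def vvi_pacing(sense_times_ms, horizon_ms, lri_ms=LRI_MS):
    """Given sensed intrinsic beat times (ms), return the pace times the
    device would emit over [0, horizon_ms). Each sense or pace resets the
    LRI timer; a sense arriving before the deadline inhibits the pace."""
    paces = []
    last_event = 0
    pending = sorted(t for t in sense_times_ms if t < horizon_ms)
    while True:
        deadline = last_event + lri_ms          # LRI timer expiry
        if pending and pending[0] <= deadline:
            last_event = pending.pop(0)         # intrinsic beat: inhibit pacing
        elif deadline < horizon_ms:
            paces.append(deadline)              # timer expired: deliver pace
            last_event = deadline
        else:
            return paces
```

    Model checking of the real closed-loop system verifies properties over exactly this style of timer-and-event interaction, but against the VHM rather than a fixed sense schedule.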

    An approach of ontology and knowledge base for railway maintenance

    Maintenance methods have become automated and innovative, especially with the transition to Maintenance 4.0. However, societal upheavals such as the coronavirus disease of 2019 (COVID-19) and the war in Ukraine have caused significant departures of maintenance experts, resulting in the loss of enormous know-how. In this work, we propose a solution that explores the knowledge and expertise of these experts for the purpose of sharing and conservation. In this perspective, we have built a knowledge base grounded in experience and feedback. The proposed method is illustrated with a case study based on the socialization, externalization, combination, and internalization (SECI) method to optimally capture the explicit and tacit knowledge of each technician, following its theoretical basis, the model of Nonaka and Takeuchi.
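    The externalisation step of SECI, turning a technician's tacit know-how into explicit, queryable records, can be sketched with a minimal experience-feedback store. The record schema and example entries are illustrative assumptions, not the paper's actual ontology.

```python
# Minimal sketch of an experience/feedback knowledge base for railway
# maintenance. The schema (expert, component, symptom, diagnosis, remedy)
# is an illustrative assumption.

knowledge_base = []

def capture(expert, component, symptom, diagnosis, remedy):
    """Externalisation (SECI): record a technician's tacit know-how as an
    explicit, shareable entry."""
    entry = {"expert": expert, "component": component,
             "symptom": symptom, "diagnosis": diagnosis, "remedy": remedy}
    knowledge_base.append(entry)
    return entry

def lookup(component, symptom_keyword):
    """Combination/internalisation: retrieve shared know-how by component
    and a keyword appearing in the recorded symptom."""
    return [e for e in knowledge_base
            if e["component"] == component and symptom_keyword in e["symptom"]]
```

    A full ontology would add typed relations between components, failure modes, and remedies, so that lookups can traverse the asset hierarchy rather than matching flat fields.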

    The doctoral research abstracts. Vol:7 2015 / Institute of Graduate Studies, UiTM

    Foreword: The Seventh Issue of The Doctoral Research Abstracts captures the novelty of 65 doctorates receiving their scrolls at UiTM’s 82nd Convocation in the fields of Science and Technology, Business and Administration, and Social Science and Humanities. To the recipients I would like to say that you have most certainly done UiTM proud by journeying through the scholastic path with its endless challenges and impediments, and persevering right till the very end. This convocation should not be regarded as the end of your highest scholarly achievement and contribution to the body of knowledge, but rather as the beginning of embarking on high-impact innovative research for the community and country, drawing on knowledge gained during this academic journey. As alumni of UiTM, we will always hold you dear to our hearts. A new ‘handshake’ is about to take place between you and UiTM as joint collaborators in future research undertakings. I envision a strong research pact between you as our alumni and UiTM in breaking the frontier of knowledge through research. I wish you all the best in your endeavour, and may I offer my congratulations to all the graduands. ‘UiTM sentiasa dihati ku’ / Tan Sri Dato’ Sri Prof Ir Dr Sahol Hamid Abu Bakar, FASc, PEng, Vice Chancellor, Universiti Teknologi MARA

    Sustainable Fault-handling Of Reconfigurable Logic Using Throughput-driven Assessment

    A sustainable Evolvable Hardware (EH) system is developed for SRAM-based reconfigurable Field Programmable Gate Arrays (FPGAs) using outlier detection and group testing-based assessment principles. The fault diagnosis methods presented herein leverage throughput-driven, relative fitness assessment to maintain resource viability autonomously. Group testing-based techniques are developed for adaptive, input-driven fault isolation in FPGAs, without the need for exhaustive testing or coding-based evaluation. The techniques keep the device operational and, when possible, generate validated outputs throughout the repair process. Adaptive fault isolation methods based on discrepancy-enabled pairwise comparisons are developed. By observing the discrepancy characteristics of multiple Concurrent Error Detection (CED) configurations, a method for robust detection of faults is developed based on pairwise parallel evaluation using Discrepancy Mirror logic. The results from the analytical FPGA model are demonstrated via a self-healing, self-organizing evolvable hardware system. The reconfigurability of the SRAM-based FPGA is leveraged to identify logic resource faults, which are successively excluded by group testing using alternate device configurations. This simplifies the system architect's role to the definition of functionality, using a high-level Hardware Description Language (HDL), and of a system-level performance-versus-availability operating point. System availability, throughput, and mean time to isolate faults are monitored and maintained using an Observer-Controller model. Results are demonstrated using a Data Encryption Standard (DES) core that occupies approximately 305 FPGA slices on a Xilinx Virtex-II Pro FPGA. With a single simulated stuck-at fault, the system identifies a completely validated replacement configuration within three to five positive tests. The approach demonstrates a readily implemented yet robust organic hardware application framework featuring a high degree of autonomous self-control.
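    The group-testing idea, alternate configurations exercise different resource subsets, and a subset whose configuration shows a CED discrepancy must contain the fault, can be sketched as a halving search. The resource model and discrepancy oracle below are simplifications for illustration, not the dissertation's actual isolation procedure.

```python
# Illustrative sketch of group testing-based fault isolation: halve the
# suspect set on each discrepant (positive) test, isolating a single
# stuck-at fault among N resources in about log2(N) tests.

def isolate_fault(resources, is_discrepant):
    """is_discrepant(subset) -> True if a configuration built from `subset`
    disagrees with its duplicate (a CED discrepancy). Assumes exactly one
    faulty resource is present."""
    tests = 0
    suspects = list(resources)
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        tests += 1
        # Keep the half that still exhibits a discrepancy
        suspects = half if is_discrepant(half) else suspects[len(half):]
    return suspects[0], tests
```

    The real system additionally keeps the device in service during this search by routing functionality through configurations that avoid the current suspect set.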

    An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

    To date, the majority of the research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research advances each of the respective fields, the end result is a variable grab bag of techniques that address each problem independently. Also, the new field of prognostics is lacking in the sense that few methods have been proposed that produce estimates of the remaining useful life (RUL) of a device or can be realistically applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can be used to predict the RUL of a device whether or not it has a well-defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and a sequential probability ratio test (SPRT) detector produced detection rates comparable to those of a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and an SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitors respectively. It was also found that the NFIS monitor produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers that were trained to diagnose different fault classes. The NFIS diagnoser was shown to significantly outperform the kNN diagnoser, with overall accuracies of 96% vs. 89% respectively. Finally, the PACE model implementing the NFIS was used to predict the RUL for different failure modes. The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2 to 11.4 hours, with 95% confidence intervals (CIs) from 0.67 to 32.02 hours, which is significantly better than the population-based prognoser estimates, with errors of ~45 hours and 95% CIs of ~162 hours.
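    The SPRT detector paired with the predictors above is a standard sequential test on model residuals. Below is a generic sketch for a shift in the mean of Gaussian residuals; the thresholds and distribution parameters are illustrative assumptions, not the values used in this work.

```python
# Minimal sketch of a sequential probability ratio test (SPRT) on residuals
# (prediction minus observation): H0 mean = 0 (healthy) vs H1 mean = mu1
# (fault), with assumed error rates alpha and beta.

import math

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Consume residuals one at a time; stop as soon as the cumulative
    log-likelihood ratio crosses a Wald boundary."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1 ("fault")
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0 ("healthy")
    llr = 0.0
    for r in residuals:
        # Log-likelihood ratio increment for N(mu1, sigma) vs N(0, sigma)
        llr += (mu1 / sigma**2) * (r - mu1 / 2.0)
        if llr >= upper:
            return "fault"
        if llr <= lower:
            return "healthy"
    return "undecided"
```

    A key property, used when comparing the NFIS and AAKR monitors, is that a better predictor yields smaller residuals under healthy operation and hence fewer false alarms at the same alpha/beta settings.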

    Systematic review of energy theft practices and autonomous detection through artificial intelligence methods

    Energy theft poses a significant challenge for all parties involved in energy distribution, and its detection is crucial for maintaining stable and financially sustainable energy grids. One potential solution for detecting energy theft is the use of artificial intelligence (AI) methods. This systematic review article provides an overview of the various methods used by malicious users to steal energy, along with a discussion of the challenges associated with implementing a generalized AI solution for energy theft detection. In this work, we analyze the benefits and limitations of AI methods, including machine learning, deep learning, and neural networks, relate them to the specific types of theft, and analyze the problems that arise with data collection. The article proposes key aspects of generalized AI solutions for energy theft detection, such as the use of smart meters and the integration of AI algorithms with existing utility systems. Overall, we highlight the potential of AI methods to detect various types of energy theft and emphasize the need for further research to develop more effective and generalized detection systems.
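    One of the simplest detection signals surveyed in this literature is a meter reporting consumption far below its own historical baseline, a common signature of tampering. The sketch below is a plain baseline rule, not a learned model, meant only to illustrate the smart-meter feature that AI detectors typically build on; the threshold and data format are assumptions.

```python
# Hedged sketch of a baseline-drop rule on smart-meter data: flag meters
# whose current reading falls far below their own historical average.
# The 50% drop threshold and data layout are illustrative assumptions.

def theft_suspects(history, current, drop_ratio=0.5):
    """history: {meter_id: [past monthly kWh readings]}
    current: {meter_id: latest monthly kWh}
    Returns meter IDs reporting less than drop_ratio of their own mean."""
    suspects = []
    for meter, past in history.items():
        baseline = sum(past) / len(past)
        if baseline > 0 and current.get(meter, 0.0) < drop_ratio * baseline:
            suspects.append(meter)
    return sorted(suspects)
```

    An ML-based detector would replace the fixed threshold with a model trained on labelled consumption profiles, which is precisely where the data-collection problems discussed above become limiting.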