478 research outputs found

    GA-Based fault diagnosis algorithms for distributed systems

    Distributed systems are becoming increasingly popular because of their applications in fields such as automotive electronics, remote environment control (for example, underwater sensor networks), and K-connected networks. Faults may affect the nodes of a system at any time, so diagnosing faulty nodes is essential to making the system more reliable and efficient. This thesis describes the different types of faults and the system and fault models already established in the literature. Because evolutionary approaches yield better outcomes than probabilistic ones, we have developed genetic-algorithm-based fault diagnosis algorithms that outperform existing fault diagnosis algorithms. The GA-based algorithms handle different types of faults, both permanent and intermittent, in a K-connected system. Simulation results demonstrate that the proposed Genetic Algorithm Based Permanent Fault Diagnosis Algorithm (GAPFDA) and Genetic Algorithm Based Intermittent Fault Diagnosis Algorithm (GAIFDA) decrease the number of messages transferred and the time needed to diagnose the faulty nodes in a K-connected distributed system. The reductions in CPU time and number of steps are due to the application of supervised mutation in the fault diagnosis algorithms. The time and message complexities of GAPFDA are analyzed as O(n*P*K*ng) and O(n*K), respectively, and those of GAIFDA as O(r*n*P*K*ng) and O(r*n*K), where 'n' is the number of nodes, 'P' is the population size, 'K' is the connectivity of the network, 'ng' is the number of generations (steps), and 'r' is the number of rounds.
Along with a fault diagnosis algorithm of O(r*k) complexity for diagnosing transient-leading-to-permanent faults in the actuators of a k-fault-tolerant fly-by-wire (FBW) system, an efficient scheduling algorithm has been developed to schedule the different tasks of an FBW system, where 'r' again denotes the number of rounds. The proposed algorithm for scheduling the task graphs of a multi-rate FBW system demonstrates that maximizing the microcontrollers' execution period reduces the number of microcontrollers needed to perform diagnosis.
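    The syndrome-based GA idea can be sketched in miniature. The following toy Python sketch is an illustrative stand-in, not the thesis's actual GAPFDA: the ring-style K-connected topology, the function names, the size-penalized fitness, and the 0.2 mutation rate are all invented assumptions. It evolves candidate fault sets against a test syndrome, with a "supervised" mutation that flips bits toward agreement with reports from testers the candidate currently deems fault-free.

```python
import random

def make_syndrome(n, k, faulty, rng):
    # each node i tests its k ring successors; a fault-free tester reports
    # the true status of the tested node, a faulty tester reports randomly
    syndrome = {}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if i in faulty:
                syndrome[(i, j)] = rng.randint(0, 1)
            else:
                syndrome[(i, j)] = 1 if j in faulty else 0
    return syndrome

def fitness(candidate, syndrome):
    # number of test results consistent with the candidate fault set:
    # a faulty tester may report anything, a fault-free one must be right
    ok = 0
    for (i, j), r in syndrome.items():
        if candidate[i] or r == candidate[j]:
            ok += 1
    return ok

def ga_diagnose(n, k, syndrome, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    # prefer consistent candidates that blame as few nodes as possible
    score = lambda c: fitness(c, syndrome) - 0.5 * sum(c)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            # "supervised" mutation: flip bits that contradict a report
            # from a tester the child currently considers fault-free
            for (i, j), r in syndrome.items():
                if not child[i] and child[j] != r and rng.random() < 0.2:
                    child[j] = r
            children.append(child)
        pop = elite + children
    best = max(pop, key=score)
    return {i for i, bit in enumerate(best) if bit}
```

The supervised mutation is what distinguishes this from blind bit-flipping: it uses the syndrome itself to guide repairs, which is the mechanism the abstract credits for the reduced step count.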

    Enhancing Adherence to Prescribed Opioids Using a Mobile-Based Application: A Pilot Study of Feasibility in Chronic Non-Cancer Pain

    In this study we present the feasibility of a mobile monitoring and reporting system intended to provide an accurate, unbiased screening tool for systematically analyzing opioid adherence in sickle cell disease patients. In addition, the software simultaneously measures pain. To evaluate the application, the Mobile Applications Rating Scale, a new and validated tool for assessing the quality of mobile health apps in terms of engagement, functionality, aesthetics, information quality, subjective quality, relevance, and overall impact, was administered after usage. A total of 28 patients were recruited to review and test the software in one sitting. The majority of the population found the application relevant to their care. Patients were also asked to report on the completeness of information within the app: the majority (96%) reported the information to be complete, while 4% judged it minimal or overwhelming. The quality of the information as it pertains to sickle cell patients was overwhelmingly reported to be relevant (91.7%); only 8.3% found the application poorly relevant to sickle cell disease. The application's performance was positively rated, as was its ease of use (91.7%). Most participants (85.7%) found the application interesting to use, while 74% found it entertaining. All users found the application's navigation logical and accurate, with consistent and intuitive gestural design. We conclude that surveyed patients believe it is feasible to use a smartphone application specifically targeted at monitoring opioid use and behavior in patients with sickle cell disease (SCD)-associated pain.

    A selected annotated bibliography for spaceborne multiprocessing study

    Bibliography on the application of multiprocessor systems to space missions.

    Design and Evaluation of Online Fault Diagnosis Protocols for Wireless Networks

    Any node in a network, or a component of one, may fail and show undesirable behavior due to physical defects, imperfections, or hardware- and/or software-related glitches. The presence of faulty hosts in a network affects computational efficiency and quality of service (QoS). This calls for the development of efficient fault diagnosis protocols to detect and handle faulty hosts. Fault diagnosis protocols designed for wired networks cannot be carried over directly to wireless networks because of differences in characteristics and requirements. This thesis develops system-level fault diagnosis protocols for wireless networks, particularly Mobile Ad hoc Networks (MANETs) and Wireless Sensor Networks (WSNs), considering faults classified by their persistence (permanent, intermittent, and transient) and node mobility. Based on comparisons of the outcomes of the same tasks (the comparison model), a distributed diagnosis protocol is proposed for static-topology MANETs in which a node needs to respond to only one test request from its neighbors, which reduces the communication complexity of the diagnosis process. A novel approach to handling the more intractable intermittent faults in dynamic-topology MANETs is also discussed. Based on the spatial correlation of sensor measurements, a distributed fault diagnosis protocol is developed for WSNs to classify nodes as fault-free, permanently faulty, or intermittently faulty. Nodes affected by transient faults are often considered fault-free and should not be isolated from the network. With this objective in mind, we have developed a diagnosis algorithm for WSNs that discriminates transient faults from intermittent and permanent faults. After each node determines the status of all its 1-hop neighbors (the local diagnostic view), these views are disseminated among the fault-free nodes to deduce the fault status of all nodes in the network (the global diagnostic view).
A spanning-tree-based dissemination strategy is adopted instead of conventional flooding to lower the communication complexity. Analytically, the proposed protocols are shown to be correct and complete. The protocols are implemented using INET-20111118 (for MANETs) and Castalia-3.2 (for WSNs) on the OMNeT++ 4.2 platform. The obtained simulation results for accuracy and false alarm rate vouch for the feasibility and efficiency of the proposed algorithms over existing landmark protocols.
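    The spanning-tree dissemination step can be illustrated with a small sketch. This toy Python code is not the thesis's protocol: the graph, the view format, and all function names are invented for illustration. It builds a BFS spanning tree, convergecasts the local 1-hop diagnostic views to the root, merges them into a global view, and broadcasts it back down, using 2(n-1) messages rather than the O(n·deg) of flooding.

```python
from collections import deque

def bfs_tree(adj, root):
    # build a spanning tree of a connected network by breadth-first search
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def disseminate(adj, root, local_views):
    # convergecast each node's 1-hop diagnostic view up the tree, merge,
    # then broadcast the global view down: 2*(n-1) messages in total
    parent = bfs_tree(adj, root)
    children = {}
    for v, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(v)
    msgs = 0

    def up(u):
        nonlocal msgs
        view = dict(local_views[u])
        for c in children.get(u, []):
            view.update(up(c))
            msgs += 1               # one child -> parent message
        return view

    global_view = up(root)

    def down(u):
        nonlocal msgs
        for c in children.get(u, []):
            msgs += 1               # one parent -> child message
            down(c)

    down(root)
    return global_view, msgs
```

On a 5-node tree this exchanges exactly 8 messages, against up to n times the average degree for repeated flooding of the same views.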

    Learning with Low-Quality Data: Multi-View Semi-Supervised Learning with Missing Views

    The focus of this thesis is on learning approaches for what we call ``low-quality data'', and in particular data in which only small amounts of labeled target data are available. The first part provides background discussion of low-quality data issues, followed by a preliminary study in this area. The remainder of the thesis focuses on a particular scenario: multi-view semi-supervised learning. Multi-view learning generally refers to learning with data that has multiple natural views, or sets of features, associated with it. Multi-view semi-supervised learning methods try to exploit the combination of multiple views along with large amounts of unlabeled data in order to learn better predictive functions when limited labeled data is available. However, a lack of complete view data limits the applicability of multi-view semi-supervised learning to real-world data. Commonly, one data view is readily and cheaply available, but additional views may be costly or available only in some cases. This thesis aims to make multi-view semi-supervised learning approaches more applicable to real-world data, specifically by addressing the issue of missing views through both feature generation and active learning, and by addressing the issue of model selection for semi-supervised learning with limited labeled data. The thesis introduces a unified approach for handling missing view data in multi-view semi-supervised learning tasks, which applies both to data with a completely missing additional view and to data missing views only in some instances. The idea is to learn a feature generation function mapping one view to another, with the mapping biased to encourage the generated features to be useful to multi-view semi-supervised learning algorithms. The mapping is then used to fill in views as pre-processing.
Unlike previously proposed single-view approaches to multi-view learning, the proposed approach can take advantage of additional view data when available, and for the case of partial view presence it is the first feature-generation approach specifically designed to take the multi-view semi-supervised learning aspect into account. The next component of the thesis is the analysis of an active view-completion scenario: in some tasks it is possible to obtain missing view data for a particular instance, but at some cost. Recent work has shown that an active selection strategy can be more effective than a random one. This thesis seeks a better understanding of active approaches and demonstrates that the effectiveness of an active selection strategy over a random one can depend on the relationship between the views. Finally, an important component of making multi-view semi-supervised learning applicable to real-world data is model selection, an open problem often avoided entirely in previous work. For cases of very limited labeled training data, the commonly used cross-validation approach can become ineffective. This thesis introduces a re-training alternative to the method-dependent approaches, similar in motivation to cross-validation, that generates new training and test data by sampling from the large amount of unlabeled data and from estimated conditional probabilities for the labels. The proposed approaches are evaluated on a variety of multi-view semi-supervised learning data sets, and the experimental results demonstrate their efficacy.
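    A much-simplified flavour of view completion can be sketched as nearest-neighbour feature generation. The thesis's actual approach learns a mapping biased toward the multi-view semi-supervised objective; the code below, with its invented names and plain k-NN averaging, is only an illustrative stand-in for the "fill in missing views as pre-processing" step.

```python
import math

def euclid(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def complete_views(view1, view2, k=2):
    # view1: list of feature vectors, present for every instance
    # view2: parallel list whose entries may be None (missing view);
    # fill each missing view-2 vector by averaging the view-2 vectors
    # of the k nearest complete instances in view-1 space
    complete = [i for i, v in enumerate(view2) if v is not None]
    filled = list(view2)
    for i, v in enumerate(view2):
        if v is None:
            near = sorted(complete,
                          key=lambda j: euclid(view1[i], view1[j]))[:k]
            dim = len(view2[near[0]])
            filled[i] = [sum(view2[j][d] for j in near) / len(near)
                         for d in range(dim)]
    return filled
```

After completion, both views exist for every instance, so any standard multi-view semi-supervised learner (e.g. a co-training-style algorithm) can be run unchanged on the filled data.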

    A deep learning palpebral fissure segmentation model in the context of computer user monitoring

    The intense use of computers and visual terminals is a daily practice for many people. As a consequence, complaints of visual and non-visual symptoms, such as headaches and neck pain, are frequent. These symptoms make up Computer Vision Syndrome, and among the factors related to this syndrome are the distance between the user and the screen, the number of hours of use of the equipment, the reduction in the blink rate, and the number of incomplete blinks while using the device. Although some of these items can be controlled by ergonomic measures, controlling blinks and their efficiency is more complex. A considerable number of studies have looked at measuring blinks, but few have dealt with the presence of incomplete blinks. Conventional measurement techniques have limitations when it comes to detecting and analyzing the completeness of blinks, especially because of the different eye and blink characteristics of individuals, as well as the position and movement of the user. Segmenting the palpebral fissure can be a first step towards solving this problem, characterizing individuals well regardless of these factors. This work investigates the development of deep learning models to perform palpebral fissure segmentation in situations where the eyes cover a small region of the image, such as images from a computer webcam. Training, validation, and test sets were generated based on the CelebAMask-HQ and Closed Eyes in the Wild datasets. Various machine learning techniques were applied, resulting in a final trained model with a Dice coefficient close to 0.90 on the test data, a result similar to that obtained by models trained on images in which the eye region occupies most of the image.
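    The Dice coefficient reported for the segmentation model can be computed directly from binary masks; a minimal sketch follows (the nested-list mask representation and function name are assumptions for illustration, not taken from the thesis).

```python
def dice_coefficient(pred, truth):
    # pred, truth: same-size binary masks given as nested lists of 0/1;
    # Dice = 2 * |intersection| / (|pred| + |truth|)
    inter = sum(p & t
                for prow, trow in zip(pred, truth)
                for p, t in zip(prow, trow))
    total = sum(sum(row) for row in pred) + sum(sum(row) for row in truth)
    return 2 * inter / total if total else 1.0  # two empty masks match
```

A perfect prediction gives 1.0 and a disjoint one gives 0.0, so a score near 0.90 on webcam-scale eye regions is close to the ceiling achieved on close-up eye images.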

    Design and optimization of medical information services for decision support


    Developing A Novel Theranostic Nano-Platform For Simultaneous Multimodal Imaging And Radionuclide Therapy

    The aim of this project was to develop and evaluate a theranostic nano-platform enabling radionuclide therapy (RNT) and multimodal imaging, to improve the therapy and diagnosis of lymph node metastases. The work presented in this thesis consists of four main studies. First, Feraheme (FH) and two other superparamagnetic iron-oxide nanoparticles (SPIONs) were radiolabelled with radioisotopes commonly used in the clinic (89Zr, 177Lu and 90Y) for imaging and therapy, utilising a novel chelate-free technique that produced a high radiochemical yield and purity (up to 98%). FH nanoparticles were successfully radiolabelled with 90Y and 177Lu, the first experimental demonstration that the HIR technique can be extended to radiolabel FH with these isotopes. In the second study, a series of phantom experiments demonstrated that 89Zr-FH is a novel nanotechnology for simultaneous PET/MR imaging, integrating the spatial resolution and tissue contrast of MR imaging with the high sensitivity of PET. An additional phantom study demonstrated the ability to image 177Lu-FH using Single Photon Emission Computed Tomography. The third study was a proof of concept for 90Y RNT. The results revealed that in RNT the kinetics of DNA double-strand break (DSB) induction, repair, and misrepair must be considered when deriving radiobiological parameters. The fourth study, a Monte Carlo simulation, examined the subcellular mechanisms of dose delivery of the radionuclide 223Ra when treating metastases. These simulations showed that indirect cell damage may play an important role in RNT with alpha emitters because of the stochastic nature of alpha-particle energy deposition. In conclusion, these results open a pathway towards a novel nuclear nano-platform for multimodal imaging and RNT of lymph node metastases.
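    The Monte Carlo flavour of the fourth study can be hinted at with a deliberately toy geometric sketch: a spherical cell and nucleus, uniformly distributed decay sites, and straight finite alpha tracks. Every name and parameter below is invented for illustration; the thesis's simulations model far richer physics (energy deposition along the track, DNA damage, indirect effects).

```python
import math
import random

def mc_nucleus_hit_fraction(n, cell_r, nuc_r, alpha_range, seed=0):
    # estimate the fraction of 223Ra-style decays whose alpha track
    # passes through the nucleus, in a toy spherical-cell geometry
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # decay site: uniform in the cell sphere (rejection sampling)
        while True:
            p = [rng.uniform(-cell_r, cell_r) for _ in range(3)]
            if sum(c * c for c in p) <= cell_r ** 2:
                break
        # emission direction: isotropic (normalised Gaussian triple)
        d = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in d))
        d = [c / norm for c in d]
        # closest approach to the nucleus centre along the finite track
        s = max(0.0, min(alpha_range,
                         -sum(pc * dc for pc, dc in zip(p, d))))
        q = [pc + s * dc for pc, dc in zip(p, d)]
        if math.sqrt(sum(c * c for c in q)) <= nuc_r:
            hits += 1
    return hits / n
```

The hit fraction fluctuates run to run for intermediate geometries, which is exactly the stochastic character of alpha-particle dose delivery the study points to.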

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering, FASE 2021, which took place during March 27–April 1, 2021, and was held as part of the Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 16 full papers presented in this volume were carefully reviewed and selected from 52 submissions. The book also contains 4 Test-Comp contributions.