
    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged in the years since the appearance of the fourth book on DSmT in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and a network for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
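
The PCR rules surveyed in the first part redistribute conflicting mass back to the focal elements involved in each partial conflict, proportionally to their masses. Purely as an illustration of that mechanism, the sketch below applies the standard two-source PCR5 rule on a tiny frame {A, B}; the dictionary representation and the example mass values are illustrative assumptions, not material from the book.

```python
# Hedged sketch: two-source PCR5 combination on a tiny frame {A, B, A∪B}.
# Focal elements are frozensets; the example masses are illustrative only.
from itertools import product

def pcr5(m1, m2):
    """Combine two basic belief assignments with the PCR5 rule."""
    combined = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            # Conjunctive part: intersect focal elements and multiply masses.
            combined[inter] = combined.get(inter, 0.0) + mx * my
        else:
            # Partial conflict: redistribute m1(x)*m2(y) back to x and y
            # proportionally to the masses that produced it.
            denom = mx + my
            if denom > 0:
                combined[x] = combined.get(x, 0.0) + mx**2 * my / denom
                combined[y] = combined.get(y, 0.0) + my**2 * mx / denom
    return combined

A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, B: 0.1, A | B: 0.3}
m2 = {A: 0.2, B: 0.7, A | B: 0.1}
result = pcr5(m1, m2)
print({tuple(sorted(k)): round(v, 4) for k, v in result.items()})
assert abs(sum(result.values()) - 1.0) < 1e-9  # masses still sum to one
```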

    Real-Time Fault Diagnosis of Permanent Magnet Synchronous Motor and Drive System

    Permanent Magnet Synchronous Motors (PMSMs) have gained massive popularity in industrial applications such as electric vehicles, robotic systems, and offshore industries due to their merits of efficiency, power density, and controllability. PMSMs working in such applications are constantly exposed to electrical, thermal, and mechanical stresses, resulting in different faults such as electrical, mechanical, and magnetic faults. These faults may lead to efficiency reduction, excessive heat, and even catastrophic system breakdown if not diagnosed in time. Therefore, developing methods for real-time condition monitoring and detection of faults at early stages can substantially lower maintenance costs, system downtime, and productivity loss. In this dissertation, condition monitoring and detection of the three most common faults in PMSMs and drive systems, namely inter-turn short circuit, demagnetization, and sensor faults, are studied. First, modeling and detection of the inter-turn short circuit fault are investigated by proposing one FEM-based model and one analytical model. In these two models, efforts are made to extract either fault indicators or adjustments to be used in combination with more complex detection methods. Subsequently, a systematic fault diagnosis of the PMSM and drive system containing multiple faults based on structural analysis is presented. After implementing structural analysis and obtaining the redundant part of the PMSM and drive system, several sequential residuals are designed and implemented based on the fault terms that appear in each of the redundant sets to detect and isolate the studied faults, which are applied at different time intervals. Finally, real-time detection of faults in PMSMs and drive systems using a powerful statistical signal-processing detector, the generalized likelihood ratio test (GLRT), is investigated. Using the GLRT, a threshold was obtained for each detector by choosing the probability of false alarm and the probability of detection, based on which a decision was made to indicate the presence of the studied faults. To improve the detection and recovery delay time, a recursive cumulative GLRT with an adaptive threshold algorithm is implemented. As a result, this recursive algorithm yields a more refined fault indicator that is compared to the threshold, and a decision is made in real time. The experimental results show that the statistical detector is able to efficiently detect all the unexpected faults in the presence of unknown noise and without experiencing any false alarms, proving the effectiveness of this diagnostic approach.
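
The detectors described above compare a generalized likelihood ratio statistic against a threshold fixed by the chosen false-alarm probability. The sketch below is a generic textbook illustration of that idea, not the dissertation's implementation: a GLRT for an unknown constant offset in a residual with Gaussian noise of known variance, with hypothetical window length, noise level, and fault amplitude.

```python
# Hedged sketch: GLRT for an unknown constant offset (fault signature) in a
# residual corrupted by zero-mean Gaussian noise of known variance.
# Window length, noise level, and the injected fault are illustrative values.
import numpy as np
from scipy.stats import chi2

def glrt_statistic(window, sigma):
    """2*log-likelihood ratio for H1: nonzero mean vs H0: zero mean."""
    n = len(window)
    return n * np.mean(window) ** 2 / sigma ** 2

rng = np.random.default_rng(0)
sigma, n_window, p_fa = 0.1, 50, 1e-3
threshold = chi2.ppf(1.0 - p_fa, df=1)        # set from the false-alarm budget

residual = rng.normal(0.0, sigma, 1000)
residual[600:] += 0.08                        # hypothetical fault from sample 600

alarms = [
    t for t in range(n_window, len(residual))
    if glrt_statistic(residual[t - n_window:t], sigma) > threshold
]
print("first alarm at sample:", alarms[0] if alarms else None)
```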

    Deep Multimodality Image-Guided System for Assisting Neurosurgery

    Intracranial brain tumors are among the ten most common malignant cancers and are responsible for substantial morbidity and mortality. The largest histological category of primary brain tumors is the gliomas, which have an extremely heterogeneous appearance and are difficult to distinguish radiologically from other brain lesions. Neurosurgery is usually the standard treatment for newly diagnosed glioma patients and may be followed by radiotherapy and adjuvant temozolomide chemotherapy. However, brain tumor surgery faces major challenges in achieving maximal tumor resection while avoiding postoperative neurological deficits. Two of these neurosurgical challenges are addressed in this work. First, manual delineation of the glioma, including its subregions, is difficult because of its infiltrative nature and the presence of heterogeneous contrast enhancement. Second, the brain deforms its shape, known as "brain shift", in response to surgical manipulation, swelling due to osmotic drugs, and anesthesia, which limits the usefulness of preoperative image data for guiding the procedure. Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided tools are mainly computer-assisted systems that use computer vision methods to facilitate perioperative surgical procedures. However, surgeons still have to mentally fuse the surgical plan from preoperative images with real-time information while manipulating the surgical instruments inside the body and monitoring progress toward the target. Therefore, the need for image guidance during neurosurgical procedures has always been a major concern of physicians. The aim of this research is to develop a novel system for perioperative image-guided neurosurgery (IGN), namely DeepIGN, with which the expected outcomes of brain tumor surgery can be achieved, thereby maximizing overall survival and minimizing postoperative neurological morbidity. In this thesis, novel methods are first proposed for the core components of the DeepIGN system: brain tumor segmentation in MRI and multimodal preoperative MRI to intraoperative ultrasound (iUS) image registration, using recent developments in deep learning. Subsequently, the outcome prediction of the deep learning networks used is further interpreted and examined by generating human-understandable, explainable maps. Finally, open-source packages were developed and integrated into widely recognized software responsible for integrating information from tracking systems, image visualization and fusion, and displaying real-time updates of the instruments with respect to the patient space. The components of DeepIGN were validated in the laboratory and evaluated in a simulated operating room.
    For the segmentation module, DeepSeg, a generic decoupled deep learning framework for the automatic delineation of gliomas in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when applying advanced deep learning approaches such as 3D convolutions over all layers, region-based training, on-the-fly data augmentation techniques, and ensemble methods. To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering preoperative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted with two multi-location databases: BITE and RESECT. Two experienced neurosurgeons performed an additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. The experimental results show that the proposed iRegNet is fast and achieves the best accuracies. Moreover, the proposed iRegNet can deliver competitive results even on untrained images, demonstrating its generality, and can therefore be useful for intraoperative neurosurgical guidance. For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in applying AI techniques and deep neural networks. NeuroXAI comprises seven explanation methods that provide visualization maps to make deep learning models transparent. The experimental results show that the proposed XAI framework performs well in extracting local and global contexts and in generating explainable saliency maps to understand the prediction of the deep network. In addition, visualization maps are generated to recognize the flow of information in the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explanation process could provide medical professionals with additional information about the tumor segmentation results and thus help them understand how the deep learning model can successfully process MRI data. Furthermore, an interactive neurosurgical display for procedure guidance was developed that supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multimodal DeepIGN system were established, with the ability to integrate (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The accuracy of this system was tested on a custom agar phantom model, and its use in a preclinical operating room was simulated. The results of the clinical simulation confirmed that the system setup is straightforward, can be completed in a clinically acceptable time of 15 minutes, and achieves clinically acceptable accuracy.
    In this work, a multimodal IGN system was developed that exploits recent advances in deep learning to guide neurosurgeons precisely and to incorporate pre- and intraoperative patient image data as well as interventional devices into the surgical procedure. DeepIGN was developed as open-source research software to accelerate research in this field, to facilitate sharing among multiple research groups, and to enable continuous development by the community. The experimental results are very promising for the application of deep learning models to support interventional procedures, a crucial step towards improving the surgical treatment of brain tumors and the corresponding long-term postoperative outcomes.
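
Segmentation accuracy above is reported as the Dice coefficient (0.84 for the gross tumor volume). For reference, a minimal Dice computation on binary masks is sketched below; the toy volumes and the smoothing term are illustrative choices, not taken from the DeepSeg code.

```python
# Hedged sketch: Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|), with eps guarding the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D volumes standing in for a predicted and a reference tumor mask.
pred = np.zeros((8, 64, 64), dtype=np.uint8)
ref = np.zeros_like(pred)
pred[2:6, 10:40, 10:40] = 1
ref[3:6, 12:38, 12:42] = 1
print(f"Dice: {dice(pred, ref):.3f}")
```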

    Edoardo Benvenuto Prize. Collection of papers

    Since its establishment, the Edoardo Benvenuto Association has set itself the objective of promoting studies and research on the science and art of building in their historical development, in order to honor the memory of Edoardo Benvenuto (1940-1998). In recent years the Association has achieved interesting results by developing various activities, such as: the organization of national and international meetings, conferences, and study days; collaborations with national and foreign research institutions; the promotion of the editorial series “Between Mechanics and Architecture”; and the activation of the portal Bibliotheca Mechanica Architectonica, the first “open source” digitized library dedicated to historical research on mechanical and architectural texts. But perhaps the most significant initiative was the institution of the Edoardo Benvenuto Prize, which reached its twelfth edition in 2019 and is reserved for young researchers in the field of historical studies on science and the art of building. The Prize is awarded after an in-depth examination, by an international commission of experts, of the texts received by the Association. The purpose of this book is to collect and present the most recent studies and publications produced by the winners of the various editions of the Edoardo Benvenuto Prize.

    Detecting cow behaviours associated with parturition using computer vision

    Monitoring of dairy cows and their calves during parturition is essential for determining whether there are any associated problems for mother and offspring. This is a critical period in the productive life of the mother and offspring. A difficult and assisted calving can impact the subsequent milk production, health and fertility of a cow, and its potential survival. Furthermore, an alert to the need for any assistance would enhance animal and stockperson wellbeing. Manual monitoring of animal behaviour from images has been used for decades, but is very labour intensive. Recent technological advances in the field of Computer Vision, based on the technique of Deep Learning, now make automated monitoring of surveillance video feeds feasible. The benefit of using image analysis compared to other monitoring systems is that it relies on neither transponder attachments nor invasive tools and may provide more information at a relatively low cost. Image analysis can also detect and track the calf, which is not possible using other monitoring methods. Cameras are commonly used to monitor animals; however, automated detection of behaviours is new, especially for livestock. Using the latest state-of-the-art techniques in Computer Vision, and in particular the ground-breaking technique of Deep Learning, this thesis develops a vision-based model to detect the progress of parturition in dairy cows. A large-scale dataset of cow behaviour annotations was created, covering over 46 individual cow calvings, approximately 690 hours of video footage, and over 2,500 video clips, each between 3 and 10 seconds long. The model was trained on seven different behaviours: standing, walking, shuffle, lying, eating, drinking, and contractions while lying. The developed network correctly classified the seven behaviours with accuracies between 80% and 95%. The accuracy in predicting contractions while lying down was 83%, which in itself can serve as an early calving alert, as all cows start contractions one to two hours before giving birth. The performance of the developed model was also comparable to that of methods for human action classification on the Kinetics dataset.
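
The thesis trains a deep network to classify short video clips into the seven behaviours listed above. The abstract does not specify the architecture; purely as a structural illustration, the sketch below shows a minimal 3D-convolutional clip classifier in PyTorch with seven output classes and illustrative layer sizes.

```python
# Hedged sketch: minimal 3D-CNN classifier for short behaviour clips
# (7 classes: standing, walking, shuffle, lying, eating, drinking,
#  contractions while lying). Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class ClipClassifier(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # collapse time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips):                   # clips: (batch, 3, frames, H, W)
        feats = self.features(clips).flatten(1)
        return self.classifier(feats)

model = ClipClassifier()
dummy = torch.randn(2, 3, 16, 112, 112)        # two 16-frame RGB clips
print(model(dummy).shape)                       # torch.Size([2, 7])
```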

    Designing a New Tactile Display Technology and its Disability Interactions

    People with visual impairments have a strong desire for a refreshable tactile interface that can provide immediate access to a full page of Braille and tactile graphics. Regrettably, existing devices come at considerable expense and remain out of reach for many. The exorbitant costs associated with current tactile displays stem from their intricate design and the multitude of components needed for their construction. This underscores the pressing need for technological innovation that can enhance tactile displays, making them more accessible and available to individuals with visual impairments. This research thesis delves into the development of a novel tactile display technology known as Tacilia. This technology's necessity and prerequisites are informed by in-depth qualitative engagements with students who have visual impairments, alongside a systematic analysis of the prevailing architectures underpinning existing tactile display technologies. The evolution of Tacilia unfolds through iterative processes encompassing conceptualisation, prototyping, and evaluation. With Tacilia, three distinct products and interactive experiences are explored, empowering individuals to manually draw tactile graphics, generate digitally designed media through printing, and display these creations on a dynamic pin array display. This innovation underscores Tacilia's capability to streamline the creation of refreshable tactile displays, rendering them more fitting, usable, and economically viable for people with visual impairments.

    A low cost virtual reality interface for educational games

    Mobile virtual reality has the potential to improve learning experiences by making them more immersive and engaging for students. This type of virtual reality also aims to be more cost effective by using a smartphone to drive the virtual reality experience. One issue with mobile virtual reality is that the screen (i.e. the main interface) of the smartphone is occluded by the virtual reality headset. To investigate solutions to this issue, this project details the development and testing of a computer vision based controller that aims to have a cheaper per-unit cost than a conventional electronic controller by making use of 3D printing and the built-in camera of a smartphone. Reducing the cost per unit is useful for educational contexts, as solutions would need to scale to classroom sizes. The research question for this project is thus: “Can a computer vision based virtual reality controller provide comparable immersion to a conventional electronic controller?” It was found that a computer vision based controller can provide comparable immersion, though it is more challenging to use. This challenge was found to contribute more towards engagement, as it did not diminish users' performance in terms of question scores.
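
The abstract does not describe the specific computer vision technique used by the controller. Purely as an illustration of one plausible low-cost approach, the sketch below tracks a coloured marker in the smartphone camera feed with OpenCV and reports its normalized position as a controller input signal; the colour bounds and camera index are hypothetical.

```python
# Hedged sketch: track a brightly coloured marker in the phone camera feed and
# report its normalized position, one plausible input signal for a low-cost
# vision-based VR controller. HSV bounds are illustrative and need tuning.
import cv2
import numpy as np

LOWER, UPPER = np.array([40, 80, 80]), np.array([80, 255, 255])  # greenish marker

def marker_position(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    moments = cv2.moments(mask)
    if moments["m00"] < 1e-3:
        return None                              # marker not visible
    h, w = mask.shape
    return moments["m10"] / moments["m00"] / w, moments["m01"] / moments["m00"] / h

cap = cv2.VideoCapture(0)                        # stand-in for the phone camera
ok, frame = cap.read()
if ok:
    print("marker at:", marker_position(frame))
cap.release()
```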

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that is rising in incidence. Radiographic images are crucial for the assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC exhibits differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information, coupled to artificial intelligence (AI) approaches, could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response could be classified, and prognosis could be reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging approaches that can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and we developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. First, we quantified interobserver variability for an unprecedentedly large number of observers for various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy to clinical experts, and that certain mpMRI input channel combinations could further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment-sensitive and treatment-resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance.
    Additionally, we developed an AI algorithm to predict OPC patient progression-free survival using pre-therapy imaging from an international data science competition (in which our approach ranked 1st place), and then translated these approaches to mpMRI data. We demonstrated that AI models could be used to predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed. In summary, the completion of these aims facilitates the development of an image-guided, fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871).
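
The first aim benchmarks MRI intensity standardization and registration before the images enter the AI pipelines. As a generic baseline illustration only (the dissertation compares several more sophisticated approaches), the sketch below applies z-score intensity standardization within a foreground mask; the toy volume and threshold are illustrative.

```python
# Hedged sketch: z-score intensity standardization of an MRI volume restricted
# to a foreground mask, a common baseline pre-processing step. This is generic,
# not the dissertation's benchmarked pipeline.
import numpy as np

def zscore_standardize(volume, mask=None):
    """Rescale voxel intensities to zero mean, unit variance inside the mask."""
    mask = np.ones_like(volume, dtype=bool) if mask is None else mask.astype(bool)
    voxels = volume[mask]
    standardized = (volume - voxels.mean()) / (voxels.std() + 1e-8)
    return standardized * mask                   # background stays at zero

volume = np.random.default_rng(1).gamma(2.0, 50.0, size=(16, 64, 64))
mask = volume > 40.0                             # crude foreground threshold
standardized = zscore_standardize(volume, mask)
print(f"{standardized[mask].mean():.6f} {standardized[mask].std():.6f}")
```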

    Hybrid Algorithm for the Detection of Lung Cancer Using CNN and Image Segmentation

    When it comes to cancer-related fatalities, lung cancer is by far the most common cause. Early detection is the key to a successful diagnosis and treatment plan for lung cancer, just as it is for other types of cancer. Automatic CAD systems for lung cancer screening using computed tomography scans primarily involve two steps: the first step is to detect all potentially malignant pulmonary nodules, and the second step is to determine whether or not the nodules are malignant. Much has been published recently about the first step, but far less about the second. Screening for lung cancer requires a careful investigation of each suspicious nodule and the integration of information from all nodules. This is because the presence of pulmonary nodules does not always indicate cancer, and the morphology of nodules, including their shape, size, and contextual information, has a complex relationship with cancer. To overcome this problem, we propose a deep CNN architecture that differs from the architectures commonly utilised in computer vision. First, the suspicious nodules are generated with a modified version of U-Net; they are then used as input data for our model. To automatically diagnose lung cancer, the proposed model is a multi-path CNN that concurrently makes use of local characteristics as well as more general contextual characteristics from a wider region. To accomplish this, the model consists of three separate pathways, each using a different receptive field size, which contributes to modelling short- and long-range dependencies between neighbouring pixels. We then concatenate the features from the three pathways to further improve the model's performance. Finally, one of our contributions is a retraining phase that enables us to address issues caused by an uneven distribution of image labels. The experimental findings from the KDSB 2017 challenge demonstrate that our model is more adaptable to the described inconsistency among the nodules' sizes and shapes. Furthermore, our model obtained better results in comparison to other studies.
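
The multi-path design described above runs three parallel pathways with different receptive fields and concatenates their features before classification. A minimal PyTorch version of that structure is sketched below; the kernel sizes, channel widths, and input patch size are illustrative guesses, not the paper's configuration.

```python
# Hedged sketch: a three-path CNN where each branch uses a different kernel size
# (i.e., receptive field) and the branch features are concatenated before the
# classifier head. Kernel sizes and widths are illustrative, not the paper's.
import torch
import torch.nn as nn

class MultiPathCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            for k in (3, 5, 7)                   # small to large receptive fields
        ])
        self.head = nn.Linear(16 * 3, num_classes)

    def forward(self, x):
        feats = [path(x).flatten(1) for path in self.paths]
        return self.head(torch.cat(feats, dim=1))

model = MultiPathCNN()
nodule_patches = torch.randn(4, 1, 64, 64)       # candidate nodule crops from U-Net
print(model(nodule_patches).shape)                # torch.Size([4, 2])
```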

    Towards Phytoplankton Parasite Detection Using Autoencoders

    Phytoplankton parasites are largely understudied microbial components with a potentially significant ecological impact on phytoplankton bloom dynamics. To better understand their impact, we need improved detection methods to integrate phytoplankton parasite interactions into the monitoring of aquatic ecosystems. Automated imaging devices usually produce large amounts of phytoplankton image data, while the occurrence of anomalous phytoplankton data is rare. Thus, we propose an unsupervised anomaly detection system based on the similarity of the original and autoencoder-reconstructed samples. With this approach, we were able to reach an overall F1 score of 0.75 across nine phytoplankton species, which could be further improved by species-specific fine-tuning. The proposed unsupervised approach was further compared with a supervised Faster R-CNN based object detector. With this supervised approach and the model trained on plankton species and anomalies, we were able to reach the highest F1 score of 0.86. However, the unsupervised approach is expected to be more universal, as it can also detect unknown anomalies and does not require any annotated anomalous data, which may not always be available in sufficient quantities. Although other studies have dealt with plankton anomaly detection in terms of non-plankton particles or air bubble detection, our paper is, to the best of our knowledge, the first to focus on automated anomaly detection considering putative phytoplankton parasites or infections.
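
The unsupervised detector flags a sample as anomalous when the autoencoder reconstructs it poorly. A minimal sketch of that decision rule follows; the tiny architecture, the mean-squared-error similarity measure, and the percentile threshold are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: anomaly detection by autoencoder reconstruction error.
# A sample whose reconstruction error exceeds a threshold calibrated on normal
# training data is flagged as a putative anomaly (e.g., a parasite infection).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, batch):
    with torch.no_grad():
        return ((model(batch) - batch) ** 2).mean(dim=1)   # per-sample MSE

model = TinyAutoencoder()                 # assume it was trained on normal plankton
normal = torch.rand(100, 64 * 64)         # stand-ins for flattened plankton images
threshold = reconstruction_error(model, normal).quantile(0.95)

candidates = torch.rand(10, 64 * 64)
is_anomaly = reconstruction_error(model, candidates) > threshold
print(is_anomaly.tolist())
```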