
    Productivity equation and the m distributions of information processing in workflows

    This research investigates an equation of productivity for workflows and its robustness with respect to defining workflows as probability distributions. The equation and its derivations were formulated within a theoretical framework drawing on information theory, probability, and complex adaptive systems. By defining the productivity equation for organism-object interactions, workflow behaviour can be predicted and monitored mathematically without strict empirical methods, while allowing workflow flexibility in organism-object environments. (Comment: 6 pages, 0 figures)

    Alarm flood reduction using multiple data sources

    The introduction of distributed control systems in the process industry has increased the number of alarms per operator exponentially. Modern plants present a high level of interconnectivity due to steam recirculation, heat integration and the complex control systems installed in the plant. When there is a disturbance in the plant it spreads through its material, energy and information connections, affecting the process variables on the path. The alarms associated with these process variables are triggered. The alarm messages may overload the operator in the control room, who will not be able to properly investigate each one of these alarms. This undesired situation is called an “alarm flood”. In such situations the operator might not be able to keep the plant within safe operation. The aim of this thesis is to reduce alarm flood periods in process plants. Consequential alarms coming from the same process abnormality are isolated and a causal alarm suggestion is given. The causal alarm in an alarm flood is the alarm associated with the asset originating the disturbance that caused the flood. Multiple information sources are used: an alarm log containing all past alarm messages, process data and a topology model of the plant. The alarm flood reduction is achieved with a combination of alarm log analysis, process data root-cause analysis and connectivity analysis. The research findings are implemented in a software tool that guides the user through the different steps of the method. Finally, the applicability of the method is demonstrated with an industrial case study
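An alarm flood is commonly defined, following the ISA-18.2 / IEC 62682 alarm-management standards, as more than 10 alarms per operator in any 10-minute window. A minimal Python sketch of that sliding-window criterion over an alarm log (the event times and threshold here are illustrative, not taken from the thesis):

```python
from collections import deque
from datetime import datetime, timedelta

def flood_periods(alarm_times, threshold=10, window=timedelta(minutes=10)):
    """Return start times of windows in which the alarm rate exceeds
    `threshold` alarms per `window` (the usual ISA-18.2 flood criterion)."""
    alarm_times = sorted(alarm_times)
    active = deque()   # alarms inside the current sliding window
    floods = []
    for t in alarm_times:
        active.append(t)
        while active and t - active[0] > window:
            active.popleft()
        if len(active) > threshold:
            floods.append(active[0])
    return floods
```

In a real alarm log the timestamps would come from the historian; the flood periods found this way are the spans the thesis's method then reduces to a single causal-alarm suggestion.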

    Software and Hardware-based Tools for Improving Ultrasound Guided Prostate Brachytherapy

    Minimally invasive procedures for prostate cancer diagnosis and treatment, including biopsy and brachytherapy, rely on medical imaging such as two-dimensional (2D) and three-dimensional (3D) transrectal ultrasound (TRUS) and magnetic resonance imaging (MRI) for critical tasks such as target definition and diagnosis, treatment guidance, and treatment planning. Use of these imaging modalities introduces challenges including time-consuming manual prostate segmentation, poor needle tip visualization, and variable MR-US cognitive fusion. The objective of this thesis was to develop, validate, and implement software- and hardware-based tools specifically designed for minimally invasive prostate cancer procedures to overcome these challenges. First, a deep learning-based automatic 3D TRUS prostate segmentation algorithm was developed and evaluated using a diverse dataset of clinical images acquired during prostate biopsy and brachytherapy procedures. The algorithm significantly outperformed state-of-the-art fully 3D CNNs trained using the same dataset, while a segmentation time of 0.62 s demonstrated a significant reduction compared to manual segmentation. Second, the impact of dataset size, image quality, and image type on segmentation performance using this algorithm was examined. Using smaller training datasets, segmentation accuracy was shown to plateau with as few as 1000 training images, supporting the use of deep learning approaches even when data is scarce. The development of an image quality grading scale specific to 3D TRUS images will allow for easier comparison between algorithms trained using different datasets. Third, a power Doppler (PD) US-based needle tip localization method was developed and validated in both phantom and clinical cases, demonstrating reduced tip error and variation for obstructed needles compared to conventional US. Finally, a surface-based MRI-3D TRUS deformable image registration algorithm was developed and implemented clinically, demonstrating improved registration accuracy compared to manual rigid registration and reduced variation compared to the current clinical standard of physician cognitive fusion. These generalizable and easy-to-implement tools have the potential to improve workflow efficiency and accuracy for minimally invasive prostate procedures
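Segmentation accuracy in work like this is typically reported with the Dice similarity coefficient, which measures the overlap between a predicted and a reference mask. The abstract does not name its metric, so this numpy sketch shows the usual evaluation under that assumption, with toy masks standing in for real contours:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 2D masks standing in for manual vs. automatic prostate contours
manual = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
auto   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
score = dice(manual, auto)   # 2*3 / (4 + 3) ≈ 0.857
```

In practice 3D masks are used and the coefficient is averaged over a test set; a score of 1.0 means perfect overlap.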

    Plant-Wide Diagnosis: Cause-and-Effect Analysis Using Process Connectivity and Directionality Information

    Production plants used in modern process industry must produce products that meet stringent environmental, quality and profitability constraints. In such integrated plants, non-linearity and strong process dynamic interactions among process units complicate root-cause diagnosis of plant-wide disturbances because disturbances may propagate to units at some distance away from the primary source of the upset. Similarly, implemented advanced process control strategies, backup and recovery systems, use of recycle streams and heat integration may hamper detection and diagnostic efforts. It is important to track down the root-cause of a plant-wide disturbance because once corrective action is taken at the source, secondary propagated effects can be quickly eliminated with minimum effort and reduced down time with the resultant positive impact on process efficiency, productivity and profitability. In order to diagnose the root-cause of disturbances that manifest plant-wide, it is crucial to incorporate and utilize knowledge about the overall process topology or interrelated physical structure of the plant, such as is contained in Piping and Instrumentation Diagrams (P&IDs). Traditionally, process control engineers have intuitively referred to the physical structure of the plant by visual inspection and manual tracing of fault propagation paths within the process structures, such as the process drawings on printed P&IDs, in order to make logical conclusions based on the results from data-driven analysis. This manual approach, however, is prone to various sources of errors and can quickly become complicated in real processes. The aim of this thesis, therefore, is to establish innovative techniques for the electronic capture and manipulation of process schematic information from large plants such as refineries in order to provide an automated means of diagnosing plant-wide performance problems. 
    This report also describes the design and implementation of a computer application program that integrates: (i) process connectivity and directionality information from intelligent P&IDs; (ii) results from data-driven cause-and-effect analysis of process measurements; and (iii) process know-how, to help process control engineers and plant operators gain process insight. This work explored intelligent P&IDs, created with AVEVA® P&ID, a Computer Aided Design (CAD) tool, and exported as an ISO 15926 compliant, platform- and vendor-independent text-based XML description of the plant. The XML output was processed by a software tool developed in the Microsoft® .NET environment in this research project to computationally generate a connectivity matrix showing plant items and their connections. The connectivity matrix can be exported to the Excel® spreadsheet application for use by other applications and has served as a precursor to other research work. The final version of the developed software tool links statistical results of cause-and-effect analysis of process data with the connectivity matrix to simplify the cause-and-effect analysis and to gain insights from the connectivity information. Process know-how and understanding are incorporated to generate logical conclusions. The thesis presents a case study of an atmospheric crude heating unit as an illustrative example to drive home key concepts and also describes an industrial case study involving refinery operations. In the industrial case study, in addition to confirming the root-cause candidate, the developed software tool was tasked with determining the physical sequence of the fault propagation path within the plant. This was then compared with the hypothesis about the disturbance propagation sequence generated by a purely data-driven method. The results show a high degree of overlap, which helps to validate the statistical data-driven technique and to easily identify spurious results from the data-driven multivariable analysis. This significantly increases control engineers' confidence in the data-driven methods used for root-cause diagnosis. The thesis concludes with a discussion of the approach and presents ideas for further development of the methods
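The connectivity matrix described above is a directed adjacency matrix over plant items; a transitive-closure pass over it then tells which items a disturbance can physically propagate to. A small numpy sketch of both steps, with invented item names and topology standing in for a real P&ID export:

```python
import numpy as np

# Hypothetical plant items and directed connections, as would be
# extracted from an intelligent P&ID's XML description
items = ["feed_pump", "heater", "column", "condenser"]
edges = [("feed_pump", "heater"), ("heater", "column"), ("column", "condenser")]

idx = {name: i for i, name in enumerate(items)}
n = len(items)
A = np.zeros((n, n), dtype=bool)          # connectivity (adjacency) matrix
for src, dst in edges:
    A[idx[src], idx[dst]] = True

# Warshall's algorithm: R[i, j] is True if a disturbance at item i
# can propagate, over any number of hops, to item j
R = A.copy()
for k in range(n):
    R |= np.outer(R[:, k], R[k, :])
```

A data-driven root-cause candidate can then be checked against `R`: the candidate is physically plausible only if every disturbed variable's item is reachable from it.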

    Whole-Brain Models to Explore Altered States of Consciousness from the Bottom Up.

    The scope of human consciousness includes states departing from what most of us experience as ordinary wakefulness. These altered states of consciousness constitute a prime opportunity to study how global changes in brain activity relate to different varieties of subjective experience. We consider the problem of explaining how global signatures of altered consciousness arise from the interplay between large-scale connectivity and local dynamical rules that can be traced to known properties of neural tissue. For this purpose, we advocate a research program aimed at bridging the gap between bottom-up generative models of whole-brain activity and the top-down signatures proposed by theories of consciousness. Throughout this paper, we define altered states of consciousness, discuss relevant signatures of consciousness observed in brain activity, and introduce whole-brain models to explore the biophysics of altered consciousness from the bottom-up. We discuss the potential of our proposal in view of the current state of the art, give specific examples of how this research agenda might play out, and emphasize how a systematic investigation of altered states of consciousness via bottom-up modeling may help us better understand the biophysical, informational, and dynamical underpinnings of consciousness

    The Impact of Digital Technologies on Public Health in Developed and Developing Countries

    This open access book constitutes the refereed proceedings of the 18th International Conference on Smart Homes and Health Telematics, ICOST 2020, held in Hammamet, Tunisia, in June 2020.* The 17 full papers and 23 short papers presented in this volume were carefully reviewed and selected from 49 submissions. They cover topics such as: IoT and AI solutions for e-health; biomedical and health informatics; behavior and activity monitoring; and wellbeing technology. *This conference was held virtually due to the COVID-19 pandemic

    Improving Access and Mental Health for Youth Through Virtual Models of Care

    The overall objective of this research is to evaluate the use of a mobile health smartphone application (app) to improve the mental health of youth between the ages of 14–25 years with symptoms of anxiety/depression. This project includes 115 youth who are accessing outpatient mental health services at one of three hospitals and two community agencies. The youth and care providers are using eHealth technology to enhance care. The technology uses mobile questionnaires to help promote self-assessment and track changes to support the plan of care. The technology also allows secure virtual treatment visits that youth can participate in through mobile devices. This longitudinal study uses participatory action research with mixed methods. The majority of participants identified themselves as Caucasian (66.9%). As expected, the demographics revealed that Anxiety Disorders and Mood Disorders were highly prevalent within the sample (71.9% and 67.5%, respectively). Findings from the qualitative summary established that both staff and youth found the software and platform beneficial

    Improving the interpretability of causality maps for fault identification

    Thesis (MEng)--Stellenbosch University, 2020.
    ENGLISH ABSTRACT: Worldwide competition forces modern mineral processing plants to operate at high productivity. This high productivity is achieved by implementing process monitoring to maintain the desired operating conditions. However, a fault originating in one section of a plant can propagate throughout the plant and so obscure its root cause. Causality analysis is a method that identifies the cause-effect relationships between process variables and presents these in a causality map which can be used to track the propagation path of a fault back to its root cause. A major obstacle to the wide acceptance of causality analysis as a tool for fault diagnosis in industry is the poor interpretability of causality maps. This study identified, proposed and assessed ways to improve the interpretability of causality maps for fault identification. All approaches were tested on a simulated case study and the resulting maps compared to a standard causality map or its transitive reduction. The ideal causality map was defined and all comparisons were performed based on its characteristics. Causality maps were produced using conditional Granger causality (GC), with a novel heuristic approach for selecting the sampling period and time window. Conditional GC was found to be ill-suited to plant-wide causality analysis, due to large data requirements, poor model order selection using AIC, and inaccuracy in the presence of multiple different residence times and time delays. Methods to incorporate process knowledge to constrain connections and potential root causes were investigated and found to remove all spurious connections and decrease the pool of potential root cause variables respectively. Tools such as visually displaying node rankings on the causality map and incorporating sliders to manipulate connections and variables were also investigated.
    Furthermore, a novel hierarchical approach for plant-wide causality analysis was proposed, where causality maps were constructed in two subsequent stages. In the first stage, a less-detailed plant-wide map was constructed using representatives for groups of variables, and used to localise the fault to one of those groups of variables. Variables were grouped according to plant sections or modules identified in the data, and the first principal component (PC1) was used to represent each group (PS-PC1 and Mod-PC1 respectively). PS-PC1 was found to be the most promising approach, as its plant-wide map clearly identified the true root cause location, and the stage-wise application of conditional GC significantly reduced the required number of samples from 13 562 to 602. Lastly, a usability study in the form of a survey was performed to investigate the potential for industrial application of the tools and approaches presented in this study. Twenty responses were obtained, with participants consisting of Stellenbosch University final-year/postgraduate students, employees of an industrial IoT firm, and Anglo American Platinum employees. Main findings include that process knowledge is vital; grouping variables improves interpretability by decreasing the number of nodes; accuracy must be maintained during causality map simplification; and sliders add confusion by causing significant changes in the causality map. In addition, survey results found PS-PC1 to be the most user-friendly approach, further emphasizing its potential for application in industry.
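The Granger-causality idea underlying this thesis can be illustrated pairwise: x "Granger-causes" y if adding lagged values of x to an autoregressive model of y substantially shrinks the residual variance. A self-contained numpy sketch of that variance comparison (the simulated series, lag order, and coefficients are illustrative; the thesis itself applies conditional GC across many variables, not this two-variable case):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                     # y is driven by the past of x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def ar_rss(target, *predictors, p=2):
    """Residual sum of squares of an order-p regression of `target`
    on its own lags and the lags of each extra predictor."""
    cols = [np.ones(len(target) - p)]
    for s in (target,) + predictors:
        for lag in range(1, p + 1):
            cols.append(s[p - lag:len(s) - lag])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, target[p:], rcond=None)
    resid = target[p:] - X @ beta
    return float(resid @ resid)

# Restricted model (y's own past) vs. full model (plus x's past):
# a large drop in RSS is evidence that x Granger-causes y
rss_y = ar_rss(y)
rss_yx = ar_rss(y, x)
```

In the full conditional-GC setting the same comparison is repeated for every directed pair while conditioning on all remaining variables, which is what drives the large data requirements the abstract mentions.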

    Treatment Outcome Prediction in Locally Advanced Cervical Cancer: A Machine Learning Approach using Feature Selection on Multi-Source Data

    Cancer is a significant global health issue, and cervical cancer, one of the most common types among women, has far-reaching impacts worldwide. Researchers are studying cervical cancer from various perspectives, conducting thorough investigations, and utilizing novel technologies to gain a deeper understanding of the disease and its risk factors. Machine learning has increasingly found applications in cancer research due to its ability to analyze complex data relationships, recognize patterns, adapt to new information, and integrate with other technologies. By harnessing predictive machine learning models to anticipate treatment outcomes before commencing any therapies, healthcare providers might be able to make more informed decisions, allocate resources effectively, and provide personalized care. Despite significant efforts in the scientific community, the development of accurate machine learning models for cervical cancer treatment outcome prediction faces several open challenges and unresolved questions. A major challenge in developing accurate prediction models is the limited availability and quality of data. The quantity and quality of data differ across various datasets, which can significantly affect the performance and applicability of machine learning models. Additionally, it is crucial to identify the most informative and relevant features from diverse data sources, including clinical, imaging, and molecular data, to ensure accurate outcome prediction. Moreover, cancer datasets often suffer from class imbalance. Addressing this issue is another essential step to prevent biased predictions and enhance the overall performance of the models. This study aims to improve the prediction of treatment outcomes in patients with locally advanced cervical cancer by utilizing a multi-source dataset and developing different machine-learning models. The dataset includes various data sources, such as medical images, gene scores, and clinical data. 
    A preprocessing pipeline is developed to optimize the data for training machine learning models. The Repeated Elastic Net Technique (RENT) is also employed as a feature selection method to reduce dataset dimensionality, improve model training time, and identify the most influential features for classifying patients' treatment outcomes. Furthermore, the Synthetic Minority Oversampling Technique (SMOTE) is used to address class imbalance in the dataset, and its impact on model performance is assessed. The study's findings indicate that the available data show promise for predicting patients' treatment outcomes early, suggesting that the developed models have the potential to serve as valuable auxiliary tools for medical professionals. Although the performance of the models remained relatively unchanged after implementing the RENT method, the models' average training time was reduced more than 8-fold in the worst case. Moreover, when imposing stricter feature selection criteria, clinical features were shown to play a more prominent role in predicting treatment outcomes than other data sources. Ultimately, the study revealed that balancing the dataset with the SMOTE technique could enhance the average performance of specific models by up to 44 times
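SMOTE balances a dataset by interpolating new minority-class points between existing minority samples and their nearest minority neighbours. A dependency-free numpy sketch of that core idea (the data and parameters below are invented for illustration; real use would typically go through imbalanced-learn's SMOTE implementation):

```python
import numpy as np

def smote(minority, n_new, k=3, rng=None):
    """Generate `n_new` synthetic samples by interpolating between a
    randomly chosen minority sample and one of its k nearest minority
    neighbours (the core idea of SMOTE)."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = np.asarray(minority, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]         # indices of k nearest neighbours
    out = np.empty((n_new, X.shape[1]))
    for m in range(n_new):
        i = rng.integers(len(X))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                    # interpolation factor in [0, 1)
        out[m] = X[i] + lam * (X[j] - X[i])
    return out

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
synthetic = smote(minority, n_new=10)
```

Because each synthetic point lies on a segment between two real minority samples, oversampling stays inside the minority region instead of merely duplicating points, which is why it can improve minority-class performance without exact-copy overfitting.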

    Unveiling healthcare data archiving: Exploring the role of artificial intelligence in medical image analysis

    Digital health archives can be regarded as modern databases designed to store and manage vast amounts of medical information, from patient clinical records and clinical studies to medical images and genomic data. The structured and unstructured data that make up health archives undergo scrupulous and rigorous validation procedures to guarantee accuracy, reliability, and standardization for clinical and research purposes. In the context of a continuously and rapidly evolving healthcare sector, artificial intelligence (AI) presents itself as a transformative force, capable of reshaping digital health archives by improving the management, analysis, and retrieval of vast clinical datasets, with the aim of achieving better-informed and more repeatable clinical decisions, timely interventions, and improved patient outcomes. Among the various archived data, the management and analysis of medical images in digital archives present numerous challenges due to data heterogeneity, variability in image quality, and the lack of annotations. The use of AI-based solutions can help address these problems effectively, improving the accuracy of image analysis, standardizing data quality, and facilitating the generation of detailed annotations. This thesis aims to use AI algorithms for the analysis of medical images stored in digital health archives. The present work investigates various medical imaging techniques, each characterized by a specific application domain and therefore presenting a unique set of challenges, requirements, and potential outcomes. In particular, this thesis focuses on AI-assisted diagnosis for three different imaging techniques in specific clinical scenarios: (i) endoscopic images acquired during laryngoscopy examinations, including an in-depth exploration of techniques such as keypoint detection for estimating vocal fold motility and segmentation of tumours of the upper aerodigestive tract; (ii) magnetic resonance images for the segmentation of intervertebral discs, for the diagnosis and treatment of spinal diseases as well as for image-guided surgical interventions; (iii) ultrasound images in rheumatology, for the assessment of carpal tunnel syndrome through segmentation of the median nerve. The methodologies presented in this work highlight the effectiveness of AI algorithms in analysing archived medical images. The methodological advances achieved underline the remarkable potential of AI in revealing information implicitly present in digital health archives