
    Fat Quantitation in Liver Biopsies Using a Pretrained Classification Based System

    Non-Alcoholic Fatty Liver Disease (NAFLD) is a common syndrome characterized mainly by fat accumulation in the liver and steatohepatitis. It is a serious medical condition with an estimated prevalence of 20% to 40% in adult populations of the Western world, and is associated with insulin resistance, which places patients at elevated mortality risk. An increased rate of fat aggregation can dramatically accelerate the development of liver steatosis, which in later stages may advance to fibrosis and cirrhosis. In recent years, studies have focused on building methodologies capable of detecting fat cells in histological sections using digital image processing techniques. The current study extends previous work on the detection of fatty liver by again identifying a number of diverse histological findings. It combines image analysis with supervised learning of fat droplet features, with the specific goal of excluding other findings from the fat ratio calculation. The method is evaluated on a set of 40 liver biopsy images acquired at different magnifications, performing satisfactorily (1.95% absolute error).
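
    As a minimal sketch of the combined image-analysis and supervised-learning idea described above: shape features are extracted from candidate regions and a classifier separates true fat droplets from other findings before the fat ratio is computed. The feature set, classifier choice, and training data here are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: supervised discrimination of fat droplets from other findings.
# Features, classifier and labels are illustrative assumptions.
import numpy as np
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def droplet_features(binary_mask):
    """One feature row (area, eccentricity, solidity, circularity) per region."""
    rows = []
    for r in regionprops(label(binary_mask)):
        circularity = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9)
        rows.append([r.area, r.eccentricity, r.solidity, circularity])
    return np.array(rows)

# Training on labelled candidate regions (1 = fat droplet, 0 = other finding):
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# is_fat = clf.predict(droplet_features(candidate_mask))
# Fat ratio = area of regions predicted as droplets / total tissue area.
```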

    Development of oxygen transfer materials for chemical looping reforming processes

    Current production methods of H2 and the rationale for H2 production from sustainable resources using alternative feedstocks are covered in Chapter 1. A literature review of the two processes investigated (steam reforming 'SR' and chemical looping reforming 'CLR'), of the catalysts used in steam reforming applications with a focus on Ni-based catalysts, and of two alternative hydrogen feedstocks from the transport sector (waste automotive lubricating oil 'WALO' and scrap tyre pyrolysis oil 'STPO') is conducted in Chapter 2. The materials and methods are described in Chapter 3, including the methodology for manufacturing the catalysts made in house (Ni-, Co-, and Ce-based), materials characterisation, the process procedures, and analysis techniques. Chapter 4 first presents the results and discusses the outputs of the SR and CLR experiments on the WALO and STPO. Both oils were shown to be potentially good feedstocks for SR and CLR, but removal of catalyst poisons would be necessary to prevent the gradual deactivation of the catalyst with an increasing number of CLR cycles. For the STPO in particular, almost 12 wt% of the original tyre was converted to H2. Thermodynamic equilibrium analysis of SR of CH4 at different steam-to-carbon ratios (S/C) and temperatures showed that CLR of methane would be best conducted at around 700 °C with an S/C of 3. SR and CLR of CH4 for 10 cycles indicated that the trimetallic catalyst made in house (Ni-Co-Ce/γ-Al2O3) had very good potential for CLR, with outputs equal to and maintained at equilibrium values, and evidence of significantly higher oxygen transfer than a 25 wt% NiO/γ-Al2O3 commercial catalyst.

    Microarray image processing using intelligent information systems

    Microarrays provide a simple way to measure the level of hybridization of known probes of interest with one or more samples under different conditions. Microarray image processing consists of three main stages. The first stage, called spot addressing and gridding, is the procedure for detecting each spot in the image and isolating it into a cell. In the second stage, known as the segmentation stage, each spot of the image is segmented to separate the signal from the background pixels. Finally, in the third stage the intensity of each spot is extracted and several quantities can be calculated.

    For the first stage, a generalized method for the spot addressing and gridding of microarray images is introduced, where the spots are structured in either a hexagonal or a rectangular grid. Initially the method identifies the grid of the image. Next, the method utilizes the properties of both the rectangular and the hexagonal grid to estimate the locations of the non-hybridized spots. This step detects a number of empty spots in each iteration of the proposed Growing Concentric Polygon (GCP) algorithm. The GCP algorithm grows a rectangular or a hexagonal form, for rectangular or hexagonal structured images respectively, detecting a number of spots on the polygon's contour. For the segmentation stage, a novel pixel-by-pixel supervised segmentation method based on classification techniques is proposed. The method classifies the pixels of the image into three categories using a Bayes classifier or Support Vector Machines (SVM). Apart from the signal and the background pixels, the third class includes pixels of artefacts, pixels of the contour of the spot, and pixels of the inner holes which exist in donut spots. A set of features from each pixel is used as input for the classification. The proposed method is advantageous compared to clustering-based methods, due to the direct characterization of each pixel into the designated category; clustering techniques, by contrast, generate clusters with no distinction between them unless a separate set of rules is applied. For the evaluation of the method, both rectangular and hexagonal structured images are employed from the Stanford Microarray Database and the CNV370 beadchip of Illumina, as well as simulated images generated by the simulator of Nykter et al. The method results in high accuracy in the detection of the spots, ranging from 92-99% depending on the dataset used. High accuracy is also achieved in the segmentation stage, where the signal and background pixels are extracted. Pixels from artefacts are excluded from the intensity extraction stage, to provide more reliable values for the levels of hybridization.

    This doctoral thesis aims at the extraction of biological data from images produced by microarray experiments. According to the existing literature, various methods have been proposed for the processing of microarray images, typically following three stages: 1. Spot addressing and gridding: aims at the accurate localization of each spot; automating this step is critical for facilitating the analysis of the large number of experiments. 2. Segmentation: separates the signal pixels from the background pixels. 3. Intensity extraction: consists of computing the mean intensity of the spots relative to the intensity of the background.

    During the thesis, an automatic method was implemented for the first processing stage, the localization of the spots. For the first time, a method was presented in the literature that can process microarray images produced with different printing approaches. There are two approaches for printing a microarray: (a) probes placed at the vertices of a rectangular grid and (b) probes placed at the vertices of a hexagonal grid. The generalized method that was implemented first recognizes the grid of the image, and then applies a novel algorithm to locate the centres of all the probes in the image. This algorithm, named the Growing Concentric Polygon algorithm, exploits the common properties of the two grids, locating the centres on the contour of concentric rectangles or hexagons, and also refines the positions of the generated centres. Finally, the method uses the Voronoi diagram to isolate each probe. Subsequently, a novel method was implemented for the segmentation (second processing stage) of the images using classification techniques. This method is novel in three respects, aiming to exclude from the quantification of hybridization the pixels originating from various image artefacts. To this end, (a) the pixel feature vector was extended, (b) a third pixel class was introduced to act as a pool for the pixels that should not be included in the quantified values, and (c) for the first time classification techniques were used to characterize the pixels into the three classes. For the evaluation of the methods, microarray images from the Stanford Microarray Database, microarray images from an Illumina station, and simulated images from the simulator of Nykter and colleagues, which is widely used in the literature, were employed. The results for spot localization showed high accuracy rates, ranging from 92-99%, while the generalized method achieved rates as high as those of methods specialized for each of the two printing approaches. Regarding segmentation, the signal and background pixels from which the quantified values were extracted were accurately selected, and pixels from artefacts appearing in the images were successfully excluded from the quantification.
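
    As a minimal sketch of the three-class pixel classification described above (signal, background, artefact/contour/donut hole): the feature set here, intensity plus local statistics and distance from the cell centre, is an illustrative assumption, not the thesis's exact feature vector.

```python
# Sketch: pixel-by-pixel supervised segmentation of one microarray spot cell.
# Feature choices are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def pixel_features(cell):
    """Per-pixel features: intensity, local mean, local variance, and
    distance from the cell centre. `cell` is a 2-D intensity array."""
    cell = cell.astype(float)
    local_mean = uniform_filter(cell, size=5)
    local_var = uniform_filter(cell ** 2, size=5) - local_mean ** 2
    yy, xx = np.indices(cell.shape)
    cy, cx = (cell.shape[0] - 1) / 2, (cell.shape[1] - 1) / 2
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    return np.stack([cell, local_mean, local_var, dist], axis=-1).reshape(-1, 4)

# Training: X = pixel_features over labelled cells, y in
# {0: background, 1: signal, 2: artefact/contour/donut hole}.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# labels = clf.predict(pixel_features(new_cell)).reshape(new_cell.shape)
```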

    Personalized UV Radiation Risk Monitoring Using Wearable Devices and Fuzzy Modeling

    This paper presents a solution for monitoring solar ultraviolet (UV) exposure and alerting users about risk in real time. The novel system provides smart personalized indications for solar radiation protection. The system consists of a sensing device and a mobile application. The sensing device monitors solar radiation in real time and transmits the values wirelessly to a smart device, on which the mobile application is installed. The mobile application then processes the values from the sensory apparatus using a fuzzy expert system (FES) built from personal information (hair and eye color, tanning and burning frequency), which the user enters by answering a short questionnaire. The FES provides an estimation of the recommended time of safe exposure in direct sunlight. The proposed system is designed to be portable (a wearable sensing device and a smartphone) and low cost, while supporting multiple users.
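
    As a minimal sketch of how a fuzzy expert system of this kind might map a live UV index and a questionnaire-derived skin sensitivity score to a safe-exposure estimate: the membership functions, rule base, and consequent values below are illustrative assumptions, not the paper's FES.

```python
# Sketch: zero-order Sugeno-style fuzzy estimate of safe sun-exposure time.
# Membership functions and rule consequents are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def safe_exposure_minutes(uv_index, sensitivity):
    """sensitivity in [0, 1]: 0 = very resistant skin, 1 = very sensitive."""
    # Fuzzify inputs.
    uv = {"low": tri(uv_index, -1, 1, 4),
          "moderate": tri(uv_index, 2, 5, 8),
          "high": tri(uv_index, 6, 9, 12)}
    skin = {"resistant": tri(sensitivity, -0.5, 0.0, 0.6),
            "sensitive": tri(sensitivity, 0.4, 1.0, 1.5)}
    # Each rule pairs a firing strength with a crisp "safe minutes" consequent.
    rules = [
        (min(uv["low"], skin["resistant"]), 120),
        (min(uv["low"], skin["sensitive"]), 60),
        (min(uv["moderate"], skin["resistant"]), 45),
        (min(uv["moderate"], skin["sensitive"]), 20),
        (min(uv["high"], skin["resistant"]), 15),
        (min(uv["high"], skin["sensitive"]), 5),
    ]
    # Weighted average of consequents (defuzzification).
    den = sum(w for w, _ in rules)
    return sum(w * m for w, m in rules) / den if den else 0.0

print(safe_exposure_minutes(uv_index=7.0, sensitivity=0.8))  # ~12.5 minutes
```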

    Rotationplasty outcomes assessed by gait analysis following resection of lower extremity bone neoplasms: a systematic review and meta-analysis

    Aims: The standard of surgical treatment for lower limb neoplasms has historically been characterized by highly interventional techniques, leading to severe kinetic impairment of patients and incidences of phantom pain. Rotationplasty has emerged as a potent limb salvage treatment option for young cancer patients with lower limb bone tumours, but its impact on gait, as assessed through comparative studies, remains unclear several years after the introduction of the procedure. The aim of this study is to assess the effect of rotationplasty on gait parameters measured by gait analysis compared to healthy individuals. Methods: The MEDLINE, Scopus, and Cochrane databases were systematically searched without time restriction until 10 January 2022 for eligible studies. Gait parameters measured by gait analysis were the outcomes of interest. Results: Three studies were eligible for analysis. Compared to healthy individuals, rotationplasty significantly decreased gait velocity (-1.45 cm/sec; 95% confidence interval (CI) -1.98 to -0.93; p < 0.001), stride length (-1.20 cm; 95% CI -2.31 to -0.09; p < 0.001), and cadence (-0.83 stride/min; 95% CI -1.29 to -0.36; p < 0.001), and non-significantly increased cycle time (0.54 sec; 95% CI -0.42 to 1.51; p = 0.184). Conclusion: Rotationplasty is a valid option for the management of lower limb bone tumours in young cancer patients. Larger studies, with high patient accrual, refined surgical techniques, and well planned rehabilitation strategies, are required to further improve the reported outcomes of this procedure. Cite this article: Bone Jt Open 2023;4(11):817–824.
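
    As a minimal sketch of the generic computation behind pooled estimates like those above, here is fixed-effect (inverse-variance) pooling of mean differences with a 95% confidence interval; the input numbers are placeholders, not the review's study data.

```python
# Sketch: fixed-effect inverse-variance pooling of per-study mean differences.
# Input values are placeholders, not the review's actual data.
import math

def pool_fixed_effect(effects, std_errors):
    """Inverse-variance weighted mean difference and its 95% CI."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical per-study mean differences in gait velocity and their SEs:
md, ci = pool_fixed_effect([-1.2, -1.6, -1.5], [0.4, 0.5, 0.3])
print(f"pooled MD = {md:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```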

    Evaluation of the User Adaptation in a BCI Game Environment

    Brain-computer interface (BCI) technology is a developing field of study with numerous applications. The purpose of this paper is to discuss the use of brain signals as a direct communication pathway to an external device. In this work, Zombie Jumper is developed, a game driven by two brain commands: imagining moving forward and blinking. The goal of the game is to jump over static or moving "zombie" characters in order to complete the level. To record the raw EEG data, a Muse 2 headband is used, and the OpenViBE platform is employed to process and classify the brain signals. The Unity engine is used to build the game, and the lab streaming layer (LSL) protocol is the connective link between Muse 2, OpenViBE and the Unity engine for this BCI-controlled game. A total of 37 subjects tested the game, each playing it at least 20 times. The average classification accuracy was 98.74%, ranging from 97.06% to 99.72%. Finally, playing the game for longer periods of time resulted in greater control.
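
    As a minimal sketch of consuming an EEG stream over LSL, the transport named above: this is generic pylsl usage under the assumption that a stream of type 'EEG' is being advertised (e.g. by a Muse bridge), not the authors' Muse 2 / OpenViBE / Unity integration code.

```python
# Sketch: pulling EEG samples from a lab streaming layer (LSL) stream.
from pylsl import StreamInlet, resolve_byprop

# Find the first stream advertising type 'EEG'; raises IndexError if none
# appears within the timeout.
streams = resolve_byprop("type", "EEG", timeout=10.0)
inlet = StreamInlet(streams[0])

while True:
    # Each sample is one value per channel, paired with an LSL timestamp.
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```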

    A Low-Cost Indoor Activity Monitoring System for Detecting Frailty in Older Adults

    Indoor localization systems already have wide applications, mainly in providing localized information and directions. The majority of them focus on commercial uses such as advertisements, guidance and asset tracking; medically oriented localization systems are uncommon. Given that an individual's indoor movements can be indicative of his/her clinical status, in this paper we present a low-cost indoor localization system with room-level accuracy used to assess the frailty of older people. We focused on designing a system with easy installation and low cost, so that it can be deployed by non-technical staff. The system was installed in older people's homes in order to collect data about their indoor localization habits. The collected data were examined in combination with their frailty status, showing a correlation between them. The indoor localization system is based on the processing of Received Signal Strength Indicator (RSSI) measurements from Bluetooth beacons by a tracking device, using a fingerprint-based procedure. The system has been tested in realistic settings, achieving accuracy above 93% in room estimation. The proposed system was used in 271 houses, collecting data in 1–7-day sessions. The evaluation of the collected data using ten-fold cross-validation showed an accuracy of 83% in classifying a monitored person's frailty status (Frail, Pre-frail, Non-frail).
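
    As a minimal sketch of fingerprint-based room estimation from beacon RSSI vectors: a k-NN classifier compares a live RSSI reading against labelled fingerprints. The beacon layout, RSSI values, and room labels below are illustrative assumptions, not the study's deployment or trained model.

```python
# Sketch: room-level localization from Bluetooth beacon RSSI fingerprints.
# All data below are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each fingerprint: one RSSI value (dBm) per beacon; a beacon that was not
# heard is filled with a floor value such as -100 dBm.
X_train = np.array([
    [-48, -71, -90],   # measured in the kitchen
    [-52, -69, -95],   # kitchen
    [-85, -50, -70],   # bedroom
    [-88, -47, -73],   # bedroom
    [-92, -75, -45],   # living room
])
y_train = ["kitchen", "kitchen", "bedroom", "bedroom", "living room"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[-50, -70, -93]]))  # -> ['kitchen']
```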

    Automated Detection of Liver Histopathological Findings Based on Biopsy Image Processing

    Hepatic steatosis is the accumulation of fat in the hepatic cells and the liver. These lipids include triglycerides and other kinds of molecules. Normally, free fatty acids are taken up by the liver and exported as lipoproteins; when there is a defect in this process, hepatic steatosis arises. Alcohol is the main cause of steatosis when excessive amounts are consumed over a long period of time. In many cases, steatosis can lead to inflammation, referred to as steatohepatitis or non-alcoholic steatohepatitis (NASH), which can later lead to fibrosis and finally cirrhosis. For the automated detection and quantification of hepatic steatosis, a novel two-stage methodology is developed in this study. Initially, the image is processed in order to become more suitable for the detection of fat regions and steatosis quantification. In the second stage, initial candidate image regions are detected and then either validated or discarded based on a series of criteria. The methodology is based on liver biopsy image analysis and has been tested using 40 liver biopsy images obtained from patients who suffer from hepatitis C. The obtained results indicate that the proposed methodology can accurately assess liver steatosis.
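
    As a minimal sketch of the two-stage idea described above: candidate bright regions are detected in the biopsy image, then each is validated or discarded by shape criteria typical of fat vacuoles. The thresholding scheme and criterion values are illustrative assumptions, not the paper's actual criteria.

```python
# Sketch: detect candidate regions, then validate by shape criteria.
# Thresholds are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def detect_fat_regions(gray, min_area=30, min_circularity=0.7):
    """Return a boolean mask of validated fat-droplet candidates."""
    # Stage 1: candidate detection (fat vacuoles appear bright/white).
    candidates = label(gray > threshold_otsu(gray))
    # Stage 2: validate or discard each candidate by shape criteria.
    keep = np.zeros(gray.shape, dtype=bool)
    for r in regionprops(candidates):
        circularity = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9)
        if r.area >= min_area and circularity >= min_circularity:
            keep[candidates == r.label] = True
    return keep

# steatosis_ratio = detect_fat_regions(img).mean()  # fat-area fraction
```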