6 research outputs found

    A data mining based clinical decision support system for survival in lung cancer

    Background: A clinical decision support system (CDSS) has been designed to predict the outcome (overall survival) of lung cancer patients by extracting and analyzing information from routine clinical activity, as a complement to clinical guidelines. Materials and methods: Prospective multicenter data from 543 consecutive (2013–2017) lung cancer patients, covering 1167 variables, were used to develop the CDSS. Data mining analyses were based on the XGBoost and Generalized Linear Model algorithms. The predictions from the guidelines and from the proposed CDSS were compared. Results: Overall, the highest (> 0.90) areas under the receiver-operating characteristic curve (AUCs) for predicting survival were obtained for small cell lung cancer patients. The AUCs for predicting survival using basic items included in the guidelines were mostly below 0.70, while those obtained using the CDSS were mostly above 0.70. The vast majority of comparisons between the guideline and CDSS AUCs were statistically significant (p < 0.05). For instance, using the guidelines, the AUC for predicting survival was 0.60, while the CDSS enhanced the AUC up to 0.84 (p = 0.0009). In terms of histology, there was only a statistically significant difference when comparing the AUCs of small cell lung cancer patients (0.96) and all lung cancer patients with longer (≥ 18 months) follow-up (0.80; p < 0.001). Conclusions: The CDSS showed potential for enhancing prediction of survival. The CDSS could assist physicians in formulating evidence-based management advice for patients with lung cancer, guiding an individualized discussion according to prognosis. Funding: Instituto de Salud Carlos III PI16/02104; Junta de Andalucía PIN-0476-2017; Ministerio de Economía y Competitividad FPAP13-1E-242.
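
    As an illustration of the kind of comparison described above (a sketch under assumed data, not the authors' pipeline), the snippet below trains an XGBoost classifier and a generalized linear model (logistic regression, standing in for the GLM on a binary survival endpoint) on placeholder arrays and scores both by AUC; the feature matrix and survival labels are synthetic stand-ins for the study's 1167 clinical variables.

```python
# Minimal sketch: XGBoost vs. a generalized linear model (logistic regression)
# for a binary overall-survival outcome, scored by area under the ROC curve.
# All data below are synthetic placeholders, not the study's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(543, 50))        # placeholder clinical features
y = rng.integers(0, 2, size=543)      # placeholder survival label (1 = alive at cutoff)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

glm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="logloss").fit(X_tr, y_tr)

auc_glm = roc_auc_score(y_te, glm.predict_proba(X_te)[:, 1])
auc_xgb = roc_auc_score(y_te, xgb.predict_proba(X_te)[:, 1])
print(f"GLM AUC: {auc_glm:.2f}  XGBoost AUC: {auc_xgb:.2f}")
```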

    Gas Detection and Identification Using Multimodal Artificial Intelligence Based Sensor Fusion

    With rapid industrialization and technological advancement, innovative engineering technologies that are cost effective, faster, and easier to implement are essential. One such area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical industries, home appliances, etc. In this paper we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tasteless, thereby challenging our normal human senses. Sensing based on a single sensor may not be accurate, and sensor fusion is essential for robust and reliable detection in several real-world applications. We manually collected 6400 gas samples (1600 samples per class for four classes) using two specific sensors: a 7-semiconductor gas sensor array and a thermal camera. The early fusion method of multimodal AI is applied. The network architecture consists of a feature extraction module for each modality, which are then fused using a merge layer followed by a dense layer that provides a single output identifying the gas. We obtained a testing accuracy of 96% for the fused model, as opposed to individual model accuracies of 82% (gas sensor data using an LSTM) and 93% (thermal image data using a CNN model). The results demonstrate that the fusion of multiple sensors and modalities outperforms the outcome of a single sensor. Comment: 14 pages, 9 figures.
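
    The fusion architecture described above can be sketched roughly as follows; this is an illustrative reconstruction only, and the input shapes, layer sizes, and numbers of LSTM and convolutional units are assumptions rather than the paper's exact configuration.

```python
# Illustrative early-fusion sketch (shapes and sizes are assumptions): an LSTM branch
# for the 7-channel gas-sensor time series and a CNN branch for thermal images,
# concatenated and passed through a dense head predicting one of four gas classes.
from tensorflow import keras
from tensorflow.keras import layers

# Gas-sensor branch: sequences of 7 sensor readings over an assumed 100 time steps.
gas_in = keras.Input(shape=(100, 7), name="gas_sequence")
g = layers.LSTM(64)(gas_in)

# Thermal-image branch: assumed 64x64 single-channel thermal frames.
img_in = keras.Input(shape=(64, 64, 1), name="thermal_image")
c = layers.Conv2D(16, 3, activation="relu")(img_in)
c = layers.MaxPooling2D()(c)
c = layers.Conv2D(32, 3, activation="relu")(c)
c = layers.GlobalAveragePooling2D()(c)

# Fusion: merge the per-modality features, then a dense layer and a softmax output.
merged = layers.concatenate([g, c])
h = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(4, activation="softmax", name="gas_class")(h)

model = keras.Model(inputs=[gas_in, img_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```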

    Classification of Data from Electronic Nose Using Gradient Tree Boosting Algorithm

    In this paper, an approach that can quickly classify data from an electronic nose is presented. In this approach the gradient tree boosting algorithm is used to classify the gas data, and the experimental results show that the proposed gradient tree boosting algorithm achieved high performance on this classification problem, outperforming the other algorithms used for comparison. In addition, the electronic nose we used requires only a few seconds of data after the gas reaction begins. Therefore, the proposed approach can realize fast recognition of a gas, as it does not need to wait for the gas reaction to reach steady state.
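
    A minimal sketch of this approach, assuming synthetic data in place of the electronic-nose recordings: early-window sensor readings are flattened into fixed-length feature vectors and classified with scikit-learn's gradient tree boosting. The number of sensors, time steps, and gas classes below are placeholders, not the paper's setup.

```python
# Gradient tree boosting on electronic-nose features taken from the first few
# seconds of the sensor response. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_samples, n_sensors, n_steps = 600, 8, 20      # assumed: 8 sensors, early-window readings
X = rng.normal(size=(n_samples, n_sensors * n_steps))   # flattened sensor responses
y = rng.integers(0, 5, size=n_samples)                  # assumed five gas classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=1, stratify=y)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```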

    Virtual Synaesthesia: Crossmodal Correspondences and Synesthetic Experiences

    As technology develops to allow for the integration of additional senses into interactive experiences, there is a need to bridge the divide between the real and the virtual in a manner that stimulates the five senses consistently and in harmony with the sensory expectations of the user. Applying the philosophy of a neurological condition known as synaesthesia, together with crossmodal correspondences, defined as the coupling of the senses, can provide numerous cognitive benefits and offers an insight into which senses are most likely to be ‘bound’ together. This thesis aims to present a design paradigm called ‘virtual synaesthesia’. The goal of the paradigm is to make multisensory experiences more human-orientated by considering how the brain combines senses both in the general population (crossmodal correspondences) and within a select few individuals (natural synaesthesia). Towards this aim, a literature review is conducted covering the related areas of research umbrellaed by the concept of ‘virtual synaesthesia’: natural synaesthesia, crossmodal correspondences, multisensory experiences, and sensory substitution/augmentation. This thesis examines augmenting interactive and multisensory experiences with strong (natural synaesthesia) and weak (crossmodal correspondences) synaesthesia, and answers the following research questions: Is it possible to replicate the underlying cognitive benefits of odour-vision synaesthesia? Do people have consistent correspondences between olfaction and an aggregate of different sensory modalities? What is the nature and origin of these correspondences? And is it possible to predict the crossmodal correspondences attributed to odours? The benefits of augmenting a human-machine interface using an artificial form of odour-vision synaesthesia are explored to answer these questions. This concept is exemplified by transducing odours with a custom-made electronic nose and transforming each odour's ‘chemical footprint’ into a 2D abstract shape representing the current odour; electronic noses sample odours in the vapour phase, generating a series of electrical signals that represent the current odour source. Weak synaesthesia (crossmodal correspondences) is then investigated to determine whether people have consistent correspondences between odours and the angularity of shapes, the smoothness of texture, perceived pleasantness, pitch, and musical and emotional dimensions. Following on from this research, the nature and origin of these correspondences are explored in terms of their underlying hedonic (values relating to pleasantness), semantic (knowledge of the identity of the odour) and physicochemical (the physical and chemical characteristics of the odour) dependencies. The final research chapter investigates the possibility of removing the bottleneck of extensive human trials by developing machine learning models that predict the crossmodal perception of odours from their underlying physicochemical features. The work presented in this thesis provides insight into and evidence of the benefits of incorporating the concept of ‘virtual synaesthesia’ into human-machine interfaces, and contributes to research on the methodology it embodies, namely crossmodal correspondences.
Overall, the work presented in this thesis shows potential for augmenting multisensory experiences with more refined capabilities, leading to more enriched experiences, better designs, and a more intuitive way to convey information crossmodally.
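
    The final-chapter idea of predicting crossmodal correspondences from physicochemical features could look roughly like the following; this is a hedged sketch with placeholder data, and the random-forest regressor and the "angularity rating" target are chosen purely for illustration, not the model or dataset used in the thesis.

```python
# Sketch: map placeholder physicochemical descriptors of odours to a crossmodal
# rating (here, assumed shape angularity) as a stand-in for extensive human trials.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 30))     # placeholder physicochemical descriptors per odour
y = rng.uniform(-1, 1, size=120)   # placeholder rating (-1 = rounded, +1 = angular)

model = RandomForestRegressor(n_estimators=300, random_state=2)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

    In practice, any regressor could fill this role; the design point is that once such a model generalizes to unseen odours, it can screen candidate stimuli before committing to human trials.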