
    Artificial intelligence for predictive biomarker discovery in immuno-oncology: a systematic review

    Background: The widespread use of immune checkpoint inhibitors (ICIs) has revolutionised the treatment of multiple cancer types. However, selecting patients who may benefit from ICIs remains challenging. Artificial intelligence (AI) approaches allow exploitation of high-dimensional oncological data in research and development of precision immuno-oncology. Materials and methods: We conducted a systematic literature review of peer-reviewed original articles studying ICI efficacy prediction in cancer patients across five data modalities: genomics (encompassing genomics, transcriptomics, and epigenomics), radiomics, digital pathology (pathomics), real-world data, and multimodal data. Results: A total of 90 studies were included in this systematic review, with 80% published in 2021-2022. Among them, 37 studies included genomic, 20 radiomic, 8 pathomic, 20 real-world, and 5 multimodal data. Standard machine learning (ML) methods were used in 72% of studies, deep learning (DL) methods in 22%, and both in 6%. The most frequently studied cancer type was non-small-cell lung cancer (36%), followed by melanoma (16%), while 25% were pan-cancer studies. No prospective study design incorporated AI-based methodologies from the outset; rather, all implemented AI as a post hoc analysis. Novel biomarkers for ICI in radiomics and pathomics were identified using AI approaches, and molecular biomarkers have expanded past genomics into transcriptomics and epigenomics. Finally, complex algorithms and new types of AI-based markers, such as meta-biomarkers, are emerging from the integration of multimodal/multi-omics data. Conclusion: AI-based methods have expanded the horizon for biomarker discovery, demonstrating the power of integrating multimodal data from existing datasets to discover new meta-biomarkers. While most of the included studies showed promise for AI-based prediction of benefit from immunotherapy, none provided high-level evidence for immediate practice change. A priori planned prospective trial designs are needed to cover all lifecycle steps of these software biomarkers, from development and validation to integration into clinical practice.

    SUPPORT VECTOR MACHINE FOR HUMAN IDENTIFICATION BASED ON NON-FIDUCIAL FEATURES OF THE ECG

    The demand for reliable identification systems has grown in recent years. In this study, we propose a biometric approach based on the mean frequency, median frequency, band power, and Welch power spectral density (PSD) of ECG signals. ECG signals are more secure than traditional biometric modalities because they are extremely difficult to forge or duplicate. Three support vector machine classifiers (linear, quadratic, and cubic SVM) are employed for classification. The MIT-BIH arrhythmia database is used to evaluate the proposed method. Test accuracies of 93.6%, 96.4%, and 97.0% were obtained for the linear, quadratic, and cubic SVM, respectively.
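    The feature-and-classifier pipeline described above can be sketched in a few lines. This is a minimal illustration with toy synthetic segments, an assumed 360 Hz sampling rate (as in MIT-BIH), and a simplified summary of the Welch PSD feature; it is not the study's actual code or data handling.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def spectral_features(segment, fs=360):
    """Mean frequency, median frequency, band power, and a Welch-PSD summary."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(256, segment.size))
    power = np.trapz(psd, freqs)                         # band power
    mean_freq = np.trapz(freqs * psd, freqs) / power     # spectral centroid
    cum = np.cumsum(psd) / psd.sum()
    median_freq = freqs[np.searchsorted(cum, 0.5)]       # frequency splitting power in half
    return [mean_freq, median_freq, power, psd.max()]    # PSD summarised by its peak (simplified here)

# Toy dataset: 3 "subjects", 20 two-second ECG-like segments each (assumed, for illustration).
rng = np.random.default_rng(0)
X, y = [], []
for subject in range(3):
    for _ in range(20):
        t = np.arange(0, 2, 1 / 360)
        seg = np.sin(2 * np.pi * (1.0 + 0.3 * subject) * t) + 0.1 * rng.standard_normal(t.size)
        X.append(spectral_features(seg))
        y.append(subject)

# Linear, quadratic, and cubic SVMs correspond to linear and polynomial kernels.
for name, clf in [("linear", SVC(kernel="linear")),
                  ("quadratic", SVC(kernel="poly", degree=2)),
                  ("cubic", SVC(kernel="poly", degree=3))]:
    print(name, clf.fit(X[::2], y[::2]).score(X[1::2], y[1::2]))
```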

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate the latest developments and applications of deep learning in these disciplines. However, the literature is lacking in exploring the applications of deep learning in all potential sectors. This paper thus extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL achieves high accuracy in prediction and analysis, which makes it a powerful computational tool, and its capacity to learn representations and optimize itself makes it effective at processing data without hand-crafted features. At the same time, deep learning requires massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, a network that combines neurons shared across all tasks with neurons specialized for particular tasks is necessary.
    Comment: 64 pages, 3 figures, 3 tables
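    The last two points (gated recurrent architectures and shared-plus-specialized neurons) can be illustrated with a small sketch: a GRU trunk shared by all tasks feeding task-specific heads. The task names, sizes, and data below are assumptions for illustration, not drawn from the paper.

```python
import torch
from torch import nn

class SharedTrunkMultiTask(nn.Module):
    def __init__(self, n_features=16, hidden=64):
        super().__init__()
        self.trunk = nn.GRU(n_features, hidden, batch_first=True)   # gated, shared neurons
        self.heads = nn.ModuleDict({
            "classification": nn.Linear(hidden, 5),   # e.g. 5 classes (assumed)
            "regression": nn.Linear(hidden, 1),       # e.g. one continuous target (assumed)
        })

    def forward(self, x):                              # x: (batch, time, features)
        _, h = self.trunk(x)                           # final hidden state summarises the sequence
        h = h.squeeze(0)
        return {name: head(h) for name, head in self.heads.items()}

model = SharedTrunkMultiTask()
out = model(torch.randn(4, 100, 16))                   # 4 sequences of 100 time steps
print(out["classification"].shape, out["regression"].shape)   # (4, 5) and (4, 1)
```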

    Application of radiomics in diagnosis and treatment of lung cancer

    Radiomics has become a research field that involves converting standard-of-care images into quantitative image data, which can be combined with other data sources and subsequently analyzed using traditional biostatistics or artificial intelligence (AI) methods. Because radiomics features capture biological and pathophysiological information, these quantitative features have been shown to provide fast, accurate, non-invasive biomarkers for lung cancer risk prediction, diagnosis, prognosis, treatment response monitoring, and tumor biology. In this review, we discuss radiomics in lung cancer research, including its advantages, challenges, and drawbacks.
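    To make "quantitative image data" concrete, the sketch below computes a handful of first-order (intensity histogram) features from a tumour region of interest. Real pipelines (e.g. pyradiomics) add shape and texture features; the toy CT-like values and circular mask are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def first_order_features(image, mask):
    """image: 2-D/3-D intensity array; mask: boolean array of the same shape."""
    roi = image[mask].astype(float)
    counts, _ = np.histogram(roi, bins=64)
    p = counts[counts > 0] / roi.size                  # bin probabilities
    return {
        "mean": roi.mean(),
        "variance": roi.var(),
        "skewness": stats.skew(roi),
        "kurtosis": stats.kurtosis(roi),
        "energy": np.sum(roi ** 2),
        "entropy": -np.sum(p * np.log2(p)),            # histogram entropy in bits
    }

# Toy example: a soft-tissue-density "nodule" inside a lung-like background slice (assumed values).
image = np.random.normal(-700.0, 50.0, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2
image[mask] += 650.0
print(first_order_features(image, mask))
```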

    Optimization of neural networks for deep learning and applications to CT image segmentation

    During the last few years, AI development in deep learning has been moving so fast that even prominent researchers, politicians, and entrepreneurs are signing petitions to try to slow it down. The newest methods for natural language processing and image generation are achieving results so striking that people are seriously starting to ask whether they could be dangerous for society. In reality, they are not dangerous (at the moment), even if we have to admit that we have reached a point where we no longer have full control over the flow of data inside deep networks. It is impossible to open a modern deep neural network and interpret how it processes information or, in many cases, explain how or why it returns a particular result. One of the goals of this doctoral work has been to study the behavior of weights in convolutional neural networks and in transformers. We present a work that demonstrates how to invert 3x3 convolutions after training a neural network to classify images, with the future aim of obtaining precisely invertible convolutional neural networks. We demonstrate that a simple network can learn to classify images on an open-source dataset without loss in accuracy with respect to a non-invertible one, while retaining the ability to reconstruct the original image without detectable error (on 8-bit images) through up to 20 convolutions stacked in a row. We present a thorough comparison between our method and the standard one. We tested the performance of the five most used transformers for image classification on an open-source dataset. Studying the embedding matrices, we have been able to provide two criteria that can help transformers learn with a training-time reduction of up to 30% and no impact on classification accuracy. The evolution of deep learning techniques is also touching the field of digital health. With tens of thousands of new start-ups and more than $1B of investment in the last year alone, this field is growing rapidly and promises to revolutionize healthcare. In this thesis, we present several neural networks for the segmentation of lungs, lung nodules, and areas affected by pneumonia induced by COVID-19 in chest CT scans. The architectures we used are all residual convolutional neural networks inspired by UNet and Inception, customized with novel loss functions and layers designed to achieve high performance on these particular applications. The errors on the surface of nodule segmentation masks do not exceed 1 mm in more than 99% of cases. Our algorithm for COVID-19 lesion detection has a specificity of 100% and an overall accuracy of 97.1%; it surpasses the state of the art in all the considered statistics, using UNet as a benchmark. Combining these with other algorithms able to detect and predict lung cancer, the whole work was presented in a European innovation programme and judged to be of high interest by worldwide experts. With this work, we set the basis for the future development of better AI tools in healthcare and for scientific investigation into the fundamentals of deep learning.
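    The core invertibility idea can be illustrated in a much simpler setting than the thesis's trained multi-channel networks. The sketch below (my own illustration, not the thesis's method) inverts a single-channel 3x3 convolution applied with circular padding by dividing in the Fourier domain, which is exact whenever the kernel's spectrum has no zeros.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))            # toy single-channel image (assumed)
kernel = 1.0 + rng.random((3, 3))       # positive taps make a zero in the spectrum very unlikely

# Embed the 3x3 kernel in an image-sized array so that circular convolution
# becomes multiplication of 2-D FFTs.
k_pad = np.zeros_like(image)
k_pad[:3, :3] = kernel
K = np.fft.fft2(k_pad)

blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * K))       # forward circular convolution
recovered = np.real(np.fft.ifft2(np.fft.fft2(blurred) / K))   # exact inverse up to float error

print(np.max(np.abs(recovered - image)))                      # ~1e-15
```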

    The 2023 wearable photoplethysmography roadmap

    Photoplethysmography is a key sensing technology used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities such as sleep and exercise. Yet wearable photoplethysmography has the potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.
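    As a concrete example of one parameter mentioned above, heart rate is commonly estimated from the PPG waveform by detecting systolic peaks. The sketch below uses a synthetic ~75 bpm pulse and an assumed sampling rate; it illustrates the general approach, not any method from the Roadmap itself.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                    # sampling rate in Hz (assumed)
t = np.arange(0.0, 30.0, 1.0 / fs)            # 30 s of samples
ppg = np.sin(2 * np.pi * 1.25 * t) + 0.05 * np.random.randn(t.size)   # synthetic ~75 bpm pulse + noise

# Enforce a physiologically plausible refractory period (> 0.4 s between beats).
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)
ibi = np.diff(peaks) / fs                     # inter-beat intervals in seconds
print("Estimated heart rate: %.1f bpm" % (60.0 / ibi.mean()))
```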

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
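    For context, the scanning mechanism follows the standard first-order leaky-wave relation (a textbook relation, not taken from the paper): the bias voltage changes the LC permittivity, which shifts the guided phase constant and therefore steers the main beam at a fixed frequency.

```latex
% Standard first-order leaky-wave scanning relation (illustrative, not from the paper):
% the main-beam angle \theta_m from broadside tracks the normalized phase constant,
% which the bias-dependent LC permittivity \varepsilon_r(V_{\mathrm{bias}}) tunes.
\[
  \sin\theta_m \approx \frac{\beta\bigl(\varepsilon_r(V_{\mathrm{bias}})\bigr)}{k_0},
  \qquad k_0 = \frac{2\pi f}{c_0}
\]
```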

    Brain Tumor Classification using SLIC Segmentation with Superpixel Fusion, GoogleNet, and Linear Neighborhood Semantic Segmentation

    A brain tumor is an abnormal tissue mass resulting from uncontrolled growth of cells. Brain tumors often reduce life expectancy and cause death in the later stages. Automatic detection of brain tumors is a challenging and important task in computer-aided disease diagnosis systems. This paper presents a deep learning-based approach to the classification of brain tumors. The noise in the brain MRI image is removed using Edge Directional Total Variation Denoising. The brain MRI image is segmented using SLIC segmentation with superpixel fusion. The segments are given to a trained GoogleNet model, which identifies the tumor parts in the image. Once the tumor is identified, a Convolutional Neural Network (CNN) based modified semantic segmentation model is used to classify the pixels along the edges of the tumor segments. The modified semantic segmentation uses a linear neighborhood of each pixel for better classification. The final tumor delineation is accurate because pixels at the border are classified precisely. The experimental results show that the proposed method achieves an accuracy of 97.3% with the GoogleNet classification model, and the linear neighborhood semantic segmentation delivers an accuracy of 98%.
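    A minimal sketch of the superpixel-then-classify stage is shown below: SLIC partitions an MRI slice into superpixels and a two-class GoogleNet labels the patch around each superpixel. The denoising, superpixel fusion, and linear-neighborhood refinement described above are omitted, the model is randomly initialised here (in practice it would be fine-tuned on labelled tumor patches), and recent scikit-image and torchvision versions are assumed.

```python
import numpy as np
import torch
from skimage.segmentation import slic
from torchvision import models, transforms

def classify_superpixels(mri_slice, model, n_segments=200):
    """mri_slice: HxW array scaled to [0, 1]; returns {superpixel label: class id}."""
    rgb = np.repeat(mri_slice[..., None], 3, axis=-1)        # GoogleNet expects 3 channels
    segments = slic(rgb, n_segments=n_segments, compactness=10.0)
    prep = transforms.Compose([
        transforms.ToTensor(),                               # HWC float array -> CHW tensor
        transforms.Resize((224, 224), antialias=True),
    ])
    labels = {}
    for seg_id in np.unique(segments):
        ys, xs = np.nonzero(segments == seg_id)
        patch = rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(np.float32)
        with torch.no_grad():
            logits = model(prep(patch).unsqueeze(0))
        labels[int(seg_id)] = int(logits.argmax(dim=1))      # 0/1: non-tumor / tumor (assumed classes)
    return labels

model = models.googlenet(weights=None, num_classes=2, aux_logits=False, init_weights=True)
model.eval()
print(classify_superpixels(np.random.rand(256, 256), model))
```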

    Multiphase flow measurement and data analytic based on multi-modal sensors

    Accurate multiphase flow measurement is crucial in the energy industry. Over the past decades, separating the multiphase flow into single-phase flows has been the standard method for measuring multiphase flowrate. However, in-situ, non-invasive, real-time imaging and measurement of the key parameters of multiphase flows remain a long-standing challenge. To tackle this challenge, this thesis first explores the feasibility of performing time-difference and frequency-difference imaging of multiphase flows with complex-valued electrical capacitance tomography (CVECT). A multiple measurement vector (MMV) model-based CVECT imaging algorithm is proposed to reconstruct conductivity and permittivity distributions simultaneously, and the alternating direction method of multipliers (ADMM) is applied to solve the multi-frequency image reconstruction problem. The proposed multiphase flow imaging approach is verified and benchmarked against widely adopted tomographic image reconstruction algorithms. Another focus of this thesis is multiphase flowrate estimation based on low-cost, multi-modal sensors. Machine learning (ML) has recently emerged as a powerful tool for dealing with time series sensing data from multi-modal sensors. This thesis investigates three prevailing machine learning methods, i.e., deep neural network (DNN), support vector machine (SVM), and convolutional neural network (CNN), to estimate the flowrate of oil/gas/water three-phase flows based on a Venturi tube. The CNN is improved by combining it with long short-term memory (LSTM) units, and a temporal convolutional network (TCN) model is introduced to analyse the time series sensing data collected from the Venturi tube installed in a pilot-scale multiphase flow facility. Furthermore, a multi-modal approach for multiphase flowrate measurement is developed by combining the Venturi tube and a dual-plane ECT sensor. An improved TCN model is built to predict the multiphase flowrate with various data pre-processing methods. The results provide guidance on data pre-processing methods for multiphase flowrate measurement and suggest that the proposed combination of low-cost flow sensing techniques and machine learning can effectively translate the time series sensing data into satisfactory flowrate measurements under various flow conditions.
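    To make the TCN idea concrete, the sketch below shows a small causal, dilated temporal convolutional network regressing a single flowrate from windows of multi-modal sensor time series. Channel counts, window length, and the single output are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
from torch import nn

class CausalBlock(nn.Module):
    def __init__(self, c_in, c_out, dilation, k=3):
        super().__init__()
        self.pad = (k - 1) * dilation                     # left-pad only => causal
        self.conv = nn.Conv1d(c_in, c_out, k, dilation=dilation)
        self.relu = nn.ReLU()
        self.skip = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):                                  # x: (batch, channels, time)
        y = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return self.relu(y + self.skip(x))                 # residual connection

class TCNRegressor(nn.Module):
    def __init__(self, n_sensors=6, hidden=32, levels=4):
        super().__init__()
        blocks, c = [], n_sensors
        for i in range(levels):                            # dilations 1, 2, 4, 8
            blocks.append(CausalBlock(c, hidden, dilation=2 ** i))
            c = hidden
        self.tcn = nn.Sequential(*blocks)
        self.head = nn.Linear(hidden, 1)                   # single flowrate output

    def forward(self, x):
        return self.head(self.tcn(x)[:, :, -1])            # last time step summarises the window

model = TCNRegressor()
window = torch.randn(8, 6, 256)                            # 8 windows, 6 sensor channels, 256 samples
print(model(window).shape)                                 # torch.Size([8, 1])
```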

    Cerebrovascular dysfunction in cerebral small vessel disease

    INTRODUCTION: Cerebral small vessel disease (SVD) is the cause of a quarter of all ischaemic strokes and is postulated to play a role in up to half of all dementias. SVD pathophysiology remains unclear, but cerebrovascular dysfunction may be important. If confirmed, many licensed medications have mechanisms of action targeting vascular function, potentially enabling new treatments via drug repurposing. Knowledge is limited, however, as most studies assessing cerebrovascular dysfunction are small, single-centre, single-imaging-modality studies, owing to the complexities of measuring cerebrovascular dysfunction in humans. This thesis describes the development and application of imaging techniques measuring several cerebrovascular dysfunctions to investigate SVD pathophysiology and to trial medications that may improve small blood vessel function in SVD. METHODS: Participants with minor ischaemic strokes were recruited to a series of studies utilising advanced MRI techniques to measure cerebrovascular dysfunction. Specifically, MRI scans measured the ability of different tissues in the brain to change blood flow in response to breathing carbon dioxide (cerebrovascular reactivity; CVR) and the flow and pulsatility through the cerebral arteries, venous sinuses, and CSF spaces. A single-centre observational study optimised and established the feasibility of the techniques and tested associations of cerebrovascular dysfunctions with clinical and imaging phenotypes. A randomised pilot clinical trial then tested the ability of two medications (cilostazol and isosorbide mononitrate) to improve CVR and pulsatility over a period of eight weeks. The techniques were then expanded to include imaging of blood-brain barrier permeability and utilised in multi-centre studies investigating cerebrovascular dysfunction in both sporadic and monogenic SVDs. RESULTS: Imaging protocols were feasible, consistently being completed with usable data in over 85% of participants. After correcting for the effects of age, sex, and systolic blood pressure, lower CVR was associated with higher white matter hyperintensity volume, Fazekas score, and perivascular space counts. Lower CVR was associated with higher pulsatility of blood flow in the superior sagittal sinus and lower CSF flow stroke volume at the foramen magnum. Cilostazol and isosorbide mononitrate increased CVR in white matter. The CVR, intracranial flow, and pulsatility techniques, alongside blood-brain barrier permeability and microstructural integrity imaging, were successfully employed in a multi-centre observational study. A clinical trial assessing the effects of drugs targeting blood pressure variability is nearing completion. DISCUSSION: Cerebrovascular dysfunction in SVD has been confirmed and may play a more direct role in disease pathogenesis than previously established risk factors. Advanced imaging measures assessing cerebrovascular dysfunction are feasible in multi-centre studies and trials. Identifying drugs that improve cerebrovascular dysfunction using these techniques may help select candidates for definitive clinical trials, which require large sample sizes and long follow-up periods to show improvement against outcomes of stroke and dementia incidence and cognitive function.
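    As a concrete illustration of the CVR measurement described above, cerebrovascular reactivity is commonly quantified by regressing the MRI signal against the end-tidal CO2 trace and reporting the slope as percent signal change per mmHg. The sketch below uses synthetic traces, an assumed repetition time, and a 0.2 %/mmHg ground truth; it illustrates the general approach, not the thesis's processing pipeline.

```python
import numpy as np

tr = 3.0                                             # repetition time in seconds (assumed)
t = np.arange(0, 300, tr)                            # 5-minute acquisition
etco2 = 40 + 8 * (np.sin(2 * np.pi * t / 120) > 0)   # block CO2 challenge, 40 -> 48 mmHg
signal = 1000 * (1 + 0.002 * (etco2 - etco2.mean())) + np.random.randn(t.size)  # synthetic signal

# Least-squares fit: signal = slope * EtCO2 + intercept
slope, intercept = np.polyfit(etco2, signal, 1)
baseline = intercept + slope * etco2.min()           # signal at the lowest CO2 level
cvr = 100 * slope / baseline                         # % signal change per mmHg
print("CVR ~ %.2f %% per mmHg" % cvr)                # ~0.2 for this synthetic example
```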