
    Improving diagnosis and prognosis of lung cancer using vision transformers: A scoping review

    Vision transformer-based methods are advancing the field of medical artificial intelligence and cancer imaging, including lung cancer applications. Recently, many researchers have developed vision transformer-based AI methods for lung cancer diagnosis and prognosis. This scoping review aims to identify recent developments in vision transformer-based AI methods for lung cancer imaging applications. It provides key insights into how vision transformers complement the performance of AI and deep learning methods for lung cancer. Furthermore, the review also identifies the datasets that contributed to advancing the field. Of the 314 retrieved studies, this review included 34 studies published from 2020 to 2022. The most commonly addressed task in these studies was the classification of lung cancer types, such as lung squamous cell carcinoma versus lung adenocarcinoma, and identifying benign versus malignant pulmonary nodules. Other applications included survival prediction of lung cancer patients and segmentation of lungs. The studies lacked clear strategies for clinical translation. The Swin Transformer was a popular choice among researchers; however, many other architectures were also reported in which vision transformers were combined with convolutional neural networks or the UNet model. It can be concluded that vision transformer-based models are increasing in popularity for developing AI methods for lung cancer applications. However, their computational complexity and clinical relevance are important factors to be considered in future research. This review provides valuable insights for researchers in the field of AI and healthcare to advance the state of the art in lung cancer diagnosis and prognosis. We provide an interactive dashboard at lung-cancer.onrender.com/. Comment: submitted to BMC Medical Imaging journal.

    Artificial Intelligence-Based Methods for Fusion of Electronic Health Records and Imaging Data

    Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these multimodal data sources contributes to a better understanding of human health and provides optimal personalized healthcare. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of these different data modalities to provide multimodal insights. To this end, in this scoping review, we focus on synthesizing and analyzing the literature that uses AI techniques to fuse multimodal medical data for different clinical applications. More specifically, we focus on studies that fused only EHR and medical imaging data to develop various AI methods for clinical applications. We present a comprehensive analysis of the various fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, the ML algorithms used to perform multimodal fusion for each clinical application, and the available multimodal medical datasets. We followed the PRISMA-ScR guidelines and searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies. We extracted data from 34 studies that fulfilled the inclusion criteria. In our analysis, a typical workflow was observed: feeding raw data, fusing the different data modalities by applying conventional ML or deep learning (DL) algorithms, and finally evaluating the multimodal fusion through clinical outcome predictions. Early fusion was the most commonly used technique for multimodal learning (22 of the 34 studies). We found that multimodal fusion models outperformed traditional single-modality models for the same task. Disease diagnosis and prediction were the most common clinical outcomes, reported in 20 and 10 studies, respectively. Comment: Accepted in Nature Scientific Reports. 20 pages.
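
    To make the early-fusion strategy described above concrete, the following minimal Python sketch concatenates hypothetical tabular EHR features with image-derived features before training a single classifier. It illustrates the general technique only, not any reviewed study; the feature sizes, variable names, and synthetic data are all assumptions.

        # Early-fusion sketch on synthetic data (all sizes and names are assumed).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n_patients = 200

        # Assumed inputs: 10 tabular EHR features and 64 image-derived features
        # (e.g., embeddings from a pretrained CNN) per patient.
        ehr_features = rng.normal(size=(n_patients, 10))
        image_features = rng.normal(size=(n_patients, 64))
        labels = rng.integers(0, 2, size=n_patients)  # synthetic binary outcome

        # Early fusion: concatenate the modalities at the feature level,
        # then train one model on the joint representation.
        fused = np.concatenate([ehr_features, image_features], axis=1)

        X_train, X_test, y_train, y_test = train_test_split(
            fused, labels, test_size=0.25, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))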

    Discussing environmental education in everyday school life: project development at school and initial and continuing teacher education

    This study sought to discuss how Environmental Education (EE) has been addressed in primary education at a state school in the municipality of Tangará da Serra/MT, Brazil, and how the school's teachers understand EE and incorporate it into everyday school life. To this end, interviews were conducted with the teachers who take part in an interdisciplinary EE project at the school studied. It was found that the school's project has not been achieving its proposed objectives, owing to: teachers' lack of familiarity with the project; deficient teacher training; failure to understand EE as a teaching-learning process; lack of teaching resources; and inadequate planning of activities. Based on these findings, we discuss the impossibility of addressing the topic outside of interdisciplinary work and, above all, the importance of a more in-depth study of EE that links theory and practice, both in teacher education and in school projects, in order to move beyond the traditional association of “EE with ecology, garbage, and vegetable gardens”. Facultad de Humanidades y Ciencias de la Educación

    Improving diagnosis and prognosis of lung cancer using vision transformers: a scoping review

    Background: Vision transformer-based methods are advancing the field of medical artificial intelligence and cancer imaging, including lung cancer applications. Recently, many researchers have developed vision transformer-based AI methods for lung cancer diagnosis and prognosis. Objective: This scoping review aims to identify recent developments in vision transformer-based AI methods for lung cancer imaging applications. It provides key insights into how vision transformers complement the performance of AI and deep learning methods for lung cancer. Furthermore, the review also identifies the datasets that contributed to advancing the field. Methods: In this review, we searched the PubMed, Scopus, IEEE Xplore, and Google Scholar online databases. The search terms included intervention terms (vision transformers) and the task (i.e., lung cancer, adenocarcinoma, etc.). Two reviewers independently screened titles and abstracts to select relevant studies and performed the data extraction. A third reviewer was consulted to validate the inclusion and exclusion decisions. Finally, a narrative approach was used to synthesize the data. Results: Of the 314 retrieved studies, this review included 34 studies published from 2020 to 2022. The most commonly addressed task in these studies was the classification of lung cancer types, such as lung squamous cell carcinoma versus lung adenocarcinoma, and identifying benign versus malignant pulmonary nodules. Other applications included survival prediction of lung cancer patients and segmentation of lungs. The studies lacked clear strategies for clinical translation. The Swin Transformer was a popular choice among researchers; however, many other architectures were also reported in which vision transformers were combined with convolutional neural networks or the UNet model. Researchers have used the publicly available lung cancer datasets of the Lung Image Database Consortium and The Cancer Genome Atlas. One study used a cluster of 48 GPUs, while other studies used one, two, or four GPUs. Conclusion: It can be concluded that vision transformer-based models are increasing in popularity for developing AI methods for lung cancer applications. However, their computational complexity and clinical relevance are important factors to be considered in future research. This review provides valuable insights for researchers in the field of AI and healthcare to advance the state of the art in lung cancer diagnosis and prognosis. We provide an interactive dashboard at lung-cancer.onrender.com/.
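
    As a rough illustration of the hybrid architectures mentioned above, in which a vision transformer is combined with convolutional layers, the PyTorch sketch below classifies a nodule image patch as benign versus malignant. It is a toy model with assumed input size and hyperparameters, not any architecture reported in the included studies.

        # Toy hybrid CNN + transformer classifier (assumed 1-channel 64x64 CT patches).
        import torch
        import torch.nn as nn

        class HybridViT(nn.Module):
            def __init__(self, num_classes=2, dim=128):
                super().__init__()
                # Convolutional stem produces a grid of local feature "tokens".
                self.stem = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
                )  # 64x64 input -> dim x 16 x 16 feature map
                encoder_layer = nn.TransformerEncoderLayer(
                    d_model=dim, nhead=4, dim_feedforward=256, batch_first=True)
                self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
                self.head = nn.Linear(dim, num_classes)

            def forward(self, x):
                tokens = self.stem(x).flatten(2).transpose(1, 2)  # (B, 256, dim)
                tokens = self.encoder(tokens)      # self-attention over patch tokens
                return self.head(tokens.mean(dim=1))  # mean-pool, then classify

        model = HybridViT()
        logits = model(torch.randn(4, 1, 64, 64))  # 4 dummy CT patches
        print(logits.shape)  # torch.Size([4, 2])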

    Evaluation of the Phytoremediation of Oil-contaminated Soils Around Isfahan Oil Refinery

    Petroleum compounds are pollutants that most commonly occur in soils around oil refineries and that often find their way into groundwater resources. Phytoremediation is a cost-effective alternative to physicochemical methods for oil-contaminated soil remediation, where feasible. In this study, a greenhouse experiment was conducted to evaluate the phytoremediation of oil-contaminated soils around Isfahan Oil Refinery. Four different plants (namely, sorghum, barley, agropyron, and festuca) were initially evaluated in terms of their germinability in both contaminated and control (non-contaminated) soils. Sorghum and barley, which recorded the highest germinability values, were chosen as the species for use in the phytoremediation experiments. Shoot and root dry weights, total and oil-degrading bacterial counts, microbial activity, and total concentrations of petroleum hydrocarbons (TPHs) were determined at harvest, 120 days after planting. A significant difference was observed in the bacterial counts (total and oil-degrading bacteria) between the planted soils and the control. In contaminated soils, higher microbial activity was observed in the rhizosphere of sorghum than in that of barley. TPH concentrations decreased by 52%‒64% after 120 days in contaminated soil in which sorghum and barley had been cultivated, an improvement of 30% compared to the contaminated soil without plants. Based on the results obtained, sorghum and barley may be recommended for the removal of petro-contaminants in areas close to Isfahan Oil Refinery. Nevertheless, caution must be taken, as such cultivated lands may need to be protected against grazing animals.

    Construction and Eukaryotic Expression of Recombinant Large Hepatitis Delta Antigen

    Background: Hepatitis delta virus (HDV) is a subviral human pathogen that exploits host RNA editing activity to produce two essential forms of the sole viral protein, hepatitis delta antigen (HDAg). Editing at the amber/W site of HDV antigenomic RNA leads to the production of the large form (L-HDAg), which is required for RNA packaging. Methods: In this study, PCR-based site-directed mutagenesis by the overlap extension method was used in three stages to create the point mutation converting the small-HDAg (S-HDAg) stop codon to a tryptophan codon. Results: Sequencing confirmed the desired mutation and the integrity of the L-HDAg open reading frame. The amplicon was ligated into pcDNA3.1 and transfected into Huh7 and HEK 293 cell lines. Western blot analysis using enhanced chemiluminescence confirmed L-HDAg expression. The recombinant L-HDAg localized within the nuclei of cells, as determined by immunofluorescence and confocal microscopy. Conclusion: Because L-HDAg requires extensive post-translational modifications, the recombinant protein expressed in a mammalian system might be fully functional and applicable as a tool in HDV molecular studies, as well as in future vaccine research.

    The Role of Artificial Intelligence in Decoding Speech from EEG Signals: A Scoping Review

    Background: Brain traumas, mental disorders, and vocal abuse can result in permanent or temporary speech impairment, significantly reducing one’s quality of life and occasionally resulting in social isolation. Brain–computer interfaces (BCIs) can enable people who have speech impairments or who are paralyzed to communicate with their surroundings via brain signals. EEG signal-based BCIs have therefore received significant attention in the last two decades for multiple reasons: (i) clinical research has yielded detailed knowledge of EEG signals, (ii) EEG devices are inexpensive, and (iii) the technology has applications in medical and social fields. Objective: This study explores the existing literature and summarizes EEG data acquisition, feature extraction, and artificial intelligence (AI) techniques for decoding speech from brain signals. Method: We followed the PRISMA-ScR guidelines to conduct this scoping review. We searched six electronic databases: PubMed, IEEE Xplore, the ACM Digital Library, Scopus, arXiv, and Google Scholar. We carefully selected search terms based on the target intervention (i.e., imagined speech and AI) and the target data (EEG signals), and some of the search terms were derived from previous reviews. The study selection process was carried out in three phases: study identification, study selection, and data extraction. Two reviewers independently carried out study selection and data extraction. A narrative approach was adopted to synthesize the extracted data. Results: A total of 263 studies were evaluated; 34 met the eligibility criteria for inclusion in this review. We found 64-electrode EEG devices to be the most widely used in the included studies. The most common signal normalization and feature extraction approaches in the included studies were the bandpass filter and wavelet-based feature extraction. We categorized the studies based on AI techniques, such as machine learning (ML) and deep learning (DL). The most prominent ML algorithm was the support vector machine, and the most prominent DL algorithm was the convolutional neural network. Conclusions: EEG signal-based BCI is a viable technology that can enable people with severe or temporary voice impairment to communicate with the world directly from their brain. However, the development of BCI technology is still in its infancy.
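
    To illustrate the typical pipeline summarized above (bandpass filtering, wavelet-based feature extraction, and a support vector machine classifier), the sketch below runs the same steps on synthetic signals. The sampling rate, frequency band, wavelet, and channel count are assumptions for illustration, not values reported by the reviewed studies.

        # Minimal EEG-style pipeline sketch on synthetic data:
        # bandpass filter -> wavelet sub-band energies -> SVM classifier.
        import numpy as np
        import pywt
        from scipy.signal import butter, filtfilt
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        fs = 256                                   # assumed sampling rate (Hz)
        n_trials, n_channels, n_samples = 80, 8, fs * 2
        X_raw = rng.normal(size=(n_trials, n_channels, n_samples))  # synthetic "EEG"
        y = rng.integers(0, 2, size=n_trials)      # two imagined-speech classes

        # 4th-order Butterworth bandpass (assumed 8-30 Hz band).
        b, a = butter(4, [8, 30], btype="bandpass", fs=fs)

        def trial_features(trial):
            feats = []
            for ch in trial:
                filtered = filtfilt(b, a, ch)
                # Wavelet decomposition; log sub-band energies as features.
                coeffs = pywt.wavedec(filtered, "db4", level=4)
                feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
            return np.array(feats)

        X = np.array([trial_features(t) for t in X_raw])
        print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())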

    A scoping review of artificial intelligence-based methods for diabetes risk prediction

    The increasing prevalence of type 2 diabetes mellitus (T2DM) and its associated health complications highlight the need to develop predictive models for early diagnosis and intervention. While many artificial intelligence (AI) models for T2DM risk prediction have emerged, a comprehensive review of their advancements and challenges is currently lacking. This scoping review maps out the existing literature on AI-based models for T2DM prediction, adhering to the PRISMA Extension for Scoping Reviews guidelines. A systematic search of longitudinal studies was conducted across four databases: PubMed, Scopus, IEEE Xplore, and Google Scholar. Forty studies that met our inclusion criteria were reviewed. Classical machine learning (ML) models dominated these studies, with electronic health records (EHR) being the predominant data modality, followed by multi-omics, while medical imaging was the least utilized. Most studies employed unimodal AI models, with only ten adopting multimodal approaches. Both unimodal and multimodal models showed promising results, with the latter being superior. Almost all studies performed internal validation, but only five conducted external validation. Most studies used the area under the curve (AUC) as the discrimination measure. Notably, only five studies provided insights into the calibration of their models. Half of the studies used interpretability methods to identify the key risk predictors revealed by their models. Although a minority highlighted novel risk predictors, the majority reported commonly known ones. Our review provides valuable insights into the current state and limitations of AI-based models for T2DM prediction and highlights the challenges associated with their development and clinical integration.
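
    Since the review distinguishes discrimination (AUC) from calibration, which only five studies reported, the sketch below shows how both can be computed for a hypothetical risk model with scikit-learn; the data, features, and outcome are synthetic placeholders, not material from the reviewed studies.

        # Discrimination (AUC) vs. calibration for a toy risk model on synthetic data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score, brier_score_loss
        from sklearn.calibration import calibration_curve

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 6))                     # assumed EHR-style predictors
        risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
        y = rng.binomial(1, risk)                          # synthetic T2DM outcome

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        p = model.predict_proba(X_te)[:, 1]

        print("AUC (discrimination):", roc_auc_score(y_te, p))
        print("Brier score:", brier_score_loss(y_te, p))
        # Calibration curve: observed event rate per bin of predicted risk.
        obs, pred = calibration_curve(y_te, p, n_bins=10)
        for o, q in zip(obs, pred):
            print(f"predicted {q:.2f} -> observed {o:.2f}")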