6,079 research outputs found

    A scoping review of educational programmes on artificial intelligence (AI) available to medical imaging staff

    Introduction: Medical imaging is arguably the most technologically advanced field in healthcare, encompassing a range of technologies which continually evolve as computing power and human knowledge expand. Artificial Intelligence (AI) is the next frontier that medical imaging is pioneering. The rapid development and implementation of AI has the potential to revolutionise healthcare; however, to do so, staff must be competent and confident in its application, making AI readiness an important precursor to AI adoption. Research into how best to deliver this AI-enabled healthcare training is in its infancy. The aim of this scoping review is to compare existing studies that investigate and evaluate the efficacy of AI educational interventions for medical imaging staff.
    Methods: Following the creation of a search strategy and keyword searches, screening was conducted to determine study eligibility: a title and abstract scan, followed by a full-text review. Articles were included if they were empirical studies in which an educational intervention on AI for medical imaging staff was created, delivered, and evaluated.
    Results: Of the 1309 records initially returned, five (~0.4%) met the eligibility criteria of the review. The curricula and delivery in each of the five studies shared similar aims, and a 'flipped classroom' was the most utilised delivery method. However, the depth of content covered in each curriculum varied, and the measured outcomes differed greatly.
    Conclusion: The findings of this review provide insights into the evaluation of existing AI educational interventions, which will be valuable when planning AI education for healthcare staff.
    Implications for practice: This review highlights the need for standardised and comprehensive AI training programmes for imaging staff

    Long-term land cover changes assessment in the Jiului Valley mining basin in Romania

    Introduction: Highlighting and assessing land cover changes in a heterogeneous landscape, such as one with surface mining activities, allows the dynamics and status of the analyzed area to be understood. This paper focuses on long-term land cover changes in the Jiului Valley, the largest mining basin in Romania, using Landsat temporal image series from 1988 to 2017.
    Methods: The images were classified using the supervised Support Vector Machine (SVM) algorithm with four kernel functions, alongside two common algorithms: Maximum Likelihood Classification (MLC) and Minimum Distance (MD). Seven major land cover classes were identified: forest, pasture, agricultural land, built-up areas, mined areas, dump sites, and water bodies. The accuracy of every classification algorithm was evaluated through independent validation, and the differences in accuracy were subsequently analyzed. Using the best-performing SVM-RBF algorithm, classified maps of the study area were produced and used to assess land cover changes by post-classification comparison (PCC).
    Results and discussion: The three algorithms achieved overall accuracies ranging from 76.56% to 90.68%. The SVM algorithms outperformed MLC by 4.87%–8.80% and MD by 6.82%–10.67%. During the studied period, the analyzed classes changed both directly and indirectly: forest, built-up areas, mined areas, and water bodies increased, whereas pasture, agricultural land, and dump areas declined. The most notable changes between 1988 and 2017 were observed in built-up and dump areas: built-up areas increased by 110.7%, while dump sites decreased by 53.0%. The mined class showed an average growth of 6.5%. By highlighting and mapping long-term land cover changes in this area, along with their underlying causes, it became possible to analyze the impact of land management and usage on sustainable development and conservation efforts over time
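    For readers who want to experiment with this workflow, the minimal Python sketch below pairs an RBF-kernel SVM with a simple post-classification comparison. The band count, class codes, and all data are synthetic assumptions, not the study's.

        # Minimal sketch (not the study's code): RBF-kernel SVM pixel
        # classification followed by post-classification comparison (PCC).
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_bands, n_classes = 6, 7  # Landsat bands; 7 land cover classes

        # Synthetic labelled pixels standing in for Landsat training samples.
        X_train = rng.normal(size=(700, n_bands))
        y_train = rng.integers(0, n_classes, size=700)

        clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # best performer above
        clf.fit(X_train, y_train)

        # Classify two dates' scenes (flattened to pixel rows), then
        # cross-tabulate class transitions between the dates (PCC).
        map_1988 = clf.predict(rng.normal(size=(10_000, n_bands)))
        map_2017 = clf.predict(rng.normal(size=(10_000, n_bands)))

        transitions = np.zeros((n_classes, n_classes), dtype=np.int64)
        np.add.at(transitions, (map_1988, map_2017), 1)
        print(transitions)  # rows: 1988 class, columns: 2017 class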

    Deep learning-based multimodality classification of chronic mild traumatic brain injury using resting-state functional MRI and PET imaging

    Mild traumatic brain injury (mTBI) is a public health concern. The present study aimed to develop an automatic classifier to distinguish between patients with chronic mTBI (n = 83) and healthy controls (HCs) (n = 40). Resting-state functional MRI (rs-fMRI) and positron emission tomography (PET) imaging were acquired from the subjects. We proposed a novel deep-learning-based framework, built around an autoencoder (AE) with rectified linear unit (ReLU) and sigmoid activation functions, to extract high-level latent features. Single- and multimodality algorithms integrating multiple rs-fMRI metrics and PET data were developed. We hypothesized that combining different imaging modalities provides complementary information and improves classification performance. Additionally, a novel data interpretation approach was utilized to identify the top-performing features learned by the AEs. Our method delivered classification accuracies in the range of 79–91.67% for single neuroimaging modalities; performance improved to 95.83% when employing the multimodality model. The models identified several brain regions located in the default mode network, sensorimotor network, visual cortex, cerebellum, and limbic system as the most discriminative features. We suggest that this approach could be extended to identify objective biomarkers for predicting mTBI in clinical settings
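    A minimal sketch of the general architecture described above, assuming a PyTorch setup: an autoencoder with ReLU and sigmoid activations learns a latent code from concatenated rs-fMRI and PET features, and a linear head classifies mTBI versus controls. Layer sizes, the joint training objective, and all names are illustrative assumptions, not the study's implementation.

        import torch
        import torch.nn as nn

        class AEClassifier(nn.Module):
            def __init__(self, n_features, n_latent=32):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_features, 128), nn.ReLU(),
                    nn.Linear(128, n_latent), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.Linear(n_latent, 128), nn.ReLU(),
                    nn.Linear(128, n_features), nn.Sigmoid(),  # inputs in [0, 1]
                )
                self.head = nn.Linear(n_latent, 1)  # logit: mTBI vs. HC

            def forward(self, x):
                z = self.encoder(x)
                return self.decoder(z), self.head(z)

        model = AEClassifier(n_features=400)  # e.g., fMRI metrics + PET
        recon_loss, clf_loss = nn.MSELoss(), nn.BCEWithLogitsLoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        x = torch.rand(16, 400)                   # toy multimodal batch
        y = torch.randint(0, 2, (16, 1)).float()  # toy labels
        x_hat, logits = model(x)
        loss = recon_loss(x_hat, x) + clf_loss(logits, y)  # joint objective
        opt.zero_grad(); loss.backward(); opt.step()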

    Pedestrian level of service for sidewalks in Tangier City

    The pedestrian level of service (PLOS) is a measure that quantifies walkway comfort. PLOS is defined in six categories (A, B, C, D, E, and F), each covering a range of values: level A denotes the best traffic condition, grading down to level F, the worst (high congestion). This article aims to define the PLOS of sidewalks under the walking conditions of Tangier City (Morocco). Sidewalks in the urban center of Tangier City are analyzed using video recording; the collected data are pedestrian flow and effective sidewalk width. Each level of service corresponds to a range of pedestrian flow values. Clustering with a self-organizing map (SOM) is used to identify the threshold of each level. The results differ from those obtained with other methods because pedestrian traffic differs from country to country
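    As a rough illustration of the clustering step, the sketch below fits a one-dimensional self-organizing map with six nodes (one per LOS level) to synthetic pedestrian flow data and reads per-node flow ranges as candidate thresholds. It relies on the third-party MiniSom package; the flow values, map size, and training parameters are assumptions, not the study's data.

        import numpy as np
        from minisom import MiniSom

        rng = np.random.default_rng(0)
        # Pedestrian flow per metre of effective sidewalk width (synthetic).
        flows = rng.uniform(1, 90, size=(500, 1))

        som = MiniSom(1, 6, 1, sigma=0.8, learning_rate=0.5)  # 6 nodes = A-F
        som.train_random(flows, 2000)

        # Best-matching node per observation; order nodes by mean flow so
        # the lowest-flow cluster maps to level A (best conditions).
        nodes = np.array([som.winner(v)[1] for v in flows])
        means = [flows[nodes == k].mean() if np.any(nodes == k) else np.inf
                 for k in range(6)]
        for level, k in zip("ABCDEF", np.argsort(means)):
            vals = flows[nodes == k]
            if len(vals):
                print(f"LOS {level}: {vals.min():.1f}-{vals.max():.1f} ped/min/m")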

    Land use classification in mine-agriculture compound area based on multi-feature random forest: a case study of Peixian

    Introduction: Land use classification plays a critical role in analyzing land use/cover change (LUCC). Remote sensing land use classification based on machine learning algorithms is one of the hot spots in current remote sensing research. The diversity of surface objects and the complexity of their distribution in mixed mining and agricultural areas challenge traditional remote sensing image classification, and the rich information contained in remote sensing images has not been fully utilized.
    Methods: A quantitative difference index was proposed to quantify and select the texture features of easily confused land types, and a random forest (RF) classification method with multi-feature combination classification schemes was developed; land use information for the mine-agriculture compound area of Peixian in Xuzhou, China was then extracted.
    Results: The quantitative difference index proved effective in reducing the dimensionality of the feature parameters, cutting the optimal feature scheme from 57 to 22 dimensions. Among the four classification methods based on the optimal feature scheme, the RF algorithm was the most effective, with a classification accuracy of 92.38% and a Kappa coefficient of 0.90, outperforming the support vector machine (SVM), classification and regression tree (CART), and neural network (NN) algorithms.
    Conclusion: The findings indicate that the quantitative difference index is a novel and effective approach for discerning distinct texture features among various land types, and it plays a crucial role in the selection and optimization of texture features in multispectral remote sensing imagery. The RF classification method, leveraging a multi-feature combination, provides fresh methodological support for the precise classification of intricate ground objects within the mine-agriculture compound area
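    The following Python sketch illustrates the two steps on synthetic data: a per-feature separability score between two easily confused classes (an illustrative stand-in for the paper's quantitative difference index, whose exact formula is not reproduced here) prunes the feature set from 57 to 22 dimensions before random forest classification.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 57))    # 57 candidate features per pixel
        y = rng.integers(0, 6, size=1000)  # synthetic land use labels

        def separability(X, y, a, b):
            """Normalized mean difference of each feature between classes."""
            xa, xb = X[y == a], X[y == b]
            return np.abs(xa.mean(0) - xb.mean(0)) / (xa.std(0) + xb.std(0) + 1e-9)

        # Keep the 22 most separable features for a confusable class pair.
        idx = np.argsort(separability(X, y, a=1, b=2))[::-1][:22]

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X[:800][:, idx], y[:800])
        pred = rf.predict(X[800:][:, idx])
        print("accuracy:", accuracy_score(y[800:], pred))
        print("kappa:", cohen_kappa_score(y[800:], pred))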

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Data- and expert-driven variable selection for predictive models in healthcare: towards increased interpretability in underdetermined machine learning problems

    Modern data acquisition techniques in healthcare generate large collections of data from multiple sources, such as novel diagnosis and treatment methodologies. Concrete examples include electronic healthcare record systems, genomics, and medical images. This leads to situations with often unstructured, high-dimensional, heterogeneous patient cohort data, where classical statistical methods may not be sufficient for optimal utilization of the data and informed decision-making. Instead, investigating such data structures with modern machine learning techniques promises to improve the understanding of patient health issues and may provide a better platform for informed decision-making by clinicians. Key requirements for this purpose include (a) sufficiently accurate predictions and (b) model interpretability. Achieving both aspects in parallel is difficult, particularly for datasets with few patients, which are common in the healthcare domain. In such cases, machine learning models encounter mathematically underdetermined systems and may easily overfit the training data. An important approach to overcome this issue is feature selection, i.e., determining a subset of features that are informative with respect to the target variable. While potentially raising predictive performance, feature selection fosters model interpretability by identifying a low number of relevant model parameters, helping to better understand the underlying biological processes that lead to health issues. Interpretability requires that feature selection is stable, i.e., small changes in the dataset do not lead to changes in the selected feature set. A concept to address instability is ensemble feature selection, i.e., repeating the feature selection multiple times on subsets of samples of the original dataset and aggregating the results in a meta-model. This thesis presents two approaches for ensemble feature selection tailored towards high-dimensional data in healthcare: the Repeated Elastic Net Technique for feature selection (RENT) and the User-Guided Bayesian Framework for feature selection (UBayFS). While RENT is purely data-driven and builds upon elastic net regularized models, UBayFS is a general framework for ensembles with the capability to include expert knowledge in the feature selection process via prior weights and side constraints. A case study modeling the overall survival of cancer patients compares these novel feature selectors and demonstrates their potential in clinical practice. Beyond the selection of single features, UBayFS also allows for selecting whole feature groups (feature blocks) acquired from multiple data sources, such as those mentioned above. Importance quantification of such feature blocks plays a key role in tracing information about the target variable back to the acquisition modalities. Such information on feature block importance may improve the use of human, technical, and financial resources if systematically integrated into the planning of patient treatment, by excluding the acquisition of non-informative features. Since generalizing feature importance measures to block importance is not trivial, this thesis also investigates and compares approaches for feature block importance rankings. This thesis demonstrates that high-dimensional datasets from multiple data sources in the medical domain can be successfully tackled by the presented approaches for feature selection. Experimental evaluations demonstrate favorable properties in terms of predictive performance, stability, and interpretability of results, which carries a high potential for better data-driven decision support in clinical practice
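    A minimal sketch of the ensemble idea behind RENT, on synthetic data: an elastic net regularized model is refitted on repeated subsamples, and features selected in a large fraction of runs are kept as stable. The subsample count, threshold, and selection criterion are simplified assumptions; the actual RENT criteria are richer than this.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.utils import resample

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 500))  # few patients, many features
        w = np.zeros(500); w[:5] = 2.0  # 5 truly informative features
        y = (X @ w + rng.normal(size=80) > 0).astype(int)

        K, counts = 100, np.zeros(X.shape[1])
        for k in range(K):
            Xs, ys = resample(X, y, n_samples=60, random_state=k)
            model = LogisticRegression(
                penalty="elasticnet", solver="saga", l1_ratio=0.5, C=0.1,
                max_iter=5000,
            ).fit(Xs, ys)
            counts += model.coef_.ravel() != 0  # track selection frequency

        stable = np.where(counts / K >= 0.9)[0]  # kept in >= 90% of runs
        print("stable features:", stable)        # ideally indices 0-4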

    AI-based design methodologies for hot form quench (HFQ®)

    This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) stamping process for forming complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, such simulations are not suitable for early-phase design due to their high computational cost and expertise requirements. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine learning-based platform that can optimise 3D geometries under hot stamping constraints, for both early- and late-stage design. These methodologies aim to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits. To achieve these objectives, two main efforts were undertaken. Firstly, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution; new equation sets were proposed to model the trends, and design maps were created to guide component design at early stages. Secondly, a platform was developed to optimise 3D geometries for stamping, using deep learning to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs) and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs to the geometry generator based on the surrogate model's manufacturability information. The effectiveness of the platform was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints. Results showed that the platform allowed free morphing of complex geometries, leading to significant improvements in component quality. The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. The potential applications of these methodologies extend to real-world industrial settings and can contribute significantly to the continued advancement of the manufacturing sector.
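    A minimal sketch, assuming PyTorch, of the gradient-based loop described above: a frozen generator maps a latent code to a geometry, a frozen surrogate predicts a manufacturability measure (here, worst-case thinning), and gradients flow through both networks to update only the latent code. Stand-in MLPs replace the thesis's SDF generator and image-based surrogate; all sizes, names, and the loss terms are assumptions.

        import torch
        import torch.nn as nn

        generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 256))
        surrogate = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
        for p in list(generator.parameters()) + list(surrogate.parameters()):
            p.requires_grad_(False)  # pretrained and frozen; only z is optimised

        z = torch.randn(1, 16, requires_grad=True)  # latent code for one geometry
        opt = torch.optim.Adam([z], lr=1e-2)
        thinning_limit = 0.15  # post-stamped thinning constraint (assumed)

        for step in range(200):
            shape = generator(z)         # stand-in for an SDF geometry
            thinning = surrogate(shape)  # predicted worst-case thinning
            # Penalise constraint violation; a small norm on z keeps the
            # design near its starting point as a simple regulariser.
            loss = torch.relu(thinning - thinning_limit).sum() + 1e-3 * z.pow(2).sum()
            opt.zero_grad(); loss.backward(); opt.step()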

    Immune contexture monitoring in solid tumors focusing on Head and Neck Cancer

    Strong evidence demonstrates a close interplay between the immune system and the biological development and clinical progression of solid tumors. The effect that the tumor immune microenvironment can have on the clinical behavior of the disease is referred to as the immune contexture. Nevertheless, the current clinical management of patients affected by cancer does not take any immunological features into account, either for staging or for treatment choices. Head and neck cancer (HNSCC) is the 7th most common cancer worldwide and is characterized by a relatively poor prognosis and the detrimental effect of treatments on patients' quality of life. Beyond surgery and radiotherapy, few systemic treatments are available, mainly platinum-based chemotherapy or cetuximab. Immunotherapy is a new therapeutic strategy still limited to the palliative setting (recurrent unresectable or metastatic disease). The search for new biomarkers or new targetable mechanisms is therefore especially meaningful in the clinical setting of HNSCC. This thesis focuses on the study of three possible pro-tumoral immune populations in HNSCC: tumor-associated neutrophils (TAN), intratumoral B-cells with an immunosuppressive phenotype, and CD8+ T-regs. 
    Particular attention is given to the application of modern biostatistical and bioinformatic techniques to summarize complex information derived from multiparametric clinical and immunological variables and to validate in-situ findings through gene expression data from publicly available datasets. Lastly, the second part of the thesis considers relevant clinical research projects aimed at improving precision oncology in HNSCC: developing survival prediction models, comparing alternative oncological procedures, validating new classifiers, and testing novel clinical protocols such as the use of immunonutrition

    Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students

    In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock its potential, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change also challenges science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to the challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to highlight the cultural and educational value of the Second Quantum Revolution. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of design principles for a course for secondary school students and prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback for refining and improving the instructional materials. The second part explores the Second Quantum Revolution as a context for introducing basic concepts of quantum physics. We present the results of an implementation with secondary school students investigating whether, and to what extent, external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world