292 research outputs found

    Deep Learning Techniques for Multi-Dimensional Medical Image Analysis


    Elements of Ion Linear Accelerators, Calm in the Resonances and Other Tales

    The main part of this book, Elements of Linear Accelerators, outlines in Part 1 a framework for non-relativistic linear accelerator focusing and accelerating channel design, simulation, optimization, and analysis where space charge is an important factor. Part 1 is the most important part of the book; grasping the framework is essential to fully understand and appreciate the elements within it and the myriad application details of the following Parts. The treatment concentrates on linacs, large or small, intended for high-intensity, very-low-beam-loss, factory-type application. The Radio-Frequency Quadrupole (RFQ) is developed in particular as a representative and the most complicated linac form (from dc to bunched and accelerated beam), extending to the practical design of long, high-energy linacs, including space charge resonances and beam halo formation, and some challenges for future work. A practical method is also presented for designing Alternating-Phase-Focused (APF) linacs with long sequences and high energy gain. Full open-source software is available: the LINACS codes are released at no cost and, as always, with fully open-source coding (p. 2 & Ch. 19.10). The following part, Calm in the Resonances and Other Tales, contains eyewitness accounts of nearly 60 years of participation in accelerator technology. (September 2023) Comment: 652 pages; some hundreds of figures, all images, with no data in the figures.

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine, for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production, and milling, for quality control during manufacturing processes; in traffic and logistics, for smart cities; and for mobile communications.

    Multiscale optimisation of dynamic properties for additively manufactured lattice structures

    A framework for tailoring the dynamic properties of functionally graded lattice structures through the use of multiscale optimisation is presented in this thesis. The multiscale optimisation uses a two-scale approach to allow complex lattice structures to be simulated in real time at a computational expense similar to that of traditional finite element problems. The micro and macro scales are linked by a surrogate model that predicts the homogenised material properties of the underlying lattice geometry based on the lattice design parameters. Optimisation constraints on the resonant frequencies and the Modal Assurance Criterion are implemented that can induce the structure to resonate at specific frequencies whilst simultaneously tracking the mode shapes and ensuring the correct ones are maintained. This is where the novelty of the work lies, as dynamic properties have not previously been optimised for in a multiscale, functionally graded lattice structure. Multiscale methods offer numerous benefits and increased design freedom when generating optimal structures for dynamic environments. These benefits are showcased in a series of optimised cantilever structures. The results show a significant improvement in dynamic behaviour compared both to the unoptimised case and to a single-scale topology-optimised structure. The resonant properties of the lattice structures are validated through a series of mechanical tests on additively manufactured lattices. These tests address both the micro and the macro scale of the multiscale method. The homogeneity and surrogate-model assumptions of the micro scale are investigated through compression and tensile tests of uniform lattice samples. The resonant frequency predictions of the macro-scale optimisation are verified through mechanical shaker testing and computed tomography scans of the lattice structure. Sources of discrepancy between the predicted and observed behaviour are also investigated and explained.
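The key idea of the abstract above, a surrogate model that maps lattice design parameters to homogenised material properties so the macro-scale solver never revisits the micro scale, can be sketched as follows. The data, the Gibson-Ashby-type scaling, and the polynomial surrogate are all illustrative assumptions, not the thesis's actual model, which would be fitted to unit-cell finite element results.

```python
import numpy as np

# Hypothetical training data: relative density of a lattice unit cell vs.
# its homogenised Young's modulus (normalised by the bulk modulus).
# A Gibson-Ashby-type scaling E/Es ~ C * rho^2 is assumed for illustration.
rho = np.linspace(0.1, 0.5, 9)        # lattice design parameter samples
e_eff = 0.9 * rho**2                  # "measured" homogenised modulus

# Fit a simple polynomial surrogate: design parameter -> property.
coeffs = np.polyfit(rho, e_eff, deg=2)
surrogate = np.poly1d(coeffs)

# Macro-scale query: evaluate homogenised properties element by element
# without re-solving the micro-scale problem each time.
element_densities = np.array([0.15, 0.30, 0.45])
element_moduli = surrogate(element_densities)
print(element_moduli)
```

In an optimisation loop the surrogate is queried once per element per iteration, which is what keeps the two-scale simulation close to the cost of a conventional finite element run.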

    Towards a data-driven treatment of epilepsy: computational methods to overcome low-data regimes in clinical settings

    Epilepsy is the most common neurological disorder, affecting around 1 % of the population. One third of patients with epilepsy are drug-resistant. If the epileptogenic zone can be localized precisely, curative resective surgery may be performed. However, only 40 to 70 % of patients remain seizure-free after surgery. Presurgical evaluation, which in part aims to localize the epileptogenic zone (EZ), is a complex multimodal process that requires subjective clinical decisions, often relying on a multidisciplinary team’s experience. Thus, the clinical pathway could benefit from data-driven methods for clinical decision support. In the last decade, deep learning has seen great advancements due to the improvement of graphics processing units (GPUs), the development of new algorithms and the large amounts of generated data that become available for training. However, using deep learning in clinical settings is challenging as large datasets are rare due to privacy concerns and expensive annotation processes. Methods to overcome the lack of data are especially important in the context of presurgical evaluation of epilepsy, as only a small proportion of patients with epilepsy end up undergoing surgery, which limits the availability of data to learn from. This thesis introduces computational methods that pave the way towards integrating data-driven methods into the clinical pathway for the treatment of epilepsy, overcoming the challenge presented by the relatively small datasets available. We used transfer learning from general-domain human action recognition to characterize epileptic seizures from video–telemetry data. We developed a software framework to predict the location of the epileptogenic zone given seizure semiologies, based on retrospective information from the literature. 
We trained deep learning models using self-supervised and semi-supervised learning to perform quantitative analysis of resective surgery by segmenting resection cavities on brain magnetic resonance images (MRIs). Throughout our work, we shared datasets and software tools that will accelerate research in medical image computing, particularly in the field of epilepsy.

    Development and clinical translation of optical and software methods for endomicroscopic imaging

    Endomicroscopy is an emerging technology that aims to improve clinical diagnostics by allowing in vivo microscopy in difficult-to-reach areas of the body. This is most commonly achieved by using coherent fibre bundles to relay light for illumination and imaging to and from the area under investigation. Endomicroscopy's attraction for researchers and clinicians is two-fold: on the one hand, it can reduce the invasiveness of a diagnostic procedure by removing the need for biopsies; on the other, it allows structural and functional in vivo imaging. Endomicroscopic images acquired through optical fibre bundles exhibit artefacts that deteriorate image quality and contrast. This thesis aims to improve an existing endomicroscopy imaging system by exploring two methods that mitigate these artefacts. The first, software-based method takes several processing steps from the literature and implements them in an existing endomicroscopy device whose image quality was found to be inadequate without further processing, with a focus on real-time application to enable clinical use. A contribution to the field is that two different approaches are implemented and compared, quantitatively and qualitatively, in a manner in which they have not been compared directly before. This first attempt at improving endomicroscopy image quality relies solely on digital image processing methods and is developed with a strong focus on real-time applicability in clinical use. Both approaches are compared on pre-clinical and clinical human imaging data. The second method targets the effect of inter-core coupling, which reduces contrast in fibre images. A parallelised confocal imaging method is developed in which a sequence of images is acquired while selectively illuminating groups of fibre cores through the use of a spatial light modulator. A bespoke algorithm creates a composite image in a final processing step, detecting and removing unwanted light from the final image. This method is shown to reduce the negative impact of inter-core coupling on image contrast for small imaging targets, while no benefit was found in large, scattering samples.
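The compositing step described above, keeping only the light recorded while a pixel's own core group was illuminated, can be illustrated with a toy example. The frame sizes, the grouping, and the per-pixel selection rule are invented for this sketch; the thesis's bespoke algorithm is not public here, and a real system would derive the core-to-group map from a calibration image of the bundle.

```python
import numpy as np

# Toy setup: 3 frames acquired while illuminating 3 disjoint groups of
# fibre cores. Signal appearing outside the illuminated group is treated
# as inter-core coupling and discarded.
h, w, n_groups = 6, 6, 3
rng = np.random.default_rng(0)

# Each pixel belongs to one core group (invented here; in practice this
# comes from bundle calibration).
group_of_pixel = rng.integers(0, n_groups, size=(h, w))

# Stack of acquired frames, one per illuminated group.
frames = rng.random((n_groups, h, w))

# Composite: pixel (i, j) takes its value from frame group_of_pixel[i, j],
# i.e. from the acquisition in which its own group was lit.
composite = np.take_along_axis(
    frames, group_of_pixel[None, :, :], axis=0
)[0]
print(composite.shape)
```

Because each pixel is read out only when its own core is deliberately illuminated, light coupled into it during the other acquisitions never enters the composite, which is the mechanism behind the contrast improvement reported for small targets.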

    Electronic Imaging & the Visual Arts. EVA 2017 Florence

    The publication follows the yearly editions of EVA Florence. It presents the state of the art in the application of technologies, in particular digital ones, to cultural heritage, along with the most recent research results in the area. Information technologies of interest for cultural heritage are presented: multimedia systems, databases, data protection, access to digital content, and virtual galleries. Particular attention is given to digital images (Electronic Imaging & the Visual Arts) in cultural institutions (museums, libraries, palaces and monuments, archaeological sites). The international conference includes the following sessions: Strategic Issues; New Sciences and Culture Developments and Applications; New Technical Developments & Applications; Museums - Virtual Galleries and Related Initiatives; Art and Humanities Ecosystem & Applications; and Access to the Culture Information. Two workshops address Innovation and Enterprise, and cloud systems connected to culture (eCulture Cloud) in the smart cities context. The most recent national and international research results in the area of technologies and cultural heritage are reported, together with experimental demonstrations of the activities developed.

    Entropy in Image Analysis III

    Image analysis can be applied to rich and assorted scenarios; accordingly, the aim of this recent research field is not only to mimic the human vision system. Image analysis is among the main methods computers use today, and there is a body of knowledge suggesting that, thanks to artificial intelligence, they will be able to manage it in a totally unsupervised manner in the future. The articles published in this book clearly show such a future.
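The quantity behind the book's title, Shannon entropy computed over an image's grey-level histogram, is worth making concrete. This is a minimal sketch of the standard definition H = -Σ p·log2(p); the helper name and test images are of course this sketch's own, not the book's.

```python
import numpy as np

# Shannon entropy of an 8-bit greyscale image: a basic measure in
# entropy-based image analysis. Low entropy = little variation,
# high entropy = many grey levels used with similar frequency.
def image_entropy(img: np.ndarray) -> float:
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

flat = np.full((8, 8), 128, dtype=np.uint8)             # constant image
varied = np.arange(256, dtype=np.uint8).reshape(16, 16)  # every level once

print(image_entropy(flat))     # 0.0 - a single grey level, no uncertainty
print(image_entropy(varied))   # 8.0 - maximal for 256 equiprobable levels
```

Thresholding, segmentation, and registration methods in this research area typically optimise some function of exactly this histogram-based quantity.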

    Computed-Tomography (CT) Scan

    A computed tomography (CT) scan uses X-rays and a computer to create detailed images of the inside of the body. CT scanners measure X-ray attenuation through the different tissues of the body at many angles, by rotating both the X-ray tube and a row of X-ray detectors placed in the gantry. These measurements are then processed using computer algorithms to reconstruct tomographic (cross-sectional) images. CT can produce detailed images of many structures inside the body, including the internal organs, blood vessels, and bones. This book presents a comprehensive overview of CT scanning. Chapters address such topics as instrumental basics, CT imaging in coronavirus, radiation and risk assessment in chest imaging, positron emission tomography (PET), and feature extraction.
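The attenuation measurement described above can be made concrete with a single ray. By the Beer-Lambert law, a detector reads I = I0·exp(-Σ μᵢ·dx), so -log(I/I0) recovers the line integral of attenuation along that ray; a scanner repeats this at many angles and reconstructs μ from the full set of line integrals (e.g. by filtered back-projection). The voxel values below are invented for illustration.

```python
import numpy as np

# One ray through five voxels with per-voxel attenuation coefficients.
mu = np.array([0.02, 0.19, 0.02, 0.45, 0.02])  # attenuation (1/mm), invented
dx = 1.0                                       # path length per voxel (mm)
I0 = 1.0                                       # source intensity

# Detector measurement via the Beer-Lambert law.
I = I0 * np.exp(-np.sum(mu * dx))

# Recover the projection value (line integral of attenuation) from I.
line_integral = -np.log(I / I0)
print(line_integral)                           # equals mu.sum() * dx = 0.7
```

A sinogram is just this scalar collected for every detector position and every gantry angle; the reconstruction algorithms the book surveys invert that collection back into the cross-sectional μ map.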