405 research outputs found

    Spatial frequency domain imaging towards improved detection of gastrointestinal cancers

    Get PDF
    Early detection and treatment of gastrointestinal cancers have been shown to drastically improve patients' survival rates. However, wide population-based screening for gastrointestinal cancers is not feasible due to its high cost, risk of complications, and time-consuming nature. This thesis proposes the development of a cost-effective, minimally invasive device that returns quantitative tissue information for in-vivo gastrointestinal cancer detection using spatial frequency domain imaging (SFDI). SFDI is a non-invasive imaging technique that can return near-real-time maps of absorption and reduced scattering coefficients by projecting a 2D sinusoidal pattern onto a sample of interest. First, a low-cost, conventional benchtop system was constructed to characterise tissue-mimicking phantoms. Phantoms were fabricated with specific absorption and reduced scattering coefficients, mimicking the variation in optical properties typically seen in healthy, cancerous, and pre-cancerous oesophageal tissue. The system retrieves absorption and reduced scattering coefficients with 19% and 11% error, respectively. However, this benchtop system relies on a bulky projector and is therefore not feasible for in-vivo imaging. For SFDI systems to be feasible in vivo, they must be miniaturised, which requires accounting for conditions such as varied illumination patterns, ambient lighting, and system geometries. Therefore, to aid the miniaturisation of the benchtop system, an SFDI system was simulated in the open-source ray-tracing software Blender, where these conditions can be reproduced. A material with tunable absorption and scattering properties was characterised so that its specific absorption and reduced scattering coefficients were known.
The simulated system can detect the optical properties of typical gastrointestinal conditions in an up-close, planar geometry, as well as in the non-planar geometry of a tube simulating a lumen. Optical property imaging in the non-planar, tubular geometry used a novel illumination pattern developed for this work. Finally, using the knowledge gained from the simulation model, the benchtop system was miniaturised to a 3 mm diameter prototype, in which a fiber array producing the necessary interfering fringe patterns replaced the bulky projector. The system imaged phantoms simulating typical gastrointestinal conditions at two wavelengths (515 and 660 nm), measuring absorption and reduced scattering coefficients to within 15% and 6% of the benchtop system for the fabricated phantoms. It is proposed that this system may be used for cost-effective, minimally invasive, quantitative imaging of the gastrointestinal tract in vivo, providing enhanced contrast for difficult-to-detect cancers.
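SFDI typically recovers its amplitude maps by projecting the sinusoid at three phase offsets and combining the captured images. A minimal sketch of that standard three-phase demodulation step follows; the synthetic frames, amplitude, and spatial frequency are illustrative values, not data from the thesis:

```python
import numpy as np

def demodulate_sfdi(i1, i2, i3):
    """Standard three-phase SFDI demodulation.

    i1, i2, i3: images captured under sinusoidal illumination shifted
    by 0, 2*pi/3, and 4*pi/3. Returns (ac, dc) amplitude maps, from
    which diffuse reflectance and, via a light-transport model, the
    absorption and reduced scattering coefficients are recovered.
    """
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic check: a pure sinusoid of amplitude A and offset B
x = np.linspace(0.0, 2.0 * np.pi, 256)
A, B = 0.3, 0.8
frames = [B + A * np.sin(5 * x + p) for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
ac, dc = demodulate_sfdi(*frames)
print(ac.mean(), dc.mean())  # recovers A and B exactly for a pure sinusoid
```

For a pure sinusoid the demodulation is exact at every pixel, which is why this step is robust to the spatial frequency chosen.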

    Roadmap for optical tweezers

    Full text link
    Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects, ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in the life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nano-particle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration. Funded by the European Commission (Horizon 2020, Project No. 812780).
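The femtonewton-level force measurements mentioned in the roadmap rest on calibrating the trap stiffness; one textbook route is the equipartition method, sketched below with a synthetic bead track (the temperature and 10 nm position noise are assumed illustrative values, not figures from the roadmap):

```python
import random
import statistics

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=295.0):
    """Equipartition calibration: for a bead in a harmonic optical trap,
    <x^2> = kB*T/k, so the stiffness k follows directly from the
    variance of the recorded bead positions (in metres)."""
    return KB * temperature_k / statistics.pvariance(positions_m)

# Synthetic bead track: Gaussian position fluctuations of 10 nm RMS
random.seed(1)
track = [random.gauss(0.0, 10e-9) for _ in range(200_000)]

k = trap_stiffness(track)   # ~4e-5 N/m for these assumed values
f_fn = k * 0.1e-9 * 1e15    # restoring force at 0.1 nm, in femtonewtons
print(k, f_fn)
```

With these assumed numbers, a 0.1 nm displacement corresponds to a restoring force of a few femtonewtons, which is the force scale the roadmap refers to.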

    Metallurgical Process Simulation and Optimization

    Get PDF
    Metallurgy involves the art and science of extracting metals from their ores and modifying the metals for use. With thousands of years of development, many interdisciplinary technologies have been introduced into this traditional and large-scale industry. In modern metallurgical practice, modelling and simulation are widely used to provide solutions in the areas of design, control, optimization, and visualization, and are becoming increasingly significant in the progress of digital transformation and intelligent metallurgy. This Special Issue (SI), entitled “Metallurgical Process Simulation and Optimization”, has been organized as a platform to present recent advances in the modelling and optimization of metallurgical processes, covering electric/oxygen steel-making, secondary metallurgy, (continuous) casting, and processing. Eighteen articles are included, concerning various aspects of the topic.

    Roadmap for Optical Tweezers 2023

    Get PDF
    Optical tweezers are tools made of light that enable contactless pushing, trapping, and manipulation of objects ranging from atoms to space light sails. Since the pioneering work by Arthur Ashkin in the 1970s, optical tweezers have evolved into sophisticated instruments and have been employed in a broad range of applications in life sciences, physics, and engineering. These include accurate force and torque measurement at the femtonewton level, microrheology of complex fluids, single micro- and nanoparticle spectroscopy, single-cell analysis, and statistical-physics experiments. This roadmap provides insights into current investigations involving optical forces and optical tweezers from their theoretical foundations to designs and setups. It also offers perspectives for applications to a wide range of research fields, from biophysics to space exploration.

    New methods for continuous non-invasive blood pressure measurement

    Get PDF
    The main objective of this work is to find a new methodology for measuring continuous non-invasive blood pressure based on the pulse wave velocity in the vascular system. The work is based on a literature review of the basic model for determining continuous non-invasive blood pressure from pulse transit time measurements, and its extensions. Using the information obtained from the review, the methodology for measuring pulse transit time / pulse wave velocity was modified to achieve more accurate results and to reduce the human factor, which causes significant inaccuracy through imperfect sensor placement. The review also discusses in detail the models for continuous non-invasive blood pressure estimation and the modifications that increase their accuracy. In particular, these modifications concern input parameters describing the circulation: systemic vascular resistance, vascular elasticity, and vascular stiffness. The thesis also modifies an existing physical vascular model to more closely mimic the real vascular system of the human body; these modifications include a baroreflex function and the simulation of different wall stiffness in the artificial arterial segments. As this is a simulation model of the vascular system, measuring the pressure and volume pulse waves is also an important step, and conventional photoplethysmography sensors cannot be used because the circulating fluid lacks light-absorbing particles.
Based on experimental measurements for different settings of the vascular model, pulse waves were measured using pressure and capacitive sensors, with subsequent processing of the measured signals and detection of the features characterising the pulse wave. Predictive regression models built on these features showed sufficient accuracy, and two methods were then developed to obtain a vascular wall stiffness parameter from the measurable parameters: a predictive regression model, which showed an accuracy of 74.1%, and an adaptive neuro-fuzzy inference system, which showed an accuracy of 98.7%. The resulting pulse wave velocity determinations were verified by further direct pulse wave measurements and the results were compared. The dissertation thus determines pulse wave propagation velocity using only one plethysmographic sensor, without the need for measurements at two different locations with accurate distance measurement, and with potential application in clinical practice.
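A common baseline form for the pulse-transit-time-to-pressure models discussed above is a two-parameter logarithmic calibration. The sketch below fits that form to hypothetical calibration pairs; both the data and the model form are illustrative, not the thesis's regression or neuro-fuzzy models:

```python
import math

def fit_ptt_bp_model(ptt_s, sbp_mmhg):
    """Least-squares fit of the two-parameter model SBP = a*ln(PTT) + b,
    a common baseline in pulse-transit-time blood-pressure estimation.
    a and b are per-subject calibration values, not universal constants."""
    n = len(ptt_s)
    xs = [math.log(t) for t in ptt_s]
    mx, my = sum(xs) / n, sum(sbp_mmhg) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, sbp_mmhg)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration pairs: shorter transit time -> higher pressure
ptt = [0.180, 0.170, 0.160, 0.150, 0.140]   # seconds
sbp = [112.0, 117.0, 123.0, 129.0, 136.0]   # mmHg
a, b = fit_ptt_bp_model(ptt, sbp)
estimate = a * math.log(0.155) + b          # SBP estimate for PTT = 155 ms
print(round(estimate, 1))
```

The fitted slope is negative, reflecting the inverse relation between transit time and pressure; in practice such a model is recalibrated per subject, which is the inaccuracy the thesis targets.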

    Exploring the adoption of a conceptual data analytics framework for subsurface energy production systems: a study of predictive maintenance, multi-phase flow estimation, and production optimization

    Get PDF
    As technology continues to advance and becomes more integrated into the oil and gas industry, a vast amount of data is now available across various scientific disciplines, providing new opportunities to gain insightful and actionable information. The convergence of digital transformation with the physics of fluid flow through porous media and pipelines has driven the advancement and application of machine learning (ML) techniques to extract further value from these data. As a result, digital transformation and its associated machine-learning applications have become a new area of scientific investigation. The transformation of brownfields into digital oilfields can aid energy production by accomplishing various objectives, including increased operational efficiency, production optimization, collaboration, data integration, decision support, and workflow automation.
This work aims to present a framework for these applications, specifically through the implementation of virtual sensing, predictive analytics using predictive maintenance on production hydraulic systems (with a focus on electrical submersible pumps), and prescriptive analytics for production optimization in steam and waterflooding projects. In terms of virtual sensing, accurate estimation of multi-phase flow rates is crucial for monitoring and improving production processes. This study presents a data-driven approach for calculating multi-phase flow rates using sensor measurements from electrical submersible pump (ESP) wells. An exhaustive exploratory data analysis is conducted, including a univariate study of the target outputs (liquid rate and water cut), a multivariate study of the relationships between inputs and outputs, and data grouping based on principal component projections and clustering algorithms. Feature prioritization experiments are performed to identify the most influential parameters in the prediction of flow rates. Models are compared using the mean absolute error, mean squared error, and coefficient of determination. The results indicate that the CNN-LSTM network architecture is particularly effective for time series analysis of ESP sensor data, as the 1D-CNN layers automatically extract features and generate informative representations of the time series. Subsequently, the study presents a methodology for implementing predictive maintenance on artificial lift systems, specifically the maintenance of electrical submersible pumps (ESPs). Conventional maintenance practices for ESPs require extensive resources and manpower and are often initiated through reactive monitoring of multivariate sensor data.
To address this issue, the study employs principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to analyze real-time sensor data and predict potential failures in ESPs. PCA is utilized as an unsupervised technique, and its output is further processed by the XGBoost model to predict the system status. The resulting predictive model has been shown to provide signals of potential failures up to seven days in advance, with an F1 score greater than 0.71 on the test set. In addition to the data-driven modeling approach, the study also incorporates model-free reinforcement learning (RL) algorithms to aid decision-making in production optimization. The task of determining the optimal injection strategy poses challenges due to the complexity of the underlying dynamics, including nonlinear formulation, temporal variations, and reservoir heterogeneity. To tackle these challenges, the problem was reformulated as a Markov decision process and RL algorithms were employed to determine actions that maximize production. The results demonstrate that the RL agent was able to significantly enhance the net present value (NPV) by continuously interacting with the environment and iteratively refining the dynamic process over multiple episodes. This showcases the potential of RL algorithms to provide effective and efficient solutions for complex optimization problems in the production domain. In conclusion, this study represents an original contribution to the field of data-driven applications in subsurface energy systems. It proposes a data-driven method for determining multi-phase flow rates in electrical submersible pump (ESP) wells using sensor measurements. The methodology includes exploratory data analysis, feature prioritization experiments, and model evaluation based on mean absolute error, mean squared error, and coefficient of determination.
The findings indicate that a convolutional neural network-long short-term memory (CNN-LSTM) network is an effective approach for time series analysis in ESPs. In addition, the study implements principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to perform predictive maintenance on ESPs and anticipate potential failures up to a seven-day horizon. Furthermore, the study applies model-free reinforcement learning (RL) algorithms to aid decision-making in production optimization and enhance net present value (NPV).
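The PCA-plus-boosted-trees stage described above can be sketched as a two-step pipeline. In this sketch the "pre-failure drift" sensor data are synthetic and invented for illustration, and scikit-learn's GradientBoostingClassifier stands in for the XGBoost model used in the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multivariate ESP sensor windows: 8 channels,
# where hypothetical "pre-failure" windows drift away from normal operation.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(600, 8))
prefail = rng.normal(1.5, 1.2, size=(200, 8))
X = np.vstack([normal, prefail])
y = np.array([0] * 600 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Step 1: unsupervised PCA compresses the sensor channels.
# Step 2: a boosted-tree classifier predicts system status from the scores.
pca = PCA(n_components=4).fit(X_tr)
clf = GradientBoostingClassifier(random_state=0).fit(pca.transform(X_tr), y_tr)
f1 = f1_score(y_te, clf.predict(pca.transform(X_te)))
print(round(f1, 2))
```

On this easy synthetic task the F1 score is high; the study's reported F1 > 0.71 on real ESP data reflects the much harder, noisier real-world setting.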

    Intraoperative Quantification of Bone Perfusion in Lower Extremity Injury Surgery

    Get PDF
    Orthopaedic surgery is one of the most common surgical categories. In particular, lower extremity injuries sustained from trauma can be complex and life-threatening and are addressed through orthopaedic trauma surgery. Timely evaluation and surgical debridement following lower extremity injury are essential, because devitalized bones and tissues result in high surgical site infection rates. However, the current clinical judgment of what constitutes “devitalized tissue” is subjective and dependent on surgeon experience, so imaging techniques for guiding surgical debridement are needed to control infection rates and improve patient outcomes. In this thesis work, computational models of fluorescence-guided debridement in lower extremity injury surgery are developed by quantifying bone perfusion intraoperatively using a dynamic contrast-enhanced fluorescence imaging (DCE-FI) system. Perfusion is an important indicator of tissue viability, and quantifying perfusion is therefore essential for fluorescence-guided debridement. Chapters 3-7 of this thesis explore the performance of DCE-FI in quantifying perfusion from benchtop to translation: we proposed a modified fluorescent microsphere quantification technique using a cryomacrotome in an animal model.
This technique can measure bone perfusion in the periosteum and endosteum separately, thereby validating the bone perfusion measurements obtained by DCE-FI. We developed a pre-clinical rodent contaminated-fracture model to correlate DCE-FI with infection risk and compared it with multi-modality scanning. Furthermore, in clinical studies, we investigated first-pass kinetic parameters of DCE-FI and arterial input functions to characterise perfusion changes during lower limb amputation surgery. We conducted the first in-human use of dynamic contrast-enhanced texture analysis for orthopaedic trauma classification, suggesting that spatiotemporal features from DCE-FI can classify bone perfusion intraoperatively with high accuracy and sensitivity. We established a clinical machine-learning infection-risk prediction model for open fracture surgery, in which pixel-scale prediction of infection risk is accomplished. In conclusion, pharmacokinetic and spatiotemporal patterns of dynamic contrast-enhanced imaging show great potential for quantifying bone perfusion and prognosing bone infection. This thesis work aims to decrease surgical site infection risk and improve the success rate of lower extremity injury surgery.
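First-pass kinetic analysis of a DCE-FI intensity curve can be illustrated with two simple descriptors, time-to-peak and maximum ingress slope, which are representative of the kinetic features used to characterise perfusion. This is a minimal sketch on a synthetic bolus curve, not the thesis's kinetic model:

```python
import numpy as np

def first_pass_metrics(t, intensity):
    """Two simple first-pass descriptors of a DCE fluorescence curve:
    time-to-peak and maximum ingress slope, the kind of kinetic
    features used to characterise tissue perfusion."""
    t = np.asarray(t, float)
    i = np.asarray(intensity, float)
    peak = int(np.argmax(i))
    ttp = t[peak] - t[0]                                    # time-to-peak
    slope = np.max(np.diff(i[:peak + 1]) / np.diff(t[:peak + 1]))
    return ttp, slope

# Synthetic bolus curve: gamma-variate-like rise and exponential washout
t = np.linspace(0.0, 60.0, 601)
curve = (t / 12.0) ** 2 * np.exp(-(t - 12.0) / 8.0)
ttp, slope = first_pass_metrics(t, curve)
print(ttp, slope)   # the analytic peak of this curve sits at t = 16 s
```

Well-perfused tissue typically shows a faster ingress (steeper slope, shorter time-to-peak) than devitalized tissue, which is what makes such features candidates for guiding debridement.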

    From Fully-Supervised Single-Task to Semi-Supervised Multi-Task Deep Learning Architectures for Segmentation in Medical Imaging Applications

    Get PDF
    Medical imaging is routinely performed in clinics worldwide for the diagnosis and treatment of numerous medical conditions in children and adults. With these medical imaging modalities, radiologists can visualize both the structure of the body and the tissues within it. However, analyzing these high-dimensional (2D/3D/4D) images demands a significant amount of time and effort from radiologists. Hence, there is an ever-growing need for medical image computing tools that extract relevant information from the image data and help radiologists work efficiently. Image analysis based on machine learning has pivotal potential to improve the entire medical imaging pipeline, providing support for clinical decision-making and computer-aided diagnosis. Deep learning approaches have shown significant performance improvements on challenging image analysis tasks such as classification, detection, registration, and segmentation in medical imaging applications. While deep learning has shown its potential in a variety of medical image analysis problems, including segmentation and motion estimation, generalizability is still an unsolved problem, and many of these successes are achieved at the cost of a large pool of datasets. For most practical applications, getting access to a copious dataset can be very difficult, often impossible, and annotation is tedious and time-consuming. This cost is further amplified when annotation must be done by a clinical expert. Additionally, the application of deep learning in real-world clinical settings is still limited by the lack of reliability caused by the limited prediction capabilities of some deep learning models. Moreover, when using a CNN in an automated image analysis pipeline, it is critical to understand which segmentation results are problematic and require further manual examination.
To this end, the estimation of uncertainty calibration in a semi-supervised setting for medical image segmentation is still rarely reported. This thesis focuses on developing and evaluating optimized machine learning models for a variety of medical imaging applications, ranging from fully-supervised, single-task learning to semi-supervised, multi-task learning that makes efficient use of annotated training data. The contributions of this dissertation are as follows: (1) developing fully-supervised, single-task transfer learning for surgical instrument segmentation from laparoscopic images; (2) utilizing supervised, single-task transfer learning for segmenting and digitally removing surgical instruments from endoscopic/laparoscopic videos to allow visualization of the anatomy obscured by the tool, where the tool removal algorithms use a tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (inpaint) the instrument segmentation mask; (3) developing fully-supervised, single-task learning via efficient weight pruning and learned group convolution for accurate left ventricle (LV) and right ventricle (RV) blood pool and myocardium localization and segmentation from 4D cine cardiac MR images; (4) demonstrating the use of our fully-supervised, memory-efficient model to generate dynamic patient-specific right ventricle (RV) models from a cine cardiac MRI dataset via an unsupervised learning-based deformable registration field; (5) integrating Monte Carlo dropout into our fully-supervised, memory-efficient model for inherent uncertainty estimation, with the overall goal of estimating the uncertainty associated with the obtained segmentation and flagging regions with less-than-optimal segmentation results; (6) developing semi-supervised, single-task learning via self-training (through meta pseudo-labeling) in concert with a Teacher network that instructs the Student network by generating pseudo-labels for unlabeled input data; (7) proposing largely-unsupervised, multi-task learning to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two of the most critical tasks in medical imaging: segmentation of cardiac structures and reconstruction of cine cardiac MR images; and (8) demonstrating the use of 3D semi-supervised, multi-task learning for jointly learning multiple tasks in a single backbone module (uncertainty estimation, geometric shape generation, and segmentation of the left atrial cavity) from 3D gadolinium-enhanced magnetic resonance (GE-MR) images. This dissertation demonstrates the adaptation and use of deep learning architectures featuring different levels of supervision to build a variety of image segmentation tools and techniques that can be used across a wide spectrum of medical image computing applications, facilitating and promoting widespread computer-integrated diagnosis and therapy data science.
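The Monte Carlo dropout idea in contribution (5) can be illustrated with a toy one-layer sigmoid model: dropout stays active at inference, and the spread across stochastic forward passes flags low-confidence predictions. All weights and inputs here are invented for illustration; the thesis applies the same idea to a segmentation CNN:

```python
import numpy as np

def mc_dropout_predict(x, w, n_samples=200, p_drop=0.5, seed=0):
    """Monte Carlo dropout at inference: run several stochastic forward
    passes with dropout active and use the spread of the outputs as an
    uncertainty estimate for each prediction."""
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_samples):
        mask = rng.random(w.shape) > p_drop           # random dropout mask
        logits = x @ (w * mask) / (1.0 - p_drop)      # inverted-dropout scaling
        outs.append(1.0 / (1.0 + np.exp(-logits)))    # "foreground" probability
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)        # prediction, uncertainty

w = np.ones(20)
x = np.array([np.full(20, 0.5),    # strong evidence -> confident prediction
              np.full(20, 0.01)])  # weak evidence  -> should be flagged
mean, unc = mc_dropout_predict(x, w)
print(mean.round(3), unc.round(3))
```

The weak-evidence input yields a prediction near 0.5 with a larger spread across passes, which is exactly the signal used to flag segmentation regions for manual review.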

    Thermal-Hydraulics in Nuclear Fusion Technology: R&D and Applications

    Get PDF
    In nuclear fusion technology, thermal-hydraulics is a key discipline employed in the design phase of systems and components to demonstrate performance and to ensure reliability and efficient, economical operation. In ITER, thermal-hydraulic analysis covers the transients of the engineering systems, including safety analysis. Thermal-hydraulics is required for the design and analysis of the cooling and ancillary systems, such as the blanket, the divertor, the cryogenic system, and the balance-of-plant systems, as well as the tritium carrier, extraction, and recovery systems. This Special Issue collects and documents recent scientific advancements including, but not limited to: thermal-hydraulic analyses of systems and components, including magneto-hydrodynamics; safety investigations of systems and components; numerical model and code development and application; code-coupling methodology; code assessment and validation, including benchmarks; experimental infrastructure design and operation; experimental campaigns and investigations; and scaling issues in experiments.