397 research outputs found

    Recommended Implementation of Quantitative Susceptibility Mapping for Clinical Research in The Brain: A Consensus of the ISMRM Electro-Magnetic Tissue Properties Study Group

    This article provides recommendations for implementing quantitative susceptibility mapping (QSM) for clinical brain research. It is a consensus of the ISMRM Electro-Magnetic Tissue Properties Study Group. While QSM technical development continues to advance rapidly, current QSM methods have been demonstrated to be repeatable and reproducible for generating quantitative tissue magnetic susceptibility maps in the brain. However, the many QSM approaches available have created a need in the neuroimaging community for guidelines on implementation. This article describes the relevant considerations and provides specific implementation recommendations for all steps in QSM data acquisition, processing, analysis, and presentation in scientific publications. We recommend that data be acquired using a monopolar 3D multi-echo GRE sequence, and that phase images be saved and exported in DICOM format and unwrapped using an exact unwrapping approach. Multi-echo images should be combined before background removal, and a brain mask created using a brain extraction tool with the incorporation of phase-quality-based masking. Background fields should be removed within the brain mask using a technique based on SHARP or PDF, and the optimization approach to dipole inversion should be employed with a sparsity-based regularization. Susceptibility values should be measured relative to a specified reference, including the common reference region of the whole brain as a region of interest in the analysis, and QSM results should be reported with, as a minimum, the acquisition and processing specifications listed in the last section of the article. These recommendations should facilitate clinical QSM research and lead to increased harmonization in data acquisition, analysis, and reporting.
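The recommended ordering above (combine multi-echo images before background removal) can be illustrated with a minimal sketch of the echo-combination step: a weighted least-squares fit of unwrapped phase against echo time, through the origin. The uniform weighting, function names, and synthetic field values are illustrative assumptions, not part of the consensus recommendations.

```python
import numpy as np

def combine_echoes(phase, te, weights=None):
    """Estimate a field map (rad/s) from unwrapped multi-echo phase.

    phase: (n_echoes, ...) unwrapped phase images in radians
    te:    (n_echoes,) echo times in seconds
    Fits phase = field * TE through the origin per voxel, optionally
    weighted per echo (e.g. by squared magnitude).
    """
    te = np.asarray(te, dtype=float)
    w = np.ones_like(te) if weights is None else np.asarray(weights, dtype=float)
    num = np.tensordot(w * te, phase, axes=(0, 0))  # sum_i w_i TE_i phase_i
    den = np.sum(w * te**2)                         # sum_i w_i TE_i^2
    return num / den

# Synthetic check: a uniform 10 rad/s field on a 4x4 image
te = [0.004, 0.008, 0.012, 0.016]
true_field = 10.0 * np.ones((4, 4))
phase = np.stack([true_field * t for t in te])
est = combine_echoes(phase, te)
```

Fitting all echoes jointly before background removal, rather than processing each echo separately, is exactly what makes the combined phase less noisy at long echo times.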

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume.

    Bayesian scalar-on-image regression via random image partition models: automatic identification of regions of interest

    Scalar-on-image regression aims to investigate changes in a scalar response of interest based on high-dimensional imaging data. Such problems are increasingly prevalent in numerous domains, particularly in biomedical studies, where medical imaging data are used to capture and study the complex pattern of changes associated with disease in order to improve diagnostic accuracy. The massive dimension of the images, often in the millions, combined with modest sample sizes, typically in the hundreds in most biomedical studies, poses serious challenges. Specifically, scalar-on-image regression belongs to the “large p, small n” paradigm, and hence many models utilise shrinkage methods. However, neighbouring pixels in images are highly correlated, making standard regression methods, even with shrinkage, problematic due to multicollinearity and the high number of nonzero coefficients. We propose a novel Bayesian scalar-on-image regression model that utilises the spatial coordinates of the pixels to group those with similar effects on the response under a common coefficient, thus allowing for automatic identification of regions of interest in the image for predicting the response. In this thesis, we explore two classes of priors for the spatially dependent partition process, namely Potts-Gibbs random partition models (Potts-Gibbs) and the Ewens-Pitman attraction (EPA) distribution, and provide a thorough comparison of the two. In addition, Bayesian shrinkage priors are utilised to identify the covariates and regions that are most relevant for prediction. The proposed model is illustrated on simulated data sets and applied to identify brain regions of interest in Alzheimer’s disease.
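The "one coefficient per region" idea can be sketched with a fixed partition: once pixels are grouped, the image regression collapses to an ordinary regression on region-wise pixel sums, shrinking "large p" to the number of regions. This is a minimal sketch assuming a known partition; in the thesis the partition itself is random (Potts-Gibbs or EPA) and inferred, and Bayesian shrinkage priors replace the plain least-squares fit used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy images: 100 subjects, 8x8 pixels, flattened (n = 100, p = 64)
n, side = 100, 8
X = rng.normal(size=(n, side * side))

# A fixed partition of the 64 pixels into 4 regions; in the model the
# partition is a random object to be inferred from the data.
labels = np.repeat(np.arange(4), 16)

# Pixels in a region share one coefficient, so the design matrix
# collapses from 64 pixel columns to 4 region-sum columns.
Z = np.stack([X[:, labels == k].sum(axis=1) for k in range(4)], axis=1)

true_beta = np.array([0.5, 0.0, -0.3, 0.0])   # regions 2 and 4 irrelevant
y = Z @ true_beta + 0.01 * rng.normal(size=n)

# With p reduced below n, ordinary least squares is well posed again.
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

The recovered region coefficients are near the truth, including the (near-)zero ones that shrinkage priors would push to exactly zero.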

    Uncertainty quantification and numerical methods in charged particle radiation therapy

    Radiation therapy is applied in approximately 50% of all cancer treatments. To eliminate the tumor without damaging organs in the vicinity, optimized treatment plans are determined. This requires the calculation of three-dimensional dose distributions in a heterogeneous volume with a spatial resolution of 2-3 mm. Current planning techniques use multiple beams with optimized directions and energies to achieve the best possible dose distribution. Each dose calculation, however, requires the discretization of the six-dimensional phase space of the linear Boltzmann transport equation describing the complex particle dynamics. Despite the complexity of the problem, dose calculation errors of less than 2% are clinically recommended and computation times cannot exceed a few minutes. Additionally, the treatment reality often differs from the computed plan due to various uncertainties, for example in patient positioning, the acquired CT image, or the delineation of the tumor and organs at risk. Therefore, it is essential to include uncertainties in the planning process to determine a robust treatment plan. This entails a realistic mathematical model of the uncertainties, quantification of their effect on the dose distribution using appropriate propagation methods, as well as a robust or probabilistic optimization of treatment parameters to account for these effects. Fast and accurate calculations of the dose distribution, including predictions of uncertainties in the computed dose, are thus crucial for the determination of robust treatment plans in radiation therapy. Monte Carlo methods are often used to solve transport problems, especially for applications that require high accuracy. In these cases, common non-intrusive uncertainty propagation strategies that involve repeated simulations of the problem at different points in the parameter space quickly become infeasible due to their long run-times.
Quicker deterministic dose calculation methods allow for better incorporation of uncertainties, but often use strong simplifications or admit non-physical solutions and therefore cannot provide the required accuracy. This work is concerned with finding efficient mathematical solutions for three aspects of (robust) radiation therapy planning: 1. efficient particle transport and dose calculations, 2. uncertainty modeling and propagation for radiation therapy, and 3. robust optimization of the treatment set-up.
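The non-intrusive (sampling-based) propagation strategy described above can be sketched in one dimension: every sampled patient-positioning shift triggers a full dose recalculation, and per-voxel statistics are accumulated over the samples, which is precisely why the approach becomes infeasible when each dose calculation is expensive. The Gaussian "beam" profile and the 1 mm shift distribution are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D dose profile: a Gaussian "beam" centred on the tumour at x = 0 mm
x = np.linspace(-10.0, 10.0, 201)

def dose(x, shift=0.0):
    """Stand-in for one (expensive) deterministic or MC dose calculation."""
    return np.exp(-0.5 * ((x - shift) / 3.0) ** 2)

# Patient-positioning uncertainty: shifts ~ N(0, 1 mm). Non-intrusive
# propagation = one full dose recalculation per sampled shift.
shifts = rng.normal(loc=0.0, scale=1.0, size=2000)
samples = np.stack([dose(x, s) for s in shifts])

expected = samples.mean(axis=0)   # mean dose under setup uncertainty
spread = samples.std(axis=0)      # per-voxel dose uncertainty
```

The expected dose at the tumour centre falls below the nominal (unshifted) dose, the familiar blurring of the dose distribution under setup errors that robust optimization must account for.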

    Designing hybridization: alternative education strategies for fostering innovation in communication design for the territory

    Within the broad context of design studies, Communication Design for the Territory stands as a hybrid discipline constantly interfacing with other fields of knowledge. It assumes the territorial theme as its specific dimension, aiming to generate communication systems capable of reading the stratifications of places. From an educational perspective, teaching activities are closely linked to research and can take on different levels of complexity: from the various forms of cartographic translation to the design of sophisticated transmedia digital systems. In the wake of COVID-19, this discipline has come to terms with a profoundly changed scenario in terms of limited access to the physical space and the emergence of new technologies for remote access. In this unique context, we propose a pedagogical strategy that focuses on the hybridization of communication artifacts with the aim of fostering design experimentation. As a creative tool, hybridization leads to the design of innovative systems by strategically combining the characteristics of different artifacts to achieve specific communication goals. By experimenting with these creative strategies, students are led to critically reflect on existing communication artifacts’ features and explore original designs that deliberately combine different media, contents, and communication languages in innovative ways. Through hybridization, the methods for territorial knowledge production appear more effective, effectively combining the skills and knowledge embodied in multiple subject areas. The paper presents the experience developed in the teaching laboratories of the DCxT (Communication Design for the Territory) research group of the Design Department of Politecnico di Milano. 
The teaching experience highlights how hybridization strategies can increase effectiveness in learning about territorial specificities, in acquiring critical knowledge about communication systems, and in developing innovation strategies that allow designers to influence the evolution of traditional communication models.

    PhD Students' Day FMST 2023

    The contributions collected here were presented orally online as part of a Doctoral Students’ Day held on 15 June 2023, and they reflect the challenging work done by the students and their supervisors in the fields of metallurgy, materials engineering and management. There are 82 contributions in total, covering a range of areas – metallurgical technology, thermal engineering and fuels in industry, chemical metallurgy, nanotechnology, materials science and engineering, and industrial systems management. This represents a cross-section of the diverse topics investigated by doctoral students at the faculty, and it will provide a guide for Master’s graduates in these or similar disciplines who are interested in pursuing their scientific careers further, whether they are from the faculty here in Ostrava or engineering faculties elsewhere in the Czech Republic. The quality of the contributions varies: some are of average quality, but many reach a standard comparable with research articles published in established journals focusing on disciplines of materials technology. The diversity of topics, and in some cases the excellence of the contributions, with logical structure and clearly formulated conclusions, reflect the high standard of the doctoral programme at the faculty.

    Advantages and Disadvantages of Electronic Cigarettes

    Electronic cigarettes (ECs) have been present on the consumer market for over a decade, and the number of related scientific publications in the PubMed database has now exceeded seven thousand. Despite the number of publications, there is still no consensus in the scientific community regarding their safety. However, it should be emphasized that a comparison of equivalent quantities of tobacco smoke and the aerosol produced by e-cigarettes showed a significantly lower quantity of toxic compounds in the aerosol. Therefore, the use of ECs could be seen as a way of reducing the health damage to cigarette smokers who cannot or are unwilling to quit using conventional methods. In addition, randomized studies are emerging suggesting that ECs could be useful in smoking cessation. On the other hand, ECs are now widely used among adolescents and may pose a serious risk of future nicotine dependence and health problems in this population, which counteracts the benefits gained from smokers who quit by using them. Therefore, as most authors stress, further research is needed to convincingly resolve the current controversies. Clinicians urgently need evidence-based knowledge to better inform their patients about the use of these emerging tobacco products as a harm-reduction strategy, and regulators should regulate these products in ways that best serve public health, especially taking the youth population into account.

    Algorithms in Intersection Theory in the Plane

    This thesis presents an algorithm to find the local structure of intersections of plane curves. More precisely, we address the question of describing the scheme of the quotient ring of a bivariate zero-dimensional ideal $I \subseteq \mathbb{K}[x,y]$, i.e. finding the points (maximal ideals of $\mathbb{K}[x,y]/I$) and describing the regular functions on those points. A natural way to address this problem is via Gröbner bases, as they reduce the problem of finding the points to a problem of factorisation, and the sheaf of rings of regular functions can be studied with those bases through the division algorithm and localisation. Let $I \subseteq \mathbb{K}[x,y]$ be an ideal generated by $\mathcal{F}$, a subset of $\mathbb{A}[x,y]$ with $\mathbb{A} \hookrightarrow \mathbb{K}$ and $\mathbb{K}$ a field. We present an algorithm that features a quadratic convergence to find a Gröbner basis of $I$ or its primary component at the origin. We introduce an $\mathfrak{m}$-adic Newton iteration to lift the lexicographic Gröbner basis of any finite intersection of zero-dimensional primary components of $I$ if $\mathfrak{m} \subseteq \mathbb{A}$ is a "good" maximal ideal. It relies on a structural result about the syzygies in such a basis due to Conca & Valla (2008), from which arises an explicit map between ideals in a stratum (or Gröbner cell) and points in the associated moduli space. We also qualify what makes a maximal ideal $\mathfrak{m}$ suitable for our filtration. When the field $\mathbb{K}$ is "large enough", endowed with an Archimedean or ultrametric valuation, and admits a fraction reconstruction algorithm, we use this result to give a complete $\mathfrak{m}$-adic algorithm to recover $\mathcal{G}$, the Gröbner basis of $I$. We observe that previous results of Lazard that use Hermite normal forms to compute Gröbner bases for ideals with two generators can be generalised to a set of $n$ generators.
We use this result to obtain a bound on the height of the coefficients of $\mathcal{G}$ and to control the probability of choosing a "good" maximal ideal $\mathfrak{m} \subseteq \mathbb{A}$ to build the $\mathfrak{m}$-adic expansion of $\mathcal{G}$. Inspired by Pardue (1994), we also give a constructive proof to characterise a Zariski open set of $\mathrm{GL}_2(\mathbb{K})$ (with action on $\mathbb{K}[x,y]$) that changes coordinates in such a way as to ensure the initial term ideal of a zero-dimensional $I$ becomes Borel-fixed when $|\mathbb{K}|$ is sufficiently large. This sharpens our analysis to obtain, when $\mathbb{A} = \mathbb{Z}$ or $\mathbb{A} = k[t]$, a complexity less than cubic in terms of the dimension of $\mathbb{Q}[x,y]/\langle \mathcal{G} \rangle$ and softly linear in the height of the coefficients of $\mathcal{G}$. We adapt the resulting method and present the analysis to find the $\langle x,y \rangle$-primary component of $I$. We also discuss the transition towards other primary components via linear mappings, called untangling and tangling, introduced by van der Hoeven and Lecerf (2017). The two maps form one isomorphism to find points with an isomorphic local structure and, at the origin, bind them. We give a slightly faster tangling algorithm and discuss new applications of these techniques. We show how to extend these ideas to bivariate settings and give a bound on the arithmetic complexity for certain algebras.
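The role of a lexicographic Gröbner basis described above (reducing point-finding to a univariate factorisation problem) can be illustrated with a small computer-algebra example; the sympy call and the particular pair of conics are illustrative and unrelated to the thesis's own algorithms.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# A zero-dimensional bivariate ideal: two conics meeting in finitely
# many points, here (1, 1) and (-1, -1), each with multiplicity 2.
F = [x**2 + y**2 - 2, x*y - 1]

# A lexicographic Groebner basis (x > y) eliminates x: its univariate
# element is a polynomial in y whose roots give the y-coordinates of
# the points, and whose root multiplicities reflect the local structure.
G = groebner(F, x, y, order='lex')
elimination = [g for g in G.exprs if x not in g.free_symbols]
```

Here the univariate element is (y^2 - 1)^2, so each intersection point carries multiplicity 2, the kind of local information the thesis recovers without leaving the base ring.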

    PIE: $p$-adic Encoding for High-Precision Arithmetic in Homomorphic Encryption

    A large part of current research in homomorphic encryption (HE) aims towards making HE practical for real-world applications. In any practical HE scheme, an important issue is to convert the application data (type) to the data type suitable for the HE. The main purpose of this work is to investigate an efficient HE-compatible encoding method that is generic, and can be easily adapted to apply to the HE schemes over integers or polynomials. $p$-adic number theory provides a way to transform rationals to integers, which makes it a natural candidate for encoding rationals. Although one may use naive number-theoretic techniques to perform rational-to-integer transformations without reference to $p$-adic numbers, we contend that the theory of $p$-adic numbers is the proper lens to view such transformations. In this work we identify mathematical techniques (supported by $p$-adic number theory) as appropriate tools to construct a generic rational encoder which is compatible with HE. Based on these techniques, we propose a new encoding scheme PIE, that can be easily combined with both AGCD-based and RLWE-based HE to perform high-precision arithmetic. After presenting an abstract version of PIE, we show how it can be attached to two well-known HE schemes: the AGCD-based IDGHV scheme and the RLWE-based (modified) Fan-Vercauteren scheme. We also discuss the advantages of our encoding scheme in comparison with previous works.
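The rational-to-integer transformation that motivates the paper can be sketched with elementary modular arithmetic: a rational a/b with gcd(b, p) = 1 maps to a * b^{-1} mod p^k, arithmetic on encodings tracks arithmetic on the rationals, and decoding is rational reconstruction via the extended Euclidean algorithm. This is a generic textbook sketch, not PIE itself; the modulus 257^4 and the helper names are illustrative.

```python
import math
from fractions import Fraction

def encode(a, b, modulus):
    """Map the rational a/b (with gcd(b, modulus) = 1) to an integer residue."""
    return a * pow(b, -1, modulus) % modulus

def decode(c, modulus):
    """Recover a small rational from its residue by rational reconstruction:
    run the extended Euclidean algorithm on (modulus, c) and stop at the
    first remainder below sqrt(modulus / 2)."""
    bound = math.isqrt(modulus // 2)
    r0, r1 = modulus, c % modulus
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return Fraction(r1, t1)   # Fraction normalises sign and common factors

m = 257 ** 4
# Addition of encodings matches addition of rationals: 1/3 + 1/6 = 1/2.
s = (encode(1, 3, m) + encode(1, 6, m)) % m
```

Correct decoding requires the numerator and denominator to stay below roughly sqrt(m/2); managing that precision budget under repeated homomorphic operations is exactly the problem a scheme-level encoder has to solve.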