
    The medical applications of hyperpolarized Xe and nonproton magnetic resonance imaging

    Hyperpolarized 129Xe (HP 129Xe) magnetic resonance imaging (MRI) is a relatively young field that is advancing significantly each year. Conventional proton MRI is widely used in clinical practice as an anatomical imaging modality because of its superb soft-tissue contrast. HP 129Xe MRI, on the other hand, can provide valuable information about the function and structure of internal organs. HP 129Xe MRI has recently been clinically approved for lung imaging in the United Kingdom and the United States; it allows quantitative assessment of lung function in addition to structural imaging. HP 129Xe also has unique anaesthetic properties: it transfers into the bloodstream and is carried onward to highly perfused organs. This offers the opportunity to assess brain perfusion with HP 129Xe and to perform molecular imaging. However, further progress in using HP 129Xe for brain perfusion quantification and molecular imaging is limited by the absence of certain crucial milestones. This thesis focuses on providing important stepping stones for the further development of HP 129Xe molecular imaging and brain imaging. The effect of glycation on the spectroscopic characteristics of HP 129Xe was studied in whole sheep blood with magnetic resonance spectroscopy, and an additional peak of HP 129Xe bound to glycated hemoglobin was observed. This finding should be taken into account in spectroscopic HP 129Xe studies of patients with diabetes. [...]

    Reliable Sensor Intelligence in Resource Constrained and Unreliable Environment

    The objective of this research is to design sensor intelligence that is reliable in a resource-constrained, unreliable environment. Intelligent sensor systems involve various sources of variation and uncertainty, so it is critical to build reliable sensor intelligence. Many prior works seek to achieve reliability by making the task itself robust and reliable. This thesis suggests that, alongside improving the task itself, an early warning based on task reliability quantification can further improve sensor intelligence. A DNN-based early warning generator quantifies task reliability from the spatiotemporal characteristics of the input, and the early warning controls sensor parameters and avoids system failure. This thesis presents an early warning generator that predicts task failure due to sensor-hardware-induced input corruption and controls the sensor operation. Moreover, a lightweight uncertainty estimator is presented that accounts for DNN model uncertainty in task reliability quantification without the prohibitive computation of a stochastic DNN. Cross-layer uncertainty estimation is also discussed to consider the effect of PIM variations.
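    The abstract does not spell out the estimator's design, but as a hedged sketch of the kind of single-pass alternative to a stochastic DNN it alludes to, the toy PyTorch module below predicts a task output together with a log-variance, so a single deterministic forward pass yields both a prediction and an uncertainty estimate. All class and layer names here are illustrative assumptions, not the thesis's architecture.

    ```python
    import torch
    import torch.nn as nn

    class HeteroscedasticHead(nn.Module):
        """Toy single-pass uncertainty estimator (illustrative only).

        Instead of running many stochastic forward passes (e.g. MC dropout),
        the network predicts both a task output and a log-variance, so one
        deterministic pass yields a prediction plus an uncertainty estimate.
        """
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
            self.mean_head = nn.Linear(64, out_dim)    # task prediction
            self.logvar_head = nn.Linear(64, out_dim)  # predicted uncertainty

        def forward(self, x):
            h = self.backbone(x)
            return self.mean_head(h), self.logvar_head(h)

    def gaussian_nll(mean, logvar, target):
        # Negative log-likelihood of a Gaussian with predicted variance;
        # training with this loss teaches the logvar head to flag inputs
        # on which the task output is unreliable.
        return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()
    ```

    Trained this way, the variance head supplies the reliability signal an early-warning generator needs, at the cost of one extra linear layer rather than repeated stochastic inference.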

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Data-efficient neural network training with dataset condensation

    The state of the art in many data-driven fields, including computer vision and natural language processing, typically relies on training larger models on bigger data. OpenAI reports that the computational cost to achieve the state of the art doubles every 3.4 months in the deep learning era, whereas GPU computation power doubles only every 21.4 months, which is significantly slower. Thus, advancing deep learning performance by consuming ever more hardware resources is not sustainable. How to reduce the training cost while preserving generalization performance is a long-standing goal in machine learning. This thesis investigates a largely under-explored yet promising solution, dataset condensation, which aims to condense a large training set into a small set of informative synthetic samples such that deep models trained on them achieve performance close to models trained on the original dataset. In this thesis, we investigate how to condense image datasets for classification tasks and propose three methods for image dataset condensation. Our methods can also be applied to condense other kinds of datasets for different learning tasks, such as text data, graph data and medical images, as we discuss in Section 6.1.

    First, we propose a principled method that formulates the goal of learning a small synthetic set as a gradient matching problem with respect to the gradients of deep neural network weights trained on the original and synthetic data. A new gradient/weight matching loss is designed for robust matching across different neural architectures. We evaluate its performance on several image classification benchmarks and explore the use of our method in continual learning and neural architecture search.

    In the second work, we further improve the data-efficiency of training neural networks with synthetic data by enabling effective data augmentation. Specifically, we propose Differentiable Siamese Augmentation and learn synthetic data that can be used more effectively with data augmentation, and thus achieve better performance when training networks with data augmentation. Experiments verify that the proposed method obtains substantial gains over the state of the art.

    While training deep models on the small set of condensed images can be extremely fast, synthesizing those images remains computationally expensive due to the complex bi-level optimization. Finally, we propose a simple yet effective method that synthesizes condensed images by matching the feature distributions of the synthetic and original training images when embedded by randomly sampled deep networks. Thanks to its efficiency, we apply our method to larger, more realistic datasets with sophisticated neural architectures and obtain a significant performance boost.

    In summary, this manuscript presents several important contributions that improve the data efficiency of training deep neural networks by condensing large datasets into significantly smaller synthetic ones. The innovations focus on a principled method based on gradient matching, higher data-efficiency with Differentiable Siamese Augmentation, and extremely simple and fast distribution matching without bi-level optimization. The proposed methods are evaluated on popular image classification datasets, namely MNIST, FashionMNIST, SVHN, CIFAR10/100 and TinyImageNet. The code is available at https://github.com/VICO-UoE/DatasetCondensation
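    To make the gradient matching idea concrete, here is a minimal single-step sketch in PyTorch: it matches the per-layer gradients computed on a real batch and a synthetic batch with a cosine-distance loss, then updates the synthetic images. All names are illustrative assumptions; the authors' actual implementation lives in the repository linked above.

    ```python
    import torch
    import torch.nn.functional as F

    def gradient_match_step(model, x_real, y_real, x_syn, y_syn, syn_opt):
        """One illustrative gradient-matching update for synthetic data.

        The synthetic images are optimized so that the gradient of the
        training loss w.r.t. the network weights on synthetic data matches
        the gradient on real data (here via a per-layer cosine distance).
        """
        params = [p for p in model.parameters() if p.requires_grad]

        g_real = torch.autograd.grad(
            F.cross_entropy(model(x_real), y_real), params)
        g_real = [g.detach() for g in g_real]  # targets: no grad through them

        g_syn = torch.autograd.grad(
            F.cross_entropy(model(x_syn), y_syn), params, create_graph=True)

        # Cosine-distance matching loss, summed over weight tensors.
        match = sum(1 - F.cosine_similarity(gs.flatten(), gr.flatten(), dim=0)
                    for gs, gr in zip(g_syn, g_real))

        syn_opt.zero_grad()
        match.backward()  # gradients flow into x_syn (a leaf with requires_grad)
        syn_opt.step()

    # Usage sketch:
    #   x_syn = torch.randn(B, C, H, W, requires_grad=True)
    #   syn_opt = torch.optim.SGD([x_syn], lr=0.1)
    ```

    In the full method this matching step alternates with ordinary training of the network on the synthetic set, so the synthetic images learn to reproduce the real data's training signal along the network's optimization trajectory.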

    The present and future status of heavy neutral leptons

    [Repository note: this article was written by a very large number of authors; only the first author, the authors affiliated with the UAM, and the name of the collaboration group, if any, are referenced.] The existence of nonzero neutrino masses points to the likely existence of multiple Standard Model-neutral fermions. When such states are heavy enough that they cannot be produced in oscillations, they are referred to as heavy neutral leptons (HNLs). In this white paper, we discuss the present experimental status of HNLs, including colliders, beta decay and accelerators, as well as astrophysical and cosmological impacts. We discuss the importance of continuing to search for HNLs and their potential impact on our understanding of key fundamental questions, and we additionally outline the prospects for next-generation experiments and upcoming accelerator run scenarios.

    Joint Mirror Procedure: Controlling False Discovery Rate for Identifying Simultaneous Signals

    In many applications, identifying a single feature of interest requires testing the statistical significance of several hypotheses. Examples include mediation analysis, which simultaneously examines the existence of the exposure-mediator and the mediator-outcome effects, and replicability analysis, which aims to identify simultaneous signals that exhibit statistical significance across multiple independent experiments. In this work, we develop a novel procedure, named joint mirror (JM), to detect such features while controlling the false discovery rate (FDR) in finite samples. The JM procedure iteratively shrinks the rejection region based on partially revealed information until a conservative false discovery proportion (FDP) estimate falls below the target FDR level. We propose an efficient algorithm to implement the method. Extensive simulations demonstrate that our procedure controls the modified FDR, a more stringent error measure than the conventional FDR, and provides a power improvement in several settings. Our method is further illustrated through real-world applications in mediation and replicability analyses.
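    For intuition about mirror-type FDP estimates, here is a toy Python sketch of the generic mirror-statistic construction. This is not the joint mirror procedure itself, which operates on multivariate statistics and reveals information iteratively; it only illustrates how null symmetry lets one tail of a statistic conservatively estimate false discoveries in the other. All names are illustrative.

    ```python
    import numpy as np

    def mirror_fdr_select(m_stats, q=0.1):
        """Generic mirror-statistic selection (illustrative, not the JM procedure).

        m_stats: statistics symmetric about zero under the null, with large
        positive values indicating signal. Symmetry lets the negative tail
        estimate false discoveries in the positive tail:
            FDP_hat(t) = (1 + #{M_i <= -t}) / max(1, #{M_i >= t}).
        We reject at the smallest threshold t with FDP_hat(t) <= q.
        """
        for t in np.sort(np.abs(m_stats)):
            fdp_hat = (1 + np.sum(m_stats <= -t)) / max(1, np.sum(m_stats >= t))
            if fdp_hat <= q:
                return np.where(m_stats >= t)[0]  # indices of rejections
        return np.array([], dtype=int)            # no threshold attains level q
    ```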

    Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students

    In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight in many countries, and it is creating a new generation of technologies. To unlock its potential, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change also challenges science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to that challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to value the Second Quantum Revolution from a cultural and educational perspective. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of principles and implemented in the design of a course for secondary school students and for prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback for refining and improving the instructional materials. The second part explores the Second Quantum Revolution as a context for introducing some basic concepts of quantum physics. We present the results of an implementation with secondary school students investigating whether, and to what extent, external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.

    Differentially Private Partial Set Cover with Applications to Facility Location

    It was observed by Gupta et al. (2009) that the Set Cover problem has strong impossibility results under differential privacy. In our work, we observe that these hardness results dissolve when we turn to the Partial Set Cover problem, where we only need to cover a ρ-fraction of the elements in the universe, for some ρ ∈ (0, 1). We show that this relaxation enables us to avoid the impossibility results: under loose conditions on the input set system, we give differentially private algorithms that output an explicit set cover with non-trivial approximation guarantees. In particular, this is the first differentially private algorithm that outputs an explicit set cover. Using our algorithm for Partial Set Cover as a subroutine, we give a differentially private (bicriteria) approximation algorithm for a facility location problem that generalizes k-center/k-supplier with outliers. As with Set Cover, no algorithm had been able to give non-trivial guarantees for k-center/k-supplier-type facility location problems due to the high sensitivity and impossibility results. Our algorithm shows that relaxing the covering requirement to serving only a ρ-fraction of the population, for ρ ∈ (0, 1), enables us to circumvent the inherent hardness. Overall, our work is an important step in tackling and understanding impossibility results in private combinatorial optimization. Comment: 11 pages, 2 figures. Full version of an IJCAI 2023 publication.
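    The abstract does not describe the algorithm itself. As a flavor of how private set selection is commonly done, the toy sketch below greedily builds a partial cover, picking each set with the exponential mechanism using marginal coverage gain as the utility. This is a generic pattern, not necessarily the paper's construction; ρ, the per-step ε, and all names are illustrative assumptions.

    ```python
    import numpy as np

    def private_partial_cover(sets, universe_size, rho=0.9, eps_per_step=0.5,
                              rng=np.random.default_rng(0)):
        """Toy DP-flavored greedy partial set cover (illustrative pattern only).

        Greedily picks sets until a rho-fraction of the universe is covered,
        choosing each set via the exponential mechanism with marginal coverage
        gain as utility. Assumes the family can cover the target fraction.
        """
        covered, chosen = set(), []
        while len(covered) < rho * universe_size:
            gains = np.array([len(s - covered) for s in sets], dtype=float)
            # Exponential mechanism: sample proportionally to exp(eps * u / 2).
            w = np.exp(eps_per_step * (gains - gains.max()) / 2.0)
            i = rng.choice(len(sets), p=w / w.sum())
            chosen.append(i)
            covered |= sets[i]
        return chosen

    # Example: cover 90% of {0,...,9} from a small family of sets.
    family = [set(range(0, 5)), set(range(4, 9)), {9}, {2, 7}]
    print(private_partial_cover(family, universe_size=10))
    ```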

    Towards Reliable and Accurate Global Structure-from-Motion

    Reconstruction of objects or scenes from sparse point detections across multiple views is one of the most tackled problems in computer vision. Given the coordinates of 2D points tracked in multiple images, the problem consists of estimating the corresponding 3D points and camera calibrations (intrinsics and pose), and can be solved by minimizing reprojection errors using bundle adjustment. However, given bundle adjustment's nonlinear objective function and iterative nature, a good starting guess is required to converge to the global minimum. Global and Incremental Structure-from-Motion methods are ways to provide good initializations to bundle adjustment, each with different properties. While Global Structure-from-Motion has been shown to produce more accurate reconstructions than Incremental Structure-from-Motion, the latter scales better: it starts with a small subset of images and sequentially adds new views, allowing reconstruction of sequences with millions of images. Additionally, both Global and Incremental Structure-from-Motion methods rely on accurate models of the scene or object, and under noisy conditions or high model uncertainty they might produce poor initializations for bundle adjustment. Recently, pOSE, a class of matrix factorization methods, has been proposed as an alternative to conventional Global SfM methods. These methods use VarPro, a second-order optimization method, to minimize a linear combination of an approximation of the reprojection errors and a regularization term based on an affine camera model, and have been shown to converge to global minima at a high rate even when starting from random camera calibration estimates.

    This thesis aims to improve the reliability and accuracy of global SfM through three approaches. First, by studying conditions for global optimality of point set registration, a point cloud averaging method that can be used when (incomplete) 3D point clouds of the same scene in different coordinate systems are available. Second, by extending pOSE methods to different Structure-from-Motion problem instances, such as Non-Rigid SfM and radial distortion invariant SfM. Third and finally, by replacing the regularization term of pOSE methods with an exponential regularization on the projective depths of the 3D point estimates, resulting in a loss that achieves reconstructions with accuracy close to bundle adjustment.
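    For reference, the reprojection error that bundle adjustment minimizes, in a minimal NumPy sketch under a standard pinhole camera model (function and variable names are assumptions for illustration, not the thesis's code):

    ```python
    import numpy as np

    def reprojection_error(K, R, t, X, x_obs):
        """Sum of squared reprojection errors for one pinhole camera.

        K: 3x3 intrinsics, (R, t): camera rotation and translation,
        X: Nx3 array of 3D points, x_obs: Nx2 observed 2D detections.
        Bundle adjustment minimizes this quantity jointly over all
        cameras and all 3D points.
        """
        X_cam = X @ R.T + t                     # world -> camera coordinates
        x_hom = X_cam @ K.T                     # apply intrinsics
        x_proj = x_hom[:, :2] / x_hom[:, 2:3]   # perspective division
        return np.sum((x_proj - x_obs) ** 2)
    ```

    The objective is nonlinear through the perspective division, which is why a good initialization (from Global or Incremental SfM, or from pOSE-style factorization) matters so much.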

    Utilizing Fluorescent Nanoscale Particles to Create a Map of the Electric Double Layer

    The interactions between charged particles in solution and an applied electric field follow several models, most notably the Gouy-Chapman-Stern model for the establishment of an electric double layer along the electrode, but these models make several assumptions about ionic concentrations and presume an infinite bulk solution. As more scientific progress is made on finite-volume and single-molecule reactions inside microfluidic cells, the limitations of these models become more pronounced. Thus, creating an accurate map of the precise response of charged nanoparticles in an electric field becomes increasingly vital. Another compounding factor is the inverse relationship between Brownian motion and size: large, easily observable particles have relatively small Brownian movements, while nanoscale particles are both more difficult to observe directly and undergo Brownian movements of much larger magnitude. The research presented here tackles both issues simultaneously using fluorescently tagged, negatively charged, 20 nm diameter polystyrene nanoparticles. By utilizing parallel plate electrodes within a specially constructed microfluidic device that confines motion in the z-direction, the nanoparticle movements are restricted to two dimensions. By using one axis to measure purely Brownian motion while the other axis carries both Brownian motion and ballistic movement from the applied electric field, the ballistic component can be disentangled and isolated. Using this terminal velocity to calculate the direct effect of the field on a single nanoparticle, as opposed to the response of the bulk solution, several curious phenomena were observed. The trajectory of the nanoparticle suggests that the charging time of the electrode is several orders of magnitude larger than the theoretical value, lasting over a minute instead of tens of milliseconds. Additionally, the effective electric field does not drop below the Brownian limit, but instead continues to exert an influence for far longer than the model suggests. Finally, when the electrode was toggled off, a repeatable response was observed in which the nanoparticle would immediately alter course in the direction opposite to the previously established field, rebounding with considerable force for several seconds after the potential had been cut before settling into neutral, stochastic Brownian motion. While some initial hypotheses are presented in this dissertation as possible explanations, these findings indicate the need for additional experiments to find the root cause of these unexpected results and observations.
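    A minimal sketch of the axis-separation analysis described above: the field-free axis calibrates the Brownian (diffusive) component, and the field axis then yields the ballistic drift velocity. The data layout and names are assumptions for illustration.

    ```python
    import numpy as np

    def drift_and_diffusion(x, y, dt):
        """Separate ballistic drift from Brownian motion in a 2D track.

        x: positions along the applied-field axis (drift + diffusion),
        y: positions along the perpendicular axis (diffusion only),
        dt: time between frames. Returns (v_drift, D), assuming isotropic
        diffusion so that the y-axis calibrates the Brownian component.
        """
        dx, dy = np.diff(x), np.diff(y)
        v_drift = dx.mean() / dt             # ballistic component of motion
        D = np.var(dy, ddof=1) / (2.0 * dt)  # 1D diffusion: var(step) = 2*D*dt
        return v_drift, D
    ```

    Averaging the field-axis steps cancels the zero-mean Brownian contribution, which is what allows the terminal velocity of a single 20 nm particle to be recovered despite its large stochastic motion.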