    Undergraduate Catalog of Studies, 2023-2024

    Differential spectrum modeling and sensitivity for keV sterile neutrino search at KATRIN

    Starting in 2026, the KATRIN experiment will conduct a high-statistics measurement of the differential tritium $\beta$-spectrum to energies deep below the kinematic endpoint. This enables the search for keV sterile neutrinos with masses below the kinematic endpoint energy, $m_4 \leq E_0 = 18.6\,\mathrm{keV}$, aiming for a statistical sensitivity of $|U_{e4}|^2 = \sin^2\theta \sim 10^{-6}$ for the mixing amplitude. The differential spectrum is obtained by decreasing the retarding potential of KATRIN's main spectrometer and by determining the $\beta$-electron energies from their energy deposition in the new TRISTAN SDD array. In this mode of operation, the existing integral model of the tritium spectrum is insufficient, and a novel differential model is developed in this work. The new model (TRModel) convolves the differential tritium spectrum with response matrices to predict the energy spectrum of registered events after data acquisition. Each response matrix encodes the spectral distortion caused by an individual experimental effect, which depends on adjustable systematic parameters. This approach allows the sensitivity impact of each systematic effect to be assessed efficiently, either individually or in combination with others. The response matrices are obtained from Monte Carlo simulations, numerical convolution, and analytical computation. In this work, the sensitivity impact of 20 systematic parameters is assessed for the TRISTAN Phase-1 measurement, for which nine TRISTAN SDD modules are integrated into the KATRIN beamline. Furthermore, it is demonstrated that the sensitivity impact is significantly mitigated with several beamline field adjustments and minimal hardware modifications.
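    The response-matrix construction described above lends itself to a simple numerical sketch. The following is a minimal illustration, assuming a binned differential spectrum and a single Gaussian energy-resolution matrix; the names (`apply_responses`, `sigma`) are hypothetical and are not taken from the TRModel code.

```python
import numpy as np

def apply_responses(spectrum, response_matrices):
    """Fold a binned differential beta-spectrum through a chain of response
    matrices, each encoding the spectral distortion of one experimental
    effect (e.g. detector response, pile-up, backscattering).

    spectrum          : 1-D array of counts per true-energy bin
    response_matrices : iterable of (n_bins, n_bins) arrays; entry [i, j] is
                        the probability that an electron from true-energy
                        bin j is registered in observed-energy bin i
    """
    observed = spectrum.copy()
    for R in response_matrices:
        observed = R @ observed  # apply one distortion after another
    return observed

# Toy usage: a flat spectrum folded with a Gaussian energy-resolution matrix.
n_bins = 200
energies = np.linspace(0.0, 18.6, n_bins)            # keV
true_spectrum = np.ones(n_bins)

sigma = 0.3                                           # keV, illustrative resolution
resolution = np.exp(-0.5 * ((energies[:, None] - energies[None, :]) / sigma) ** 2)
resolution /= resolution.sum(axis=0, keepdims=True)   # column-normalise
predicted = apply_responses(true_spectrum, [resolution])
```

    Because each effect is kept as its own matrix, switching a systematic on or off, or varying its parameter, only requires rebuilding that one matrix rather than the full spectrum model.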

    Undergraduate Catalog of Studies, 2022-2023

    Machine Learning Approaches for Semantic Segmentation on Partly-Annotated Medical Images

    Semantic segmentation of medical images plays a crucial role in assisting medical practitioners in providing accurate and swift diagnoses; nevertheless, deep neural networks require extensive labelled data to learn and generalise appropriately. This is a major issue in medical imagery because most of the datasets are not fully annotated. Models trained on partly-annotated datasets produce many predictions that fall in correct but unannotated areas and are therefore categorised as false positives; as a result, standard segmentation metrics and objective functions do not work correctly, affecting the overall performance of the models. In this thesis, the semantic segmentation of partly-annotated medical datasets is extensively and thoroughly studied. The general objective is to improve the segmentation results of medical images via innovative supervised and semi-supervised approaches. The main contributions of this work are the following. Firstly, a new metric, specifically designed for this kind of dataset, provides a reliable score for partly-annotated datasets with positive expert feedback on their generated predictions by exploiting all the confusion matrix values except the false positives. Secondly, an innovative approach generates better pseudo-labels when applying co-training with the disagreement selection strategy; this method expands the pixels in disagreement utilising the combined predictions as a guide. Thirdly, original attention mechanisms based on disagreement are designed for two cases: intra-model and inter-model. These attention modules leverage the disagreement between layers (from the same or different model instances) to enhance the overall learning process and generalisation of the models. Lastly, innovative deep supervision methods improve the segmentation results by training neural networks one subnetwork at a time, following the order of the supervision branches. The methods are thoroughly evaluated on several histopathological datasets, showing significant improvements.
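    As an illustration of the first contribution, the sketch below scores a prediction against a partial annotation using only true positives, true negatives and false negatives; the particular combination of terms shown here is only an illustrative placeholder, not the metric defined in the thesis.

```python
import numpy as np

def partial_annotation_score(pred, target):
    """Illustrative score for partly-annotated masks, built only from true
    positives, true negatives and false negatives. False positives are
    deliberately excluded, since predictions falling in unannotated regions
    may still be correct.

    pred, target : boolean arrays of the same shape (prediction / partial labels)
    """
    tp = np.logical_and(pred, target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    denom = tp + tn + fn
    return (tp + tn) / denom if denom else 0.0
```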

    Various Applications of Methods and Elements of Adaptive Optics

    This volume is focused on a wide range of topics, including adaptive optics components and tools, wavefront sensing, different control algorithms, astronomy, and propagation through turbulent and turbid media.

    Conditional resampled importance sampling and ReSTIR

    Recent work on generalized resampled importance sampling (GRIS) enables importance-sampled Monte Carlo integration with random-variable weights replacing the usual division by probability density. This enables very flexible spatiotemporal sample reuse, even if neighboring samples (e.g., light paths) have intractable probability densities. Unlike typical Monte Carlo integration, which samples according to some PDF, GRIS instead resamples existing samples. But resampling with GRIS assumes samples have tractable marginal contribution weights, which is problematic if reusing, for example, light subpaths from unidirectionally sampled paths. Reusing such subpaths requires conditioning on the (non-reused) segments of the path prefixes. In this paper, we extend GRIS to conditional probability spaces, showing correctness given certain conditional independence between integration variables and their unbiased contribution weights. We show how to condition properly when using GRIS over randomized conditional domains and how to formulate a joint unbiased contribution weight for unbiased integration. To show our theory has practical impact, we prototype a modified ReSTIR PT with a final gather pass. This reuses subpaths, postponing reuse at least one bounce along each light path. As in photon mapping, such a final gather reduces blotchy artifacts from sample correlation, and the reduced correlation improves the behavior of modern denoisers on ReSTIR PT signals.
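    A minimal sketch of the resampling step that GRIS generalises may help make the terminology concrete. The code below performs one resampled-importance-sampling pass over candidates that carry unbiased contribution weights, assuming identical candidate domains (so the MIS weight is simply 1/M); the conditional extension and joint weights introduced in the paper are not shown.

```python
import random

def resample(candidates, p_hat):
    """One RIS/GRIS-style resampling pass.

    candidates : list of (sample, W) pairs, where W is the sample's unbiased
                 contribution weight (e.g. 1/pdf for freshly drawn samples)
    p_hat      : target function the resampled distribution should follow

    Returns the selected sample and its new unbiased contribution weight.
    """
    m = 1.0 / len(candidates)                      # uniform MIS weight
    weights = [m * p_hat(x) * W for x, W in candidates]
    total = sum(weights)
    if total == 0.0:
        return None, 0.0

    # Pick one candidate proportionally to its resampling weight.
    r, acc, chosen = random.random() * total, 0.0, candidates[-1][0]
    for (x, _), w in zip(candidates, weights):
        acc += w
        if r <= acc:
            chosen = x
            break

    return chosen, total / p_hat(chosen)           # unbiased contribution weight
```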

    AI for time-resolved imaging: from fluorescence lifetime to single-pixel time of flight

    Time-resolved imaging is a field of optics which measures the arrival time of light on the camera. This thesis looks at two time-resolved imaging modalities: fluorescence lifetime imaging and time-of-flight measurement for depth imaging and ranging. Both of these applications require temporal accuracy on the order of pico- to nanosecond ($10^{-12}$–$10^{-9}$ s) scales. This demands special camera technology and optics that can sample light intensity extremely quickly, much faster than an ordinary video camera. However, such detectors can be very expensive compared to regular cameras while offering lower image quality. Further, the information of interest is often hidden (encoded) in the raw temporal data. Therefore, computational imaging algorithms are used to enhance, analyse and extract information from time-resolved images. "A picture is worth a thousand words". This describes a fundamental blessing and curse of image analysis: images contain extreme amounts of data. Consequently, it is very difficult to design algorithms that encompass all the possible pixel permutations and combinations that can encode this information. Fortunately, the rise of AI and machine learning (ML) allows us to instead create algorithms in a data-driven way. This thesis demonstrates the application of ML to time-resolved imaging tasks, ranging from parameter estimation in noisy data and decoding of overlapping information, through super-resolution, to inferring 3D information from 1D (temporal) data.
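    On the fluorescence-lifetime side, the classical baseline that learned estimators are typically compared against is a fit of an exponential decay to the photon-arrival histogram. The sketch below is a minimal log-linear least-squares version, assuming a mono-exponential decay and ignoring the instrument response function; it is not the thesis's ML pipeline.

```python
import numpy as np

def fit_lifetime(t, counts):
    """Estimate a fluorescence lifetime tau from a noisy mono-exponential
    decay histogram by a log-linear least-squares fit, skipping empty bins."""
    mask = counts > 0
    slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope                     # tau in the same units as t

# Toy usage: nanosecond-scale decay with Poisson (photon-counting) noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 256)             # ns
counts = rng.poisson(1000.0 * np.exp(-t / 2.5))
print(fit_lifetime(t, counts))              # close to the true 2.5 ns lifetime
```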

    Hardware Acceleration of Progressive Refinement Radiosity using Nvidia RTX

    A vital component of photo-realistic image synthesis is the simulation of indirect diffuse reflections, which remains a quintessential hurdle that modern rendering engines struggle to overcome. Real-time applications typically pre-generate diffuse lighting information offline using radiosity to avoid performing costly computations at run-time. In this thesis we present a variant of progressive refinement radiosity that utilizes Nvidia's novel RTX technology to accelerate the process of form-factor computation without compromising on visual fidelity. Through a modern implementation built on DirectX 12, we demonstrate that offloading radiosity's visibility component to RT cores significantly improves the lightmap generation process and potentially propels it into the domain of real-time rendering.
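    To make the structure of the algorithm concrete, the following is a minimal sketch of one progressive-refinement shooting step with a pluggable visibility query, which is the part that RT cores would accelerate. The patch representation and the `trace_visibility` hook are hypothetical and are not taken from the thesis's DirectX 12 implementation.

```python
import numpy as np

def shoot_once(patches, trace_visibility):
    """One progressive-refinement radiosity shooting step.

    patches          : list of dicts with 'pos', 'normal', 'area',
                       'reflectance', 'radiosity' and 'unshot' entries
    trace_visibility : callable(p_from, p_to) -> 0.0 or 1.0; the visibility
                       query that ray-tracing hardware would accelerate
    """
    # Shoot from the patch holding the most unshot energy (radiosity * area).
    shooter = max(patches, key=lambda p: p['unshot'] * p['area'])

    for p in patches:
        if p is shooter:
            continue
        d = p['pos'] - shooter['pos']
        r2 = float(d @ d)
        if r2 == 0.0:
            continue
        d_hat = d / np.sqrt(r2)
        cos_s = max(0.0, float(shooter['normal'] @ d_hat))
        cos_r = max(0.0, float(p['normal'] @ (-d_hat)))

        # Point-to-point form-factor approximation, gated by visibility.
        ff = cos_s * cos_r * p['area'] / (np.pi * r2)
        ff *= trace_visibility(shooter['pos'], p['pos'])

        # Energy transferred from the shooter to this receiver.
        delta = p['reflectance'] * shooter['unshot'] * ff * shooter['area'] / p['area']
        p['radiosity'] += delta
        p['unshot'] += delta

    shooter['unshot'] = 0.0   # the shooter's energy has been distributed
```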