Differential spectrum modeling and sensitivity for keV sterile neutrino search at KATRIN
Starting in 2026, the KATRIN experiment will conduct a high-statistics measurement of the differential tritium β-spectrum to energies deep below the kinematic endpoint. This enables the search for keV sterile neutrinos with masses below the kinematic endpoint energy, aiming for high statistical sensitivity for the mixing amplitude. The differential spectrum is obtained by decreasing the retarding potential of KATRIN's main spectrometer and by determining the β-electron energies from their energy deposition in the new TRISTAN SDD array. In this mode of operation, the existing integral model of the tritium spectrum is insufficient, and a novel differential model is developed in this work.
The new model (TRModel) convolves the differential tritium spectrum with response matrices to predict the energy spectrum of registered events after data acquisition. Each response matrix encodes the spectral distortion from an individual experimental effect, which depends on adjustable systematic parameters. This approach allows the sensitivity impact of each systematic effect to be assessed efficiently, either individually or in combination with others. The response matrices are obtained from Monte Carlo simulations, numerical convolution, and analytical computation.
In this work, the sensitivity impact of 20 systematic parameters is assessed for the TRISTAN Phase-1 measurement, for which nine TRISTAN SDD modules are integrated into the KATRIN beamline. Furthermore, it is demonstrated that the sensitivity impact is significantly mitigated with several beamline field adjustments and minimal hardware modifications.
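The response-matrix approach described above can be illustrated with a minimal sketch. This is not the actual TRModel code; the energy grid, toy spectrum shape, and Gaussian detector resolution are illustrative assumptions. The key idea is that each experimental effect becomes a matrix whose columns are normalized probability distributions, so effects can be chained as matrix products and applied to any model spectrum.

```python
import numpy as np

# Illustrative sketch, not the actual TRModel implementation:
# predict the detected spectrum by applying a response matrix to a
# model differential spectrum; multiple effects chain as matrix products.
E = np.linspace(0.0, 18.6, 1000)               # energy grid in keV (tritium endpoint ~18.6 keV)
spectrum = np.clip(18.6 - E, 0.0, None) ** 2   # toy beta-like differential spectrum

def gaussian_response_matrix(E, sigma):
    """Response matrix for a detector with Gaussian energy resolution."""
    diff = E[:, None] - E[None, :]
    R = np.exp(-0.5 * (diff / sigma) ** 2)
    return R / R.sum(axis=0, keepdims=True)    # each column sums to 1 (a probability)

R_det = gaussian_response_matrix(E, sigma=0.3)  # assumed ~300 eV resolution
detected = R_det @ spectrum                     # predicted detected spectrum
# Because every column is normalized, the total event count is preserved.
```

Because a systematic parameter (here `sigma`) only enters through its own matrix, its impact can be varied without recomputing the rest of the chain.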
Synaptic plasticity and memory addressing in biological and artificial neural networks
Biological brains are composed of neurons, interconnected by synapses to create large complex networks. Learning and memory occur, in large part, due to synaptic plasticity -- modifications in the efficacy of information transmission through these synaptic connections. Artificial neural networks model these with neural "units" which communicate through synaptic weights. Models of learning and memory propose synaptic plasticity rules that describe and predict the weight modifications. An equally important but under-evaluated question is the selection of which synapses should be updated in response to a memory event. In this work, we attempt to separate the questions of synaptic plasticity from that of memory addressing.
Chapter 1 provides an overview of the problem of memory addressing and a summary of the solutions that have been considered in computational neuroscience and artificial intelligence, as well as those that may exist in biology. Chapter 2 presents in detail a solution to memory addressing and synaptic plasticity in the context of familiarity detection, suggesting strong feedforward weights and anti-Hebbian plasticity as the respective mechanisms. Chapter 3 proposes a model of recall, with storage performed by addressing through local third factors and neo-Hebbian plasticity, and retrieval by content-based addressing. In Chapter 4, we consider the problem of concurrent memory consolidation and memorization. Both storage and retrieval are performed by content-based addressing, but the plasticity rule itself is implemented by gradient descent, modulated according to whether an item should be stored in a distributed manner or memorized verbatim. However, the classical method for computing gradients in recurrent neural networks, backpropagation through time, is generally considered unbiological. In Chapter 5 we suggest a more realistic implementation through an approximation of recurrent backpropagation.
Taken together, these results propose a number of potential mechanisms for memory storage and retrieval, each of which separates the mechanism of synaptic updating -- plasticity -- from that of synapse selection -- addressing. Explicit studies of memory addressing may find applications not only in artificial intelligence but also in biology. In artificial networks, for example, selectively updating memories in large language models can help improve user privacy and security. In biological ones, understanding memory addressing can help improve health outcomes and treat memory-based illnesses such as Alzheimer's disease or PTSD.
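The familiarity-detection mechanism of Chapter 2 (strong feedforward weights plus anti-Hebbian plasticity) can be sketched in a few lines. This is a toy illustration, not the thesis model: the network size, learning rate, and random inputs are assumptions. Repeated presentations of a pattern depress exactly the synapses that drive the readout, so a familiar input eventually evokes a much weaker response than a novel one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
w = rng.normal(1.0, 0.1, n)      # strong initial feedforward weights

def present(x, w, eta=0.01):
    """One presentation: compute readout, then apply an anti-Hebbian update."""
    y = w @ x                    # readout activity
    w = w - eta * y * x          # depress synapses in proportion to co-activity
    return y, w

familiar = rng.random(n)
for _ in range(50):              # repeated presentations depress the response
    y_fam, w = present(familiar, w)

novel = rng.random(n)
y_nov = novel @ w
# The familiar input now evokes a far weaker response than the novel one,
# so a simple threshold on the readout implements familiarity detection.
```

Note how the same rule solves both sub-problems at once: the input pattern itself "addresses" which synapses change, and the anti-Hebbian sign does the updating.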
The Monarch Initiative in 2024: an analytic platform integrating phenotypes, genes and diseases across species.
Bridging the gap between genetic variations, environmental determinants, and phenotypic outcomes is critical for supporting clinical diagnosis and understanding disease mechanisms. It requires integrating open data at a global scale. The Monarch Initiative advances these goals by developing open ontologies, semantic data models, and knowledge graphs for translational research. The Monarch App is an integrated platform combining data about genes, phenotypes, and diseases across species. Monarch's APIs enable access to carefully curated datasets and advanced analysis tools that support the understanding and diagnosis of disease for diverse applications such as variant prioritization, deep phenotyping, and patient profile-matching. We have migrated our system into a scalable, cloud-based infrastructure; simplified Monarch's data ingestion and knowledge graph integration systems; enhanced data mapping and integration standards; and developed a new user interface with novel search and graph navigation features. Furthermore, we advanced Monarch's analytic tools by developing a customized plugin for OpenAI's ChatGPT to increase the reliability of its responses about phenotypic data, allowing us to interrogate the knowledge in the Monarch graph using state-of-the-art Large Language Models. The resources of the Monarch Initiative can be found at monarchinitiative.org and its corresponding code repository at github.com/monarch-initiative/monarch-app.
Configuration Management of Distributed Systems over Unreliable and Hostile Networks
Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems.
This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration.
Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools in high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without stable topology due to the asynchronous nature of the Hidden Master architecture.
The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, both for defining new configurations in unexpected situations using the base resources and for abstracting those using standard language features; they also indicated that such a system seems easy to learn.
Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
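The idea of expressing idempotent configuration in an imperative general-purpose language can be sketched briefly. This is not the thesis prototype's API; the `file_content` resource and its check-then-apply shape are illustrative assumptions. Each resource inspects the current state and acts only when it differs from the desired state, so re-running the whole configuration is always safe.

```python
import os
import tempfile

# Illustrative sketch (not the research prototype): an idempotent
# "file content" resource written in plain imperative Python.
def file_content(path, content):
    """Ensure `path` exists with exactly `content`; return True if changed."""
    try:
        with open(path) as f:
            if f.read() == content:
                return False          # already converged: nothing to do
    except FileNotFoundError:
        pass
    with open(path, "w") as f:
        f.write(content)
    return True

path = os.path.join(tempfile.mkdtemp(), "demo.conf")
first = file_content(path, "port = 8080\n")    # creates the file: changed
second = file_content(path, "port = 8080\n")   # converged: no-op
```

Because every resource is an ordinary function, unexpected situations can be handled with standard language features (loops, conditionals, abstraction into helper functions) rather than a restricted DSL.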
Efficient Visual Computing with Camera RAW Snapshots
Conventional cameras capture image irradiance (RAW) on a sensor and convert it to RGB images using an image signal processor (ISP). The images can then be used for photography or visual computing tasks in a variety of applications, such as public safety surveillance and autonomous driving. One can argue that since RAW images contain all the captured information, the conversion of RAW to RGB using an ISP is not necessary for visual computing. In this paper, we propose a novel ρ-Vision framework to perform high-level semantic understanding and low-level compression using RAW images, without the ISP subsystem that has been used for decades. Considering the scarcity of available RAW image datasets, we first develop an unpaired CycleR2R network based on unsupervised CycleGAN to train modular unrolled ISP and inverse ISP (invISP) models using unpaired RAW and RGB images. We can then flexibly generate simulated RAW images (simRAW) from any existing RGB image dataset and finetune models originally trained in the RGB domain to process real-world camera RAW images. We demonstrate object detection and image compression capabilities in the RAW domain using a RAW-domain YOLOv3 and a RAW image compressor (RIC) on camera snapshots. Quantitative results reveal that RAW-domain task inference provides better detection accuracy and compression efficiency than inference in the RGB domain. Furthermore, the proposed ρ-Vision generalizes across various camera sensors and different task-specific models. An added benefit of employing ρ-Vision is the elimination of the need for an ISP, leading to potential reductions in computation and processing time.
Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems
This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems
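The notion of partitioning a shared cache by task-specific WCET sensitivity can be made concrete with a toy sketch. This is not the TCPS algorithm itself; the greedy strategy, the `wcet` table, and its values are illustrative assumptions. Each cache way goes to whichever task's worst-case execution time currently drops the most from one additional way.

```python
# Toy sketch (not TCPS): greedily assign cache ways to the task whose
# WCET currently benefits the most from one extra way.
def partition_cache(wcet, total_ways):
    """wcet[i][k] = WCET of task i when given k ways (non-increasing in k)."""
    alloc = [0] * len(wcet)
    for _ in range(total_ways):
        # Marginal WCET reduction for giving each task one more way
        gains = [wcet[i][alloc[i]] - wcet[i][alloc[i] + 1]
                 if alloc[i] + 1 < len(wcet[i]) else 0.0
                 for i in range(len(wcet))]
        best = max(range(len(gains)), key=gains.__getitem__)
        if gains[best] <= 0:
            break                # no task benefits from more cache
        alloc[best] += 1
    return alloc

wcet = [[10, 7, 6, 6],           # task 0: very cache-sensitive at first
        [12, 11, 10, 9]]         # task 1: mildly but steadily sensitive
alloc = partition_cache(wcet, 3)  # → [2, 1]
```

The sketch captures the trade-off the abstract describes: cache space flows to the tasks whose WCET is most sensitive to it, improving overall schedulability.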
EV-Tach: a handheld rotational speed estimation system with an event camera
Rotational speed is one of the important metrics to be measured for calibrating electric motors in manufacturing, monitoring engines during car repairs, detecting faults in electrical appliances, and more. However, existing measurement techniques either require prohibitive hardware (e.g., a high-speed camera) or are inconvenient to use in real-world application scenarios. In this paper, we propose EV-Tach, a novel handheld rotational speed estimation system that utilizes emerging imaging sensors known as event cameras or dynamic vision sensors (DVS). The pixels of a DVS work independently and trigger an event as soon as a per-pixel intensity change is detected, without the global synchronization of CCD/CMOS cameras. This unique design offers high temporal resolution and generates sparse events, which benefits high-speed rotation estimation. To achieve accurate and efficient rotational speed estimation, a series of signal processing algorithms is specifically designed for the event streams generated by event cameras on an embedded platform. First, a new cluster-centroid initialization module is proposed to initialize the centroids of the clusters, addressing the issue that common clustering approaches easily fall into locally optimal solutions without proper initial centroids. Second, an outlier removal module is designed to suppress the background noise caused by subtle hand movements and host device vibrations. Third, a coarse-to-fine alignment strategy with Iterative Closest Point (ICP)-based event stream alignment is proposed to obtain the angle of rotation and achieve accurate estimation of rotational speed over a large range. With these bespoke components, EV-Tach is able to extract the rotational speed accurately from the event stream produced by an event camera recording rotary targets.
According to our extensive evaluations under controlled and practical experiment settings, the Relative Mean Absolute Error (RMAE) of EV-Tach is as low as 0.3‰, which is comparable to a state-of-the-art laser tachometer in fixed measurement mode. Moreover, EV-Tach is robust to subtle movements of the user's hand and to dazzling outdoor light; it can therefore be used as a handheld device under challenging lighting conditions where the laser tachometer fails to produce reasonable results. To speed up the processing of EV-Tach and reduce its resource consumption on embedded devices, VoxelGrid filtering is applied to significantly downsample the event streams by merging the events within the same 3D voxel grid while preserving their structure in the spatio-temporal domain. Finally, we implement EV-Tach on a Raspberry Pi, and the evaluation results show that the downsampling process preserves the high measurement accuracy while reducing computation time and energy consumption by approximately 8 and 30 times on average.
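The VoxelGrid downsampling step can be sketched as follows. This is not the EV-Tach implementation; the voxel sizes, sensor resolution, and the choice of the centroid as the merged representative are illustrative assumptions. All events falling into the same spatio-temporal voxel are replaced by a single centroid event, preserving the stream's overall shape while sharply reducing the event count.

```python
import numpy as np

# Illustrative sketch of VoxelGrid filtering for an (x, y, t) event stream:
# merge all events inside each spatio-temporal voxel into their centroid.
def voxel_downsample(events, vx, vy, vt):
    """events: (N, 3) array of (x, y, t); vx, vy, vt are voxel sizes."""
    keys = np.floor(events / np.array([vx, vy, vt])).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    # Accumulate per-voxel coordinate sums, then divide by occupancy counts
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, events)
    counts = np.bincount(inverse, minlength=len(uniq)).astype(float)
    return sums / counts[:, None]

rng = np.random.default_rng(1)
events = rng.random((10_000, 3)) * np.array([640.0, 480.0, 1e6])  # x, y in px; t in µs
reduced = voxel_downsample(events, vx=32, vy=32, vt=1e5)
# `reduced` has at most one event per occupied voxel -- far fewer than 10,000.
```

Larger voxels trade temporal and spatial detail for a bigger reduction factor, which matches the accuracy-versus-cost trade-off evaluated on the Raspberry Pi.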