
    Skipper-CCD Sensors for the Oscura Experiment: Requirements and Preliminary Tests

    Oscura is a proposed multi-kg skipper-CCD experiment designed for a dark matter (DM) direct detection search that will reach unprecedented sensitivity to sub-GeV DM-electron interactions with its 10 kg detector array. Oscura plans to operate at SNOLAB, under a 2070 m overburden, and aims to reach a background goal of less than one event in each electron bin in the 2-10 electron ionization-signal region for the full 30 kg-year exposure, with a radiation background rate of 0.01 dru. In order to achieve this goal, Oscura must address each potential source of background events, including instrumental backgrounds. In this work, we discuss the main instrumental background sources and the strategy to control them, establishing a set of constraints on the sensors' performance parameters. We present results from the tests of the first fabricated Oscura prototype sensors, evaluate their performance in the context of the established constraints, and estimate the Oscura instrumental background based on these results.
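    As a rough consistency check of these numbers (assuming 1 dru = 1 event per kg per day per keV, and that one electron bin in a silicon CCD spans roughly the e-h pair-creation energy of about 3.75 eV = 0.00375 keV):

    0.01~\frac{\mathrm{events}}{\mathrm{kg\,d\,keV}} \times (30~\mathrm{kg\,yr}) \times (365~\mathrm{d/yr}) \approx 110~\frac{\mathrm{events}}{\mathrm{keV}},
    \qquad 110~\frac{\mathrm{events}}{\mathrm{keV}} \times 0.00375~\mathrm{keV} \approx 0.4~\mathrm{events~per~electron~bin},

    which is indeed below the stated goal of one event per bin.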

    Rotational and Dilational Reconstruction in Transition Metal Dichalcogenide Moiré Bilayers

    Lattice reconstruction and corresponding strain accumulation play a key role in defining the electronic structure of two-dimensional moiré superlattices, including those of transition metal dichalcogenides (TMDs). Imaging of TMD moirés has so far provided a qualitative understanding of this relaxation process in terms of interlayer stacking energy, while models of the underlying deformation mechanisms have relied on simulations. Here, we use interferometric four-dimensional scanning transmission electron microscopy to quantitatively map the mechanical deformations through which reconstruction occurs in small-angle twisted bilayer MoS2 and WSe2/MoS2 heterobilayers. We provide direct evidence that local rotations govern relaxation for twisted homobilayers, while local dilations are prominent in heterobilayers possessing a sufficiently large lattice mismatch. Encapsulation of the moiré layers in hBN further localizes and enhances these in-plane reconstruction pathways, suppressing out-of-plane corrugation. We also find that extrinsic uniaxial heterostrain, which introduces a lattice constant difference in twisted homobilayers, leads to accumulation and redistribution of reconstruction strain, demonstrating another route to modify the moiré potential.

    Examples of works for practicing staccato technique on the clarinet

    The stages of strengthening the clarinet's staccato technique were worked through using repertoire pieces. Rhythm and dynamics exercises designed to speed up staccato passages were included. The most important aim of the study is not staccato practice alone, but also attention to the precision of simultaneous finger-tongue coordination. To make the staccato work more productive, etude practice was incorporated alongside the repertoire pieces. Careful attention to these exercises, together with the inspiring effect of staccato practice, added a new dimension to the player's musical identity. Every stage of work on eight original pieces is described, each stage being designed to reinforce the next level of performance and technique. The study reports in which areas the staccato technique is used and what results were obtained, and sets out how the notes are shaped through finger and tongue coordination and within what kind of practice discipline this takes place. Reed, note, diaphragm, finger, tongue, dynamics and discipline were found to form an inseparable whole in staccato technique. A literature review was carried out to survey existing work on staccato; it showed that repertoire-based staccato studies for clarinet technique are scarce, while the survey of method books showed that etude studies predominate. Accordingly, exercises for speeding up and strengthening the clarinet's staccato technique are presented. It was observed that interspersing repertoire work among staccato etudes relaxes the mind and further increases motivation. The choice of a suitable reed for staccato practice was also emphasized: a suitable reed was found to increase tongue speed, choosing one depends on the reed speaking easily, and if the reed does not support the tonguing action, a more suitable reed should be selected. Interpreting a piece from beginning to end in staccato practice can be difficult; in this respect, the study showed that observing the written musical dynamics eases tonguing performance. Passing the acquired knowledge and experience on to future generations in a way that furthers their development is encouraged. The study explains how new pieces can be worked out and how the staccato technique can be mastered, with the aim of resolving the staccato technique in a shorter time. It is as important to commit the exercises to memory as it is to teach the fingers their places. A work produced as the result of the determination and patience shown will raise achievement to even higher levels.

    Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum

    This paper applies an adaptive-momentum idea from the nonlinear conjugate gradient method to accelerate optimization problems in sparse recovery. Specifically, we consider two types of minimization problems: a (single) differentiable function, and the sum of a non-smooth function and a differentiable function. In the first case, we adopt a fixed step size to avoid the traditional line search and establish the convergence analysis of the proposed algorithm for a quadratic problem. This acceleration is further incorporated with an operator splitting technique to deal with the non-smooth function in the second case. We use the convex ℓ1 and the nonconvex ℓ1−ℓ2 functionals as two case studies to demonstrate the efficiency of the proposed approaches over traditional methods.
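    As a rough illustration of the second setting (a smooth least-squares term plus a non-smooth ℓ1 term), the sketch below runs a proximal-gradient iteration with a fixed step size and a simple momentum term. It is not the paper's algorithm: the momentum coefficient is held fixed here rather than chosen by a nonlinear-conjugate-gradient rule, and all names are illustrative.

    import numpy as np

    def soft_threshold(x, lam):
        # proximal operator of lam * ||.||_1
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def prox_grad_momentum(A, b, lam=0.05, beta=0.5, n_iter=500):
        # min_x 0.5*||Ax - b||^2 + lam*||x||_1 with a fixed step size
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        d = np.zeros_like(x)                     # momentum / search direction
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            d = -grad + beta * d                 # fixed momentum coefficient (illustrative)
            x = soft_threshold(x + step * d, step * lam)
        return x

    # usage: recover an 8-sparse vector from 64 noisy random measurements
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))
    x_true = np.zeros(256)
    x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
    x_hat = prox_grad_momentum(A, b=A @ x_true + 0.01 * rng.standard_normal(64))

    Replacing the shrinkage step with a proximal step for the ℓ1−ℓ2 penalty would give the nonconvex variant mentioned in the abstract.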

    Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC

    The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this final state has yielded some of the most precise measurements of the particle. As measurements of the Higgs boson become increasingly precise, greater importance is placed on the factors that constitute the uncertainty. Reducing the effects of these uncertainties requires an understanding of their causes. The research presented in this thesis aims to illuminate how uncertainties on simulation modelling are determined, and proffers novel techniques for deriving them. The upgrade of the FastCaloSim tool, used for simulating events in the ATLAS calorimeter at a rate far exceeding that of the nominal detector simulation, Geant4, is described. The integration of a method that allows the tool to emulate the accordion geometry of the liquid argon calorimeters is detailed. This tool allows for the production of larger samples while using significantly fewer computing resources. A measurement of the total Higgs boson production cross-section multiplied by the diphoton branching ratio (σ × Bγγ) is presented, where this value was determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement with the Standard Model prediction. The signal and background shape modelling is described, and the contribution of the background modelling uncertainty to the total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production mechanism. A method for estimating the number of events in a Monte Carlo background sample required to model the shape is detailed. It was found that the nominal γγ background sample had to be enlarged by a factor of 3.60 to adequately model the background at a confidence level of 68 %, or by a factor of 7.20 at a confidence level of 95 %. Based on this estimate, 0.5 billion additional simulated events were produced, substantially reducing the background modelling uncertainty. A technique is detailed for emulating the effects of Monte Carlo event generator differences using multivariate reweighting. The technique is used to estimate the event generator uncertainty on the signal modelling of tHqb events, improving the reliability of estimating the tHqb production cross-section. This multivariate reweighting technique is then used to estimate the generator modelling uncertainties on background Vγγ samples for the first time. The estimated uncertainties were found to be covered by the currently assumed background modelling uncertainty.
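    For context, one common way to realize such multivariate reweighting is the classifier-based likelihood-ratio trick: train a classifier to separate events from generator A and generator B in the relevant kinematic variables, then weight A's events by the resulting probability ratio. The sketch below is a generic illustration with scikit-learn, not the thesis's actual implementation, and all names are assumptions.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def generator_reweights(sample_a, sample_b):
        # sample_a, sample_b: (n_events, n_features) arrays of kinematic variables
        X = np.vstack([sample_a, sample_b])
        y = np.concatenate([np.zeros(len(sample_a)), np.ones(len(sample_b))])
        clf = GradientBoostingClassifier().fit(X, y)
        p = clf.predict_proba(sample_a)[:, 1]       # P(event resembles generator B)
        w = p / np.clip(1.0 - p, 1e-6, None)        # likelihood ratio ~ density ratio
        return w * len(sample_a) / w.sum()          # preserve the total normalization

    # usage: weights that morph generator A's distributions onto generator B's,
    # from which a generator-difference systematic can be propagated
    # w = generator_reweights(events_a, events_b)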

    The Influence of Frontal and Axial Plane Deformities on Contact Mechanics during Squatting: A Finite Element Study

    Knee osteoarthritis (KOA) is a degenerative joint disease and a leading cause of disability worldwide. Lower limb malalignment is a risk factor for KOA because it alters the load distribution in the joint. This study aimed to investigate the influence of knee deformities on knee contact mechanics and knee kinematics during squatting. A full-leg squat finite element (FE) model was developed based on general open-source models, validated against in vivo studies, and used to investigate the outputs under frontal-plane malalignment (valgus 8° to varus 8°) and axial-plane malalignment (miserable malalignment 30°). The varus-aligned and miserable-malalignment models increased medial tibiofemoral force and lateral patellar contact pressure, while the valgus-aligned model increased lateral tibiofemoral force and medial patellar contact pressure, with no effect on total contact loads. Models with a higher medial force ratio (medial force/total force) induced a higher internal tibial rotation. In conclusion, we recommend that, for patients with knee malalignment, alignment be managed in both the frontal and axial planes.

    First Measurement of Energy-dependent Inclusive Muon Neutrino Charged-Current Cross Sections on Argon with the MicroBooNE Detector

    We report a measurement of the energy-dependent total charged-current cross section σ(Eν) for inclusive muon neutrinos scattering on argon, as well as measurements of flux-averaged differential cross sections as a function of muon energy and hadronic energy transfer (ν). Data corresponding to an exposure of 5.3×10^19 protons on target were collected using the MicroBooNE liquid argon time projection chamber, located in the Fermilab Booster Neutrino Beam with a mean neutrino energy of approximately 0.8 GeV. The mappings between the true neutrino energy Eν and the reconstructed neutrino energy E^rec_ν, and between the energy transfer ν and the reconstructed hadronic energy E^rec_had, are validated by comparing the data and Monte Carlo (MC) predictions. In particular, the modeling of the missing hadronic energy and its associated uncertainties are verified by a new method that compares the E^rec_had distributions between data and an MC prediction after constraining the reconstructed muon kinematic distributions, energy and polar angle, to those of the data. The success of this validation gives confidence that the missing energy in the MicroBooNE detector is well modeled, and it underpins first-time measurements of both the total cross section σ(Eν) and the differential cross section dσ/dν on argon.
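    Schematically (an illustrative relation, not the analysis's exact estimator), the flux-averaged differential cross section in a bin i of hadronic energy transfer is obtained from the background-subtracted, unfolded signal count Ñ_i, the integrated ν_μ flux Φ, the number of argon targets N_Ar in the fiducial volume, and the bin width Δν_i:

    \left\langle \frac{d\sigma}{d\nu} \right\rangle_{i} = \frac{\tilde{N}_{i}}{\Phi \, N_{\mathrm{Ar}} \, \Delta\nu_{i}}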

    Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing

    Biocompatible titanium alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks for its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging. In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon starvation and scattered radiation in cone-beam micro-CT imaging. Our cost-effective scanner was designed to match the image requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly focused, two-dimensional anti-scatter grid. Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved the penetration capabilities of an 80 kVp micro-CT imaging system by 37% for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, biocompatible, AM-fabricated medical components.
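    As a sketch of the dual-exposure high-dynamic-range idea (a hypothetical helper, not the thesis's implementation): acquire a short and a long exposure of the same projection, normalize each by its exposure, and substitute the short-exposure data wherever the long exposure saturates.

    import numpy as np

    def combine_dual_exposure(short_img, long_img, t_short, t_long, full_scale=65535, sat_frac=0.98):
        # short_img, long_img: raw detector counts for the same projection (2D arrays)
        # t_short, t_long:     exposure times (or mAs) of the two acquisitions
        short = short_img.astype(float) / t_short      # counts per unit exposure
        long = long_img.astype(float) / t_long
        saturated = long_img >= sat_frac * full_scale  # pixels where the long exposure clips
        return np.where(saturated, short, long)        # prefer the long exposure elsewhere

    # usage (hypothetical 16-bit detector):
    # hdr = combine_dual_exposure(proj_10ms, proj_100ms, t_short=10.0, t_long=100.0)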

    The aquaculture supply chain in the time of the COVID-19 pandemic: vulnerability, resilience, solutions and priorities at the global scale

    The COVID-19 global pandemic has had severe, unpredictable and synchronous impacts on all levels of perishable food supply chains (PFSC), across multiple sectors and spatial scales. Aquaculture plays a vital and rapidly expanding role in food security, in some cases overtaking wild-caught fisheries in the production of high-quality animal protein within this PFSC. We performed a rapid global assessment to evaluate the effects of the COVID-19 pandemic and related emerging control measures on the aquaculture supply chain. Socio-economic effects of the pandemic were analysed by surveying the perceptions of stakeholders, who were asked to describe potential supply-side disruption, vulnerabilities and resilience patterns along the production pipeline, with four main supply chain components: a) hatchery, b) production/processing, c) distribution/logistics and d) market. We also assessed different farming strategies, comparing land- vs. sea-based systems; extensive vs. intensive methods; and systems with and without integrated multi-trophic aquaculture (IMTA). In addition to evaluating levels and sources of economic distress, interviewees were asked to identify mitigation solutions adopted at the local/internal (i.e., farm-site) scale, and to express their preference on national/external mitigation measures among a set of a priori options. Survey responses identified the potential causes of disruption, ripple effects, sources of food insecurity, and socio-economic conflicts. They also pointed to various levels of mitigation strategies. The collated evidence represents a first baseline useful for addressing future disaster-driven responses, reinforcing the resilience of the sector, and facilitating the design of reconstruction plans and mitigation measures, such as financial aid strategies.

    PU-Refiner: A Geometry Refiner with Adversarial Learning for Point Cloud Upsampling

    We present PU-Refiner, a generative adversarial network for point cloud upsampling. The generator of our network includes a coarse feature expansion module to create coarse upsampled features, a geometry generation module to regress a coarse point cloud from the coarse upsampled features, and a progressive geometry refinement module to restore the dense point cloud in a coarse-to-fine fashion based on the coarse upsampled point cloud. The discriminator of our network helps the generator produce point clouds closer to the target distribution. It makes full use of multi-level features to improve its classification performance. Extensive experimental results show that PU-Refiner is superior to five state-of-the-art point cloud upsampling methods. Code: https://github.com/liuhaoyun/PU-Refine
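    For orientation, the sketch below shows the coarse stages named above (per-point feature extraction, coarse feature expansion by the upsampling ratio r, and geometry generation regressing 3D coordinates). It omits PU-Refiner's progressive geometry refinement module and its multi-level-feature discriminator, and all layer sizes are illustrative assumptions rather than the published architecture.

    import torch
    import torch.nn as nn

    class CoarseUpsampler(nn.Module):
        def __init__(self, r=4, feat_dim=64):
            super().__init__()
            self.r = r
            # per-point feature extraction (shared MLP as 1x1 convolutions)
            self.encode = nn.Sequential(
                nn.Conv1d(3, feat_dim, 1), nn.ReLU(),
                nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU(),
            )
            # coarse feature expansion: r feature sets per input point
            self.expand = nn.Conv1d(feat_dim, feat_dim * r, 1)
            # geometry generation: regress coordinates of the coarse dense cloud
            self.to_xyz = nn.Sequential(
                nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU(),
                nn.Conv1d(feat_dim, 3, 1),
            )

        def forward(self, xyz):                              # xyz: (B, 3, N)
            B, _, N = xyz.shape
            f = self.encode(xyz)                             # (B, C, N)
            f = self.expand(f).reshape(B, -1, self.r * N)    # (B, C, r*N)
            return self.to_xyz(f)                            # coarse upsampled cloud (B, 3, r*N)

    # usage: upsample a batch of 256-point clouds by a factor of 4
    dense = CoarseUpsampler()(torch.randn(2, 3, 256))        # -> shape (2, 3, 1024)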