
    Phonon engineering of atomic-scale defects in superconducting quantum circuits

    Noise within solid-state systems at low temperatures, where many of the degrees of freedom of the host material are frozen out, can typically be traced back to material defects that support low-energy excitations. These defects can take a wide variety of microscopic forms, and for amorphous materials are broadly described using generic models such as the tunneling two-level systems (TLS) model. Although the details of TLS and their impact on the low-temperature behavior of materials have been studied since the 1970s, these states have recently taken on further relevance in the field of quantum computing, where the limits to the coherence of superconducting microwave quantum circuits are dominated by TLS. Efforts to mitigate the impact of TLS have thus far focused on circuit design, material selection, and material surface treatment. In this work, we take a new approach that seeks to directly modify the properties of TLS through nanoscale engineering. This is achieved by periodically structuring the host material, forming an acoustic bandgap that suppresses all microwave-frequency phonons in a GHz-wide frequency band around the operating frequency of a transmon qubit superconducting quantum circuit. For embedded TLS that are strongly coupled to the qubit, we measure a pronounced increase in relaxation time by two orders of magnitude when the TLS transition frequency lies within the acoustic bandgap, with the longest T_1 time exceeding 55 milliseconds. Our work paves the way for in-depth investigation and coherent control of TLS, which is essential for deepening our understanding of noise in amorphous materials and advancing solid-state quantum devices. Comment: 11 + 25 pages, 4 + 22 figures, 6 tables; comments welcome
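
    The suppression mechanism can be summarized with a generic Fermi's-golden-rule estimate (a textbook relation given here for orientation, not a formula taken from the paper): the one-phonon relaxation rate of a TLS scales with the density of phonon states at its transition energy, so removing phonon modes in a bandgap around the TLS frequency suppresses its decay.

        % Generic golden-rule sketch; rho_ph is the phonon density of states per unit energy.
        \Gamma_1 \;=\; \frac{2\pi}{\hbar}\,\bigl|\langle g;1_{\mathrm{ph}}|H_{\mathrm{int}}|e;0_{\mathrm{ph}}\rangle\bigr|^{2}\,
        \rho_{\mathrm{ph}}(\hbar\omega_{\mathrm{TLS}}),
        \qquad
        \rho_{\mathrm{ph}} \to 0 \ \text{in the bandgap}
        \;\Rightarrow\; T_1 = \Gamma_1^{-1} \ \text{is strongly enhanced.}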

    On the equivalence between the effective cosmology and excursion set treatments of environment

    In studies of the environmental dependence of structure formation, the large-scale environment is often thought of as providing an effective background cosmology: e.g. the formation of structure in voids is expected to be just like that in a less dense universe with appropriately modified Hubble and cosmological constants. However, in the excursion set description of structure formation, which is commonly used to model this effect, no explicit mention is made of the effective cosmology. Rather, this approach uses the spherical evolution model to compute an effective linear theory growth factor, which is then used to predict the growth and evolution of nonlinear structures. We show that these approaches are, in fact, equivalent: a consequence of Birkhoff's theorem. We speculate that this equivalence will not survive in models where the gravitational force law is modified from an inverse-square law, potentially making the environmental dependence of clustering a good test of such models. Comment: 4 pages, 0 figures, accepted to MNRAS
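
    To make the equivalence concrete, here is a minimal sketch based on standard spherical-evolution reasoning (not text reproduced from the paper): by Birkhoff's theorem, a spherical region of mean overdensity δ evolves exactly like its own FRW universe, and linearizing that evolution yields the effective growth factor used in the excursion set approach.

        % Birkhoff's theorem: the patch expands as a separate FRW "effective cosmology".
        H_{\mathrm{env}}^{2} \equiv \left(\frac{\dot a_{\mathrm{env}}}{a_{\mathrm{env}}}\right)^{2}
          = \frac{8\pi G}{3}\,\bar\rho\,(1+\delta) - \frac{K_{\mathrm{env}}}{a_{\mathrm{env}}^{2}}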

    Kinematic effect in gravitational lensing by clusters of galaxies

    Gravitational lensing provides an efficient tool for the investigation of matter structures, independent of the dynamical or hydrostatic equilibrium properties of the deflecting system. It does, however, depend on the kinematic status of the deflector: either a translational motion or a coherent rotation of the mass distribution can affect the lensing properties. Here, light deflection by galaxy clusters in motion is considered. Even though gravitational lensing mass measurements of galaxy clusters are regarded as very reliable estimates, the kinematic effect should be considered. A typical peculiar motion with respect to the Hubble flow brings about a systematic error < 0.3%, independent of the mass of the cluster. On the other hand, the effect of the spin increases with the total mass. For cluster masses ~ 10^{15} M_{sun}, the effect of the gravitomagnetic term is < 0.04% on strong lensing estimates and < 0.5% in weak lensing analyses. The total kinematic effect on the mass estimate is then < 1%, which is negligible in current statistical studies. In the weak lensing regime, the rotation imprints a typical angular modulation in the tangential shear distortion. This would in principle allow a detection of the gravitomagnetic field and a direct measurement of the angular velocity of the cluster, but the required background source densities are well beyond current technological capabilities. Comment: 6 pages; accepted for publication in MNRAS
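
    The size of the translational effect can be checked with a generic first-order estimate (an order-of-magnitude argument, not the paper's derivation): the correction to the deflection angle from a lens peculiar velocity enters at order v/c, so a typical cluster peculiar velocity of ~10^3 km/s gives a fractional change of a few tenths of a percent, consistent with the bound quoted above.

        \frac{\Delta\hat\alpha}{\hat\alpha} \sim \frac{v_{\mathrm{pec}}}{c}
          \approx \frac{10^{3}\ \mathrm{km\,s^{-1}}}{3\times 10^{5}\ \mathrm{km\,s^{-1}}}
          \approx 0.3\%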

    Observing the clustering properties of galaxy clusters in dynamical dark-energy cosmologies

    We study the clustering properties of galaxy clusters expected to be observed by various forthcoming surveys, both in the X-ray band and, through the thermal Sunyaev-Zel'dovich effect, in the sub-mm regime. Several different background cosmological models are assumed, including the concordance ΛCDM and various cosmologies with dynamical evolution of the dark energy. Particular attention is paid to models with a significant contribution of dark energy at early times, which affects the process of structure formation. Past light cone and selection effects in cluster catalogues are carefully modeled by realistic scaling relations between cluster mass and observables and by properly taking into account the selection functions of the different instruments. The results show that early dark-energy models are expected to produce significantly lower values of the effective bias and of both the spatial and angular correlation amplitudes with respect to the standard ΛCDM model. Among the cluster catalogues studied in this work, it turns out that those based on eRosita, Planck, and South Pole Telescope observations are the most promising for distinguishing between various dark-energy models. Comment: 16 pages, 10 figures. A&A, in press
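
    The effective bias referred to above is conventionally defined as a mass-function-weighted average over the clusters entering each catalogue; the expression below is the standard definition, written with a generic selection function χ, and is not copied from the paper.

        % n(M,z): halo mass function; b(M,z): linear halo bias;
        % chi(M,z): probability that a cluster of mass M at redshift z enters the catalogue.
        b_{\mathrm{eff}}(z) \;=\;
          \frac{\displaystyle\int \mathrm{d}M\; n(M,z)\,\chi(M,z)\,b(M,z)}
               {\displaystyle\int \mathrm{d}M\; n(M,z)\,\chi(M,z)}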

    Impact of early dark energy on the Planck SZ cluster sample

    Context. One science goal of the upcoming Planck mission is to perform a full-sky cluster survey based on the Sunyaev-Zel'dovich (SZ) effect, which leads to the question of how such a survey would be affected by cosmological models with a different history of structure formation than LCDM. One class of these models is early dark energy (EDE) cosmologies, where the dark energy contribution does not vanish at early times. Aims. Since structures grow more slowly in the presence of EDE, one expects an increase in the number of galaxy clusters compared to LCDM at intermediate and high redshifts, which could explain the reported excess of the angular CMB power spectrum on cluster scales via an enhanced SZ contribution. We study the impact of EDE on Planck's expected cluster sample. Methods. To obtain realistic simulations, we constructed full-sky SZ maps for EDE and LCDM cosmologies, taking angular cluster correlation into account. Using these maps, we simulated Planck observations with and without Galactic foregrounds and fed the results into our filter pipeline based on spherical multi-frequency matched filters. Results. For the case of EDE cosmologies, we clearly find an increase in the detected number of clusters compared to the fiducial LCDM case. This shows that the spherical multi-frequency matched filter is sensitive enough to find deviations from the LCDM sample caused by EDE. In addition, we find an interesting effect of EDE on the completeness of the cluster sample, such that EDE helps to obtain cleaner samples. Comment: 12 pages, 10 figures, accepted for publication in A&A, minor language corrections. Notable changes include an added subsection on collapse parameters for EDE models and a discussion of the consequent SZ power spectrum
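
    For orientation, the multi-frequency matched filter has a standard form (the notation below is generic and not taken from the paper): the channels are combined using the known SZ spectral signature and beam-convolved cluster profile, weighted by the inverse noise cross-power spectrum.

        % F(k) stacks the SZ spectral signature times the beam-convolved profile per channel;
        % P(k) is the channel-channel noise cross-power-spectrum matrix.
        \Psi(k) = \sigma^{2}\, P^{-1}(k)\, F(k),
        \qquad
        \sigma^{-2} = \int \mathrm{d}^{2}k\; F^{\dagger}(k)\, P^{-1}(k)\, F(k)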

    Salt cleaning of ultrafiltration membranes fouled by whey model solutions

    In this work, three ultrafiltration (UF) membranes were fouled with whey model solutions that contained BSA (1% w/w) and CaCl2 (0.06% w/w). These membranes were cleaned with NaCl solutions. Temperature, crossflow velocity and concentration were varied. The membranes considered were a polyethersulfone (PES) membrane, a ceramic ZrO2–TiO2 membrane and a permanently hydrophilic polyethersulfone (PESH) membrane. Their molecular weight cut-offs (MWCOs) are 5, 15 and 30 kDa, respectively. The cleaning efficiency was related to the MWCO, membrane material and operating conditions. The results obtained demonstrated that NaCl solutions were able to clean the membranes tested. In addition, the higher the temperature and the crossflow velocity of the cleaning solution, the higher the cleaning efficiency was. However, there was an optimum value of NaCl concentration to clean the membranes effectively. When concentration was higher than the optimum, the cleaning efficiency decreased. The relationship between the cleaning efficiency and the operating conditions was obtained with statistical and optimization analysis. The authors of this work wish to gratefully acknowledge the financial support from the Spanish Ministry of Science and Innovation through the project CTM2010-20186 and the Generalitat Valenciana through the program "Ayudas para la realizacion de proyectos I+D para grupos de investigacion emergentes GV/2013". Corbatón Báguena, MJ.; Alvarez Blanco, S.; Vincent Vela, MC. (2014). Salt cleaning of ultrafiltration membranes fouled by whey model solutions. Separation and Purification Technology. 132:226-233. https://doi.org/10.1016/j.seppur.2014.05.029
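
    Cleaning efficiency in membrane studies is often quantified through hydraulic-resistance removal; the short sketch below illustrates that common metric with placeholder numbers. The variable names and the exact definition are illustrative assumptions, not necessarily the metric used in this paper.

        # A minimal sketch of a resistance-removal cleaning-efficiency metric
        # (illustrative assumption, not necessarily the paper's definition).

        def resistance(flux_lmh: float, tmp_pa: float, viscosity_pa_s: float) -> float:
            """Hydraulic resistance from Darcy's law, R = TMP / (mu * J), in 1/m."""
            flux_m_s = flux_lmh / 1000.0 / 3600.0  # L m^-2 h^-1 -> m^3 m^-2 s^-1
            return tmp_pa / (viscosity_pa_s * flux_m_s)

        def cleaning_efficiency(r_clean: float, r_fouled: float, r_after: float) -> float:
            """Percentage of the fouling resistance removed by cleaning."""
            return 100.0 * (r_fouled - r_after) / (r_fouled - r_clean)

        mu = 0.89e-3                       # water viscosity at ~25 C, Pa s
        r_m = resistance(60.0, 2e5, mu)    # pristine membrane (placeholder flux, TMP = 2 bar)
        r_f = resistance(20.0, 2e5, mu)    # after fouling with BSA/CaCl2 (placeholder)
        r_c = resistance(50.0, 2e5, mu)    # after NaCl cleaning (placeholder)
        print(f"cleaning efficiency: {cleaning_efficiency(r_m, r_f, r_c):.1f} %")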

    SIP metagenomics identifies uncultivated Methylophilaceae as dimethylsulphide degrading bacteria in soil and lake sediment.

    Dimethylsulphide (DMS) has an important role in the global sulphur cycle and atmospheric chemistry. Microorganisms using DMS as sole carbon, sulphur or energy source contribute to the cycling of DMS in a wide variety of ecosystems. The diversity of microbial populations degrading DMS in terrestrial environments is poorly understood. Based on cultivation studies, a wide range of bacteria isolated from terrestrial ecosystems were shown to be able to degrade DMS, yet it remains unknown whether any of these have important roles in situ. In this study, we identified bacteria using DMS as a carbon and energy source in terrestrial environments, an agricultural soil and a lake sediment, by DNA stable isotope probing (SIP). Microbial communities involved in DMS degradation were analysed by denaturing gradient gel electrophoresis, high-throughput sequencing of SIP gradient fractions and metagenomic sequencing of phi29-amplified community DNA. Labelling patterns of time course SIP experiments identified members of the Methylophilaceae family, not previously implicated in DMS degradation, as dominant DMS-degrading populations in soil and lake sediment. Thiobacillus spp. were also detected in (13)C-DNA from SIP incubations. Metagenomic sequencing also suggested involvement of Methylophilaceae in DMS degradation and further indicated shifts in the functional profile of the DMS-assimilating communities in line with methylotrophy and oxidation of inorganic sulphur compounds. Overall, these data suggest that, unlike in the marine environment where gammaproteobacterial populations were identified by SIP as DMS degraders, betaproteobacterial Methylophilaceae may have a key role in DMS cycling in terrestrial environments. HS was supported by a UK Natural Environment Research Council Advanced Fellowship (NE/E013333/1), and ÖE by a postgraduate scholarship from the University of Warwick and an Early Career Fellowship from the Institute of Advanced Study, University of Warwick, UK. Lawrence Davies is acknowledged for help with QIIME.

    Nuclear Shadowing in Electro-Weak Interactions

    Shadowing is a quantum phenomenon leading to a non-additivity of electroweak cross sections on nucleons bound in a nucleus. It occurs due to destructive interference of amplitudes on different nucleons. Although the current experimental evidence for shadowing is dominated by charged-lepton-nucleus scattering, studies of neutrino-nucleus scattering have recently begun and revealed unexpected results. Comment: 77 pages, 57 figures. To be published in "Progress in Particle and Nuclear Physics" 201
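
    For orientation, shadowing is commonly quantified by the ratio of the nuclear cross section to A times the free-nucleon one; this is the standard generic definition rather than notation specific to this review.

        R_A \;=\; \frac{\sigma_{A}}{A\,\sigma_{N}} \;<\; 1
        \qquad \text{(shadowing: the bound-nucleon contributions do not add incoherently).}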

    Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software

    OBJECTIVE: This study has two aims: 1) to evaluate the apical transportation produced by the Wizard CD Plus and ProTaper Universal systems after preparation of simulated root canals; 2) to compare the ability of a new software package (Regeemy) to superpose and subtract images with that of Adobe Photoshop. MATERIAL AND METHODS: Twenty-five simulated root canals in acrylic-resin blocks (with 20° curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and the 8×8 cm FoV selection). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was established at 15 mm. The tomographic images were imported into the iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using ImageJ. Five acrylic-resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation produced by the rotary systems with each software package individually. Student's t-test for paired samples was used to compare the ability of each software package to superpose and subtract images for one rotary system at a time. RESULTS: The apical transportation values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus produced similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). CONCLUSION: Wizard CD Plus and ProTaper Universal produced little apical transportation. Regeemy is a feasible software tool for superposing and subtracting images and appears to be an alternative to Adobe Photoshop.
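
    The statistical comparison described above can be reproduced in outline with standard independent-samples and paired t-tests; the sketch below uses SciPy with placeholder measurement arrays, since the study's raw data are not given in the abstract.

        # Hedged sketch of the abstract's statistics: the arrays are placeholders,
        # not the study's measurements.
        import numpy as np
        from scipy import stats

        # Apical transportation (mm) after superposition/subtraction in each software.
        protaper_regeemy   = np.array([0.08, 0.10, 0.07, 0.09, 0.11])
        wizard_regeemy     = np.array([0.09, 0.08, 0.10, 0.07, 0.10])
        protaper_photoshop = np.array([0.09, 0.10, 0.08, 0.09, 0.10])

        # Independent samples: ProTaper vs Wizard CD Plus within one software package.
        t_ind, p_ind = stats.ttest_ind(protaper_regeemy, wizard_regeemy)

        # Paired samples: Regeemy vs Photoshop on the same blocks, one system at a time.
        t_rel, p_rel = stats.ttest_rel(protaper_regeemy, protaper_photoshop)

        print(f"systems (Regeemy):   t = {t_ind:.2f}, p = {p_ind:.3f}")
        print(f"software (ProTaper): t = {t_rel:.2f}, p = {p_rel:.3f}")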

    On the Bounds of Function Approximations

    Within machine learning, the subfield of Neural Architecture Search (NAS) has recently garnered research attention due to its ability to improve upon human-designed models. However, the computational requirements for finding an exact solution to this problem are often intractable, and the design of the search space still requires manual intervention. In this paper we attempt to establish a formalized framework from which we can better understand the computational bounds of NAS in relation to its search space. For this, we first reformulate the function approximation problem in terms of sequences of functions, and we call it the Function Approximation (FA) problem; then we show that it is computationally infeasible to devise a procedure that solves FA for all functions to zero error, regardless of the search space. We also show that this error will be minimal if a specific class of functions is present in the search space. Subsequently, we show that machine learning as a mathematical problem is a solution strategy for FA, albeit not an effective one, and further describe a stronger version of this approach: the Approximate Architectural Search Problem (a-ASP), which is the mathematical equivalent of NAS. We leverage the framework from this paper and results from the literature to describe the conditions under which a-ASP can potentially solve FA as well as an exhaustive search, but in polynomial time. Comment: Accepted as a full paper at ICANN 2019. The final, authenticated publication will be available at https://doi.org/10.1007/978-3-030-30487-4_3
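
    One plausible way to read the FA setup, paraphrased here for orientation (these are not the paper's exact definitions): given a target function f and a search space S of candidate functions, the task is to minimize an approximation error over S, and the infeasibility claim says that no procedure drives this error to zero for every f regardless of how S is chosen.

        % A generic formalization, assumed here for illustration:
        \mathrm{FA}(f, S):\quad
        g^{\star} \in \arg\min_{g \in S} \varepsilon(f, g),
        \qquad
        \varepsilon(f, g) = \sup_{x \in \mathcal{X}} \bigl| f(x) - g(x) \bigr|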