    Political Trust as a determinant of Volatile Vote Intentions: Separating Within- from Between-persons effects

    This article studies the oft-assumed destabilizing effect of political distrust on party preferences. We argue that there are two mechanisms that relate political trust to electoral volatility: (1) structurally low trust undermines the formation of stable party preferences and thereby stimulates volatility, and (2) declining trust drives voters, particularly supporters of parties in government, to change party preference. These rival mechanisms are often conflated. Using the within–between random effects approach on two extensive panel data sets (covering three different governmental periods in the Netherlands between 2006 and 2017) allows us to separate both mechanisms and estimate them simultaneously. We find evidence for both the structural and the dynamic effects of political trust on changing vote intentions.
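The within–between decomposition used above can be sketched with person-mean centering: the person mean captures the structural (between-person) trust level, and the deviation from it captures trust change (within-person). The variable names and toy panel below are hypothetical illustrations, not the article's data.

```python
import numpy as np

def within_between(x, person_ids):
    """Split a time-varying predictor into between- and within-person parts.
    between = the person's own mean (structural level, e.g. chronically low trust);
    within  = deviation from that mean (dynamics, e.g. declining trust)."""
    x = np.asarray(x, dtype=float)
    ids = np.asarray(person_ids)
    between = np.empty_like(x)
    for pid in np.unique(ids):
        mask = ids == pid
        between[mask] = x[mask].mean()
    within = x - between
    return within, between

# hypothetical toy panel: 3 persons observed over 4 waves of a trust score
trust = np.array([2, 2, 1, 1,  5, 5, 5, 5,  4, 3, 2, 1], dtype=float)
person = np.repeat([1, 2, 3], 4)
w, b = within_between(trust, person)
```

Entering both components as predictors in a random-effects model then lets the structural and dynamic effects be estimated simultaneously, since the within-component is orthogonal to each person's mean by construction.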

    Spatially encoded light for Large-alphabet Quantum Key Distribution

    Most Quantum Key Distribution protocols use a two-dimensional basis, such as HV polarization, as first proposed by Bennett and Brassard in 1984. These protocols are consequently limited to a key generation density of 1 bit per photon. We increase this key density by encoding information in the transverse spatial displacement of the photons used. Employing this higher-dimensional Hilbert space together with modern single-photon-detecting cameras, we demonstrate a proof-of-principle large-alphabet Quantum Key Distribution experiment with 1024 symbols and a shared information between sender and receiver of 7 bits per photon.
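The key-density figures above can be checked with a short calculation: a d-symbol alphabet carries at most log2(d) bits per photon. The d-ary symmetric channel used below is an illustrative noise model (an assumption, not the experiment's actual channel) showing why the shared information of 7 bits falls short of the 10-bit ideal for 1024 symbols.

```python
import math

def key_density_bits(d):
    """Ideal raw key bits encodable per photon for a d-symbol alphabet."""
    return math.log2(d)

def symmetric_channel_info(d, e):
    """Mutual information (bits/symbol) of a d-ary symmetric channel with
    symbol error probability e spread uniformly over the d-1 wrong symbols."""
    if e == 0.0:
        return math.log2(d)
    h2 = -(e * math.log2(e) + (1 - e) * math.log2(1 - e))  # binary entropy of e
    return math.log2(d) - h2 - e * math.log2(d - 1)

# BB84's two-dimensional polarization basis: 1 bit per photon
bb84 = key_density_bits(2)
# 1024 transverse spatial positions: up to 10 bits per photon
spatial = key_density_bits(1024)
```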

    Condensation phase transitions of symmetric conserved-mass aggregation model on complex networks

    We investigate condensation phase transitions of the symmetric conserved-mass aggregation (SCA) model on random networks (RNs) and scale-free networks (SFNs) with degree distribution P(k) ~ k^{-γ}. In the SCA model, masses diffuse with unit rate, and a unit mass chips off from a mass with rate ω. The dynamics conserves the total mass density ρ. In the steady state, on RNs and on SFNs with γ > 3 for ω ≠ ∞, we numerically show that the SCA model undergoes the same type of condensation transitions as those on regular lattices. However, the critical line ρ_c(ω) depends on the network structure. On SFNs with γ ≤ 3, the fluid phase of exponential mass distribution completely disappears and no phase transition occurs. Instead, condensation with an exponentially decaying background mass distribution always takes place for any nonzero density. To establish the existence of the condensed phase for γ ≤ 3 in the zero-density limit, we investigate the lamb-lion problem on RNs and SFNs. We numerically show that a lamb survives indefinitely with finite survival probability on RNs and on SFNs with γ > 3, and dies out exponentially on SFNs with γ ≤ 3. The finite lifetime of a lamb on SFNs with γ ≤ 3 ensures the existence of condensation in the zero-density limit, where direct numerical simulations are practically impossible. At ω = ∞, we numerically confirm that complete condensation takes place for any ρ > 0 on RNs. Together with a recent study on SFNs, this shows that complete condensation always occurs on both RNs and SFNs in the zero-range process with constant hopping rate.
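A minimal sketch of the SCA dynamics described above, run on a one-dimensional ring rather than a network (a simplification for brevity; the paper's simulations are on RNs and SFNs). Each update either hops a whole mass to a random neighbor (diffusion, unit rate) or chips a single unit off (rate ω), and the total mass is conserved throughout.

```python
import random

def sca_step(mass, omega, rng):
    """One Monte Carlo update of the symmetric conserved-mass aggregation model
    on a ring: whole-mass diffusion at unit rate, unit-mass chipping at rate omega."""
    n = len(mass)
    i = rng.randrange(n)
    if mass[i] == 0:
        return
    j = (i + rng.choice((-1, 1))) % n  # random nearest neighbor
    if rng.random() < 1.0 / (1.0 + omega):
        mass[j] += mass[i]             # diffusion: the entire stack hops and aggregates
        mass[i] = 0
    else:
        mass[i] -= 1                   # chipping: a single unit hops off
        mass[j] += 1

rng = random.Random(0)
mass = [1] * 64                        # initial density rho = 1
for _ in range(10000):
    sca_step(mass, omega=1.0, rng=rng)
assert sum(mass) == 64                 # total mass density is conserved
```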

    Unusual features of coarsening with detachment rates decreasing with cluster mass

    We study conserved one-dimensional models of particle diffusion, attachment to and detachment from clusters, where the detachment rates decrease with increasing cluster size as gamma(m) ~ m^{-k}, k > 0. Heuristic scaling arguments based on random walk properties show that the typical cluster size scales as (t/ln(t))^z, with z = 1/(k+2). The initially symmetric flux of particles between neighboring clusters is followed by an effectively asymmetric flux due to the unbalanced detachment rates, which leads to the above logarithmic correction. Small clusters have densities of order t^{-m z(1)}, with z(1) = k/(k+2). Thus, for k < 1, the small clusters (mass of order unity) are statistically dominant and the average cluster size does not scale as the size of typically large clusters does. We also solve the master equation of the model under an independent interval approximation, which yields cluster distributions and exponent relations and gives the correct dominant coarsening exponent after accounting for the effects of correlations. The coarsening of large clusters is described by the distribution P_t(m) ~ 1/t^y f(m/t^z), with y = 2z. All results are confirmed by simulations, which also illustrate the unusual features of the cluster size distributions, with a power-law decay for small masses and a negatively skewed peak in the scaling region. The detachment rates considered here can apply in the presence of strong attractive interactions, and recent applications suggest that even more rapid rate decays are also physically realistic.
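The exponent relations quoted above can be collected in a small helper; the values follow directly from the abstract, and the k < 1 regime where small clusters dominate corresponds to z(1) < z.

```python
def coarsening_exponents(k):
    """Scaling exponents for detachment rate gamma(m) ~ m^{-k}, k > 0:
    typical cluster size grows as (t / ln t)^z with z = 1/(k+2);
    small-cluster densities decay with z1 = k/(k+2);
    the large-cluster distribution scales as P_t(m) ~ t^{-y} f(m / t^z), y = 2z."""
    z = 1.0 / (k + 2.0)
    z1 = k / (k + 2.0)
    y = 2.0 * z
    return z, z1, y

z, z1, y = coarsening_exponents(0.5)
# for k < 1 small clusters are statistically dominant: z1 < z
assert z1 < z
```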

    Applicability and accuracy of an intraoral scanner for scanning multiple implants in edentulous mandibles: A pilot study

    Statement of problem. In the past 5 years, the use of intraoral digitizers has increased. However, data are lacking on the accuracy of scanning implant restorative platforms for prosthodontics with intraoral digitizers. Purpose. The purpose of this clinical pilot study was to assess the applicability and accuracy of intraoral scans by using abutments designed for scanning (scan abutments) in edentulous mandibles. Material and methods. Twenty-five participants with complete mandibular overdentures retained by 2 implants and frameworks were included in this study. Scan abutments were placed on the implants intraorally and scanned with the iTero intraoral scanner. Also, scan abutments were placed on the implant analogs of the definitive casts and scanned with an extraoral laboratory scanner (Lava Scan ST scanner). Two 3-dimensional computer-aided design models of the scan abutments with predetermined center lines were subsequently imported and registered, together with each of the scanned equivalents. The distance between the centers of the tops of the scan abutments and the angulation between the scan abutments were assessed. These values were compared with the measurements made on the 3-dimensional scans of the definitive casts, which were the participants' original definitive casts used for fabrication of soldered bars. The threshold for distance error was established to be 100 μm. Results. Four of the 25 intraoral scans were not suitable for research because the intraoral scanner was not able to stitch the separate scans together. Five of the 21 suitable scans demonstrated an interimplant distance error >100 μm. Three of the 25 intraoral scans showed interimplant angulation errors >0.4 degrees. Only 1 scan showed both an acceptable interimplant distance and an acceptable interimplant angulation. Conclusions. Based on the intraoral scans obtained in this study, distance and angulation errors were too large to fabricate well-fitting frameworks on implants in edentulous mandibles. The main reason for the unreliable scans seemed to be the lack of anatomic landmarks for scanning.
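The distance and angulation comparison can be sketched as follows. This is a hypothetical helper working on already-registered center points and center-line directions; the study's actual pipeline (scan stitching, CAD model registration) is more involved.

```python
import numpy as np

def interimplant_error(c1, c2, a1, a2, c1_ref, c2_ref, a1_ref, a2_ref):
    """Compare an intraoral scan against the definitive-cast reference.
    c* are 3D centers of the scan-abutment tops, a* their center-line directions.
    Returns (distance error in input units, angulation error in degrees)."""
    d_scan = np.linalg.norm(np.subtract(c1, c2))
    d_ref = np.linalg.norm(np.subtract(c1_ref, c2_ref))

    def angle_deg(u, v):
        u, v = np.asarray(u, float), np.asarray(v, float)
        cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    return abs(d_scan - d_ref), abs(angle_deg(a1, a2) - angle_deg(a1_ref, a2_ref))

# hypothetical example: interimplant distance off by 0.05 mm (50 μm), axes identical
d_err, a_err = interimplant_error((0, 0, 0), (20.05, 0, 0), (0, 0, 1), (0, 0, 1),
                                  (0, 0, 0), (20.00, 0, 0), (0, 0, 1), (0, 0, 1))
```

Against the study's thresholds, a scan would be acceptable when the distance error is at most 100 μm and the angulation error at most 0.4 degrees.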

    Boson Sampling in Low-depth Optical Systems

    Optical losses are the main obstacle to demonstrating a quantum advantage via boson sampling without leaving open the possibility of classical spoofing. We propose a method for generating low-depth optical circuits suitable for boson sampling with very high efficiencies. Our circuits require only a constant number of optical components (namely three) to implement an optical transformation suitable for demonstrating a quantum advantage. Consequently, our proposal has a constant optical loss regardless of the number of optical modes. We argue that sampling from our family of circuits is computationally hard by providing numerical evidence that our family of circuits converges to that of the original boson sampling proposal in the limit of large optical systems. Our work opens a new route to demonstrate an optical quantum advantage.
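The computational hardness invoked here rests on the fact that boson sampling output amplitudes are permanents of submatrices of the interferometer unitary, and the permanent is classically expensive to compute. A standard sketch via Ryser's formula (the general connection, not this paper's specific circuit family):

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2).
    Boson sampling amplitudes are permanents of submatrices of the
    interferometer unitary, which underlies the hardness argument."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        sign = (-1) ** (n - r)
        for S in combinations(range(n), r):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in S)
            total += sign * prod
    return total
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known; exact evaluation is #P-hard, which is why loss-free sampling is believed classically intractable.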

    Optimizing spontaneous parametric down-conversion sources for boson sampling

    An important step for photonic quantum technologies is the demonstration of a quantum advantage through boson sampling. In order to prevent classical simulability of boson sampling, the photons need to be almost perfectly identical and almost without losses. These two requirements are connected through spectral filtering: improving one leads to a decrease in the other. A proven method of generating single photons is spontaneous parametric down-conversion (SPDC). We show that an optimal trade-off between indistinguishability and losses can always be found for SPDC. We conclude that a 50-photon scattershot boson-sampling experiment using SPDC sources is possible from a computational complexity point of view. To this end, we numerically optimize SPDC sources in the regime of weak pumping and with a single spatial mode.
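The filtering trade-off can be illustrated with a toy spectrally correlated Gaussian joint spectral amplitude (an assumed model for illustration; the paper numerically optimizes real SPDC sources). Filtering one arm raises the heralded photon's spectral purity, computed from the Schmidt (SVD) decomposition, but lowers the transmitted fraction, i.e. adds loss.

```python
import numpy as np

def filter_tradeoff(sigma_filter, sigma_plus=1.0, sigma_minus=0.25, n=200, span=4.0):
    """Toy Gaussian joint spectral amplitude with spectral correlations; apply a
    Gaussian filter of width sigma_filter to the signal arm and return
    (heralded purity, transmitted fraction)."""
    w = np.linspace(-span, span, n)
    ws, wi = np.meshgrid(w, w, indexing="ij")
    jsa = np.exp(-(ws + wi) ** 2 / (4 * sigma_plus ** 2)
                 - (ws - wi) ** 2 / (4 * sigma_minus ** 2))
    jsa_f = jsa * np.exp(-ws ** 2 / (2 * sigma_filter ** 2))
    transmission = np.sum(np.abs(jsa_f) ** 2) / np.sum(np.abs(jsa) ** 2)
    s = np.linalg.svd(jsa_f, compute_uv=False)      # Schmidt coefficients
    p = s ** 2 / np.sum(s ** 2)                     # Schmidt probabilities
    purity = np.sum(p ** 2)                         # 1 for a separable (pure) state
    return purity, transmission

purity_loose, trans_loose = filter_tradeoff(sigma_filter=5.0)  # weak filtering
purity_tight, trans_tight = filter_tradeoff(sigma_filter=0.2)  # strong filtering
# tighter filtering raises indistinguishability but increases loss
```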

    Short-Term Hyperglycemic Dysregulation in Patients With Type 1 Diabetes Does Not Change Myocardial Triglyceride Content or Myocardial Function

    OBJECTIVE—To evaluate the effects of hyperglycemia due to partial insulin deprivation on myocardial triglyceride (TG) content and myocardial function in patients with type 1 diabetes.

    Hands4U: the effects of a multifaceted implementation strategy on hand eczema prevalence in a healthcare setting. Results of a randomized controlled trial

    Background. Healthcare workers have an increased risk of developing hand eczema. A multifaceted implementation strategy was developed to implement a guideline to prevent hand eczema among healthcare workers. Objectives. To investigate the effects of the implementation strategy on self-reported hand eczema and preventive behaviour. Methods. A randomized controlled trial was performed. A total of 48 departments (n = 1649) were randomly allocated to the multifaceted implementation strategy or the control group. The strategy consisted of education, participatory working groups, and role models. Outcome measures were self-reported hand eczema and preventive behaviour. Data were collected at baseline and at 3, 6, 9 and 12 months of follow-up. Results. Twelve months after baseline, participants in the intervention group were significantly more likely to report hand eczema [odds ratio (OR) 1.45; 95% confidence interval (CI) 1.03-2.04] than participants in the control group; they also reported significantly less hand washing (B, -0.38; 95% CI: -0.48 to -0.27), significantly more frequent use of a moisturizer (B, 0.30; 95% CI: 0.22-0.39), and were more likely to report wearing cotton undergloves (OR 6.33; 95% CI: 3.23-12.41). Conclusions. The strategy implemented can be used in practice, as it showed positive effects on preventive behaviour. More research is needed to investigate the unexpected effects on hand eczema.
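Odds ratios with Wald confidence intervals like those reported above can be computed from a 2x2 table with a standard formula. The counts below are hypothetical (the trial's raw counts are not given in the abstract); only the calculation itself is standard.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = intervention cases, b = intervention non-cases,
    c = control cases,      d = control non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```

An interval excluding 1 (as with the reported OR 1.45; 95% CI 1.03-2.04) indicates a statistically significant difference between the groups.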