Development of a dynamic population model as a decision support system for Codling Moth (Cydia pomonella L.) management
In 2004 RIMpro-Cydia was developed as a dynamic population model that simulates the
within-year biology of a local codling moth population. The model is meant to be used by
growers and advisors to optimize the control of codling moth populations in organic and
integrated managed orchards. The model is based on literature data and unpublished
research data. Fractional boxcar trains are used to mimic the dispersion in the
developmental processes. The model is run in real time on the data input of local weather
stations, starting on 1 January. The output of the model was compared with the results of
field observations in an untreated orchard over three years. From 2005 to 2007 the
progress of egg deposition as predicted by the model was in general agreement with the
field data. The start of the egg deposition period was predicted well. The end of the egg
deposition period was predicted when about 10% of the eggs in the field were still to be
laid. There was no consistent relation between cumulated pheromone trap catches
and the cumulative egg deposition as calculated from the field data.
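The boxcar-train idea can be illustrated with a minimal sketch (all parameter values below are illustrative, not RIMpro-Cydia's): a developmental stage is split into n compartments, individuals flow from each compartment to the next at rate n/D, and the number of compartments controls how much emergence times disperse around the mean development time D.

```python
import numpy as np

def boxcar_train(n_boxes, dev_time, dt, t_end):
    """Pass a cohort through a boxcar train of n_boxes compartments.

    Flow rate out of each box is n_boxes / dev_time, so the mean
    transit time is dev_time and fewer boxes give wider dispersion
    of emergence times (an Erlang-shaped delay).
    """
    boxes = np.zeros(n_boxes)
    boxes[0] = 1.0                      # whole cohort enters box 1 at t = 0
    rate = n_boxes / dev_time
    emerged = [0.0]
    for _ in range(int(t_end / dt)):
        flow = rate * dt * boxes        # outflow from every box this step
        boxes -= flow
        boxes[1:] += flow[:-1]          # each outflow feeds the next box
        emerged.append(emerged[-1] + flow[-1])  # outflow of last box emerges
    return np.array(emerged)

# Cumulative emergence of a cohort with a 100-day mean development time.
em = boxcar_train(n_boxes=20, dev_time=100.0, dt=0.1, t_end=300.0)
```

The emergence curve is a cumulative distribution: it rises monotonically and approaches 1 once the cohort has fully developed.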
Quantum error correction in crossbar architectures
A central challenge for the scaling of quantum computing systems is the need
to control all qubits in the system without a large overhead. A solution for
this problem in classical computing comes in the form of so called crossbar
architectures. Recently we made a proposal for a large scale quantum
processor [Li et al., arXiv:1711.03807 (2017)] to be implemented in silicon
quantum dots. This system features a crossbar control architecture which limits
parallel single qubit control, but allows the scheme to overcome control
scaling issues that form a major hurdle to large scale quantum computing
systems. In this work, we develop a language that makes it possible to easily
map quantum circuits to crossbar systems, taking into account their
architecture and control limitations. Using this language we show how to map
well known quantum error correction codes such as the planar surface and color
codes in this limited control setting with only a small overhead in time. We
analyze the logical error behavior of this surface code mapping for estimated
experimental parameters of the crossbar system and conclude that logical error
suppression to a level useful for real quantum computation is feasible.
Comment: 29 + 9 pages, 13 figures, 9 tables, 8 algorithms and 3 big boxes. Comments are welcome.
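As a rough orientation for what "logical error suppression" means here, the standard surface-code heuristic p_L ≈ A·(p/p_th)^⌊(d+1)/2⌋ can be evaluated numerically. The prefactor, physical error rate, and threshold below are illustrative placeholders, not the fitted values from the crossbar analysis:

```python
# Generic surface-code suppression heuristic (illustrative numbers only):
# logical error rate p_L ~ A * (p / p_th) ** ((d + 1) // 2) at distance d.
A = 0.1        # assumed prefactor
p = 1e-3       # assumed physical error rate
p_th = 1e-2    # assumed threshold error rate

def logical_error(d):
    """Heuristic logical error rate for code distance d."""
    return A * (p / p_th) ** ((d + 1) // 2)

rates = [logical_error(d) for d in (3, 5, 7)]
```

Each distance step suppresses the logical rate by another factor of p/p_th, which is why operating well below threshold makes large computations feasible.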
Weathering product-harm crises.
Product-harm crises can seriously imperil a brand's performance. Consumers tend to weigh negative publicity heavily in product judgments, customer preferences may shift towards competing products during the recall period, and competitors often increase their advertising spending in the wake of a brand's misfortune. To counter these negative effects, brands hope to capitalize on their equity, and often use advertising as a communication device to regain customers' lost trust. We develop a multiple-event hazard model to study how consumer characteristics and advertising influence consumers' first-purchase decisions for two affected brands of peanut butter following a severe Australian product-harm crisis. Buying a recently affected brand is perceived as highly risky, making the trial purchase a first hurdle to be taken in the brand's recovery. Both pre-crisis loyalty and familiarity are found to form an important buffer against the product-harm crisis, supporting the idea that a brand's equity prior to the crisis offers resilience in the face of misfortune. Also, heavy users tend to purchase the affected brands sooner, unless their usage rate decreased significantly during the crisis. Brand advertising was found to be effective for the stronger brand, but not for the weaker brand, while competitive advertising delayed the first-purchase decision for both brands affected by the crisis.
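The hazard-model logic can be sketched in miniature (a proportional-hazards toy, with made-up coefficients; the paper's actual multiple-event specification and estimates are not reproduced here): covariates such as pre-crisis loyalty shift the hazard of the first post-crisis purchase, so higher-loyalty consumers return sooner on average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proportional-hazards sketch: hazard of a consumer's first
# post-crisis purchase, h = h0 * exp(b_loyal * loyal + b_ad * ad).
# All coefficients are illustrative, not estimated values.
h0, b_loyal, b_ad = 0.05, 0.8, 0.3

def time_to_first_purchase(loyal, ad):
    """Draw a time-to-first-purchase from a constant-hazard model."""
    rate = h0 * np.exp(b_loyal * loyal + b_ad * ad)
    return rng.exponential(1.0 / rate)   # exponential waiting time

# Compare loyal vs. non-loyal consumers, both exposed to advertising.
loyal_times = np.array([time_to_first_purchase(1, 1) for _ in range(5000)])
other_times = np.array([time_to_first_purchase(0, 1) for _ in range(5000)])
```

A positive loyalty coefficient raises the hazard and therefore shortens the expected waiting time, mirroring the buffering effect of pre-crisis equity described above.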
The relative age effect in youth and elite sport: Did 20 years of research make any difference?
In recent decades, our research team (among others) has identified obvious participation and attainment inequalities resulting from annual age grouping procedures across varying forms and levels of sport participation, and the relative age effects (RAEs) associated with them. Generally, youth born early in the selection year have selection and attainment advantages over their relatively younger peers. Twenty years ago, Helsen et al. (1998) observed that 37.9% of soccer players who were transferred from lower league teams to first division teams were born in the first three months of the selection year, while only 12.3% were born in the final three months. Almost a decade ago, Baker et al. (2010) observed that over 35% of players in two amateur developmental ice hockey leagues were born in the first three months of the selection year, while less than 10% were born in the final three months. Over-representation of relatively older players has been consistently observed in a variety of sports (Cobley et al., 2009; Musch & Grondin, 2001). I will discuss the (dis)advantages in selection and attainment that are considered RAEs (Wattie et al., 2008) and how they have changed (or not) over the past 20 years.
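The statistical test typically used for such birth-quarter skews is a chi-square goodness-of-fit test against a uniform quarterly distribution. The sketch below uses hypothetical counts shaped like the percentages reported by Helsen et al. (1998); the sample size of 1000 and the two middle-quarter counts are assumed, not taken from the study:

```python
# Chi-square goodness-of-fit test of birth-quarter counts against a
# uniform 25%-per-quarter expectation. Counts are hypothetical
# (assumed n = 1000; Q1 and Q4 match the 37.9% / 12.3% percentages).
observed = [379, 288, 210, 123]            # births in Q1..Q4 of selection year
expected = [sum(observed) / 4] * 4         # uniform expectation: 250 each

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

With 3 degrees of freedom, the 0.001 critical value is about 16.27, so a skew of this magnitude would be highly significant.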
Multi-qubit Randomized Benchmarking Using Few Samples
Randomized benchmarking (RB) is an efficient and robust method to
characterize gate errors in quantum circuits. Averaging over random sequences
of gates leads to estimates of gate errors in terms of the average fidelity.
These estimates are isolated from the state preparation and measurement errors
that plague other methods like channel tomography and direct fidelity
estimation. A decisive factor in the feasibility of randomized benchmarking is
the number of sampled sequences required to obtain rigorous confidence
intervals. Previous bounds were either prohibitively loose or required the
number of sampled sequences to scale exponentially with the number of qubits in
order to obtain a fixed confidence interval at a fixed error rate. Here we show
that, with a small adaptation to the randomized benchmarking procedure, the
number of sampled sequences required for a fixed confidence interval is
dramatically smaller than could previously be justified. In particular, we show
that the number of sampled sequences required is essentially independent of the
number of qubits and scales favorably with the average error rate of the system
under investigation. We also show that the number of samples required for long
sequence lengths can be made substantially smaller than previous rigorous
results (even for single qubits) as long as the noise process under
investigation is not unitary. Our results bring rigorous randomized
benchmarking on systems with many qubits into the realm of experimental
feasibility.
Comment: v3: Added discussion of the impact of variance heteroskedasticity on the RB fitting procedure. Close to published version.
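The fitting step that RB rests on can be sketched with toy numbers (all parameters illustrative; this sketch says nothing about the paper's actual contribution, the sample-complexity bounds): survival probabilities decay as A·p^m + B with sequence length m, and fitting p yields the average error per Clifford, r = (1 − p)/2 for a single qubit.

```python
import numpy as np

# Toy single-qubit RB experiment: simulate binomially noisy survival
# probabilities decaying as A * p**m + B, then recover p by a fit.
rng = np.random.default_rng(1)
A, B, p_true = 0.5, 0.5, 0.99          # assumed SPAM-symmetric decay
lengths = np.arange(1, 101, 5)          # sequence lengths m
n_seq = 500                             # random sequences per length
surv = np.array([
    rng.binomial(n_seq, A * p_true**m + B) / n_seq
    for m in lengths
])

# With B fixed at 1/2, the decay is linear in log space:
# log(surv - B) = log(A) + m * log(p), so the slope gives log(p).
slope, _intercept = np.polyfit(lengths, np.log(surv - B), 1)
p_est = np.exp(slope)
r_est = (1 - p_est) / 2                 # estimated average error per Clifford
```

The fit recovers p (and hence r) without needing to characterize state preparation or measurement separately, which is the SPAM-robustness property the abstract refers to.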