
    RF and IF mixer optimum matching impedances extracted by large-signal vectorial measurements

    This paper introduces a new technique that allows us to measure the admittance conversion matrix of a two-port device using a Nonlinear Vector Network Analyzer. This method is applied to extract the conversion matrix of a 0.2 µm pHEMT, driven by a 4.8 GHz pump signal at different power levels, with an intermediate frequency of 600 MHz. The issue of data inconsistency due to phase randomization among different measurements is discussed, and a suitable pre-processing algorithm is proposed to fix the problem. The output of this work is a comprehensive experimental evaluation of up- and down-conversion maximum gain, stability, and optimal RF and IF impedances.
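    The pre-processing step mentioned at the end of the abstract has to cope with the arbitrary common phase that each large-signal acquisition carries. A minimal sketch of one standard way to handle this, shown here only as an illustration and not as the algorithm actually proposed in the paper, is to re-reference every measured spectrum to the phase of a chosen tone before the conversion matrix is assembled (align_phases, spectra, and ref_index are made-up names):

```python
import numpy as np

def align_phases(spectra, ref_index=0):
    """Rotate each measured complex spectrum so that the phase of a chosen
    reference tone is zero, removing the arbitrary common phase offset that
    changes from one acquisition to the next (phase randomization)."""
    aligned = []
    for s in spectra:
        s = np.asarray(s, dtype=complex)
        aligned.append(s * np.exp(-1j * np.angle(s[ref_index])))
    return aligned

# Example: three acquisitions of the same three-tone spectrum, each shifted
# by a different random common phase; after alignment they agree.
rng = np.random.default_rng(0)
true_spectrum = np.array([1.0 + 0.0j, 0.3 + 0.1j, 0.05 - 0.02j])
shots = [true_spectrum * np.exp(1j * rng.uniform(0, 2 * np.pi)) for _ in range(3)]
print(np.round(align_phases(shots), 3))
```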

    A systematic study of non-ideal contacts in integer quantum Hall systems

    In the present article we investigate the influence of the contact region on the distribution of the chemical potential in integer quantum Hall samples, as well as the longitudinal and Hall resistance as a function of the magnetic field. First, we use a standard quantum Hall sample geometry and analyse the influence of the length of the leads where current enters/leaves the sample, as well as the ratio of the contact width to the width of these leads. Furthermore, we investigate potential barriers in the current-injecting leads and the measurement arms in order to simulate non-ideal contacts. Second, we simulate nonlocal quantum Hall samples with a gating voltage applied at the metallic contacts. For such samples it has been found experimentally that both the longitudinal and Hall resistance as a function of the magnetic field can change significantly. Using the nonequilibrium network model we are able to reproduce most qualitative features of the experiments. Comment: 29 pages, 16 figures

    Minority Becomes Majority in Social Networks

    It is often observed that agents tend to imitate the behavior of their neighbors in a social network. This imitating behavior might lead to the strategic decision of adopting a public behavior that differs from what the agent believes is the right one, and this can subvert the behavior of the population as a whole. In this paper, we consider the case in which agents express preferences over two alternatives and model social pressure with the majority dynamics: at each step an agent is selected and her preference is replaced by the majority of the preferences of her neighbors. In case of a tie, the agent does not change her current preference. A profile of the agents' preferences is stable if the preference of each agent coincides with the preference of at least half of the neighbors (thus, the system is in equilibrium). We ask whether there are network topologies that are robust to social pressure. That is, we ask if there are graphs in which the majority of preferences in an initial profile always coincides with the majority of the preferences in all stable profiles reachable from that profile. We completely characterize the graphs with this robustness property by showing that this is possible only if the graph has no edge, is a clique, or is very close to a clique. In other words, except for this handful of graphs, every graph admits at least one initial profile of preferences in which the majority dynamics can subvert the initial majority. We also show that deciding whether a graph admits a minority that becomes a majority is NP-hard when the minority size is at most one quarter of the social network size. Comment: To appear in WINE 2015
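    The update rule described above is simple enough to simulate directly. The following sketch, with hypothetical names and a toy star graph, implements the asynchronous majority dynamics (pick a random agent, adopt the neighbourhood majority, keep the current preference on ties); it is an illustration of the dynamics, not code from the paper:

```python
import random

def majority_dynamics(adj, pref, steps=10_000, seed=0):
    """Asynchronous majority dynamics on a graph given as an adjacency dict.
    Preferences are +1/-1; a tie among the neighbours leaves the current
    preference unchanged."""
    rng = random.Random(seed)
    pref = dict(pref)
    nodes = list(adj)
    for _ in range(steps):
        v = rng.choice(nodes)
        s = sum(pref[u] for u in adj[v])
        if s > 0:
            pref[v] = 1
        elif s < 0:
            pref[v] = -1
        # s == 0: tie, keep pref[v]
    return pref

# Star graph with centre 0: whether the single -1 at the centre takes over
# the whole graph depends on the (random) update order.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
pref = {0: -1, 1: 1, 2: 1, 3: 1, 4: 1}
print(majority_dynamics(adj, pref))
```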

    A non-autonomous stochastic discrete time system with uniform disturbances

    The main objective of this article is to present Bayesian optimal control over a class of non-autonomous linear stochastic discrete-time systems with disturbances belonging to a family of one-parameter uniform distributions. It is proved that the Bayes control for the Pareto priors is the solution of a linear system of algebraic equations. For the case in which this linear system is singular, we apply optimization techniques to obtain the Bayesian optimal control. These results are extended to generalized linear stochastic systems of difference equations, providing the Bayesian optimal control for the case where the coefficients of such systems are non-square matrices. The paper extends the results developed by the authors for systems with disturbances belonging to the exponential family.
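    As a purely numerical illustration of the computation the abstract describes (solve a linear algebraic system for the control, and fall back to an optimization-based solution when that system is singular or non-square), a minimal sketch could look as follows; the matrix A and vector b stand in for whatever coefficients the paper's equations actually produce:

```python
import numpy as np

def solve_control_system(A, b):
    """Solve the algebraic system A x = b that determines the control.
    If A is singular (or not square), fall back to the minimum-norm
    least-squares solution, mimicking the optimization-based treatment
    of the degenerate case."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    b = np.asarray(b, dtype=float)
    try:
        return np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

print(solve_control_system([[2.0, 0.0], [0.0, 3.0]], [4.0, 9.0]))  # regular case
print(solve_control_system([[1.0, 1.0], [1.0, 1.0]], [2.0, 2.0]))  # singular case
```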

    Minimum Decision Cost for Quantum Ensembles

    For a given ensemble of N independent and identically prepared particles, we calculate the binary decision costs of different strategies for measurement of polarised spin-1/2 particles. The result proves that, for any given values of the prior probabilities and any number of constituent particles, the cost for a combined measurement is always less than or equal to that for any combination of separate measurements upon sub-ensembles. The Bayes cost, which is that associated with the optimal strategy (i.e., a combined measurement), is obtained in a simple closed form. Comment: 11 pages, uses RevTeX
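    For context, the generic closed form for this kind of binary decision problem is the Helstrom bound: for two pure states occurring with prior probabilities p_0 and p_1, a combined (collective) measurement on N independent copies achieves the minimum error probability below. This is the textbook expression, quoted here as orientation rather than as the paper's exact cost function:

```latex
P_{\mathrm{err}}^{\min}(N)
  = \frac{1}{2}\left(1-\sqrt{1-4\,p_0\,p_1\,\bigl|\langle\psi_0|\psi_1\rangle\bigr|^{2N}}\right)
```

    Because the overlap enters as |<psi_0|psi_1>|^(2N), the cost achievable by a combined measurement decreases monotonically with the ensemble size N.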

    Phenomenological approach to the critical dynamics of the QCD phase transition revisited

    The phenomenological dynamics of the QCD critical phenomena is revisited. Recently, Son and Stephanov claimed that the dynamical universality class of the QCD phase transition belongs to model H. In their discussion, they employed a time-dependent Ginzburg-Landau equation for the net baryon number density, which is a conserved quantity. We derive the Langevin equation for the net baryon number density, i.e., the Cahn-Hilliard equation. Furthermore, they discussed the mode coupling induced through the irreversible current. Here, we show that the reversible coupling can play a dominant role in describing the QCD critical dynamics and that the dynamical universality class does not necessarily belong to model H. Comment: 13 pages, the Curie principle is discussed in Sec. 2, to appear in J. Phys.
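    For orientation, the conserved ("model B") Langevin equation referred to above has the generic Cahn-Hilliard form below, with n the net baryon number density, F[n] a free-energy functional, Gamma a transport coefficient, T the temperature, and xi a conserving noise fixed by the fluctuation-dissipation relation; this is the standard textbook form, not necessarily the precise equation derived in the paper:

```latex
\partial_t n(\mathbf{x},t)
  = \Gamma\,\nabla^2 \frac{\delta F[n]}{\delta n(\mathbf{x},t)}
    + \nabla\!\cdot\!\boldsymbol{\xi}(\mathbf{x},t),
\qquad
\langle \xi_i(\mathbf{x},t)\,\xi_j(\mathbf{x}',t') \rangle
  = 2\,\Gamma\,T\,\delta_{ij}\,\delta(\mathbf{x}-\mathbf{x}')\,\delta(t-t')
```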

    Majority Dynamics and Aggregation of Information in Social Networks

    Consider n individuals who, by popular vote, choose among q >= 2 alternatives, one of which is "better" than the others. Assume that each individual votes independently at random, and that the probability of voting for the better alternative is larger than the probability of voting for any other. It follows from the law of large numbers that a plurality vote among the n individuals would result in the correct outcome, with probability approaching one exponentially quickly as n tends to infinity. Our interest in this paper is in a variant of the process above where, after forming their initial opinions, the voters update their decisions based on some interaction with their neighbors in a social network. Our main example is "majority dynamics", in which each voter adopts the most popular opinion among its friends. The interaction repeats for some number of rounds and is then followed by a population-wide plurality vote. The question we tackle is that of "efficient aggregation of information": in which cases is the better alternative chosen with probability approaching one as n tends to infinity? Conversely, for which sequences of growing graphs does aggregation fail, so that the wrong alternative gets chosen with probability bounded away from zero? We construct a family of examples in which interaction prevents efficient aggregation of information, and give a condition on the social network which ensures that aggregation occurs. For the case of majority dynamics we also investigate the question of unanimity in the limit. In particular, if the voters' social network is an expander graph, we show that if the initial population is sufficiently biased towards a particular alternative then that alternative will eventually become the unanimous preference of the entire population. Comment: 22 pages
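    For the two-alternative case the exponential convergence invoked at the start of the abstract is a standard Chernoff/Hoeffding estimate (stated here for orientation, not taken from the paper): if each of the n voters independently picks the better alternative with probability p > 1/2, then

```latex
\Pr\bigl[\text{the initial plurality vote is wrong}\bigr]
  \le \exp\!\bigl(-2n\,(p-\tfrac{1}{2})^{2}\bigr)
```

    which tends to zero exponentially fast as n grows; the paper's question is whether this guarantee survives the intermediate rounds of majority dynamics.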

    Error estimates for solid-state density-functional theory predictions: an overview by means of the ground-state elemental crystals

    Predictions of observable properties by density-functional theory (DFT) calculations are increasingly used as data in experimental condensed-matter physics and materials engineering. These predictions are used to analyze recent measurements or to plan future experiments. More and more experimental scientists in these fields therefore face the natural question: what is the expected error for such an ab initio prediction? Information and experience about this question is scattered over two decades of literature. The present review aims to summarize and quantify this implicit knowledge. This leads to a practical protocol that allows any scientist - experimental or theoretical - to determine justifiable error estimates for many basic property predictions, without having to perform additional DFT calculations. A central role is played by a large and diverse test set of crystalline solids, containing all ground-state elemental crystals (except most lanthanides). For several properties of each crystal, the difference between DFT results and experimental values is assessed. We discuss trends in these deviations and review explanations suggested in the literature. A prerequisite for such an error analysis is that different implementations of the same first-principles formalism provide the same predictions. Therefore, the reproducibility of predictions across several mainstream methods and codes is discussed too. A quality factor Delta expresses the spread in predictions from two distinct DFT implementations by a single number. To compare the PAW method to the highly accurate APW+lo approach, a code assessment of VASP and GPAW with respect to WIEN2k yields Delta values of 1.9 and 3.3 meV/atom, respectively. These differences are an order of magnitude smaller than the typical difference with experiment, and therefore predictions by APW+lo and PAW are for practical purposes identical. Comment: 27 pages, 20 figures, supplementary material available (v5 contains updated supplementary material)
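    At its core, the quality factor Delta is a root-mean-square difference between the E(V) equation-of-state curves produced by two codes, integrated over a volume window around equilibrium and expressed per atom. The sketch below shows just that integral, assuming both energy curves are already tabulated on a common per-atom volume grid; the published protocol additionally fits a Birch-Murnaghan equation of state to each code's data before comparing, so this is a simplified illustration rather than the official Delta implementation:

```python
import numpy as np

def delta_gauge(volumes, energy_a, energy_b):
    """RMS difference between two E(V) curves on a common volume grid:
        Delta = sqrt( integral (E_a - E_b)^2 dV / (V_max - V_min) ).
    With volumes in A^3/atom and energies in eV/atom, Delta is in eV/atom."""
    v = np.asarray(volumes, dtype=float)
    d2 = (np.asarray(energy_a, dtype=float) - np.asarray(energy_b, dtype=float)) ** 2
    integral = np.sum(0.5 * (d2[1:] + d2[:-1]) * np.diff(v))  # trapezoid rule
    return np.sqrt(integral / (v[-1] - v[0]))

# Toy example: two parabolic E(V) curves differing by a constant 2 meV/atom.
v = np.linspace(18.8, 21.2, 121)           # roughly +/- 6% around V0 = 20 A^3
ea = 0.05 * (v - 20.0) ** 2
eb = ea + 0.002
print(delta_gauge(v, ea, eb) * 1000, "meV/atom")   # ~2.0
```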

    Pair creation: back-reactions and damping

    We solve the quantum Vlasov equation for fermions and bosons, incorporating spontaneous pair creation in the presence of back-reactions and collisions. Pair creation is initiated by an external impulse field and the source term is non-Markovian. A simultaneous solution of Maxwell's equation in the presence of feedback yields an internal current and electric field that exhibit plasma oscillations with a period tau_pl. Allowing for collisions, these oscillations are damped on a time-scale, tau_r, determined by the collision frequency. Plasma oscillations cannot affect the early stages of the formation of a quark-gluon plasma unless tau_r >> tau_pl and tau_pl approx. 1/Lambda_QCD approx. 1 fm/c. Comment: 16 pages, 6 figures, REVTeX, epsfig.sty
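    Schematically, the feedback loop and the collision term described above are usually written in the forms below (standard back-reaction and relaxation-time expressions, shown for orientation rather than quoted from the paper): the internal current produced by the created pairs drives Maxwell's equation, and collisions relax the distribution function towards equilibrium on the time-scale tau_r,

```latex
\dot{E}(t) = -\,j(t),
\qquad
\left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}}
  = -\,\frac{f - f_{\mathrm{eq}}}{\tau_r},
\qquad
\tau_{\mathrm{pl}} \approx \frac{1}{\Lambda_{\mathrm{QCD}}} \approx 1\ \mathrm{fm}/c .
```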

    Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves

    Kepler provides light curves of 156,000 stars with unprecedented precision. However, the raw data as they come from the spacecraft contain significant systematic and stochastic errors. These errors, which include discontinuities, systematic trends, and outliers, obscure the astrophysical signals in the light curves. Correcting these errors is the task of the Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline. The original version of PDC in Kepler did not meet the extremely high performance requirements for the detection of minuscule planet transits or for highly accurate analysis of stellar activity and rotation. One particular deficiency was that astrophysical features were often removed as a side effect of error removal. In this paper we introduce the completely new and significantly improved version of PDC which was implemented in Kepler SOC 8.0. This new PDC version, which utilizes a Bayesian approach for the removal of systematics, reliably corrects errors in the light curves while at the same time preserving planet transits and other astrophysically interesting signals. We describe the architecture and the algorithms of this new PDC module, show typical errors encountered in Kepler data, and illustrate the corrections using real light curve examples. Comment: Submitted to PASP. Also see companion paper "Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction" by Jeff C. Smith et al.
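    Roughly speaking, a Bayesian (maximum a posteriori) correction of this kind fits a set of systematic-trend basis vectors to each light curve while a prior keeps the fit coefficients close to values expected from similar, well-behaved stars, so that intrinsic variability is not absorbed into the fit. The sketch below is a heavily simplified, ridge-style illustration of that idea with made-up names; it is not the Kepler SOC 8.0 implementation:

```python
import numpy as np

def map_cotrend(flux, basis, prior_mean, prior_precision):
    """MAP fit of systematic-trend basis vectors to a light curve.
    Minimizes ||flux - basis @ c||^2 + (c - m)^T P (c - m), where the prior
    (mean m, precision P) pulls the coefficients towards values expected
    from comparable stars. Returns the light curve with the fit removed."""
    B = np.asarray(basis, dtype=float)            # (n_cadences, n_basis)
    P = np.asarray(prior_precision, dtype=float)  # (n_basis, n_basis)
    c = np.linalg.solve(B.T @ B + P, B.T @ flux + P @ prior_mean)
    return flux - B @ c

# Toy example: a small sinusoidal "astrophysical" signal plus one linear
# systematic trend; the weak prior is centred on a coefficient of 1.0.
t = np.linspace(0.0, 1.0, 200)
basis = np.column_stack([t - t.mean()])
flux = 1e-3 * np.sin(20 * t) + 0.8 * basis[:, 0]
corrected = map_cotrend(flux, basis, np.array([1.0]), np.diag([0.5]))
print(round(float(np.std(corrected)), 5))
```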