
    Injection locking of two frequency-doubled lasers with 3.2 GHz offset for driving Raman transitions with low photon scattering in $^{43}$Ca$^+$

    We describe the injection locking of two infrared (794 nm) laser diodes which are each part of a frequency-doubled laser system. An acousto-optic modulator (AOM) in the injection path gives an offset of 1.6 GHz between the lasers for driving Raman transitions between states in the hyperfine-split (by 3.2 GHz) ground level of $^{43}$Ca$^+$. The offset can be disabled for use in $^{40}$Ca$^+$. We measure the relative linewidth of the frequency-doubled beams to be 42 mHz in an optical heterodyne measurement. The use of both injection locking and frequency doubling combines spectral purity with high optical power. Our scheme is applicable for providing Raman beams for other ion species and neutral atoms where coherent optical manipulation is required. Comment: 3 pages, 3 figures
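    A short numerical check may help reconcile the two offsets quoted above, assuming (as one plausible reading of the abstract) that the 1.6 GHz offset is imposed between the infrared seed lasers: second-harmonic generation doubles the optical frequency, so any frequency difference between the seeds is also doubled in the frequency-doubled output. The sketch below uses only values stated in the abstract.

    # Sketch: offset budget for the frequency-doubled Raman beams (values from the abstract).
    aom_offset_ir_hz = 1.6e9          # offset between the 794 nm seed lasers set by the AOM
    harmonic_order = 2                # second-harmonic generation: 794 nm -> 397 nm
    hyperfine_splitting_hz = 3.2e9    # 43Ca+ ground-level hyperfine splitting

    # Frequency differences scale with the harmonic order under frequency doubling.
    offset_doubled_hz = harmonic_order * aom_offset_ir_hz
    assert abs(offset_doubled_hz - hyperfine_splitting_hz) < 1e6
    print(f"Raman beam offset after doubling: {offset_doubled_hz / 1e9:.1f} GHz")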

    Atypical late-time singular regimes accurately diagnosed in stagnation-point-type solutions of 3D Euler flows

    We revisit, both numerically and analytically, the finite-time blowup of the infinite-energy solution of the 3D Euler equations of stagnation-point type introduced by Gibbon et al. (1999). By employing the method of mapping to regular systems, presented in Bustamante (2011) and extended to the symmetry-plane case by Mulungye et al. (2015), we establish a curious property of this solution that was not observed in earlier studies: close to but before the singularity time, the blowup goes from a fast transient to a slower regime that is well resolved spectrally, even at mid-resolutions of $512^2$. This late-time regime has an atypical spectrum: it is Gaussian rather than exponential in the wavenumbers. The analyticity-strip width decays to zero in a finite time, albeit so slowly that it remains well above the collocation-point scale for all simulation times $t < T^* - 10^{-9000}$, where $T^*$ is the singularity time. Reaching such a proximity to the singularity time is not possible in the original temporal variable, because floating-point double precision ($\approx 10^{-16}$) creates a `machine-epsilon' barrier. Due to this limitation on the \emph{original} independent variable, the mapped variables now provide an improved assessment of the relevant blowup quantities, crucially with acceptable accuracy at an unprecedented closeness to the singularity time: $T^* - t \approx 10^{-140}$.
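    The abstract does not spell out the mapping of Bustamante (2011); as a hedged illustration of why a mapped time variable evades the machine-epsilon barrier, consider a simple logarithmic reparametrisation of the time to blowup:
    \[
      \tau = -\ln\left(T^{*}-t\right), \qquad
      T^{*}-t = 10^{-9000} \;\Longleftrightarrow\; \tau = 9000\ln 10 \approx 2.07\times 10^{4},
    \]
    a value that double precision represents comfortably, whereas in the original variable any $T^{*}-t \lesssim 10^{-16}\,T^{*}$ rounds to $t = T^{*}$.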

    Against Game Theory

    People make choices. Often, the outcome depends on choices other people make. What mental steps do people go through when making such choices? Game theory, the most influential model of choice in economics and the social sciences, offers an answer, one based on games of strategy such as chess and checkers: the chooser considers the choices that others will make and makes a choice that will lead to a better outcome for the chooser, given all those choices by other people. It is well established in the social sciences that classical game theory (even when heavily modified) is bad at predicting behavior. But instead of abandoning classical game theory, those in the social sciences have mounted a rescue operation under the name of “behavioral game theory.” Its main tool is to propose systematic deviations from the predictions of game theory, deviations that arise from character type, for example. Other deviations purportedly come from cognitive overload or limitations. The fundamental idea of behavioral game theory is that, if we know the deviations, then we can correct our predictions accordingly, and so get it right. There are two problems with this rescue operation, each of them fatal. (1) For a chooser, contemplating the range of possible deviations, of which there are many dozens, actually makes it exponentially harder to figure out a path to an outcome. This makes the theoretical models useless for modeling human thought or human behavior in general. (2) Modeling deviations is helpful only if the deviations are consistent, so that scientists (and indeed decision makers) can make predictions about future choices on the basis of past choices. But the deviations are not consistent. In general, deviations from classical models are not consistent for any individual from one task to the next or between individuals for the same task. In addition, people’s beliefs are in general not consistent with their choices. Accordingly, the hope that we can construct a general behavioral game theory is hollow. What can replace it? We survey some of the emerging candidates.
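    For readers unfamiliar with the baseline being criticized, the sketch below shows the classical prediction step the abstract describes: each chooser best-responds to the other's anticipated choice, and a pair of mutual best responses is a Nash equilibrium. The 2x2 game and its payoffs are hypothetical illustrations, not taken from the paper.

    # Minimal illustration (hypothetical game): pure-strategy Nash equilibria of a 2x2 game.
    import itertools

    actions = ["cooperate", "defect"]
    payoffs = {  # (row action, col action) -> (row payoff, col payoff); prisoner's-dilemma-like
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def is_equilibrium(r, c):
        # Each player's action must be a best response to the other player's action.
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in actions)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in actions)
        return row_ok and col_ok

    print([pair for pair in itertools.product(actions, actions) if is_equilibrium(*pair)])
    # -> [('defect', 'defect')]: the classical prediction, regardless of what people actually choose.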

    Can We Build Behavioral Game Theory?

    The way economists and other social scientists model how people make interdependent decisions is through the theory of games. Psychologists and behavioral economists, however, have established many deviations from the predictions of game theory. In response to these findings, a broad movement has arisen to salvage the core of game theory. Extant models of interdependent decision-making try to improve their explanatory domain by adding some corrective terms or limits. We will make the argument that this approach is misguided. For this approach to work, the deviations would have to be consistent. Drawing in part on our experimental results, we will argue that deviations from classical models are not consistent for any individual from one task to the next or between individuals for the same task. In turn, the problem of finding an equilibrium strategy is not easier but rather is exponentially more difficult. It does not seem that game theory can be repaired by adding corrective terms (such as consideration of personal characteristics, social norms, heuristic or bias terms, or cognitive limits on choice and learning). In what follows, we describe new methods for investigating interdependent decision-making. Our experimental results show that people do not choose consistently, do not hold consistent beliefs, and do not in general align actions and beliefs. We will show that experimental choices are inconsistent in ways that prevent us from drawing general characterizations of an individual’s choices or beliefs or of the general population’s choices and beliefs. A general behavioral game theory seems a distant and, at present, unfulfilled hope.

    Use of cohesive elements in fatigue analysis

    Cohesive laws describe the resistance to incipient separation of material surfaces. A cohesive finite element is formulated on the basis of a particular cohesive law. Cohesive elements are placed at the boundary between adjacent standard volume finite elements to model fatigue damage that leads to fracture as the element boundaries separate according to the cohesive law. In this work, a cohesive model for fatigue crack initiation is taken to be the irreversible loading-unloading hysteresis that represents fatigue damage occurring due to cyclic loads leading to the initiation of small cracks. Various cohesive laws are reviewed and one is selected that incorporates hysteretic cyclic loading accounting for energetic dissipative mechanisms. A mathematical representation is developed based on an exponential effective load-separation cohesive relationship. A three-dimensional cohesive element is defined using this compliance relationship integrated at four points on the mid-surface of the area element. Implementation into finite element software is discussed, and particular attention is paid to numerical convergence issues as the inflection point between loading and unloading in the cohesive law is encountered. A simple example of a displacement-controlled fatigue test is presented in a finite element simulation. Comments are made on applications of the method to prediction of fatigue life for engineering structures such as pressure vessels and piping.
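    The abstract does not reproduce the paper's exact cohesive law, so the sketch below is a minimal stand-in: a common Xu-Needleman-type exponential traction-separation envelope with irreversible secant unloading toward the origin at a single integration point. Parameter values are illustrative, and a true fatigue law would additionally accumulate damage on each unloading-reloading cycle.

    import math

    # Illustrative exponential cohesive law (not the paper's exact formulation).
    SIGMA_MAX = 50.0e6   # peak cohesive traction [Pa] (hypothetical)
    DELTA_C   = 1.0e-6   # separation at peak traction [m] (hypothetical)

    def envelope_traction(delta):
        """Exponential traction-separation envelope; peaks at SIGMA_MAX when delta == DELTA_C."""
        return SIGMA_MAX * math.e * (delta / DELTA_C) * math.exp(-delta / DELTA_C)

    class CohesivePoint:
        """One integration point with an irreversible history variable (max separation seen)."""
        def __init__(self):
            self.delta_max = 0.0

        def traction(self, delta):
            if delta >= self.delta_max:      # loading: follow the envelope and update history
                self.delta_max = delta
                return envelope_traction(delta)
            # unloading/reloading along the secant through the origin (degraded stiffness)
            secant_stiffness = envelope_traction(self.delta_max) / self.delta_max
            return secant_stiffness * delta

    # Displacement-controlled cyclic separation history, in metres:
    point = CohesivePoint()
    for delta in [0.2e-6, 0.6e-6, 0.3e-6, 0.8e-6, 0.1e-6, 1.2e-6]:
        print(f"delta = {delta:.1e} m -> traction = {point.traction(delta) / 1e6:6.2f} MPa")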

    Feature Selection of Post-Graduation Income of College Students in the United States

    This study investigated the most important attributes of the 6-year post-graduation income of college graduates who used financial aid during their time at college in the United States. The latest data released by the United States Department of Education was used. Specifically, 1,429 cohorts of graduates from three years (2001, 2003, and 2005) were included in the data analysis. Three attribute selection methods (filter methods, forward selection, and a genetic algorithm) were applied to select from 30 relevant attributes. Five groups of machine learning algorithms were applied to the dataset for classification using the best selected attribute subsets. Based on our findings, we discuss the role of neighborhood professional degree attainment, parental income, SAT scores, and family college education in post-graduation incomes, and the implications for social stratification. Comment: 14 pages, 6 tables, 3 figures
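    As a rough sketch of two of the three attribute-selection strategies named above (a univariate filter followed by forward selection) feeding a classifier, the snippet below uses scikit-learn. The file name, column names, and parameter values are hypothetical placeholders, not the paper's pipeline, and the genetic-algorithm step is omitted since it is not part of scikit-learn.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    # Hypothetical table with ~30 candidate attributes and an income-bracket label.
    df = pd.read_csv("scorecard_cohorts.csv")
    X, y = df.drop(columns=["income_bracket"]), df["income_bracket"]

    clf = RandomForestClassifier(n_estimators=200, random_state=0)

    pipeline = Pipeline([
        ("filter", SelectKBest(score_func=f_classif, k=15)),     # filter method: univariate F-test
        ("forward", SequentialFeatureSelector(clf, n_features_to_select=8,
                                              direction="forward", cv=5)),
        ("model", clf),
    ])

    scores = cross_val_score(pipeline, X, y, cv=5)
    print("mean CV accuracy:", scores.mean())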