
    Scratches from the Past: Inflationary Archaeology through Features in the Power Spectrum of Primordial Fluctuations

    Inflation may provide unique insight into the physics at the highest available energy scales that cannot be replicated in any realistic terrestrial experiment. Features in the primordial power spectrum are generically predicted in a wide class of models of inflation and its alternatives, and are observationally one of the most overlooked channels for finding evidence for non-minimal inflationary models. Constraints from observations of the cosmic microwave background cover the widest range of feature frequencies, but the most sensitive constraints will come from future large-scale structure surveys that can measure the largest number of linear and quasi-linear modes. Comment: 5 pages + references, 1 figure; science white paper submitted to the Astro2020 decadal survey.
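
    For orientation, features of this kind are commonly parameterized as small oscillatory corrections to a smooth power-law spectrum P_0(k). The two templates below (linear and logarithmic oscillations) are a generic illustration, not necessarily the parameterization adopted in this white paper; the amplitude A, frequency omega, phase phi, and pivot scale k_* are free parameters:

P(k) = P_0(k) \left[ 1 + A_{\rm lin} \sin\left( \omega_{\rm lin}\, k + \phi_{\rm lin} \right) \right]

P(k) = P_0(k) \left[ 1 + A_{\rm log} \sin\left( \omega_{\rm log}\, \ln(k/k_*) + \phi_{\rm log} \right) \right]

    Roughly speaking, logarithmic oscillations arise in axion monodromy-type models, while sharp, localized features in the inflaton potential tend to produce oscillations roughly linear in k; either template is constrained over a wide frequency range by the CMB and, at higher k, by large-scale structure.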

    Nash equilibria in quantum games with generalized two-parameter strategies

    In the Eisert protocol for 2 × 2 quantum games [Phys. Rev. Lett. 83, 3077], a number of authors have investigated the features arising from making the strategic space a two-parameter subset of single qubit unitary operators. We argue that the new Nash equilibria and the classical-quantum transitions that occur are simply an artifact of the particular strategy space chosen. By choosing a different, but equally plausible, two-parameter strategic space we show that different Nash equilibria with different classical-quantum transitions can arise. We generalize the two-parameter strategies and also consider these strategies in a multiplayer setting. Comment: 19 pages, 2 eps figures.
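
    A minimal numerical sketch of the setup (assuming the standard Eisert-Wilkens-Lewenstein protocol with maximal entanglement and prisoner's-dilemma payoffs; the particular two-parameter family U(theta, phi) below is one common choice, not necessarily the alternative space analyzed in the paper):

import numpy as np

# Entangling gate J = (I + i D(x)D)/sqrt(2): maximal entanglement in the EWL protocol.
I2 = np.eye(2)
D = np.array([[0, 1], [-1, 0]])                     # the classical "defect" flip
J = (np.kron(I2, I2) + 1j * np.kron(D, D)) / np.sqrt(2)

def U(theta, phi):
    # One common two-parameter strategy family: theta in [0, pi], phi in [0, pi/2].
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

def payoffs(Ua, Ub):
    # Final state J^dagger (Ua (x) Ub) J |CC>, measured in the computational basis.
    psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0
    psi = J.conj().T @ np.kron(Ua, Ub) @ J @ psi0
    p = np.abs(psi) ** 2                            # P(CC), P(CD), P(DC), P(DD)
    return p @ np.array([3, 0, 5, 1]), p @ np.array([3, 5, 0, 1])

# Classical mutual defection U(pi, 0) versus the "quantum" strategy Q = U(0, pi/2):
print(payoffs(U(np.pi, 0), U(np.pi, 0)))            # mutual defection: (1, 1)
print(payoffs(U(0, np.pi / 2), U(0, np.pi / 2)))    # (Q, Q): the new equilibrium, (3, 3)

    Which profile ends up as the Nash equilibrium, and whether a classical-quantum transition appears, depends on exactly which two-parameter slice of SU(2) the players are allowed to use, which is the point the paper makes.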

    Fast and Accurate Coarsening Simulation with an Unconditionally Stable Time Step

    We present Cahn-Hilliard and Allen-Cahn numerical integration algorithms that are unconditionally stable and so provide significantly faster accuracy-controlled simulation. Our stability analysis is based on Eyre's theorem and unconditional von Neumann stability analysis, both of which we present. Numerical tests confirm the accuracy of the von Neumann approach, which is straightforward and should be widely applicable in phase-field modeling. We show that accuracy can be controlled with an unbounded time step Delta-t that grows with time t as Delta-t ~ t^alpha. We develop a classification scheme for the step exponent alpha and demonstrate that a class of simple linear algorithms gives alpha = 1/3. For this class the speed-up relative to a fixed time step grows with the linear size of the system as N/log N, and we estimate conservatively that an 8192^2 lattice can be integrated 300 times faster than with the Euler method. Comment: 14 pages, 6 figures.
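
    A minimal sketch of one scheme in this family: Eyre-style linearly stabilized splitting for the Cahn-Hilliard equation, solved pseudo-spectrally on a periodic grid. The parameter values and the fixed time step below are illustrative assumptions; this is not the accuracy-controlled Delta-t ~ t^alpha stepping developed in the paper:

import numpy as np

# Cahn-Hilliard: dphi/dt = Laplacian(mu), mu = phi^3 - phi - eps^2 Laplacian(phi).
# Eyre-style splitting: treat a*phi - eps^2*Lap(phi) implicitly and phi^3 - (1+a)*phi
# explicitly; with a large enough (a = 2 is a common choice) any Delta-t is stable.
N, L, eps, dt, a, steps = 128, 128.0, 1.0, 1.0, 2.0, 200
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2                        # the spectral Laplacian is -k2

rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal((N, N))   # small random initial condition (quench)

for _ in range(steps):
    mu_exp_hat = np.fft.fft2(phi**3 - (1.0 + a) * phi)   # explicit part of mu
    phi_hat = (np.fft.fft2(phi) - dt * k2 * mu_exp_hat) / (1.0 + dt * a * k2 + dt * eps**2 * k2**2)
    phi = np.real(np.fft.ifft2(phi_hat))

print("order parameter range after", steps, "steps:", phi.min(), phi.max())

    The implicit (contractive) terms end up in the denominator of the Fourier-space update, which is why the step size is not limited by the usual Delta-t ~ (grid spacing)^4 diffusive constraint of explicit Euler.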

    Deep Learning with Dynamically Weighted Loss Function for Sensor-Based Prognostics and Health Management

    Deep learning has been applied to prognostics and health management in the automotive and aerospace domains with promising results. The literature in this area reveals that most contributions focus on model architecture, while contributions that improve other aspects of deep learning, such as custom loss functions for prognostics and health management, are scarce. There is therefore an opportunity to improve the effectiveness of deep learning for a system's prognostics and diagnostics without modifying the model architecture. To address this gap, two different dynamically weighted loss functions are investigated for prognostics and diagnostics tasks: a newly proposed weighting mechanism and a focal loss function. A dynamically weighted loss function modifies the learning process by augmenting the loss with a weight corresponding to the learning error of each data instance. The objective is to force deep learning models to focus on instances with larger learning errors in order to improve their performance. The two loss functions are evaluated using four popular deep learning architectures, namely a deep feedforward neural network, a one-dimensional convolutional neural network, a bidirectional gated recurrent unit, and a bidirectional long short-term memory network, on the commercial modular aero-propulsion system simulation data from NASA and the air pressure system failure data for Scania trucks. Experimental results show that dynamically weighted loss functions yield significant improvements in remaining useful life prediction and fault detection rate over non-weighted loss functions.
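
    A minimal sketch of the general idea (the specific weighting rule below, weights that grow with each instance's current absolute error, is an illustrative assumption rather than the exact mechanism proposed in the paper):

import numpy as np

def dynamically_weighted_mse(y_true, y_pred, gamma=1.0, eps=1e-8):
    """MSE whose per-instance weights grow with that instance's current error,
    so training focuses on the hardest examples (illustrative weighting rule)."""
    err = np.abs(y_true - y_pred)
    weights = (err / (err.max() + eps)) ** gamma           # in [0, 1], largest error -> 1
    weights = weights * len(err) / (weights.sum() + eps)   # renormalize to mean ~1
    return np.mean(weights * (y_true - y_pred) ** 2)

# Toy usage: remaining-useful-life targets and predictions.
y_true = np.array([120.0, 80.0, 30.0, 5.0])
y_pred = np.array([118.0, 70.0, 45.0, 6.0])
print(dynamically_weighted_mse(y_true, y_pred))            # larger errors dominate the loss

    In a training framework such as PyTorch or TensorFlow the weights would normally be computed without gradient tracking, so only the squared-error term is differentiated and the weighting simply rescales each instance's contribution to the batch loss.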

    Energy-Conserving Lattice Boltzmann Thermal Model in Two Dimensions

    A discrete velocity model is presented for lattice Boltzmann thermal fluid dynamics. The model is implemented and tested in two dimensions with a finite difference scheme. Comparison with analytical solutions shows excellent agreement, even for large temperature differences. An alternative approximate approach for traditional lattice transport schemes is then presented.
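
    For context, a minimal sketch of the standard isothermal D2Q9 single-relaxation-time (BGK) update that thermal lattice Boltzmann models extend; this is the conventional baseline, not the energy-conserving discrete velocity set or the finite difference implementation presented here:

import numpy as np

# Standard isothermal D2Q9 lattice: weights, discrete velocities, BGK relaxation.
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
NX, NY, tau, steps = 64, 64, 0.8, 100

def equilibrium(rho, ux, uy):
    # Second-order equilibrium distribution (lattice units, c_s^2 = 1/3).
    eu = e[:, 0, None, None] * ux + e[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*(ux**2 + uy**2))

# Initial condition: a small density bump at rest.
X, Y = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
rho0 = 1.0 + 0.1 * np.exp(-((X - NX/2)**2 + (Y - NY/2)**2) / 50.0)
f = equilibrium(rho0, np.zeros((NX, NY)), np.zeros((NX, NY)))

for _ in range(steps):
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i in range(9):                                 # streaming, periodic boundaries
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)

print("mass before:", rho0.sum(), "mass after:", f.sum())   # mass is conserved exactly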

    Partisan Voting on the California Supreme Court

    When did ideology become the major fault line of the California Supreme Court? To answer this question, we use a two-parameter item response theory (IRT) model to identify voting patterns in non-unanimous decisions by California Supreme Court justices from 1910 to 2011. The model shows that voting on the court became polarized on recognizably partisan lines beginning in the mid-1900s. Justices usually did not vote in a pattern that matched their political reputations and party affiliations during the first half of the century. This began to change in the 1950s. After 1959, the dominant voting pattern has been partisan and closely aligned with each justice's political reputation. Our findings after 1959 largely confirm the conventional wisdom that voting on the modern court is on political lines. But our findings call into question the usual characterization of the Lucas court (1987–1996) as a moderately conservative court. Our model shows that the conservatives dominated the Lucas court to the same degree the liberals dominated the Traynor court (1964–1970). More broadly, this Article confirms that an important development occurred in American law at the turn of the half-century. A previous study used the same model to identify voting patterns on the New York Court of Appeals from 1900 to 1941 and to investigate whether those voting patterns were best explained by the justices' political reputations. That study found consistently patterned voting for most of the 40 years, but the dominant dimension of disagreement on the court for much of the period was not political in the usual sense of that term. Our finding that the dominant voting pattern on the California Supreme Court was non-political in the first half of the 1900s parallels the New York study's findings for the period before 1941. Carrying the voting pattern analysis forward in time, this Article finds that in the mid-1900s the dominant voting pattern became aligned with the justices' political reputations, due to a change in the voting pattern in criminal law and tort cases that dominated the court's docket. Together, these two studies provide empirical evidence that judicial decision-making changed in the United States in the mid-1900s, as judges divided into ideological camps on a broad swath of issues.
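
    As a sketch of the underlying statistical machinery (a generic two-parameter IRT specification with hypothetical names and a simulated court; this is not the Article's estimation procedure, which fits the model to the actual voting record):

import numpy as np

# Two-parameter IRT: P(justice i casts the "liberal" vote in case j) =
#     logistic( a_j * (theta_i - b_j) )
# theta_i : the justice's latent ideal point; a_j, b_j : case discrimination and location.
def vote_probability(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(1)
n_justices, n_cases = 7, 400
theta = rng.normal(0, 1, n_justices)          # ideal points
a = rng.lognormal(0, 0.5, n_cases)            # how sharply a case divides the court
b = rng.normal(0, 1, n_cases)                 # where the dividing line falls

# Simulate a matrix of non-unanimous votes; even raw liberal-vote shares should
# broadly recover the ordering of the simulated ideal points.
P = vote_probability(theta[:, None], a[None, :], b[None, :])
votes = rng.binomial(1, P)
liberal_vote_share = votes.mean(axis=1)
print(np.argsort(theta), np.argsort(liberal_vote_share))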

    Coalitions in the quantum Minority game: classical cheats and quantum bullies

    In a one-off Minority game, when a group of players agree to collaborate they gain an advantage over the remaining players. We consider the advantage obtained in a quantum Minority game by a coalition sharing an initially entangled state versus that obtained by a coalition that uses classical communication to arrive at an optimal group strategy. In a model of the quantum Minority game where the final measurement basis is randomized, quantum coalitions outperform classical ones for coalitions of up to four players, but an unrestricted amount of classical communication is better for larger coalition sizes. Comment: 12 pages, 1 figure.
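
    For intuition on the classical side of the comparison, a minimal sketch of a one-off four-player Minority game in which a two-player coalition uses classical communication to always split its choices, compared with four independent random players. The coordination rule shown is one plausible coalition strategy for illustration, not necessarily the optimal group strategy analyzed in the paper:

import numpy as np

rng = np.random.default_rng(2)
trials = 100_000

def payoffs(choices):
    """One-off Minority game: a player scores 1 only if strictly in the minority."""
    ones = choices.sum()
    if ones == 0 or ones == len(choices) or 2 * ones == len(choices):
        return np.zeros(len(choices))            # tie or unanimity: nobody wins
    minority = 1 if ones < len(choices) - ones else 0
    return (choices == minority).astype(float)

independent, coalition = np.zeros(4), np.zeros(4)
for _ in range(trials):
    independent += payoffs(rng.integers(0, 2, 4))
    # Players 0 and 1 form a coalition and always split their choices 0/1;
    # players 2 and 3 keep choosing independently at random.
    coalition += payoffs(np.concatenate(([0, 1], rng.integers(0, 2, 2))))

print("all independent:", independent / trials)   # each player wins ~1/8 of the time
print("with coalition :", coalition / trials)     # coalition members ~1/4, the others ~0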

    Structural basis of suppression of host translation termination by Moloney Murine Leukemia Virus

    Retroviral reverse transcriptase (RT) of Moloney murine leukemia virus (MoMLV) is expressed in the form of a large Gag–Pol precursor protein by suppression of translational termination, in which the maximal efficiency of stop codon read-through depends on the interaction between MoMLV RT and peptidyl release factor 1 (eRF1). Here, we report the crystal structure of MoMLV RT in complex with eRF1. The MoMLV RT interacts with the C-terminal domain of eRF1 via its RNase H domain to sterically occlude the binding of peptidyl release factor 3 (eRF3) to eRF1. Promotion of read-through by MoMLV RNase H prevents nonsense-mediated mRNA decay (NMD) of mRNAs. Comparison of our structure with that of HIV RT explains why HIV RT cannot interact with eRF1. Our results provide a mechanistic view of how MoMLV manipulates the host translation termination machinery for the synthesis of its own proteins.