
    Antitrust and payment technologies

    Antitrust law; Payment systems

    Why Tie A Product Consumers Do Not Use?

    This paper provides a new explanation for tying that is not based on any of the standard explanations -- efficiency, price discrimination, and exclusion. Our analysis shows how a monopolist sometimes has an incentive to tie a complementary good to its monopolized good in order to transfer profits from a rival producer of the complementary product to the monopolist. This occurs even when consumers -- who have the option to use the monopolist's complementary good -- do not use it. The tie is profitable because it alters the subsequent pricing game between the monopolist and the rival in a manner favorable to the monopolist. We show that this form of tying is socially inefficient, but interestingly can arise only when the tie is socially efficient in the absence of the rival producer. We relate this inefficient form of tying to several actual examples and explore its antitrust implications.

    Extending the halo mass resolution of N-body simulations

    We present a scheme to extend the halo mass resolution of N-body simulations of the hierarchical clustering of dark matter. The method uses the density field of the simulation to predict the number of sub-resolution dark matter haloes expected in different regions. The technique requires as input the abundance of haloes of a given mass and their average clustering, as expressed through the linear and higher order bias factors. These quantities can be computed analytically or, more accurately, derived from a higher resolution simulation as done here. Our method can recover the abundance and clustering in real- and redshift-space of haloes with mass below ~7.5 × 10^13 h^-1 M_⊙ at z = 0 to better than 10%. We demonstrate the technique by applying it to an ensemble of 50 low resolution, large-volume N-body simulations to compute the correlation function and covariance matrix of luminous red galaxies (LRGs). The limited resolution of the original simulations results in them resolving just two thirds of the LRG population. We extend the resolution of the simulations by a factor of 30 in halo mass in order to recover all LRGs. With existing simulations it is possible to generate a halo catalogue equivalent to that which would be obtained from an N-body simulation using more than 20 trillion particles; a direct simulation of this size is likely to remain unachievable for many years. Using our method it is now feasible to build the large numbers of high-resolution large volume mock galaxy catalogues required to compute the covariance matrices necessary to analyse upcoming galaxy surveys designed to probe dark energy. (11 pages, 7 figures)
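    The core idea — predicting sub-resolution halo counts from the local density field via bias factors — can be sketched as follows. This is a minimal, hypothetical illustration (function name and the truncation of the bias expansion at second order are assumptions, not the paper's actual pipeline): the expected halo count per cell is modulated by the overdensity through the linear (b1) and second-order (b2) bias factors, then realised as a Poisson draw.

    ```python
    import numpy as np

    def sample_subresolution_halos(delta, cell_volume, n_bar, b1, b2, rng=None):
        """Draw sub-resolution halo counts per density cell (illustrative sketch).

        delta       : array of matter overdensities, one per cell
        cell_volume : comoving volume of each cell
        n_bar       : mean number density of the target halo population
        b1, b2      : linear and second-order halo bias factors
        """
        rng = np.random.default_rng() if rng is None else rng
        sigma2 = np.var(delta)
        # Local bias expansion for the mean halo count in each cell;
        # subtracting sigma2 keeps the population mean at n_bar * cell_volume.
        mean_n = n_bar * cell_volume * (1.0 + b1 * delta
                                        + 0.5 * b2 * (delta**2 - sigma2))
        mean_n = np.clip(mean_n, 0.0, None)  # negative means are unphysical
        return rng.poisson(mean_n)
    ```

    In practice the bias factors would be measured from a higher-resolution simulation, as the paper does, rather than assumed.
    
    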

    Boundary-Layer Transition on Hollow Cylinders in Supersonic Free Flight as Affected by Mach Number and a Screwthread Type of Surface Roughness

    The effects of Mach number and surface-roughness variation on boundary-layer transition were studied using fin-stabilized hollow-tube models in free flight. The tests were conducted over the Mach number range from 2.8 to 7 at a nominally constant unit Reynolds number of 3 million per inch, and with heat transfer to the model surface. A screwthread type of distributed two-dimensional roughness was used. Nominal thread heights varied from 100 microinches to 2100 microinches. Transition Reynolds number was found to increase with increasing Mach number at a rate depending simultaneously on Mach number and roughness height. The laminar boundary layer was found to tolerate increasing amounts of roughness as Mach number increased. For a given Mach number an optimum roughness height was found which gave a maximum laminar run greater than was obtained with a smooth surface.

    On the measurement of a weak classical force coupled to a quantum-mechanical oscillator. I. Issues of principle

    The monitoring of a quantum-mechanical harmonic oscillator on which a classical force acts is important in a variety of high-precision experiments, such as the attempt to detect gravitational radiation. This paper reviews the standard techniques for monitoring the oscillator, and introduces a new technique which, in principle, can determine the details of the force with arbitrary accuracy, despite the quantum properties of the oscillator. The standard method for monitoring the oscillator is the "amplitude-and-phase" method (position or momentum transducer with output fed through a narrow-band amplifier). The accuracy obtainable by this method is limited by the uncertainty principle ("standard quantum limit"). To do better requires a measurement of the type which Braginsky has called "quantum nondemolition." A well known quantum nondemolition technique is "quantum counting," which can detect an arbitrarily weak classical force, but which cannot provide good accuracy in determining its precise time dependence. This paper considers extensively a new type of quantum nondemolition measurement—a "back-action-evading" measurement of the real part X_1 (or the imaginary part X_2) of the oscillator's complex amplitude. In principle X_1 can be measured "arbitrarily quickly and arbitrarily accurately," and a sequence of such measurements can lead to an arbitrarily accurate monitoring of the classical force. The authors describe explicit Gedanken experiments which demonstrate that X_1 can be measured arbitrarily quickly and arbitrarily accurately. In these experiments the measuring apparatus must be coupled to both the position (position transducer) and the momentum (momentum transducer) of the oscillator, and both couplings must be modulated sinusoidally. For a given measurement time the strength of the coupling determines the accuracy of the measurement; for arbitrarily strong coupling the measurement can be arbitrarily accurate. 
The "momentum transducer" is constructed by combining a "velocity transducer" with a "negative capacitor" or "negative spring." The modulated couplings are provided by an external, classical generator, which can be realized as a harmonic oscillator excited in an arbitrarily energetic, coherent state. One can avoid the use of two transducers by making "stroboscopic measurements" of X_1, in which one measures position (or momentum) at half-cycle intervals. Alternatively, one can make "continuous single-transducer" measurements of X_1 by modulating appropriately the output of a single transducer (position or momentum), and then filtering the output to pick out the information about X_1 and reject information about X_2. Continuous single-transducer measurements are useful in the case of weak coupling. In this case long measurement times are required to achieve good accuracy, and continuous single-transducer measurements are almost as good as perfectly coupled two-transducer measurements. Finally, the authors develop a theory of quantum nondemolition measurement for arbitrary systems. This paper (Paper I) concentrates on issues of principle; a sequel (Paper II) will consider issues of practice.
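    For context, the quadratures X_1 and X_2 referred to above are the standard complex-amplitude components of an oscillator of mass m and angular frequency ω; in the conventional form,

    ```latex
    X_1 = \hat{x}\cos\omega t - \frac{\hat{p}}{m\omega}\sin\omega t, \qquad
    X_2 = \hat{x}\sin\omega t + \frac{\hat{p}}{m\omega}\cos\omega t,
    \qquad [X_1, X_2] = \frac{i\hbar}{m\omega}.
    ```

    For a free oscillator both quadratures are constants of the motion, so the uncertainty relation ΔX_1 ΔX_2 ≥ ħ/(2mω) constrains the pair but places no limit on X_1 alone: measurement back-action is shunted entirely into X_2, which is why X_1 can, in principle, be monitored "arbitrarily quickly and arbitrarily accurately."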

    WFIRST Coronagraph Technology Requirements: Status Update and Systems Engineering Approach

    The coronagraphic instrument (CGI) on the Wide-Field Infrared Survey Telescope (WFIRST) will demonstrate technologies and methods for high-contrast direct imaging and spectroscopy of exoplanet systems in reflected light, including polarimetry of circumstellar disks. The WFIRST management and CGI engineering and science investigation teams have developed requirements for the instrument, motivated by the objectives and technology development needs of potential future flagship exoplanet characterization missions such as the NASA Habitable Exoplanet Imaging Mission (HabEx) and the Large UV/Optical/IR Surveyor (LUVOIR). The requirements have been refined to support recommendations from the WFIRST Independent External Technical/Management/Cost Review (WIETR) that the WFIRST CGI be classified as a technology demonstration instrument instead of a science instrument. This paper provides a description of how the CGI requirements flow from the top of the overall WFIRST mission structure through the Level 2 requirements, where the focus here is on capturing the detailed context and rationales for the CGI Level 2 requirements. The WFIRST requirements flow starts with the top Program Level Requirements Appendix (PLRA), which contains both high-level mission objectives as well as the CGI-specific baseline technical and data requirements (BTR and BDR, respectively)... We also present the process and collaborative tools used in the L2 requirements development and management, including the collection and organization of science inputs, an open-source approach to managing the requirements database, and automating documentation. The tools created for the CGI L2 requirements have the potential to improve the design and planning of other projects, streamlining requirement management and maintenance. [Abstract abbreviated] (16 pages, 4 figures)