
    Problems in rendezvous search.

    Suppose n players are placed randomly on the real line at consecutive integers, and faced in random directions. Each player has maximum speed one and cannot see the others. The least expected time required for m (≤ n) of them to meet together at a single point, if all players have to use the same strategy, is the symmetric rendezvous value R^s_{m,n}. If the players can use different strategies, the least expected meeting time is the asymmetric rendezvous value R^a_{m,n}. We show that R^a_{3,2} is 47/48 and that R^s_{n,n} is asymptotic to n/2. If the minimax rendezvous time M_n is the minimum time required to ensure that all players can meet together at a single point regardless of their initial placement, we prove that M_2 is 3, M_3 is 4, and M_n is asymptotic to n/2. If players have to stick together upon meeting, we prove that three players require 5 time units to ensure a meeting. We also consider a problem proposed by S. Alpern (in his joint paper with A. Beck, Rendezvous Search on the Line with Bounded Resources, LSE Math Preprint Series, 92 (1995)) of how two players can optimally rendezvous while at the same time evading an enemy searcher. We model this rendezvous-evasion problem as a two-person, zero-sum game between the rendezvous team R (with agents R1, R2) and the searcher S, and consider a version which is discrete in time and space. R1, R2, and S start at different locations among n identical locations, and no two of them share a common labelling of the locations. Each player can move between any two locations in one time step (this includes the possibility of staying still) until at least two of them are at the same location together, at which time the game ends. If S is at this location, S (the maximizer) wins and the payoff is 1; otherwise the team R (the minimizer) wins and the payoff is 0. The value of the game v_n is the probability that S wins under optimal play. We assume that R1 and R2 can jointly randomize their strategies and prove that v_3 is 47/76 ≈ 0.61842 and v_4 is at least 31/54 ≈ 0.57407. If all the players share a common notion of a directed cycle containing all n locations (while still being able to move between any two locations), the value of the game d_n is ((1 − 2/n)^{n−1} + 1)/2. In particular, d_3 is less than v_3 and d_4 is less than v_4. We also compare some of these results with those obtained when the rendezvous-evasion game is modelled as a multi-stage game with observed actions (W. S. Lim, Rendezvous-Evasion As a Multi-Stage Game With Observed Actions, LSE Math-CDAM Research Report Series, 96-05 (1996)). In all instances considered, we find that obligatory announcement of actions at the end of each step either does not affect the value of the game or helps the rendezvous team secure a lower value.
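    As a quick sanity check on the closed form d_n = ((1 − 2/n)^{n−1} + 1)/2, a short script can evaluate d_3 and d_4 exactly and confirm the stated comparisons against v_3 = 47/76 and the lower bound 31/54 on v_4 quoted in the abstract:

    ```python
    from fractions import Fraction

    def d(n):
        """Value of the rendezvous-evasion game when all players share a
        common directed cycle over the n locations:
        d_n = ((1 - 2/n)^(n-1) + 1) / 2."""
        return (Fraction(n - 2, n) ** (n - 1) + 1) / 2

    v3 = Fraction(47, 76)        # value of the 3-location game
    v4_lower = Fraction(31, 54)  # lower bound on the 4-location value

    print(d(3), float(d(3)))  # 5/9 ~ 0.5556
    print(d(4), float(d(4)))  # 9/16 = 0.5625
    assert d(3) < v3          # d_3 < v_3, as stated
    assert d(4) < v4_lower    # d_4 < 31/54 <= v_4, as stated
    ```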

    The Impact of Risk Preference on Auction Mechanism: An Experimental Approach

    Auctions are an important exchange mechanism from both a practical and a theoretical perspective. The advent of the Internet has opened up new research arenas for auction theory. In this paper, we investigate the bidding behavior of subjects under three mechanisms, namely the first-price, second-price, and third-price sealed-bid auctions, taking into consideration the risk profiles of the subjects. In particular, we address the question of whether third-price auctions generate the highest expected revenue for the seller when bidders are risk seeking (Monderer and Tennenholtz (2000)).
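    The risk-neutral benchmark against which such risk effects are measured is revenue equivalence: with i.i.d. private values, the first-, second-, and third-price formats all yield the same expected revenue. A minimal Monte Carlo sketch of that benchmark (uniform values and five bidders are illustrative assumptions here, not the paper's experimental design):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_auctions = 5, 200_000
    # Private values: i.i.d. Uniform(0, 1), one row per simulated auction.
    v = np.sort(rng.random((n_auctions, n)), axis=1)[:, ::-1]  # descending

    # Symmetric risk-neutral equilibrium bids for uniform values:
    #   first-price:  b(v) = v (n-1)/n     -> winner pays own bid
    #   second-price: b(v) = v             -> winner pays 2nd-highest value
    #   third-price:  b(v) = v (n-1)/(n-2) -> winner pays 3rd-highest bid
    rev_first = (v[:, 0] * (n - 1) / n).mean()
    rev_second = v[:, 1].mean()
    rev_third = (v[:, 2] * (n - 1) / (n - 2)).mean()

    # All three estimates should approach (n-1)/(n+1) = 2/3.
    print(rev_first, rev_second, rev_third)
    ```

    Risk attitudes break this equivalence, which is exactly what makes the third-price format an interesting test case for risk-seeking bidders.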

    Market structure and the value of overselling under stochastic demands

    In the operations management literature, traditional revenue management has focused on pricing and capacity allocation strategies in a two-period model with stochastic demand. Inspired by the travel and lodging industries, we examine a two-period model in which each seller may also adopt an overselling strategy toward customers whose valuations are differentiated by timing of arrival. Overselling is widely seen as a popular hedge against consumers skipping their reservations, and we extend the stylized approaches of Biyalogorsky, Carmon, Fruchter, and Gerstner (1999) and Lim (2009) to understand its value under various market structures. We find that, contrary to the existing literature, the impact of period-two pricing competition from overselling spills over into period one, so that overselling may not always be a (weakly) dominant strategy once unlimited early demand ceases to hold in a duopoly regime. We also provide numerical studies on the existence of multiple equilibria at the capacity allocation level, which lead to different selling strategies in equilibrium despite identical market conditions and firm characteristics.

    Entry of copycats of luxury brands

    We develop a game-theoretic model to examine the entry of copycats and its implications by incorporating two salient features: two product attributes, namely physical resemblance and product quality, and two consumer utilities, namely consumption utility and status utility. Our equilibrium analysis suggests that copycats with high physical resemblance but low product quality are more likely to enter the market successfully by defying deterrence by the incumbent. Furthermore, we show that higher quality can prevent the copycat from successfully entering the market. Finally, we show that the entry of copycats does not always improve consumer surplus and social welfare. In particular, when the quality of the copycat is sufficiently low, the loss in status utility to consumers of the incumbent product overshadows the small gain in consumption utility to buyers of the copycat, leading to an overall decrease in consumer surplus and social welfare.
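    The welfare trade-off in the last claim is a simple aggregation: entry helps copycat buyers a little while diluting status utility for the (typically larger) incumbent customer base. A toy arithmetic illustration, with all numbers purely hypothetical and not drawn from the paper's model:

    ```python
    # Hypothetical magnitudes: a low-quality, high-resemblance copycat
    # dilutes the status utility of every incumbent buyer while adding
    # only a small consumption gain for each of its own buyers.
    n_incumbent, n_copycat = 100, 30
    status_loss_each = 0.5       # status utility lost per incumbent buyer
    consumption_gain_each = 1.0  # consumption utility gained per copycat buyer

    delta_surplus = (n_copycat * consumption_gain_each
                     - n_incumbent * status_loss_each)
    print(delta_surplus)  # -20.0 -> entry lowers consumer surplus here
    ```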

    Polymeric Nanoparticles Amenable to Simultaneous Installation of Exterior Targeting and Interior Therapeutic Proteins

    Effective delivery of therapeutic proteins is a formidable challenge. Herein, using a unique polymer family with a wide-ranging set of cationic and hydrophobic features, we developed a novel nanoparticle (NP) platform capable of installing protein ligands on the particle surface and simultaneously carrying therapeutic proteins inside by a self-assembly procedure. The loaded therapeutic proteins (e.g., insulin) within the NPs exhibited sustained and tunable release, while the surface-coated protein ligands (e.g., transferrin) were demonstrated to alter the NP cellular behaviors. In vivo results revealed that the transferrin-coated NPs can effectively be transported across the intestinal epithelium for oral insulin delivery, leading to a notable hypoglycemic response. Funding: National Institutes of Health (U.S.) (Grants EB015419, R00CA160350, and CA151884); Prostate Cancer Foundation (Challenge Award); National Research Foundation of Korea (Grant K1A1A2048701); David H. Koch Institute for Integrative Cancer Research at MIT, Prostate Cancer Foundation Program in Cancer Nanotherapeutics; National Natural Science Foundation (China) (Grant 81173010).

    Optimizing the noise versus bias trade-off for Illumina whole genome expression BeadChips

    Five strategies for pre-processing intensities from Illumina expression BeadChips are assessed from the point of view of precision and bias. The strategies include a popular variance stabilizing transformation and model-based background corrections that either use or ignore the control probes. Four calibration data sets are used to evaluate precision, bias and false discovery rate (FDR). The original algorithms are shown to have operating characteristics that are not easily comparable. Some tend to minimize noise while others minimize bias. Each original algorithm is shown to have an innate intensity offset, by which unlogged intensities are bounded away from zero, and the size of this offset determines its position on the noise–bias spectrum. By adding extra offsets, a continuum of related algorithms with different noise–bias trade-offs is generated, allowing direct comparison of the performance of the strategies on equivalent terms. Adding a positive offset is shown to decrease the FDR of each original algorithm. The potential of each strategy to generate an algorithm with an optimal noise–bias trade-off is explored by finding the offset that minimizes its FDR. The use of control probes as part of the background correction and normalization strategy is shown to achieve the lowest FDR for a given bias.
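    The role of the intensity offset can be illustrated numerically: logging x + c instead of x damps the variance of log-intensities for dim probes (less noise) while attenuating observed log fold-changes (more bias). A toy simulation of one dim probe, not tied to any of the five assessed algorithms:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Toy dim probe: true unlogged signal of 50 units plus additive
    # background noise (sd = 20); all numbers are illustrative.
    signal, noise_sd, true_fc = 50.0, 20.0, 2.0
    x = np.clip(signal + rng.normal(0.0, noise_sd, 100_000), 0.1, None)

    noises, fold_changes = [], []
    for offset in (0.0, 16.0, 64.0):
        noises.append(np.log2(x + offset).std())           # precision side
        fold_changes.append(np.log2((true_fc * signal + offset)
                                    / (signal + offset)))  # bias side
        print(f"offset={offset:5.1f}  log2-sd={noises[-1]:.3f}  "
              f"observed log2-FC={fold_changes[-1]:.3f} (true = 1.000)")
    ```

    Larger offsets shrink both columns at once, which is exactly the noise–bias trade-off the offset continuum in the paper sweeps through.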

    ELUCID IV: Galaxy Quenching and its Relation to Halo Mass, Environment, and Assembly Bias

    We examine the quenched fraction of central and satellite galaxies as a function of galaxy stellar mass, halo mass, and the matter density of their large-scale environment. Matter densities are inferred from our ELUCID simulation, a constrained simulation of the local Universe sampled by SDSS, while halo masses and central/satellite classification are taken from the galaxy group catalog of Yang et al. The quenched fraction for the total population increases systematically with all three quantities. We find that the `environmental quenching efficiency', which quantifies the quenched fraction as a function of halo mass, is independent of stellar mass, and that this independence is the origin of the stellar-mass independence of the density-based quenching efficiency found in previous studies. Considering centrals and satellites separately, we find that the two populations follow similar correlations of quenching efficiency with halo mass and stellar mass, suggesting that they have experienced similar quenching processes in their host halo. We demonstrate that satellite quenching alone cannot account for the environmental quenching efficiency of the total galaxy population, and that the difference between the two populations found previously mainly arises from the fact that centrals and satellites of the same stellar mass reside, on average, in halos of different mass. After removing these halo-mass and stellar-mass effects, there remains a weak, but significant, residual dependence on environmental density, which is eliminated when halo assembly bias is taken into account. Our results therefore indicate that halo mass is the prime environmental parameter that regulates the quenching of both centrals and satellites. Comment: 21 pages, 16 figures, submitted to ApJ
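    A quenching efficiency of this kind is commonly defined as the excess quenched fraction relative to a reference bin (e.g. the lowest-halo-mass or lowest-density bin), normalized by the unquenched fraction of that reference. A one-line sketch, assuming that standard definition rather than the paper's exact formulation:

    ```python
    def quenching_efficiency(f_q, f_q_ref):
        """Fraction of the not-yet-quenched reference population that is
        quenched in the environment of interest:
        (f_q - f_q_ref) / (1 - f_q_ref)."""
        return (f_q - f_q_ref) / (1.0 - f_q_ref)

    # e.g. quenched fraction 0.6 in massive halos vs 0.2 in the reference bin
    print(quenching_efficiency(0.6, 0.2))  # 0.5
    ```

    The paper's finding is that this quantity, computed against halo mass, is the same at every stellar mass.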

    ELUCID V. Lighting dark matter halos with galaxies

    In a recent study, using the distribution of galaxies in the north galactic pole region of SDSS DR7, enclosed in a 500 Mpc/h box, we carried out our ELUCID simulation (Wang et al. 2016, ELUCID III). Here we light the dark matter halos and subhalos in the reconstructed region of the simulation with galaxies from the SDSS observations using a novel neighborhood abundance matching method. Before making use of the galaxy-subhalo connections thus established in the ELUCID simulation to evaluate galaxy formation models, we set out to explore the reliability of such a link. For this purpose, we focus on the following aspects of galaxies: (1) the central-subhalo luminosity and mass relations; (2) the satellite fraction of galaxies; (3) the conditional luminosity function (CLF) and conditional stellar mass function (CSMF) of galaxies; and (4) the cross-correlation functions between galaxies and the dark matter particles, most of which are measured separately for the all, red, and blue galaxy populations. We find that our neighborhood abundance matching method accurately reproduces the central-subhalo relations, the satellite fraction, the CLFs and CSMFs, and the biases of galaxies. These features ensure that the galaxy-subhalo connections thus established will be very useful in constraining galaxy formation processes. We also provide suggestions on three levels of using the galaxy-subhalo pairs for galaxy formation constraints. The galaxy-subhalo links and the subhalo merger trees in the SDSS DR7 region extracted from our ELUCID simulation are available upon request. Comment: 18 pages, 13 figures, ApJ accepted
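    For readers unfamiliar with the baseline technique, classical abundance matching rank-orders galaxies by luminosity and subhalos by mass and pairs them monotonically (the paper's neighborhood variant refines this; see the full text). A sketch with random placeholder masses and luminosities:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    subhalo_mass = rng.lognormal(12.0, 0.5, n)  # placeholder subhalo masses
    galaxy_lum = rng.lognormal(10.0, 0.4, n)    # placeholder luminosities

    # Rank-order matching: brightest galaxy -> most massive subhalo, etc.
    halo_rank = np.argsort(np.argsort(-subhalo_mass))  # 0 = most massive
    lum_desc = np.sort(galaxy_lum)[::-1]               # descending luminosities
    assigned_lum = lum_desc[halo_rank]                 # luminosity per subhalo

    # Monotonicity check: more massive subhalos host brighter galaxies.
    order = np.argsort(subhalo_mass)
    assert np.all(np.diff(assigned_lum[order]) >= 0)
    ```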

    Reconstruction of extremely dense breast composition utilizing inverse scattering technique integrated with frequency-hopping approach

    The Forward-Backward Time-Stepping (FBTS) inverse scattering technique is utilized for breast composition reconstruction of an extremely dense breast model at different center frequencies. A numerical extremely dense breast phantom is used and resized to suit the Finite-Difference Time-Domain (FDTD) lattice environment, utilizing the two-dimensional (2-D) FBTS technique. The average value of the fibroglandular region for the reconstruction with the frequency-hopping approach applied is much closer to the average value of the actual image than for the reconstruction without it. Hence, the composition of the extremely dense breast model can be reconstructed when the frequency-hopping approach is applied, and the details of the reconstruction are also enhanced.
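    The frequency-hopping idea itself is simple to state: run the inverse solver at a low center frequency first, then reuse each reconstruction as the initial guess at the next, higher frequency, so coarse structure is recovered before fine detail. A structural sketch (`solve_at` is a hypothetical stand-in for an actual FBTS solver, not an interface from the paper):

    ```python
    def frequency_hopping(solve_at, frequencies, initial_guess):
        """Run an inverse solver at increasing center frequencies,
        warm-starting each stage from the previous reconstruction."""
        estimate = initial_guess
        for f in sorted(frequencies):  # hop from low to high frequency
            estimate = solve_at(center_frequency=f, initial_guess=estimate)
        return estimate

    # Toy stand-in solver that just records the hop order:
    def toy_solver(center_frequency, initial_guess):
        return initial_guess + [center_frequency]

    print(frequency_hopping(toy_solver, [3e9, 1e9, 2e9], []))
    # [1000000000.0, 2000000000.0, 3000000000.0]
    ```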