One-Way Compatibility, Two-Way Compatibility and Entry in Network Industries
We study the strategic choice of compatibility between two initially incompatible software packages in a two-stage game by an incumbent and an entrant firm. Consumers enjoy network externality in consumption and maximise expected surplus over the two periods. Compatibility may be achieved by means of a converter. We derive a number of results under different assumptions about the nature of the converter (one-way vs two-way) and the existence of property rights. In the case of a two-way converter, which can only be supplied by the incumbent, incompatibility will result in equilibrium and, depending on the strength of network externalities, the incumbent may deter entry. When both firms can build a one-way converter and there are no property rights on the necessary technical specifications, the only fulfilled expectations subgame perfect equilibrium involves full compatibility. Finally, when each firm has property rights on its technical specifications, full incompatibility and preemption are again observed at the equilibrium. Entry deterrence will then occur for sufficiently strong network effects. The analysis generalises to any market where network externalities are present.
One-Way Compatibility, Two-Way Compatibility and Entry in Network Industries
We study the strategic choice of compatibility between two initially incompatible network goods in a two-stage game played by an incumbent and an entrant firm. Compatibility may be achieved by means of a converter. We derive a number of results under different assumptions about the nature of the converter (one-way vs two-way) and the existence of property rights. In the case of a two-way converter, which can only be supplied by the incumbent, incompatibility will result in equilibrium. When both firms can build a one-way converter and there are no property rights on the necessary technical specifications, the unique equilibrium involves full compatibility. Finally, when each firm has property rights on its technical specifications, full incompatibility and preemption are again observed at the equilibrium. With incompatibility, entry deterrence occurs for sufficiently strong network effects. The welfare analysis shows that the equilibrium compatibility regime is socially inefficient for most levels of the network effects.
Keywords: network externalities, one-way compatibility, two-way compatibility, entry
Plastic Clashes: Competition among Closed and Open Systems in the Credit Card Industry.
This paper analyses market competition between two different types of credit card platforms: not-for-profit associations and proprietary systems. The main focus is on the role of the interchange fee set by not-for-profit platforms. We show that when the interchange fee is set so as to maximise the sum of issuers' and acquirers' profits, the equilibrium values of platforms' profits, of the sum of the fees charged by each platform and of their market shares are independent of the competitive conditions within the not-for-profit platform and are affected by the strength of inter-platform competition. We also show that the imposition of a ban on the setting of the interchange fee has ambiguous effects on the profit of the proprietary system.
Keywords: two-sided markets, network externalities, credit cards, interchange fee
Biomarkers in emergency medicine
Researchers navigate an ocean of biomarkers in search of suitable targets and their optimal use. Emergency medicine forms the front line for maximizing the utility of clinically validated biomarkers and is a cutting-edge field for testing the applicability of promising biomarkers emerging from translational research. Biomarkers play an increasingly significant role in clinical decision making for the identification, risk stratification, monitoring, and prognostication of patients in critical- and acute-care settings. Basic research exploring novel biomarkers in relation to pathogenesis is no doubt as important as its clinical counterpart. This special issue includes five selected research papers covering a variety of biomarker- and disease-related topics.
Circulating Biologically Active Adrenomedullin Predicts Organ Failure and Mortality in Sepsis
BACKGROUND: Sepsis is a life-threatening organ dysfunction caused by a dysregulated host response to infection. Biologically active adrenomedullin (bio-ADM) is an emerging biomarker for sepsis. We explored whether bio-ADM concentration could predict severity, organ failure, and 30-day mortality in septic patients. METHODS: In 215 septic patients (109 patients with sepsis; 106 patients with septic shock), bio-ADM concentration was measured at diagnosis of sepsis using sphingotest bio-ADM (Sphingotec GmbH, Hennigsdorf, Germany) and analyzed in terms of sepsis severity, vasopressor use, and 30-day mortality. The number of organ failures, Sequential (Sepsis-related) Organ Failure Assessment (SOFA) score, and 30-day mortality were compared across bio-ADM quartiles. RESULTS: Bio-ADM concentration was significantly higher in patients with septic shock, in those requiring vasopressors, and in non-survivors than in patients with sepsis alone, those not requiring vasopressors, and survivors, respectively (all P<0.0001). Bio-ADM quartiles were associated with the number of organ failures (P<0.0001), as well as with the SOFA cardiovascular, renal, coagulation, and liver subscores (all P<0.05). The 30-day mortality rate increased stepwise across bio-ADM quartiles (all P<0.0001). Bio-ADM concentration and SOFA score predicted the 30-day mortality equally well (area under the curve: 0.827 vs 0.830). CONCLUSIONS: Bio-ADM could serve as a useful and objective biomarker to predict severity, organ failure, and 30-day mortality in septic patients.
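As a concrete illustration of the kind of analysis summarized above, the following minimal Python sketch (not the study's actual code) shows how 30-day mortality by bio-ADM quartile and the AUC comparison of bio-ADM against the SOFA score might be computed; the column names and the synthetic data are assumptions for illustration only.

```python
# Minimal sketch with synthetic data; column names and values are hypothetical,
# not taken from the study.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 215
df = pd.DataFrame({
    "bio_adm": rng.lognormal(mean=3.0, sigma=1.0, size=n),  # pg/mL, synthetic
    "sofa": rng.integers(2, 18, size=n),                    # synthetic SOFA scores
})
# Synthetic 30-day mortality, loosely increasing with both markers.
risk = 0.35 * df["bio_adm"].rank(pct=True) + 0.35 * df["sofa"] / 18
df["died_30d"] = (rng.random(n) < risk).astype(int)

# Mortality rate per bio-ADM quartile (the study reports a stepwise increase).
df["adm_quartile"] = pd.qcut(df["bio_adm"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("adm_quartile", observed=True)["died_30d"].mean())

# Discrimination for 30-day mortality: bio-ADM vs SOFA (AUC comparison).
print("AUC bio-ADM:", roc_auc_score(df["died_30d"], df["bio_adm"]))
print("AUC SOFA:  ", roc_auc_score(df["died_30d"], df["sofa"]))
```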
Experimental method for measuring classical concurrence of generic beam shapes
Classical entanglement is a powerful tool which provides a neat numerical
estimate for the study of classical correlations. Its experimental
investigation, however, has been limited to special cases. Here, we demonstrate
that the experimental quantification of the level of classical entanglement can
be carried out in more general instances. Our approach enables the extension to
arbitrarily shaped transverse modes and hence delivering a suitable
quantification tool to describe concisely the modal structure
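To make the quantity being measured concrete, here is a minimal sketch of how the concurrence of a two-mode classical beam can be computed from its 2x2 coefficient matrix, using the standard pure-state formula C = 2|det(c)|; this is a generic textbook calculation, not the paper's experimental protocol, and the example coefficient matrices are arbitrary.

```python
# Minimal sketch: concurrence (degree of non-separability) of a classical
# structured beam written as a superposition of two spatial modes and two
# polarizations. The 2x2 coefficient matrix and the pure-state formula
# C = 2*|det(c)| are textbook ingredients, not the paper's measurement scheme.
import numpy as np

def concurrence(c) -> float:
    """Concurrence of a pure bipartite state with 2x2 coefficient matrix c."""
    c = np.asarray(c, dtype=complex)
    c = c / np.linalg.norm(c)          # normalize total intensity to 1
    return float(2.0 * abs(np.linalg.det(c)))

# Fully separable beam (single spatial mode, single polarization): concurrence 0.
print(concurrence([[1, 0], [0, 0]]))       # -> 0.0

# Maximally non-separable ("vector") beam: concurrence 1.
print(concurrence([[1, 0], [0, 1]]))       # -> 1.0

# Partially non-separable beam: intermediate value.
print(concurrence([[0.9, 0], [0, 0.2]]))
```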
Evaluation of the Bond Stress Transfer Mechanism in CFSTs
This paper studies the non-linear distribution of bond–slip behavior at the steel–concrete interface of a Concrete Filled Steel Tube (CFST), specifically in the regions of geometric discontinuity that occur in composite beams at CFST column–frame connection points. The study was conducted through an analytical model representing the bond stress transfer mechanism within these regions, derived from elasticity theory and a non-linear bond–slip relationship between the steel jacket and the confined concrete. The proposed model yields not only a closed-form analytical expression for the transfer length involved in the bond stress transfer mechanism of CFSTs, but also expressions for the stresses and strains in the concrete and the steel jacket. In addition, the procedure gives the bond stress and slip distributions over this length for rectangular and circular concrete filled steel tubes, as well as an analytical expression for the ultimate load in CFSTs. The ultimate load predictions were compared with experimental results from 97 tests on circular concrete filled tubes (CCFTs) and 35 tests on rectangular concrete filled tubes (RCFTs). The model's predictions were found to be the most accurate and uniform compared with those obtained from models proposed by other authors and by Eurocode. In terms of the experimental-to-analytical load ratio, the AVG and COV values obtained with the proposed model are 0.86 and 0.42 for the CCFT analyses and 1.06 and 0.57 for the RCFT analyses, respectively.
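For reference, the accuracy metrics quoted above (AVG and COV of the experimental-to-analytical ultimate load ratio) amount to the sample mean and the coefficient of variation of the ratios; the short sketch below shows the calculation on made-up placeholder ratios, not the 97 CCFT / 35 RCFT test results.

```python
# Minimal sketch of the accuracy metrics quoted above (AVG and COV of the
# experimental-to-analytical ultimate load ratio); the ratios below are
# made-up placeholders, not the actual CCFT/RCFT test results.
import numpy as np

def avg_and_cov(ratios):
    """Mean and coefficient of variation of N_exp / N_pred ratios."""
    r = np.asarray(ratios, dtype=float)
    mean = r.mean()
    cov = r.std(ddof=1) / mean        # sample std. dev. relative to the mean
    return mean, cov

ccft_ratios = [0.92, 1.05, 0.88, 1.10, 0.97]   # hypothetical values
avg, cov = avg_and_cov(ccft_ratios)
print(f"AVG = {avg:.2f}, COV = {cov:.2f}")
```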
Efficient discrete-time simulations of continuous-time quantum query algorithms
The continuous-time query model is a variant of the discrete query model in
which queries can be interleaved with known operations (called "driving
operations") continuously in time. Interesting algorithms have been discovered
in this model, such as an algorithm for evaluating nand trees more efficiently
than any classical algorithm. Subsequent work has shown that there also exists
an efficient algorithm for nand trees in the discrete query model; however,
there is no efficient conversion known for continuous-time query algorithms for
arbitrary problems.
We show that any quantum algorithm in the continuous-time query model whose
total query time is T can be simulated by a quantum algorithm in the discrete
query model that makes O[T log(T) / log(log(T))] queries. This is the first
upper bound that is independent of the driving operations (i.e., it holds even
if the norm of the driving Hamiltonian is very large). A corollary is that any
lower bound of T queries for a problem in the discrete-time query model
immediately carries over to a lower bound of \Omega[T log(log(T))/log (T)] in
the continuous-time query model.Comment: 12 pages, 6 fig
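To illustrate how modest the stated query overhead is, the following sketch evaluates T log(T)/log(log(T)) for a few values of T with constant factors ignored; it is only a numerical illustration of the quoted bound, not part of the paper.

```python
# Illustration of the O[T log(T) / log(log(T))] bound (constant factors ignored).
import math

def discrete_queries(T: float) -> float:
    """Leading-order discrete-query count for a continuous-time query algorithm
    of total query time T, per the bound quoted above (no constant factors)."""
    return T * math.log(T) / math.log(math.log(T))

for T in (10, 100, 1000, 10**6):
    q = discrete_queries(T)
    print(f"T = {T:>8}:  ~{q:.0f} queries  (overhead factor ~{q / T:.1f})")
```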
Exponential improvement in precision for simulating sparse Hamiltonians
We provide a quantum algorithm for simulating the dynamics of sparse
Hamiltonians with complexity sublogarithmic in the inverse error, an
exponential improvement over previous methods. Specifically, we show that a
-sparse Hamiltonian acting on qubits can be simulated for time
with precision using queries and
additional 2-qubit gates, where . Unlike previous
approaches based on product formulas, the query complexity is independent of
the number of qubits acted on, and for time-varying Hamiltonians, the gate
complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our
algorithm is based on a significantly improved simulation of the continuous-
and fractional-query models using discrete quantum queries, showing that the
former models are not much more powerful than the discrete model even for very
small error. We also simplify the analysis of this conversion, avoiding the
need for a complex fault correction procedure. Our simplification relies on a
new form of "oblivious amplitude amplification" that can be applied even though
the reflection about the input state is unavailable. Finally, we prove new
lower bounds showing that our algorithms are optimal as a function of the
error.Comment: v1: 27 pages; Subsumes and improves upon results in arXiv:1308.5424.
v2: 28 pages, minor change
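As a rough numerical illustration of the sublogarithmic error dependence claimed above, the sketch below evaluates the quoted query bound for a fixed Hamiltonian as the error tolerance shrinks; the sparsity, norm, and evolution time are arbitrary illustrative values and constant factors are ignored.

```python
# Rough illustration of the query bound quoted above (constant factors ignored);
# the sparsity d, max-norm h_max, and time t are arbitrary illustrative values.
import math

def query_bound(d: int, h_max: float, t: float, eps: float) -> float:
    """O(tau * log(tau/eps) / log(log(tau/eps))) with tau = d^2 * h_max * t."""
    tau = d**2 * h_max * t
    x = math.log(tau / eps)
    return tau * x / math.log(x)

d, h_max, t = 4, 1.0, 10.0          # illustrative sparsity, norm, evolution time
for eps in (1e-3, 1e-6, 1e-12):
    print(f"eps = {eps:>7}:  ~{query_bound(d, h_max, t, eps):.0f} queries")
```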