Continuous Monitoring of A/B Tests without Pain: Optional Stopping in Bayesian Testing
A/B testing is one of the most successful applications of statistical theory
in the modern Internet age. One problem with Null Hypothesis Statistical
Testing (NHST), the backbone of A/B testing methodology, is that experimenters
are not allowed to continuously monitor the results and make decisions in real
time. Many people see this restriction as a setback against the trend in
technology toward real-time data analytics. Recently, Bayesian Hypothesis
Testing, which is intuitively more suitable for real-time decision making, has
attracted growing interest as an alternative to NHST. While corrections of
NHST for the continuous monitoring setting are well established in the
existing literature and known in the A/B testing community, the debate over
whether continuous monitoring is a proper practice in Bayesian testing
continues among both academic researchers and general practitioners. In this
paper, we formally prove the validity of Bayesian testing with continuous
monitoring when proper stopping rules are used, and we illustrate the
theoretical results with concrete simulations. We point out common bad
practices where stopping rules are not proper, and we compare our methodology
to NHST corrections. General guidelines for researchers and practitioners are
also provided.
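To make the setting concrete, here is a minimal Python sketch of a continuously monitored Bayesian A/B test with a Beta-Bernoulli model, stopping once the posterior probability that one arm beats the other crosses a threshold. The conversion rates, priors, threshold, and sample cap are illustrative assumptions; the sketch does not reproduce the paper's proof or its characterization of proper stopping rules.

```python
# Minimal sketch of a continuously monitored Bayesian A/B test.
# Beta-Bernoulli model; stop once P(p_B > p_A | data) leaves [1 - t, t].
# True rates, priors, threshold, and sample cap are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
p_a, p_b = 0.10, 0.12          # assumed true conversion rates (unknown in practice)
threshold = 0.95               # posterior-probability decision threshold
max_n = 20_000                 # cap on observations per arm
a_s = a_f = b_s = b_f = 0      # success/failure counts; Beta(1, 1) priors

for n in range(1, max_n + 1):
    xa, xb = rng.random() < p_a, rng.random() < p_b   # one observation per arm
    a_s += int(xa); a_f += int(not xa)
    b_s += int(xb); b_f += int(not xb)
    # Monte Carlo estimate of P(p_B > p_A | data) from the Beta posteriors.
    post_a = rng.beta(1 + a_s, 1 + a_f, size=4000)
    post_b = rng.beta(1 + b_s, 1 + b_f, size=4000)
    prob_b_wins = (post_b > post_a).mean()
    if prob_b_wins > threshold or prob_b_wins < 1 - threshold:
        print(f"stopped after {n} observations per arm, "
              f"P(B > A | data) = {prob_b_wins:.3f}")
        break
else:
    print(f"no decision after {max_n} observations per arm")
```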
Filter-And-Forward Distributed Beamforming in Relay Networks with Frequency Selective Fading
A new approach to distributed cooperative beamforming in relay networks with
frequency selective fading is proposed. It is assumed that all the relay nodes
are equipped with finite impulse response (FIR) filters and use a
filter-and-forward (FF) strategy to compensate for the transmitter-to-relay and
relay-to-destination channels.
Three relevant half-duplex distributed beamforming problems are considered.
The first problem amounts to minimizing the total relay transmitted power
subject to the destination quality-of-service (QoS) constraint. In the second
and third problems, the destination QoS is maximized subject to the total and
individual relay transmitted power constraints, respectively. For the first and
second problems, closed-form solutions are obtained, whereas the third problem
is solved using convex optimization. The latter convex optimization technique
can also be directly extended to the case where the individual and total power
constraints must be taken into account jointly. Simulation results
demonstrate that in the frequency selective fading case, the proposed FF
approach provides substantial performance improvements as compared to the
commonly used amplify-and-forward (AF) relay beamforming strategy.
Comment: Submitted to IEEE Trans. on Signal Processing on 8 July 200
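For intuition about the signal model only (not the proposed beamformer designs), the sketch below simulates a filter-and-forward relay network with frequency-selective channels: each relay convolves its noisy received signal with an FIR filter and forwards it, and the destination observes the sum of the relay contributions. Channel taps, filter taps, noise levels, and dimensions are placeholder assumptions; the paper chooses the relay filters via the closed-form and convex-optimization solutions described above, which are not reproduced here.

```python
# Sketch of the filter-and-forward (FF) relay signal model with
# frequency-selective (FIR) channels. All taps and sizes are illustrative;
# the relay filters w are placeholders rather than optimized designs.
import numpy as np

rng = np.random.default_rng(1)
n_relays, l_chan, l_filt, n_sym = 3, 4, 5, 1000

s = rng.choice([-1.0, 1.0], size=n_sym)                  # BPSK source symbols
y_dest = np.zeros(n_sym + 2 * l_chan + l_filt - 3)       # destination signal

for r in range(n_relays):
    f = rng.normal(size=l_chan) / np.sqrt(l_chan)        # source -> relay channel
    g = rng.normal(size=l_chan) / np.sqrt(l_chan)        # relay -> destination channel
    w = rng.normal(size=l_filt) / np.sqrt(l_filt)        # relay FIR filter (not optimized)

    x_relay = np.convolve(f, s) + 0.1 * rng.normal(size=n_sym + l_chan - 1)  # relay receives
    t_relay = np.convolve(w, x_relay)                    # filter-and-forward
    y_dest += np.convolve(g, t_relay)                    # destination sums contributions

y_dest += 0.1 * rng.normal(size=y_dest.size)             # destination noise
print("each symbol spreads over", 2 * l_chan + l_filt - 2, "received samples")
```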
Spatially extended nature of resistive switching in perovskite oxide thin films
We report the direct observation of the electric-pulse-induced
resistance-change (EPIR) effect at the nanoscale on La1-xSrxMnO3 (LSMO) thin
films using a current-measurement AFM technique. After a switching voltage of
one polarity is applied across the sample by the AFM tip, the conductivity in a
local nanometer region around the AFM tip is increased, and after a switching
voltage of the opposite polarity is applied, the local conductivity is reduced.
This reversible resistance switching effect is observed under both continuous
and short-pulse voltage switching conditions, and it is important for future
nanoscale non-volatile memory device applications.
Comment: 11 pages, 3 figures
Fixation probabilities for any configuration of two strategies on regular graphs
Population structure and spatial heterogeneity are integral components of
evolutionary dynamics, in general, and of evolution of cooperation, in
particular. Structure can promote the emergence of cooperation in some
populations and suppress it in others. Here, we provide results for weak
selection to favor cooperation on regular graphs for any configuration, meaning
any arrangement of cooperators and defectors. Our results extend previous work
on fixation probabilities of single, randomly placed mutants. We find that for
any configuration cooperation is never favored for birth-death (BD) updating.
In contrast, for death-birth (DB) updating, we derive a simple, computationally
tractable formula for weak selection to favor cooperation when starting from
any configuration containing any number of cooperators and defectors. This
formula elucidates two important features: (i) the takeover of cooperation can
be enhanced by the strategic placement of cooperators and (ii) adding more
cooperators to a configuration can sometimes suppress the evolution of
cooperation. These findings give a formal account for how selection acts on all
transient states that appear in evolutionary trajectories. They also inform the
strategic design of initial states in social networks to maximally promote
cooperation. We also derive general results that characterize the interaction
of any two strategies, not only cooperation and defection.
Comment: 28 pages; final version
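As a back-of-the-envelope companion to the closed-form results, the sketch below estimates by Monte Carlo the fixation probability of a chosen cooperator configuration on a cycle (a regular graph) under death-birth updating with a donation game and weak selection. The benefit, cost, selection intensity, graph size, and configurations are assumptions for illustration; the paper's formula replaces this simulation.

```python
# Monte Carlo estimate of the fixation probability of cooperators for a given
# initial configuration on a cycle graph under death-birth (DB) updating.
# Donation game: a cooperator pays cost c and gives benefit b to each neighbor.
# b, c, selection intensity delta, and the configurations are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, b, c, delta = 10, 3.0, 1.0, 0.01       # cycle size, payoffs, weak selection

def payoff(state, i):
    """Average payoff of node i on the cycle (neighbors i-1 and i+1)."""
    left, right = state[(i - 1) % n], state[(i + 1) % n]
    return b * (left + right) / 2.0 - c * state[i]

def fixation_prob(initial, runs=2000):
    fixed = 0
    for _ in range(runs):
        state = initial.copy()
        while 0 < state.sum() < n:         # iterate until one strategy takes over
            i = rng.integers(n)            # DB: a random individual dies
            nbrs = [(i - 1) % n, (i + 1) % n]
            fit = np.array([1.0 + delta * payoff(state, j) for j in nbrs])
            j = nbrs[rng.choice(2, p=fit / fit.sum())]   # neighbors compete by fitness
            state[i] = state[j]
        fixed += state.sum() == n
    return fixed / runs

# Two cooperators placed adjacently vs. far apart (placement can matter).
adjacent = np.zeros(n, dtype=int); adjacent[[0, 1]] = 1
spread = np.zeros(n, dtype=int); spread[[0, n // 2]] = 1
print("adjacent pair:", fixation_prob(adjacent))
print("spread pair  :", fixation_prob(spread))
```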
ARPA Whitepaper
We propose a secure computation solution for blockchain networks. The
correctness of the computation is verifiable even under a malicious-majority
condition using an information-theoretic Message Authentication Code (MAC),
and privacy is preserved using secret sharing. With a state-of-the-art
multiparty computation (MPC) protocol and a layer-2 solution, our
privacy-preserving computation guarantees data security on the blockchain,
cryptographically, while offloading the heavy computation to a few nodes. This
breakthrough has several implications for the future of decentralized
networks. First, secure computation can be used to support private smart
contracts, where consensus is reached without exposing the information in the
public contract. Second, it enables data to be shared and used in a trustless
network without disclosing the raw data while it is in use, so that data
ownership and data usage are safely separated. Last but not least, the
computation and verification processes are separated, which can be viewed as
computational sharding; this effectively makes the transaction processing
speed linear in the number of participating nodes. Our objective is to deploy
our secure computation network as a layer-2 solution for any blockchain
system. Smart Contracts\cite{smartcontract} will be used as a bridge to link
the blockchain and computation networks. Additionally, they will be used as
verifiers to ensure that outsourced computation is completed correctly. To
achieve this, we first develop a general MPC network with advanced features,
including: 1) secure computation, 2) off-chain computation, 3) verifiable
computation, and 4) support for dApps' needs such as privacy-preserving data
exchange.
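To make the MAC-plus-secret-sharing idea concrete, here is a minimal sketch (not the ARPA protocol itself) of additive secret sharing over a prime field with a SPDZ-style information-theoretic MAC: a value x is split into random summands, the parties also hold shares of alpha*x for a global key alpha, and a tampered share is caught at the MAC check. The field size, party count, and key handling are illustrative assumptions.

```python
# Minimal sketch of additive secret sharing with a SPDZ-style
# information-theoretic MAC over a prime field. Field size, number of parties,
# and key handling are illustrative; this is not the ARPA protocol.
import secrets

P = 2**61 - 1                  # prime modulus (illustrative field)
N_PARTIES = 3

def share(value):
    """Split value into additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(N_PARTIES - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Global MAC key alpha; in a real protocol alpha itself is secret-shared.
alpha = secrets.randbelow(P)

x = 123456789
x_shares = share(x)
mac_shares = share(alpha * x % P)          # shares of the MAC tag alpha * x

# Honest reconstruction passes the MAC check.
assert reconstruct(mac_shares) == alpha * reconstruct(x_shares) % P

# A party tampering with its share is detected (except with probability ~1/P).
x_shares[0] = (x_shares[0] + 1) % P
assert reconstruct(mac_shares) != alpha * reconstruct(x_shares) % P
print("tampering detected by MAC check")
```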
Convertible Bond Underpricing: Renegotiable Covenants, Seasoning and Convergence (Published in "Management Science", Vol. 53, No. 11, November 2007, pp. 1793-1814)
We investigate the long-standing puzzle of the underpricing of convertible bonds. We hypothesize that the observed underpricing is induced by the possibility that a convertible bond might renegotiate some of its covenants, e.g., an embedded put option, in times of financial difficulty. Consistent with our hypothesis, we find that the initial underpricing is larger for lower-rated bonds. The underpricing worsens if the issuer experiences subsequent financial difficulties. However, conditional on no rating downgrades, our main empirical result shows that convertible bond prices do converge to their theoretical prices within two years. This seasoning period is shorter for higher-rated convertible bonds.
Increasing Achievable Information Rates via Geometric Shaping
Achievable information rates are used as a metric to design novel modulation
formats via geometric shaping. The proposed geometrically shaped 256-ary
constellation achieves SNR gains of up to 1.18 dB.
Comment: Additional references have been added
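As a rough illustration of the metric itself (not of the paper's shaped constellation), the sketch below estimates the achievable information rate, i.e., the mutual information of an equiprobable constellation over a complex AWGN channel, by Monte Carlo. The 16-QAM constellation and the SNR grid are placeholder assumptions standing in for the 256-ary geometrically shaped format.

```python
# Monte Carlo estimate of the achievable information rate (mutual information)
# of an equiprobable constellation over a complex AWGN channel. 16-QAM is a
# placeholder; the paper designs a geometrically shaped 256-ary format.
import numpy as np

rng = np.random.default_rng(3)

# Unit-energy 16-QAM constellation.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
const = np.array([a + 1j * b for a in levels for b in levels])
const /= np.sqrt(np.mean(np.abs(const) ** 2))
M = const.size

def mi_awgn(snr_db, n=200_000):
    """Estimate I(X;Y) in bits/symbol for equiprobable inputs over AWGN."""
    n0 = 10 ** (-snr_db / 10)                                # noise variance (Es = 1)
    x = const[rng.integers(M, size=n)]
    y = x + np.sqrt(n0 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    lik_all = np.exp(-np.abs(y[:, None] - const[None, :]) ** 2 / n0)  # p(y|x_j), up to a constant
    lik_sent = np.exp(-np.abs(y - x) ** 2 / n0)                       # p(y|x) for the sent symbol
    return np.log2(M) + np.mean(np.log2(lik_sent / lik_all.sum(axis=1)))

for snr in (5, 10, 15, 20):
    print(f"SNR {snr:2d} dB: {mi_awgn(snr):.3f} bit/symbol")
```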
The Masked Sample Covariance Estimator: An Analysis via Matrix Concentration Inequalities
Covariance estimation becomes challenging in the regime where the number p of
variables outstrips the number n of samples available to construct the
estimate. One way to circumvent this problem is to assume that the covariance
matrix is nearly sparse and to focus on estimating only the significant
entries. To analyze this approach, Levina and Vershynin (2011) introduce a
formalism called masked covariance estimation, where each entry of the sample
covariance estimator is reweighted to reflect an a priori assessment of its
importance. This paper provides a short analysis of the masked sample
covariance estimator by means of a matrix concentration inequality. The main
result applies to general distributions with at least four moments. Specialized
to the case of a Gaussian distribution, the theory offers qualitative
improvements over earlier work. For example, the new results show that n = O(B
log^2 p) samples suffice to estimate a banded covariance matrix with bandwidth
B up to a relative spectral-norm error, in contrast to the sample complexity n
= O(B log^5 p) obtained by Levina and Vershynin.
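As a small illustration of the estimator itself (not of the concentration analysis), the sketch below forms the masked sample covariance, the entrywise product of a banded 0/1 mask with the sample covariance, and compares its relative spectral-norm error with the unmasked estimator. The dimension, bandwidth, sample size, and the banded ground-truth covariance are illustrative assumptions.

```python
# Sketch of the masked sample covariance estimator: entrywise (Hadamard)
# product of a 0/1 banded mask with the sample covariance. Dimension,
# bandwidth, sample size, and true covariance are illustrative.
import numpy as np

rng = np.random.default_rng(4)
p, n, B = 200, 80, 5                      # p >> n regime, bandwidth B

# Banded ground-truth covariance with geometrically decaying off-diagonals.
idx = np.arange(p)
dist = np.abs(idx[:, None] - idx[None, :])
sigma = np.where(dist <= B, 0.5 ** dist, 0.0)

# Draw n Gaussian samples and form the sample covariance.
x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
sample_cov = x.T @ x / n

# Banded mask: keep only entries within bandwidth B of the diagonal.
mask = (dist <= B).astype(float)
masked_cov = mask * sample_cov

def rel_err(est):
    """Relative spectral-norm error against the true covariance."""
    return np.linalg.norm(est - sigma, 2) / np.linalg.norm(sigma, 2)

print(f"relative spectral-norm error, unmasked: {rel_err(sample_cov):.3f}")
print(f"relative spectral-norm error, masked  : {rel_err(masked_cov):.3f}")
```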