Label optimal regret bounds for online local learning
We resolve an open question from (Christiano, 2014b) posed in COLT'14
regarding the optimal dependency of the regret achievable for online local
learning on the size of the label set. In this framework the algorithm is shown
a pair of items at each step, chosen from a set of $n$ items. The learner then
predicts a label for each item, from a label set of size $L$, and receives a
real valued payoff. This is a natural framework which captures many interesting
scenarios such as collaborative filtering, online gambling, and online max cut
among others. (Christiano, 2014a) designed an efficient online learning
algorithm for this problem achieving a regret of $O(\sqrt{nL^3T})$, where $T$
is the number of rounds. Information theoretically, one can achieve a regret of
$O(\sqrt{nT\log L})$. One of the main open questions left in this framework
concerns closing the above gap.
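Concretely, with the bounds above, the two guarantees differ by a factor that is polynomial in the label-set size:
\[
\frac{\sqrt{nL^3T}}{\sqrt{nT\log L}} = \frac{L^{3/2}}{\sqrt{\log L}},
\]
so the open question is where between $\sqrt{\log L}$ and $L^{3/2}$ the optimal dependence on $L$ lies for efficient algorithms.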
In this work, we provide a complete answer to the question above via two main
results. We show, via a tighter analysis, that the semi-definite programming
based algorithm of (Christiano, 2014a), in fact achieves a regret of
$O(\sqrt{nLT})$. Second, we show a matching computational lower bound. Namely,
we show that a polynomial time algorithm for online local learning with lower
regret would imply a polynomial time algorithm for the planted clique problem
which is widely believed to be hard. We prove a similar hardness result under a
related conjecture concerning planted dense subgraphs that we put forth. Unlike
planted clique, the planted dense subgraph problem does not have any known
quasi-polynomial time algorithms.
Computational lower bounds for online learning are relatively rare, and we
hope that the ideas developed in this work will lead to lower bounds for other
online learning scenarios as well.

Comment: 13 pages; changes from previous version: small changes to proofs of Theorems 1 & 2, a small rewrite of the introduction as well (this version is the same as the camera-ready copy in COLT '15).
Evaluating Asset Pricing Implications of DSGE Models
This paper conducts an econometric evaluation of structural macroeconomic asset pricing models. A one-sector dynamic stochastic general equilibrium (DSGE) model with habit formation and capital adjustment costs is considered. Based on the log-linearized DSGE model, a Gaussian probability model for the joint distribution of aggregate consumption, investment, and a vector of asset returns R(t) is specified. We exploit the stochastic discount factor M(t) representation obtained from the DSGE model and impose the no-arbitrage condition E[M(t)R(t)|t-1]=1. In addition to the full general equilibrium model, we also consider consumption- and production-based partial equilibrium specifications, and a more general reference model. To evaluate the various asset pricing models we compute posterior model probabilities and loss-function-based measures of model adequacy.
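For concreteness, the pricing restriction being imposed has the form below; the habit-formation form of $M_t$ shown is a standard illustrative specification, not necessarily the paper's exact one:
\[
E\!\left[ M_t R_t \mid \mathcal{I}_{t-1} \right] = \mathbf{1},
\qquad
M_t = \beta\,\frac{(C_t - hC_{t-1})^{-\gamma}}{(C_{t-1} - hC_{t-2})^{-\gamma}},
\]
where the condition holds element-wise for the return vector $R_t$, $h$ is the habit parameter, and $\gamma$ governs the curvature of utility.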
Shadow Tomography of Quantum States
We introduce the problem of *shadow tomography*: given an unknown
$D$-dimensional quantum mixed state $\rho$, as well as $M$ known two-outcome
measurements $E_1,\ldots,E_M$, estimate the probability that $E_i$
accepts $\rho$, to within additive error $\varepsilon$, for each of the $M$
measurements. How many copies of $\rho$ are needed to achieve this, with high
probability? Surprisingly, we give a procedure that solves the problem by
measuring only $\tilde{O}(\varepsilon^{-4} \cdot \log^4 M \cdot \log D)$ copies. This means, for example, that we can learn the behavior of an
arbitrary $n$-qubit state, on all accepting/rejecting circuits of some fixed
polynomial size, by measuring only $n^{O(1)}$ copies of the state.
This resolves an open problem of the author, which arose from his work on
private-key quantum money schemes, but which also has applications to quantum
copy-protected software, quantum advice, and quantum one-way communication.
Recently, building on this work, Brandão et al. have given a different
approach to shadow tomography using semidefinite programming, which achieves a
savings in computation time.

Comment: 29 pages; extended abstract appeared in Proceedings of STOC'2018; revised to give a slightly better upper bound (1/eps^4 rather than 1/eps^5) and lower bounds with explicit dependence on the dimension.
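For scale, compare with full state tomography; the full-tomography copy complexity quoted here is a known benchmark stated for contrast, not a claim from this abstract:
\[
\underbrace{\tilde{O}\!\left(\frac{\log^4 M \cdot \log D}{\varepsilon^4}\right)}_{\text{shadow tomography}}
\quad\text{vs.}\quad
\underbrace{\Theta\!\left(\frac{D^2}{\varepsilon^2}\right)}_{\text{full tomography (up to log factors)}},
\qquad D = 2^n \text{ for an } n\text{-qubit state},
\]
so the copy count grows polylogarithmically in $D$ and $M$ rather than exponentially in $n$.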
Navigating Central Path with Electrical Flows: from Flows to Matchings, and Back
We present an $\tilde{O}(m^{10/7}) = \tilde{O}(m^{1.43})$-time algorithm for
the maximum s-t flow and the minimum s-t cut problems in directed graphs with
unit capacities. This is the first improvement over the sparse-graph case of
the long-standing $O(m \cdot \min(m^{1/2}, n^{2/3}))$ time bound due to Even and
Tarjan [EvenT75]. By well-known reductions, this also establishes an
$\tilde{O}(m^{10/7})$-time algorithm for the maximum-cardinality bipartite
matching problem. That, in turn, gives an improvement over the celebrated
$O(m\sqrt{n})$ time bound of Hopcroft and Karp [HK73] whenever the
input graph is sufficiently sparse.
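As a sanity check on "sufficiently sparse" (a back-of-the-envelope comparison ignoring polylogarithmic factors; this threshold is not stated in the abstract):
\[
m^{10/7} \le m\sqrt{n}
\;\Longleftrightarrow\;
m^{3/7} \le n^{1/2}
\;\Longleftrightarrow\;
m \le n^{7/6},
\]
so the new bound improves on Hopcroft-Karp roughly whenever $m = o(n^{7/6})$.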
(How) Do the ECB and the Fed React to Financial Market Uncertainty?: The Taylor Rule in Times of Crisis
We assess differences that emerge in Taylor rule estimations for the Fed and the ECB before and after the start of the subprime crisis. For this purpose, we apply an explicit estimate of the equilibrium real interest rate and of potential output in order to account for variations within these variables over time. We argue that measures of money and credit growth, interest rate spreads and asset price inflation should be added to the classical Taylor rule because these variables are proxies of a change in the equilibrium interest rate and are, thus, also likely to have played a major role in setting policy rates during the crisis. Our empirical results, gained from a state-space model and GMM estimations, reveal that, as far as the Fed is concerned, the impact of consumer price inflation, and of money and credit growth, turns negative during the crisis, while the sign of the asset price inflation coefficient turns positive. Thus we are able to establish significant differences in the parameters of the reaction functions of the Fed before and after the start of the subprime crisis. In the case of the ECB, there is no evidence of a change in signs. Instead, the positive reaction to credit growth, consumer and house price inflation becomes even stronger than before. Moreover, we find evidence of a less inertial policy of both the Fed and the ECB during the crisis.

Keywords: subprime crisis, Federal Reserve, European Central Bank, equilibrium real interest rate, Taylor rule
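A minimal sketch of the kind of augmented rule being estimated (the notation is illustrative; the paper's exact specification may differ):
\[
i_t = r_t^* + \pi_t + \alpha_\pi(\pi_t - \pi^*) + \alpha_y(y_t - y_t^*)
      + \alpha_m \Delta m_t + \alpha_c \Delta c_t + \alpha_s s_t + \alpha_a \pi^{a}_t + \rho\, i_{t-1},
\]
where $i_t$ is the policy rate, $r_t^*$ the time-varying equilibrium real rate, $y_t - y_t^*$ the output gap, $\Delta m_t$ and $\Delta c_t$ money and credit growth, $s_t$ an interest rate spread, $\pi^{a}_t$ asset price inflation, and $\rho$ captures the interest rate smoothing ("inertia") whose weakening the authors document.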
Virtual to Real Reinforcement Learning for Autonomous Driving
Reinforcement learning is considered a promising direction for learning driving
policies. However, training an autonomous vehicle with reinforcement learning
in a real environment involves unaffordable trial-and-error. It is more
desirable to train in a virtual environment first and then transfer to the real
environment. In this paper, we propose a novel realistic translation network
that makes a model trained in a virtual environment workable in the real world.
The proposed network converts non-realistic virtual image input into a
realistic image with a similar scene structure. Given realistic frames as
input, a driving policy trained by reinforcement learning can adapt well to
real-world driving. Experiments show that our proposed virtual-to-real (VR)
reinforcement learning (RL) approach works well. To our knowledge, this is the
first successful case of a driving policy trained by reinforcement learning
that can adapt to real-world driving data.
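As a rough illustration of the pipeline (a minimal PyTorch-style sketch; the module names, architectures, and shapes below are placeholders, not the paper's actual networks):

# Minimal sketch of the virtual-to-real pipeline described above.
import torch
import torch.nn as nn

class TranslationNet(nn.Module):
    """Maps a synthetic (virtual) frame to a realistic-looking frame while
    preserving scene structure (stand-in for the realistic translation network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, virtual_frame):
        return self.net(virtual_frame)

class DrivingPolicy(nn.Module):
    """Toy policy head producing steering/throttle from a frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 2)  # [steering, throttle]

    def forward(self, frame):
        return self.head(self.features(frame))

translator = TranslationNet()
policy = DrivingPolicy()

virtual_frame = torch.rand(1, 3, 64, 64)     # frame from the simulator
realistic_frame = translator(virtual_frame)  # translated to the "real" domain
action = policy(realistic_frame)             # policy consumes realistic frames
print(action.shape)                          # torch.Size([1, 2])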
Price Stability and The ECB's Monetary Policy Strategy
This paper focuses on the price stability objective within the framework of the single monetary policy strategy. First, it reviews what this objective, which is common to all central banks, means. Second, it focuses exclusively on the anchoring of short- to medium-term inflation expectations (Part 2). Several measures show that this anchoring is effective. Modern New Keynesian theory is an appropriate framework for analysing the impact that this anchoring of expectations has on the determination of the short- to medium-term inflation rate. From this point of view, observed inflation in the euro area seems to be in line with the theory, and the ECB's action seems to be very effective. Third, we focus on the other aspect of monetary stability: the degree of price-level uncertainty and the anchoring of inflation expectations in the medium to long term. Even though this assessment is more difficult than it is in the short to medium term, since we only have a track record covering five years, various indicators from the theoretical analysis paint a fairly reassuring picture of the effectiveness of the framework used by the ECB.

Keywords: monetary policy; European Central Bank; inflation
Type VII Collagen Gene Mutations (c.8569G>T and c.4879G>A) Result in the Moderately Severe Phenotype of Recessive Dystrophic Epidermolysis Bullosa in a Korean Patient
Dystrophic epidermolysis bullosa (DEB) is caused by mutations in the COL7A1 gene, which encodes type VII collagen. Even though more than 500 different COL7A1 mutations have been identified in DEB, the gene remains under-investigated. To investigate COL7A1 mutations in a moderately severe phenotype of recessive DEB (RDEB) in a Korean patient, the mutation detection strategy consisted of polymerase chain reaction (PCR) amplification of genomic DNA, followed by heteroduplex analysis and nucleotide sequencing of the PCR products demonstrating altered mobility. In this study, one mutation (c.8569G>T) was detected within exon 116. The c.8569G>T mutation in exon 116 changed GAG (Glu) to TAG (stop), ultimately resulting in premature termination of the type VII collagen polypeptide. Furthermore, the mother did not carry the c.8569G>T mutation in exon 116. The other, novel mutation (c.4879G>A) was detected within exon 51 of both the patient and the mother, changing valine (Val) to isoleucine (Ile) in the type VII collagen polypeptide. Taken together, in this study we identified compound heterozygosity for COL7A1 mutations (c.8569G>T and c.4879G>A) in moderately severe RDEB in a Korean patient. We hope that these data contribute to the expanding database on COL7A1 mutations in DEB.
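A quick check of the two codon changes described above. The GAG -> TAG change is stated in the abstract; the specific valine codon (GTC) for c.4879G>A is an illustrative assumption, since any GTx -> ATx substitution gives Val -> Ile:

# Codon-level view of the two COL7A1 substitutions (illustrative).
CODON_TABLE = {
    "GAG": "Glu", "TAG": "Stop",
    "GTC": "Val", "ATC": "Ile",
}

def effect(ref_codon: str, alt_codon: str) -> str:
    """Describe the amino-acid consequence of a single-codon change."""
    return f"{ref_codon} ({CODON_TABLE[ref_codon]}) -> {alt_codon} ({CODON_TABLE[alt_codon]})"

print(effect("GAG", "TAG"))  # c.8569G>T: Glu -> premature stop (exon 116)
print(effect("GTC", "ATC"))  # c.4879G>A: Val -> Ile (exon 51; codon assumed)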
Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs
We introduce a new approach to computing an approximately maximum s-t flow in
a capacitated, undirected graph. This flow is computed by solving a sequence of
electrical flow problems. Each electrical flow is given by the solution of a
system of linear equations in a Laplacian matrix, and thus may be approximately
computed in nearly-linear time.
Using this approach, we develop the fastest known algorithm for computing
approximately maximum s-t flows. For a graph having n vertices and m edges, our
algorithm computes a (1-\epsilon)-approximately maximum s-t flow in time
\tilde{O}(mn^{1/3} \epsilon^{-11/3}). A dual version of our approach computes a
(1+\epsilon)-approximately minimum s-t cut in time
\tilde{O}(m+n^{4/3}\epsilon^{-8/3}), which is the fastest known algorithm for this
problem as well. Previously, the best dependence on m and n was achieved by the
algorithm of Goldberg and Rao (J. ACM 1998), which can be used to compute
approximately maximum s-t flows in time \tilde{O}(m\sqrt{n}\epsilon^{-1}), and
approximately minimum s-t cuts in time \tilde{O}(m+n^{3/2}\epsilon^{-3}).
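As a toy illustration of the core subroutine (a minimal sketch on a made-up 4-vertex graph; real implementations use nearly-linear-time Laplacian solvers rather than a dense pseudoinverse):

# One electrical flow: solve the Laplacian system L*phi = chi_s - chi_t,
# then read edge flows off the potential differences via Ohm's law.
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # small undirected graph
resistances = np.ones(len(edges))          # unit edge resistances
n = 4

# Weighted graph Laplacian with edge weights 1/r_e.
L = np.zeros((n, n))
for (u, v), r in zip(edges, resistances):
    w = 1.0 / r
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

# Inject one unit of current at s and extract it at t.
s, t = 0, 3
b = np.zeros(n)
b[s], b[t] = 1.0, -1.0

# L is singular (constant vectors span its nullspace), so use the
# pseudoinverse here; a fast Laplacian solver plays this role in the algorithm.
phi = np.linalg.pinv(L) @ b

# Ohm's law: flow on edge (u, v) is (phi_u - phi_v) / r_e.
flow = {(u, v): (phi[u] - phi[v]) / r for (u, v), r in zip(edges, resistances)}
print(flow)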