Transcribing Content from Structural Images with Spotlight Mechanism
Transcribing content from structural images, e.g., writing notes from music
scores, is a challenging task: not only must the content objects be
recognized, but the internal structure must also be preserved. Existing image
recognition methods mainly work on images with simple content (e.g., text lines
with characters), but cannot identify those with more complex
content (e.g., structured symbols), which often follow a fine-grained grammar.
To this end, in this paper, we propose a hierarchical Spotlight Transcribing
Network (STN) framework following a two-stage "where-to-what" solution.
Specifically, we first decide "where-to-look" through a novel spotlight
mechanism to focus on different areas of the original image following its
structure. Then, we decide "what-to-write" by developing a GRU based network
with the spotlight areas for transcribing the content accordingly. Moreover, we
propose two implementations on the basis of STN, i.e., STNM and STNR, where the
spotlight movement follows the Markov property and Recurrent modeling,
respectively. We also design a reinforcement method to refine the framework by
self-improving the spotlight mechanism. We conduct extensive experiments on
many structural image datasets, where the results clearly demonstrate the
effectiveness of the STN framework.
Comment: Accepted by KDD 2018 Research Track. In proceedings of the 24th ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining
(KDD'18).
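As a toy illustration of the two-stage "where-to-look" / "what-to-write" idea in this abstract, the sketch below pools image features under a Gaussian spotlight window and feeds the resulting glimpse to a GRU cell. The Gaussian parameterization, array shapes, and weight names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def gaussian_spotlight(feat_map, cx, cy, sigma):
    """Pool a (H, W, C) feature map under a Gaussian spotlight centered at (cx, cy)."""
    H, W, _ = feat_map.shape
    ys, xs = np.mgrid[0:H, 0:W]
    mask = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    mask /= mask.sum()
    # "where-to-look": weighted average of features under the spotlight
    return np.tensordot(mask, feat_map, axes=([0, 1], [0, 1]))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: decide "what-to-write" from the current glimpse x and state h."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(x @ Wz + h @ Uz)                 # update gate
    r = sig(x @ Wr + h @ Ur)                 # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_cand
```

In the full framework, the spotlight center would itself be predicted (Markov or recurrent, as in STNM/STNR) rather than fixed as here.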
Integrated Sensing and Communication: Joint Pilot and Transmission Design
This paper studies a communication-centric integrated sensing and
communication (ISAC) system, where a multi-antenna base station (BS)
simultaneously performs downlink communication and target detection. A novel
target detection and information transmission protocol is proposed, where the
BS executes the channel estimation and beamforming successively and meanwhile
jointly exploits the pilot sequences in the channel estimation stage and user
information in the transmission stage to assist target detection. We
investigate the joint design of pilot matrix, training duration, and transmit
beamforming to maximize the probability of target detection, subject to the
minimum achievable rate required by the user. However, designing the optimal
pilot matrix is rather challenging since there is no closed-form expression of
the detection probability with respect to the pilot matrix. To tackle this
difficulty, we resort to designing the pilot matrix based on the
information-theoretic criterion to maximize the mutual information (MI) between
the received observations and BS-target channel coefficients for target
detection. We first derive the optimal pilot matrix for both channel estimation
and target detection, and then propose a unified pilot matrix structure to
balance minimizing the channel estimation error (MSE) and maximizing MI. Based
on the proposed structure, a low-complexity successive refinement algorithm is
proposed. Simulation results demonstrate that the proposed pilot matrix
structure can well balance the MSE-MI and Rate-MI tradeoffs, and show the
significant Rate-MI region improvement of our proposed design compared to other
benchmark schemes. Furthermore, it is unveiled that as the communication
channel becomes more correlated, the Rate-MI region can be further enlarged.
Comment: This paper addresses the optimal space-time code design for supporting
ISAC.
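The information-theoretic pilot criterion above can be sketched under a standard linear Gaussian training model y = Xh + n with h ~ CN(0, R); both functions below, and this toy model itself, are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pilot_mutual_information(X, R, noise_var):
    """MI between observations and channel for y = X h + n, h ~ CN(0, R):
    I(y; h) = log det(I + X R X^H / noise_var)."""
    T = X.shape[0]
    M = X @ R @ X.conj().T / noise_var
    _, logdet = np.linalg.slogdet(np.eye(T) + M)  # numerically stable log-det
    return logdet

def channel_mmse(X, R, noise_var):
    """LMMSE channel estimation error: tr((R^{-1} + X^H X / noise_var)^{-1})."""
    J = np.linalg.inv(R) + X.conj().T @ X / noise_var
    return np.trace(np.linalg.inv(J)).real
```

Evaluating both quantities for candidate pilot matrices exposes the MSE-MI tradeoff that the unified pilot structure in the abstract is designed to balance.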
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting
a broad range of complex, compelling applications in both military and civilian
fields, where users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services as well as scenarios of future wireless
networks.
Comment: 46 pages, 22 figures.
Learning in Macroeconomics : An Empirical Approach
This thesis explores the role of adaptive learning in explaining empirical phenomena such as the Great Inflation. We show that historic monetary policy deviated from the optimally recommended policy under learning. These deviations are caused by Brainard-type uncertainty, which induces a conservative policy stance. Accounting for both imperfect knowledge and estimation uncertainty, the optimal policy under adaptive learning is consistent with historic policy behavior. However, we show that an optimizing but informationally constrained policy maker would most likely have experienced a Great Inflation-like episode. Furthermore, we develop an estimation method to implement the adaptive learning hypothesis in DSGE modelling, thus extending the standard estimation toolbox.
Bargaining with Incomplete Information
A central question in economics is understanding the difficulties that parties have in reaching mutually beneficial agreements. Informational differences provide an appealing explanation for bargaining inefficiencies. This chapter provides an overview of the theoretical and empirical literature on bargaining with incomplete information. The chapter begins with an analysis of bargaining within a mechanism design framework. A modern development is provided of the classic result that, given two parties with independent private valuations, ex post efficiency is attainable if and only if it is common knowledge that gains from trade exist. The classic problems of efficient trade with one-sided incomplete information but interdependent valuations, and of efficiently dissolving a partnership with two-sided incomplete information, are also reviewed using mechanism design. The chapter then proceeds to study bargaining where the parties sequentially exchange offers. Under one-sided incomplete information, it considers sequential bargaining between a seller with a known valuation and a buyer with a private valuation. When there is a "gap" between the seller's valuation and the support of buyer valuations, the seller-offer game has essentially a unique sequential equilibrium. This equilibrium exhibits the following properties: it is stationary, trade occurs in finite time, and the price is favorable to the informed party (the Coase Conjecture). The alternating-offer game exhibits similar properties, when a refinement of sequential equilibrium is applied. However, in the case of "no gap" between the seller's valuation and the support of buyer valuations, the bargaining does not conclude with probability one after any finite number of periods, and it does not follow that sequential equilibria need be stationary. If stationarity is nevertheless assumed, then the results parallel those for the "gap" case. 
However, if stationarity is not assumed, then instead a folk theorem obtains, so substantial delay is possible and the uninformed party may receive substantial surplus. The chapter also briefly sketches results for sequential bargaining with two-sided incomplete information. Finally, it reviews the empirical evidence on strategic bargaining with private information by focusing on one of the most prominent examples of bargaining: union contract negotiations.
Keywords: Bargaining; Delay; Incomplete Information
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness, which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite a certain level of
progress, image deblurring, especially the blind case, remains limited by
complex application conditions that make the blur kernel hard to obtain and
often spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures.
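As a minimal example of the non-blind, spatially invariant case discussed in this abstract, the sketch below applies classical Wiener deconvolution in the frequency domain; the kernel, the noise-to-signal ratio, and the circular-convolution assumption are illustrative choices, not drawn from any specific surveyed method.

```python
import numpy as np

def wiener_deblur(blurry, kernel, nsr=0.01):
    """Non-blind Wiener deconvolution (circular blur assumed):
    X_hat(f) = conj(K(f)) / (|K(f)|^2 + NSR) * B(f)."""
    H, W = blurry.shape
    K = np.fft.fft2(kernel, s=(H, W))   # zero-pad kernel to image size
    B = np.fft.fft2(blurry)
    X = np.conj(K) / (np.abs(K) ** 2 + nsr) * B
    return np.real(np.fft.ifft2(X))
```

The `nsr` term regularizes the ill-posed inversion near frequencies where the blur kernel response vanishes; blind methods must additionally estimate `kernel` itself.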
Sequential Modelling and Inference of High-frequency Limit Order Book with State-space Models and Monte Carlo Algorithms
The high-frequency limit order book (LOB) market has recently attracted increasing research attention from both industry and academia as a result of expanding algorithmic trading. However, the massive data throughput and the inherent complexity of high-frequency market dynamics also present challenges to some classic statistical modelling approaches. By adopting powerful state-space models from the field of signal processing, as well as a number of Bayesian inference algorithms such as particle filtering, Markov chain Monte Carlo and variational inference, this thesis presents my extensive research into the high-frequency limit order book, covering a wide range of topics.
Chapter 2 presents a novel construction of the non-homogeneous Poisson process to allow online intensity inference of limit order transactions arriving at a central exchange as point data. Chapter 3 extends a baseline jump diffusion model for the market fair-price process to include three additional model features taken from real-world market intuitions. In Chapter 4, another price model is developed to account for both long-term and short-term diffusion behaviours of the price process. This is achieved by incorporating multiple jump-diffusion processes, each exhibiting a unique characteristic. Chapter 5 observes the multi-regime nature of price diffusion processes as well as the non-Markovian switching behaviour between regimes. As such, a novel model is proposed which combines the continuous-time state-space model, the hidden semi-Markov switching model and the non-parametric Dirichlet process model. Additionally, building upon the general structure of the particle Markov chain Monte Carlo algorithm, I further propose an algorithm which achieves sequential state inference, regime identification and regime parameter learning while requiring minimal prior assumptions. Chapter 6 focuses on the development of efficient parameter-learning algorithms for state-space models and presents three algorithms, each demonstrating promising results in comparison to some well-established methods.
The models and algorithms proposed in this thesis are not only practical tools for analysing high-frequency LOB markets, but can also be applied in various areas and disciplines beyond finance.
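To illustrate the kind of particle filtering this thesis builds on, the sketch below runs a bootstrap particle filter on a toy fair-price model (random-walk latent price, Gaussian observation noise); the model and its parameters are assumptions for illustration, far simpler than the jump-diffusion and regime-switching models developed in the thesis.

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=500, q=0.1, r=0.5, seed=None):
    """Filter a toy fair-price model: x_t = x_{t-1} + q*eps_t, y_t = x_t + r*nu_t.
    Returns the filtered posterior mean of x_t at each step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(obs[0], r, n_particles)  # initialize around first quote
    estimates = []
    for y in obs:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate prior
        logw = -0.5 * ((y - particles) / r) ** 2                 # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))                  # filtered mean
        idx = rng.choice(n_particles, n_particles, p=w)          # multinomial resample
        particles = particles[idx]
    return np.array(estimates)
```

In particle MCMC, a filter of this form is embedded inside an MCMC sweep so that static model parameters can be learned alongside the latent states.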
Fitting and goodness-of-fit test of non-truncated and truncated power-law distributions
Power-law distributions contain precious information about a large variety of
processes in geoscience and elsewhere. Although there are sound theoretical
grounds for these distributions, the empirical evidence in favor of power laws
has been traditionally weak. Recently, Clauset et al. have proposed a
systematic method to find over which range (if any) a certain distribution
behaves as a power law. However, their method has been found to fail, in the
sense that true (simulated) power-law tails are not recognized as such in some
instances, and then the power-law hypothesis is rejected. Moreover, the method
does not work well when extended to power-law distributions with an upper
truncation. We explain in detail a similar but alternative procedure, valid for
truncated as well as non-truncated power-law distributions, based on
maximum likelihood estimation, the Kolmogorov-Smirnov goodness-of-fit test, and
Monte Carlo simulations. An overview of the main concepts as well as a recipe
for their practical implementation is provided. The performance of our method
is put to test on several empirical data which were previously analyzed with
less systematic approaches. The databases presented here include the half-lives
of the radionuclides, the seismic moment of earthquakes in the whole world and
in Southern California, a proxy for the energy dissipated by tropical cyclones
elsewhere, the area burned by forest fires in Italy, and the waiting times
calculated over different spatial subdivisions of Southern California. We find
that the method performs very satisfactorily.
Comment: 26 pages, 9 figures.
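For the non-truncated case, the maximum likelihood estimator and the Kolmogorov-Smirnov statistic that such procedures build on can be sketched as follows (continuous power law with x >= xmin; the truncated variant and the Monte Carlo p-value step of the full method are omitted).

```python
import numpy as np

def powerlaw_mle_alpha(x, xmin):
    """MLE exponent for a continuous power law p(x) ~ x^(-alpha), x >= xmin:
    alpha_hat = 1 + n / sum(ln(x_i / xmin))."""
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def ks_statistic(x, xmin, alpha):
    """KS distance between the empirical CDF (upper-step variant) and the
    fitted power-law CDF F(x) = 1 - (x / xmin)^(1 - alpha)."""
    x = np.sort(x[x >= xmin])
    n = len(x)
    cdf_model = 1.0 - (x / xmin) ** (1.0 - alpha)
    cdf_emp = np.arange(1, n + 1) / n
    return np.max(np.abs(cdf_emp - cdf_model))
```

In the full procedure, the KS statistic of the fit is compared against KS statistics of Monte Carlo samples drawn from the fitted distribution to obtain a goodness-of-fit p-value.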