Learning and Forecasting Opinion Dynamics in Social Networks
Social media and social networking sites have become a global pinboard for
exposition and discussion of news, topics, and ideas, where social media users
often update their opinions about a particular topic by learning from the
opinions shared by their friends. In this context, can we learn a data-driven
model of opinion dynamics that is able to accurately forecast opinions from
users? In this paper, we introduce SLANT, a probabilistic modeling framework of
opinion dynamics, which represents users' opinions over time by means of marked jump diffusion stochastic differential equations, and allows for efficient model simulation and parameter estimation from historical fine-grained event
data. We then leverage our framework to derive a set of efficient predictive
formulas for opinion forecasting and identify conditions under which opinions
converge to a steady state. Experiments on data gathered from Twitter show that
our model provides a good fit to the data and our formulas achieve more
accurate forecasting than alternatives.
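The modeling idea can be illustrated in a few lines. Below is a minimal, hypothetical simulation (not the authors' SLANT code) of a marked jump-diffusion opinion process: each user's latent opinion mean-reverts toward a base opinion plus a social drive that decays between events and jumps whenever a neighbor posts a sentiment-marked message. The graph, noise scale, and parameter names (alpha, omega, rate) are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, T, dt = 5, 10.0, 0.01            # users, time horizon, step size
    A = rng.random((n, n)) < 0.4        # hypothetical "i listens to j" graph
    np.fill_diagonal(A, False)
    alpha = rng.normal(0.0, 1.0, n)     # per-user base opinions
    omega, rate = 1.0, 0.5              # influence decay; posting rate per user
    x = alpha.copy()                    # latent opinions
    drive = np.zeros(n)                 # accumulated, decaying social influence

    for _ in range(int(T / dt)):
        # diffusion part: mean-revert toward alpha + current social drive
        x += (alpha + drive - x) * dt + 0.05 * np.sqrt(dt) * rng.normal(size=n)
        drive *= np.exp(-omega * dt)            # influence decays between events
        posters = rng.random(n) < rate * dt     # who posts in this small interval
        for j in np.where(posters)[0]:
            m = x[j] + rng.normal(0.0, 0.1)     # sentiment mark of the event
            drive[A[:, j]] += m                 # followers of j absorb the mark

    print("final opinions:", np.round(x, 2))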
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
Coordinating agents to complete a set of tasks with intercoupled temporal and
resource constraints is computationally challenging, yet human domain experts
can solve these difficult scheduling problems using paradigms learned through
years of apprenticeship. A process for manually codifying this domain knowledge
within a computational framework is necessary to scale beyond the
"single-expert, single-trainee" apprenticeship model. However, human domain
experts often have difficulty describing their decision-making processes,
causing the codification of this knowledge to become laborious. We propose a
new approach for capturing domain-expert heuristics through a pairwise ranking
formulation. Our approach is model-free and does not require enumerating or
iterating through a large state space. We empirically demonstrate that this
approach accurately learns multifaceted heuristics on a synthetic data set
incorporating job-shop scheduling and vehicle routing problems, as well as on
two real-world data sets consisting of demonstrations of experts solving a
weapon-to-target assignment problem and a hospital resource allocation problem.
We also demonstrate that policies learned from human scheduling demonstration
via apprenticeship learning can substantially improve the efficiency of a
branch-and-bound search for an optimal schedule. We employ this human-machine
collaborative optimization technique on a variant of the weapon-to-target
assignment problem. We demonstrate that this technique generates solutions
substantially superior to those produced by human domain experts at a rate up
to 9.5 times faster than an optimization approach and can be applied to
optimally solve problems twice as complex as those solved by a human
demonstrator.
Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
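To make the pairwise ranking formulation concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each expert decision is converted into pairs of feature differences between the chosen task and each task passed over, and a standard classifier on those differences recovers the expert's preference weights. The hidden weight vector and feature dimensions are toy assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -1.0, 0.5])             # hidden expert preference (toy)

    def make_pairs(n_decisions=200, n_candidates=5):
        X, y = [], []
        for _ in range(n_decisions):
            cand = rng.normal(size=(n_candidates, len(w_true)))  # task features
            chosen = np.argmax(cand @ w_true)       # expert picks preferred task
            for k in range(n_candidates):
                if k == chosen:
                    continue
                X.append(cand[chosen] - cand[k])    # chosen beats task k ...
                y.append(1)
                X.append(cand[k] - cand[chosen])    # ... and the reverse pair
                y.append(0)
        return np.array(X), np.array(y)

    X, y = make_pairs()
    clf = LogisticRegression().fit(X, y)
    print("recovered weights (up to scale):", np.round(clf.coef_[0], 2))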
Modeling Adoption and Usage of Competing Products
The emergence and widespread use of online social networks has led to a dramatic increase in the availability of social activity data. Importantly,
this data can be exploited to investigate, at a microscopic level, some of the
problems that have captured the attention of economists, marketers and
sociologists for decades, such as product adoption, usage, and
competition.
In this paper, we propose a continuous-time probabilistic model, based on
temporal point processes, for the adoption and frequency of use of competing
products, where the frequency of use of one product can be modulated by those
of others. This model allows us to efficiently simulate the adoption and
recurrent usages of competing products, and generate traces in which we can
easily recognize the effect of social influence, recency and competition. We
then develop an inference method to efficiently fit the model parameters by
solving a convex program. The problem decouples into a collection of smaller
subproblems, thus scaling easily to networks with hundreds of thousands of
nodes. We validate our model over synthetic and real diffusion data gathered
from Twitter, and show that the proposed model not only provides a good fit to the data and more accurate predictions than alternatives but also yields interpretable model parameters, which allow us to gain insights into some of the factors driving product adoption and frequency of use.
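The core generative mechanism can be illustrated with a toy univariate version (not the paper's multivariate model): a product's usage intensity is a base rate plus exponentially decaying excitation from past usage events, simulated here with Ogata-style thinning. All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    mu, beta, omega, T = 0.2, 0.8, 1.0, 50.0   # base rate, jump size, decay, horizon

    def intensity(t, events):
        # lambda(t) = mu + beta * sum_i exp(-omega * (t - t_i)) over past events
        past = events[events <= t]
        return mu + beta * np.exp(-omega * (t - past)).sum()

    events = np.array([])
    t = 0.0
    while t < T:
        lam_bar = intensity(t, events)        # valid bound: intensity only decays
        t += rng.exponential(1.0 / lam_bar)   # propose next candidate time
        if t < T and rng.random() < intensity(t, events) / lam_bar:
            events = np.append(events, t)     # accepted: a usage event occurs

    print(f"{len(events)} usage events, mean rate {len(events) / T:.2f}")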
Modelling Financial High Frequency Data Using Point Processes
In this paper, we give an overview of the state-of-the-art in the econometric literature on the modeling of so-called financial point processes. The latter are associated with the random arrival of specific financial trading events, such as transactions, quote updates, limit orders or price changes observable based on financial high-frequency data. After discussing fundamental statistical concepts of point process theory, we review duration-based and intensity-based models of financial point processes. Whereas duration-based approaches are mostly preferable for univariate time series, intensity-based models provide powerful frameworks to model multivariate point processes in continuous time. We illustrate the most important properties of the individual models and discuss major empirical applications.
Keywords: financial point processes, dynamic duration models, dynamic intensity models.
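As a concrete example of the duration-based family the survey covers, here is a minimal simulation of an ACD(1,1) model in the spirit of Engle and Russell: each duration is the product of a conditional mean, which follows a GARCH-like recursion, and an i.i.d. unit-mean innovation. The parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    omega, a, b, n = 0.1, 0.2, 0.7, 1000   # ACD(1,1) parameters (a + b < 1)

    psi = np.empty(n)                      # conditional expected durations
    x = np.empty(n)                        # observed trade durations
    psi[0] = omega / (1 - a - b)           # unconditional mean as starting value
    x[0] = psi[0] * rng.exponential()
    for i in range(1, n):
        psi[i] = omega + a * x[i - 1] + b * psi[i - 1]
        x[i] = psi[i] * rng.exponential()  # x_i = psi_i * eps_i, eps ~ Exp(1)

    print(f"mean duration {x.mean():.2f} vs. theoretical {omega / (1 - a - b):.2f}")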
Defy the Game: Automated Market Making using Deep Reinforcement Learning
Automated market makers have gained popularity in the financial market for their ability to provide
liquidity without needing a centralized intermediary (market maker). However, they suffer from the
problems of slippage and impermanent loss, which can lead to losses for both liquidity providers and takers.
This work implements a pseudo-arbitrage rule to address the impermanent loss caused by arbitrage opportunities. The mechanism relies on a trusted external oracle to obtain market conditions, feed them to the automated market maker, and match the bonding curve to them. Next, the application of a Double
Deep Q-Learning reinforcement learning algorithm is proposed to reduce these issues in automated market
makers. The algorithm adjusts the curvature of the bonding curve function to adapt to market conditions
quickly. This work describes the model, the simulation environment used to learn and test the proposed
approach, and the metrics used to evaluate its performance. Finally, it presents the results of the experiments and an analysis of their implications. The approach shows promise in reducing slippage and impermanent loss, and we conclude by recommending improvements and future work.
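To see why bonding-curve shape matters, here is a minimal, self-contained sketch (not the paper's simulator) of the quantity being targeted: the slippage of a swap on a constant-product curve relative to a trusted oracle price. A deeper pool behaves like a locally flatter curve, which is the kind of effect the agent's curvature adjustments aim for; all reserves, prices, and trade sizes are illustrative.

    def swap_out(x_reserve, y_reserve, dx):
        """Tokens of y received for dx of x on a constant-product curve x*y = k."""
        k = x_reserve * y_reserve
        return y_reserve - k / (x_reserve + dx)

    oracle_price = 1.0                     # trusted external price of y in x
    for depth in (1_000.0, 10_000.0):      # deeper pool => flatter local curve
        dx = 100.0                         # trade size in token x
        dy = swap_out(depth, depth * oracle_price, dx)
        exec_price = dx / dy
        slippage = exec_price / oracle_price - 1.0
        print(f"depth {depth:>8.0f}: execution price {exec_price:.4f}, "
              f"slippage {slippage:+.2%}")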
Semantic data integration for supply chain management: with a specific focus on applications in the semiconductor industry
Supply Chain Management (SCM) is essential to monitor, control, and enhance the performance of Supply Chains (SCs). Increasing globalization and diversity of SCs lead to complex SC structures, limited visibility among SC partners, and
challenging collaboration caused by dispersed data silos. Digitalization is driving and transforming the SCs of fundamental sectors such as the semiconductor industry. This is further accelerated by the indispensable role that semiconductor products play in electronics, IoT, and security systems. Semiconductor SCM is unique, as its SC operations exhibit special features, e.g.,
long production lead times and short product life. Hence, systematic SCM is required to establish information exchange, overcome inefficiency resulting from incompatibility, and adapt to industry-specific challenges.
The Semantic Web is designed for linking data and establishing information exchange. Semantic models provide high-level descriptions of the domain that enable interoperability. Semantic data integration consolidates the heterogeneous data into meaningful and valuable information. The main goal of this thesis is to investigate Semantic Web Technologies (SWT) for SCM with a specific focus
on applications in the semiconductor industry.
As part of SCM, End-to-End SC modeling ensures visibility of SC partners and flows. Existing models are limited in the way they represent operational SC relationships beyond one-to-one structures. The scarcity of empirical data from multiple SC partners hinders the analysis of the impact of supply network partners on each other and the benchmarking of the overall SC performance. In our work, we investigate (i) how semantic models can be used to standardize and benchmark SCs. Moreover, in a volatile and unpredictable environment, SC experts require methodical and efficient approaches to integrate various data sources for informed decision-making regarding SC behavior. Thus, this work addresses (ii) how semantic data integration can help make SCs more efficient and resilient. Finally,
to secure a good position in a competitive market, semiconductor SCs strive to implement operational strategies to control demand variation, i.e., bullwhip, while maintaining sustainable relationships with customers. We examine (iii) how we can apply semantic technologies to specifically support semiconductor SCs.
In this thesis, we provide semantic models that integrate, in a standardized way, SC processes, structure, and flows, ensuring both a holistic understanding of SCs and the inclusion of granular operational details. We demonstrate that these models enable the instantiation of a synthetic SC for benchmarking. We contribute semantic data integration applications to enable interoperability
and make SCs more efficient and resilient. Moreover, we leverage ontologies and KGs to implement customer-oriented bullwhip-taming strategies. We create semantic-based approaches intertwined with Artificial Intelligence (AI) algorithms to address semiconductor industry specifics and ensure operational excellence.
The results prove that relying on semantic technologies contributes to achieving rigorous and systematic SCM. We deem that better standardization, simulation, benchmarking, and analysis, as elaborated in the contributions, will help master more complex SC scenarios. SC stakeholders can increasingly understand the domain and are thus better equipped with effective control strategies to
restrain disruption accelerators, such as the bullwhip effect. In essence, the proposed Semantic Web Technology-based strategies unlock the potential to increase the efficiency, resilience, and operational excellence of supply networks in general and the semiconductor SC in particular.
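As a flavor of what a semantic SC model looks like in practice, the following is a minimal, hypothetical sketch using the rdflib Python library (not the thesis's ontology): supply relations are expressed as RDF triples and queried with a SPARQL property path to recover all downstream partners. The namespace, classes, and partner names are illustrative.

    from rdflib import Graph, Namespace, RDF

    SC = Namespace("http://example.org/sc#")   # hypothetical namespace
    g = Graph()
    g.bind("sc", SC)

    # A toy three-tier semiconductor supply chain.
    for node in ("WaferFab", "Assembly", "Distributor"):
        g.add((SC[node], RDF.type, SC.SupplyChainPartner))
    g.add((SC.WaferFab, SC.supplies, SC.Assembly))
    g.add((SC.Assembly, SC.supplies, SC.Distributor))

    # Which partners sit (transitively) downstream of the wafer fab?
    q = "SELECT ?down WHERE { sc:WaferFab sc:supplies+ ?down . }"
    for row in g.query(q):
        print(row.down)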
Multi-Period Trading via Convex Optimization
We consider a basic model of multi-period trading, which can be used to
evaluate the performance of a trading strategy. We describe a framework for
single-period optimization, where the trades in each period are found by
solving a convex optimization problem that trades off expected return, risk,
transaction cost and holding cost such as the borrowing cost for shorting
assets. We then describe a multi-period version of the trading method, where
optimization is used to plan a sequence of trades, with only the first one
executed, using estimates of future quantities that are unknown when the trades
are chosen. The single-period method traces back to Markowitz; the multi-period
methods trace back to model predictive control. Our contribution is to describe
the single-period and multi-period methods in one simple framework, giving a
clear description of the development and the approximations made. In this paper
we do not address a critical component in a trading algorithm, the predictions
or forecasts of future quantities. The methods we describe in this paper can be
thought of as good ways to exploit predictions, no matter how they are made. We
have also developed a companion open-source software library that implements
many of the ideas and methods described in the paper.
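A single-period step of the kind described above can be written almost verbatim as a convex program. The sketch below uses the cvxpy Python library with toy data; the forecast returns, diagonal risk model, and penalty weights are all illustrative assumptions, and this is not the paper's companion library.

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(4)
    n = 10                                 # assets
    w = np.full(n, 1.0 / n)                # current portfolio weights
    r_hat = rng.normal(0.001, 0.01, n)     # forecast returns (assumed given)
    Sigma = np.diag(rng.uniform(0.01, 0.05, n) ** 2)  # toy diagonal risk model
    gamma_risk, gamma_tc = 5.0, 1.0        # risk and trading-cost aversion

    z = cp.Variable(n)                     # trade vector (change in weights)
    w_new = w + z
    objective = cp.Maximize(
        r_hat @ w_new
        - gamma_risk * cp.quad_form(w_new, Sigma)     # quadratic risk penalty
        - gamma_tc * 0.001 * cp.norm1(z)              # 10 bps proportional cost
    )
    constraints = [cp.sum(z) == 0, w_new >= 0]        # self-financing, long-only
    cp.Problem(objective, constraints).solve()
    print("trades:", np.round(z.value, 4))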
ATMS: Algorithmic Trading-Guided Market Simulation
The effective construction of an Algorithmic Trading (AT) strategy often relies on market simulators; building such simulators remains challenging due to existing methods' inability to adapt to the sequential and dynamic nature of trading activities.
This work fills this gap by proposing a metric to quantify market discrepancy.
This metric measures market discrepancy as a causal effect of each market's unique underlying characteristics, and it is evaluated through the interaction
between the AT agent and the market. Most importantly, we introduce Algorithmic
Trading-guided Market Simulation (ATMS) by optimizing our proposed metric.
Inspired by SeqGAN, ATMS formulates the simulator as a stochastic policy in
reinforcement learning (RL) to account for the sequential nature of trading.
Moreover, ATMS utilizes the policy gradient update to bypass differentiating
the proposed metric, which involves non-differentiable operations such as order
deletion from the market. Through extensive experiments on semi-real market
data, we demonstrate the effectiveness of our metric and show that ATMS
generates market data with improved similarity to reality compared to the
state-of-the-art conditional Wasserstein Generative Adversarial Network (cWGAN)
approach. Furthermore, ATMS produces market data with more balanced BUY and
SELL volumes, mitigating the bias of the cWGAN baseline approach, where a
simple strategy can exploit the BUY/SELL imbalance for profit.
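The policy-gradient trick mentioned above, i.e., training through a non-differentiable reward, can be illustrated with a minimal REINFORCE example (not the ATMS code): a softmax policy over toy order types is updated with grad log pi(a) times a black-box reward, so no gradient ever flows through the reward itself. The action set, reward, and learning rate are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    theta = np.zeros(3)                     # logits over {BUY, SELL, DELETE}
    target = np.array([0.45, 0.45, 0.10])   # hypothetical "real market" mix

    def black_box_reward(action):
        # Non-differentiable scorer standing in for the discrepancy metric:
        # actions common in the real data earn higher reward.
        return target[action]

    for _ in range(5000):
        p = np.exp(theta - theta.max())
        p /= p.sum()                        # softmax policy
        a = rng.choice(3, p=p)
        R = black_box_reward(a)
        grad_logp = -p.copy()
        grad_logp[a] += 1.0                 # gradient of log pi(a | theta)
        theta += 0.05 * grad_logp * R       # REINFORCE update

    p = np.exp(theta - theta.max())
    p /= p.sum()
    print("learned action probabilities:", np.round(p, 2))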