Wright meets Markowitz: How standard portfolio theory changes when assets are technologies following experience curves
We consider how to optimally allocate investments in a portfolio of competing
technologies using the standard mean-variance framework of portfolio theory. We
assume that technologies follow the empirically observed relationship known as
Wright's law, also called a "learning curve" or "experience curve", which
postulates that costs drop as cumulative production increases. This introduces
a positive feedback between cost and investment that complicates the portfolio
problem, leading to multiple local optima, and causing a trade-off between
concentrating investments in one project to spur rapid progress vs.
diversifying over many projects to hedge against failure. We study the
two-technology case and characterize the optimal diversification in terms of
progress rates, variability, initial costs, initial experience, risk aversion,
discount rate and total demand. The efficient frontier framework is used to
visualize technology portfolios and show how feedback results in nonlinear
distortions of the feasible set. For the two-period case, in which learning and
uncertainty interact with discounting, we compare different scenarios and find
that the discount rate plays a critical role.
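Wright's law as described above is a simple power law in cumulative production. A minimal sketch of that relationship (the 20% learning rate, the function name and the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def wrights_law_cost(c0, x0, x, alpha):
    """Unit cost after cumulative production x, given initial cost c0
    at initial experience x0 and learning exponent alpha (Wright's law)."""
    return c0 * (x / x0) ** (-alpha)

# A 20% learning rate means cost falls 20% with each doubling of
# cumulative production, so alpha = -log2(1 - 0.20).
alpha = -np.log2(1 - 0.20)
c0, x0 = 100.0, 1.0
for x in [1, 2, 4, 8]:
    print(x, round(wrights_law_cost(c0, x0, x, alpha), 2))
```

The positive feedback the abstract refers to arises because investing more raises cumulative production x, which lowers the cost returned by this function, which in turn changes the attractiveness of further investment.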
Algorithms for nonnegative matrix factorization with the beta-divergence
This paper describes algorithms for nonnegative matrix factorization (NMF)
with the beta-divergence (beta-NMF). The beta-divergence is a family of cost
functions parametrized by a single shape parameter beta that takes the
Euclidean distance, the Kullback-Leibler divergence and the Itakura-Saito
divergence as special cases (beta = 2,1,0, respectively). The proposed
algorithms are based on a surrogate auxiliary function (a local majorization of
the criterion function). We first describe a majorization-minimization (MM)
algorithm that leads to multiplicative updates, which differ from standard
heuristic multiplicative updates by a beta-dependent power exponent. The
monotonicity of the heuristic algorithm can however be proven for beta in (0,1)
using the proposed auxiliary function. Then we introduce the concept of
majorization-equalization (ME) algorithm which produces updates that move along
constant level sets of the auxiliary function and lead to larger steps than MM.
Simulations on synthetic and real data illustrate the faster convergence of the
ME approach. The paper also describes how the proposed algorithms can be
adapted to two common variants of NMF: penalized NMF (i.e., when a penalty
function of the factors is added to the criterion function) and convex-NMF
(when the dictionary is assumed to belong to a known subspace). Comment: to appear in Neural Computation
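The MM multiplicative updates with a beta-dependent power exponent can be sketched as follows. This is a minimal illustration of the update scheme the abstract describes, not the paper's reference implementation; the exponent gamma(beta) is what distinguishes the MM updates from the standard heuristic ones:

```python
import numpy as np

def beta_nmf_mm(V, rank, beta=1.0, n_iter=200, seed=0):
    """MM multiplicative updates for beta-NMF, V ~= W @ H with W, H >= 0.
    beta = 2, 1, 0 give Euclidean, Kullback-Leibler, Itakura-Saito."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, rank)) + 1e-3   # strictly positive initialization
    H = rng.random((rank, N)) + 1e-3
    # Beta-dependent power exponent of the MM updates.
    if beta < 1:
        gamma = 1.0 / (2.0 - beta)
    elif beta > 2:
        gamma = 1.0 / (beta - 1.0)
    else:
        gamma = 1.0
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ (V * WH ** (beta - 2)) / (W.T @ WH ** (beta - 1))) ** gamma
        WH = W @ H
        W *= ((V * WH ** (beta - 2)) @ H.T / (WH ** (beta - 1) @ H.T)) ** gamma
    return W, H
```

For beta in [1, 2] the exponent is 1 and these coincide with the familiar heuristic multiplicative updates; outside that range the exponent shrinks the step to preserve monotone descent.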
Monte Carlo evaluation of sensitivities in computational finance
In computational finance, Monte Carlo simulation is used to compute the prices of financial options. More important, however, is the ability to compute the so-called "Greeks", the first and second order derivatives of the prices with respect to input parameters such as the current asset price, interest rate and level of volatility.

This paper discusses the three main approaches to computing Greeks: finite differences, the likelihood ratio method (LRM) and pathwise sensitivity calculation. The last of these has an adjoint implementation with a computational cost which is independent of the number of first derivatives to be calculated. We explain how the practical development of adjoint codes is greatly assisted by using Algorithmic Differentiation, and in particular discuss the performance achieved by the FADBAD++ software package, which is based on templates and operator overloading within C++.

The pathwise approach is not applicable when the financial payoff function is not differentiable, and even when the payoff is differentiable, the use of scripting in real-world implementations means it can be very difficult in practice to evaluate the derivative of very complex financial products. A new idea is presented to address these limitations by combining the adjoint pathwise approach for the stochastic path evolution with LRM for the payoff evaluation.
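The pathwise sensitivity idea can be illustrated on the simplest possible case, the delta of a European call under geometric Brownian motion. This is a textbook sketch of the general technique, not code from the paper; the parameter values are arbitrary:

```python
import numpy as np

def pathwise_delta_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Monte Carlo pathwise estimator of a European call's delta under GBM.
    Along each path dS_T/dS0 = S_T/S0, so differentiating the discounted
    payoff max(S_T - K, 0) path by path gives the estimator below.
    (The payoff's kink at K is harmless: max(., 0) is differentiable a.e.)"""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.mean((ST > K) * ST / S0)
```

The estimate converges to the Black-Scholes delta N(d1); for a digital option, whose payoff is an indicator function, the pathwise derivative is zero almost everywhere and the method fails, which is exactly the limitation the LRM combination above is meant to address.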
Fixed interval scheduling problem with minimal idle time with an application to music arrangement problem
The Operational Fixed Interval Scheduling Problem aims to find an assignment
of jobs to machines that maximizes the total weight of the completed jobs. We
introduce a new variant of the problem where we consider the additional goal of
minimizing the idle time, the total duration during which the machines are
idle. The problem is expressed using quadratic unconstrained binary
optimization (QUBO) formulation, taking into account soft and hard constraints
required to ensure that the number of jobs running at each time point is,
ideally, equal to the number of machines. Our choice of QUBO representation is motivated
by the increasing popularity of new computational architectures such as
neuromorphic processors, coherent Ising machines, and quantum and
quantum-inspired digital annealers for which QUBO is a natural input. An
optimization problem that can be solved using the presented QUBO formulation is
the music reduction problem, the process of reducing a given music piece for a
smaller number of instruments. We use two music compositions to test the QUBO
formulation and compare the performance of simulated, quantum, and hybrid
annealing algorithms. Comment: 15 pages, 3 figures
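A toy version of such a QUBO can be written down directly: one binary variable per (job, machine) pair, negative diagonal weights rewarding completed jobs, and positive penalties enforcing the hard constraints. The instance, penalty strength and brute-force solver below are illustrative stand-ins (a real workflow would hand Q to an annealer), not the paper's formulation:

```python
import itertools
import numpy as np

# Toy instance: jobs as (start, end, weight), two machines.
jobs = [(0, 3, 2.0), (2, 5, 3.0), (5, 8, 1.0)]
n_machines = 2
P = 10.0  # penalty strength for hard constraints

n = len(jobs) * n_machines          # one binary variable per (job, machine)
def idx(j, m):
    return j * n_machines + m

Q = np.zeros((n, n))
for j, (s, e, w) in enumerate(jobs):
    for m in range(n_machines):
        Q[idx(j, m), idx(j, m)] -= w          # reward completing job j
        for m2 in range(m + 1, n_machines):
            Q[idx(j, m), idx(j, m2)] += P     # job on at most one machine
for j1, j2 in itertools.combinations(range(len(jobs)), 2):
    s1, e1, _ = jobs[j1]
    s2, e2, _ = jobs[j2]
    if s1 < e2 and s2 < e1:                   # time intervals overlap
        for m in range(n_machines):
            Q[idx(j1, m), idx(j2, m)] += P    # cannot share a machine

# Brute-force minimization of x^T Q x (an annealer replaces this step).
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
```

With the penalty P larger than any job weight, the minimum-energy states are exactly the feasible schedules of maximum total weight, which is what makes the formulation a natural input for annealing hardware.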
Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music
We investigate the properties of a new Multi-Agent System (MAS) for computer-aided composition called IPCS (pronounced “ipp-siss”), the Intermediate Performance Composition System, which generates expressive performance as part of its compositional process and produces emergent melodic structures by a novel multi-agent process. IPCS consists of a small-to-medium-sized (2 to 16) collection of agents in which each agent can perform monophonic tunes and learn monophonic tunes from other agents. Each agent has an affective state (an “artificial emotional state”) which affects how it performs the music to other agents; e.g. a “happy” agent will perform “happier” music. The agent performance not only involves compositional changes to the music, but also adds smaller changes based on expressive music performance algorithms for humanization. Every agent is initialized with a tune containing the same single note, and over the interaction period longer tunes are built through agent interaction. Agents will only learn tunes performed to them by other agents if the affective content of the tune is similar to their current affective state; learned tunes are concatenated to the end of their current tune. Each agent in the society learns its own growing tune during the interaction process. Agents develop “opinions” of other agents that perform to them, depending on how much the performing agent can help their tunes grow. These opinions affect who they interact with in the future. IPCS is not a mapping from multi-agent interaction onto musical features, but actually utilizes music for the agents to communicate emotions. In spite of the lack of explicit melodic intelligence in IPCS, the system is shown to generate non-trivial melody pitch sequences as a result of emotional communication between agents.
The melodies also have a hierarchical structure that derives from the emergent social structure of the multi-agent system: the hierarchy in the music reflects the agent social interaction structure that emerges during the run. The interactive humanizations produce micro-timing and loudness deviations in the melody which are shown to express its hierarchical generative structure without the need for the structural analysis software frequently used in computer music humanization.
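The core interaction rule of the abstract, learn a performed tune only when its affective content is close to your own state, then concatenate it, can be sketched in a few lines. Everything here (the class name, the scalar stand-in for an affective state, the threshold, the MIDI seed note) is an illustrative simplification, not IPCS itself:

```python
import random

class Agent:
    """Toy sketch of the tune-learning rule described above."""
    def __init__(self, affect, seed_note=60):
        self.affect = affect      # scalar stand-in for an affective state
        self.tune = [seed_note]   # every agent starts with one note

    def listens_to(self, other, threshold=0.3):
        # Learn the performed tune only if its affective content is
        # close to this agent's current state; concatenate it if so.
        if abs(self.affect - other.affect) <= threshold:
            self.tune = self.tune + other.tune
            return True
        return False

# Repeated pairwise interactions grow each agent's tune over time.
random.seed(0)
agents = [Agent(affect=random.random()) for _ in range(4)]
for _ in range(10):
    listener, performer = random.sample(agents, 2)
    listener.listens_to(performer)
```

Because agents with similar affective states learn from each other more often, clusters of like-minded agents, and hence the hierarchical melodic structure the abstract reports, can emerge without any explicit melodic rules.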
Computational composition strategies in audiovisual laptop performance
We live in a cultural environment in which computer-based musical performances have become ubiquitous. In particular, the use of laptops as instruments is a thriving practice in many genres and subcultures. The opportunity to command the most intricate level of control on the smallest of time scales in music composition and computer graphics introduces a number of complexities and dilemmas for the performer working with algorithms. Writing computer code to create audiovisuals offers abundant opportunities for discovering new ways of expression in live performance, while simultaneously introducing challenges and presenting the user with difficult choices. There is a host of computational strategies that can be employed in live situations to assist the performer, including artificially intelligent performance agents that operate according to predefined algorithmic rules. This thesis describes four software systems for real-time multimodal improvisation and composition in which a number of computational strategies for audiovisual laptop performance are explored, and which were used in the creation of a portfolio of accompanying audiovisual compositions.