A review of portfolio planning: Models and systems
In this chapter, we first provide an overview of a number of portfolio planning models which have been proposed and investigated over the last forty years. We revisit the mean-variance (M-V) model of Markowitz and the construction of the risk-return efficient frontier. A piecewise linear approximation of the problem, obtained through a reformulation that diagonalises the quadratic form into a variable-separable function, is also considered. A few other models, such as the Mean Absolute Deviation (MAD), Weighted Goal Programming (WGP) and Minimax (MM) models, which use alternative metrics for risk, are also introduced, compared and contrasted. Recently, asymmetric measures of risk have gained in importance; we consider a generic representation and a number of alternative symmetric and asymmetric measures of risk which find use in the evaluation of portfolios. A number of modelling and computational considerations have been introduced into practical portfolio planning problems. These include: (a) buy-in thresholds for assets, (b) restrictions on the number of assets (cardinality constraints), and (c) transaction roundlot restrictions. Practical portfolio models may also include (d) dedication of cashflow streams and (e) immunization, which involves duration matching and convexity constraints. The modelling issues in respect of these features are discussed. Many of these features lead to discrete restrictions involving zero-one and general integer variables, which make the resulting model a quadratic mixed-integer programming (QMIP) model. QMIP is an NP-hard problem; algorithms and solution methods for this class of problems are also discussed. The issues of preparing the analytic data (financial datamarts) for this family of portfolio planning problems are examined. We finally present computational results which provide some indication of the state of the art in the solution of portfolio optimisation problems.
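To make the mean-variance construction concrete, the sketch below traces a small risk-return efficient frontier by minimising portfolio variance over a sweep of target returns. It is a minimal illustration only, assuming three hypothetical assets with made-up expected returns and covariances; it is not the chapter's formulation and omits the piecewise linear approximation and the discrete (QMIP) features discussed above.

```python
# Minimal Markowitz mean-variance sketch; all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])             # expected returns (assumed)
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.12, 0.03],
                [0.01, 0.03, 0.09]])           # covariance of returns (assumed)

def min_variance_weights(target_return):
    """Minimise portfolio variance w'Cw subject to w'mu = target and sum(w) = 1."""
    n = len(mu)
    cons = [{"type": "eq", "fun": lambda w: w @ mu - target_return},
            {"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * n                  # long-only portfolio
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=bounds, constraints=cons)
    return res.x

# Sweep target returns to trace (risk, return) points on the efficient frontier.
frontier = [(np.sqrt(w @ cov @ w), r)
            for r in np.linspace(mu.min(), mu.max(), 20)
            for w in [min_variance_weights(r)]]
```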
Behavioral conservatism is linked to complexity of behavior in chimpanzees (Pan troglodytes): implications for cognition and cumulative culture
Cumulative culture is rare, if not altogether absent, in nonhuman species. At the foundation of cumulative learning is the ability to modify, relinquish, or build upon previous behaviors flexibly to make them more productive or efficient. Within the primate literature, a failure to optimize solutions in this way is often proposed to derive from low-fidelity copying of witnessed behaviors, suboptimal social learning heuristics, or a lack of relevant sociocognitive adaptations. However, humans can also be markedly inflexible in their behaviors, perseverating with, or becoming fixated on, outdated or inappropriate responses. Humans show differential patterns of flexibility as a function of cognitive load, exhibiting difficulties with inhibiting suboptimal behaviors when there are high demands on working memory. We present a series of studies on captive chimpanzees indicating that behavioral conservatism in apes may be underlain by similar constraints: Chimpanzees showed relatively little conservatism when behavioral optimization involved the inhibition of a well-established but simple solution, or the addition of a simple modification to a well-established but complex solution. In contrast, when behavioral optimization involved the inhibition of a well-established but complex solution, chimpanzees showed evidence of conservatism. We propose that conservatism is linked to behavioral complexity, potentially mediated by cognitive resource availability, and may be an important factor in the evolution of cumulative culture.
PID control system analysis, design, and technology
Designing and tuning a proportional-integral-derivative (PID) controller appears to be conceptually intuitive, but can be hard in practice if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. Usually, initial designs obtained by all means need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired. This stimulates the development of "intelligent" tools that can assist engineers in achieving the best overall PID control for the entire operating envelope. This development has further led to the incorporation of some advanced tuning algorithms into PID hardware modules. Corresponding to these developments, this paper presents a modern overview of functionalities and tuning methods in patents, software packages and commercial hardware modules. It is seen that many PID variants have been developed in order to improve transient performance, but standardising and modularising PID control are desired, although challenging. The inclusion of system identification and "intelligent" techniques in software-based PID systems helps automate the entire design and tuning process to a useful degree. This should also assist future development of "plug-and-play" PID controllers that are widely applicable, can be set up easily and operate optimally for enhanced productivity, improved quality and reduced maintenance requirements.
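For readers unfamiliar with the controller being surveyed, the following is a minimal discrete-time PID sketch in its textbook positional form, with illustrative gains and a crude first-order plant standing in for a real process; it is not taken from the paper or from any of the commercial modules it reviews.

```python
# Minimal textbook PID controller; gains, time step and plant are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return u = Kp*e + Ki*integral(e) + Kd*de/dt for the current error."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a crude first-order plant towards a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=y)
    y += (u - y) * 0.01          # toy first-order plant model (assumed)
```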
The SOS Platform: Designing, Tuning and Statistically Benchmarking Optimisation Algorithms
We present Stochastic Optimisation Software (SOS), a Java platform facilitating the algorithmic design process and the evaluation of metaheuristic optimisation algorithms. SOS reduces the burden of coding miscellaneous methods for dealing with several bothersome and time-demanding tasks, such as parameter tuning, implementation of comparison algorithms and testbed problems, collecting and processing data to display results, and measuring algorithmic overhead. SOS provides numerous off-the-shelf methods, including: (1) customised implementations of statistical tests, such as the Wilcoxon rank-sum test and the Holm-Bonferroni procedure, for comparing the performances of optimisation algorithms and automatically generating result tables in PDF and LaTeX formats; (2) the implementation of an original advanced statistical routine for accurately comparing pairs of stochastic optimisation algorithms; (3) the implementation of a novel testbed suite for continuous optimisation, derived from the IEEE CEC 2014 benchmark, allowing for controlled activation of the rotation on each testbed function. Moreover, we briefly comment on the current state of the literature in stochastic optimisation and highlight similarities shared by modern metaheuristics inspired by nature. We argue that the vast majority of these algorithms are simply a reformulation of the same methods and that metaheuristics for optimisation should simply be treated as stochastic processes, with less emphasis on the inspiring metaphor behind them.
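As an illustration of the kind of statistical comparison SOS automates (not the SOS Java API itself), the sketch below runs a Wilcoxon rank-sum test per benchmark function and applies the Holm-Bonferroni step-down correction; the result samples are randomly generated placeholders, not data from the paper.

```python
# Hedged sketch of per-function Wilcoxon rank-sum tests with Holm-Bonferroni correction.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder final fitness values of two stochastic optimisers, 30 runs on 3 test functions.
algo_a = [rng.normal(1.0, 0.1, 30), rng.normal(0.5, 0.1, 30), rng.normal(2.0, 0.2, 30)]
algo_b = [rng.normal(1.1, 0.1, 30), rng.normal(0.5, 0.1, 30), rng.normal(1.8, 0.2, 30)]

p_values = [ranksums(a, b).pvalue for a, b in zip(algo_a, algo_b)]

def holm_bonferroni(pvals, alpha=0.05):
    """Return a reject/accept decision per hypothesis under the Holm step-down procedure."""
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            reject[idx] = True
        else:
            break                      # Holm stops at the first non-rejection
    return reject

print(holm_bonferroni(p_values))
```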
Selection of earthquake ground motions for multiple objectives using genetic algorithms
Existing earthquake ground motion (GM) selection methods for the seismic assessment of structural systems focus on spectral compatibility in terms of either only central values or both central values and variability. In this way, important selection criteria related to the seismology of the region, local soil conditions, strong GM intensity and duration, as well as the magnitude of scale factors, are considered only indirectly, by setting them as constraints in the pre-processing phase in the form of permissible ranges. In this study, a novel framework for the optimum selection of earthquake GMs is presented, in which the aforementioned criteria are treated explicitly as selection objectives. The framework is based on the principles of multi-objective optimization, addressed with the aid of the Weighted Sum Method, which supports decision making in both the pre-processing and post-processing phases of the GM selection procedure. The resulting equivalent single-objective optimization problem is solved by a mixed-integer Genetic Algorithm, and the effects of its parameters on the efficiency of the selection procedure are investigated. Application of the proposed framework shows that it is able to identify GM sets that not only provide excellent spectral matching but also account more explicitly, and simultaneously, for a set of additional criteria.
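The following sketch illustrates the weighted-sum scalarisation idea only: several toy selection objectives are combined into a single score that a single-objective optimiser could minimise. The objectives, weights and the random-restart search standing in for the paper's mixed-integer Genetic Algorithm are all illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of weighted-sum scalarisation for ground-motion (GM) set selection.
import numpy as np

rng = np.random.default_rng(1)
n_records, set_size = 50, 7
target_spectrum = np.linspace(1.0, 0.2, 20)               # toy target response spectrum
spectra = target_spectrum * rng.uniform(0.5, 1.5, (n_records, 20))   # toy record spectra
scale_factors = rng.uniform(0.5, 4.0, n_records)           # toy amplitude scale factors

def weighted_sum_score(selection, w_mean=0.5, w_std=0.3, w_scale=0.2):
    """Scalarise spectral-mean misfit, spectral variability and scale-factor size."""
    sel = spectra[selection]
    mean_misfit = np.mean(np.abs(sel.mean(axis=0) - target_spectrum))
    variability = np.mean(sel.std(axis=0))
    scale_penalty = np.mean(np.abs(np.log(scale_factors[selection])))
    return w_mean * mean_misfit + w_std * variability + w_scale * scale_penalty

# Placeholder search (random restarts) standing in for the mixed-integer GA.
best_set = min((rng.choice(n_records, set_size, replace=False) for _ in range(2000)),
               key=weighted_sum_score)
```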
On optimal designs for clinical trials: An updated review
Optimization of clinical trial designs can help investigators achieve higher quality results for the given resource constraints. The present paper gives an overview of optimal designs for various important problems that arise in different stages of clinical drug development, including phase I dose-toxicity studies; phase I/II studies that consider early efficacy and toxicity outcomes simultaneously; phase II dose-response studies driven by multiple comparisons (MCP), modeling techniques (Mod), or their combination (MCP-Mod); phase III randomized controlled multi-arm multi-objective clinical trials to test differences among several treatment groups; and population pharmacokinetics-pharmacodynamics experiments. We find that the modern literature is very rich with optimal design methodologies that can be utilized by clinical researchers to improve the efficiency of drug development.
Investment decisions and portfolios classification based on robust methods of estimation
In the process of asset selection and allocation to the investment portfolio, the most important issue is the accurate evaluation of the volatility of the rate of return. In order to achieve stable and accurate parameter estimates for contaminated multivariate normal distributions, robust estimators are required. In this paper we use selected robust estimators to construct optimal investment portfolios. The main goal of this paper is a comparative analysis of the generated investment portfolios with respect to the chosen robust estimation methods. Keywords: investment decisions, robust estimators, portfolio classification, cluster analysis.
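As a hedged illustration of the approach (the specific robust estimators used in the paper are not reproduced here), the sketch below contrasts a classical covariance estimate of simulated, partly contaminated returns with a robust Minimum Covariance Determinant estimate from scikit-learn; the robust estimate could then feed a mean-variance or similar portfolio selection step.

```python
# Hedged sketch: robust vs. classical covariance estimation on contaminated returns.
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(2)
# Simulated daily returns for three assets (illustrative assumption).
returns = rng.multivariate_normal([0.001, 0.0005, 0.0008],
                                  np.diag([1e-4, 2e-4, 1.5e-4]), size=500)
returns[:25] += rng.normal(0.0, 0.05, (25, 3))    # contaminate 5% of observations

classical = EmpiricalCovariance().fit(returns).covariance_
robust = MinCovDet(random_state=0).fit(returns).covariance_
# The robust estimate is far less inflated by the contaminated rows and would
# replace the classical covariance in the subsequent portfolio selection step.
```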
An empirical evaluation of High-Level Synthesis languages and tools for database acceleration
High Level Synthesis (HLS) languages and tools are emerging as the most promising technique to make FPGAs more accessible to software developers. Nevertheless, picking the most suitable HLS for a certain class of algorithms depends on requirements such as area and throughput, as well as on programmer experience. In this paper, we explore the different trade-offs present when using a representative set of HLS tools in the context of Database Management Systems (DBMS) acceleration. More specifically, we conduct an empirical analysis of four representative frameworks (Bluespec SystemVerilog, Altera OpenCL, LegUp and Chisel) that we utilize to accelerate commonly used database algorithms such as sorting, the median operator, and hash joins. Through our implementation experience and empirical results for database acceleration, we conclude that the selection of the most suitable HLS depends on a set of orthogonal characteristics, which we highlight for each HLS framework.