7,026 research outputs found
Why is Central Paris losing jobs?
Brueckner et al. (1999) explained urban population patterns through the distribution of amenities. Building on their model, this paper introduces a productive sector and sheds new light on employment suburbanization. Given how amenities are valued, the 'people follow jobs' versus 'jobs follow people' question is discussed for CBD and high-brow service firms. If these firms value natural amenities, they may leave the historical centre. A strong constraint against such a move is that a firm wants to keep its employees, who may all live around the centre. Despite conventional centripetal forces, firms can nevertheless settle in the suburbs before the households do; people may then follow the firm to the suburbs.
Is Central Paris still that rich?
From 1975 to 1999, employment in the Paris metropolitan area became increasingly decentralized. This deconcentration is roughly half dispersed and half clustered. In parallel with the sprawl of jobs, the growth of a service-oriented economy has led to an increase in sectoral concentration. Yet there is no clear evidence of vertical spatial disintegration, because places tend to diversify at the same time. An explanation may be that the sprawl relies both on endogenous job creation and on job relocations: relocations tend to increase the specialisation of the clusters, whereas endogenous growth is more diverse and residential.
Extreme Value Theory for Tail-Related Risk Measures
Many fields of modern science and engineering have to deal with events that are rare but have significant consequences. Extreme value theory is considered to provide the basis for the statistical modelling of such extremes. The potential of extreme value theory applied to financial problems has only recently been recognized. This paper introduces the fundamentals of extreme value theory as well as practical aspects of estimating and assessing statistical models for tail-related risk measures.
Keywords: Extreme Value Theory; Generalized Pareto Distribution; Generalized Extreme Value Distribution; Quantile Estimation; Risk Measures; Maximum Likelihood Estimation; Profile Likelihood Confidence Intervals.
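As a hedged illustration of the peaks-over-threshold approach the abstract refers to, the sketch below fits a Generalized Pareto Distribution to exceedances of a synthetic loss sample and derives tail quantile (VaR) and expected shortfall estimates; the loss data, threshold choice and confidence level are hypothetical, not taken from the paper.

# Illustrative sketch: peaks-over-threshold estimation of tail-related risk
# measures with a Generalized Pareto Distribution, using scipy.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
losses = rng.standard_t(df=4, size=5000)      # hypothetical heavy-tailed loss sample
u = np.quantile(losses, 0.95)                 # threshold: 95th percentile of losses
exceedances = losses[losses > u] - u          # peaks over the threshold

# Maximum-likelihood fit of the GPD to the exceedances (location fixed at 0)
xi, _, beta = genpareto.fit(exceedances, floc=0)

n, n_u = len(losses), len(exceedances)
q = 0.99                                      # confidence level for VaR / ES

# Tail quantile (VaR) and expected shortfall implied by the fitted GPD tail
var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
es_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)   # valid for xi < 1

print(f"xi={xi:.3f}, beta={beta:.3f}, VaR_{q}={var_q:.3f}, ES_{q}={es_q:.3f}")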
Bargaining and Collusion in a Regulatory Model
We consider the regulation of a monopolistic market when the principal delegates two tasks to a regulatory agency: supervising the firm's unknown costs and arranging a pricing mechanism. As usual, the agency may have an incentive to hide information from the principal in order to share the informational rent with the firm. The novelty of this paper is that both the regulatory mechanism and the side contracting between the agency and the firm are modelled as bargaining processes. This negotiation between the regulator and the monopoly radically changes the extra profit from private information, which is now equal to the standard informational rent weighted by the agency's bargaining power. This in turn affects the collusive stage; in particular, the firm has the greatest incentive to collude when facing an agency with the same bargaining power. We then focus on the optimal organizational responses to the possibility of collusion. In our setting, where the incompleteness of contracts prevents the design of a screening mechanism across agency types, so that Tirole's equivalence principle does not apply, we prove that the stronger the agency in the negotiation process, the greater the principal's incentive to tolerate collusion in equilibrium.
Keywords: regulation; bargaining; collusion.
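To fix ideas, here is a schematic rendering of the claim that the extra profit from private information equals the standard informational rent weighted by the agency's bargaining power. The notation is ours and assumes a standard two-type adverse-selection benchmark; none of these symbols are taken from the paper.

\[
\Pi^{\text{extra}} \;=\; \gamma \,\underbrace{\Delta\theta\, q(\bar\theta)}_{\text{standard informational rent}}, \qquad \gamma \in [0,1],
\]

where \(\gamma\) denotes the agency's bargaining power in the negotiation, \(\Delta\theta\) the cost differential between efficient and inefficient types, and \(q(\bar\theta)\) the output of the inefficient type. For \(\gamma = 1\) the standard rent is recovered; for \(\gamma = 0\) private information yields no extra profit.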
Heuristic Optimisation in Financial Modelling
There is a large number of optimisation problems in theoretical and applied finance that are difficult to solve because they exhibit multiple local optima or are not 'well-behaved' in other ways (e.g., discontinuities in the objective function). One way to deal with such problems is to adjust and simplify them, for instance by dropping constraints, until they can be solved with standard numerical methods. This paper argues that an alternative approach is the application of optimisation heuristics such as Simulated Annealing or Genetic Algorithms. These methods have been shown to be capable of handling non-convex optimisation problems with all kinds of constraints. To motivate the use of such techniques in finance, the paper presents several actual problems where classical methods fail. Next, several well-known heuristic techniques that may be deployed in such cases are described. Since such presentations are quite general, the paper describes in some detail how a particular problem, portfolio selection, can be tackled by a particular heuristic method, Threshold Accepting. Finally, the stochastics of the solutions obtained from heuristics are discussed. It is shown, again for the portfolio selection example, how this random character of the solutions can be exploited to inform the distribution of computations.
Keywords: Optimisation Heuristics; Financial Optimisation; Portfolio Optimisation.
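As a concrete illustration of the kind of heuristic discussed here, the following minimal Threshold Accepting sketch is applied to a deliberately multimodal test function; the function, the threshold sequence and all parameter values are our own choices, not taken from the paper.

# Illustrative sketch: a bare-bones Threshold Accepting loop on a
# non-convex test function with many local minima.
import numpy as np

def objective(x):
    # Rastrigin-style function: smooth but highly multimodal
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def threshold_accepting(obj, x0, n_rounds=10, n_steps=2000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), obj(x0)
    x_best, f_best = x, fx
    # Decreasing thresholds: early on, moderately worse moves are accepted, which
    # lets the search climb out of local minima; later only near-improvements pass.
    # (In practice thresholds are often calibrated from the empirical distribution
    # of local changes; a fixed sequence keeps the sketch simple.)
    for tau in np.linspace(25.0, 0.0, n_rounds):
        for _ in range(n_steps):
            candidate = x + rng.uniform(-step, step, size=x.shape)  # random local neighbour
            f_new = obj(candidate)
            if f_new - fx <= tau:                                   # accept unless "too much" worse
                x, fx = candidate, f_new
                if fx < f_best:
                    x_best, f_best = x, fx
    return x_best, f_best

x_best, f_best = threshold_accepting(objective, x0=np.full(5, 3.0))
print(f"best value found: {f_best:.4f}")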
A Heuristic Approach to Portfolio Optimization
Constraints on downside risk, measured by shortfall probability, expected shortfall, semi-variance, etc., lead to optimal asset allocations which differ from the mean-variance optimum. The resulting optimization problem can become quite complex, as it exhibits multiple local extrema and discontinuities, in particular if we also introduce constraints restricting the trading variables to integers, constraints on the holding size of assets, or constraints on the maximum number of different assets in the portfolio. In such situations classical optimization methods fail to work efficiently, and heuristic optimization techniques can be the only way out. The paper shows how a particular optimization heuristic, called threshold accepting, can be successfully used to solve complex portfolio choice problems.
Keywords: Portfolio Optimization; Downside Risk Measures; Heuristic Optimization; Threshold Accepting.
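To make the portfolio setting concrete, the sketch below (our construction, not the paper's code; asset data, lot sizes and constraint values are hypothetical) shows a downside-risk objective, expected shortfall with integer lots and a cap on the number of assets, together with the discrete neighbour move and decreasing-threshold acceptance rule such a search would use.

# Illustrative sketch: downside-risk portfolio objective with integer lots,
# a cardinality cap, and a short threshold-accepting run.
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_scen = 20, 1000
returns = rng.normal(0.0005, 0.01, size=(n_scen, n_assets))   # hypothetical scenario returns
prices = rng.uniform(20, 100, size=n_assets)                  # hypothetical asset prices
budget, max_assets, alpha = 100_000, 8, 0.95

def expected_shortfall(weights):
    # average loss in the worst (1 - alpha) fraction of scenarios
    pnl = returns @ weights
    cutoff = np.quantile(pnl, 1 - alpha)
    return -pnl[pnl <= cutoff].mean()

def objective(lots):
    # lots: integer number of lots per asset; infeasible portfolios are penalised
    value = lots * prices
    if value.sum() > budget or np.count_nonzero(lots) > max_assets:
        return np.inf
    return expected_shortfall(value / budget)

def neighbour(lots):
    # discrete move: sell one lot of a held asset, buy one lot of another
    new = lots.copy()
    sell = rng.choice(np.flatnonzero(new > 0))
    buy = rng.integers(n_assets)
    new[sell] -= 1
    new[buy] += 1
    return new

# start from an equal-budget feasible portfolio, then apply decreasing thresholds
lots = np.zeros(n_assets, dtype=int)
lots[:max_assets] = ((budget / max_assets) // prices[:max_assets]).astype(int)
f_cur = objective(lots)
best_lots, f_best = lots.copy(), f_cur
for tau in np.linspace(0.002, 0.0, 2000):
    cand = neighbour(lots)
    f = objective(cand)
    if f - f_cur <= tau:                 # accept slightly worse moves early on
        lots, f_cur = cand, f
        if f < f_best:
            best_lots, f_best = cand, f
print(f"expected shortfall of best portfolio found: {f_best:.4%}")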
A note on ‘good starting values’ in numerical optimisation
Many optimisation problems in finance and economics have multiple local optima or discontinuities in their objective functions. In such cases it is often stressed that 'good starting points are important'. We look into a particular example: calibrating a yield curve model. We find that while the 'good starting values' suggested in the literature produce parameters that are indeed 'good', a simple best-of-n-restarts strategy with random starting points gives results that are never worse and in many cases better.
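A minimal sketch of the best-of-n-restarts idea, assuming a Nelson-Siegel yield curve and synthetic observed yields; the model choice, parameter ranges and number of restarts are illustrative, not the paper's.

# Illustrative sketch: best-of-n restarts with random starting points for
# calibrating a Nelson-Siegel yield curve.
import numpy as np
from scipy.optimize import minimize

def nelson_siegel(params, tau):
    b0, b1, b2, lam = params
    x = tau / lam
    return b0 + b1 * (1 - np.exp(-x)) / x + b2 * ((1 - np.exp(-x)) / x - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
true_params = np.array([0.04, -0.02, 0.01, 2.0])              # hypothetical 'true' curve
observed = nelson_siegel(true_params, maturities) \
           + 1e-4 * np.random.default_rng(0).standard_normal(len(maturities))

def loss(params):
    if params[3] <= 0:                                         # keep the decay parameter positive
        return 1e6
    return np.sum((nelson_siegel(params, maturities) - observed) ** 2)

rng = np.random.default_rng(1)
best_fit, best_loss = None, np.inf
for _ in range(50):                                            # best-of-n restarts
    start = np.concatenate([rng.uniform(-0.05, 0.10, 3), rng.uniform(0.1, 5.0, 1)])
    res = minimize(loss, start, method="Nelder-Mead")
    if res.fun < best_loss:
        best_fit, best_loss = res.x, res.fun
print("best parameters:", np.round(best_fit, 4), "loss:", best_loss)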
- …
