
    A Review of the Monitoring of Market Power: The Possible Roles of TSOs in Monitoring for Market Power Issues in Congested Transmission Systems

    The paper surveys the literature and publicly available information on market power monitoring in electricity wholesale markets. After briefly reviewing definitions, strategies, and methods of mitigating market power, we examine the various methods of detecting market power that have been employed by academics and market monitors/regulators. These techniques include structural and behavioural indices and analysis, as well as various simulation approaches. The applications of these tools range from spot market mitigation and congestion management through to long-term market design assessment and merger decisions. Various market-power monitoring units already track market behaviour and produce indices. Our survey shows that these units collect a large amount of data from various market participants, and we identify the crucial role of the transmission system operators, with their access to dispatch and system information. Easily accessible and comprehensive data supports effective market power monitoring and facilitates market design evaluation. The discretion required for effective market monitoring is facilitated by institutional independence.
    Keywords: electricity, liberalisation, market power, regulation
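The structural indices this survey refers to can be illustrated with a minimal sketch. The Herfindahl-Hirschman Index (HHI) is one of the standard concentration measures tracked by market-power monitoring units; the generator shares below are hypothetical, not taken from the paper:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: the sum of squared percentage market
    shares. By convention, values above ~2500 suggest high concentration."""
    return sum(s ** 2 for s in market_shares)

# Hypothetical generation shares (percent of total output) in one trading hour
shares = [40, 30, 20, 10]
print(hhi(shares))  # 40^2 + 30^2 + 20^2 + 10^2 = 3000
```

A monitor would compute this per hour or per constrained zone; the paper's point is that the TSO's dispatch data makes such calculations feasible at fine granularity.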

    Longitudinal Analysis of Android Ad Library Permissions

    This paper investigates changes over time in the behavior of Android ad libraries. Taking a sample of 100,000 apps, we extract and classify the ad libraries. By considering the release dates of the applications that use a specific ad library version, we estimate the release date for the library, and thus build a chronological map of the permissions used by various ad libraries over time. We find that the use of most permissions has increased over the last several years, and that more libraries are able to use permissions that pose particular risks to user privacy and security.
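The dating heuristic described in the abstract can be sketched as follows: a library version's release date is estimated as the earliest release date of any app observed bundling that version. The app names, version strings, and dates below are hypothetical:

```python
from datetime import date

# Hypothetical (app, ad-library version, app release date) observations
observations = [
    ("app_a", "adlib-2.0", date(2012, 5, 1)),
    ("app_b", "adlib-2.0", date(2012, 3, 15)),
    ("app_c", "adlib-3.0", date(2013, 1, 10)),
]

def estimate_release_dates(obs):
    """Estimate each library version's release date as the earliest
    release date of any app observed to include that version."""
    estimates = {}
    for _, version, released in obs:
        if version not in estimates or released < estimates[version]:
            estimates[version] = released
    return estimates

print(estimate_release_dates(observations))
```

Sorting the resulting estimates by date yields the chronological map onto which per-version permission sets can then be plotted.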

    Detecting and Forecasting Economic Regimes in Multi-Agent Automated Exchanges

    We show how an autonomous agent can use observable market conditions to characterize the microeconomic situation of the market and predict future market trends. The agent can use this information to make both tactical decisions, such as pricing, and strategic decisions, such as product mix and production planning. We develop methods to learn dominant market conditions, such as over-supply or scarcity, from historical data using Gaussian mixture models to construct price density functions. We discuss how this model can be combined with real-time observable information to identify the current dominant market condition and to forecast market changes over a planning horizon. We forecast market changes via both a Markov correction-prediction process and an exponential smoother. Empirical analysis shows that the exponential smoother yields more accurate predictions for the current and the next day (supporting tactical decisions), while the Markov correction-prediction process is better for longer-term predictions (supporting strategic decisions). Our approach offers more flexibility than traditional regression-based approaches, since it does not assume a fixed functional relationship between dependent and independent variables. We validate our methods by presenting experimental results in a case study, the Trading Agent Competition for Supply Chain Management.
    Keywords: dynamic pricing; machine learning; market forecasting; trading agents
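The two forecasting devices compared in the abstract can be sketched roughly as below. This is not the paper's implementation: the smoothing constant, the two regime labels, and the transition counts are all hypothetical placeholders:

```python
def exp_smooth(series, alpha=0.5):
    """Simple exponential smoothing; the final smoothed level serves as
    the one-step-ahead forecast (the short-horizon, tactical device)."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def next_state_dist(counts, state):
    """Row-normalise a regime transition-count matrix to get a predicted
    distribution over market regimes one step ahead (the Markov device,
    without the paper's correction step)."""
    row = counts[state]
    total = sum(row)
    return [c / total for c in row]

# Hypothetical daily mean prices and a 2-regime (over-supply / scarcity)
# transition count matrix estimated from labelled history
prices = [10.0, 12.0]
counts = [[8, 2],   # from over-supply: 8 stays, 2 moves to scarcity
          [3, 7]]   # from scarcity:    3 move back, 7 stay
print(exp_smooth(prices))           # one-step price forecast
print(next_state_dist(counts, 0))   # regime distribution one step ahead
```

Iterating the Markov step extends the regime forecast over a multi-day planning horizon, which matches the abstract's finding that it suits longer-term, strategic decisions.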

    Volatility Regimes in Macroeconomic Time Series: The Case of the Visegrad Group

    The authors analyze several monthly and quarterly macroeconomic time series for the Czech Republic, Poland, Hungary, and Slovakia. These countries embarked on an economic transition in the early 1990s which ultimately led to their membership in the European Union, with Slovakia joining the euro area in 2009. It is natural to assume that changes of such a magnitude should also influence the major macroeconomic indicators. The authors explore the characteristics of these series by endogenously identifying their volatility regimes. In the course of their analysis, they show the difficulties in the handling of unit roots as a necessary step preceding volatility modeling. The final set of breaks identified shows very few changes near the beginning of the series, which corresponds to the transition period.
    Keywords: macroeconomic fluctuations, economic transition, structural breaks, volatility regimes, cumulative sum of squares, unit root testing
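The cumulative-sum-of-squares approach named in the keywords can be illustrated with the centred statistic of Inclan and Tiao, D_k = C_k/C_T - k/T, where C_k is the running sum of squared observations; the candidate variance break sits where |D_k| peaks. This is a generic sketch of that statistic, not the authors' code:

```python
def cusum_of_squares(x):
    """Centred cumulative sum of squares D_k = C_k/C_T - k/T.
    Returns the 1-based index k maximising |D_k| (candidate variance
    break) and the value of D_k there."""
    squares = [v * v for v in x]
    total = sum(squares)
    n = len(x)
    d, running = [], 0.0
    for k, s in enumerate(squares, start=1):
        running += s
        d.append(running / total - k / n)
    best = max(range(n), key=lambda i: abs(d[i]))
    return best + 1, d[best]

# Hypothetical series: low volatility for 50 points, then high volatility
series = [0.1] * 50 + [1.0] * 50
print(cusum_of_squares(series))
```

In practice the statistic is compared against critical values and applied iteratively to locate multiple regimes; as the abstract stresses, unit roots must be dealt with first, since a stochastic trend contaminates the squared series.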

    Long memory or shifting means? A new approach and application to realised volatility

    It is now recognised that long memory and structural change can be confused, because the statistical properties of time series of the lengths typical of financial and econometric series are similar under both models. We propose a new set of methods aimed at distinguishing between long memory and structural change. The approach, which utilises computationally efficient methods based upon Atheoretical Regression Trees (ART), establishes through simulation the bivariate distribution of the fractional integration parameter d with regime length for simulated fractionally integrated series. This bivariate distribution is then compared with the data for the time series. We also combine ART with Beran's established goodness-of-fit test for long memory series. We apply these methods to the realised volatility series of 16 stocks in the Dow Jones Industrial Average. We show that in these series the value of the fractional integration parameter is not constant over time. The mathematical consequence is that the definition of self-similarity in terms of the Hurst exponent H is violated. We present evidence that these series have structural breaks.
    Keywords: long-range dependence; strong dependence; global dependence; Hurst phenomena
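The core move in an atheoretical regression tree is to regress the series on its observation index alone, so each split is simply a candidate break in the mean. A minimal one-split sketch (not the authors' implementation, which recurses and uses information criteria to choose the number of splits):

```python
def best_split(series, min_seg=2):
    """One step of an atheoretical regression tree: choose the split
    point k that minimises the total within-segment sum of squared
    deviations from the segment means. Returns the 1-based length of
    the first segment."""
    def sse(segment):
        mean = sum(segment) / len(segment)
        return sum((v - mean) ** 2 for v in segment)

    n = len(series)
    best_k, best_cost = None, float("inf")
    for k in range(min_seg, n - min_seg + 1):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical series with an obvious mean shift at observation 10
print(best_split([0.0] * 10 + [5.0] * 10))
```

Applying this recursively partitions the series into regimes, whose lengths can then be compared against the simulated bivariate distribution of (d, regime length) described in the abstract.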

    Structural breaks and financial risk management

    There is ample empirical evidence of the presence of structural changes in financial time series. Structural breaks are also shown to contribute to the leptokurtosis of financial returns and to explain, at least partly, the observed persistence of volatility processes. This paper explores whether detecting and taking into account structural breaks in the volatility model can improve our Value-at-Risk (VaR) forecasts. VaR is used by banks as a standard risk measure and is accepted by regulators in setting capital requirements, which makes it an issue for the central bank guarding against systemic risk. This paper investigates daily BUX returns over the period 1995-2002. The Bai-Perron algorithm found several breaks in the mean and volatility of BUX returns. The shift in the level of the unconditional mean return around 1997-1998 is likely explained by the evolving efficiency of the market, but most of all by the halt of a strong upward trend in the preceding period. Volatility jumped to very high levels due to the Asian and Russian crises. There were longer-lasting shifts too, most likely due to increasing trading volume. When in-sample forecasts are evaluated, models with structural-break dummies outperform the alternative methods. According to the rolling-window estimation and out-of-sample forecasts, the structural-break models seem to perform slightly better. However, the results are sensitive to the evaluation criteria used and to the choice of probability level.
    Keywords: structural break tests, volatility forecasting, Value-at-Risk, backtesting
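The VaR forecast-and-backtest loop the paper evaluates can be illustrated generically. The sketch below uses plain historical simulation rather than the paper's volatility models, and the quantile indexing convention is one of several in use; the returns are synthetic:

```python
def historical_var(returns, level=0.95):
    """Historical-simulation VaR: the magnitude of the return at the
    (1 - level) empirical quantile, reported as a positive loss."""
    ordered = sorted(returns)
    idx = max(int((1 - level) * len(ordered)) - 1, 0)
    return -ordered[idx]

def exceedances(returns, var):
    """Backtest count: number of days whose realised loss exceeds the
    VaR estimate. At a 95% level, roughly 5% of days are expected."""
    return sum(1 for r in returns if -r > var)

# Hypothetical uniform grid of daily returns from -0.50 to +0.49
returns = [i / 100 for i in range(-50, 50)]
var_95 = historical_var(returns, level=0.95)
print(var_95, exceedances(returns, var_95))
```

In the paper's setting, the same exceedance count would be compared across models with and without structural-break dummies; a count far from the nominal rate is what "sensitivity to the choice of probability level" manifests as in practice.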

    Prominent Numbers, Indices and Ratios in Exchange Rate Determination and Financial Crashes: in Economists’ Models, in the Field and in the Laboratory

    The prior paper in this sequel, Pope (2009), introduced the concept of a nominalist heuristic, defined as a focus on prominent numbers, indices or ratios. In this paper the concept is used to show three things about how scientists and practitioners analyse and evaluate in order to decide. First, in constructing theories such as purchasing power parity and interest parity to predict exchange rates, and in advocating floating exchange rates, economists unwittingly employ nominalist heuristics. Second, nominalist heuristics have influenced actual exchange rates through the centuries, and this finding is replicated in the laboratory. Third, nominalist heuristics are incompatible with expected utility theory, which excludes the evaluation stage, and are also incompatible with prospect theory, which assumes that, while the evaluation stage can involve systematic mistakes, the overall decision situation is ultra-simple. It is so simple that: (a) economists and psychologists can mechanically model and identify what is a mistake, and (b) decision makers can maximise. However, contrary to prospect theory, in the typical complex situation neither (a) nor (b) holds. Assuming that (a) and (b) hold resulted in the 1998 crisis from applying the Black-Scholes formulae to forward exchange rates, and contributed to subsequent financial crises including that of 2007-2009. What is required is a fundamentally different class of models that allow for the progressive anticipated changes in knowledge ahead faced under risk and uncertainty, namely models under the umbrella of SKAT, the Stages of Knowledge Ahead Theory.
    The paper's findings support a single world currency rather than variable, unpredictable exchange rates subjected to the vagaries of how prominent numbers, ratios and indices influence events via the models of scientists and practitioners.
    Keywords: nominalism, money illusion, heuristic, unpredictability, experiment, SKAT the Stages of Knowledge Ahead Theory, prominent numbers, prominent indices, prominent ratios, transparent policy, nominal equality, historical benchmarks, complexity, decision costs, evaluation, maximisation, Black-Scholes, Lehman Brothers, sub-prime crisis, central bank swaps
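Of the parity theories the abstract names, covered interest parity is the most mechanical: the forward rate is pinned down by the spot rate and the two interest rates, F = S * (1 + i_domestic) / (1 + i_foreign). A minimal sketch, with hypothetical rates quoted as domestic currency per unit of foreign currency:

```python
def forward_rate(spot, i_domestic, i_foreign):
    """Covered interest parity: the no-arbitrage forward exchange rate
    F = S * (1 + i_domestic) / (1 + i_foreign), with S quoted as units
    of domestic currency per unit of foreign currency."""
    return spot * (1 + i_domestic) / (1 + i_foreign)

# Hypothetical: spot of 2.0, domestic rate 10%, foreign rate 0%
print(forward_rate(2.0, 0.10, 0.0))
```

The paper's argument is precisely that traders' focus on prominent spot levels and round ratios makes realised rates deviate from such parity predictions.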