    The Effect of the 2018 Tariffs on European Wine

    This paper estimates a vector autoregression model for average wine prices across U.S. cities to assess the impact of the tariff changes applied to the U.K., France, Germany, and Spain after they were enacted in October 2019. It uses impulse response functions to gauge how a one-unit impulse in the per-liter duty rate affects the average wine price in the U.S. and the quantity of wine shipped by various exporters to the U.S. It finds that a one-unit impulse in the duty rate levied against the bloc of countries covered by the tariff produces a fall in the quantity of wine imported from those countries, and that wine from the bloc is substituted with wine from the top three exporters not included in the bloc.
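
    As a rough sketch of the approach described above, the snippet below fits a small vector autoregression and traces impulse responses with Python's statsmodels; the synthetic data, variable names (duty_rate, avg_price, import_qty), and lag settings are assumptions for illustration, not the paper's data or specification.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        # Toy monthly data standing in for the real series: per-liter duty rate,
        # average U.S. wine price, and import quantity from the tariffed countries.
        rng = np.random.default_rng(0)
        n = 120
        duty = np.cumsum(rng.normal(0, 0.1, n))
        price = 10 + 0.5 * duty + rng.normal(0, 0.2, n)
        qty = 100 - 3.0 * duty + rng.normal(0, 1.0, n)
        endog = pd.DataFrame({"duty_rate": duty, "avg_price": price, "import_qty": qty})

        # Fit the VAR with the lag order chosen by AIC, then trace the response of
        # prices and import quantities to a one-unit impulse in the duty rate.
        results = VAR(endog).fit(maxlags=6, ic="aic")
        irf = results.irf(24)
        irf.plot(impulse="duty_rate")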

    An Experimental Review on Deep Learning Architectures for Time Series Forecasting

    In recent years, deep learning techniques have outperformed traditional models in many machine learning tasks. Deep neural networks have successfully been applied to address time series forecasting problems, which is a very important topic in data mining. They have proved to be an effective solution given their capacity to automatically learn the temporal dependencies present in time series. However, selecting the most suitable type of deep neural network and its parametrization is a complex task that requires considerable expertise. Therefore, there is a need for deeper studies on the suitability of all existing architectures for different forecasting tasks. In this work, we address two main challenges: a comprehensive review of the latest works using deep learning for time series forecasting, and an experimental study comparing the performance of the most popular architectures. The comparison involves a thorough analysis of seven types of deep learning models in terms of accuracy and efficiency. We evaluate the rankings and distribution of results obtained with the proposed models under many different architecture configurations and training hyperparameters. The datasets used comprise more than 50,000 time series divided into 12 different forecasting problems. By training more than 38,000 models on these data, we provide the most extensive deep learning study for time series forecasting. Among all studied models, the results show that long short-term memory (LSTM) and convolutional neural networks (CNN) are the best alternatives, with LSTMs obtaining the most accurate forecasts. CNNs achieve comparable performance with less variability of results under different parameter configurations, while also being more efficient.
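
    For readers unfamiliar with the two architecture families the review singles out, the sketch below builds a minimal LSTM forecaster and a minimal 1D-CNN forecaster in Keras on a toy sliding-window task; the layer sizes, window length, and synthetic series are illustrative assumptions rather than the configurations benchmarked in the study.

        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        def make_windows(series, lookback=24, horizon=1):
            # Turn a 1-D series into (samples, lookback, 1) inputs and horizon-step targets.
            X, y = [], []
            for i in range(len(series) - lookback - horizon + 1):
                X.append(series[i:i + lookback])
                y.append(series[i + lookback:i + lookback + horizon])
            return np.array(X)[..., None], np.array(y)

        def build_lstm(lookback=24, horizon=1):
            return keras.Sequential([keras.Input(shape=(lookback, 1)),
                                     layers.LSTM(64),
                                     layers.Dense(horizon)])

        def build_cnn(lookback=24, horizon=1):
            return keras.Sequential([keras.Input(shape=(lookback, 1)),
                                     layers.Conv1D(64, kernel_size=3, activation="relu"),
                                     layers.Conv1D(64, kernel_size=3, activation="relu"),
                                     layers.GlobalAveragePooling1D(),
                                     layers.Dense(horizon)])

        series = np.sin(np.linspace(0, 100, 2000))  # toy series standing in for real data
        X, y = make_windows(series)
        for name, model in [("lstm", build_lstm()), ("cnn", build_cnn())]:
            model.compile(optimizer="adam", loss="mse")
            model.fit(X, y, epochs=5, batch_size=64, verbose=0)
            print(name, "MSE:", model.evaluate(X, y, verbose=0))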

    Defining probability-based rail station catchments for demand modelling

    The aggregate models commonly used in the UK to estimate demand for new local rail stations require the station catchment to be defined first, so that inputs into the model, such as the population from which demand will be generated, can be specified. The methods typically used to define the catchment implicitly assume that station choice is a deterministic process and that stations exist in isolation from each other. However, studies show that pre-defined catchments account for only 50-60 percent of observed trips, choice of station is not homogeneous within zones, catchments overlap, and catchments vary by access mode and station type. This paper describes early work to implement an alternative probability-based approach through the development of a station choice prediction model. To derive realistic station access journey explanatory variables, a routable multi-modal network, incorporating data from OpenStreetMap, the Traveline National Data Set and the National Rail timetable, was built using OpenTripPlanner and queried using an API wrapper developed in R. Results from a series of multinomial logit models are presented, and a method for generating probabilistic catchments from the estimated parameter values is described. An example probabilistic catchment is found to provide a realistic representation of the observed catchment and to perform better than deterministic catchments.
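
    The sketch below illustrates the general idea of a probabilistic catchment: given multinomial-logit utilities over candidate stations, each origin contributes a choice probability to every reachable station instead of being assigned to a single one. The station names, explanatory variables, and coefficient values are hypothetical, not estimates from this work.

        import numpy as np
        import pandas as pd

        # Hypothetical candidate stations reachable from a single origin zone.
        candidates = pd.DataFrame({
            "station": ["A", "B", "C"],
            "access_time_min": [8.0, 15.0, 22.0],   # e.g. from an OpenTripPlanner query
            "trains_per_hour": [4, 6, 2],
        })

        # Illustrative multinomial-logit coefficients: access time deters, frequency attracts.
        beta_time, beta_freq = -0.12, 0.25

        utility = beta_time * candidates["access_time_min"] + beta_freq * candidates["trains_per_hour"]
        candidates["choice_prob"] = np.exp(utility) / np.exp(utility).sum()
        print(candidates)

        # Summing these probabilities over every zone that can reach a station gives its
        # probabilistic catchment population, instead of an all-or-nothing boundary.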

    Determinants of power spreads in electricity futures markets: A multinational analysis. ESRI WP580, December 2017

    The growth in variable renewable energy (vRES) and the need for flexibility in power systems go hand in hand. We study how vRES and other factors, namely the price of substitute fuels, power price volatility, structural breaks, and seasonality, impact the hedgeable power spreads (profit margins) of the main dispatchable flexibility providers in current power systems: gas and coal power plants. We focus in particular on power spreads that are hedgeable in futures markets in three European electricity markets (Germany, UK, Nordic) over the period 2009-2016. We find that market participants who use power spreads need to pay attention to fundamental supply and demand changes in the underlying markets (electricity, CO2, and coal/gas). Specifically, we show that the total vRES capacity installed during 2009-2016 is associated with a drop of 3-22% in the hedgeable profit margins of coal and especially gas power generators. While this shows that the expansion of vRES has a significant negative effect on the hedgeable profitability of dispatchable, flexible power generators, it also suggests that the overall decline in power spreads is further driven by the price dynamics in the CO2 and fuel markets during the sample period. We also find significant persistence (and asymmetric effects) in power-spread volatility using a univariate TGARCH model.
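
    As a rough illustration of the volatility modelling mentioned above, the sketch below fits an asymmetric (threshold) GARCH specification with Python's arch package; the synthetic spread series, AR(1) mean equation, and order choices are assumptions rather than the paper's exact model.

        import numpy as np
        import pandas as pd
        from arch import arch_model

        # Toy daily series standing in for changes in a hedgeable power spread.
        rng = np.random.default_rng(1)
        dspread = pd.Series(rng.standard_t(df=5, size=1500) * 0.5)

        # p=1, o=1, q=1 with power=1.0 gives a TARCH-type volatility equation, so negative
        # and positive shocks are allowed to move conditional volatility asymmetrically.
        model = arch_model(dspread, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1, power=1.0)
        res = model.fit(disp="off")
        print(res.summary())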