2,743 research outputs found

    DALC: Distributed Automatic LSTM Customization for Fine-Grained Traffic Speed Prediction

    Full text link
    Over the past decade, several approaches have been introduced for short-term traffic prediction. However, providing fine-grained traffic prediction for large-scale transportation networks, where numerous detectors are geographically deployed to collect traffic data, remains an open issue. To address it, in this paper we formulate the problem of customizing an LSTM model for a single detector as a finite Markov decision process, and introduce an Automatic LSTM Customization (ALC) algorithm that automatically customizes an LSTM model for a single detector so as to maximize prediction accuracy while minimizing time consumption. Building on the ALC algorithm, we introduce a distributed approach, Distributed Automatic LSTM Customization (DALC), to customize an LSTM model for every detector in a large-scale transportation network. Our experiment demonstrates that DALC provides higher prediction accuracy than several approaches provided by Apache Spark MLlib.
    Comment: 12 pages, 5 figures, the 34th International Conference on Advanced Information Networking and Applications (AINA 2020), Springer
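    The abstract does not spell out the ALC algorithm, but the MDP framing it describes can be illustrated with a small sketch. Below is a minimal, hypothetical Python example: states are (hidden_units, lookback) hyperparameter configurations, actions nudge one hyperparameter at a time, and the reward trades prediction error against training time. The grid values, the evaluate() stub, and the epsilon-greedy search are all assumptions for illustration, not the paper's method; a real implementation would train and validate an LSTM on each detector's data.

```python
# Hypothetical sketch of LSTM customization as a finite MDP (not the
# paper's ALC algorithm). States: (hidden_units, lookback) configs.
# Actions: step along one hyperparameter axis. Reward: negative error
# penalized by training time.
import random

HIDDEN_UNITS = [16, 32, 64, 128]
LOOKBACKS = [3, 6, 12]

def evaluate(hidden, lookback):
    """Placeholder for training/validating an LSTM on one detector.

    A real implementation would fit an LSTM (e.g. with Keras) and
    return validation error plus wall-clock training time.
    """
    error = abs(hidden - 64) / 64 + abs(lookback - 6) / 6  # toy surface
    seconds = hidden * lookback / 100.0
    return error, seconds

def reward(state, time_weight=0.1):
    """Higher is better: low error and low time consumption."""
    error, seconds = evaluate(*state)
    return -(error + time_weight * seconds)

def neighbors(state):
    """Actions: move one step along either hyperparameter axis."""
    h, l = state
    hi, li = HIDDEN_UNITS.index(h), LOOKBACKS.index(l)
    out = []
    for di in (-1, 1):
        if 0 <= hi + di < len(HIDDEN_UNITS):
            out.append((HIDDEN_UNITS[hi + di], l))
        if 0 <= li + di < len(LOOKBACKS):
            out.append((h, LOOKBACKS[li + di]))
    return out

def customize(episodes=50, epsilon=0.2, seed=0):
    """Epsilon-greedy walk over the finite state space."""
    random.seed(seed)
    state = (HIDDEN_UNITS[0], LOOKBACKS[0])
    best, best_r = state, reward(state)
    for _ in range(episodes):
        nbrs = neighbors(state)
        if random.random() < epsilon:
            state = random.choice(nbrs)
        else:
            state = max(nbrs, key=reward)
        r = reward(state)
        if r > best_r:
            best, best_r = state, r
    return best

print(customize())  # e.g. (64, 6) on this toy reward surface
```

    The distributed variant in the paper then runs one such customization per detector; under this sketch's assumptions, that would amount to mapping customize() over the detector set on a cluster.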

    Nothing to gain, plenty to lose

    Get PDF
    Empirical research examining the proposed privatisation of the NSW electricity transmission and distribution businesses has found the asset recycling strategy is likely to negatively impact the state’s credit rating over the medium to long term. The analysis also examined the relative efficiency of public and private networks, finding that once the physical span of each network was considered (essentially the number of kilometres covered), publicly owned electricity networks currently operate more efficiently than privately owned assets in other states. Written by Market Economics managing director Stephen Koukoulas, who has more than 25 years’ experience as an economist in government and banking, the report found there was no logical case for privatisation, with many arguments based on questionable assumptions and generalisations. The report also highlights that the electricity transmission and distribution businesses currently provide a relatively stable and low-risk cash flow to the budget. The report made a number of key findings, including:
    • Privatising NSW’s transmission and distribution assets is likely to drive up prices due to higher overheads in comparable privatised businesses;
    • The physical span of different networks is the single largest factor behind variations in both operational and capital expenditure;
    • NSW’s publicly owned networks outperform privately owned peers on operating expenses;
    • Publicly owned networks appear more willing to engage in long-term planning when undertaking capital expenditure; and
    • Privatising NSW’s electricity network assets offers little short-term budgetary gain and could well be detrimental over the medium to long term.

    A Framework for Spatial Database Explanations

    Get PDF
    In the last few years, there has been a tremendous increase in the use of big data. Most of this data is hard to understand because of its size and dimensionality. The importance of this problem is underscored by the Big Data Research and Development Initiative, announced by the United States administration in 2012 to address problems faced by the government. Various states and cities in the US gather spatial data about incidents such as police calls for service. Querying large amounts of data can raise many questions. For example, when we look at arithmetic relationships between queries over heterogeneous data, there are many differences. How can we explain what factors account for these differences? If we define the observation as an arithmetic relationship between queries, this kind of problem can be addressed by aggravation or intervention. Aggravation examines the value of the observation over different sets of tuples, while intervention examines the value of the observation after removing sets of tuples. We call the predicates that represent these tuples explanations. Observations by themselves have limited importance. For example, if we observe a large number of taxi trips in a specific area, we might ask: why are there so many trips here? Explanations attempt to answer such questions. While aggravation and intervention are designed for non-spatial data, we propose a new approach for explaining spatially heterogeneous data. Our approach expands on aggravation and intervention while using spatial partitioning/clustering to improve explanations for spatial data. The proposed approach was evaluated against a real-world taxi dataset as well as a synthetic disease outbreak dataset, and was found to outperform aggravation in precision and recall while outperforming intervention in precision.
    Dissertation/Thesis: Masters Thesis, Computer Science, 201
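    To make the aggravation/intervention distinction concrete, here is a minimal Python sketch under assumed names: observation(), intervention_score(), aggravation_score(), and the toy taxi-trip data are all illustrative, not from the thesis. The observation is an arithmetic relationship between two aggregate queries (here, a ratio of counts), and candidate explanations are predicates over tuple attributes.

```python
# Minimal sketch of aggravation vs. intervention over candidate
# explanation predicates. All names and data here are illustrative.
from typing import Callable, Dict, List

Row = Dict[str, str]                 # a tuple is just an attribute dict
Predicate = Callable[[Row], bool]    # an explanation is a predicate

def observation(tuples: List[Row]) -> float:
    """Assumed observation: ratio of night trips to day trips."""
    night = sum(1 for t in tuples if t["period"] == "night")
    day = sum(1 for t in tuples if t["period"] == "day")
    return night / day if day else float("inf")

def intervention_score(tuples: List[Row], pred: Predicate) -> float:
    """How much does removing the predicate's tuples change the observation?"""
    rest = [t for t in tuples if not pred(t)]
    return abs(observation(tuples) - observation(rest))

def aggravation_score(tuples: List[Row], pred: Predicate) -> float:
    """What is the observation's value on the predicate's tuples alone?"""
    return observation([t for t in tuples if pred(t)])

# Toy data: zone A is night-heavy, zone B is day-heavy.
trips = (
    [{"zone": "A", "period": "night"}] * 30
    + [{"zone": "A", "period": "day"}] * 10
    + [{"zone": "B", "period": "night"}] * 5
    + [{"zone": "B", "period": "day"}] * 25
)
zone_a: Predicate = lambda t: t["zone"] == "A"

print(intervention_score(trips, zone_a))  # 0.8: removing zone A shifts the ratio a lot
print(aggravation_score(trips, zone_a))   # 3.0: the ratio within zone A alone
```

    The thesis's contribution, per the abstract, is to rank such predicates over spatial partitions or clusters rather than arbitrary attribute predicates; the scoring above only illustrates the two baseline notions it builds on.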