
    Long Memory and FIGARCH Models for Daily and High Frequency Commodity Prices

    Daily futures returns on six important commodities are found to be well described as FIGARCH fractionally integrated volatility processes, with small departures from the martingale-in-mean property. The paper also analyzes several years of high-frequency intraday commodity futures returns and finds very similar long-memory-in-volatility features at this higher sampling frequency. Semiparametric Local Whittle estimation of the long memory parameter supports these conclusions. Estimates of the long memory parameter are consistent across many different data sampling frequencies, suggesting that the series are self-similar. The results have important implications for future empirical work using commodity price and returns data.
    Keywords: Commodity returns, Futures markets, Long memory, FIGARCH
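    As a pointer for readers, the following is a minimal sketch of the semiparametric Local Whittle estimator of the long memory parameter referred to in the abstract, applied to a volatility proxy such as squared returns; the function name, bandwidth rule, and data handling are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: semiparametric Local Whittle estimation of the long-memory
# parameter d. All names and the bandwidth rule of thumb are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Estimate d from the first m Fourier frequencies of the periodogram of x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.65)  # common bandwidth rule of thumb (assumption)
    # Periodogram at Fourier frequencies lambda_j = 2*pi*j/n, j = 1..m
    fft = np.fft.fft(x - x.mean())
    I = np.abs(fft[1:m + 1]) ** 2 / (2 * np.pi * n)
    lam = 2 * np.pi * np.arange(1, m + 1) / n

    def objective(d):
        # R(d) = log( mean(lambda_j^{2d} I_j) ) - 2d * mean(log lambda_j)
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x

# Usage (hypothetical data): d_hat = local_whittle_d(np.diff(np.log(prices)) ** 2)
```

    Re-running such an estimator on the same series aggregated to coarser sampling frequencies and comparing the estimates is one informal way to probe the self-similarity the abstract refers to.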

    EIE: Efficient Inference Engine on Compressed Deep Neural Network

    State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. The previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy-efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE a 120x energy saving; exploiting sparsity saves 10x; weight sharing gives 8x; skipping zero activations from ReLU saves another 3x. Evaluated on nine DNN benchmarks, EIE is 189x and 13x faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS/s working directly on a compressed network, corresponding to 3 TOPS/s on an uncompressed network, and processes FC layers of AlexNet at 1.88x10^4 frames/sec with a power dissipation of only 600 mW. It is 24,000x and 3,400x more energy efficient than a CPU and GPU, respectively. Compared with DaDianNao, EIE has 2.9x, 19x, and 3x better throughput, energy efficiency, and area efficiency.
    Comment: External links: TheNextPlatform: http://goo.gl/f7qX0L ; O'Reilly: https://goo.gl/Id1HNT ; Hacker News: https://goo.gl/KM72SV ; Embedded-vision: http://goo.gl/joQNg8 ; Talk at NVIDIA GTC'16: http://goo.gl/6wJYvn ; Talk at Embedded Vision Summit: https://goo.gl/7abFNe ; Talk at Stanford University: https://goo.gl/6lwuer. Published as a conference paper in ISCA 2016.
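    As a rough illustration of the computation EIE accelerates, the sketch below performs a sparse matrix-vector product with weight sharing and skips zero input activations; the CSC-style layout, names, and Python setting are assumptions for exposition, not EIE's hardware data structures.

```python
# Minimal sketch: sparse matrix-vector product with weight sharing, skipping
# zero activations. Weights are stored column-wise as (row index, codebook
# index) pairs; the layout and names are illustrative assumptions.
import numpy as np

def shared_weight_spmv(col_ptr, row_idx, weight_idx, codebook, a, n_rows):
    """Compute y = W @ a, where W's nonzeros are indices into a shared codebook."""
    y = np.zeros(n_rows)
    for j, a_j in enumerate(a):
        if a_j == 0.0:
            continue  # skip zero activations produced by ReLU
        for k in range(col_ptr[j], col_ptr[j + 1]):  # nonzeros of column j
            y[row_idx[k]] += codebook[weight_idx[k]] * a_j
    return y
```

    Because a zero activation contributes nothing to any output, skipping it avoids both the multiply-accumulates and the corresponding weight fetches, which is the source of the activation-sparsity saving cited in the abstract.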

    Bumpless Topology Transition

    The topology transition problem of transmission networks is becoming increasingly crucial as topological flexibility is more widely leveraged to promote high renewable penetration. This paper proposes a novel methodology to address this problem. Aiming at a bumpless topology transition in terms of both static and dynamic performance, the methodology utilizes various eligible control resources in transmission networks to cooperate with the optimization of the line-switching sequence. Mathematically, a composite formulation is developed to efficiently yield bumpless transition schemes with both AC feasibility and stability ensured. With all non-convexities linearized and tractable bumpiness metrics, a convex mixed-integer program first optimizes the line-switching sequence and part of the control resources. Two nonlinear programs then recover AC feasibility and optimize the remaining control resources by minimizing the $\mathcal{H}_2$-norm of the associated linearized systems, respectively. The final transition scheme is selected by an accurate evaluation that includes stability verification via time-domain simulations. Finally, numerical studies demonstrate the effectiveness and superiority of the proposed methodology in achieving bumpless topology transition.
    Comment: Accepted by TPWR
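    As a small illustration of the bumpiness metric mentioned above, the sketch below computes the $\mathcal{H}_2$-norm of a linear time-invariant system dx/dt = Ax + Bw, y = Cx via its controllability Gramian; the matrices and function name are placeholders, and the paper's actual linearized models and optimization are not reproduced.

```python
# Minimal sketch: H2-norm of a stable LTI system (A, B, C) via the
# controllability Gramian P solving A P + P A^T + B B^T = 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """Return ||G||_H2 = sqrt(trace(C P C^T)); requires A to be Hurwitz."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Toy stable system (placeholder values):
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
print(h2_norm(A, B, C))
```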

    A key to room-temperature ferromagnetism in Fe-doped ZnO: Cu

    Successful synthesis of room-temperature ferromagnetic semiconductors, Zn$_{1-x}$Fe$_x$O, is reported. The essential ingredient in achieving room-temperature ferromagnetism in bulk Zn$_{1-x}$Fe$_x$O was found to be additional Cu doping. A transition temperature as high as 550 K was obtained in Zn$_{0.94}$Fe$_{0.05}$Cu$_{0.01}$O; the saturation magnetization at room temperature reached a value of 0.75 $\mu_{\rm B}$ per Fe. Large magnetoresistance was also observed below 100 K.
    Comment: 11 pages, 4 figures; to appear in Appl. Phys. Lett.