55 research outputs found
The FLUXNET2015 dataset and the ONEFlux processing pipeline for eddy covariance data
The FLUXNET2015 dataset provides ecosystem-scale data on CO2, water, and energy exchange between the biosphere and the atmosphere, along with other meteorological and biological measurements, from 212 sites around the globe (over 1500 site-years, up to and including 2014). These sites, independently managed and operated, voluntarily contributed their data to create global datasets. Data were quality controlled and processed using uniform methods to improve consistency and intercomparability across sites. The dataset is already being used in a number of applications, including ecophysiology studies, remote sensing studies, and the development of ecosystem and Earth system models. FLUXNET2015 includes derived data products, such as gap-filled time series, ecosystem respiration and photosynthetic uptake estimates, uncertainty estimates, and metadata about the measurements, presented for the first time in this paper. In addition, 206 of these sites are for the first time distributed under a Creative Commons (CC-BY 4.0) license. This paper details this enhanced dataset and the processing methods, now made available as open-source code, making the dataset more accessible, transparent, and reproducible.
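As a rough illustration of how such half-hourly data are typically consumed, the sketch below loads a FLUXNET2015-style CSV with pandas and keeps only measured or well-gap-filled records. The file name and the column names (TIMESTAMP_START, NEE_VUT_REF, NEE_VUT_REF_QC) follow the dataset's usual naming convention but are assumptions for illustration, not something specified in the abstract.

```python
# Minimal sketch: reading a FLUXNET2015-style half-hourly file with pandas.
# File and column names are assumptions; consult the official documentation.
import pandas as pd

df = pd.read_csv("FLX_XX-Xxx_FLUXNET2015_FULLSET_HH.csv", na_values=[-9999])
df["TIMESTAMP_START"] = pd.to_datetime(df["TIMESTAMP_START"].astype(str),
                                       format="%Y%m%d%H%M")

# Keep measured or high-quality gap-filled half-hours (assumed QC flag 0 or 1).
good = df[df["NEE_VUT_REF_QC"] <= 1]

# Daily mean net ecosystem exchange as a quick sanity check on the site-years.
daily_nee = good.set_index("TIMESTAMP_START")["NEE_VUT_REF"].resample("D").mean()
print(daily_nee.head())
```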
A Hybrid Architecture for Multimedia Processors
Novel algorithmic features of multimedia applications and System on Chip (SoC) design using state-of-the-art CMOS technology are driving forces behind new multimedia processors. In this paper we propose an architecture that, based on this emerging technology, provides high performance and flexibility. It is a hybrid design consisting of instruction systolic arrays (ISAs), used as a special-purpose accelerator, and RISC cores, used as the basis of a general-purpose processor. The architecture is hierarchical and scalable, which facilitates the hardware/software codesign of multimedia processing circuits and systems. Control-intensive functions can be implemented on the general-purpose CPU, while computation-intensive functions can rely on the accelerator.
A Minimal Reduction Approach for the Collapsing Knapsack Problem
Within research related to NP-hard problems, major emphasis is now placed on solving problems as large as possible within a given time. This paper follows this approach for the Collapsing Knapsack Problem (CKP). The collapsing knapsack problem is a generalized 0-1 knapsack problem in which the capacity of the knapsack is a non-increasing function of the number of items included. We generalize a well-known reduction to a standard knapsack problem (SKP) and propose a new reduction that leads to a significantly smaller capacity and smaller coefficients in the resulting SKP. With the new reduction, the capacity of the resulting SKP is reduced from 3nA to 2(n+2)b(1) with b(1) <= A, where A is the total weight of all items. The MINKNAP algorithm, proposed by Pisinger (1997) to solve SKP, is directly improved by the moderate coefficients of the resulting SKP. The pseudo-polynomial time to solve CKP is reduced from O(n^3 a') to O(n^2 b(1)), where a' is an upper bound on the weights.
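For readers unfamiliar with CKP, the sketch below solves small instances with a direct dynamic program over (number of items packed, total weight). It is a naive baseline for illustration only, not the reduction-to-SKP approach proposed in the paper; the function and variable names are my own, and capacity[k] is assumed to give the capacity when exactly k items are packed.

```python
# Naive dynamic program for the Collapsing Knapsack Problem (illustration only;
# NOT the reduction-to-SKP approach described in the paper).
def solve_ckp(profits, weights, capacity):
    n = len(profits)
    max_w = capacity[1] if n >= 1 else 0   # capacity is non-increasing in k
    NEG = float("-inf")
    # best[k][w] = maximum profit using exactly k items of total weight w
    best = [[NEG] * (max_w + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for p, wt in zip(profits, weights):
        for k in range(n - 1, -1, -1):          # backwards: each item used at most once
            for w in range(max_w - wt, -1, -1):
                if best[k][w] > NEG and best[k][w] + p > best[k + 1][w + wt]:
                    best[k + 1][w + wt] = best[k][w] + p
    # An item count k is feasible only if its total weight fits within capacity[k].
    return max(best[k][w]
               for k in range(n + 1)
               for w in range(min(max_w, capacity[k]) + 1)
               if best[k][w] > NEG)

# Example: the capacity shrinks as more items are packed.
profits = [6, 5, 4, 3]
weights = [4, 3, 3, 2]
capacity = [10, 9, 7, 5, 4]       # capacity[k] for k = 0..n items packed
print(solve_ckp(profits, weights, capacity))   # -> 11 (items 1 and 2)
```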
Tomographic Image Reconstruction on the Instruction Systolic Array
Instruction systolic arrays (ISAs) have been developed to combine the speed and simplicity of systolic arrays with the flexibility of MIMD parallel computer systems. ISAs are available as square arrays of small RISC processors capable of performing integer and floating-point arithmetic. In this paper we show that the systolic control flow can be used for efficient reconstruction of images from their projections. The demand for fast image reconstruction arises in the field of computerized tomography. We show how the new parallel algorithm leads to a high-speed implementation on Systola 1024, the first commercial parallel computer with the ISA architecture.
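To give a flavor of the underlying reconstruction task, independent of the ISA mapping, the sketch below performs a simple unfiltered backprojection from a parallel-beam sinogram using NumPy and SciPy. It is a generic serial illustration, not the parallel algorithm of the paper, and the function and parameter names are assumptions.

```python
# Generic unfiltered backprojection from a parallel-beam sinogram
# (illustration only; not the ISA-parallel algorithm described in the paper).
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """sinogram: 2-D array, one row of detector readings per projection angle."""
    n_angles, n_det = sinogram.shape
    image = np.zeros((n_det, n_det))
    for row, angle in zip(sinogram, angles_deg):
        # Smear each 1-D projection back across the image plane...
        smear = np.tile(row, (n_det, 1))
        # ...and rotate the smear to the angle at which it was acquired.
        image += rotate(smear, angle, reshape=False, order=1)
    return image / n_angles
```

In practice a filtering step (filtered backprojection) precedes the smearing to avoid blurring; it is omitted here to keep the sketch minimal.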
Major line removal morphological Hough transform on a hybrid system
This paper describes an implementation of a novel major line removal Hough transform on a new parallel architecture, the hybrid system. A hybrid system is a combination of single instruction multiple data (SIMD) and multiple instruction multiple data (MIMD) systems processing at the same time. The line removal algorithm, used for detecting lines in an image, strips away major lines so that minor lines become more easily detectable. The algorithm is implemented and evaluated on the hybrid system. Because the hybrid system is a new, extended architecture, we also derive an expression for its speedup. We obtained speedups that surpass those of MIMD systems with the same number of PEs. We also introduce a new SIMD concept.
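The sketch below illustrates the general idea of major line removal: find the strongest line in a standard (rho, theta) Hough accumulator and erase its edge points so that weaker lines stand out. It is a serial conceptual sketch in NumPy, not the paper's morphological or hybrid-system implementation, and the function names are my own.

```python
# Serial illustration of "major line removal": find the strongest line in a
# Hough accumulator and erase its edge points so weaker lines emerge.
# Conceptual sketch only; not the paper's morphological/hybrid algorithm.
import numpy as np

def hough_accumulator(edges, n_theta=180):
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*edges.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1     # one vote per angle for this point
    return acc, thetas, diag

def remove_major_line(edges, tol=1.0):
    acc, thetas, diag = hough_accumulator(edges)
    rho_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    rho, theta = rho_idx - diag, thetas[t_idx]
    ys, xs = np.nonzero(edges)
    # Erase edge points lying (within tol pixels) on the detected major line.
    on_line = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho) <= tol
    cleaned = edges.copy()
    cleaned[ys[on_line], xs[on_line]] = 0
    return cleaned, (rho, theta)
```

Calling remove_major_line repeatedly peels off lines in decreasing order of strength, which is the effect the abstract describes.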
MIMD-SIMD hybrid system: towards a new low cost parallel system
This paper describes a new parallel architecture which we call an 'MIMD-SIMD hybrid system'. As the name implies, the MIMD-SIMD hybrid system (also referred to simply as the hybrid system in this paper) is a combination of SIMD and MIMD systems working concurrently to produce an optimal architecture. This new parallel architecture can achieve speedups greater than its corresponding MIMD architecture can achieve alone. We introduce our new SIMD concept and show the contribution of the SIMD part to the hybrid system. We have also developed a general formula for the speedup of the hybrid system, so that accurate predictions can be made about its performance. An MIMD-SIMD hybrid system was constructed and used to implement a visualization algorithm.
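The abstract does not reproduce the speedup formula itself. As a generic illustration of how such a prediction is usually framed, the sketch below computes speedup as the ratio of MIMD-only time to hybrid time under the simplifying assumption that a fraction of the workload can be spread over the MIMD and SIMD parts running concurrently; the model, fractions, and names are assumptions, not the paper's formula.

```python
# Generic speedup estimate for concurrently operating MIMD and SIMD parts
# (illustrative assumption only; the paper derives its own formula, which is
# not reproduced in the abstract).
def hybrid_speedup(rate_mimd, rate_simd, parallel_fraction=1.0):
    """Ratio of MIMD-only time to hybrid time when parallel_fraction of the
    work can be processed by both parts running concurrently."""
    t_mimd = 1.0 / rate_mimd
    t_hybrid = ((1.0 - parallel_fraction) / rate_mimd
                + parallel_fraction / (rate_mimd + rate_simd))
    return t_mimd / t_hybrid

# Example: a SIMD part that processes work twice as fast as the MIMD part,
# usable on 80% of the workload.
print(round(hybrid_speedup(rate_mimd=1.0, rate_simd=2.0, parallel_fraction=0.8), 2))
```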
- …