23 research outputs found

    Evolution of the hydro-climate system in the Lake Baikal basin

    Get PDF
    Climatic changes can profoundly alter hydrological conditions in river basins. Lake Baikal is the deepest and largest freshwater reservoir on Earth and has a unique ecosystem with numerous endemic animal and plant species. We identify long-term historical (1938–2009) and projected future hydro-climatic trends in the Selenga River Basin, the largest sub-basin (>60% of inflow) of Lake Baikal. Our analysis is based on long-term river monitoring and historical hydro-climatic observation data, as well as on the ensemble mean and 22 individual model results of the Coupled Model Intercomparison Project, Phase 5 (CMIP5); the latter cover a historical period (from 1961) and projections for 2010–2039 and 2070–2099. Observations show warming almost twice as fast as the global average over 1938–2009. Decreased intra-annual variability of river discharge over this period indicates basin-scale permafrost degradation. CMIP5 ensemble projections show further future warming, implying continued permafrost thaw. Modelled runoff change, however, is highly uncertain: many models (64%) and their ensemble mean fail to reproduce historical behaviour, and the indicated future increase is small relative to the large differences among individual model results.
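
    The two diagnostics behind these findings are standard computations: a least-squares warming trend and the intra-annual (month-to-month) variability of discharge. A minimal Python sketch with synthetic data, not the paper's observations:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1938, 2010)

        # Synthetic annual mean temperatures (deg C), for illustration only.
        temp = -0.5 + 0.035 * (years - 1938) + rng.normal(0, 0.6, years.size)
        slope = np.polyfit(years, temp, 1)[0]          # least-squares linear trend
        print(f"warming trend: {slope * 10:.2f} deg C per decade")

        # Intra-annual variability: coefficient of variation of the 12 monthly discharges.
        monthly_q = np.abs(rng.normal(900, 400, (years.size, 12)))   # synthetic, m^3/s
        cv = monthly_q.std(axis=1) / monthly_q.mean(axis=1)
        print(f"mean intra-annual CV of discharge: {cv.mean():.2f}")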

    Data for wetlandscapes and their changes around the world

    Get PDF
    Geography and the associated hydrological, hydroclimatic and land-use conditions and their changes determine the states and dynamics of wetlands and their ecosystem services. The influence of these controls is not limited to the local scale of each individual wetland but extends over a larger landscape area that integrates multiple wetlands and their total hydrological catchment – the wetlandscape. However, data and knowledge of conditions and changes over entire wetlandscapes are still scarce, limiting the capacity to accurately understand and manage critical wetland ecosystems and their services under global change. We present a new Wetlandscape Change Information Database (WetCID), consisting of geographic, hydrological, hydroclimatic and land-use information and data for 27 wetlandscapes around the world. It combines survey-based local information with geographic shapefiles and gridded datasets of large-scale hydroclimate and land-use conditions and their changes over whole wetlandscapes. Temporally, WetCID contains 30-year time series of mean monthly precipitation and temperature and of annual land-use conditions. The survey-based site information includes local knowledge of the wetlands, hydrology, hydroclimate and land uses within each wetlandscape, and of the availability and accessibility of associated local data. This novel database (available through PANGAEA, https://doi.org/10.1594/PANGAEA.907398; Ghajarnia et al., 2019) can support site assessments, cross-regional comparisons, and scenario analyses of the roles and impacts of land-use, hydroclimatic and wetland conditions and changes in whole-wetlandscape functions and ecosystem services.
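
    A typical use of such 30-year monthly series is comparing early and late decades at a site. A sketch of that workflow in Python; the variables and layout below are a synthetic stand-in, not WetCID's actual schema (see the PANGAEA archive for the real files):

        import numpy as np
        import pandas as pd

        # Synthetic stand-in for one site's 30-year monthly series (1976-2015);
        # the real WetCID layout on PANGAEA may differ.
        idx = pd.date_range("1976-01", "2015-12", freq="MS")
        rng = np.random.default_rng(0)
        df = pd.DataFrame({"precip_mm": rng.gamma(2.0, 30.0, idx.size),
                           "temp_c": 5 + 15 * np.sin(2 * np.pi * idx.month / 12)},
                          index=idx)

        early = df.loc["1976":"1985"].mean()   # first decade
        late = df.loc["2006":"2015"].mean()    # last decade
        print(late - early)                    # mean change per variable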

    Publisher Correction: Hydro-climatic changes of wetlandscapes across the world

    Get PDF
    Assessments of ecosystem service and function losses of wetlandscapes (i.e., wetlands and their hydrological catchments) suffer from knowledge gaps regarding the impacts of ongoing hydro-climatic change. This study investigates hydro-climatic changes during 1976–2015 in 25 wetlandscapes distributed across the world’s tropical, arid, temperate and cold climate zones. Results show that the wetlandscapes were subject to precipitation (P) and temperature (T) changes consistent with mean changes over the world’s land area. However, arid and cold wetlandscapes experienced larger T increases than their respective climate zones. Average P also decreased in arid and cold wetlandscapes, in contrast to the P of the arid and cold climate zones as a whole, suggesting that these wetlandscapes are located in regions of elevated climate pressure. For most wetlandscapes with available runoff (R) data, the decreases were larger in R than in P, which was attributed to aggravation of climate change impacts by enhanced evapotranspiration losses, e.g. caused by land-use changes.
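
    The attribution argument rests on the long-term catchment water balance, R = P - ET (storage change neglected): if runoff falls by more than precipitation, evapotranspiration must have risen. A toy illustration with made-up period means:

        # Made-up period-mean values (mm/yr), purely to illustrate the reasoning.
        p_early, p_late = 520.0, 500.0   # precipitation
        r_early, r_late = 180.0, 140.0   # runoff

        dP, dR = p_late - p_early, r_late - r_early
        dET = dP - dR                    # from R = P - ET with negligible storage change
        print(f"dP = {dP:+.0f}, dR = {dR:+.0f} -> implied dET = {dET:+.0f} mm/yr")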

    Implementation of elementary functions in FPGAs using the example of the CORDIC algorithm in the high-level language Mitrion-C

    No full text
    Elementary functions are used very often in scientific computations. Quantum chemistry, physics and financial computing are only a few examples of fields where elementary functions such as the exponential and the logarithm are computed intensively. This paper presents the implementation of an exp(x) core based on the CORDIC algorithm, written in the Mitrion-C language. Mitrion-C is a new high-level language that enables the implementation of pipelined and widely parallel algorithms on FPGA platforms, which makes the process of implementing algorithms on FPGAs faster. From gravitational forces to quantum chemistry and financial mathematics, computational scientists very often use exp(x) in computer simulations. The implemented core generates single-precision exponential values in the IEEE 754 standard. The CORDIC algorithm can be used to compute a wide spectrum of elementary functions, such as sine, cosine and tangent. In our solution, the input argument is split into an integer and a fractional part: values of the exponential for the integer part are stored in a table allocated in internal memory, the fractional part is computed by the CORDIC algorithm, and the final result is obtained by multiplying the two partial values. Our implementation runs on the SGI Altix 4700, a multiprocessor distributed shared-memory system equipped with Virtex-4 LX 200 FPGAs.
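
    The decomposition exp(x) = exp(floor(x)) * exp(frac(x)) can be sketched in a few lines. The sketch below is not the paper's implementation: it uses multiplicative normalization with ln(1 + 2^-i) constants, a shift-and-add relative of the hyperbolic CORDIC variant typically used for exponentials, and Python floats stand in for the FPGA's fixed-point datapath. The table of integer-part exponentials mirrors the paper's internal-memory table:

        import math

        STEPS = 30
        LN_CONSTANTS = [math.log(1.0 + 2.0 ** -i) for i in range(STEPS)]
        INT_TABLE = {k: math.exp(k) for k in range(-16, 17)}   # exp of the integer part

        def exp_shift_add(x: float) -> float:
            """exp(x) for x in [-16, 17): table lookup plus shift-and-add iteration."""
            k = math.floor(x)
            f = x - k                         # fractional part in [0, 1)
            y = 1.0
            for i, c in enumerate(LN_CONSTANTS):
                if f >= c:                    # greedy (restoring) decomposition
                    f -= c
                    y *= 1.0 + 2.0 ** -i      # one shift and one add in hardware
            return INT_TABLE[k] * y * (1.0 + f)   # first-order fix-up for the residual

        for x in (0.0, 0.5, 1.75, -2.3):
            print(x, exp_shift_add(x), math.exp(x))

    Separating range reduction (the table) from the core iteration keeps the iterative part confined to [0, 1), which is exactly why the paper can afford a small internal-memory table plus a short CORDIC pipeline.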

    The Java profiler based on byte code analysis and instrumentation for many-core hardware accelerators

    No full text
    One of the most challenging issues with many-core and multi-core architectures is how to exploit their potential computing power in legacy systems without deep knowledge of the architecture. Analysis of the static and dynamic data dependences of a program run can help to identify independent paths that could be computed by individual parallel threads. Statistics on data reuse and data size are also crucial when adapting an application to a many-core GPU architecture, because of its specific memory hierarchy. The proposed profiling system performs static data analysis and computes dynamic dependences for Java programs, and recommends the parts of the source code with the highest potential for parallelization on the GPU. Such an analysis can also provide a starting point for automatic parallelization.
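
    The dynamic side of such an analysis boils down to recording per-iteration reads and writes and looking for values that cross iteration boundaries. A minimal illustration of that idea (in Python for brevity, although the profiler itself works on Java bytecode; the trace format is hypothetical):

        def loop_carried_deps(trace):
            """trace: list of (iteration, 'r'|'w', address). Returns addresses whose
            value written in one iteration is read or overwritten in a later one."""
            last_write = {}   # address -> iteration of last write
            deps = set()
            for it, op, addr in trace:
                if addr in last_write and last_write[addr] < it:
                    deps.add(addr)            # cross-iteration reuse: not trivially parallel
                if op == "w":
                    last_write[addr] = it
            return deps

        # a[i] = a[i-1] + 1 produces cross-iteration flow dependences on every element:
        trace = [(i, op, f"a[{j}]") for i in range(1, 4) for op, j in (("r", i - 1), ("w", i))]
        print(loop_carried_deps(trace))  # {'a[1]', 'a[2]'}

    An empty dependence set for a loop is what lets the profiler recommend it as a GPU candidate.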

    Multicore and GPGPU implementation of chosen text algorithms

    No full text
    This paper presents the implementation of text algorithms on multicore CPUs and GPGPUs. The availability of multicore processors and general-purpose graphics cards makes research on parallel implementations of algorithms, with acceleration as the goal, increasingly important. Text algorithms are essential and often indispensable components of advanced text analysis, and the text-search library functions of many programming languages build on the most popular of them. The paper analyses the most popular text algorithms with respect to their parallelization for implementation on a multicore processor and a general-purpose graphics card. The research presented here shows that text algorithms can be partially parallelized, with acceleration obtained by appropriately dividing the input text between parallel threads (data parallelism). Comparative studies were performed for the Boyer-Moore (Horspool), naive and Knuth-Morris-Pratt algorithms, and the results show the efficiency of these algorithms for different types and sizes of patterns. The GPU implementation was made in the CUDA framework; the OpenMP library was used for the multicore version.
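
    The data-parallel scheme described above, splitting the text between threads with a small overlap so that matches crossing chunk boundaries are not lost, is easy to sketch. A Python illustration using the naive algorithm (CPython threads only demonstrate the decomposition; the reported speedups come from OpenMP and CUDA):

        from concurrent.futures import ThreadPoolExecutor

        def naive_search(text, pattern):
            """All start positions of pattern in text (naive algorithm)."""
            m = len(pattern)
            return [i for i in range(len(text) - m + 1) if text[i:i + m] == pattern]

        def parallel_search(text, pattern, workers=4):
            """Each worker scans one chunk, extended by len(pattern) - 1 characters
            so matches crossing a chunk boundary are still found exactly once."""
            m, n = len(pattern), len(text)
            bounds = [(k * n // workers, (k + 1) * n // workers) for k in range(workers)]
            def scan(lo_hi):
                lo, hi = lo_hi
                hits = naive_search(text[lo:hi + m - 1], pattern)
                return [lo + p for p in hits if lo + p < hi]  # drop overlap duplicates
            with ThreadPoolExecutor(workers) as pool:
                return sorted(h for part in pool.map(scan, bounds) for h in part)

        print(parallel_search("abracadabra" * 3, "abra"))  # [0, 7, 11, 18, 22, 29]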

    A study of parallel techniques for dimensionality reduction and its impact on the quality of text processing algorithms

    No full text
    The presented algorithms employ the Vector Space Model (VSM) and its enhancements, such as TFIDF (Term Frequency–Inverse Document Frequency) with Singular Value Decomposition (SVD). TFIDF was applied to emphasize the important features of documents, and SVD was used to reduce the analysis space. A series of experiments was conducted, revealing important properties of the algorithms and their accuracy. Accuracy was estimated in terms of the algorithms' ability to match the human classification of the subject; for unsupervised algorithms, entropy was used as the quality measure. The combination of VSM, TFIDF and SVD turned out to be the best-performing unsupervised algorithm, with an entropy of 0.16.
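
    The VSM + TFIDF + SVD pipeline, with cluster entropy against human labels as the quality measure, maps directly onto standard library calls. A hedged sketch with a toy corpus; scikit-learn stands in here for whatever implementation the study actually used:

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import KMeans

        docs = ["solar power grid", "wind power turbine",
                "football match score", "tennis match final"]
        true = np.array([0, 0, 1, 1])                      # human labels: energy vs sport

        X = TfidfVectorizer().fit_transform(docs)          # VSM weighted by TFIDF
        Z = TruncatedSVD(n_components=2).fit_transform(X)  # SVD-reduced space
        pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

        def entropy(pred, true):
            """Weighted entropy of human labels within each cluster; 0 = perfect match."""
            h = 0.0
            for c in np.unique(pred):
                p = np.bincount(true[pred == c]) / (pred == c).sum()
                p = p[p > 0]
                h -= (pred == c).mean() * (p * np.log2(p)).sum()
            return h

        print(entropy(pred, true))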

    Real time 8K video quality assessment using FPGA

    No full text
    This paper presents a hardware architecture of a video quality assessment module. Two different metrics were implemented on an FPGA using a modern high-level language for digital system design, Impulse C. The FPGA resource consumption of the presented module is low, which enables module-level parallelization. Tests conducted with four modules working concurrently show that a throughput of 1.96 GB/s can be achieved, so the module is capable of processing an 8K video stream in real time, i.e. at 30 frames per second. The high performance of the presented solution was achieved through a series of architectural optimizations introduced in the module, such as reduced data precision and reuse of various module components.
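
    The abstract does not name the two metrics; PSNR is a common full-reference choice and illustrates the workload well. An 8K UHD frame is 7680 x 4320 pixels, so 30 frames per second means roughly a billion pixel comparisons per second. A Python reference model of PSNR, used here only as an example metric, not as the paper's:

        import numpy as np

        def psnr(ref, test, peak=255.0):
            """Peak signal-to-noise ratio in dB between two 8-bit frames."""
            mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (4320, 7680), dtype=np.uint8)   # one 8K luma frame
        test = np.clip(ref + rng.normal(0, 2, ref.shape), 0, 255).astype(np.uint8)
        print(f"{psnr(ref, test):.1f} dB")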

    Loop profiling tool for HPC code inspection as an efficient method of FPGA based acceleration

    No full text
    This paper presents research on the FPGA-based acceleration of HPC applications. The most important goal is to extract code that can be sped up; a major obstacle is the lack of a tool that could do this. HPC applications usually consist of a huge amount of complex source code, which is one of the reasons why the acceleration process should be as automated as possible. Another reason is to make use of HLLs (high-level languages) such as Mitrion-C (Mohl, 2006), which were invented to make the development of HPRC applications faster. Loop profiling is one of the steps in checking whether an HLL can be inserted into existing HPC source code to accelerate the application. Hence, the most important step towards acceleration is to extract the most time-consuming code and its data dependences, which make the code easier to pipeline and parallelize. Data-dependence information also shows how to implement algorithms in an FPGA circuit with minimal initialization during algorithm execution.
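
    At its core, loop profiling accumulates run time per loop and ranks the results: the most expensive loops are the offload candidates. A minimal Python illustration of that instrument-and-rank idea (the actual tool targets HPC source code, and the loop names here are invented):

        import time
        from collections import defaultdict
        from contextlib import contextmanager

        loop_time = defaultdict(float)   # accumulated wall time per instrumented loop

        @contextmanager
        def timed(name):
            t0 = time.perf_counter()
            yield
            loop_time[name] += time.perf_counter() - t0

        # Instrument candidate loops; the most expensive are FPGA offload candidates.
        with timed("init"):
            data = [i * 0.5 for i in range(100_000)]
        with timed("kernel"):
            acc = 0.0
            for x in data:
                acc += x * x             # hot inner loop
        for name, t in sorted(loop_time.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {t * 1e3:.2f} ms")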

    The comparison of parallel sorting algorithms implemented on different hardware platforms

    No full text
    Sorting is a common problem in computer science, and there are many well-known sorting algorithms created for sequential execution on a single processor. Recently, many-core and multi-core platforms have enabled the creation of widely parallel algorithms: standard processors consist of multiple cores, and hardware accelerators such as GPUs, with their parallel architecture, provide new opportunities to speed up many algorithms. In this paper, we describe the results of implementing several parallel sorting algorithms on GPU cards and multi-core processors. We then present a hybrid algorithm consisting of parts executed on both platforms (a standard CPU and a GPU). The recent literature on GPU sorting implementations lacks a fair comparison between many-core and multi-core platforms; in most cases, it reports the execution times of sorting algorithms on the GPU platform against a single CPU core.
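
    The skeleton shared by the multicore and hybrid variants is sort-then-merge: each worker (a CPU core, or a GPU kernel in the hybrid case) sorts one chunk, and a k-way merge combines the results. A Python sketch of that decomposition; sorted() stands in for the per-device sort:

        import heapq
        import random
        from concurrent.futures import ProcessPoolExecutor

        def sort_chunk(chunk):
            return sorted(chunk)   # stand-in for a per-core or per-GPU sort

        def parallel_sort(data, workers=4):
            """Sort chunks in parallel processes, then k-way merge the results."""
            step = (len(data) + workers - 1) // workers
            chunks = [data[i:i + step] for i in range(0, len(data), step)]
            with ProcessPoolExecutor(workers) as pool:
                return list(heapq.merge(*pool.map(sort_chunk, chunks)))

        if __name__ == "__main__":
            data = [random.random() for _ in range(100_000)]
            assert parallel_sort(data) == sorted(data)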