
    Enhancing Energy Production with Exascale HPC Methods

    High Performance Computing (HPC) resources have become a key enabler for tackling more ambitious challenges in many disciplines. In this step beyond, an explosion in the available parallelism and the use of special-purpose processors are crucial. With such a goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them where necessary and going beyond the state of the art in the HPC exascale simulations required for different energy sources. In this paper, a general overview of these methods is presented as well as some specific preliminary results. The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E project (www.hpc4e.eu), grant agreement n° 689772, from the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through the Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was provided by the Intel Corporation, which enabled us to obtain the presented experimental results on uncertainty quantification in seismic imaging. Postprint (author's final draft).
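    The uncertainty quantification runs mentioned above are typically embarrassingly parallel: many independent forward simulations are executed over sampled inputs and their outputs are aggregated into statistics. The sketch below only illustrates that general pattern and is not HPC4E code; `forward_model` is a hypothetical placeholder for a seismic imaging kernel.

```python
# Minimal sketch of embarrassingly parallel Monte Carlo uncertainty
# quantification. `forward_model` is a hypothetical stand-in for a
# seismic imaging kernel; it is NOT part of the HPC4E code base.
import numpy as np
from multiprocessing import Pool

def forward_model(velocity_perturbation: float) -> float:
    # Placeholder: a real run would invoke a wave-propagation solver.
    return 1.0 / (2.0 + velocity_perturbation)

def sample_once(seed: int) -> float:
    rng = np.random.default_rng(seed)
    # Draw one uncertain input, e.g. a velocity-model perturbation.
    return forward_model(rng.normal(loc=0.0, scale=0.1))

if __name__ == "__main__":
    n_samples = 1_000
    with Pool() as pool:  # one worker process per available core
        outputs = pool.map(sample_once, range(n_samples))
    print(f"mean = {np.mean(outputs):.4f}, std = {np.std(outputs):.4f}")
```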

    BDEv 3.0: energy efficiency and microarchitectural characterization of Big Data processing frameworks

    This is a post-peer-review, pre-copyedit version of an article published in Future Generation Computer Systems. The final authenticated version is available online at: https://doi.org/10.1016/j.future.2018.04.030
    [Abstract] As the size of Big Data workloads keeps increasing, the evaluation of distributed frameworks becomes a crucial task in order to identify potential performance bottlenecks that may delay the processing of large datasets. While most of the existing works generally focus only on execution time and resource utilization, analyzing other important metrics is key to fully understanding the behavior of these frameworks. For example, microarchitecture-level events can bring meaningful insights to characterize the interaction between frameworks and hardware. Moreover, energy consumption is also gaining increasing attention as systems scale to thousands of cores. This work discusses the current state of the art in evaluating distributed processing frameworks, while extending our Big Data Evaluator tool (BDEv) to extract energy efficiency and microarchitecture-level metrics from the execution of representative Big Data workloads. An experimental evaluation using BDEv demonstrates its usefulness in bringing meaningful information from popular frameworks such as Hadoop, Spark and Flink.
    Ministerio de Economía, Industria y Competitividad; TIN2016-75845-P. Ministerio de Educación; FPU14/02805. Ministerio de Educación; FPU15/0338
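    The energy and microarchitecture-level metrics discussed above can, in principle, be gathered by wrapping a workload with hardware-counter tooling. The sketch below is not the BDEv implementation; it only illustrates the idea with Linux `perf stat`, assuming a CPU that exposes RAPL energy events and a workload command chosen by the user.

```python
# Illustrative sketch (not BDEv): wrap a command with Linux `perf stat`
# to collect energy and microarchitecture-level counters.
# Event availability depends on the CPU; RAPL events usually need root.
import subprocess

EVENTS = [
    "power/energy-pkg/",  # package energy in Joules (RAPL), if supported
    "instructions",
    "cycles",
    "cache-misses",
    "branch-misses",
]

def run_with_counters(workload_cmd):
    """Run the workload under system-wide `perf stat` and return its report."""
    cmd = ["perf", "stat", "-a", "-e", ",".join(EVENTS)] + workload_cmd
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stderr  # perf writes its statistics to stderr

if __name__ == "__main__":
    # Replace the placeholder command with a real job submission,
    # e.g. a Hadoop or Spark workload launcher.
    print(run_with_counters(["sleep", "5"]))
```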

    Spatiotemporal analysis of gapfilled high spatial resolution time series for crop monitoring.

    Reliable crop classification maps are important for many agricultural applications, such as field monitoring and food security. Nowadays there are already several crop cover databases at different scales and temporal resolutions for different parts of the world (e.g. Corine Land Cover (CORINE) in Europe or the Cropland Data Layer (CDL) in the United States (US)). However, these databases are historical crop cover maps and hence do not reflect the actual crops on the ground. Usually, these maps require a specific time (annually) to be generated, based on the diversity of the different crop phenologies. The aims of this work are twofold: 1) to analyze the multi-scale spatial crop distribution in order to identify the most representative areas; 2) to analyze the temporal range needed to generate crop cover maps promptly. The analysis is done over the contiguous US (CONUS) in 2019. To address these objectives, different types of data are used: the CDL, a robust and complete cropland mapping of the CONUS that provides annual, geo-referenced land cover rasters, and multispectral gap-filled data at 30 m spatial resolution, preprocessed to fill the gaps caused by clouds and aerosols. This dataset has been generated by fusing Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) observations. Google Earth Engine (GEE), a cloud-based application specialized in geospatial processing, is used to handle this large amount of data. GEE can be used to map crops globally, but it requires efficient algorithms. In this study, different machine learning algorithms are analyzed to generate crop classification maps as promptly as possible. Several classifiers are available in GEE, from simple decision trees to more complex algorithms such as support vector machines (SVM) or neural networks. This study presents the first results and the potential to generate crop classification maps at 30 m spatial resolution using as little temporal information as possible.
    Rajadel Lambistos, C. (2020). Análisis espaciotemporal de series temporales sin huecos de alta resolución espacial. Universitat Politècnica de València. http://hdl.handle.net/10251/155879
    TFG
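    The workflow described above can be sketched with the GEE Python API. The snippet below is a minimal, hedged illustration under assumed choices (Landsat 8 surface reflectance as a stand-in for the fused Landsat-MODIS gap-filled product, an example region, a random forest classifier); it is not the code developed in the thesis.

```python
# Minimal sketch: sample CDL 2019 labels over a Landsat composite and
# train a random forest in Google Earth Engine. Dataset IDs, the region
# and all parameters are illustrative assumptions, not the thesis setup.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-94.0, 41.0, -93.0, 42.0])  # example area in Iowa
bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']

# Predictors: median composite over a reduced temporal range (Jan-Jul 2019).
composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
             .filterBounds(region)
             .filterDate('2019-01-01', '2019-07-01')
             .select(bands)
             .median())

# Labels: 2019 Cropland Data Layer.
cdl = (ee.ImageCollection('USDA/NASS/CDL')
       .filterDate('2019-01-01', '2019-12-31')
       .first()
       .select('cropland'))

# Sample predictors and labels at 30 m, then train the classifier.
samples = composite.addBands(cdl).sample(region=region, scale=30,
                                         numPixels=5000, seed=0)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=50).train(
    features=samples, classProperty='cropland', inputProperties=bands)

classified = composite.classify(classifier)  # per-pixel crop map for the region
print(classifier.confusionMatrix().accuracy().getInfo())  # resubstitution accuracy
```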