10 research outputs found

    The design and engineering of a unified data access layer : bridging data consumers and diverse data sources

    In the research and development process at Océ, considerable amounts of data are generated by machines and services. This data is used by different data consumers at Océ for purposes such as analysis and reporting. However, because the data is generated by different machines and for different purposes, the data sources are incompatible in both format and access technology. When a data consumer requires data from several data sources, it has to write its own integration code, which costs each consumer effort and time. There is therefore a need for a unified Data Access Layer between data sources and data consumers. In addition, to resolve the incompatibility of data formats, the goal is to provide a schema and type system in which data structures are mapped onto each other. Designing and implementing the data access layer is also an important goal, since it validates the chosen technology. The design challenges are: 1) mapping the schema of the data sources to foreign tables, 2) automatically creating foreign tables when the schema evolves, and 3) designing reusable foreign data wrappers. These challenges are addressed in the design. System quality is validated based on user feedback, and the report lists several recommendations for future improvements.
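
    The report's vocabulary (foreign tables, foreign data wrappers) matches PostgreSQL's federation mechanism. As a hedged illustration of design challenges 1 and 2, the Python sketch below generates CREATE FOREIGN TABLE statements from a source schema description, so the mapping can be regenerated whenever a source schema evolves. All names here (TYPE_MAP, machine_logs, sensor_server) are invented for the example, not identifiers from the report.

        # Hedged sketch only; the report does not publish its code.

        # Map source-specific types onto the unified type system.
        TYPE_MAP = {
            "int64": "bigint",
            "float": "double precision",
            "string": "text",
            "datetime": "timestamp",
        }

        def foreign_table_ddl(table, columns, server):
            """Generate CREATE FOREIGN TABLE DDL from a source schema description.

            Re-running this whenever the source schema changes regenerates the
            mapping, which is one way to automate foreign-table creation under
            schema evolution (design challenge 2).
            """
            cols = ",\n    ".join(f"{name} {TYPE_MAP[src_type]}"
                                  for name, src_type in columns)
            return f"CREATE FOREIGN TABLE {table} (\n    {cols}\n) SERVER {server};"

        if __name__ == "__main__":
            schema = [("machine_id", "int64"), ("recorded_at", "datetime"),
                      ("toner_level", "float"), ("status", "string")]
            print(foreign_table_ddl("machine_logs", schema, "sensor_server"))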

    Advanced analytics through FPGA based query processing and deep reinforcement learning

    Today, vast streams of structured and unstructured data are incorporated into databases, and analytical processes are applied to discover patterns, correlations, trends and other useful relationships that feed a broad range of decision-making processes. The amount of generated data has grown very large over the years, and conventional database processing methods from previous generations are no longer sufficient to deliver satisfactory analytics performance and prediction accuracy. New methods are therefore needed in a wide array of fields, from computer architectures, storage systems and network design to statistics and physics. This thesis proposes two methods to address the current challenges and meet the future demands of advanced analytics. First, we present AxleDB, a Field Programmable Gate Array (FPGA) based query processing system that constitutes the frontend of an advanced analytics system. AxleDB melds highly efficient accelerators with memory and storage and provides a unified programmable environment. AxleDB can offload complex Structured Query Language (SQL) queries from the host CPU. Experiments on a set of TPC-H queries show that AxleDB performs full queries between 1.8x and 34.2x faster, and is 2.8x to 62.1x more energy efficient, than MonetDB and PostgreSQL on a single workstation node. Second, we introduce TauRieL, a novel deep reinforcement learning (DRL) based method for combinatorial problems. The idea behind combining DRL and combinatorial problems is to apply the prediction capabilities of deep reinforcement learning and to use the universality of combinatorial optimization problems to explore general-purpose predictive methods. TauRieL uses an actor-critic inspired DRL architecture built from ordinary feedforward nets, and it performs online training, which unifies training and state-space exploration. Experiments show that TauRieL generates solutions two orders of magnitude faster than the state-of-the-art DRL method on the Traveling Salesman Problem while staying within 3% of its accuracy when searching for the shortest tour. We also show that TauRieL can be adapted to the Knapsack problem: with minimal problem-specific modification, it outperforms a Knapsack-specific greedy heuristic.
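
    The abstract describes TauRieL only at a high level. The sketch below is a minimal illustration of the general technique it names: a feedforward policy trained online with a REINFORCE-style update, with a running-mean baseline standing in for the critic, on a small random TSP instance. It is not the thesis implementation; the network shape and hyperparameters are invented for the example.

        # Minimal policy-gradient sketch under the assumptions stated above.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 10                                        # cities in a random TSP instance
        coords = rng.random((N, 2))
        D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

        W = np.zeros((N, 2 * N))                      # linear policy: logits = W @ state
        baseline, lr, beta = 0.0, 0.1, 0.9

        def rollout():
            """Sample a tour and accumulate the gradient of its log-probability."""
            visited = np.zeros(N)
            city = 0
            visited[city] = 1.0
            tour, grad = [city], np.zeros_like(W)
            for _ in range(N - 1):
                x = np.concatenate([np.eye(N)[city], visited])   # state features
                logits = np.where(visited == 1, -np.inf, W @ x)  # mask visited cities
                p = np.exp(logits - logits[np.isfinite(logits)].max())
                p /= p.sum()
                city = int(rng.choice(N, p=p))
                grad += np.outer(np.eye(N)[city] - p, x)         # d log softmax / dW
                visited[city] = 1.0
                tour.append(city)
            length = sum(D[tour[i], tour[(i + 1) % N]] for i in range(N))
            return length, grad

        for _ in range(2000):                         # online: explore and update together
            length, grad = rollout()
            advantage = -length - baseline            # reward is negative tour length
            W += lr * advantage * grad                # REINFORCE policy-gradient step
            baseline = beta * baseline + (1 - beta) * (-length)

        print("tour length after training:", rollout()[0])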

    Doctor of Philosophy

    As the base of the software stack, system-level software is expected to provide efficient and scalable storage, communication, security and resource-management functionality. However, many functionalities at the system level, such as encryption, packet inspection, and error correction, are computationally expensive and require substantial computing power. Moreover, today's application workloads have entered gigabyte and terabyte scales, which demand even more computing power. To meet the rapidly increasing computing-power demand at the system level, this dissertation proposes using parallel graphics processing units (GPUs) in system software. GPUs excel at parallel computing, and their parallel performance has been improving much faster than that of central processing units (CPUs). However, system-level software was originally designed to be latency-oriented, whereas GPUs are designed for long-running computation and large-scale data processing, which are throughput-oriented. This mismatch makes it difficult to fit system-level software to GPUs. This dissertation presents generic principles of system-level GPU computing developed while creating our two general frameworks for integrating GPU computing into storage and network packet processing. The principles are generic design techniques and abstractions for dealing with common system-level GPU computing challenges. They have been evaluated in concrete cases, including storage and network packet processing applications augmented with GPU computing. The significant performance improvement found in the evaluation shows the effectiveness and efficiency of the proposed techniques and abstractions. The dissertation also presents a literature survey of the relatively young field of system-level GPU computing, introducing the state of the art in both applications and techniques as well as their future potential.
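
    One standard way to bridge the latency/throughput mismatch the abstract names is to batch many small system-level requests into one large GPU launch. The sketch below illustrates that idea only; it is not taken from the dissertation. numpy stands in for a GPU array library (CuPy mirrors this API), and the toy checksum and batch size are invented for the example.

        # Hedged illustration of request batching for GPU offload.
        import numpy as np

        def checksum_batch(packets: np.ndarray) -> np.ndarray:
            """One vectorized pass over a whole batch of packets (one per row)."""
            return packets.sum(axis=1) & 0xFFFF       # toy 16-bit additive checksum

        queue = []                                    # requests accumulated by the system layer

        def submit(packet, batch_size=1024):
            """Enqueue one request; offload only when a full batch is ready."""
            queue.append(packet)
            if len(queue) < batch_size:
                return None
            batch = np.stack(queue)                   # one large launch instead of 1024 small ones
            queue.clear()
            return checksum_batch(batch)

        rng = np.random.default_rng(1)
        for _ in range(4096):                         # 4096 packets arrive one at a time
            out = submit(rng.integers(0, 256, size=64))
            if out is not None:
                print("offloaded a batch of", len(out), "packets")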

    Modern data analytics in the cloud era

    Cloud computing has been the groundbreaking technology of the last decade. The ease of use of the managed environment, in combination with a nearly unlimited amount of resources and a pay-per-use price model, enables fast and cost-efficient project realization for a broad range of users. Cloud computing also changes the way software is designed, deployed and used. This thesis focuses on database systems deployed in the cloud environment. We identify three major interaction points of the database engine with the environment whose requirements have changed compared to traditional on-premise data warehouse solutions. First, software is deployed on elastic resources. Systems should therefore support elasticity in order to match workload requirements and be cost-effective. We present an elastic scaling mechanism for distributed database engines, combined with a partition manager that provides load balancing while minimizing partition reassignments during elastic scaling. Furthermore, we introduce a buffer pre-heating strategy that mitigates the cold start after scaling and yields an immediate performance benefit from the scaled resources. Second, cloud-based systems are accessible and available from nearly everywhere. Consequently, data is frequently ingested from numerous endpoints, which differs from bulk loads or ETL pipelines in a traditional data warehouse solution. Many users avoid defining database constraints in order to prevent transaction aborts due to conflicts or to speed up data ingestion. To mitigate this issue we introduce the concept of PatchIndexes, which allow the definition of approximate constraints. PatchIndexes maintain exceptions to constraints, make the constraints usable in query optimization and execution, and offer efficient update support. The concept can be applied to arbitrary constraints, and we provide examples of approximate uniqueness and approximate sorting constraints. Moreover, we show how PatchIndexes can be exploited to define advanced constraints such as an approximate multi-key partitioning, which offers robust query performance over workloads with different partition-key requirements. Third, data-centric workloads have changed over the last decade. Besides traditional SQL workloads for business intelligence, data science workloads are now of significant importance. In these cases the database system may act only as a data supplier, while the computational effort takes place in dedicated data science or machine learning (ML) environments. As this workflow has several drawbacks, we pursue the goal of pushing advanced analytics towards the database engine and introduce the Grizzly framework as a DataFrame-to-SQL transpiler. Building on this, we identify user-defined functions (UDFs) and ML inference as important tasks that would benefit from deeper engine integration, and we investigate and evaluate approaches for in-database execution of Python UDFs and in-database ML inference.
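
    The abstract describes PatchIndexes only conceptually: declare an approximate constraint, keep an explicit exception set, and answer queries from the clean part plus the patches. The Python class below is an invented illustration of that idea for approximate sortedness, under those stated assumptions; it is not the engine's implementation.

        # Hedged sketch of an approximate-sortedness PatchIndex.
        import bisect

        class PatchIndex:
            """Column that is sorted except for the rows kept in `exceptions`."""

            def __init__(self, data):
                self.clean, self.exceptions = [], []
                for value in data:
                    self.insert(value)

            def insert(self, value):
                """Updates never reorganize the clean part; violations become patches."""
                if not self.clean or value >= self.clean[-1]:
                    self.clean.append(value)          # preserves the sorted invariant
                else:
                    self.exceptions.append(value)     # violates it: record as an exception

            def range_query(self, lo, hi):
                # Binary search over the sorted part, short scan over the exceptions.
                i = bisect.bisect_left(self.clean, lo)
                j = bisect.bisect_right(self.clean, hi)
                return self.clean[i:j] + [v for v in self.exceptions if lo <= v <= hi]

        idx = PatchIndex([1, 3, 2, 4, 7, 5, 9])       # 2 and 5 become exceptions
        print(sorted(idx.range_query(2, 7)))          # -> [2, 3, 4, 5, 7]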

    Atti del XXXV Convegno Nazionale di Idraulica e Costruzioni Idrauliche

    The 35th edition of the National Conference on Hydraulics and Hydraulic Construction (IDRA16), co-organized by the Italian Hydraulics Group (GII) and the Department of Civil, Chemical, Environmental, and Materials Engineering (DICAM) of Alma Mater Studiorum - Università di Bologna, was held in Bologna from 14 to 16 September 2016. The National Conference thus returned to the shadow of the "Nettuno" after the 1982 edition (the 18th). The title of the 35th edition, "Ambiente, Risorse, Energia: le sfide dell'Ingegneria delle acque in un mondo che cambia" ("Environment, Resources, Energy: the challenges of water engineering in a changing world"), underlines the importance and complexity of the topics surrounding the study and governance of water resources. The ever deeper interconnections between water resources, economic development and social well-being spur both academia and the entire national and international technical-scientific community to identify and implement innovative and optimal management strategies: challenges felt to be more necessary than ever in a continuously evolving environmental context such as the one we live in. The 35th edition of the conference therefore served as a meeting point for the Italian technical-scientific community for a comprehensive discussion of these issues, offering a particularly rich and articulated scientific program that covered all areas of water engineering. The conference opened in the historic setting of the Church of Santa Cristina, one of the most characteristic and beautiful places in the city and today a favored venue for classical music, while the scientific presentations and discussions took place mainly at the School of Engineering and Architecture of the University of Bologna in Via Terracini. This open-access digital volume (Creative Commons 4.0 license) collects the short papers received by the IDRA16 Scientific Committee and accepted for presentation at the conference after a peer-review process. The volume organizes these papers into seven macro-themes, which form its chapters: I. fluid mechanics; II. maritime and coastal environment; III. criteria, methods and models for the analysis of hydrological processes and water management; IV. management and protection of water bodies and ecosystems; V. assessment and mitigation of hydrological and hydraulic risk; VI. water-society dynamics: sustainable development and land management; VII. monitoring, open data and free software. Each macro-theme groups several autonomous specialist sessions that ran in parallel during the conference days, whose titles are recalled within this volume. The breadth and diversity of the topics addressed, which well represent the complexity of the many challenges of water engineering, are evident from the collection of short papers presented. The strong participation of the Italian scientific community is demonstrated by the more than 350 short papers, distributed almost uniformly across the seven macro-themes. These papers are extended abstracts of variable length written in Italian or English.
    In particular, writing in English was permitted in the hope of raising the visibility of the presented work to a supranational level, thanks to the open-access publication of the conference proceedings. The volume is divided into three parts: the opening part presents the volume and the general index of contributions grouped by macro-theme; the central part collects the short papers; the third part contains the author index, which closes the volume.
