10 research outputs found
New techniques to integrate blockchain in Internet of Things scenarios for massive data management
Mención Internacional en el título de doctor (International Mention in the doctoral degree).
Nowadays, regardless of the use case, most IoT data is processed using
workflows that are executed on different infrastructures (edge-fog-cloud),
which produces dataflows from the IoT through the edge to the fog/cloud.
In many cases, they also involve several actors (organizations and users),
which makes it challenging for organizations to verify the transactions
performed by the participants in the dataflows built by workflow engines
and pipeline frameworks. It is essential for organizations not only to
verify that applications are executed in the strict sequence previously
established in a DAG by authenticated participants, but also to verify
that the incoming and outgoing IoT data of each stage of a
workflow/pipeline have not been altered by third parties or by users
associated with the organizations participating in that workflow/pipeline.
Blockchain technology, with its mechanism for recording immutable transactions
in a distributed and decentralized manner, is an ideal technology to address
these challenges, since it allows the generated records to be verified in a
secure manner. However, integrating blockchain technology with workflows for
IoT data processing is not trivial: it is a challenge to preserve the generality
of workflow and/or pipeline engines, which must be modified to include an
embedded blockchain module. Thus, the main goal of this thesis was to develop
new techniques to integrate blockchain in Internet of Things (IoT) scenarios
for massive data management in edge-fog-cloud environments.
To fulfill this general objective, we designed a content delivery model for
processing big IoT data in edge-fog-cloud computing through micro/nanoservice
composition, a continuous verification model based on blockchain to register
significant events from the continuous delivery model, and a set of techniques
to integrate blockchain into quasi-real-time systems that ensure the
traceability and non-repudiation of data obtained
from devices and sensors. The proposed models have been thoroughly evaluated,
showing their feasibility and good performance.
This work has been partially supported by the project "CABAHLA-CM: Convergencia
Big data-Hpc: de los sensores a las Aplicaciones" (S2018/TCS-4423) from the
Madrid Regional Government.
Programa de Doctorado en Ciencia y Tecnología Informática, Universidad Carlos III
de Madrid. Thesis committee: President: Paolo Trunfio; Secretary: David Exposito
Singh; Member: Rafael Mayo García.
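The continuous verification idea in the abstract above can be illustrated with a minimal sketch: each workflow stage appends a record holding digests of its input and output data, chained to the previous record's hash, so any later alteration is detectable. All function names and the record layout are illustrative assumptions, not the thesis implementation, and a real deployment would anchor these records on an actual blockchain.

```python
import hashlib
import json

def record_stage(log, stage, data_in, data_out):
    """Append a verification record for one workflow stage.

    Hypothetical sketch: the record stores SHA-256 digests of the stage's
    input/output data and the previous record's hash, mimicking the
    append-only chaining a blockchain ledger would provide.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "stage": stage,
        "in": hashlib.sha256(data_in).hexdigest(),
        "out": hashlib.sha256(data_out).hexdigest(),
        "prev": prev_hash,
    }
    canon = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(canon).hexdigest()
    log.append(body)
    return body

def verify_chain(log):
    """Re-hash every record; any tampered digest or broken link fails."""
    prev = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("stage", "in", "out", "prev")}
        canon = json.dumps(body, sort_keys=True).encode()
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(canon).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
record_stage(log, "sense", b"raw readings", b"filtered readings")
record_stage(log, "aggregate", b"filtered readings", b"summary")
assert verify_chain(log)            # untouched chain verifies
log[0]["out"] = "tampered"          # altering a past record...
assert not verify_chain(log)        # ...is detected on re-verification
```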
Fast algorithm for real-time rings reconstruction
The GAP project is dedicated to studying the application of GPUs in several contexts in which
real-time response is important for decision making. The definition of real-time depends on
the application under study, ranging from response times of microseconds up to several hours
in the case of very computing-intensive tasks. During this conference we presented our work
on low-level triggers [1] [2] and high-level triggers [3] in high-energy physics experiments,
and on specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6].
Apart from the study of dedicated solutions to decrease the latency due to data transport
and preparation, the computing algorithms play an essential role in any GPU application.
In this contribution, we show an original algorithm developed for trigger applications to
accelerate ring reconstruction in RICH detectors when it is not possible to have seeds
for the reconstruction from external trackers.
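Seedless ring finding can be sketched with a Hough-style vote: every detector hit votes for each candidate centre whose distance to the hit is close to the known ring radius, and the most-voted cell wins. This is a sequential CPU illustration of the general idea only, not the GAP trigger algorithm, and the hit data and grid below are invented for the example.

```python
import math

def find_ring_center(hits, radius, grid, tol=0.25):
    """Hough-style seedless ring finding.

    Each hit votes for every candidate centre whose distance to the hit
    is within `tol` of the expected ring radius; the centre with the
    most votes is returned. Illustrative sketch only.
    """
    votes = {}
    for hx, hy in hits:
        for c in grid:
            if abs(math.hypot(hx - c[0], hy - c[1]) - radius) < tol:
                votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)

# Simulated hits on a ring of radius 3 centred at (5, 5)
hits = [(5 + 3 * math.cos(2 * math.pi * k / 12),
         5 + 3 * math.sin(2 * math.pi * k / 12)) for k in range(12)]
# Candidate centres on a half-unit grid over [0, 10] x [0, 10]
grid = [(x / 2, y / 2) for x in range(21) for y in range(21)]
center = find_ring_center(hits, radius=3.0, grid=grid)  # -> (5.0, 5.0)
```

On a GPU the per-hit (or per-candidate-centre) vote loop maps naturally to one thread each, which is what makes this family of algorithms attractive for low-latency triggers.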
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)“ project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. When their level of abstraction rises to give a better discernment of the domain at hand, their representation gets increasingly demanding for computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
High-Performance and Power-Aware Graph Processing on GPUs
Graphs are a common representation in many problem domains, including engineering, finance, medicine, and scientific applications. Different problems map to very large graphs, often involving millions of vertices. Even though very efficient sequential implementations of graph algorithms exist, they become impractical when applied to such very large graphs. On the other hand, graphics processing units (GPUs) have become widespread architectures as they provide massive parallelism at low cost. Parallel execution on GPUs may achieve speedups of up to three orders of magnitude with respect to the sequential counterparts. Nevertheless, accelerating efficient and optimized sequential algorithms and porting (i.e., parallelizing) their implementations to such many-core architectures is a very challenging task. The task is made even harder since energy and power consumption are becoming constraints in addition, or in some cases as an alternative, to performance. This work aims at developing a platform that provides (I) a library of parallel, efficient, and tunable implementations of the most important graph algorithms for GPUs, and (II) an advanced profiling model to analyze both the performance and power consumption of the algorithm implementations. The platform goal is twofold. Through the library, it aims at saving development effort in the parallelization task through a primitive-based approach. Through the profiling framework, it aims at customizing such primitives by considering both the architectural details and the target efficiency metrics (i.e., performance or power).
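The primitive-based approach described above can be illustrated with a level-synchronous breadth-first search, a staple primitive in GPU graph libraries: each iteration expands the entire current frontier at once, which a GPU parallelizes with one thread (group) per frontier vertex. The sketch below is sequential Python and purely illustrative; it is not code from the platform the abstract describes.

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS over an adjacency-list dict.

    Expands the whole frontier per iteration -- the pattern GPU graph
    libraries map to a parallel-for over frontier vertices. Returns the
    BFS level (hop distance) of every reachable vertex.
    """
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:              # parallel-for on a GPU
            for v in adj.get(u, ()):
                if v not in level:      # first visit fixes the level
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
levels = bfs_levels(adj, 0)   # -> {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The frontier structure is also where the performance/power trade-offs mentioned in the abstract surface: frontier size per level determines how much parallelism each iteration exposes.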