Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts
Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain and often provide unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches in providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks and an extension toward the object-oriented modeling paradigm.
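The dynamic-tick idea behind the timed-automaton extension can be sketched generically: instead of ticking at a fixed period, the runtime computes the next tick from the earliest pending timing deadline of the current state. The sketch below is a minimal illustration in Python; the `next_tick` function and the deadline representation are assumptions of this sketch, not SCCharts or Lingua Franca APIs.

```python
# Illustrative sketch of "dynamic ticks": the runtime sleeps until the
# earliest deadline among the timing constraints currently active,
# rather than polling at a fixed period. Names are hypothetical.

def next_tick(now, active_deadlines):
    """Return the time of the next tick: the earliest pending deadline.

    active_deadlines: absolute times at which a timed transition may fire.
    Returns None if no timed behavior is pending (purely event-driven).
    """
    pending = [d for d in active_deadlines if d > now]
    return min(pending) if pending else None

# A fixed-period runtime would tick at now + period regardless of need;
# dynamic ticks jump straight to the next relevant instant.
print(next_tick(0.0, [5.0, 2.5, 7.0]))  # -> 2.5
print(next_tick(3.0, [2.5]))            # -> None
```

The efficiency gain is that states with distant deadlines cause no intermediate wake-ups, which matters for resource-constrained embedded targets.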
Serverless Strategies and Tools in the Cloud Computing Continuum
In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity led to new services to solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model.
FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications. The problem with FaaS is that it focuses on microservices and tends to have limitations regarding the execution time and the computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to broader applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file processing workflows (e.g. scientific computing workflows).
Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks and the need to reduce latency in challenging use cases have led to the concept of Edge computing. Edge computing consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm together with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined as Cloud Computing Continuum (or Computing Continuum).
Therefore, this PhD thesis aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved.
Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
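The event-driven FaaS execution model described above can be illustrated with a minimal handler sketch: a function is invoked once per event (for example, a file upload), performs one task, and reports where its output went, making it usable as a step in a file-processing workflow. The event shape and field names are illustrative assumptions, not any specific Cloud provider's API.

```python
# Hypothetical FaaS-style handler acting as one step of a file-processing
# workflow. The platform invokes handler(event) per incoming event; the
# function itself holds no server state and is billed per invocation.

def handler(event):
    bucket = event["bucket"]
    key = event["key"]
    # In a real platform the object would be fetched from object storage
    # and processed here; this sketch only derives the output location
    # to show the step's input/output contract.
    out_key = key.rsplit(".", 1)[0] + ".processed"
    return {"bucket": bucket, "output_key": out_key}

print(handler({"bucket": "inbox", "key": "scan01.tiff"}))
# -> {'bucket': 'inbox', 'output_key': 'scan01.processed'}
```

Chaining such handlers on storage events is what makes FaaS a natural fit for the scientific file-processing workflows the thesis targets.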
Specialized Translation at Work for a Small Expanding Company: My Experience of Internationalization into Chinese for Bioretics© S.r.l.
Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic, and socio-cultural barriers. In recent decades, another domain has appeared to propel these unifying drives: Artificial Intelligence, together with its high technologies aiming to implement human cognitive abilities in machinery. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation has been conceived. Its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website, and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market both in Italy and in China; Chapter 3 provides the theoretical foundations for every aspect related to Specialized Translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.
Multi-omics of AML
Acute myeloid leukemia (AML) is one of the most aggressive hematopoietic malignancies and has been recognized as a heterogeneous disease due to a lack of unifying characteristics. It is driven by different genome aberrations, gene expression changes, and epigenomic dysregulations. Therefore, a multi-omics approach is needed to unravel the complex biology of this disease. This thesis deals with the challenges of identifying driver events that account for differences in clinical phenotypes and responses to treatment. The work presented here investigates the driver events of AML and epigenetic drug response profiles. The thesis consists of three main projects. The first study identifies recurrent mutations in AML carrying t(8;16)(p11;p13), a rare abnormality. The second project identifies prospective drivers of mutation-negative normal-karyotype AML (nkAML). The third project concentrates on epigenetic changes after treatment with AML drugs.
t(8;16) AML is a rare and distinguishable clinicopathological entity. Previous reports on the characteristics of patients with this type of AML suggest that the t(8;16) translocation alone could be sufficient to induce hematopoietic cell transformation to AML without the acquisition of other genetic alterations. Therefore, I evaluate the frequently mutated genes in this cohort and compare them with the most frequently mutated genes in AML in general and in AML carrying the t(8;16) translocation. An FLT3 mutation, a potential target for therapy with tyrosine kinase inhibitors, was found in 3 patients of my cohort. A further notable finding was mutations in EYS, KRTAP9-1, PSIP1, and SPTBN5, which have been depicted earlier in AML.
Elucidating different layers of aberrations in normal-karyotype, no-driver acute myeloid leukemia provides better insight into its biology and may impact risk-group stratification and reveal new potential driver events. This study therefore aimed to detect such anomalies in samples without known driver genetic abnormalities using multi-omic molecular profiling. Samples were analyzed using RNA sequencing (n=43), whole genome sequencing (n=43), and the EPIC DNA methylation array (n=42). In 33 of 43 patients, all three layers of data were available. I developed a pipeline that looks for a driver in any layer of data by connecting the information from all layers and utilizing public genomic, transcriptomic, and clinical data available from TCGA. Genetic alterations of somatic cells can drive malignant clone formation and promote leukemogenesis. Therefore, I first built a mutation prioritization workflow that checks each patient’s genomic mutation drivers. Here I use the allele frequency of each specific mutation, combining information from WGS and RNA sequencing data. Finally, I compared each mutation at a positional level with AML and other TCGA cancer cohorts to assess the causative genomic mutations. I found potential driver stopgain mutations in genes implicated in chromosome segregation during mitosis and in some tumor suppressor genes, including new stopgain mutations in the cancer genes NIPBL and NF1. Since fusions are increasingly acknowledged as oncology therapeutic targets, I investigated potential driver fusion events by evaluating high-confidence, in-frame, cancer-related fusion findings. As a result, I found specific gene fusion patterns. Kinases activated by gene fusions define a meaningful class of oncogenes associated with hematopoietic malignancies, and I identified several novel and recurrent fusions involving kinases that potentially play a role in leukemogenesis. I also detected previously unreported fusions involving known cancer-related genes, such as PIM3-RAC2 and PROK2-EIF4E3. In addition, outliers in gene expression levels can pinpoint potential pathogenic events; therefore, combining my AML cohort with a healthy control group, I determined aberrant gene expression levels as possible pathogenic events using a deep learning method. Finally, I combined the data and compared it with the methylation pattern of each patient. Overall, the analysis uncovered a rich landscape of potential drivers. In different data layers, I found altered genomic and transcriptomic signatures of different GTPases, which are known to be involved in many stages of tumorigenesis. My methods and results demonstrate the power of integrating multi-omics data to study complex driver alterations in AML and point to future directions of research that aim to bridge gaps between research and clinical applications.

Furthermore, I provide in vitro evidence for antileukemic cooperativity and epigenetic activity between DAC and ATRA. I performed differential methylation analysis at CpG resolution and across genomic and transposable element regions, enhancing the results’ statistical power and interpretability. I demonstrated that single-agent ATRA caused no global demethylation, nor did ATRA improve the demethylation mediated by DAC.

In summary, multi-omics profiling is a powerful tool for studying dysregulated patterns in AML, and multi-omics profiling performed on mutation-negative nkAML reveals several promising drivers. My findings not only augment our understanding of the heterogeneity landscape of AML but may also have immediate implications for new targeted therapy studies.
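One prioritization signal in the mutation workflow described above, comparing a mutation's allele frequency between WGS and RNA-seq, can be sketched as follows. The thresholds, field names, and decision rule below are illustrative assumptions, not the thesis pipeline's actual parameters: a mutation supported at a comparable frequency in the transcriptome is more likely to be expressed and thus functionally relevant.

```python
# Illustrative sketch: flag a somatic mutation as an expressed candidate
# driver when its variant allele frequency (VAF) in RNA-seq is not
# strongly depleted relative to its VAF in WGS. All thresholds are
# hypothetical choices for this sketch.

def is_expressed_candidate(wgs_alt, wgs_depth, rna_alt, rna_depth,
                           min_depth=10):
    if wgs_depth < min_depth or rna_depth < min_depth:
        return False  # insufficient coverage to estimate either VAF
    wgs_vaf = wgs_alt / wgs_depth
    rna_vaf = rna_alt / rna_depth
    # keep mutations whose expression is not silenced relative to DNA
    # and that are present at a credible clonal fraction in DNA
    return rna_vaf >= 0.5 * wgs_vaf and wgs_vaf > 0.05

print(is_expressed_candidate(20, 50, 15, 40))  # VAFs 0.40 / 0.375 -> True
print(is_expressed_candidate(20, 50, 1, 60))   # RNA VAF ~0.017   -> False
```

In a full pipeline, candidates passing this filter would then be compared positionally against TCGA cohorts, as the text describes.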
Electromyography Device for User Authentication
A method and system for user authentication. The method includes receiving input data from an electromyography (EMG) sensor included in a client device worn by a user. The method includes extracting an EMG signal from the input data received as the user performs a task. The method includes identifying features of the task based on the EMG signal. The method includes determining whether the features of the task match features of a reference task stored on the client device. The method includes determining whether the user embeddings extracted from the EMG signal match reference embeddings corresponding to the reference task. The method includes, if the features of the task match the features of the reference task and the user embeddings match the reference embeddings, determining that the user is an authorized user and allowing access to the client device.
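The two-stage check the abstract describes, task features against the stored reference task, then user embeddings against reference embeddings, can be sketched as below. Cosine similarity and the 0.9 threshold are illustrative assumptions, not the patented method's actual matching function or parameters.

```python
# Sketch of two-stage EMG authentication: both the task match and the
# user-embedding match must pass before access is allowed.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(task_feats, ref_task_feats, user_emb, ref_emb, thr=0.9):
    task_ok = cosine(task_feats, ref_task_feats) >= thr  # stage 1
    user_ok = cosine(user_emb, ref_emb) >= thr           # stage 2
    return task_ok and user_ok

print(authenticate([1.0, 0.2], [1.0, 0.21], [0.5, 0.5], [0.5, 0.5]))  # True
print(authenticate([1.0, 0.2], [0.1, 1.0], [0.5, 0.5], [0.5, 0.5]))   # False
```

Requiring both stages means a replayed signal from the wrong task fails even if it came from the right user, and vice versa.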
The Universal Safety Format in Action: Tool Integration and Practical Application
Designing software that meets the stringent requirements of functional safety standards imposes a significant development effort compared to conventional software. A key aspect is the integration of safety mechanisms into the functional design to ensure a safe state during operation even in the event of hardware errors. These safety mechanisms can be applied at different levels of abstraction during the development process and are usually implemented and integrated manually into the design. This not only causes significant effort but also reduces the overall maintainability of the software. To mitigate this, we present the Universal Safety Format (USF), which enables the generation of safety mechanisms based on the separation of concerns principle in a model-driven approach. Safety mechanisms are described as generic patterns using a transformation language independent from the functional design or any particular programming language. The USF was designed to be easily integrated into existing tools and workflows that can support different programming languages. Tools supporting the USF can utilize the patterns in a functional design to generate and integrate specific safety mechanisms for different languages using the transformation rules contained within the patterns. This enables not only the reuse of safety patterns in different designs, but also across different programming languages. The approach is demonstrated with an automotive use case as well as different tools supporting the USF.
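The separation-of-concerns idea behind the USF can be illustrated with a small sketch: a safety mechanism is written once as a generic pattern and woven into the functional design, rather than hand-coded at each site. Here the pattern is redundant execution with comparison, and a Python decorator stands in for the USF's language-independent transformation rules; this is an analogy, not the USF itself.

```python
# Generic safety pattern, written once: run the computation twice and
# compare results (detects transient hardware faults); raise to trigger
# a transition to a safe state. The decorator plays the role of the
# transformation that weaves the pattern into the functional design.

def redundant(func):
    def wrapped(*args):
        a, b = func(*args), func(*args)
        if a != b:
            raise RuntimeError("safety violation: diverging results")
        return a
    return wrapped

@redundant
def brake_torque(speed):       # functional design stays untouched
    return 0.3 * speed + 12.0

print(brake_torque(100.0))  # -> 42.0
```

The payoff mirrors the USF's: the same pattern can be applied to any function in the design, and (in the USF's case) regenerated for different target languages, without touching the functional code.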
Fish aggregating devices (FADs) as conservation tools: understanding community dynamics at pelagic moored FADs
The pelagic ocean covers the majority of the planet and is the largest ecosystem by volume. It is estimated to harbor considerable biodiversity, and a few select species support some of the largest fisheries in existence. The expanse of the open ocean provides important resources and services to humans, and also poses challenges to understanding the biology and ecology of its resident species. Focusing effort, both fishing and research, on hotspots or other aggregation sites increases the feasibility of interacting with these often patchily-distributed animals. Fish aggregating devices (FADs) have become a widespread fishing tool in many of the world’s oceans. Leveraging the natural behavior of many fish species to aggregate around floating or submerged structures, FADs are used to increase the capture efficiency in a range of marine fisheries and the scale of their use has raised concerns around potential effects these artificial structures have on the ecosystems in which they are used. Research efforts have focused on understanding these impacts by taking advantage of the fisheries-related opportunities and data made available by these fishing tools and the fleets that use them. However, this may potentially bias studies towards fishing hotspots and larger, commercially important species. Here, we discuss how subsurface FADs, purpose-built and discretely deployed, can act as useful research platforms to address important pelagic ecology questions and conservation topics. We describe the colonization of new FADs and the aggregation fluctuations through long-term video and visual surveys, provide evidence for invertebrate micronekton aggregation as a potential mechanism behind fish attraction to FADs, and detail a new acoustic telemetry array design that can provide previously unavailable position metrics of tagged fish in the open ocean, a notably challenging habitat to study. 
These new data and scientific tools will allow for the continued and enhanced study of the pelagic ecosystem and the diversity of species that inhabit it.
Enabling dynamic and intelligent workflows for HPC, data analytics, and AI convergence
The evolution of High-Performance Computing (HPC) platforms enables the design and execution of progressively larger and more complex workflow applications in these systems. The complexity comes not only from the number of elements that compose the workflows but also from the type of computations they perform. While traditional HPC workflows target simulations and modelling of physical phenomena, current needs also require data analytics (DA) and artificial intelligence (AI) tasks. However, the development of these workflows is hampered by the lack of proper programming models and environments that support the integration of HPC, DA, and AI, as well as the lack of tools to easily deploy and execute the workflows in HPC systems. To progress in this direction, this paper presents use cases where complex workflows are required and investigates the main issues to be addressed for HPC/DA/AI convergence. Based on this study, the paper identifies the challenges of a new workflow platform to manage complex workflows. Finally, it proposes a development approach for such a workflow platform addressing these challenges in two directions: first, by defining a software stack that provides the functionalities to manage these complex workflows; and second, by proposing the HPC Workflow as a Service (HPCWaaS) paradigm, which leverages the software stack to facilitate the reusability of complex workflows in federated HPC infrastructures. Proposals presented in this work are subject to study and development as part of the EuroHPC eFlows4HPC project.
This work has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 955558. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Spain, Germany, France, Italy, Poland, Switzerland and Norway.
In Spain, it has received complementary funding from MCIN/AEI/10.13039/501100011033, Spain and the European Union NextGenerationEU/PRTR (contracts PCI2021-121957, PCI2021-121931, PCI2021-121944, and PCI2021-121927). In Germany, it has received complementary funding from the German Federal Ministry of Education and Research (contracts 16HPC016K, 6GPC016K, 16HPC017 and 16HPC018). In France, it has received financial support from Caisse des dépôts et consignations (CDC) under the action PIA ADEIP (project Calculateurs). In Italy, it has been preliminarily approved for complementary funding by Ministero dello Sviluppo Economico (MiSE) (ref. project prop. 2659). In Norway, it has received complementary funding from the Norwegian Research Council under project number 323825. In Switzerland, it has been preliminarily approved for complementary funding by the State Secretariat for Education, Research, and Innovation (SERI). In Poland, it is partially supported by the National Centre for Research and Development under decision DWM/EuroHPCJU/4/2021. The authors also acknowledge financial support by MCIN/AEI/10.13039/501100011033, Spain through the “Severo Ochoa Programme for Centres of Excellence in R&D” under Grant CEX2018-000797-S, the Spanish Government, Spain (contract PID2019-107255 GB) and by Generalitat de Catalunya, Spain (contract 2017-SGR-01414). Anna Queralt is a Serra Húnter Fellow. With funding from the Spanish government through the ‘Severo Ochoa Centre of Excellence’ accreditation (CEX2018-000797-S).
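The kind of converged workflow the paper targets can be pictured as a dependency graph whose steps mix simulation, data analytics, and AI, executed in dependency order. The tiny scheduler below, using Python's standard-library `graphlib`, is a generic stand-in for illustration only, not the eFlows4HPC software stack or the HPCWaaS interface.

```python
# Illustrative HPC/DA/AI workflow as a dependency graph: each key maps
# to the set of steps it depends on; execution follows topological order.
from graphlib import TopologicalSorter

steps = {
    "simulate": set(),                 # HPC simulation of a phenomenon
    "analyze": {"simulate"},           # data analytics on its output
    "train": {"analyze"},              # AI model training on features
    "predict": {"train", "simulate"},  # inference fed by model + raw data
}

order = list(TopologicalSorter(steps).static_order())
print(order)  # -> ['simulate', 'analyze', 'train', 'predict']
```

A workflow platform of the kind proposed would layer deployment, data movement, and reuse (the "Workflow as a Service" part) on top of exactly this sort of dependency structure.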
Assisting Users in an Emergency Situation Using Geolocation Data and SOS Button
The disclosure describes techniques to automatically determine the presence of an emergency situation such as a natural disaster (e.g., an earthquake, cyclone, or other event) or man-made crisis (e.g., an accident or other event) using machine learning and artificial intelligence techniques. When an emergency situation is determined to be present in a location, users of a social media platform or other online service within a crisis radius of the location are identified based on geolocation data from user devices. The users are provided with an alert regarding the emergency situation and an SOS button or other mechanism that can be used to generate a distress signal. Location information and user profile information of the identified users is transmitted to a third party such as emergency response services in response to the SOS button being triggered by the user. The user is directly connected with the third party via text, audio, or video chat, enabling rescue teams and emergency services to communicate with the user. The described techniques leverage the widespread availability of portable user devices and the record of users available to an online service such as a social media platform to quickly alert users to an emergency situation, enable them to generate a distress signal, and connect them to emergency responders. The techniques also guide emergency responders to locations where users are in distress and can help reduce their response time.
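The "within a crisis radius" step above amounts to a great-circle distance check from each user's last known location to the event epicenter. The sketch below uses the standard haversine formula; the radius, coordinates, and user-record shape are made-up illustrations, not the disclosed system's data model.

```python
# Select users whose last known location lies within radius_km of the
# epicenter, using the haversine great-circle distance.
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def users_in_radius(epicenter, users, radius_km):
    return [uid for uid, (lat, lon) in users.items()
            if haversine_km(*epicenter, lat, lon) <= radius_km]

users = {"u1": (35.68, 139.69), "u2": (34.69, 135.50)}  # Tokyo, Osaka
print(users_in_radius((35.65, 139.80), users, 50.0))  # -> ['u1']
```

The selected users are the ones who would then receive the alert and the SOS mechanism described in the disclosure.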
Adversarial Deep Learning and Security with a Hardware Perspective
Adversarial deep learning is the field of study which analyzes deep learning in the presence of adversarial entities. This entails understanding the capabilities, objectives, and attack scenarios available to the adversary in order to develop defensive mechanisms and avenues of robustness available to the benign parties. Understanding this facet of deep learning helps us improve the safety of deep learning systems against external threats from adversaries. Of equal importance, this perspective also helps the industry understand and respond to critical failures in the technology. The expectation of future success has driven significant interest in developing this technology broadly. Adversarial deep learning stands as a balancing force to ensure these developments remain grounded in the real world and proceed along a responsible trajectory. Recently, the growth of deep learning has begun intersecting with the computer hardware domain to improve performance and efficiency for resource-constrained application domains. The works investigated in this dissertation constitute our pioneering efforts in migrating adversarial deep learning into the hardware domain alongside its parent field of research.
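A canonical attack studied in this field is the fast gradient sign method (FGSM), which nudges each input feature by a small budget in the direction that increases the loss. The sketch below shows the mechanics on a toy linear model with a hand-derived gradient; the model, numbers, and budget are illustrative, not anything from the dissertation.

```python
# FGSM sketch: perturb x by eps in the sign of the loss gradient.
# A toy linear "model" keeps the gradient derivable by hand.

def fgsm(x, grad, eps):
    """Return x perturbed by eps in the sign of the loss gradient."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [2.0, -1.0, 0.5]                 # toy model: score(x) = w . x
x = [0.1, 0.3, -0.2]
score = lambda v: sum(wi * vi for wi, vi in zip(w, v))

# If the attacker's loss is -score, its gradient w.r.t. x is -w.
x_adv = fgsm(x, [-wi for wi in w], eps=0.1)
print(score(x), score(x_adv))  # the perturbation lowers the score
```

Hardware-oriented adversarial research then asks how such perturbation-based attacks, and their defenses, behave once models run on quantized, accelerated, or otherwise resource-constrained platforms.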