
    Colour technologies for content production and distribution of broadcast content

    The requirement of colour reproduction has long been a priority driving the development of new colour imaging systems that maximise human perceptual plausibility. This thesis explores machine learning algorithms for colour processing to assist both content production and distribution. First, this research studies colourisation technologies with practical use cases in the restoration and processing of archived content. The research targets practical, deployable solutions, developing a cost-effective pipeline that integrates the activity of the producer into the processing workflow. In particular, a fully automatic image colourisation paradigm using conditional GANs is proposed to improve the content generalisation and colourfulness of existing baselines. Moreover, a more conservative solution is considered, providing references to guide the system towards more accurate colour predictions. A fast end-to-end architecture is proposed that improves on existing exemplar-based image colourisation methods while decreasing complexity and runtime. Finally, the proposed image-based methods are integrated into a video colourisation pipeline. A general framework is proposed to reduce temporal flickering and the propagation of errors when such methods are applied frame by frame. The proposed model is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Second, this research explores colour processing technologies for content distribution, with the aim of delivering the processed content effectively to a broad audience. In particular, video compression is tackled by introducing a novel methodology for chroma intra prediction based on attention models. Although the proposed architecture helped to gain control over the reference samples and better understand the prediction process, the complexity of the underlying neural network significantly increased the encoding and decoding time. Therefore, aiming at efficient deployment within the latest video coding standards, this work also focuses on simplifying the proposed architecture to obtain a more compact and explainable model.
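As a concrete illustration of the property the colourisation work aims to improve, the widely used Hasler-Süsstrunk colourfulness metric scores an RGB image from its opponent-colour statistics. This is a hypothetical example, not necessarily the measure used in the thesis:

```python
import numpy as np

def colourfulness(img):
    """Hasler-Suesstrunk colourfulness of an RGB image (H, W, 3), values in [0, 255]."""
    r, g, b = img[..., 0].astype(float), img[..., 1].astype(float), img[..., 2].astype(float)
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    std = np.hypot(rg.std(), yb.std())
    mean = np.hypot(rg.mean(), yb.mean())
    return std + 0.3 * mean

# A grayscale image (r == g == b) has zero colourfulness.
grey = np.full((8, 8, 3), 128)
saturated = np.zeros((8, 8, 3))
saturated[..., 0] = 255  # pure red patch
assert colourfulness(grey) == 0.0
assert colourfulness(saturated) > colourfulness(grey)
```

A baseline and an improved colouriser can then be compared by scoring their outputs on the same greyscale inputs.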

    Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse

    This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective that mitigates such weaknesses. This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features, which emerge through socialisation, are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and hence of groups. In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users' speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018 (6 November being the date of the midterm, i.e. non-Presidential, elections in the United States). The corpus is assembled from the original posts of 5,889 users, who are nominally geolocated to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
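The clustering input described above can be sketched in miniature: build a per-user vector of function-word frequencies (a crude stand-in for the thesis' far richer feature set; the word list here is an illustrative assumption) and compare users by cosine similarity:

```python
from collections import Counter
import math

# Hypothetical function-word list; the actual study uses richer lexicogrammatical features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is"]

def profile(text):
    """Relative frequencies of function words -- a crude 'pervasive pattern' vector."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors (0.0 when either is all-zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Users with similar function-word habits score closer to 1.
u1 = profile("the cat sat on the mat and the dog barked")
u2 = profile("the rain fell on the roof and the wind howled")
u3 = profile("buy now discount sale limited offer")
assert cosine(u1, u2) > cosine(u1, u3)
```

In the study proper, such vectors would feed a statistical clustering step across thousands of users rather than a pairwise comparison.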

    Countermeasures for the majority attack in blockchain distributed systems

    Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to unique characteristics that make it ideal for recording, verifying, and managing information about different transactions. Despite this, Blockchain faces several security problems, among which the 51% or majority attack is one of the most important: one or more miners take control of at least 51% of the hash power or computation in a network, allowing a miner to arbitrarily manipulate and modify the information recorded in this technology. This work focused on designing and implementing strategies for detecting and mitigating majority (51%) attacks in a Blockchain distributed system, based on characterising the behaviour of miners. To achieve this, the hash rate/share of Bitcoin and Ethereum miners was analysed and evaluated, followed by the design and implementation of a consensus protocol to control the computational power of miners. Subsequently, Machine Learning models were explored and evaluated for detecting Cryptojacking-type malicious software.
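The detection side of this idea can be sketched as a simple hash-share monitor. The function name and interface below are hypothetical; the thesis' protocol operates on live Bitcoin and Ethereum mining data:

```python
def detect_majority_risk(hash_shares, threshold=0.51):
    """Flag miners (or pools) whose share of total hash rate could enable a majority attack.

    hash_shares: dict mapping miner/pool id -> observed hash rate (any consistent unit).
    Returns the list of ids at or above the threshold share.
    """
    total = sum(hash_shares.values())
    if total == 0:
        return []  # no observed work, nothing to flag
    return [m for m, h in hash_shares.items() if h / total >= threshold]

# pool_b controls 55% of the observed hash rate and is flagged.
pools = {"pool_a": 30, "pool_b": 55, "pool_c": 15}
assert detect_majority_risk(pools) == ["pool_b"]
assert detect_majority_risk({"a": 40, "b": 40, "c": 20}) == []
```

A mitigating consensus protocol would then act on the flagged miners, e.g. by discounting or refusing their blocks until their share falls.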

    Development of a comprehensive greenhouse gas management tool for decision-making against climate change at the regional and local level in the Comunitat Valenciana

    Currently, decision-makers working against climate change lack tools to develop greenhouse gas (GHG) emission inventories with sufficient scientific and technical rigour, accuracy, and completeness to prioritise and invest the available resources efficiently in the measures needed to fight climate change. This thesis (a compendium of publications) therefore presents the development of a territorial and sectoral information system (SITE) for monitoring GHG emissions, intended as a local and regional climate-governance tool. SITE combines the advantages of the top-down and bottom-up methodological approaches to achieve an innovative hybrid approach for accounting and managing GHG emissions efficiently. The thesis defines the methodological developments, both general and specific to key Intergovernmental Panel on Climate Change (IPCC) sectors (buildings, transport, forestry, etc.), a software development for the server-side part of the system (hereafter its back-end), and seven implementations as representative case studies at different scales and applied to different sectors. These implementations demonstrate the system's potential as a decision-support tool against climate change at the regional and local level. The representative pilot cases, both at the regional level in the Comunitat Valenciana and at the local level in large (València) and medium-sized (Quart de Poblet and Llíria) municipalities, show the tool's capacity for territorial and sectoral adaptation. The methodologies developed for the specific sectors of road traffic, buildings, and forestry offer quantifications at a spatial resolution with great capacity to optimise local and regional policies. The tool therefore has strong potential for scalability and continuous improvement through the inclusion of new methodological approaches, adaptation of the methodologies to data availability, specific methodologies for key sectors, and updating to the best available methodologies arising from the research activities of the scientific community. Lorenzo Sáez, E. (2022). Desarrollo de una herramienta integral de gestión de gases de efecto invernadero para la toma de decisión contra el cambio climático a nivel regional y local en la Comunitat Valenciana [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181662
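One simple way to picture hybrid top-down/bottom-up accounting is proportional reconciliation: distribute a regional top-down total across sectors in proportion to the bottom-up estimates. This is a strongly simplified sketch; SITE's actual hybrid methodology is considerably richer:

```python
def reconcile(top_down_total, bottom_up):
    """Scale bottom-up sector estimates so they sum to the top-down regional total.

    top_down_total: regional emissions total from a top-down inventory.
    bottom_up: dict mapping sector -> bottom-up emissions estimate (same unit).
    """
    bu_sum = sum(bottom_up.values())
    if bu_sum == 0:
        raise ValueError("no bottom-up data to distribute against")
    scale = top_down_total / bu_sum
    return {sector: est * scale for sector, est in bottom_up.items()}

# Bottom-up sector inventories (e.g. in ktCO2e) under-count a top-down total of 1000.
sectors = {"transport": 400, "buildings": 300, "forestry": 100}
adjusted = reconcile(1000, sectors)
assert round(sum(adjusted.values()), 6) == 1000
assert adjusted["transport"] == 500.0
```

The adjusted sector figures then retain the spatial and sectoral detail of the bottom-up data while staying consistent with the regional total.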

    The Adirondack Chronology

    The Adirondack Chronology is intended to be a useful resource for researchers and others interested in the Adirondacks and Adirondack history.

    Machine learning for managing structured and semi-structured data

    As the digitalization of private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, a deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g. whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance. These can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs. Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs. In this thesis, we present important advances in the field of knowledge graph completion. 
    For Entity Alignment, we show how novel active learning techniques can reduce the number of required seed alignments while maintaining performance. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve predictions for entities unseen during training by exploiting additional metadata on individual statements, often available in modern graphs. Supported by results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g. the interaction function or loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields such as the life sciences. Finally, we propose a novel metric for evaluating ranking results, as used in both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de facto standard evaluation protocols for both tasks.
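A metric of the kind described, which normalises a mean rank against its expectation under random scoring so that results with different candidate counts become comparable, can be sketched as follows. This is an illustrative formulation, not necessarily the thesis' exact definition:

```python
def adjusted_mean_rank_index(ranks, num_candidates):
    """Normalise a mean rank against the expectation under uniformly random scoring.

    Returns 1.0 for a perfect ranking, ~0.0 for random performance, and negative
    values for worse-than-random rankings, regardless of num_candidates.
    """
    mr = sum(ranks) / len(ranks)
    expected_mr = (num_candidates + 1) / 2  # mean rank of a random scorer
    return 1 - (mr - 1) / (expected_mr - 1)

# Perfect rankings score 1.0 whether there are 100 or 10,000 candidates,
# which raw mean rank does not provide.
assert adjusted_mean_rank_index([1, 1, 1], num_candidates=100) == 1.0
assert adjusted_mean_rank_index([1, 1, 1], num_candidates=10_000) == 1.0
assert abs(adjusted_mean_rank_index([50.5, 50.5], num_candidates=100)) < 1e-9
```

Because the scale is anchored at random performance, scores from evaluation protocols with different candidate-set sizes can be compared directly.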

    Antibody Targeting of HIV-1 Env: A Structural Perspective

    A key component of contemporary efforts toward a human immunodeficiency virus 1 (HIV-1) vaccine is the use of structural biology to understand the structural characteristics of antibodies elicited both from human patients and from animals immunized with engineered 'immunogens,' or early vaccine candidates. This thesis reports on projects characterizing both types of antibodies against HIV-1. Chapter 1 introduces relevant topics, including the reasons HIV-1 is particularly capable of evading the immune system in natural infection and after vaccination, the more than 20-year history of unsuccessful large-scale HIV-1 vaccine efficacy trials, an introduction to broadly neutralizing antibodies (bNAbs), and a review of common strategies used in HIV-1 immunogen design today. Chapter 2 describes the isolation, high-resolution structural characterization, and in vitro resistance profile of a new bNAb, 1-18, that is both very broad and potent, and able to restrict HIV-1 escape in vivo. Chapter 3 reports the results of an epitope-focusing immunogen design and immunization experiment carried out in wild-type mice, rabbits, and non-human primates, showing that B cells targeting the desired epitope were expanded after a single prime immunization with immunogen RC1 or a variant, RC1-4fill. Chapter 4 describes Ab1245, an off-target non-neutralizing monoclonal antibody isolated from a macaque that had been immunized with a series of sequential immunogens after the prime immunization reported in Chapter 3. The structure reveals a specific type of distracting response: the antibody binds in a way that causes a large structural change in Env, destroying the neutralizing fusion peptide epitope. Chapter 5 is adapted from a review of how antibodies differentially recognize the viruses HIV-1, SARS-CoV-2, and Zika virus. This review serves as an introduction to SARS-CoV-2, the topic of the final chapter, Chapter 6. In that chapter, structures of many neutralizing antibodies isolated from SARS-CoV-2 patients were used to define potentially therapeutic classes of neutralizing receptor-binding domain (RBD) antibodies based on their epitopes and binding profiles.

    AIUCD 2022 - Proceedings

    The eleventh edition of the National Conference of the AIUCD (Associazione di Informatica Umanistica) is titled Culture digitali. Intersezioni: filosofia, arti, media ('Digital cultures. Intersections: philosophy, arts, media'). The title explicitly calls for methodological and theoretical reflection on the interrelation between digital technologies, information sciences, philosophical disciplines, the world of the arts, and cultural studies.

    Systems methods for analysis of heterogeneous Glioblastoma datasets towards elucidation of inter-tumoural resistance pathways and new therapeutic targets

    This PhD thesis describes an endeavour to compile the literature on key Glioblastoma molecular mechanisms into a directed network following Disease Maps standards, analyse its topology, and compare the results with quantitative analyses of multi-omics datasets in order to investigate Glioblastoma resistance mechanisms. The work also included the implementation of data management good practices and procedures.
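A first-pass topology analysis of such a directed network can be sketched with plain degree counting, a common starting point for identifying hub nodes. The edge list below is a toy, hypothetical fragment, not the thesis' curated map:

```python
from collections import defaultdict

# Toy directed network of illustrative signalling edges (source -> target).
edges = [("EGFR", "PI3K"), ("PI3K", "AKT"), ("AKT", "mTOR"),
         ("PTEN", "PI3K"), ("EGFR", "RAS"), ("RAS", "ERK")]

def degree_centrality(edge_list):
    """Return {node: (in_degree, out_degree)} -- a first-pass measure for hub detection."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for src, dst in edge_list:
        outdeg[src] += 1
        indeg[dst] += 1
    nodes = set(indeg) | set(outdeg)
    return {n: (indeg[n], outdeg[n]) for n in nodes}

deg = degree_centrality(edges)
assert deg["EGFR"] == (0, 2)   # pure source: a candidate upstream hub
assert deg["PI3K"] == (2, 1)   # convergence point of two regulators
```

High-degree nodes found this way would then be cross-checked against the multi-omics quantifications to prioritise candidate resistance pathways.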