724 research outputs found
From a causal representation of multiloop scattering amplitudes to quantum computing in the Loop-Tree Duality
The perturbative approach to Quantum Field Theories has successfully provided incredibly accurate theoretical predictions in high-energy physics. Despite the development of several techniques to boost the efficiency of these calculations, some ingredients remain a hard bottleneck. This is the case of multiloop scattering amplitudes, which describe the quantum fluctuations in high-energy scattering processes.
The Loop-Tree Duality (LTD) is a novel method aimed at overcoming these difficulties by opening loop amplitudes into connected tree-level diagrams. In this thesis we present three core achievements: the reformulation of the Loop-Tree Duality to all orders in the perturbative expansion, a general methodology to obtain LTD expressions that are manifestly causal, and the first flagship application of a quantum algorithm to Feynman loop integrals.
The proposed strategy to implement the LTD framework consists in the iterated application of Cauchy's residue theorem to a series of multiloop topologies with arbitrary internal configurations. We derive an LTD representation exhibiting a factorized cascade form in terms of simpler subtopologies characterized by a well-known causal behaviour. Moreover, through a clever approach we extract analytic dual representations that are explicitly free of noncausal singularities. These properties make it possible to open any scattering amplitude of up to five loops in a factorized form, with better numerical stability than other representations due to the absence of noncausal singularities. Last but not least, we establish the connection between Feynman loop integrals and quantum computing by encoding the two on-shell states of a Feynman propagator in the two states of a qubit. We propose a modified Grover's quantum algorithm to unfold the causal singular configurations of multiloop Feynman diagrams, which are used to bootstrap the causal LTD representation of multiloop topologies.
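As a purely illustrative sketch (not the algorithm developed in the thesis), the qubit encoding can be mimicked classically: one qubit per internal propagator, each basis state one on-shell configuration of the diagram, and Grover iterations amplifying a hypothetical set of "causal" configurations. The marked set and all names below are assumptions for illustration.

```python
import math

def grover_search(n_qubits, marked, iterations=None):
    """Classically simulate Grover's search over 2**n_qubits basis states.

    Each qubit stands for the two on-shell states of one internal
    propagator, so a basis state is one on-shell configuration of a
    multiloop diagram; `marked` lists the (hypothetical) causal ones."""
    N = 2 ** n_qubits
    if iterations is None:
        # Near-optimal iteration count, ~ (pi/4) * sqrt(N / #marked)
        iterations = int(math.pi / 4 * math.sqrt(N / len(marked)))
    amp = [1.0 / math.sqrt(N)] * N  # uniform superposition
    for _ in range(iterations):
        for m in marked:            # oracle: flip sign of causal states
            amp[m] = -amp[m]
        mean = sum(amp) / N         # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return {i: a * a for i, a in enumerate(amp)}  # outcome probabilities

# Toy example: 3 propagators, configurations 1 and 6 assumed causal
probs = grover_search(3, marked=[1, 6])
best = sorted(probs, key=probs.get, reverse=True)[:2]
```

With 2 marked states out of 8, a single Grover iteration already concentrates essentially all probability on the marked configurations.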
Out-of-Distribution Generalization of Deep Learning to Illuminate Dark Protein Functional Space
Dark protein illumination is a fundamental challenge in drug discovery: the majority of human proteins are understudied, i.e. they have a known protein sequence but no known small-molecule binder. This is a major roadblock to shifting the drug discovery paradigm from a single-targeted approach, which seeks to identify one target and design a drug to regulate it, to a multi-targeted approach from a Systems Pharmacology perspective. Diseases such as Alzheimer's and Opioid Use Disorder, which plague millions of patients, call for effective multi-targeted approaches involving dark proteins. Using limited protein data to predict dark protein properties requires deep learning systems with out-of-distribution (OOD) generalization capacity. OOD generalization is a problem hindering the application and adoption of deep learning in real-world problems. The classic deep learning setting, in contrast, assumes training and testing data are independent and identically distributed (iid). A model well trained under the iid setting with a reported 98% accuracy can deteriorate to worse than random guessing when deployed on OOD data significantly different from the training data. Numerous techniques have emerged in the research field, but each addresses only a specific OOD scenario rather than the general one. Dark protein illumination has unique complexity compared to common deep learning tasks: there are three OOD axes, namely protein-OOD, compound-OOD and interaction-OOD. Previous research has focused only on compound-OOD, where new compound design algorithms are developed, but still for 500 common proteins rather than the whole human genome of 20,000 proteins, and only for the single-targeted paradigm instead of the multi-targeted one. Focusing on an instrumental problem in drug discovery, the dark protein function illumination problem is introduced from the OOD perspective.
A series of dark protein OOD algorithms are developed to predict dark protein-ligand interactions, adapting multiple instrumental deep learning techniques to the biological context. By proposing the dark protein illumination problem, highlighting the neglected axes and demonstrating what is possible, this work offers new hope for numerous diseases.
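The protein-OOD axis can be made concrete with a sketch of the evaluation split it implies (an illustration of the idea, not the thesis's actual pipeline; all names and data are hypothetical): under an iid split, test pairs may reuse proteins seen in training, whereas a protein-OOD split holds out entire proteins, mimicking prediction for dark proteins with no known binder.

```python
import random

def iid_split(pairs, test_frac=0.2, seed=0):
    """Random split: test pairs may reuse proteins seen in training (iid)."""
    rng = random.Random(seed)
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def protein_ood_split(pairs, test_frac=0.2, seed=0):
    """Protein-OOD split: every test protein is unseen during training,
    mimicking prediction for dark proteins with no known binder."""
    rng = random.Random(seed)
    proteins = sorted({p for p, _ in pairs})
    rng.shuffle(proteins)
    cut = int(len(proteins) * (1 - test_frac))
    train_prot = set(proteins[:cut])
    train = [pc for pc in pairs if pc[0] in train_prot]
    test = [pc for pc in pairs if pc[0] not in train_prot]
    return train, test

# Hypothetical (protein, compound) interaction pairs
pairs = [(f"P{i}", f"C{j}") for i in range(10) for j in range(5)]
train, test = protein_ood_split(pairs)
# No protein appears on both sides of a protein-OOD split
assert not ({p for p, _ in train} & {p for p, _ in test})
```

A model that looks strong under `iid_split` can collapse under `protein_ood_split`, which is exactly the deployment gap described above.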
Cross-cultural patterns in mobile playtime: an analysis of 118 billion hours of human data
Despite the prevalence of gaming as a human activity, the literature on playtime is uninformed by large-scale, high-quality data. This has led to an evidence base in which the existence of specific gaming cultures (e.g. exceptional levels of gaming in East Asian nations) is not well supported by evidence. Here we address this evidence gap by conducting the world's first large-scale investigation of cross-cultural differences in mobile gaming via telemetry analysis. Our data cover 118 billion hours of playtime occurring in 214 countries and regions between October 2020 and October 2021. A cluster analysis establishes a data-driven set of cross-cultural groupings that describe differences in how the world plays mobile games. Despite contemporary arguments regarding Asian exceptionalism in terms of playtime, the analysis shows that many East Asian countries (e.g., China) were not highly differentiated from most high-GDP Northern European nations across several measures of play. Instead, a range of previously unstudied and highly differentiated cross-cultural clusters emerged from the data and are presented here, showcasing the diversity of global gaming.
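The cluster-analysis step can be sketched with a minimal k-means over per-country playtime features (an illustration only; the paper's actual features, algorithm and data are not specified here, and the feature names and numbers below are invented):

```python
import math

def kmeans(points, k, iters=50):
    """Minimal k-means: cluster countries by playtime feature vectors.
    Deterministic initialisation: evenly spaced input points."""
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        # Recompute each centroid as the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters

# Hypothetical features per country: (mean daily minutes, weekend share)
data = [(20 + i, 0.30) for i in range(5)] + [(90 + i, 0.60) for i in range(5)]
cents, cls = kmeans(data, k=2)
```

On these toy features the two groups separate immediately; real telemetry features would of course be far richer.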
Hadron Structure using Continuum Schwinger Function Methods
The vast bulk of visible mass emerges from nonperturbative dynamics within quantum chromodynamics (QCD) -- the strong interaction sector of the Standard Model. The past decade has revealed the three pillars that support this emergent hadron mass (EHM); namely, a nonzero gluon mass-scale, a process-independent effective charge, and dressed-quarks with constituent-like masses. Theory is now working to expose their manifold and diverse expressions in hadron observables and highlighting the types of measurements that can be made in order to validate the paradigm. In sketching some of these developments, this discussion stresses the role of EHM in forming nucleon electroweak structure and the wave functions of excited baryons through the generation of dynamical diquark correlations; producing and constraining the dilation of the leading-twist pion distribution amplitude; shaping pion and nucleon parton distribution functions -- valence, glue and sea, including the antisymmetry of antimatter; and moulding pion and proton charge and mass distributions.
Comment: 24 pages, 11 figures. Invited contribution to a Special Issue of Few Body Systems: "Emergence and Structure of Baryons -- Selected Contributions from the International Conference Baryons 2022"
Northeastern Illinois University, Academic Catalog 2023-2024
https://neiudc.neiu.edu/catalogs/1064/thumbnail.jp
Learning from Audio, Vision and Language Modalities for Affect Recognition Tasks
The world around us, as well as our responses to worldly events, are multimodal in nature. For intelligent machines to integrate seamlessly into our world, it is imperative that they can process and derive useful information from multimodal signals. Such capabilities can be provided to machines by employing multimodal learning algorithms that consider both the individual characteristics of unimodal signals and the complementarity provided by multimodal signals. Based on the number of modalities available during the training and testing phases, learning algorithms fall into three categories: unimodal trained and unimodal tested, multimodal trained and multimodal tested, and multimodal trained and unimodal tested. This thesis provides three contributions, one for each category, and focuses on three modalities that are important for human-human and human-machine communication, namely audio (paralinguistic speech), vision (facial expressions) and language (linguistic speech) signals. For several applications, either due to hardware limitations or deployment specifications, unimodal trained and tested systems suffice. Our first contribution, in the unimodal trained and unimodal tested category, is an end-to-end deep neural network that uses raw speech signals as input for a computational paralinguistic task, namely verbal conflict intensity estimation. Our model, which uses a convolutional recurrent architecture equipped with an attention mechanism to focus on task-relevant instances of the input speech signal, eliminates the need for task-specific metadata or domain-knowledge-based manual refinement of hand-crafted generic features. The second contribution, in the multimodal trained and multimodal tested category, is a multimodal fusion framework that exploits both cross- (inter-) and intra-modal interactions for categorical emotion recognition from audiovisual clips.
We explore the effectiveness of two types of attention mechanisms, namely intra- and cross-modal attention, by creating two versions of our fusion framework. In many applications, multimodal signals might be available during the model training phase, yet we cannot expect the availability of all modality signals during the testing phase. Our third contribution addresses this situation: we propose a framework for cross-modal learning in which paired audio-visual instances are used during training to develop test-time stand-alone unimodal models.
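The cross-modal attention the abstract refers to can be sketched as plain scaled dot-product attention where queries come from one modality and keys/values from another (a minimal illustration, not the thesis's architecture; the toy vectors and modality roles below are assumed):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention where queries come from one modality
    (e.g. audio frames) and keys/values from another (e.g. video frames)."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this audio query to every video key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Attended output: weight-averaged video values
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Hypothetical toy example: one audio query attends over two video frames
out = cross_modal_attention([[1.0, 0.0]],
                            [[1.0, 0.0], [0.0, 1.0]],
                            [[1.0, 0.0], [0.0, 1.0]])
```

The query aligned with the first key receives the larger weight, so the attended output leans toward the first value vector; intra-modal attention is the same computation with queries, keys and values drawn from one modality.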
Relevance theory for the construction of fictitious personas: approximation and application
This thesis proposes an approximation of relevance theory to the construction of fictitious personas. The content industry has, as its driving forces, the writer, technological tools, and strategic tools, such as the use of fictitious personas for the production of content.
However, the current form of textual production on the Web has generated an explosion of content that is not being consumed by Internet users. Faced with this problem, there is a need for research that can contribute to the construction of personas in which the interpretation process of the user/reader is examined, in order to go beyond strategies aimed solely at capturing the user's attention, and that is capable of being reflected in the production of relevant content that effectively meets the needs and desires of Internet users. To respond to this challenge, we chose the following guiding questions for our research: (i) Is it possible to approach Sperber and Wilson's theory of relevance to serve as a parallel instrument to existing approaches in the area of constructing fictitious personas?; (ii) What are the foundations of relevance theory that can contribute to the methodology for creating fictitious personas?; (iii) What are the main methods currently used for the production of fictitious personas?; (iv) How can relevance theory approach current methods of constructing fictitious personas?; (v) How can one perform an experiment based on the proposed technique/process? These questions serve as steps, reflected in the chapters of the thesis, toward its objective: to propose the insertion of devices of the cognitive process of interpretation, extracted from relevance theory, to assist in the methodology for constructing fictitious personas. The initial hypothesis is that the concepts of relevance theory can contribute to the creation of personas. The thesis is divided into two parts: one theoretical and the other an empirical study. In the theoretical part, we have two chapters: in the first, we present the concepts of relevance theory that can contribute to the elaboration of fictitious personas; and, in the second, we present the main methods currently used for the production of fictitious personas, addressing the central aspects that characterize them.
In the empirical study, we have three chapters: the first presents the devices drawn from relevance theory that support our proposal for constructing fictitious personas; the second presents the methodology of the applied empirical study; and the third reads and discusses the data collected in the application of the empirical study. The empirical study has, as its sample, students of the 1st Cycle of the Communication Sciences Course at Fernando Pessoa University and takes place in two stages. In the first, based on interviews, we created two personas. In the second stage, we validated these personas by applying a questionnaire with two categories of different titles, considering their relevance in the context of the created personas. The findings suggest that relevance theory can indeed support the creation of fictitious personas, contributing to what already exists in the current literature; in particular, we add a mental map to the templates already existing in the market, instruments that can also help keep personas constantly up to date, as recommended by the main research in the area.
Optimising the energy performance of the residential stock of the Kingdom of Saudi Arabia by retrofit measures
Building energy demands and greenhouse gas emissions are rising, and a variety of energy efficiency frameworks, legislation, and housing approvals have evolved worldwide. The KSA is one of the largest energy producers and consumers internationally, with the residential sector using 52% of total energy generation. The KSA government has begun energy efficiency initiatives and policies that intend to reduce residential energy demands via a series of regulations, including Vision 2030 and the KSA building code. The regulations aim to assess the energy performance of residential buildings in order to lower energy demands and greenhouse gas emissions to meet international carbon emissions requirements. The KSA aims to generate 9.5 GW from renewable energy by 2023, and 58.7 GW by 2030, which accounts for about 30% of total energy generation capacity. Research has shown that in order to effectively reduce energy demands and achieve worldwide carbon emissions targets, large-scale implementation interventions are required.
The KSA housing stock consists of 3.6 million wide-ranging and varied residences, reflecting the country's diverse terrain. The diversity of KSA dwellings encompasses housing type, age, number of rooms and bedrooms, and floor areas, while common characteristics include construction materials and energy and cooking fuels. Therefore, this thesis develops housing archetypes that are representative of the KSA housing stock, to be assessed and evaluated with the aim of reducing their energy demands and associated carbon emissions, along with monthly running costs. The housing archetypes are used to quantify housing energy performance and identify the major sources of heat loss or gain. Two major reasons for the high energy demands are solar radiation and heat gain due to infiltration. Infiltration occurs due to the pressure differential across the thermal envelope and is responsible for 40 TWh of lost energy from the housing stock, which accounts for 9.9 million MtCO2e.
The research methodology applied an engineering bottom-up approach to quantify the energy performance of the KSA's housing stock using the EnergyPlus dynamic simulation tool. EnergyPlus is a new-generation modelling tool that incorporates the best features of two prior modelling tools: Building Load Analysis and System Thermodynamics (BLAST) and the Department of Energy's DOE-2. EnergyPlus is a freely available tool and so allows data comparisons with international housing stocks. EnergyPlus was used to create the KSA's housing energy baselines to predict the existing housing energy performance and to simulate various scenarios to reduce total energy demands.
The KSA housing energy demands can be optimised through a large-scale implementation of energy efficiency retrofitting schemes comprising 25 exterior thermal insulation types, eight exterior shading systems, LED lighting systems and equipment, and the application of PV systems. This resulted in reducing the total KSA housing energy demands by 12.95 TWh/month, equivalent to 40% of monthly housing energy use; lowered associated carbon emissions by a total of 5.61 million MtCO2e/month, equivalent to 40% of monthly housing carbon emissions; and decreased the total housing stock cost by about 72.39 million USD/month, equivalent to 50% of the total monthly cost.
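The reported savings and percentages imply the baseline totals, which can be checked with a quick back-of-the-envelope computation (the implied totals are derived here, not stated in the abstract):

```python
# Back-of-the-envelope check of the reported monthly savings figures
energy_saving_twh = 12.95   # stated as 40% of monthly housing energy use
carbon_saving = 5.61        # stated as 40% of monthly emissions (MtCO2e units as reported)
cost_saving_musd = 72.39    # stated as 50% of the total monthly cost

# Implied baseline totals before retrofit
implied_total_energy = energy_saving_twh / 0.40   # ~32.4 TWh/month
implied_total_carbon = carbon_saving / 0.40       # ~14.0 per month
implied_total_cost = cost_saving_musd / 0.50      # ~144.8 million USD/month
```

So the figures are mutually consistent with a pre-retrofit stock using roughly 32.4 TWh of energy and costing roughly 144.8 million USD per month.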
SDbQfSum: Query-focused summarization framework based on diversity and text semantic analysis
Query-focused multi-document summarization (Qf-MDS) is a sub-task of automatic text summarization that aims to extract a substitute summary from a document cluster on the same topic, based on a user query. Unlike other summarization tasks, Qf-MDS has specific research challenges, including the differences and similarities across related document sets, the high degree of redundancy inherent in summaries created from multiple related sources, relevance to the given query, topic diversity in the produced summary, and the small source-to-summary compression ratio. In this work, we propose a semantic-diversity-feature-based query-focused extractive summarizer (SDbQfSum) built on powerful text semantic representation techniques underpinned by Wikipedia commonsense knowledge, in order to address the query-relevance, centrality, redundancy and diversity challenges. Specifically, semantically parsed document text is combined with a knowledge-based vectorial representation to extract effective sentence-importance and query-relevance features. The proposed monolingual summarizer is evaluated on a standard English dataset for automatic query-focused summarization tasks, namely the DUC2006 dataset. The obtained results show that our summarizer outperforms most state-of-the-art related approaches on one or more ROUGE measures, achieving 0.418, 0.092 and 0.152 in ROUGE-1, ROUGE-2 and ROUGE-SU4 respectively. It also attains competitive performance with the slightly outperforming system(s); for example, the difference between our system's result and the best system in ROUGE-1 is just 0.006. We also found through the conducted experiments that our proposed custom cluster-merging algorithm significantly reduces information redundancy while maintaining topic diversity across documents.
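For readers unfamiliar with the metric the scores refer to, ROUGE-1 recall is the fraction of reference unigrams covered by the candidate summary, with counts clipped. A minimal sketch (not the official ROUGE toolkit, which also applies stemming and stopword options; the example sentences are invented):

```python
from collections import Counter

def rouge_1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams covered by the
    candidate summary, with per-word counts clipped at the candidate's."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cnt, cand[w]) for w, cnt in ref.items())
    return overlap / max(sum(ref.values()), 1)

ref = "the query focused summary covers the main topics"
cand = "summary covers main topics of the query"
score = rouge_1_recall(cand, ref)  # 6 of 8 reference unigrams covered
```

On this toy pair the score is 0.75; a 0.006 gap in ROUGE-1, as reported above, corresponds to a very small difference in unigram coverage.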
Topological Manipulations of Quantum Field Theories
In this thesis we study some topological aspects of Quantum Field Theories (QFTs). In particular, we study the way in which an arbitrary QFT can be separated into “local” and “global” data by means of a “symmetry Topological Field Theory” (symmetry TFT). We also study how various “topological manipulations” of the global data correspond to various well-known operations that previously existed in the literature, and how the symmetry TFT perspective provides a systematic tool for studying these topological manipulations.
We start by reviewing the bijection between G-symmetric d-dimensional QFTs and boundary conditions for G-gauge theories in (d+1)-dimensions, which effectively defines the symmetry TFT. We use this relationship to study the “orbifold groupoids” which control the composition of “topological manipulations,” relating theories with the same local data but different global data. Particular attention is paid to examples in d = 2 dimensions. We also discuss the extension to fermionic symmetry groups and find that the familiar “Jordan-Wigner transformation” (fermionization) and “GSO projection” (bosonization) appear as examples of topological manipulations. We also study applications to fusion categorical symmetries and constraining RG flows in WZW models as well.
After this, we present a short chapter showcasing an application of this symmetry TFT framework to the study of minimal models in 2d CFT. In particular, we complete the classification of 2d fermionic unitary minimal models.
Finally, we discuss how the symmetry TFT intuition can be used to classify duality defects in QFTs. In particular, we focus on Zm duality defects in holomorphic Vertex Operator Algebras (VOAs) (and especially the E8 lattice VOA), where we use symmetry TFT intuition to conjecture, and then rigorously prove, a formula relating (duality-)defected partition functions to Z2 twists of invariant sub-VOAs.
- …