
    The Effect of Depression During Pregnancy on Neonatal Nerve Fiber Integrity

    Introduction: Prenatal maternal depression is a common complication of pregnancy and has been associated with poor mental health outcomes and altered brain structure in the child. The neurobiological mechanisms mediating these outcomes remain unclear. To add to this knowledge, it is of interest to examine how prenatal maternal depression may affect neonatal brain structures that play key roles in mental health. Two such structures are the fornix and the uncinate fasciculus (UF). Aim: To explore the influence of maternal depression during pregnancy on the microstructural integrity of the fornix and uncinate fasciculus white matter tracts in newborns. Materials and Methods: 87 mother-infant pairs were included. Inclusion criteria were pregnancy in the first trimester and a birth deemed obstetrically free of complications. Exclusion criteria were maternal use of psychotropic medication, corticosteroids, tobacco, or other drugs; gestational age at birth <34 weeks; and congenital, genetic, or other major neurological disorders at birth. Depressive symptoms were measured with a standard questionnaire. Neonatal white matter maturation was measured with diffusion tensor imaging (DTI). The relationship between these parameters was analyzed with ANOVA models. Results: Prenatal maternal depression during the third trimester was associated with higher fractional anisotropy at 12 consecutive points along the tract of the right UF in boys (p=0.0428). No other significant (p<0.05) associations were found across sex, tracts, or trimesters. Conclusions: Maternal depression affects the maturation of the male fetal brain. The consequences of this for affective problems in childhood should be examined in longitudinal studies.
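The tract-wise group comparison described in the abstract can be sketched with a one-way ANOVA F-statistic at a single tract point. This is a minimal illustration only: the FA values, group sizes, and the binary grouping into low- vs. high-symptom mothers are all invented, not the study's data or its exact model.

```python
# Minimal one-way ANOVA F-statistic, sketching how fractional-anisotropy
# (FA) values at one tract point could be compared between groups.
# All numbers below are invented for illustration.

def one_way_anova_f(groups):
    """Return the F-statistic for a list of samples (one list per group)."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical FA values at one point along the right UF
fa_low_symptoms = [0.21, 0.23, 0.20, 0.22, 0.24]
fa_high_symptoms = [0.25, 0.27, 0.26, 0.24, 0.28]
f_stat = one_way_anova_f([fa_low_symptoms, fa_high_symptoms])
print(round(f_stat, 2))
```

In the study itself this comparison would be repeated at every point along each tract, which is why only a run of 12 consecutive significant points is reported.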

    Ultra-High-Resolution Detector Simulation with Intra-Event Aware GAN and Self-Supervised Relational Reasoning

    Simulating high-resolution detector responses is a storage-costly and computationally intensive process that has long been challenging in particle physics. Although deep generative models can make this process more cost-efficient, ultra-high-resolution detector simulation remains difficult because an event contains correlated, fine-grained mutual information. To overcome these limitations, we propose the Intra-Event Aware GAN (IEA-GAN), a novel fusion of self-supervised learning and generative adversarial networks. IEA-GAN presents a Relational Reasoning Module that approximates the concept of an "event" in detector simulation, allowing for the generation of correlated, layer-dependent, contextualized images for high-resolution detector responses with a proper relational inductive bias. IEA-GAN also introduces a new intra-event aware loss and a Uniformity loss, resulting in significant enhancements to image fidelity and diversity. We demonstrate IEA-GAN's application in generating sensor-dependent images for the high-granularity Pixel Vertex Detector (PXD), with more than 7.5M information channels and a non-trivial geometry, at the Belle II Experiment. Applications of this work include controllable simulation-based inference and event generation, high-granularity detector simulation such as at the HL-LHC (High Luminosity LHC), and fine-grained density estimation and sampling. To the best of our knowledge, IEA-GAN is the first algorithm for faithful ultra-high-resolution detector simulation with event-based reasoning.
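A uniformity loss of the kind named above is commonly written as the log of the mean Gaussian potential over embedding pairs, which penalizes embeddings that collapse together. The sketch below uses that standard form with t=2 and hand-picked 2-D points; it is an assumption for illustration, not the paper's exact formulation.

```python
import math

def uniformity_loss(embeddings, t=2.0):
    """log E[exp(-t * ||u - v||^2)] over all distinct pairs.
    Lower (more negative) values mean the embeddings spread out more."""
    pair_terms = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            sq_dist = sum((a - b) ** 2 for a, b in zip(embeddings[i], embeddings[j]))
            pair_terms.append(math.exp(-t * sq_dist))
    return math.log(sum(pair_terms) / len(pair_terms))

# Clustered points incur a higher (worse) loss than spread-out ones
clustered = [(1.0, 0.0), (0.99, 0.14), (0.98, 0.17)]  # nearly identical directions
spread = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
print(uniformity_loss(clustered) > uniformity_loss(spread))
```

Minimizing such a term pushes generated-image embeddings apart, which is one way to encourage the sample diversity the abstract reports.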

    Do DALL-E and Flamingo Understand Each Other?

    The field of multimodal research focusing on the comprehension and creation of both images and text has seen significant strides. This progress is exemplified by the emergence of sophisticated models dedicated to image captioning at scale, such as the notable Flamingo model, and of text-to-image generative models, with DALL-E as a prominent example. An interesting question in this domain is whether Flamingo and DALL-E understand each other. To study this question, we propose a reconstruction task in which Flamingo generates a description for a given image and DALL-E uses this description as input to synthesize a new image. We argue that these models understand each other if the generated image is similar to the given image. Specifically, we study the relationship between the quality of the image reconstruction and that of the text generation. We find that an optimal description of an image is one that gives rise to a generated image similar to the original one. This finding motivates us to propose a unified framework for finetuning the text-to-image and image-to-text models, in which the reconstruction part forms a regularization loss that guides the tuning of the models. Extensive experiments on multiple datasets with different image captioning and image generation models validate our findings and demonstrate the effectiveness of the proposed unified framework. As DALL-E and Flamingo are not publicly available, we use Stable Diffusion and BLIP in the remaining work. Project website: https://dalleflamingo.github.io
    Comment: Accepted to ICCV 202
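The reconstruction criterion above hinges on measuring how similar the regenerated image is to the original. A common stand-in for that judgment is cosine similarity between image feature vectors; the sketch below uses invented 3-D features and is not the paper's actual similarity metric or feature extractor.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical image features: the original image, a reconstruction from
# a good caption, and a reconstruction from a poor caption.
original = [0.9, 0.1, 0.4]
recon_good_caption = [0.85, 0.15, 0.38]
recon_poor_caption = [0.1, 0.9, 0.2]

good = cosine_similarity(original, recon_good_caption)
poor = cosine_similarity(original, recon_poor_caption)
print(good > poor)  # the better caption yields a closer reconstruction
```

Turning such a similarity score into a regularization loss (e.g. one minus the similarity) is one way the reconstruction objective described above could guide finetuning of both models jointly.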

    The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding

    We present a unified computational theory of an agent's perception and memory. In our model, perception, episodic memory, and semantic memory are realized by different operational modes of the oscillating interactions between a symbolic index layer and a subsymbolic representation layer. The two layers form a bilayer tensor network (BTN). Although memory appears to be about the past, its main purpose is to support the agent in the present and the future. Recent episodic memory provides the agent with a sense of the here and now. Remote episodic memory retrieves relevant past experiences to provide information about possible future scenarios. This aids the agent in decision-making. "Future" episodic memory, based on expected future events, guides planning and action. Semantic memory retrieves specific information, which is not delivered by current perception, and defines priors for future observations. We argue that it is important for the agent to encode individual entities, not just classes and attributes. We demonstrate that a form of self-supervised learning can acquire new concepts and refine existing ones. We test our model on a standard benchmark data set, which we expanded to contain richer representations for attributes, classes, and individuals. Our key hypothesis is that obtaining a better understanding of perception and memory is a crucial prerequisite to comprehending human-level intelligence.
    Comment: Accepted for publication at Neural Computatio

    Prior-RadGraphFormer: A Prior-Knowledge-Enhanced Transformer for Generating Radiology Graphs from X-Rays

    The extraction of structured clinical information from free-text radiology reports in the form of radiology graphs has been demonstrated to be a valuable approach for evaluating the clinical correctness of report-generation methods. However, the direct generation of radiology graphs from chest X-ray (CXR) images has not been attempted. To address this gap, we propose a novel approach called Prior-RadGraphFormer that utilizes a transformer model with prior knowledge in the form of a probabilistic knowledge graph (PKG) to generate radiology graphs directly from CXR images. The PKG models the statistical relationships between radiology entities, including anatomical structures and medical observations. This additional contextual information enhances the accuracy of entity and relation extraction. The generated radiology graphs can be applied to various downstream tasks, such as free-text or structured report generation and multi-label classification of pathologies. Our approach represents a promising method for generating radiology graphs directly from CXR images, and has significant potential for improving medical image analysis and clinical decision-making.
    Comment: In GRAIL @ MICCAI 202
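A probabilistic knowledge graph of the kind described can be thought of as conditional co-occurrence statistics over entities mined from a report corpus. The sketch below builds such statistics from toy entity sets; the entity names and reports are invented, and this is a simplification of the paper's PKG, not its construction procedure.

```python
from collections import Counter
from itertools import permutations

def build_pkg(reports):
    """Estimate P(entity_b present | entity_a present) from entity sets.
    A toy stand-in for a probabilistic knowledge graph (PKG)."""
    entity_counts = Counter()
    pair_counts = Counter()
    for entities in reports:
        entity_counts.update(entities)
        pair_counts.update(permutations(entities, 2))  # ordered pairs
    return {
        (a, b): pair_counts[(a, b)] / entity_counts[a]
        for (a, b) in pair_counts
    }

# Invented example reports: each is the set of entities extracted from one report
reports = [
    {"cardiomegaly", "pleural effusion"},
    {"cardiomegaly", "pleural effusion"},
    {"cardiomegaly"},
    {"pneumothorax"},
]
pkg = build_pkg(reports)
print(pkg[("cardiomegaly", "pleural effusion")])
```

Edge weights of this kind can then act as a prior that biases the transformer toward clinically plausible entity-relation pairs.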

    PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings

    Knowledge graph embeddings (KGEs) have recently received significant attention, and several software libraries have been developed for training and evaluating them. While each addresses specific needs, we re-designed and re-implemented PyKEEN, one of the first KGE libraries, in a community effort. PyKEEN 1.0 enables users to compose knowledge graph embedding models (KGEMs) from a wide range of interaction models, training approaches, and loss functions, and permits the explicit modeling of inverse relations. In addition, automatic memory optimization exploits the provided hardware optimally, and the integration of Optuna provides extensive hyper-parameter optimization (HPO) functionality.
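To make "interaction model" concrete: an interaction model scores a (head, relation, tail) triple from its embeddings. Below is a pure-Python sketch of the well-known TransE scoring function, f(h, r, t) = -||h + r - t||, with invented toy embeddings; it illustrates what a KGEM composes, not PyKEEN's own implementation.

```python
import math

def transe_score(h, r, t):
    """TransE interaction: -||h + r - t||_2; higher means more plausible."""
    diff = [hi + ri - ti for hi, ri, ti in zip(h, r, t)]
    return -math.sqrt(sum(d * d for d in diff))

# Toy 2-D embeddings: for a true triple, head + relation ≈ tail
head = [0.2, 0.5]
relation = [0.3, -0.1]
tail_true = [0.5, 0.4]
tail_false = [-0.8, 0.9]

print(transe_score(head, relation, tail_true) >
      transe_score(head, relation, tail_false))
```

A library like PyKEEN pairs such an interaction model with a training approach and a loss function to form a complete KGEM.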

    MemeGraphs: Linking Memes to Knowledge Graphs

    Memes are a popular form of communicating trends and ideas on social media and the internet in general, combining the modalities of image and text. They can express humor and sarcasm but can also carry offensive content. Analyzing and classifying memes automatically is challenging, since their interpretation relies on understanding visual elements, language, and background knowledge. It is therefore important to meaningfully represent these sources and the interactions between them in order to classify a meme as a whole. In this work, we propose to use scene graphs, which express images in terms of objects and their visual relations, together with knowledge graphs, as structured representations for meme classification with a Transformer-based architecture. We compare our approach with ImgBERT, a multimodal model that uses only learned (instead of structured) representations of the meme, and observe consistent improvements. We further provide a dataset with human graph annotations, which we compare to automatically generated graphs and entity linking. Analysis shows that automatic methods link more entities than human annotators and that automatically generated graphs are better suited for hatefulness classification in memes.
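Feeding a scene graph to a Transformer requires turning its triples into a token sequence. The sketch below shows one common linearization scheme with special marker tokens; the example graph and the marker names are invented for illustration and are not the paper's exact input format.

```python
def linearize_scene_graph(triples):
    """Flatten (subject, predicate, object) triples into a token
    sequence that a Transformer-based classifier could consume."""
    tokens = []
    for subj, pred, obj in triples:
        tokens.extend(["[S]", subj, "[P]", pred, "[O]", obj])
    return tokens

# Invented scene graph for a meme image
scene_graph = [
    ("person", "holding", "sign"),
    ("sign", "contains", "text"),
]
tokens = linearize_scene_graph(scene_graph)
print(tokens[:6])
```

The resulting sequence can be concatenated with the meme's caption tokens (and linearized knowledge-graph facts) before being passed to the classifier.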