
    Truly On-The-Fly LTL Model Checking

    We propose a novel algorithm for automata-based LTL model checking that interleaves the construction of the generalized Büchi automaton for the negation of the formula with the emptiness check. Our algorithm first converts the LTL formula into a linear weak alternating automaton; configurations of the alternating automaton correspond to the locations of a generalized Büchi automaton, and a variant of Tarjan's algorithm is used to decide the existence of an accepting run of the product of the transition system and the automaton. Because we avoid an explicit construction of the Büchi automaton, our approach can yield significant improvements in runtime and memory for large LTL formulas. The algorithm has been implemented within the SPIN model checker, and we present experimental results for some benchmark examples.
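
    The emptiness check at the core of this approach can be pictured as a Tarjan-style search for a nontrivial strongly connected component of the product automaton that intersects every acceptance set. The following is a minimal sketch of that idea, not the paper's implementation; `successors`, `acceptance_sets`, and `n_sets` are hypothetical stand-ins for the on-the-fly product construction.

```python
# Hedged sketch: Tarjan-based emptiness check for a generalized Buchi
# product automaton. `successors(v)` returns an iterable of states and
# `acceptance_sets(v)` a set of acceptance-set indices met at v; both
# are assumed callbacks, not the paper's API.

def product_nonempty(initial, successors, acceptance_sets, n_sets):
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in successors(v):
            if w not in index:
                if strongconnect(w):
                    return True
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v roots an SCC; pop it off the stack
            scc, sets_met = [], set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                sets_met |= acceptance_sets(w)
                if w == v:
                    break
            nontrivial = len(scc) > 1 or v in successors(v)
            # An accepting lasso exists iff some nontrivial SCC
            # intersects every generalized acceptance set.
            if nontrivial and len(sets_met) == n_sets:
                return True
        return False

    return strongconnect(initial)
```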

    BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines

    Recent developments in transfer learning have boosted the advancements in natural language processing tasks. The performance is, however, dependent on high-quality, manually annotated training data. Especially in the biomedical domain, it has been shown that one training corpus is not enough to learn generic models that are able to efficiently predict on new data. Therefore, in order to be used in real-world applications, state-of-the-art models need the ability of lifelong learning to improve performance as soon as new data are available, without the need to retrain the whole model from scratch. We present WEAVER, a simple yet efficient post-processing method that infuses old knowledge into the new model, thereby reducing catastrophic forgetting. We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once, while being computationally more efficient. Because there is no need for data sharing, the presented method is also easily applicable to federated learning settings and can, for example, be beneficial for the mining of electronic health records from different clinics.
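
    As a rough illustration of the weight-averaging idea (a hedged sketch, not the authors' released code), merging a previously trained model with its freshly fine-tuned successor can be as simple as a convex combination of parameter tensors. The blending factor `alpha` below is a hypothetical hyperparameter; the actual method may weight the two models differently (e.g. by corpus size).

```python
# Hedged sketch of the weight-averaging idea (not the authors' code):
# blend the previous model's parameters with the newly fine-tuned ones.

import torch

def weave(old_state, new_state, alpha=0.5):
    """Average two state dicts of models with identical architecture.
    `alpha` is a hypothetical blending hyperparameter."""
    merged = {}
    for name, old_param in old_state.items():
        if old_param.is_floating_point():
            merged[name] = alpha * old_param + (1.0 - alpha) * new_state[name]
        else:
            # Integer buffers (e.g. batch counters) are taken from the new model.
            merged[name] = new_state[name]
    return merged

# Usage after fine-tuning on a new corpus:
#   model.load_state_dict(weave(old_model.state_dict(), model.state_dict()))
```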

    Chandra X-ray Observations of Galaxies in an Off-Center Region of the Coma Cluster

    We have performed a pilot Chandra survey of an off-center region of the Coma cluster to explore the X-ray properties and luminosity function of normal galaxies. We present results on 13 Chandra-detected galaxies with optical photometric matches, including four spectroscopically confirmed Coma-member galaxies. All seven spectroscopically confirmed giant Coma galaxies in this field have detections or limits consistent with low X-ray-to-optical flux ratios (f_X/f_R < 10^-3). We do not have a sufficient number of X-ray-detected galaxies to directly measure the galaxy X-ray luminosity function (XLF). However, since we have a well-measured optical LF, we use this low X-ray-to-optical flux ratio for the seven spectroscopically confirmed galaxies to translate the optical LF into an XLF. We find good agreement with Finoguenov et al. (2004), indicating that the X-ray emission per unit optical flux per galaxy is suppressed in clusters of galaxies, and our result extends that work to a specific off-center environment in the Coma cluster. Finally, we report the discovery of a region of diffuse X-ray flux which might correspond to a small group interacting with the Coma intra-cluster medium (ICM).
    Comment: Accepted for publication in the Astrophysical Journal
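
    To make the translation concrete: at a fixed flux ratio, an optical magnitude maps directly to an X-ray flux ceiling, so each optical LF bin can be re-labeled as an XLF bin. The arithmetic below is a hedged sketch of that step only; the R-band zero-point flux `f_R0` is an assumed illustrative value, not a number taken from the paper.

```python
# Hedged sketch: X-ray flux ceiling implied by f_X/f_R < 1e-3 for a galaxy
# of R-band magnitude m_R. The zero point f_R0 (broadband flux of an
# m_R = 0 source, in erg s^-1 cm^-2) is an assumed illustrative value.

def xray_flux_limit(m_R, ratio=1e-3, f_R0=3.2e-6):
    f_R = f_R0 * 10.0 ** (-0.4 * m_R)  # optical flux from the magnitude scale
    return ratio * f_R                 # implied X-ray flux upper bound

print(f"{xray_flux_limit(16.0):.2e} erg/s/cm^2")  # e.g. an m_R = 16 member
```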

    Reservoir Memory Machines as Neural Computers

    Differentiable neural computers extend artificial neural networks with an explicit memory without interference, thus enabling the model to perform classic computation tasks such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently, namely an echo state network with an explicit memory without interference. This extension enables echo state networks to recognize all regular languages, including those that contractive echo state networks provably cannot recognize. Further, we demonstrate experimentally that our model performs comparably to its fully trained deep version on several typical benchmark tasks for differentiable neural computers.
    Comment: In print in the special issue 'New Frontiers in Extremely Efficient Reservoir Computing' of IEEE TNNLS
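
    A hedged sketch of the general idea (assumptions throughout; the authors' reservoir memory machine differs in detail): a standard echo state network update is augmented with an explicit memory buffer, with read and write decisions supplied externally here where a trained controller would sit. Only a linear readout on the collected states would be trained, in keeping with reservoir computing.

```python
# Hedged sketch of an echo state network with an explicit memory buffer.
# Not the authors' architecture; write_gate and read_addr are stand-ins
# for learned memory controllers.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_mem = 3, 100, 16

W_in = rng.normal(0.0, 0.5, (n_res, n_in))
W_res = rng.normal(0.0, 1.0, (n_res, n_res))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()  # echo state property

def run(inputs, write_gate, read_addr):
    """write_gate[t] (0/1) and read_addr[t] stand in for the learned
    controllers deciding when and where to access the explicit memory."""
    h = np.zeros(n_res)
    memory = np.zeros((n_mem, n_res))
    ptr, states = 0, []
    for t, u in enumerate(inputs):
        recall = memory[read_addr[t]]               # read a stored state
        h = np.tanh(W_in @ u + W_res @ h + recall)  # reservoir update
        if write_gate[t]:                           # store the current state
            memory[ptr % n_mem] = h
            ptr += 1
        states.append(h.copy())
    return np.array(states)  # only a linear readout on these is trained
```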

    Band gap engineering by Bi intercalation of graphene on Ir(111)

    We report on the structural and electronic properties of a single bismuth layer intercalated underneath a graphene layer grown on an Ir(111) single crystal. Scanning tunneling microscopy (STM) reveals a hexagonal surface structure and a dislocation network upon Bi intercalation, which we attribute to a √3×√3 R30° Bi structure on the underlying Ir(111) surface. Ab initio calculations show that this Bi structure is the most energetically favorable, and also illustrate that STM measurements are most sensitive to C atoms in close proximity to intercalated Bi atoms. Additionally, Bi intercalation induces a band gap (E_g = 0.42 eV) at the Dirac point of graphene and an overall n-doping (~0.39 eV), as seen in angle-resolved photoemission spectroscopy. We attribute the emergence of the band gap to the dislocation network, which forms favorably along certain parts of the moiré structure induced by the graphene/Ir(111) interface.
    Comment: 5 figures

    Olig2 regulates Sox10 expression in oligodendrocyte precursors through an evolutionary conserved distal enhancer

    The HMG-domain transcription factor Sox10 is expressed throughout oligodendrocyte development and is an important component of the transcriptional regulatory network in these myelin-forming CNS glia. Of the known Sox10 regulatory regions, only the evolutionarily conserved U2 enhancer in the distal 5′-flank of the Sox10 gene exhibits oligodendroglial activity. We found that U2 was active in oligodendrocyte precursors, but not in mature oligodendrocytes. U2 activity also did not mediate the initial Sox10 induction after specification, arguing that Sox10 expression during oligodendroglial development depends on the activity of multiple regulatory regions. The oligodendroglial bHLH transcription factor Olig2, but not the closely related Olig1, efficiently activated the U2 enhancer. Olig2 bound U2 directly at several sites, including a highly conserved one in the U2 core. Inactivation of this site abolished the oligodendroglial activity of U2 in vivo. In contrast to Olig2, the homeodomain transcription factor Nkx6.2 repressed U2 activity. Repression may involve recruitment of Nkx6.2 to U2 and inactivation of Olig2 and other activators by protein–protein interactions. Considering the selective expression of Nkx6.2 at the time of specification and in differentiated oligodendrocytes, Nkx6.2 may be involved in limiting U2 activity to the precursor stage during oligodendrocyte development.

    Batch and median neural gas

    Neural Gas (NG) constitutes a very robust clustering algorithm for Euclidean data which does not suffer from the problem of local minima like simple vector quantization, or from topological restrictions like the self-organizing map. Based on the cost function of NG, we introduce a batch variant of NG which shows much faster convergence and which can be interpreted as an optimization of the cost function by the Newton method. This formulation has the additional benefit that, based on the notion of the generalized median in analogy to the Median SOM, a variant for non-vectorial proximity data can be introduced. We prove convergence of the batch and median versions of NG, SOM, and k-means in a unified formulation, and we investigate the behavior of the algorithms in several experiments.
    Comment: In the special issue following the WSOM 05 conference, 5-8 September 2005, Paris
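
    The batch update can be read off the cost function: with all data available at once, each prototype is set to a rank-weighted mean of the whole dataset, which is the fixed point the Newton-style step converges to. The sketch below is a minimal assumed implementation of standard batch NG, not the paper's code; the exponential annealing schedule for the neighborhood range is a common choice, not one prescribed by the abstract.

```python
# Hedged sketch of batch Neural Gas: every epoch, each prototype becomes
# a rank-weighted mean of all data points, with the neighborhood range
# `lam` annealed from lam0 down to lam_end.

import numpy as np

def batch_ng(X, n_protos=10, epochs=50, lam0=5.0, lam_end=0.01, seed=0):
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_protos, replace=False)].copy()
    for t in range(epochs):
        lam = lam0 * (lam_end / lam0) ** (t / (epochs - 1))
        d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)  # (n, k) sq. dists
        ranks = d.argsort(1).argsort(1)   # rank of each prototype per point
        h = np.exp(-ranks / lam)          # neighborhood weighting by rank
        W = (h.T @ X) / h.sum(0)[:, None] # fixed-point (batch) update
    return W
```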