
    Similarity Reasoning and Filtration for Image-Text Matching

    Image-text matching plays a critical role in bridging vision and language, and great progress has been made by exploiting the global alignment between image and sentence, or local alignments between regions and words. However, how to make the most of these alignments to infer more accurate matching scores is still underexplored. In this paper, we propose a novel Similarity Graph Reasoning and Attention Filtration (SGRAF) network for image-text matching. Specifically, vector-based similarity representations are first learned to characterize the local and global alignments in a more comprehensive manner; then the Similarity Graph Reasoning (SGR) module, which relies on a graph convolutional neural network, is introduced to infer relation-aware similarities from both the local and global alignments. The Similarity Attention Filtration (SAF) module is further developed to integrate these alignments effectively by selectively attending to the significant and representative alignments while casting aside the interference of non-meaningful alignments. We demonstrate the superiority of the proposed method by achieving state-of-the-art performance on the Flickr30K and MSCOCO datasets, and the good interpretability of the SGR and SAF modules through extensive qualitative experiments and analyses.
    Comment: 14 pages, 8 figures, accepted by AAAI 2021
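
    A minimal PyTorch sketch of the two modules as the abstract describes them, with assumed tensor shapes, dimensions, and class names (an illustration, not the authors' released implementation): SGR propagates vector-based similarity representations over a fully connected graph via one graph-convolution step, and SAF computes attention weights that suppress non-meaningful alignments before aggregation.

```python
import torch
import torch.nn as nn

class SimilarityGraphReasoning(nn.Module):
    """SGR sketch: one graph-convolution step over similarity vectors."""
    def __init__(self, sim_dim):
        super().__init__()
        self.edge_fc = nn.Linear(sim_dim, sim_dim)  # projects nodes for edge affinities
        self.node_fc = nn.Linear(sim_dim, sim_dim)  # updates nodes after aggregation

    def forward(self, sim_nodes):                   # (batch, n_alignments, sim_dim)
        q = self.edge_fc(sim_nodes)
        adj = torch.softmax(q @ sim_nodes.transpose(1, 2), dim=-1)  # (batch, n, n)
        return torch.relu(self.node_fc(adj @ sim_nodes))            # relation-aware nodes

class SimilarityAttentionFiltration(nn.Module):
    """SAF sketch: attention weights filter out non-meaningful alignments."""
    def __init__(self, sim_dim):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(sim_dim, 1), nn.Sigmoid())

    def forward(self, sim_nodes):
        w = self.attn(sim_nodes)                    # (batch, n, 1), each weight in [0, 1]
        w = w / w.sum(dim=1, keepdim=True)          # normalize over the alignments
        return (w * sim_nodes).sum(dim=1)           # aggregated similarity representation

# Toy usage: 36 local (region-word) alignments plus one global alignment.
sgr, saf = SimilarityGraphReasoning(256), SimilarityAttentionFiltration(256)
fused = saf(sgr(torch.randn(2, 37, 256)))           # (2, 256)
```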

    Stiles–Crawford effect of the first kind: assessment of photoreceptor alignments following dark patching

    Properties of the presumed mechanisms controlling photoreceptor alignments are only partially defined. A phototropic mechanism normally dominates alignment, but do modest changes in orientation occur with dark patching? Here, new photopic Stiles–Crawford effect of the first kind (SCE-I) determinations were made before patching (pre-patch), just after 8 days of dark patching (post-patch), and 3 days after patch removal (recovery test). We tested at 0, 11, and 22° in the temporal retina of both eyes; ten eyes of adult subjects were tested. SCE-I peak positions and Stiles' parameter ρ were assessed. Dark-patching effects were small. Observations revealed meaningful corrective alignment overshoots upon recovery in the light. The results suggest (1) the presence of multiple weak mechanisms affecting receptor alignments in the dark; (2) that the phototropic mechanism is dominant in the light; (3) the need to sample multiple test loci in such studies; and (4) that small changes in the SCE-I in the pupil plane can reflect meaningful events occurring at the retina.
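
    For context, SCE-I measurements of this kind are conventionally fit with Stiles' Gaussian-like directional sensitivity profile (the standard formulation, not a result of this study):

    \[ \eta(r) = \eta_{\max}\, 10^{-\rho\,(r - r_{\max})^{2}} \]

    where r is the beam's pupil-entry position (mm), r_max the SCE-I peak position, and ρ the directionality parameter assessed above; shifts in r_max between the pre-patch, post-patch, and recovery tests are what index changes in photoreceptor alignment.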

    PIntron: a Fast Method for Gene Structure Prediction via Maximal Pairings of a Pattern and a Text

    Current computational methods for exon-intron structure prediction from a cluster of transcript (EST, mRNA) data do not exhibit the time and space efficiency necessary to process large clusters of more than 20,000 ESTs and genes longer than 1 Mb. Guaranteeing both accuracy and efficiency seems to be a computational goal quite far from being achieved, since accuracy is strictly related to exploiting the inherent redundancy of information present in a large cluster. We propose a fast method for the problem that combines two ideas: a novel algorithm, with proven small time complexity, for computing spliced alignments of a transcript against a genome, and an efficient algorithm that exploits the inherent redundancy of information in a cluster of transcripts to select, among all possible factorizations of EST sequences, those that allow the inference of splice-site junctions highly confirmed by the input data. The EST alignment procedure is based on the construction of maximal embeddings, which are sequences obtained from paths of a graph structure, called the Embedding Graph, whose vertices are the maximal pairings of a genomic sequence T and an EST P. The procedure runs in time linear in the sizes of P, T, and the output. PIntron, the software tool implementing our methodology, is able to process in a few seconds some critical genes that are not manageable by other gene structure prediction tools. At the same time, PIntron exhibits high accuracy (sensitivity and specificity) when compared with ENCODE data. Detailed experimental data, additional results, and the PIntron software are available at http://www.algolab.eu/PIntron
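
    As a concrete illustration of the central notion: a maximal pairing of T and P is a common substring occurrence that cannot be extended to the left or right in both sequences. A naive O(|P|·|T|) dynamic-programming sketch follows (function and parameter names are hypothetical; PIntron itself uses far more efficient structures to achieve the stated linear-time bound):

```python
def maximal_pairings(P, T, min_len=15):
    """Return (start_in_P, start_in_T, length) triples for maximal pairings.

    min_len is an illustrative seed-length cutoff to skip spurious short matches.
    """
    pairings = []
    prev = [0] * (len(T) + 1)  # prev[j]: longest common suffix of P[:i-1] and T[:j]
    for i in range(1, len(P) + 1):
        curr = [0] * (len(T) + 1)
        for j in range(1, len(T) + 1):
            if P[i - 1] == T[j - 1]:
                curr[j] = prev[j - 1] + 1  # left-maximality holds by construction
        for j in range(1, len(T) + 1):
            l = curr[j]
            extends_right = i < len(P) and j < len(T) and P[i] == T[j]
            if l >= min_len and not extends_right:  # right-maximality check
                pairings.append((i - l, j - l, l))
        prev = curr
    return pairings
```

    In the Embedding Graph these triples become vertices, and spliced alignments of P correspond to paths whose pairings chain consistently along T.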

    Accuracy of Protein-Protein Binding Sites in High-Throughput Template-Based Modeling

    The accuracy of protein structures, particularly of their binding sites, is essential for the success of modeling protein complexes. Computationally inexpensive methodology is required for genome-wide modeling of such structures. For a systematic evaluation of the potential accuracy in high-throughput modeling of binding sites, a statistical analysis of target-template sequence alignments was performed for a representative set of protein complexes. For most of the complexes, alignments containing all residues of the interface were found. Full interface alignments were obtained even in the case of poor alignments, where a relatively small part of the target sequence (as low as 40%) aligned to the template sequence with low overall alignment identity (<30%). Although such poor overall alignments might be considered inadequate for modeling whole proteins, the alignment of the interfaces was strong enough for docking. In the set of homology models built on these alignments, one third of those ranked first by a simple sequence-identity criterion had RMSD < 5 Å, an accuracy suitable for low-resolution template-free docking. Such models corresponded to multi-domain target proteins, whereas for single-domain proteins the best models had 5 Å < RMSD < 10 Å, an accuracy suitable for less sensitive structure-alignment methods. Overall, ~50% of complexes with interfaces modeled by high-throughput techniques had accuracy suitable for meaningful docking experiments. This percentage will grow with the increasing availability of co-crystallized protein-protein complexes.
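
    A small, self-contained helper shows how the two alignment statistics quoted above (target coverage and overall identity) can be computed from a gapped pairwise alignment; the names and gap convention are illustrative, not taken from the paper's pipeline:

```python
def alignment_stats(target_aln: str, template_aln: str):
    """Coverage and identity from two equal-length gapped strings ('-' = gap)."""
    assert len(target_aln) == len(template_aln)
    aligned = identical = target_len = 0
    for a, b in zip(target_aln, template_aln):
        if a != '-':
            target_len += 1                 # residue present in the target
            if b != '-':
                aligned += 1                # aligned to a template residue
                identical += (a == b)       # identical residue pair
    coverage = aligned / target_len         # fraction of target aligned
    identity = identical / aligned if aligned else 0.0
    return coverage, identity

# A "poor" alignment in the paper's sense would give coverage ~0.40 and
# identity < 0.30 here, yet may still cover the full binding interface.
```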

    Declarative Guideline Conformance Checking of Clinical Treatments: A Case Study

    Conformance checking is a process mining technique for verifying the conformance of process instances to a given model. This makes it well suited to the medical context, for comparing treatment cases with clinical guidelines. However, medical processes are highly variable, highly dynamic, and complex, which makes the use of imperative conformance checking approaches in the medical domain difficult. Studies show that declarative approaches can better address these characteristics, yet none of them has gained practical acceptance so far. Another challenge is alignments, which usually add no value from a medical point of view. For these reasons, we investigate in a case study the usability of the HL7 standard Arden Syntax for declarative, rule-based conformance checking, together with manually modeled alignments. Using this approach, it was possible to check the conformance of treatment cases and to create medically meaningful alignments for large parts of a medical guideline.
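
    To make the declarative idea concrete, here is a minimal sketch in plain Python rather than Arden Syntax (the activity names, rule template, and advice strings are hypothetical): each rule independently checks one constraint over a treatment case, and a manually modeled, medically phrased alignment is reported on violation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    check: Callable[[List[str]], bool]  # trace -> conforms?
    advice: str                         # manually modeled alignment text

def response(trigger: str, expected: str) -> Callable[[List[str]], bool]:
    """Declarative template: every `trigger` is eventually followed by `expected`."""
    def check(trace: List[str]) -> bool:
        pending = False
        for activity in trace:
            if activity == trigger:
                pending = True
            elif activity == expected:
                pending = False
        return not pending
    return check

rules = [
    Rule("antibiotic-review",
         response("administer_antibiotic", "review_therapy"),
         "schedule a therapy review after administering the antibiotic"),
]

trace = ["admit", "administer_antibiotic", "discharge"]
for rule in rules:
    if not rule.check(trace):
        print(f"non-conformance ({rule.name}): {rule.advice}")
```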

    Helping scientists integrate and interact with biomedical data

    Master's thesis, Bioinformatics and Computational Biology, 2021, Universidade de Lisboa, Faculdade de Ciências
    Over the past decades, the amount and complexity of available biomedical data have increased far beyond the human capacity to process them. To support this, knowledge graphs and ontologies have been increasingly used, allowing semantic integration of heterogeneous data within and across domains. However, the independent development of biomedical ontologies has created heterogeneity problems, with ontologies designed with overlapping domains or significant differences. Automated ontology alignment techniques have been developed to tackle this semantic heterogeneity problem by establishing meaningful correspondences between the entities of two ontologies. However, their performance is limited, and the alignments they produce can contain erroneous, incoherent, or missing mappings. Therefore, manual validation of automated ontology alignments remains essential to ensure their quality. Given the complexity of the ontology matching process, it is important to provide visualization and a user interface with the features necessary to support the exploration, validation, and editing of alignments. However, these aspects are often overlooked: few alignment systems feature user interfaces enabling alignment visualization, fewer allow editing alignments, and fewer still provide the functionalities needed to make the task seamless for users. This dissertation developed VOWLMap, an extension of the standalone web application WebVOWL, for visualizing, editing, and validating biomedical ontology alignments. This work extended the Visual Notation for OWL Ontologies (VOWL), which defines a visual representation for most language constructs of OWL, to support graphical representations of alignments, and restructured WebVOWL to load and visualize alignments. VOWLMap employs modularization techniques to facilitate the visualization of large alignments while maintaining the context of each mapping, and offers a dynamic visualization that supports interaction mechanisms, including direct interaction with, and editing of, graph representations. A user study was conducted to evaluate the usability and performance of VOWLMap, obtaining positive feedback with an excellent score on a standard usability questionnaire.
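
    For readers unfamiliar with the alignments VOWLMap visualizes, a minimal sketch of the kind of automated step that produces them: a purely lexical matcher proposing candidate correspondences between two ontologies (the function, inputs, and threshold are illustrative; real matching systems combine far richer lexical and structural techniques, which is precisely why manual validation is needed).

```python
from difflib import SequenceMatcher

def lexical_matches(labels_a: dict, labels_b: dict, threshold: float = 0.9):
    """Yield (entity_a, entity_b, score) candidate mappings from label similarity.

    labels_a and labels_b map entity URIs to their human-readable labels.
    """
    for uri_a, label_a in labels_a.items():
        for uri_b, label_b in labels_b.items():
            score = SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()
            if score >= threshold:
                yield uri_a, uri_b, score  # candidate for manual validation/editing
```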

    The WiggleZ Dark Energy Survey: Direct constraints on blue galaxy intrinsic alignments at intermediate redshifts

    Correlations between the intrinsic shapes of galaxy pairs, and between the intrinsic shapes of galaxies and the large-scale density field, may be induced by tidal fields. These correlations, which have been detected at low redshifts (z < 0.35) for bright red galaxies in the Sloan Digital Sky Survey (SDSS), and for which upper limits exist for blue galaxies at z ~ 0.1, provide a window into galaxy formation and evolution, and are also an important contaminant for current and future weak lensing surveys. Measurements of these alignments at the intermediate redshifts (z ~ 0.6) most relevant for cosmic shear observations are very important for understanding the origin and redshift evolution of the alignments, and for minimising their impact on weak lensing measurements. We present the first such intermediate-redshift measurement for blue galaxies, using galaxy shape measurements from SDSS and spectroscopic redshifts from the WiggleZ Dark Energy Survey. Our null detection allows us to place upper limits on the contamination of weak lensing measurements by blue galaxy intrinsic alignments that, for the first time, do not require significant model-dependent extrapolation from the z ~ 0.1 SDSS observations. Combining the SDSS and WiggleZ constraints also gives us a long redshift baseline with which to constrain intrinsic alignment models and the contamination of the cosmic shear power spectrum. Assuming that the alignments can be explained by linear alignment with the smoothed local density field, we find that a measurement of σ_8 in a blue-galaxy-dominated, CFHTLS-like survey would be contaminated by at most ±0.02 (95% confidence level, SDSS and WiggleZ) or ±0.03 (WiggleZ alone) due to intrinsic alignments. [Abridged]
    Comment: 18 pages, 12 figures, accepted to MNRAS; v2 has a correction to one author's name, no other changes; v3 has minor changes in explanation and calculations, no significant difference in results or conclusions; v4 has an additional footnote about model interpretation, no changes to data/calculations/results
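
    For context on why these alignments matter for weak lensing, the observed ellipticity power spectrum decomposes, in the standard treatment, into lensing and intrinsic alignment terms (a textbook relation, not a result of this paper):

    \[ C_{\ell}^{\epsilon\epsilon} = C_{\ell}^{GG} + C_{\ell}^{GI} + C_{\ell}^{II} \]

    where GG is the pure cosmic shear signal, II correlates the intrinsic shapes of physically nearby galaxies, and GI correlates the intrinsic shape of a foreground galaxy with the lensing-induced shape of a background one. The upper limits quoted above bound the blue-galaxy GI and II contamination of such measurements.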