
    Mapping hybrid functional-structural connectivity traits in the human connectome

    One of the crucial questions in neuroscience is how the brain's rich functional repertoire of states relates to its underlying structural organization. How to study the associations between these structural and functional layers remains an open problem that calls for novel conceptual approaches. We propose an extension of the Connectivity Independent Component Analysis (connICA) framework to identify joint structural-functional connectivity traits: structural and functional connectomes are merged into common hybrid connectivity patterns that represent the connectivity fingerprint of a subject. We test this extended approach on the 100 unrelated subjects from the Human Connectome Project. The method extracts the main independent structural-functional connectivity patterns of the entire cohort, and these patterns are sensitive to the task being performed. The hybrid connICA extracted two main task-sensitive hybrid traits: the first encompasses the within- and between-network connections of dorsal attentional and visual areas, as well as fronto-parietal circuits; the second mainly encompasses the connectivity between visual, attentional, DMN and subcortical networks. Overall, these findings confirm the potential of hybrid connICA for compressing the structural/functional connectomes of a set of individual brain networks into integrated patterns.
    Comment: article: 34 pages, 4 figures; supplementary material: 5 pages, 5 figures
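
    The core computational step described above, merging each subject's functional (FC) and structural (SC) connectome into a single hybrid fingerprint and decomposing the cohort with ICA, can be sketched roughly as follows. This is a minimal illustration, assuming per-subject FC/SC matrices as NumPy arrays and using scikit-learn's FastICA; the function name `hybrid_connica`, the z-scoring step and the number of traits are illustrative choices, not the authors' exact pipeline.

```python
# A minimal sketch of the hybrid connICA idea, assuming per-subject FC and SC
# matrices are available as square NumPy arrays; the z-scoring and the number
# of traits are illustrative choices, not the authors' exact pipeline.
import numpy as np
from sklearn.decomposition import FastICA

def vectorize_upper(mat):
    """Flatten the upper triangle (excluding the diagonal) of a connectivity matrix."""
    iu = np.triu_indices_from(mat, k=1)
    return mat[iu]

def hybrid_connica(fc_list, sc_list, n_traits=10, seed=0):
    """Stack z-scored FC and SC edge vectors per subject and run ICA over the cohort."""
    rows = []
    for fc, sc in zip(fc_list, sc_list):
        fc_v, sc_v = vectorize_upper(fc), vectorize_upper(sc)
        # z-score each modality so neither dominates the hybrid fingerprint
        fc_v = (fc_v - fc_v.mean()) / fc_v.std()
        sc_v = (sc_v - sc_v.mean()) / sc_v.std()
        rows.append(np.concatenate([fc_v, sc_v]))
    X = np.vstack(rows)                    # subjects x (FC edges + SC edges)
    ica = FastICA(n_components=n_traits, random_state=seed, max_iter=1000)
    # scikit-learn treats rows as samples, so pass edges as samples and subjects
    # as signals: the recovered sources are hybrid traits in edge space, and
    # mixing_ holds one weight per subject for each trait
    traits = ica.fit_transform(X.T).T      # n_traits x (FC + SC edges)
    weights = ica.mixing_                  # subjects x n_traits
    return traits, weights
```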

    GRAMPAL: A Morphological Processor for Spanish implemented in Prolog

    A model for the full treatment of Spanish inflection for verbs, nouns and adjectives is presented. The model is based on feature unification and relies on a lexicon of allomorphs for both stems and morphemes. Word forms are built by concatenating allomorphs by means of special contextual features. We use the standard Definite Clause Grammars (DCG) included in most Prolog implementations instead of the typical finite-state approach, which allows us to take advantage of the declarativity and bidirectionality of Logic Programming for NLP. The most salient feature of this approach is its simplicity: the rule and lexical components are really straightforward, giving a very simple model for complex phenomena. Declarativity, bidirectionality, consistency and completeness of the model are discussed: all and only the correct word forms are analysed or generated, alternative forms are handled, and gaps in paradigms are preserved. A Prolog implementation has been developed for both analysis and generation of Spanish word forms. It consists of only six DCG rules, thanks to our lexicalist approach: most of the information is in the dictionary. Although it is quite efficient, the current implementation could be improved for analysis by using the non-logical features of Prolog, especially in word segmentation and dictionary access.
    Comment: 11 pages
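
    The allomorph-concatenation idea is easiest to see in a toy example. The sketch below transposes it to Python (the actual system is six Prolog DCG rules over a much larger lexicon); the stem and ending entries and the feature names are invented for illustration and are not GRAMPAL's lexicon.

```python
# A toy transposition of the allomorph-concatenation idea to Python; the real
# GRAMPAL system is six Prolog DCG rules, and these stems, endings and feature
# names are invented examples.
STEMS = [
    # (stem allomorph, contextual features required of the ending)
    ("cant",  {"class": "ar"}),                        # cantar "to sing"
    ("cuent", {"class": "ar", "stress": "stem"}),      # contar, diphthongized stem
    ("cont",  {"class": "ar", "stress": "ending"}),    # contar, plain stem
]
ENDINGS = [
    # (ending allomorph, features it provides)
    ("o",    {"class": "ar", "stress": "stem",   "person": 1, "number": "sg"}),
    ("amos", {"class": "ar", "stress": "ending", "person": 1, "number": "pl"}),
]

def unify(required, provided):
    """Succeed if every required feature is present in 'provided' with the same value."""
    return all(provided.get(k) == v for k, v in required.items())

def analyse(word):
    """Segment a word form into stem + ending whose contextual features unify."""
    analyses = []
    for stem, required in STEMS:
        if word.startswith(stem):
            ending_form = word[len(stem):]
            for ending, features in ENDINGS:
                if ending_form == ending and unify(required, features):
                    analyses.append((stem, ending, features))
    return analyses

print(analyse("cuento"))    # diphthongized stem licensed by stem stress
print(analyse("contamos"))  # plain stem licensed by ending stress
```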

    Study of alkaline hydrothermal activation of belite cements by thermal analysis

    The effect of alkaline hydrothermal activation of class-C fly ash belite cement was studied by thermal analysis (TG/DTG), determining the increase in combined water over a hydration period of 180 days. The results were compared with those obtained for a belite cement hydrothermally activated in water. The two belite cements were fabricated via the hydrothermal-calcination route of class-C fly ash in 1 M NaOH solution (FABC-2-N) or demineralised water (FABC-2-W). The effect of the alkaline hydrothermal activation of the belite cement (FABC-2-N) was clearly differentiated, mainly at early ages of hydration, for which the increase in combined water was markedly higher than that of the belite cement hydrothermally activated in water. Important direct quantitative correlations were obtained between physicochemical parameters, such as the combined water, the BET surface area and the volume of nano-pores, and macrostructural engineering properties such as the compressive mechanical strength.
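
    As a rough illustration of the kind of quantitative correlation reported above, the snippet below computes a Pearson correlation between combined water and compressive strength across hydration ages; the numerical values are placeholders for illustration, not the measured data from the study.

```python
# Placeholder illustration of the reported physicochemical/mechanical
# correlations; the arrays below are invented values, not the study's data.
import numpy as np
from scipy.stats import pearsonr

combined_water_pct = np.array([4.1, 6.8, 9.5, 11.2, 12.0])   # hypothetical TG/DTG results
strength_mpa       = np.array([3.0, 8.5, 16.0, 21.5, 24.0])  # hypothetical compressive strength

r, p = pearsonr(combined_water_pct, strength_mpa)
print(f"Pearson r = {r:.3f} (p = {p:.3g}) between combined water and strength")
```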

    A framework for lexical representation

    In this paper we present a unification-based lexical platform designed for highly inflected languages (such as Romance languages). A formalism is proposed for encoding a lemma-based lexical source, well suited to linguistic generalizations. From this source, we automatically generate an allomorph-indexed dictionary suitable for efficient processing. A set of software tools has been implemented around this formalism: access libraries, morphological processors, etc.
    Comment: 9 pages
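
    The compilation step described above, expanding a lemma-based source into an allomorph-indexed dictionary for fast lookup, can be sketched as follows; the entries and field names are invented examples and do not reflect the platform's actual formalism.

```python
# A small sketch of compiling a lemma-based source into an allomorph-indexed
# dictionary; the entries and field names are invented for illustration.
from collections import defaultdict

# Lemma-based source: lemma -> category, inflectional class, stem allomorphs
LEMMAS = {
    "contar": {"cat": "verb", "class": "ar", "stems": ["cont", "cuent"]},
    "gato":   {"cat": "noun", "class": "o",  "stems": ["gat"]},
}

def build_allomorph_index(lemmas):
    """Expand each lemma entry so the dictionary is keyed by allomorph."""
    index = defaultdict(list)
    for lemma, entry in lemmas.items():
        for stem in entry["stems"]:
            index[stem].append({"lemma": lemma, "cat": entry["cat"], "class": entry["class"]})
    return index

INDEX = build_allomorph_index(LEMMAS)
print(INDEX["cuent"])   # [{'lemma': 'contar', 'cat': 'verb', 'class': 'ar'}]
```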

    Predictability of drug expenditures: An application using morbidity data

    The growth of pharmaceutical expenditure and its prediction are major concerns for policy makers and health care managers. This paper explores different predictive models for estimating future drug expenses, using demographic and morbidity information on individuals from an integrated healthcare delivery organization in Catalonia for the years 2002 and 2003. The morbidity information consists of codified health encounters grouped through Clinical Risk Groups (CRGs). We estimate pharmaceutical costs using several model specifications with CRGs as risk adjusters, providing an alternative way of obtaining high predictive power comparable to other estimations of drug expenditure in the literature. These results have clear implications for the use of risk adjustment and CRGs in setting premiums for pharmaceutical benefits.
    Keywords: drug expenditure, risk adjustment, morbidity, clinical risk groups
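
    In the spirit of the models described above, the sketch below regresses individual drug cost on age, sex and morbidity-group (CRG) indicators; the column names and the plain linear specification are illustrative stand-ins for the paper's several model specifications.

```python
# Illustrative risk-adjustment regression: individual drug cost on age, sex and
# CRG indicators; the column names and the plain linear model are stand-ins for
# the paper's several specifications.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def fit_crg_model(df):
    """Expects columns: age, sex, crg (categorical morbidity group), drug_cost."""
    X = pd.get_dummies(df[["age", "sex", "crg"]], columns=["sex", "crg"], drop_first=True)
    y = df["drug_cost"]
    model = LinearRegression().fit(X, y)
    print(f"In-sample R^2: {r2_score(y, model.predict(X)):.3f}")
    return model, list(X.columns)
```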

    Effectiveness of a telephone prevention programme on the recurrence of suicidal behaviour. One-year follow-up

    People who have attempted suicide are considered a population at risk of repeating the behaviour. Therapeutic interventions such as telephone follow-up programmes (TFPs) are promising, but more evidence of their efficacy is needed. In this multicentre, open, ex-post-facto, pre/post, one-year prospective study, a previous cohort discharged from the emergency department after a suicide attempt (SA) and given routine treatment (n=207) was compared with a similar group that received the same intervention plus a structured TFP of six calls (n=203). At one year of follow-up, the efficacy of the TFP in preventing SA was assessed. A total of 53.2% (n=108) of the patients completed the TFP. A total of 20.3% (n=42) of the routine treatment group and 23.6% (n=48) of the TFP group reattempted at least once during the follow-up period (χ2=0.7; df=1; p=.412). However, in both groups, different subsamples of patients who presented extreme risk of SA at follow-up (0-57%) were identified. In the TFP group, the recurrence of suicidal behaviour was lower in patients admitted after the index attempt and in those with more severe psychopathological symptoms, but not in the other profiles. Thus, this study has identified a specific profile of patients who could benefit from a brief-contact intervention.
    This study was supported in part by a grant (Resolución 3036/2014) from the Departamento de Salud del Gobierno de Navarra and by the award “Federico Soto a la investigación sobre el suicidio 2019” from the Fundación Colegio de Médicos de Navarra.
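
    The headline comparison (42/207 reattempts with routine treatment vs 48/203 with the TFP) can be checked with a standard chi-square test of independence, as sketched below; without a continuity correction this reproduces values close to the reported χ2=0.7, df=1, p=.412, although the authors' exact analysis may differ.

```python
# Reattempt rates from the abstract: 42/207 with routine treatment, 48/203 with
# the telephone follow-up programme; correction=False (no Yates correction)
# reproduces roughly the reported chi2=0.7, df=1, p=.412.
from scipy.stats import chi2_contingency

table = [[42, 207 - 42],   # routine treatment: reattempt / no reattempt
         [48, 203 - 48]]   # routine treatment + TFP
chi2, p, df, _ = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.2f}, df={df}, p={p:.3f}")
```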

    La riera de Breda


    MIRACLE at GeoCLEF Query Parsing 2007: Extraction and Classification of Geographical Information

    This paper describes the participation of the MIRACLE research consortium in the Query Parsing task of GeoCLEF 2007. Our system is composed of three main modules. First, the Named Geo-entity Identifier performs geo-entity identification and tagging, i.e., it extracts the “where” component of the geographical query, should there be any. This module is based on a gazetteer built from the Geonames geographical database and carries out a sequential process in three steps: geo-entity recognition, geo-entity selection and query tagging. Then, the Query Analyzer parses the tagged query to identify the “what” and “geo-relation” components by means of a rule-based grammar. Finally, a two-level multiclassifier first decides whether the query is indeed a geographical query and, if so, determines the query type according to the kind of information the user is assumed to be looking for: map, yellow page or information. Under a strict evaluation criterion in which a match must have all fields correct, our system reaches a precision of 42.8% and a recall of 56.6%, and our submission is ranked 1st out of 6 participants in the task. A detailed evaluation of the confusion matrices reveals that extra effort must be invested in “user-oriented” disambiguation techniques to improve the first-level binary classifier for detecting geographical queries, as it is a key component for eliminating many false positives.
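
    A toy version of the three-stage pipeline described above (gazetteer lookup for the “where” part, a rule for the geo-relation, and a keyword stand-in for the two-level classifier) is sketched below; the miniature gazetteer and keyword lists are invented, whereas the real system relies on the Geonames database and a rule-based grammar.

```python
# Toy three-stage parser: gazetteer lookup ("where"), relation matching
# ("geo-relation") and a keyword classifier for the query type; the tiny
# gazetteer and keyword lists are invented stand-ins for Geonames and the
# rule-based grammar used by the actual system.
GAZETTEER = {"madrid", "paris", "andalusia"}
GEO_RELATIONS = ["north of", "close to", "near", "in"]
TYPE_KEYWORDS = {"map": ["map", "route"], "yellow page": ["hotel", "restaurant", "shop"]}

def parse_geo_query(query):
    tokens = query.lower().split()
    where = next((t for t in tokens if t in GAZETTEER), None)
    if where is None:
        return {"geographical": False}            # first-level classifier: not geographical
    padded = f" {query.lower()} "
    georel = next((r for r in GEO_RELATIONS if f" {r} " in padded), None)
    rel_tokens = set((georel or "").split())
    what = " ".join(t for t in tokens if t != where and t not in rel_tokens)
    qtype = next((label for label, kws in TYPE_KEYWORDS.items()
                  if any(k in tokens for k in kws)), "information")
    return {"geographical": True, "what": what, "geo-relation": georel,
            "where": where, "type": qtype}

print(parse_geo_query("hotel in Paris"))       # yellow-page query about Paris
print(parse_geo_query("python programming"))   # rejected by the first-level check
```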