
    A characterization of attribute evaluation in passes

    This paper describes the evaluation of semantic attributes in a bounded number of passes from left to right and/or from right to left over the derivation tree of a program. Evaluation strategies in which different instances of the same attribute in any derivation tree are restricted to be evaluated in one pass, with the same pass number for every derivation tree, are referred to as simple multi-pass, whereas the unrestricted pass-oriented strategies are referred to as pure multi-pass. A graph-theoretic characterization is given, showing in which cases an attribute grammar meets the simple multi-pass requirements and what the minimal pass numbers of its attributes are for a given sequence of pass directions. For the special cases where only left-to-right passes are made, or where left-to-right and right-to-left passes strictly alternate, new algorithms are developed that associate minimal pass numbers with attributes and, in case of failure, indicate the attributes that cause the rejection of the grammar. Mixing a simple multi-pass strategy with other evaluation strategies, in case the grammar is not simple multi-pass, is also discussed.
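
    The pass-number idea can be sketched concretely. Below is a minimal, hypothetical Python illustration (not the paper's algorithm): given a dependency graph over attributes in which an edge is flagged as "backward" when it cannot be satisfied within the same left-to-right pass, the minimal pass number of an attribute is one more than the largest number of backward edges on any dependency chain leading to it. The attribute names and the example graph are invented for illustration.

    def minimal_pass_numbers(deps):
        """deps maps an attribute to a list of (source_attribute, backward) pairs;
        backward=True means the dependency cannot be satisfied in the same
        left-to-right pass. Returns a mapping attribute -> minimal pass number."""
        passes = {}

        def pass_of(attr, seen=()):
            if attr in passes:
                return passes[attr]
            if attr in seen:
                raise ValueError("circular dependency involving " + attr)
            p = 1
            for src, backward in deps.get(attr, []):
                p = max(p, pass_of(src, seen + (attr,)) + (1 if backward else 0))
            passes[attr] = p
            return p

        for attr in deps:
            pass_of(attr)
        return passes

    # Invented example: 'uses' needs 'decls' coming from positions to its right,
    # so it must wait for a second left-to-right pass; 'code' follows in pass 2.
    print(minimal_pass_numbers({
        "decls": [],
        "uses": [("decls", True)],
        "code": [("uses", False)],
    }))  # {'decls': 1, 'uses': 2, 'code': 2}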

    Right hemisphere has the last laugh: neural dynamics of joke appreciation

    Understanding a joke relies on semantic, mnemonic, inferential, and emotional contributions from multiple brain areas. Anatomically constrained magnetoencephalography (aMEG), combining high-density whole-head MEG with anatomical magnetic resonance imaging, allowed us to estimate where the humor-specific brain activations occur and to understand their temporal sequence. Punch lines provided either funny, not funny (semantically congruent), or nonsensical (incongruent) replies to joke questions. Healthy subjects rated them as being funny or not funny. As expected, incongruous endings evoked the largest N400m in left-dominant temporo-prefrontal areas, due to integration difficulty. In contrast, funny punch lines evoked the smallest N400m during this initial lexical–semantic stage, consistent with their primed “surface congruity” with the setup question. In line with its sensitivity to ambiguity, the anteromedial prefrontal cortex may contribute to the subsequent “second take” processing, which, for jokes, presumably reflects detection of a clever “twist” contained in the funny punch lines. Joke-selective activity simultaneously emerges in the right prefrontal cortex, which may lead an extended bilateral temporo-frontal network in establishing the distant, unexpected, creative coherence between the punch line and the setup. This progression, from an initially promising but misleading integration from left frontotemporal associations, to medial prefrontal ambiguity evaluation and right prefrontal reprocessing, may reflect the essential tension and resolution underlying humor.

    Right temporal variant frontotemporal dementia is pathologically heterogeneous: a case-series and a systematic review

    Although the right temporal variant of frontotemporal dementia (rtvFTD) is characterised by distinct clinical and radiological features, its underlying histopathology remains elusive. Because the syndrome is considered a right-sided variant of semantic variant primary progressive aphasia (svPPA), TDP-43 type C pathology has been linked to it, but this has not been studied in detail in large cohorts. In this case series and systematic review, we report the autopsy results of five subjects diagnosed with rtvFTD from our cohort and 44 single rtvFTD cases from the literature. Macroscopic pathological evaluation of the combined results revealed that rtvFTD demonstrated either a frontotemporal or a temporal evolution, even when the degeneration initially started in the right temporal lobe. FTLD-TDP type C was the most common underlying pathology in rtvFTD; however, in 64% of cases, pathologies other than FTLD-TDP type C were present, such as Tau-MAPT and FTLD-TDP types A and B. Additionally, accompanying motor neuron or corticospinal tract degeneration was observed in 28% of rtvFTD patients. Our results show that, in contrast to the general assumption, rtvFTD might not be a pure FTLD-TDP type C disorder, unlike its left temporal counterpart svPPA. Large-sample pathological studies are warranted to understand the diverse pathologies of the right and left temporal variants of frontotemporal dementia.

    Evaluating automatically acquired f-structures against PropBank

    An automatic method for annotating the Penn-II Treebank (Marcus et al., 1994) with high-level Lexical Functional Grammar (Kaplan and Bresnan, 1982; Bresnan, 2001; Dalrymple, 2001) f-structure representations is presented by Burke et al. (2004b). The annotation algorithm is the basis for the automatic acquisition of wide-coverage and robust probabilistic approximations of LFG grammars (Cahill et al., 2004) and for the induction of subcategorisation frames (O’Donovan et al., 2004; O’Donovan et al., 2005). Annotation quality is, therefore, extremely important and to date has been measured against the DCU 105 and the PARC 700 Dependency Bank (King et al., 2003). The annotation algorithm achieves f-scores of 96.73% for complete f-structures and 94.28% for preds-only f-structures against the DCU 105, and 87.07% against the PARC 700 using the feature set of Kaplan et al. (2004). Burke et al. (2004a) provide a detailed analysis of these results. This paper presents an evaluation of the annotation algorithm against PropBank (Kingsbury and Palmer, 2002). PropBank identifies the semantic arguments of each predicate in the Penn-II Treebank and annotates their semantic roles. As PropBank was developed independently of any grammar formalism, it provides a platform for making more meaningful comparisons between parsing technologies than was previously possible. PropBank also allows a much larger-scale evaluation than the smaller DCU 105 and PARC 700 gold standards. To perform the evaluation, we first automatically converted the PropBank annotations into a dependency format. Second, we developed conversion software to produce PropBank-style semantic annotations in dependency format from the f-structures automatically acquired by the annotation algorithm from Penn-II. The evaluation was performed using the evaluation software of Crouch et al. (2002) and Riezler et al. (2002). Using the Penn-II Wall Street Journal Section 24 as the development set, we currently achieve an f-score of 76.58% against PropBank for the Section 23 test set.
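
    The kind of scoring reported above can be illustrated with a small, hypothetical Python sketch: precision, recall, and f-score computed over sets of predicate-argument dependency triples. The triples below are invented; the actual evaluation used the software of Crouch et al. (2002) and Riezler et al. (2002).

    def f_score(gold, predicted):
        """F-score over matching dependency triples, a simplified stand-in for
        the triple-based evaluation described above."""
        gold, predicted = set(gold), set(predicted)
        matched = len(gold & predicted)
        precision = matched / len(predicted) if predicted else 0.0
        recall = matched / len(gold) if gold else 0.0
        if precision + recall == 0.0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Invented predicate-argument triples for a single sentence.
    gold = {("give", "ARG0", "John"), ("give", "ARG1", "book"), ("give", "ARG2", "Mary")}
    pred = {("give", "ARG0", "John"), ("give", "ARG1", "book"), ("give", "ARG2", "her")}
    print(round(f_score(gold, pred), 4))  # 0.6667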

    A review of user interface adaption in current semantic web browsers

    The semantic web is an example of an innumerable corpus because it contains innumerable subjects expressed using innumerable ontologies. This paper reviews current semantic web browsers to see whether they can adaptively show meaningful data presentations to users. The paper also seeks to discover whether current semantic web browsers provide a rich enough set of capabilities for future user interface work to be built upon.

    Improving Facial Attribute Prediction using Semantic Segmentation

    Attributes are semantically meaningful characteristics whose applicability widely crosses category boundaries. They are particularly important in describing and recognizing concepts for which no explicit training example is given, e.g., zero-shot learning. Additionally, since attributes are human-describable, they can be used for efficient human-computer interaction. In this paper, we propose to employ semantic segmentation to improve facial attribute prediction. The core idea lies in the fact that many facial attributes describe local properties. In other words, the probability of an attribute appearing in a face image is far from uniform in the spatial domain. We build our facial attribute prediction model jointly with a deep semantic segmentation network. This harnesses the localization cues learned by the semantic segmentation to guide the attention of the attribute prediction to the regions where different attributes naturally show up. As a result of this approach, in addition to recognition, we are able to localize the attributes, despite merely having access to image-level labels (weak supervision) during training. We evaluate our proposed method on the CelebA and LFWA datasets and achieve superior results to the prior art. Furthermore, we show that in the reverse problem, semantic face parsing improves when facial attributes are available. This reaffirms the need to jointly model these two interconnected tasks.
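
    One plausible way to realise such a coupling, sketched here as an assumption rather than the authors' actual architecture, is to let a segmentation branch produce soft face-region masks that spatially pool the shared features before attribute classification. Layer sizes, the number of regions, and the 40-attribute output (as in CelebA) are illustrative; the snippet uses PyTorch.

    import torch
    import torch.nn as nn

    class SegGuidedAttributeNet(nn.Module):
        """Toy model: segmentation masks act as spatial attention for attributes."""
        def __init__(self, num_regions=7, num_attributes=40):
            super().__init__()
            self.backbone = nn.Sequential(            # shared convolutional features
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.seg_head = nn.Conv2d(64, num_regions, 1)        # per-pixel region scores
            self.attr_head = nn.Linear(64 * num_regions, num_attributes)

        def forward(self, x):
            feats = self.backbone(x)                  # B x 64 x H x W
            seg = self.seg_head(feats)                # B x R x H x W (region logits)
            masks = seg.softmax(dim=1)                # soft region masks
            # Pool the shared features inside each predicted region.
            pooled = torch.einsum("bchw,brhw->brc", feats, masks) / (
                masks.sum(dim=(2, 3)).unsqueeze(-1) + 1e-6)      # B x R x 64
            attrs = self.attr_head(pooled.flatten(1))             # B x num_attributes
            return attrs, seg

    model = SegGuidedAttributeNet()
    attr_logits, seg_logits = model(torch.randn(2, 3, 64, 64))
    print(attr_logits.shape, seg_logits.shape)  # torch.Size([2, 40]) torch.Size([2, 7, 64, 64])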