7 research outputs found

    Annotating Cognates and Etymological Origin in Turkic Languages

    Full text available
    Turkic languages exhibit extensive and diverse etymological relationships among lexical items. These relationships make Turkic languages promising for exploring automated translation lexicon induction by leveraging cognate and other etymological information. However, due to the extent and diversity of the types of relationships between words, it is not clear how to annotate such information. In this paper, we present a methodology for annotating cognates and etymological origin in Turkic languages. Our method strives to balance the amount of research effort the annotator expends with the utility of the annotations for supporting research on improving automated translation lexicon induction.

    Modeling Indirect Evidence

    No full text
    This thesis develops a threshold-based semantics for the Turkish indirect evidential marker that predicts unexpected discrepancies in its distribution. The marker’s behavior in interrogatives, so-called interrogative flip, is shown in turn to follow from the structure of discourse, as formulated by models that incorporate speaker commitment. I first establish that the indirect evidential marks information for which a speaker’s evidence is at best second-best, given general knowledge about the world. I then formalize this generalization in modal semantic terms and show that it explains the marker’s canonical absence in reports of well-known historical fact, as well as its optional presence in evaluative and mirative expressions. After examining the account’s predictions on the level of discourse, I discuss two corollaries: that evidential content in Turkish is not propositional, and that an at-best-second-best account brings to light anaphoric parallels between indirect evidentiality and the present perfect relative tense.

    Fast Human Detection for Indoor Mobile Robots Using Depth Images

    No full text
    A human detection algorithm running on an indoor mobile robot has to address challenges including occlusions due to cluttered environments, changing backgrounds due to the robot’s motion, and limited on-board computational resources. We introduce a fast human detection algorithm for mobile robots equipped with depth cameras. First, we segment the raw depth image using a graph-based segmentation algorithm. Next, we apply a set of parameterized heuristics to filter and merge the segmented regions to obtain a set of candidates. Finally, we compute a Histogram of Oriented Depth (HOD) descriptor for each candidate, and test for human presence with a linear SVM. We experimentally evaluate our approach on a publicly available dataset of humans in an open area as well as our own dataset of humans in a cluttered cafe environment. Our algorithm performs comparably well on a single CPU core against another HOD-based algorithm that runs on a GPU, even when the number of training examples is decreased by half. We discuss the impact of the number of training examples on performance, and demonstrate that our approach is able to detect humans in different postures (e.g., standing, walking, sitting) and with occlusions.
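    The detection pipeline above (segment, filter candidates by heuristics, describe with HOD, classify with a linear SVM) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the graph-based segmentation step is assumed to be done already and supplied as candidate bounding boxes, `hod_descriptor` is a simplified HOD (per-cell histograms of depth-gradient orientations, HOG-style), and `classifier` stands in for a trained linear SVM decision function. All function names and parameters here are hypothetical.

    ```python
    import numpy as np

    def hod_descriptor(depth_window, cell_size=8, n_bins=9):
        """Simplified Histogram of Oriented Depth: like HOG, but gradients
        are taken on the depth image rather than intensity."""
        gy, gx = np.gradient(depth_window.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
        h, w = depth_window.shape
        feats = []
        for i in range(0, h - cell_size + 1, cell_size):
            for j in range(0, w - cell_size + 1, cell_size):
                m = mag[i:i + cell_size, j:j + cell_size].ravel()
                a = ang[i:i + cell_size, j:j + cell_size].ravel()
                hist, _ = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
                feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # L2-normalize per cell
        return np.concatenate(feats)

    def detect_humans(depth_image, segments, classifier, min_area=200):
        """Filter segmented regions with a simple size heuristic, then
        classify each surviving candidate's HOD descriptor."""
        detections = []
        for (top, left, bottom, right) in segments:
            if (bottom - top) * (right - left) < min_area:
                continue  # heuristic: discard segments too small to be a person
            # resample the candidate region to a fixed 64x32 window
            rows = np.linspace(top, bottom - 1, 64).astype(int)
            cols = np.linspace(left, right - 1, 32).astype(int)
            window = depth_image[np.ix_(rows, cols)]
            if classifier(hod_descriptor(window)) > 0:  # SVM decision value
                detections.append((top, left, bottom, right))
        return detections
    ```

    In practice `classifier` would be the decision function of a linear SVM trained on HOD descriptors of labeled human and non-human segments; with a 64x32 window, 8-pixel cells, and 9 bins, the descriptor has 8 * 4 * 32-cell layout, i.e. 288 dimensions.
    
    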
