
    Natural language querying for video databases

    Video databases have become popular in various areas due to recent advances in technology, and video archive systems need user-friendly interfaces for retrieving video frames. In this paper, a user interface to a video database system based on natural language processing (NLP) is described. The video database is built on a content-based spatio-temporal video data model. The data model focuses on semantic content, which includes objects, activities, and the spatial properties of objects. Spatio-temporal relationships between video objects, as well as trajectories of moving objects, can be queried with this data model. In this video database system, a natural language interface enables flexible querying. The queries, given as English sentences, are parsed using a link parser. The semantic representations of the queries are extracted from their syntactic structures using information extraction techniques, and these representations are used to call the relevant parts of the underlying video database system to return the results of the queries. Not only exact matches but also similar objects and activities are returned from the database with the help of the conceptual ontology module. This module is implemented using a distance-based method of semantic similarity search on the domain-independent ontology WordNet. © 2008 Elsevier Inc. All rights reserved.
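    The distance-based similarity idea behind the conceptual ontology module can be sketched as follows. The tiny hypernym tree below is hypothetical and stands in for WordNet, which the paper actually uses; the scoring is a simple path-distance measure, not necessarily the paper's exact formula.

    ```python
    # child -> parent (hypernym) links in a toy ontology (hypothetical data)
    HYPERNYMS = {
        "car": "vehicle",
        "truck": "vehicle",
        "vehicle": "object",
        "person": "object",
    }

    def path_to_root(concept):
        """Return the list of concepts from `concept` up to the root."""
        path = [concept]
        while path[-1] in HYPERNYMS:
            path.append(HYPERNYMS[path[-1]])
        return path

    def similarity(a, b):
        """Path-distance similarity: 1 / (1 + edges on the path through the
        lowest common ancestor). Returns 0.0 if the concepts are unrelated."""
        pa, pb = path_to_root(a), path_to_root(b)
        ancestors = set(pb)
        for i, node in enumerate(pa):
            if node in ancestors:
                return 1.0 / (1 + i + pb.index(node))
        return 0.0

    print(similarity("car", "truck"))   # siblings under "vehicle": high score
    print(similarity("car", "person"))  # related only through "object": lower
    ```

    Ranking query terms against database objects by such a score is what lets the module return similar, not just exactly matching, objects and activities.
    
    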

    Tactical Generation in a Free Constituent Order Language

    This paper describes tactical generation in Turkish, a free constituent order language in which the order of the constituents may change according to the information structure of the sentences to be generated. In the absence of any information regarding the information structure of a sentence (i.e., topic, focus, background, etc.), the constituents obey a default order, but the order is almost freely changeable, depending on the constraints of the text flow or discourse. We have used a recursively structured finite state machine for handling the changes in constituent order, implemented as a right-linear grammar backbone. Our implementation environment is the GenKit system, developed at the Carnegie Mellon University Center for Machine Translation. Morphological realization has been implemented using an external morphological analysis/generation component which performs concrete morpheme selection and handles morphographemic processes.
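    The interaction between a default order and information structure can be illustrated with a toy function. The actual system compiles a recursively structured finite state machine into a right-linear grammar in GenKit; the simplified ordering rule below (topic sentence-initial, focus immediately preverbal, as commonly described for Turkish) is only a sketch of the idea.

    ```python
    DEFAULT_ORDER = ["subject", "object", "verb"]  # default Turkish SOV order

    def order_constituents(topic=None, focus=None):
        """Place the topic sentence-initially and the focus immediately
        before the verb; other constituents keep their default order."""
        rest = [c for c in DEFAULT_ORDER if c not in (topic, focus, "verb")]
        ordered = []
        if topic:
            ordered.append(topic)
        ordered += rest
        if focus and focus != topic:
            ordered.append(focus)
        ordered.append("verb")
        return ordered

    print(order_constituents())                # no information structure: SOV
    print(order_constituents(topic="object"))  # object fronted as topic
    ```
    
    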

    Design and evaluation of an ontology based information extraction system for radiological reports

    This paper describes an information extraction system that extracts the available information in free-text Turkish radiology reports and converts it into a structured information model using manually created extraction rules and a domain ontology. The ontology provides flexibility in the design of the extraction rules and determines the information model for the extracted semantic information. Although the system mainly concentrates on abdominal radiology reports, it can be used in other fields of medicine by adapting its ontology and extraction rule set. We achieved very high precision and recall in the evaluation of the developed system on unseen radiology reports. © 2010 Elsevier Ltd. All rights reserved.
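    A manually created extraction rule mapping report text into an ontology-defined information model might look like the following. The concept name, slot names, rule, and sample sentence are all invented for illustration; the real system operates on Turkish reports with a much richer rule set.

    ```python
    import re

    # (hypothetical ontology concept, regex whose named groups are the slots
    #  of that concept's information model)
    RULES = [
        ("LesionFinding",
         re.compile(r"(?P<size>\d+(\.\d+)?)\s*cm\s+(?P<type>\w+)\s+in\s+the\s+(?P<organ>\w+)")),
    ]

    def extract(report):
        """Apply each extraction rule and return structured records keyed
        by the ontology concept that defines their information model."""
        records = []
        for concept, pattern in RULES:
            for m in pattern.finditer(report):
                records.append({"concept": concept, **m.groupdict()})
        return records

    print(extract("There is a 2.5 cm cyst in the liver."))
    ```

    Keeping the slot inventory in the ontology rather than in the rules is what gives the design its flexibility: a new medical domain needs new rules and concepts, but not a new extraction engine.
    
    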

    Using lexical chains for keyword extraction

    Keywords can be considered condensed versions of documents and short forms of their summaries. In this paper, the problem of automatically extracting keywords from documents is treated as a supervised learning task. A lexical chain holds a set of semantically related words of a text, and it can be said that a lexical chain represents the semantic content of a portion of the text. Although lexical chains have been extensively used in text summarization, their use for keyword extraction has not been fully investigated. In this paper, a keyword extraction technique that uses lexical chains is described, and encouraging results are obtained. © 2007 Elsevier Ltd. All rights reserved.
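    The core of the lexical-chain idea can be sketched in a few lines. The paper casts the task as supervised learning over chain-based features; the toy version below shows only the unsupervised core: group semantically related words into chains and take the strongest chain's words as keyword candidates. The relatedness table is hand-made for illustration (in practice WordNet-style relations would supply it).

    ```python
    # hypothetical relatedness table standing in for WordNet relations
    RELATED = {
        "car": {"vehicle", "engine"},
        "vehicle": {"car", "engine"},
        "engine": {"car", "vehicle"},
        "banana": {"fruit"},
        "fruit": {"banana"},
    }

    def build_chains(words):
        """Greedily append each word to the first chain containing a
        related word; otherwise start a new chain."""
        chains = []
        for w in words:
            for chain in chains:
                if any(w in RELATED.get(c, set()) or c in RELATED.get(w, set())
                       for c in chain):
                    chain.append(w)
                    break
            else:
                chains.append([w])
        return chains

    def keywords(words):
        """Return the words of the longest (strongest) chain as candidates."""
        return max(build_chains(words), key=len)

    print(keywords(["car", "banana", "engine", "vehicle", "fruit"]))
    ```
    
    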

    Computational model for heat transfer in the human eye using the finite element method

    In this work a finite element model of the human eye is presented. A thermal analysis was performed to capture the temperature variation in the human eye. The model was created using the advanced finite element program ABAQUS. In the model, each of the eye's components (cornea, sclera, lens, iris, aqueous and vitreous humor) has its own material properties, specific boundary conditions were applied, and the interactions between the components are incorporated. Comparisons were made with the available experimental results. The results show that the temperature in the components of the human eye varies with time, and that the front and back of the cornea exhibit different temperatures. The results also show a temperature difference between the apex of the posterior corneal surface and the region of the posterior surface adjacent to the sclera.
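    The kind of transient thermal analysis described can be illustrated with a minimal one-dimensional conduction sketch. The real model is a 3-D ABAQUS finite element model with separate properties per eye component; the explicit finite-difference scheme, grid, and parameter values below are illustrative only.

    ```python
    n = 11             # grid points across a 1-D slab
    alpha = 0.1        # thermal diffusivity (arbitrary units)
    dx, dt = 1.0, 1.0  # spacing and time step (alpha*dt/dx**2 <= 0.5: stable)

    # initial temperature: body temperature inside, cooler exposed boundary
    T = [37.0] * n
    T[0] = 25.0

    for step in range(100):
        Tn = T[:]
        for i in range(1, n - 1):
            # explicit update of the heat equation dT/dt = alpha * d2T/dx2
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
        T = Tn  # boundary values T[0] and T[-1] stay fixed

    # temperature now varies smoothly between the two boundary values
    print([round(t, 1) for t in T])
    ```
    
    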

    A Plasticity-Damage Model for Plain Concrete

    A plastic-damage model for plain concrete is developed in this work. The model uses two different yield criteria: one for plasticity and one for damage. To account for both compression and tension loading, the damage criterion is divided into two parts, one for compression and one for tension; the superscripts (+) and (-) are used to denote the tension and compression cases, respectively. The total stress is decomposed into tension and compression components, and the total strain is decomposed into elastic and plastic parts. The strain equivalence concept is used, such that the strains in the effective (undamaged) and damaged configurations are equal to each other. The formulation is extended from scalar damage to a second-order damage tensor. The Lubliner plasticity model is adopted. A numerical algorithm is coded in the user subroutine UMAT and implemented in the advanced finite element program ABAQUS. Numerical simulations are conducted for normal and high strength concrete, and the proposed model is used to compare the two. In addition, three-point and four-point bending tests on notched beams are analyzed to obtain the damage evolution across the beams, using two different meshes, a coarse one and a dense one. The damage evolution in the beams is shown at different steps of loading. In all examples, the results are compared with available experimental data and show very good correlation; the damage evolution across the beams closely matches the experimental crack band, indicating the accuracy of the method. The model is also computationally efficient and consumes minimal computational time.
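    The strain-equivalence idea at the heart of such models can be written down in scalar form: the nominal stress is the effective (undamaged) stress reduced by the damage variable, sigma = (1 - d) * E * eps. The full model uses a second-order damage tensor, separate tension/compression criteria, and Lubliner plasticity; the linear damage evolution law and the parameter values below are assumptions for illustration only.

    ```python
    E = 30e3        # Young's modulus (MPa), a typical value for concrete
    eps_0 = 1e-4    # damage threshold strain (assumed)
    eps_u = 1e-3    # strain at full damage (assumed)

    def damage(eps):
        """Linear damage evolution between threshold and ultimate strain."""
        if eps <= eps_0:
            return 0.0
        return min((eps - eps_0) / (eps_u - eps_0), 1.0)

    def stress(eps):
        """Nominal stress from the effective stress via strain equivalence:
        sigma = (1 - d) * sigma_effective = (1 - d) * E * eps."""
        return (1.0 - damage(eps)) * E * eps

    for eps in (5e-5, 2e-4, 5e-4, 1e-3):
        print(f"eps={eps:.0e}  d={damage(eps):.2f}  sigma={stress(eps):.2f} MPa")
    ```

    The softening branch (stress falling as damage grows) emerges naturally from the (1 - d) factor, which is what a UMAT implementation of such a model computes at each integration point.
    
    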

    Generalizing predicates with string arguments

    The least general generalization (LGG) of strings may cause over-generalization in the generalization process of the clauses of predicates with string arguments. We propose a specific generalization (SG) for strings to reduce over-generalization. SGs of strings are used in the generalization of a set of strings representing the arguments of a set of positive examples of a predicate with string arguments. To create the SG of two strings, first a unique match sequence between the strings is found. A unique match sequence of two strings consists of similarities and differences, representing the similar and differing parts of those strings. The differences in the unique match sequence are then replaced to create the SG of the strings. In the generalization process, a coverage algorithm based on SGs of strings, or learning heuristics based on match sequences, are used. © Springer Science + Business Media, LLC 2006.
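    The similarities/differences decomposition can be sketched with the standard library. Note the matching below uses difflib rather than the paper's unique match sequence algorithm, so it only approximates the described method; the variable symbol is a placeholder.

    ```python
    import difflib

    def specific_generalization(s1, s2, var="X"):
        """Keep common substrings (similarities) verbatim and abstract each
        differing part into a variable, yielding an SG-like pattern."""
        sm = difflib.SequenceMatcher(None, s1, s2, autojunk=False)
        parts = []
        for op, i1, i2, j1, j2 in sm.get_opcodes():
            if op == "equal":
                parts.append(s1[i1:i2])   # similarity: keep verbatim
            else:
                parts.append(var)         # difference: generalize to a variable
        return "".join(parts)

    print(specific_generalization("ab1cd", "ab22cd"))  # common "ab"/"cd" kept
    ```

    Replacing only the differing parts is what makes the result more specific than an LGG over strings, which is the point of the proposal.
    
    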

    Well-differentiated abdominal liposarcoma: experience of a tertiary care center

    BACKGROUND: We present abdominal liposarcoma cases diagnosed and managed in a tertiary care center, together with a literature review of the main features of this tumor. METHODS: Chart reviews of eight cases were conducted, and clinical, surgical, histopathological, and follow-up data were recorded. RESULTS: Complete surgical resection was performed, with adjacent organ resection in 25% of cases, and radiotherapy was not administered. Recurrence developed in only one case, and that patient died after 2 years and 3 months; the other cases remain under follow-up without recurrence. Histopathological examinations revealed findings of well-differentiated liposarcoma. CONCLUSIONS: In our surgical experience, surgical margin positivity may not be a determining factor for the survival of patients with well-differentiated liposarcoma, and in the absence of macroscopic invasion, adjacent organ resection may not be required. Radiotherapy may not be needed when complete resection of the abdominal mass is achieved.
