15 research outputs found

    Productive Vision: Methods for Automatic Image Comprehension

    Image comprehension is the ability to summarize, translate, and answer basic questions about images. Using original techniques for scene object parsing, material labeling, and activity recognition, a system can gather information about the objects and actions in a scene. When this information is integrated into a deep knowledge base capable of inference, the system becomes capable of performing tasks that, when performed by students, are considered by educators to demonstrate comprehension. The vision components of the system consist of the following: object scene parsing by means of visual filters, material scene parsing by superpixel segmentation and kernel descriptors, and activity recognition by action grammars. These techniques are characterized and compared with the state of the art in their respective fields. The output of the vision components is a list of assertions in a Cyc microtheory. By reasoning over these assertions and the rest of the Cyc knowledge base, the system is able to perform a variety of tasks, including the following: recognizing that essential parts of objects are likely present in the scene despite not having an explicit detector for them; recognizing the likely presence of objects due to the presence of their essential parts; improving estimates of both object and material labels by incorporating knowledge about their typical pairings; labeling ambiguous objects with a more general label that encompasses both possible labelings; answering questions about the scene that require inference and giving justifications for the answers in natural language; creating a visual representation of the scene in a new medium; and recognizing scene similarity even when there is little visual similarity.
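
    A minimal sketch of the part-whole reasoning described in this abstract, assuming a hypothetical ESSENTIAL_PARTS table, an infer_wholes rule, and simple confidence arithmetic rather than the paper's actual Cyc microtheory and inference engine:

```python
# Minimal sketch (not the paper's Cyc pipeline): illustrates how detected
# essential parts could raise belief in a whole object that has no detector.
# All relations, names, and weights here are hypothetical.

# Hypothetical part-whole knowledge: whole object -> set of essential parts.
ESSENTIAL_PARTS = {
    "Bicycle": {"Wheel", "Handlebar", "BicycleFrame"},
    "Table": {"TableTop", "TableLeg"},
}

def infer_wholes(detections, min_fraction=0.5):
    """Given {label: confidence} from vision detectors, assert wholes whose
    essential parts are sufficiently present (an illustrative rule, not Cyc)."""
    inferred = {}
    for whole, parts in ESSENTIAL_PARTS.items():
        found = [detections[p] for p in parts if p in detections]
        if len(found) / len(parts) >= min_fraction:
            # Belief in the whole is the mean confidence of its detected parts.
            inferred[whole] = sum(found) / len(found)
    return inferred

if __name__ == "__main__":
    scene = {"Wheel": 0.9, "Handlebar": 0.7, "Chair": 0.8}
    # Prints a Bicycle belief of about 0.8 despite no explicit Bicycle detector.
    print(infer_wholes(scene))
```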

    Complex adaptive systems based data integration : theory and applications

    Data Definition Languages (DDLs) have been created and used to represent data in programming languages and in database dictionaries. This representation includes descriptions in the form of data fields and relations in the form of a hierarchy, with the common exception of relational databases, where relations are flat. Network computing created an environment that enables relatively easy and inexpensive exchange of data. What followed was the creation of new DDLs claiming better support for automatic data integration. It is uncertain from the literature whether any real progress has been made toward achieving an ideal state or limit condition of automatic data integration. This research asserts that difficulties in accomplishing integration are indicative of socio-cultural systems in general and are caused by some measurable attributes common in DDLs. This research’s main contributions are: (1) a theory of data integration requirements to fully support automatic data integration from autonomous heterogeneous data sources; (2) the identification of measurable, related abstract attributes (Variety, Tension, and Entropy); and (3) the development of tools to measure them. The research uses a multi-theoretic lens to define and articulate these attributes and their measurements. The proposed theory is founded on the Law of Requisite Variety, Information Theory, Complex Adaptive Systems (CAS) theory, Sowa’s Meaning Preservation framework, and Zipf distributions of words and meanings. Using the theory, the attributes, and their measures, this research proposes a framework for objectively evaluating the suitability of any data definition language with respect to degrees of automatic data integration. The research examines thirteen data structures constructed with various DDLs from the 1960s to date. No DDL examined (and therefore no DDL similar to those examined) is designed to satisfy the Law of Requisite Variety. No DDL examined is designed to support CAS evolutionary processes that could result in fully automated integration of heterogeneous data sources. There is no significant difference in measures of Variety, Tension, and Entropy among the DDLs investigated in this research. A direction for overcoming the common limitations discovered in this research is suggested and tested by proposing GlossoMote, a theoretical, mathematically sound description language that satisfies the data integration theory requirements. GlossoMote is not merely a new syntax; it is a drastic departure from existing DDL constructs. The feasibility of the approach is demonstrated with a small-scale experiment and evaluated using the proposed assessment framework and other means. The promising results call for additional research to evaluate the commercial potential of GlossoMote’s approach.
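
    The abstract grounds its Entropy attribute in Information Theory and Zipf distributions but does not define the measure itself; the sketch below shows one plausible illustration only, computing Shannon entropy over the terms appearing in a schema's field names. The term_entropy function, the identifier-splitting rule, and the example schema are assumptions, not the dissertation's actual metric or tooling.

```python
# Illustrative sketch only: quantify an "Entropy"-style attribute of a data
# definition as Shannon entropy over the frequency of terms used in its field
# names. This is not the dissertation's definition of Entropy.
import math
import re
from collections import Counter

def term_entropy(field_names):
    """Shannon entropy (bits) of the distribution of terms in field names."""
    terms = []
    for name in field_names:
        # Split camelCase and snake_case identifiers into lowercase terms.
        terms += [t.lower() for t in re.findall(r"[A-Za-z][a-z]*", name)]
    counts = Counter(terms)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    # Hypothetical schema: a skewed (Zipf-like) reuse of terms lowers entropy.
    schema = ["customerName", "customer_id", "orderDate", "orderTotal"]
    print(f"{term_entropy(schema):.2f} bits")
```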

    Medical Informatics and Data Analysis

    During recent years, the use of advanced data analysis methods has increased in clinical and epidemiological research. This book emphasizes the practical aspects of new data analysis methods and provides insight into new challenges in biostatistics, epidemiology, health sciences, dentistry, and clinical medicine. It offers a readable text, giving advice on the reporting of new data analysis methods and on data presentation. The book consists of 13 articles, each of which is self-contained and may be read independently according to the needs of the reader. It is essential reading for postgraduate students as well as for researchers from medicine and other sciences where statistical data analysis plays a central role.

    Ontology engineering and routing in distributed knowledge management applications


    Ontology and HMAX Features-based Image Classification using Merged Classifiers

    The Bag-of-Visual-Words (BoVW) model has been widely used in the area of image classification; it relies on building a visual vocabulary. Recently, attention has shifted to the use of advanced architectures characterized by multilevel processing. The HMAX model (Hierarchical Max-pooling model) has attracted a great deal of attention in image classification. Recent works in image classification consider the integration of ontologies and semantic structures useful for improving classification. In this paper, we propose an approach to image classification based on ontology and HMAX features using merged classifiers. Our contribution resides in exploiting ontological relationships between image categories in line with training visual-feature classifiers, and in merging the outputs of hypernym-hyponym classifiers to achieve better discrimination between classes. Our purpose is to improve image classification by using ontologies. Several strategies have been tested, and the results obtained show that our proposal improves image classification. Results based on our ontology outperform those obtained by baseline methods without an ontology. Moreover, the deep learning network Inception-v3 was evaluated and compared with our method; the classification results obtained by our method outperform Inception-v3 for some image classes.
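
    A small sketch of merging hypernym- and hyponym-level classifier outputs along an ontology, in the spirit of the abstract. The class names, the tiny HYPERNYM_OF mapping, and the weighted-sum fusion rule in merge_scores are assumptions for illustration; the paper's exact merging strategy and its HMAX feature extraction are not reproduced here.

```python
# Illustrative fusion of scores from a fine-grained (hyponym) classifier and a
# coarse (hypernym) classifier using a hypothetical ontology. Not the paper's
# exact scheme; weights and classes are stand-ins.

# Hypothetical hyponym -> hypernym relations.
HYPERNYM_OF = {"dog": "animal", "cat": "animal", "car": "vehicle", "bus": "vehicle"}

def merge_scores(hyponym_probs, hypernym_probs, alpha=0.7):
    """Re-score each fine-grained class by mixing its own probability with the
    probability of its hypernym, then renormalize over the fine-grained classes."""
    merged = {
        cls: alpha * p + (1 - alpha) * hypernym_probs.get(HYPERNYM_OF[cls], 0.0)
        for cls, p in hyponym_probs.items()
    }
    total = sum(merged.values())
    return {cls: s / total for cls, s in merged.items()}

if __name__ == "__main__":
    fine = {"dog": 0.35, "cat": 0.30, "car": 0.25, "bus": 0.10}   # hyponym classifier
    coarse = {"animal": 0.80, "vehicle": 0.20}                    # hypernym classifier
    # The animal classes gain probability mass over the vehicle classes.
    print(merge_scores(fine, coarse))
```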