3 research outputs found
Use of ontologies to detect analysis patterns in conceptual models in digital component libraries
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação. This work presents a method for detecting analysis patterns (APs) in conceptual models using ontologies. An AP may or may not have been anticipated at the time the conceptual model was conceived. Even if system analysis (the phase in which the conceptual model emerges) is not guided by analysis patterns, it is possible to verify their occurrence within the models produced. This occurrence follows certain rules that are observed and presented in this work. To detect APs in conceptual models, the essential artifact of this method is an ontology. As a knowledge-representation tool, the ontology's role in CompogeMatch (the method presented in this work) is to identify the concepts present in the models submitted to the method. Once the APs present in the models are detected, it is possible to build indexes from them and use these indexes as filters in the retrieval process of digital libraries of components or conceptual software models. This offers an alternative to keyword search, which has limitations such as failing to identify synonyms. Finally, this research indicates how this search process can yield better results than keyword search when what is being sought is a conceptual model or, more precisely, software.
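The keyword-search limitation the abstract mentions (no synonym matching) is what an ontology-backed index addresses. A minimal sketch of the idea, using a hypothetical toy "ontology" of concept/synonym sets; the concept names are illustrative and not taken from CompogeMatch:

```python
# Toy illustration of ontology-assisted retrieval vs. plain keyword search.
# The ontology maps each concept to a set of equivalent terms (synonyms),
# so a query can match a model even when it uses a different word.

ONTOLOGY = {
    # concept      equivalent terms found in conceptual models
    "customer": {"customer", "client", "buyer"},
    "order":    {"order", "purchase", "sale"},
}

def keyword_match(query, model_terms):
    """Plain keyword search: exact term overlap only."""
    return bool(set(query) & set(model_terms))

def ontology_match(query, model_terms):
    """Expand each query term through the ontology before matching."""
    expanded = set(query)
    for term in query:
        for synonyms in ONTOLOGY.values():
            if term in synonyms:
                expanded |= synonyms
    return bool(expanded & set(model_terms))

model = ["client", "purchase", "item"]   # terms found in a conceptual model
query = ["customer", "order"]

print(keyword_match(query, model))   # False: no exact term overlap
print(ontology_match(query, model))  # True: synonyms bridge the gap
```

In CompogeMatch the ontology plays a richer role (identifying concepts and AP occurrence rules in submitted models), but the retrieval advantage over keyword search reduces to this kind of concept-level, rather than string-level, matching.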
Bridging semantic gap: learning and integrating semantics for content-based retrieval
Digital cameras have entered ordinary homes and produced an incredibly large number
of photos. As a typical example of a broad image domain, unconstrained consumer
photos vary significantly. Unlike professional or domain-specific images, the objects
in the photos are ill-posed, occluded, and cluttered with poor lighting, focus, and
exposure. Content-based image retrieval research has yet to bridge the semantic gap
between computable low-level information and high-level user interpretation.
In this thesis, we address the issue of semantic gap with a structured learning
framework to allow modular extraction of visual semantics. Semantic image regions
(e.g. face, building, sky) are learned statistically, detected directly from images
without segmentation, reconciled across multiple scales, and aggregated spatially to
form a compact semantic index. To circumvent the ambiguity and subjectivity in a
query, a new query method that allows spatial arrangement of visual semantics is
proposed. A query is represented as a disjunctive normal form of visual query terms
and processed using fuzzy set operators.
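The DNF query processing described above can be sketched with the standard fuzzy set operators, taking min as conjunction within a clause and max as disjunction across clauses; the membership scores and query below are made-up illustrations, not values from the thesis:

```python
# Fuzzy evaluation of a query in disjunctive normal form (DNF):
# an OR of AND-clauses over visual semantic terms. With fuzzy set
# operators, AND -> min of memberships, OR -> max over clauses.

def fuzzy_dnf_score(memberships, dnf_query):
    """memberships: term -> degree in [0, 1] for one image.
    dnf_query: list of clauses; each clause is a list of terms."""
    clause_scores = [
        min(memberships.get(term, 0.0) for term in clause)
        for clause in dnf_query
    ]
    return max(clause_scores)

# One image's detected semantic-region memberships (illustrative values).
image = {"face": 0.9, "sky": 0.7, "building": 0.2}

# Query: (face AND sky) OR (building)
query = [["face", "sky"], ["building"]]

print(fuzzy_dnf_score(image, query))  # 0.7: min(0.9, 0.7) beats the 0.2 clause
```

Ranking a collection then amounts to sorting images by this fused score, which lets a user express spatial/semantic intent without committing to a single crisp predicate.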
A drawback of supervised learning is the manual labeling of regions as training
samples. In this thesis, a new learning framework to discover local semantic patterns
and to generate their samples for training with minimal human intervention has been
developed. The discovered patterns can be visualized and used in semantic indexing.
In addition, three new class-based indexing schemes are explored. The winner-take-all scheme supports class-based image retrieval. The class relative scheme and
the local classification scheme compute inter-class memberships and local class patterns
as indexes for similarity matching, respectively. A Bayesian formulation is
proposed to unify local and global indexes in image comparison and ranking,
yielding superior image retrieval performance over either index alone.
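One way such a Bayesian unification of index scores could look, under the simplifying assumption (not stated in the abstract) that the local and global indexes are conditionally independent given the image class:

```python
# Naive-Bayes-style fusion of two per-class index scores. Treating the
# local-pattern score and the global score as independent likelihoods
# given the class, the posterior is proportional to their product times
# the class prior; images are then compared/ranked by the fused posterior.

def fuse(prior, local_lik, global_lik):
    """All arguments map class -> probability/likelihood.
    Returns the normalized posterior over classes."""
    unnorm = {c: prior[c] * local_lik[c] * global_lik[c] for c in prior}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

prior      = {"indoor": 0.5, "outdoor": 0.5}   # illustrative numbers
local_lik  = {"indoor": 0.2, "outdoor": 0.8}   # from local class patterns
global_lik = {"indoor": 0.3, "outdoor": 0.7}   # from global features

posterior = fuse(prior, local_lik, global_lik)
print(posterior)  # outdoor dominates once both indexes agree
```

The actual formulation in the thesis operates over its specific local and global indexes; this sketch only shows the general shape of combining evidence multiplicatively and renormalizing.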
Query-by-example experiments on 2400 consumer photos with 16 semantic queries
show that the proposed approaches have significantly better (18% to 55%) average
precisions than a high-dimension feature fusion approach. The thesis has paved
two promising research directions, namely the semantics design approach and the
semantics discovery approach. They form elegant dual frameworks that exploits
pattern classifiers in learning and integrating local and global image semantics
Toward a Visual Thesaurus
A thesaurus is a book containing synonyms in a given language; it provides similarity links when trying to retrieve articles or stories about a particular topic. A "visual thesaurus" works with pictures, not words. It aids in recognizing visually similar events, "visual synonyms," including both spatial and motion similarity. This paper describes a method for building such a tool, and recent research results in the MIT Media Lab which contribute toward this goal. The heart of the method is a learning system which gathers information by interacting with a user of a database. The learning system is also capable of incorporating audio and other perceptual information, ultimately constructing a representation of common sense knowledge.

1 Introduction

Collections of digital imagery are growing at a rapid pace. The contexts are broad, including areas such as entertainment (e.g. searching for a funny movie scene), education (e.g. hunting down illustrations for a book report), science (e.g. ..