Structured Knowledge Representation for Image Retrieval
We propose a structured approach to the problem of retrieval of images by
content and present a description logic that has been devised for the semantic
indexing and retrieval of images containing complex objects. As other
approaches do, we start from low-level features extracted with image analysis
to detect and characterize regions in an image. However, in contrast with
feature-based approaches, we provide a syntax to describe segmented regions as
basic objects and complex objects as compositions of basic ones. Then we
introduce a companion extensional semantics for defining reasoning services,
such as retrieval, classification, and subsumption. These services can be used
for both exact and approximate matching, using similarity measures. Using our
logical approach as a formal specification, we implemented a complete
client-server image retrieval system, which allows a user to pose both queries
by sketch and queries by example. A set of experiments has been carried out on
a testbed of images to assess the retrieval capabilities of the system in
comparison with expert users' rankings. Results are presented using a
well-established measure of quality borrowed from textual information
retrieval.
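The idea of composing basic region descriptions into complex objects and reasoning about subsumption between descriptions can be illustrated with a minimal sketch (hypothetical names and a set-based approximation, not the paper's actual description logic):

```python
# Minimal sketch: complex objects as compositions of basic objects, with a
# naive subsumption check. All names and the feature-set encoding are
# illustrative assumptions, not the paper's syntax or semantics.
from dataclasses import dataclass

@dataclass(frozen=True)
class BasicObject:
    """A segmented region characterized by low-level features."""
    name: str
    features: frozenset  # e.g. frozenset({"round", "red"})

@dataclass
class ComplexObject:
    """A complex object described as a composition of basic objects."""
    name: str
    parts: list

def subsumes(general: ComplexObject, specific: ComplexObject) -> bool:
    """`general` subsumes `specific` if every part required by `general`
    is matched by some part of `specific` carrying at least its features."""
    return all(
        any(g.features <= s.features for s in specific.parts)
        for g in general.parts
    )

wheel = BasicObject("wheel", frozenset({"round"}))
red_wheel = BasicObject("wheel", frozenset({"round", "red"}))
vehicle = ComplexObject("vehicle", [wheel])
red_car = ComplexObject("red_car", [red_wheel])
print(subsumes(vehicle, red_car))  # True: red_car is more specific
```

A classification service would then place a new image description under the most specific stored descriptions that subsume it; retrieval by sketch amounts to finding stored descriptions subsumed by the query.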
Medical diagnoses with a Cartographic Oriented Model
The human body is composed of several systems and organs, each with a specific and well-located
position within it. Each organ is usually related to one or more kinds of physiological data. There is a subtle
spatial interdependency in the human body's structure and behaviour. Because of this, doctors usually
perform a spatial analysis when diagnosing a disease in a patient: the doctor combines the patient's
medical data, in effect performing "implicit" algebraic map operations. Despite this, most of the
models used to analyze, process and visualize these data do not take into account the strong
spatial interdependency inherent in the human body's functioning. These models usually treat
morphological and physiological data in a fully autonomous and isolated way. This happens because
they are not "spatially" oriented and do not interpret the human body as a 3D map composed
of different parts and layers of information. The possibility of combining these layers using spatial
algebraic operations introduces a new degree of insight into the information. The main goal of the CHUB
(Cartographic Human Body) model is to introduce a cartographic approach that helps doctors to analyse,
visualize and diagnose illnesses of the human body.
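The "algebraic map operation" idea can be made concrete with a toy example (the layers and thresholds below are invented for illustration, not part of the CHUB model):

```python
# Toy sketch of cartographic map algebra over body layers: two co-registered
# 2D layers over the same body region are combined cell-by-cell. Layer names
# and values are hypothetical.
import numpy as np

perfusion = np.array([[0.9, 0.4], [0.8, 0.2]])     # blood perfusion index
inflammation = np.array([[0.1, 0.7], [0.0, 0.9]])  # inflammation marker

# Local map-algebra operation: flag cells where perfusion is low AND
# inflammation is high as candidate problem areas.
risk = (perfusion < 0.5) & (inflammation > 0.5)
print(risk)
# [[False  True]
#  [False  True]]
```

This is exactly the kind of overlay a GIS performs on map layers; the model's contribution is treating morphological and physiological data as such co-registered layers.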
Fuzzy spectral and spatial feature integration for classification of nonferrous materials in hyperspectral data
Hyperspectral data allows the construction of more elaborate models to sample the properties of the nonferrous materials than the standard RGB color representation. In this paper, the nonferrous waste materials are studied as they cannot be sorted by classical procedures due to their color, weight and shape similarities. The experimental results presented in this paper reveal that factors such as the various levels of oxidization of the waste materials and the slight differences in their chemical composition preclude the use of the spectral features in a simplistic manner for robust material classification. To address these problems, the proposed FUSSER (fuzzy spectral and spatial classifier) algorithm detailed in this paper merges the spectral and spatial features to obtain a combined feature vector that is able to better sample the properties of the nonferrous materials than the single pixel spectral features when applied to the construction of multivariate Gaussian distributions. This approach allows the implementation of statistical region merging techniques in order to increase the performance of the classification process. To achieve an efficient implementation, the dimensionality of the hyperspectral data is reduced by constructing bio-inspired spectral fuzzy sets that minimize the amount of redundant information contained in adjacent hyperspectral bands. The experimental results indicate that the proposed algorithm increased the overall classification rate from 44% using RGB data up to 98% when the spectral-spatial features are used for nonferrous material classification
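Two ingredients of the approach, fuzzy reduction of adjacent bands and concatenation of spectral with spatial features, can be sketched as follows (this is a hedged illustration with invented parameters, not the published FUSSER implementation):

```python
# Illustrative sketch, not the FUSSER algorithm itself: reduce adjacent
# hyperspectral bands with triangular fuzzy memberships, then append a
# simple spatial feature (3x3 local mean) to form a combined
# spectral-spatial feature vector per pixel.
import numpy as np

def fuzzy_band_reduction(cube, n_sets=10):
    """cube: (H, W, B) hyperspectral image -> (H, W, n_sets) reduced cube.
    Each output channel is a fuzzy-weighted average of adjacent bands,
    which suppresses the redundancy between neighbouring bands."""
    H, W, B = cube.shape
    centers = np.linspace(0, B - 1, n_sets)
    width = (B - 1) / (n_sets - 1)
    bands = np.arange(B)
    out = np.empty((H, W, n_sets))
    for k, c in enumerate(centers):
        # triangular membership of each band in fuzzy set k
        mu = np.clip(1 - np.abs(bands - c) / width, 0, 1)
        out[..., k] = (cube * mu).sum(-1) / mu.sum()
    return out

def spectral_spatial_features(cube, n_sets=10):
    """Concatenate the reduced spectrum with its 3x3 local mean."""
    reduced = fuzzy_band_reduction(cube, n_sets)
    padded = np.pad(reduced, ((1, 1), (1, 1), (0, 0)), mode="edge")
    local_mean = sum(
        padded[i:i + reduced.shape[0], j:j + reduced.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return np.concatenate([reduced, local_mean], axis=-1)

cube = np.random.rand(8, 8, 60)   # synthetic 60-band cube
feats = spectral_spatial_features(cube)
print(feats.shape)  # (8, 8, 20)
```

In the paper's pipeline, such combined feature vectors feed multivariate Gaussian models and statistical region merging; here they simply show how spectral and spatial information end up in one vector.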
Integrating an Agent-Based Model into a Web-Enabled Annual Brome Land Management System
The natural fire cycle in the Great Basin area of Nevada has shortened from every 50 to 60 years to every 3 to 5 years, putting many natural ecosystems and occupied lands in danger. The spread of the invasive annual brome will be investigated to quantify this fire risk. The plant is renowned for its invasive nature, its flammability, and its detrimental effects on native annual and perennial grasses. Based on vegetation classifications and dispersal characteristics, the rules of an agent-based model will be used to simulate its future extent. Agent Analyst software, in conjunction with ArcGIS, will integrate the simulation results into a web-enabled decision support system for land managers.
Annual Brome (Bromus tectorum) Wildfire Fuel Breaks: Web-Enabled GIS Wildfire Model Decision Support System
Annual brome (Bromus tectorum), also known as cheatgrass, is a non-native invasive plant that has degraded rangeland and wildlife habitat and has increased the frequency and severity of wildfires in the Western United States. An ArcGIS Server web-enabled GIS decision support system has been developed to empower landowners and land managers to identify critical areas for placing annual brome wildfire-suppression fuel breaks on their land in order to protect lives and property. The model identifies the critical predictive annual brome habitat and wildfire threat parameters and uses web services to input the relevant GIS data that represent the model parameters. The GIS analysis is geoprocessed remotely, and the potential fire break locations are distributed as a map web service accessible through a graphical user interface in a web browser on a thin-client computer, along with technical information about the installation of wildfire fuel breaks.
Correctness Criteria for Function-Based Reclassifiers: A Language Based Approach
An emerging problem in systems security is controlling how a program uses the
data it has access to. Information Flow Control (IFC) propagates restrictions
on data by following the flow of information: for example, if a secret value
flows into a public one, the result should be considered secret as well. A
common problem in IFC is reclassification of data, for instance to explicitly
make data less restricted. An IFC mechanism often has strict flow rules in
its normal operation, but reclassification by definition needs to bypass these
restrictions.
This thesis proposes correctness criteria that aim to provide stronger semantic
guarantees for the behavior of reclassification functions. We first conduct a
survey of prior work in IFC, which concludes that little emphasis has been put
on crystallizing such criteria. We then define a set of criteria for reclassification
and implement a parser to enforce them. If a piece of code is successfully
analyzed by the parser, then that code can be safely used to reclassify data. Rust
is emerging as one of the more prominent languages for systems programming
due to its memory safety, and we conjecture that this analysis can analogously
be extended to target IFC as well.
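The tension between strict flow rules and reclassification can be illustrated with a minimal sketch (a hypothetical two-level lattice and API, not the mechanism or parser developed in the thesis):

```python
# Illustrative sketch of IFC with a vetted declassifier. The lattice,
# labels, and function names are assumptions made for this example.
from dataclasses import dataclass

PUBLIC, SECRET = 0, 1  # two-point security lattice: PUBLIC < SECRET

@dataclass
class Labeled:
    value: object
    label: int

def assign(target_label: int, source: Labeled) -> Labeled:
    """Normal IFC rule: information may only flow upward in the lattice."""
    if source.label > target_label:
        raise PermissionError("illegal flow: secret -> public")
    return Labeled(source.value, target_label)

def declassify(source: Labeled, ok_to_release) -> Labeled:
    """Reclassification bypasses the flow rule, but only when a vetted
    predicate certifies that releasing the value is acceptable. The
    correctness criteria in the thesis target exactly such functions."""
    if not ok_to_release(source.value):
        raise PermissionError("declassification criterion not met")
    return Labeled(source.value, PUBLIC)

pin = Labeled("1234", SECRET)
# assign(PUBLIC, pin) would raise PermissionError: secret -> public.
digits_only = declassify(pin, lambda v: v.isdigit())  # criterion holds
print(digits_only.label)  # 0 (PUBLIC)
```

The point of the example: `assign` is what the mechanism enforces everywhere, while `declassify` is the escape hatch whose body must itself be analyzed before it can be trusted.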
An investigation into evolving support for component reuse
It is common in engineering disciplines for new product development to be based on a concept of reuse, i.e. built on a foundation of knowledge and pre-existing components familiar to the discipline's community. In Software Engineering, this concept is known as software reuse. Software reuse is considered essential if higher-quality software and reduced development effort are to be achieved. A crucial part of any engineering development is access to tools that aid development. In software engineering this means having software support tools with which to construct software, including tools to support effective software reuse. The evolutionary nature of software means that the foundation of knowledge and components on which new products can be developed must reflect the changes occurring in both the software engineering discipline and the domain in which the software is to function. Therefore, effective support tools, including those used in software reuse, must evolve to reflect changes in both software engineering and the varying domains that use software. This thesis contains a survey of the current understanding of software reuse. Software reuse is defined as the use of knowledge and work components of software that already exist in the development of new software. The survey reflects the belief that domain analysis and software tool support are essential to successful software reuse. The focus of the research is an investigation into the effects of a changing domain on the evolution of support for component-based reuse and domain analysis, and into the application of software reuse support methods and tools to another engineering discipline, namely roll design. To broaden understanding of the effects of a changing domain on the evolution of support for software reuse and domain analysis, a prototype reuse support environment has been developed for roll designers in the steel industry.
Object-oriented programming in C# with dynamic classification
Object-oriented programming languages have gained popularity in recent years. However, some problems remain: object-oriented programming works well with static classification, but does not support dynamic classification of objects. Static classification means that an object always and only belongs to one class during its lifespan. In real-world applications, objects may belong to different classes, playing different roles at different times during their lifetime. Dynamic classification enables the classification of an object to change over time: objects can acquire and release class membership during runtime. In this thesis, many approaches to dynamic classification, in different implementation languages, are discussed. Based on a thorough review of these approaches, we give a new approach that combines the concepts of objects and roles and extends a class hierarchy with dynamic classification. The syntax of dynamic classification shows how to implement dynamic classification in an object-oriented programming language. Finally, we present a preprocessor by which C♯ code including the extended dynamic classification constructs can be translated to standard C♯ code. Thesis (M.Sc.)--University of Windsor (Canada), 2004. Adviser: Liwu Li. Source: Masters Abstracts International, Volume: 43-03, page: 0892.
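The object-and-roles idea behind dynamic classification can be sketched briefly (the thesis targets C#; the Python names below are hypothetical illustrations, not the thesis's syntax or preprocessor output):

```python
# Minimal sketch of dynamic classification via roles: an object acquires
# and releases role-class membership at runtime instead of being fixed to
# one class for its whole lifespan. All class names are illustrative.
class Person:
    def __init__(self, name):
        self.name = name
        self.roles = {}

    def classify(self, role_cls, *args):
        """Acquire membership in a role class at runtime."""
        self.roles[role_cls.__name__] = role_cls(*args)

    def declassify(self, role_cls):
        """Release membership in a role class."""
        self.roles.pop(role_cls.__name__, None)

    def has_role(self, role_cls):
        return role_cls.__name__ in self.roles

class Student:
    def __init__(self, university):
        self.university = university

class Employee:
    def __init__(self, employer):
        self.employer = employer

p = Person("Ada")
p.classify(Student, "University of Windsor")
p.classify(Employee, "ACME")   # one object, two roles simultaneously
p.declassify(Student)          # e.g. graduation: the role is released
print(p.has_role(Student), p.has_role(Employee))  # False True
```

A preprocessor-based approach, as in the thesis, would instead let such classify/declassify operations appear as language-level constructs and translate them to standard C♯ code.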