
    An Object-oriented expert system shell with image diagnosis.

    Get PDF
    by Chan Wai Kwong Samual. Thesis (M.Phil.), Chinese University of Hong Kong. Bibliography: leaves R.1-6.
    Table of contents: Acknowledgements; Abstract; Table of Contents
    Chapter 1. Overviews: Introduction; Image Understanding and Artificial Intelligence; Object-Oriented Programming and Artificial Intelligence; Related Works; Discussions and Outlines
    Chapter 2. Object-Oriented Software Systems: Introduction; Traditional Software Systems; Object-Oriented Software Systems; Characteristics of an Object-Oriented System; Knowledge Representation in Image Recognition (Rule-Based Systems; Structured Objects; Object-Oriented Knowledge Management; Object-Oriented Expert System Building Tools); Concluding Remarks
    Chapter 3. System Design and Architecture: Introduction; Inheritance and Recognition; System Design; System Architecture (the Low-Level Vision Kernel; the High-Level Vision Kernel; the User Consultation Kernel); Structure of the Image Object Model (Image Object Model in Object-Oriented Form; Image Object Hierarchy); Reasoning in OOI; Concluding Remarks
    Chapter 4. Control and Strategies: Introduction; Consultation Class Objects (Audience; Intrinsic Hypothesis, IH_object; Priority Table, PT_object); Operation Objects (Scheme Scheduler, SS_object; Task Scheduler, TS_object); Taxonomy of Image Objects in OOI (Object Template; Attributes; Tasks and Life Cycles; Object Security); Message Passing; Strategies (the Bottom-Up Approach; the Top-Down Approach); Concluding Remarks
    Chapter 5. Image Processing Algorithms: Introduction; Image Enhancement (Spatial Filtering; Feature Enhancement); Pixel Classification; Edge Detection Methods (Local Gradient Operators; Zero-Crossing Method); Regional Approaches in Segmentation (Multi-Level Threshold Method; Region Growing); Image Processing Techniques in the Medical Domain; Concluding Remarks
    Chapter 6. Pictorial Data Management in OOI: Introduction; Description of Basic Properties; Description of Relations (Relational Database of Pictorial Data; Relational Graphs and Relational Databases); Access Functions in Image Objects (Basic Access Functions; User-Accessible Functions in Objects); Image Functions (Unary Image Operations; Binary Relation Operations; Update Operations); Concluding Remarks
    Chapter 7. Knowledge Management: Introduction; Knowledge in a Domain Knowledge Base (Structure of Rules; Hypothesis Generation; Inference Engine); Model-Based Reasoning in OOI (Merging and Labelling; Vision Model); Fuzzy Reasoning; Concluding Remarks
    Chapter 8. Knowledge Acquisition and User Interfaces: Introduction; Knowledge Acquisition Subsystem (Rule Management Module; Attribute Management Module; Model Management Module; Methods of Knowledge Encoding and Acquisition); User Interface in OOI (Screen Layout; Menus and Options); Concluding Remarks
    Chapter 9. Implementation and Results: Introduction; Using Expanded Memory; ESCUM (General Description; Cervical Intraepithelial Neoplasia (CIN); Development of ESCUM); Results; Concluding Remarks
    Chapter 10. Conclusion: Summary; Areas of Future Work
    Appendix A. Rule Base of ESCUM; Appendix B. Glossary for Object-Oriented Programming; References

    Designing Software Architectures As a Composition of Specializations of Knowledge Domains

    Get PDF
    This paper summarizes our experimental research and software development activities in designing robust, adaptable and reusable software architectures. Several years ago, based on our previous experiences in object-oriented software development, we made the following assumption: 'A software architecture should be a composition of specializations of knowledge domains'. To verify this assumption we carried out three pilot projects. In addition to applying some popular domain analysis techniques such as use cases, we identified the invariant compositional structures of the software architectures and the related knowledge domains. Knowledge domains define the boundaries of the adaptability and reusability capabilities of software systems. Next, knowledge domains were mapped to object-oriented concepts. We found that some aspects of knowledge could not be directly modeled in terms of object-oriented concepts. In this paper we describe our approach, the pilot projects, the problems we encountered, and the solutions adopted for realizing the software architectures. We conclude the paper with the lessons that we learned from this experience.
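The idea of an architecture as "a composition of specializations of knowledge domains" can be sketched in object-oriented terms: each domain becomes an abstract interface, each specialization a concrete subclass, and the architecture a composition of those specializations. A minimal illustration, with all names (`StorageDomain`, `InMemoryStorage`, `Application`) hypothetical and not taken from the paper:

```python
from abc import ABC, abstractmethod

class StorageDomain(ABC):
    """A knowledge domain: everything the system knows about persistence."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(StorageDomain):
    """One specialization of the domain; others (file, DB) would be siblings."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class Application:
    """The architecture composes specializations of knowledge domains."""
    def __init__(self, storage: StorageDomain):
        # The domain interface fixes the boundary of adaptability:
        # any specialization can be swapped in, nothing outside it can.
        self.storage = storage

app = Application(InMemoryStorage())
app.storage.save("x", "42")
```

The domain boundary shows up directly: the system can adapt to any `StorageDomain` specialization, but knowledge that cuts across domains has no natural home in this class structure, which matches the paper's observation that some aspects of knowledge resist direct object-oriented modeling.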

    Cascaded Segmentation-Detection Networks for Word-Level Text Spotting

    Full text link
    We introduce an algorithm for word-level text spotting that is able to accurately and reliably determine the bounding regions of individual words of text "in the wild". Our system is formed by the cascade of two convolutional neural networks. The first network is fully convolutional and is in charge of detecting areas containing text. This results in a very reliable but possibly inaccurate segmentation of the input image. The second network (inspired by the popular YOLO architecture) analyzes each segment produced in the first stage and predicts oriented rectangular regions containing individual words. No post-processing (e.g. text-line grouping) is necessary. With an execution time of 450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the highest score to date among published algorithms on the ICDAR 2015 Incidental Scene Text dataset benchmark. Comment: 7 pages, 8 figures
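The control flow of such a cascade (coarse segmentation, then per-segment word detection) can be sketched with trivial stand-ins for the two networks. This is an illustrative toy only: the paper's stages are convolutional networks, while here they are replaced by row- and column-projection heuristics on a binary "ink" grid, and the output boxes are axis-aligned rather than oriented.

```python
def segment_text_areas(img):
    """Stage-1 stand-in: return (top, bottom) row spans that contain ink."""
    spans, start = [], None
    for r, row in enumerate(img):
        if any(row):
            start = r if start is None else start
        elif start is not None:
            spans.append((start, r)); start = None
    if start is not None:
        spans.append((start, len(img)))
    return spans

def detect_words(img, span):
    """Stage-2 stand-in: split a segment into word boxes at column gaps."""
    top, bot = span
    cols = [any(img[r][c] for r in range(top, bot)) for c in range(len(img[0]))]
    boxes, start = [], None
    for c, ink in enumerate(cols):
        if ink and start is None:
            start = c
        elif not ink and start is not None:
            boxes.append((top, start, bot, c)); start = None
    if start is not None:
        boxes.append((top, start, bot, len(cols)))
    return boxes

img = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0],  # one text line containing two "words"
    [0, 0, 0, 0, 0, 0, 0],
]
# The cascade: segment first, then detect words inside each segment.
words = [b for s in segment_text_areas(img) for b in detect_words(img, s)]
print(words)  # -> [(1, 1, 2, 3), (1, 4, 2, 6)]
```

The design point the cascade captures is the division of labor: the first stage may be spatially sloppy as long as it has high recall, because the second stage re-examines each candidate region and commits to precise per-word boxes.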

    IMPLEMENTATION OF A LOCALIZATION-ORIENTED HRI FOR WALKING ROBOTS IN THE ROBOCUP ENVIRONMENT

    Get PDF
    This paper presents the design and implementation of a human–robot interface capable of evaluating robot localization performance and maintaining full control of robot behaviors in the RoboCup domain. The system consists of legged robots, behavior modules, an overhead visual tracking system, and a graphical user interface. A human–robot communication framework is designed for executing cooperative and competitive processing tasks between users and robots, using an object-oriented, modularized software architecture for operability and functionality. Experimental results based on simulated and real-time information are presented to show the performance of the proposed system.

    A Neural Network Architecture for Figure-ground Separation of Connected Scenic Figures

    Full text link
    A neural network model, called an FBF network, is proposed for automatic parallel separation of multiple image figures from each other and their backgrounds in noisy grayscale or multi-colored images. The figures can then be processed in parallel by an array of self-organizing Adaptive Resonance Theory (ART) neural networks for automatic target recognition. An FBF network can automatically separate the disconnected but interleaved spirals that Minsky and Papert introduced in their book Perceptrons. The network's design also clarifies why humans cannot rapidly separate interleaved spirals, yet can rapidly detect conjunctions of disparity and color, or of disparity and motion, that distinguish target figures from surrounding distractors. Figure-ground separation is accomplished by iterating operations of a Feature Contour System (FCS) and a Boundary Contour System (BCS) in the order FCS-BCS-FCS, hence the term FBF, that have been derived from an analysis of biological vision. The FCS operations include the use of nonlinear shunting networks to compensate for variable illumination and nonlinear diffusion networks to control filling-in. A key new feature of an FBF network is the use of filling-in for figure-ground separation. The BCS operations include oriented filters joined to competitive and cooperative interactions designed to detect, regularize, and complete boundaries in up to 50 percent noise, while suppressing the noise. A modified CORT-X filter is described which uses both on-cells and off-cells to generate a boundary segmentation from a noisy image. Air Force Office of Scientific Research (90-0175); Army Research Office (DAAL-03-88-K0088); Defense Advanced Research Projects Agency (90-0083); Hughes Research Laboratories (S1-804481-D, S1-903136); American Society for Engineering Education
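The FCS-BCS-FCS ordering can be caricatured in one dimension: normalize contrast (shunting-style FCS), locate boundaries where the normalized gradient is large (BCS), then diffusively fill in between boundaries (FCS again). This is a drastic, hypothetical simplification: the real FCS and BCS stages are two-dimensional nonlinear networks, and the thresholds and averaging below are stand-ins, not the model's equations.

```python
def fcs_normalize(signal, eps=1e-6):
    """Shunting-style contrast normalization by the global mean (toy FCS)."""
    mean = sum(signal) / len(signal)
    return [x / (mean + eps) for x in signal]

def bcs_boundaries(signal, thresh=0.5):
    """Toy BCS: mark positions where the local gradient exceeds a threshold."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > thresh]

def fcs_fill_in(signal, boundaries):
    """Toy filling-in: replace each boundary-delimited region by its mean."""
    edges = [0] + boundaries + [len(signal)]
    out = []
    for a, b in zip(edges, edges[1:]):
        region = signal[a:b]
        out += [sum(region) / len(region)] * len(region)
    return out

# A noisy step edge: small fluctuations survive normalization but are
# averaged away by filling-in, while the large step becomes a boundary.
signal = [1.0, 1.1, 0.9, 1.0, 3.0, 3.1, 2.9, 3.0]
norm = fcs_normalize(signal)          # FCS
bounds = bcs_boundaries(norm)         # BCS: finds the step at index 4
filled = fcs_fill_in(norm, bounds)    # FCS: two flat figure/ground regions
```

Even in this caricature, the key property survives: filling-in does not smooth across detected boundaries, so the two sides of the step end up as distinct, internally uniform regions, which is what makes filling-in usable for figure-ground separation.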

    The What-And-Where Filter: A Spatial Mapping Neural Network for Object Recognition and Image Understanding

    Full text link
    The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition. Advanced Research Projects Agency (ONR-N00014-92-J-4015, AFOSR 90-0083); British Petroleum (89-A-1204); National Science Foundation (IRI-90-00530, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657); Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334)
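The split between "where" parameters (position, orientation, size) and an invariant "what" representation can be illustrated with moment statistics on a point-set figure. This is a hypothetical sketch in that spirit, not the paper's neural mechanism: the filter estimates the parameters with oriented detectors and scale interactions, whereas here they come from first and second moments.

```python
import math

def where_parameters(points):
    """Estimate position (centroid), size (RMS radius), and orientation
    (principal axis from second moments) of a figure given as 2-D points."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    ctr = [(x - cx, y - cy) for x, y in points]
    size = math.sqrt(sum(x * x + y * y for x, y in ctr) / n)
    mxx = sum(x * x for x, _ in ctr)
    myy = sum(y * y for _, y in ctr)
    mxy = sum(x * y for x, y in ctr)
    theta = 0.5 * math.atan2(2 * mxy, mxx - myy)
    return (cx, cy), size, theta

def invariant_representation(points):
    """Undo position, orientation, and size: the surviving shape is 'what'."""
    (cx, cy), size, theta = where_parameters(points)
    c, s = math.cos(-theta), math.sin(-theta)
    return [(((x - cx) * c - (y - cy) * s) / size,
             ((x - cx) * s + (y - cy) * c) / size) for x, y in points]

# A horizontal bar and a translated copy map to the same canonical shape:
points = [(0, 0), (1, 0), (2, 0), (3, 0)]
shifted = [(x + 5, y + 7) for x, y in points]
```

Once every figure is reduced to such a canonical form, comparing a scene figure against a stored target is a direct match in the invariant space, which is why a What-and-Where representation supports parallel rather than serial search. (The principal axis is only defined up to 180 degrees, so a full matcher would test both orientations.)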

    Object-Oriented Dynamics Learning through Multi-Level Abstraction

    Full text link
    Object-based approaches for learning action-conditioned dynamics have demonstrated promise for generalization and interpretability. However, existing approaches suffer from structural limitations and optimization difficulties in common environments with multiple dynamic objects. In this paper, we present a novel self-supervised learning framework, called Multi-level Abstraction Object-oriented Predictor (MAOP), which employs a three-level learning architecture that enables efficient object-based dynamics learning from raw visual observations. We also design a spatial-temporal relational reasoning mechanism for MAOP to support instance-level dynamics learning and handle partial observability. Our results show that MAOP significantly outperforms previous methods in terms of sample efficiency and generalization over novel environments for learning environment models. We also demonstrate that the learned dynamics models enable efficient planning in unseen environments, comparable to true environment models. In addition, MAOP learns semantically and visually interpretable disentangled representations. Comment: Accepted to the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 2020

    Multi Resonant Boundary Contour System

    Full text link

    Slovenian Virtual Gallery on the Internet

    Get PDF
    The Slovenian Virtual Gallery (SVG) is a World Wide Web based multimedia collection of pictures, text, clickable maps and video clips presenting Slovenian fine art from the Gothic period up to the present day. Part of SVG is a virtual gallery space where pictures hang on the walls, while another part is devoted to current exhibitions of selected Slovenian art galleries. The first version of this application was developed in the first half of 1995. It was based on a file system for storing all the data and custom-developed software for search, automatic generation of HTML documents, scaling of pictures, and remote management of the system. Due to the fast development of Web-related tools, a new version of SVG was developed in 1997 based on object-oriented relational database server technology. Both implementations are presented and compared in this article, together with issues related to the transition between the two versions. At the end, we also discuss some extensions to SVG. We present the GUI (graphical user interface) developed specially for presentation of current exhibitions over the Web, which is based on the GlobalView panoramic navigation extension to the Internet Video Server (IVS) developed earlier. And since SVG operates with a large amount of image data, we also confront the problem of image content retrieval.