11,545 research outputs found

    An X-Windows Toolkit for knowledge acquisition and representation based on conceptual structures

    This paper describes GET (Graph Editor and Tools), a tool based on Sowa's conceptual structures, which can be used for generic knowledge acquisition and representation. The system enables the acquisition of semantic information (restrictions) for a lexicon used by a semantic interpreter for Portuguese sentences that features some deduction capabilities. GET also enables the graphical representation of conceptual relations by incorporating an X-Windows-based editor.
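
    Sowa's conceptual structures represent knowledge as bipartite graphs of concept nodes and relation nodes. The following is a minimal illustrative sketch of such a structure; the class and field names are hypothetical and do not reflect GET's actual implementation.

    ```python
    # Minimal conceptual-graph sketch (hypothetical names, not GET's code).
    from dataclasses import dataclass, field


    @dataclass
    class Concept:
        type_label: str          # e.g. "PERSON", "EAT", "FOOD"
        referent: str = "*"      # "*" denotes a generic referent


    @dataclass
    class Relation:
        label: str               # e.g. "AGNT" (agent), "OBJ" (object)
        source: Concept
        target: Concept


    @dataclass
    class ConceptualGraph:
        concepts: list = field(default_factory=list)
        relations: list = field(default_factory=list)

        def relate(self, label, source, target):
            self.relations.append(Relation(label, source, target))


    # "A person eats food": [EAT]-(AGNT)->[PERSON], [EAT]-(OBJ)->[FOOD]
    g = ConceptualGraph()
    eat, person, food = Concept("EAT"), Concept("PERSON"), Concept("FOOD")
    g.concepts += [eat, person, food]
    g.relate("AGNT", eat, person)
    g.relate("OBJ", eat, food)
    ```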

    Principles and Concepts of Agent-Based Modelling for Developing Geospatial Simulations

    The aim of this paper is to outline fundamental concepts and principles of the Agent-Based Modelling (ABM) paradigm, with particular reference to the development of geospatial simulations. The paper begins with a brief definition of modelling, followed by a classification of model types, and a comment regarding a shift (in certain circumstances) towards modelling systems at the individual level. Automata approaches in particular (e.g. Cellular Automata, CA, and ABM) have been popular, with ABM moving to the fore. A definition of agents and agent-based models is given, identifying their advantages and disadvantages, especially in relation to geospatial modelling. The potential use of agent-based models is discussed, and how-to instructions for developing an agent-based model are provided. Types of simulation / modelling systems available for ABM are defined, supplemented with criteria to consider before choosing a particular system for a modelling endeavour. Information pertaining to a selection of simulation / modelling systems (Swarm, MASON, Repast, StarLogo, NetLogo, OBEUS, AgentSheets and AnyLogic) is provided, categorised by their licensing policy (open source, shareware / freeware and proprietary systems). The evaluation (i.e. verification, calibration, validation and analysis) of agent-based models and their output is examined, and noteworthy applications are discussed. Geographical Information Systems (GIS) are a particularly useful medium for representing model input and output of a geospatial nature. However, GIS are not well suited to dynamic modelling (e.g. ABM). In particular, problems of representing time and change within GIS are highlighted. Consequently, this paper explores the opportunity of linking (through coupling or integration / embedding) a GIS with a simulation / modelling system that is purposely built, and therefore better suited, to supporting the requirements of ABM. The paper concludes with a synthesis of the discussion that has preceded it.
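
    As a purely illustrative companion to the how-to discussion summarised above, a minimal agent-based model can be sketched in a few lines; the grid size, agent count and random-walk behaviour below are invented for the example and are not taken from the paper or from any of the toolkits it reviews.

    ```python
    # Minimal agent-based model sketch: random-walk agents on a grid.
    # All parameters and behaviour are illustrative assumptions.
    import random

    GRID_SIZE = 20
    N_AGENTS = 50
    N_STEPS = 100


    class Agent:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def step(self):
            # Move one cell in a random direction, staying inside the grid.
            dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            self.x = min(max(self.x + dx, 0), GRID_SIZE - 1)
            self.y = min(max(self.y + dy, 0), GRID_SIZE - 1)


    agents = [Agent(random.randrange(GRID_SIZE), random.randrange(GRID_SIZE))
              for _ in range(N_AGENTS)]

    for t in range(N_STEPS):
        for agent in agents:
            agent.step()

    # A geospatial version would replace the abstract grid with raster cells
    # or vector features read from a GIS, which is where the coupling and
    # integration issues discussed in the paper arise.
    ```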

    An intelligent Geographic Information System for design

    Recent advances in geographic information systems (GIS) and artificial intelligence (AI) techniques have been summarised, concentrating on the theoretical aspects of their construction and use. Existing projects combining AI and GIS have also been discussed, with attention paid to the interfacing methods used and problems uncovered by the approaches. AI and GIS have been combined in this research to create an intelligent GIS for design. This has been applied to offshore pipeline route design. The system was tested using data from a real pipeline design project. [Continues.]

    Automatic detection of accommodation steps as an indicator of knowledge maturing

    Jointly working on shared digital artifacts – such as wikis – is a well-tried method of developing knowledge collectively within a group or organization. Our assumption is that such knowledge maturing is an accommodation process that can be measured by taking the writing process itself into account. This paper describes the development of a tool that detects accommodation automatically with the help of machine learning algorithms. We applied a software framework for task detection to the automatic identification of accommodation processes within a wiki. To set up the learning algorithms and test their performance, we conducted an empirical study, in which participants had to contribute to a wiki and, at the same time, identify their own tasks. Two domain experts evaluated the participants’ micro-tasks with regard to accommodation. We then applied an ontology-based task detection approach that identified accommodation with a rate of 79.12%. The potential use of our tool for measuring knowledge maturing online is discussed.
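
    The paper's ontology-based detection framework is not reproduced here, but the general shape of such a supervised detector – features extracted from each wiki micro-task, expert labels, a standard classifier – can be sketched as follows. The feature set and the choice of classifier are illustrative assumptions, not the approach evaluated in the paper.

    ```python
    # Illustrative sketch of supervised accommodation detection on wiki
    # micro-tasks; features, labels and classifier are assumptions, not
    # the ontology-based framework evaluated in the paper.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # One row per micro-task, e.g. [chars added, chars deleted,
    # links touched, sections restructured] -- hypothetical features.
    X = [
        [120, 10, 0, 0],
        [15, 200, 3, 2],
        [300, 40, 1, 1],
        [5, 2, 0, 0],
    ]
    # Expert labels: 1 = accommodation observed, 0 = none.
    y = [0, 1, 1, 0]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=2)
    print("Mean detection accuracy:", scores.mean())
    ```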

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
    Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
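
    NiftyNet's own configuration and command-line interface are not reproduced here. Purely as an illustration of the kind of component that its modular pipeline wires together with data loading and augmentation, the sketch below defines a small 3D segmentation network with a soft Dice loss in plain tf.keras; it is not NiftyNet's API.

    ```python
    # Not NiftyNet's API: a generic tf.keras sketch of a small 3D
    # segmentation network with a soft Dice loss, the kind of network/loss
    # component that NiftyNet's modular pipeline manages.
    import tensorflow as tf


    def dice_loss(y_true, y_pred, eps=1e-6):
        # Soft Dice over the batch; a common loss for medical segmentation.
        intersection = tf.reduce_sum(y_true * y_pred)
        union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
        return 1.0 - (2.0 * intersection + eps) / (union + eps)


    model = tf.keras.Sequential([
        tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu",
                               input_shape=(64, 64, 64, 1)),
        tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv3D(1, 1, activation="sigmoid"),  # voxel-wise mask
    ])
    model.compile(optimizer="adam", loss=dice_loss)
    model.summary()
    ```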

    Supporting strategic design of workplace environments with case-based reasoning

    xvii + 279 pages; 24 cm.

    Retrieval from an image knowledge base

    With advances in computer technology, images and image databases are becoming increasingly important. Retrieval of images in current image database systems has been designed around keyword searches. These carefully designed and handcrafted systems are very efficient for the application domain they are built for. Unfortunately, they are not adaptable to other domains, not expandable to other uses of the existing information, and not very forgiving to their users. The appearance of full-text search provides a more general search over textual documents. However, pictorial images contain a vast amount of information that is difficult to catalog in a general way. Further, this classification needs to be dynamic, providing flexible searching capability. The searching should allow for more than a pre-programmed set of search parameters, as exact searches make the image database quite useless for a search that was not designed into the original database. Furthermore, the incorporation of knowledge along with the images is difficult. Development of an image knowledge base along with content-based retrieval techniques is the focus of this thesis. Using an artificial intelligence technique called case-based reasoning, images can be retrieved with a degree of flexibility. Each image is classified by user-entered attributes about the image called descriptors. Each descriptor also has a degree-of-importance parameter, which indicates the relative importance or certainty of that descriptor. The descriptors are collected as the case for the image and stored in frames. Each image can vary in the amount of attribute information it contains. Retrieval of an image from the knowledge base begins with the entry of new descriptors for the desired image, each again accompanied by a degree-of-importance parameter indicating how strongly the desired image must match that descriptor. Again, a variable number of descriptors can be entered. After all criteria are entered, the system searches for cases that have any level of matching. The system uses the degree-of-importance both in the knowledge base about the candidate image(s) and on the search criteria to order the images. The ordering process uses weighted summations to present a relatively small list of candidate images. To demonstrate and validate the concepts outlined, a prototype of the system has been developed. This prototype includes the primary architectural components of a potentially real product. Architectural areas addressed are: the storage of the knowledge, storage of and access to a large number of high-resolution images, means of searching or interrogating the knowledge base, and the actual display of images. The prototype is called the Smart Photo Album. It is an electronic filing system for 35 mm pictures taken by anyone from the average photographer up to the photojournalist. It allows for multiple ways of indexing pictures of any subject matter. Retrieval from the knowledge base provides relative matches to the given search criteria. Although this application is relatively simple, the basis of the system can easily be extended to include a more sophisticated knowledge base and reasoning process as, for example, would be used for a medical diagnostic application in the field of dermatology.
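
    The weighted-summation ordering described above can be sketched in a few lines. The scoring formula and the descriptor names are illustrative assumptions; the thesis does not specify them at this level of detail.

    ```python
    # Sketch of descriptor-based case retrieval with weighted summation.
    # Descriptors, weights and the scoring formula are assumptions.

    # Each stored image case: descriptor -> degree of importance (0..1).
    cases = {
        "beach.jpg":  {"outdoor": 0.9, "water": 0.8, "people": 0.3},
        "office.jpg": {"indoor": 0.9, "people": 0.7, "computer": 0.6},
    }


    def score(case, query):
        # Sum, over descriptors shared by the case and the query, the product
        # of the stored and the requested degrees of importance.
        return sum(case[d] * w for d, w in query.items() if d in case)


    query = {"outdoor": 1.0, "people": 0.5}   # desired image descriptors
    ranked = sorted(cases, key=lambda name: score(cases[name], query),
                    reverse=True)
    print(ranked)  # candidate images ordered by weighted match
    ```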

    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plug-in suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
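
    The decoupled, N-dimensional data model also makes it possible to drive ImageJ2 from other environments. The sketch below assumes the separate pyimagej bindings (installed with pip install pyimagej) and a local image file; neither the snippet nor the file path comes from the paper itself.

    ```python
    # Assumed usage of the separate pyimagej bindings; the file path is
    # hypothetical and this snippet is not taken from the paper.
    import imagej

    ij = imagej.init()                    # start an ImageJ2 gateway
    dataset = ij.io().open("sample.tif")  # ImageJ2's N-dimensional Dataset
    image = ij.py.from_java(dataset)      # view the data from Python
    print(image.shape)
    ```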

    Development of a manufacturing feature-based design system

    Traditional CAD systems are based on the serial approach to the product development cycle: the design process is not integrated with other activities and thus cannot provide information for subsequent phases of product development. In order to eliminate this problem, many modern CAD systems allow the composition of designs from building blocks at a higher level of abstraction called features. Although features used in current systems tend to be named after manufacturing processes, they do not, in reality, provide valuable manufacturing data. Apart from the obvious disadvantage that process engineers need to re-evaluate the design and capture the intent of the designer, this approach also prohibits early detection of possible manufacturing problems. This research attempts to bring the design and manufacturing phases together by implementing manufacturing features. A design is composed entirely in a bottom-up manner using manufacturable entities, in the same way as they would be produced during the manufacturing phase. Each feature consists of parameterised geometry, manufacturing information (including machine tool, cutting tools, cutting conditions, fixtures, and relative cost information), design limitations, functionality rules, and design-for-manufacture rules. The designer selects features from a hierarchical feature library. Upon insertion of a feature, the system ensures that no functionality or manufacturing rules are violated. If a feature is modified, the system validates it by making sure that it remains consistent with its original functionality, and the design-for-manufacture rules are re-applied. The system also allows analysis, from a manufacturing point of view, of designs that were not composed using features. In order to reduce the complexity of the system, design functionality and design-for-manufacture rules are organised into a hierarchical system and linked to the appropriate entries of the feature hierarchy. The system makes it possible to avoid costly designs by eliminating possible manufacturing problems early in the product development cycle. It also makes computer-aided process planning feasible. The system is developed as an extension of a commercially available CAD/CAM system (Pro/Engineer), and at its current stage only deals with machining features. However, using the same principles, it can be expanded to cover other kinds of manufacturing processes.
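
    A feature, as described above, bundles parameterised geometry, manufacturing data and validation rules that are re-applied whenever the feature is inserted or modified. The sketch below illustrates that idea; the class layout, the example rule and all names are hypothetical and do not come from the thesis.

    ```python
    # Hypothetical sketch of a manufacturing feature with attached rules;
    # the names and the example rule are assumptions, not the thesis's code.
    from dataclasses import dataclass, field


    @dataclass
    class Feature:
        name: str
        geometry: dict        # parameterised geometry
        manufacturing: dict   # machine tool, cutting data, relative cost...
        rules: list = field(default_factory=list)  # callables: Feature -> message or None

        def validate(self):
            # Run on insertion and after every modification.
            return [msg for rule in self.rules if (msg := rule(self)) is not None]


    def max_drill_depth(feature):
        # Example design-for-manufacture rule: limit hole depth to 5x diameter.
        g = feature.geometry
        if g["depth"] > 5 * g["diameter"]:
            return "hole depth exceeds 5x diameter for standard drilling"
        return None


    hole = Feature("through_hole",
                   geometry={"diameter": 6.0, "depth": 40.0},
                   manufacturing={"machine": "CNC mill", "tool": "6 mm drill"},
                   rules=[max_drill_depth])
    print(hole.validate())  # -> ["hole depth exceeds 5x diameter ..."]
    ```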