    Application and systems software in Ada: Development experiences

    In its most basic sense, software development involves describing the tasks to be solved, including the given objects and the operations to be performed on those objects. Unfortunately, the way people describe objects and operations usually bears little resemblance to source code in most contemporary computer languages. There are two ways around this problem. One is to allow users to describe what they want the computer to do in everyday, typically imprecise English. The PRODOC methodology and software development environment is based on a second, more flexible, and possibly even easier-to-use approach. Rather than hiding program structure, PRODOC represents such structure graphically using visual programming techniques. In addition, the program terminology used in PRODOC may be customized to match the way human experts in any given application area naturally describe the relevant data and operations. The PRODOC methodology is described in detail.
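
    The abstract does not show PRODOC's actual notation, but the terminology-customization idea can be suggested with a minimal Python sketch in which a domain expert's vocabulary is bound to generic operations. All names and the mapping below are invented for illustration; they are not PRODOC's API.

```python
# Minimal sketch of terminology customization (hypothetical; not PRODOC's API).
# Generic operations with programmer-facing names:
def filter_records(records, predicate):
    """Keep only the records satisfying a predicate."""
    return [r for r in records if predicate(r)]

def total(records, field):
    """Sum a numeric field across records."""
    return sum(r[field] for r in records)

# Customization: bind an accounting expert's own terms to the operations,
# so procedures can be read and written in the expert's language.
vocabulary = {
    "select invoices": filter_records,
    "amount owed": total,
}

invoices = [{"customer": "ACME", "balance": 120.0},
            {"customer": "Globex", "balance": 0.0}]
overdue = vocabulary["select invoices"](invoices, lambda r: r["balance"] > 0)
print(vocabulary["amount owed"](overdue, "balance"))  # 120.0
```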

    BiofilmQuant: A Computer-Assisted Tool for Dental Biofilm Quantification

    Dental biofilm is the deposition of microbial material over a tooth substratum. Several methods have recently been reported in the literature for biofilm quantification; however, at best they provide a barely automated solution requiring significant input from the human expert. By contrast, state-of-the-art automatic biofilm methods fail to make their way into clinical practice because they lack an effective mechanism for incorporating human input to handle misclassified regions in practice. Manual delineation, the current gold standard, is time consuming and subject to expert bias. In this paper, we introduce a new semi-automated software tool, BiofilmQuant, for dental biofilm quantification in quantitative light-induced fluorescence (QLF) images. The software uses a robust statistical modeling approach to automatically segment the QLF image into three classes (background, biofilm, and tooth substratum) based on the training data. This initial segmentation has shown a high degree of consistency and precision on more than 200 test QLF dental scans. Further, the proposed software gives clinicians full control to fix any misclassified areas with a single click. In addition, BiofilmQuant provides a complete solution for the longitudinal quantitative analysis of biofilm over the full set of teeth, offering greater ease of use.
    Comment: 4 pages, 4 figures, 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2014)
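
    The abstract does not specify the statistical model. As a rough stand-in, the sketch below clusters pixels into three classes with a Gaussian mixture and adds a one-click region fix; the function names and the choice of model are assumptions made for this sketch, not the paper's implementation.

```python
# A stand-in for the paper's statistical model: three-class pixel clustering
# with a Gaussian mixture, plus a one-click region correction.
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def segment_qlf(image_rgb):
    """Label each pixel 0..2 (background / biofilm / tooth substratum;
    which cluster receives which label is arbitrary in this sketch)."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels)
    return gmm.predict(pixels).reshape(h, w)

def fix_region(labels, row, col, new_class):
    """Single-click correction: relabel the connected component of
    same-class pixels around the clicked pixel."""
    mask = labels == labels[row, col]
    components, _ = ndimage.label(mask)
    corrected = labels.copy()
    corrected[components == components[row, col]] = new_class
    return corrected

labels = segment_qlf(np.random.rand(32, 32, 3))  # random stand-in image
labels = fix_region(labels, 10, 10, new_class=2)
```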

    The parser generator as a general purpose tool

    The parser generator has proven to be an extremely useful, general-purpose tool. It can be used effectively by programmers having only a knowledge of grammars and no training at all in the theory of formal parsing. Some of the application areas for which a table-driven parser can be used include interactive query languages, menu systems, translators, and programming support tools. Each of these is illustrated by an example grammar.
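
    The memo's actual generator and tables are not reproduced here, but the table-driven idea can be suggested with a small Python sketch: a tiny interactive query language whose syntax is checked by table lookup rather than hand-written control flow. The grammar and table are invented for illustration.

```python
# Table-driven recognizer for a toy query language:
#   show <name> [ where <field> = <value> ]
# Parse table: (state, token class) -> next state; absent entries are errors.
TABLE = {
    ("start", "show"): "want_name",
    ("want_name", "IDENT"): "after_name",
    ("after_name", "where"): "want_field",
    ("after_name", "END"): "accept",
    ("want_field", "IDENT"): "want_eq",
    ("want_eq", "="): "want_value",
    ("want_value", "IDENT"): "maybe_end",
    ("maybe_end", "END"): "accept",
}

def classify(token):
    """Map a token to its grammar class (keywords stand for themselves)."""
    return token if token in {"show", "where", "=", "END"} else "IDENT"

def parse(query):
    state = "start"
    for token in query.split() + ["END"]:
        state = TABLE.get((state, classify(token)))
        if state is None:
            raise SyntaxError(f"unexpected token {token!r}")
    return state == "accept"

print(parse("show employees where dept = sales"))  # True
print(parse("show employees"))                     # True
```

    Changing the language then means editing the table, not the driver, which is the flexibility the memo attributes to the parser generator.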

    The nature and evaluation of commercial expert system building tools, revision 1

    This memorandum reviews the factors that constitute an Expert System Building Tool (ESBT) and evaluates current tools in terms of these factors. Evaluation of these tools is based on their structure and their alternative forms of knowledge representation, inference mechanisms, and developer and end-user interfaces. Next, functional capabilities, such as diagnosis and design, are related to alternative forms of mechanization. The characteristics and capabilities of existing commercial tools are then reviewed in terms of these criteria.
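
    For readers unfamiliar with one of the memo's evaluation axes, the inference mechanism, here is a minimal forward-chaining sketch in Python. The rules and facts are invented for this sketch and bear no relation to any specific commercial ESBT.

```python
# Minimal forward-chaining rule engine: each rule is (premises, conclusion).
RULES = [
    ({"engine_cranks", "no_start"}, "fuel_or_spark_fault"),
    ({"fuel_or_spark_fault", "fuel_ok"}, "spark_fault"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises all hold, until no change."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"engine_cranks", "no_start", "fuel_ok"}, RULES))
# -> includes 'spark_fault', a diagnosis-style conclusion
```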

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases, our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication), and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
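
    For readers new to the survey's subject: a decision table maps combinations of condition outcomes to actions. A minimal Python sketch (the conditions and actions are invented for illustration):

```python
# Decision table for order handling: (order_ok, in_stock) -> action.
# Every combination of condition outcomes is listed explicitly, which is
# what makes decision tables easy to check for completeness.
DECISION_TABLE = {
    (True,  True):  "ship",
    (True,  False): "backorder",
    (False, True):  "reject",
    (False, False): "reject",
}

def decide(order_ok, in_stock):
    return DECISION_TABLE[(order_ok, in_stock)]

print(decide(True, False))  # backorder
```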

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, thus aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities of it are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort, by exploiting noisy large-scale training data.
    Comment: Published in IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
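
    A minimal sketch of the training setup (in PyTorch, here assumed as the framework): a fully convolutional network trained with per-pixel cross-entropy on labels rasterized from OpenStreetMap. The tiny network below stands in for the state-of-the-art architecture the authors actually adapt; all sizes and names are illustrative.

```python
# Toy fully convolutional segmentation network with noisy OSM-derived labels.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # e.g. background, building, road

model = nn.Sequential(               # stand-in for a deep encoder-decoder
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, 1),   # per-pixel class scores
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()      # averaged over all pixels

def train_step(images, osm_labels):
    """images: (B,3,H,W) float; osm_labels: (B,H,W) long, possibly noisy."""
    optimizer.zero_grad()
    logits = model(images)               # (B, NUM_CLASSES, H, W)
    loss = loss_fn(logits, osm_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on random stand-in data:
print(train_step(torch.randn(2, 3, 64, 64),
                 torch.randint(0, NUM_CLASSES, (2, 64, 64))))
```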

    Timetabling in constraint logic programming

    In this paper we describe the timetabling problem and its solvability in a constraint logic programming language. A solution to the problem has been developed and implemented in ECLiPSe, which was chosen because it deals with finite domains, has well-defined interfaces between its basic building blocks, and supports good debugging facilities. The implemented timetable was based on the existing, currently used timetables at the School of Informatics at our university. It integrates constraints concerning room and period availability.
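
    The paper's ECLiPSe program is not reproduced in the abstract. As a rough illustration of the finite-domain formulation, here is a Python sketch: each lecture is assigned a (room, period) pair from a finite domain by backtracking search, subject to no double-booked room-period and no lecturer teaching twice in the same period. The data and constraints are invented for this sketch.

```python
# Backtracking search over finite (room, period) domains for a toy timetable.
from itertools import product

LECTURES = {"ai": "smith", "db": "jones", "os": "smith"}  # lecture -> lecturer
ROOMS, PERIODS = ["r1", "r2"], [1, 2]

def consistent(assign):
    slots = list(assign.values())
    if len(set(slots)) != len(slots):      # a room is double-booked
        return False
    for a, b in product(assign, assign):
        if a < b and LECTURES[a] == LECTURES[b] and assign[a][1] == assign[b][1]:
            return False                   # lecturer in two places at once
    return True

def solve(lectures, assign=None):
    assign = assign or {}
    if len(assign) == len(lectures):
        return assign
    lec = next(l for l in lectures if l not in assign)
    for slot in product(ROOMS, PERIODS):   # finite domain of (room, period)
        assign[lec] = slot
        if consistent(assign):
            result = solve(lectures, assign)
            if result:
                return result
        del assign[lec]
    return None

print(solve(LECTURES))  # e.g. {'ai': ('r1', 1), 'db': ('r1', 2), 'os': ('r2', 2)}
```

    A real CLP system such as ECLiPSe propagates constraints to prune domains before search, rather than testing them after each assignment as this sketch does.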